DOJ to Launch Emerging Tech Board, Ensure Ethical Use of AI

Standards, Regulations & Compliance

Board to Set Ethical Framework for DOJ Use of Facial Recognition, Other AI Tools

The new board will seek to regulate the use of AI and emerging technology across the Justice Department. (Image: Shutterstock)

The Justice Department plans to help define the ethical and legal implications of using AI tools in law enforcement and national security investigations. A top DOJ official announced plans to create an emerging technology board on Wednesday, just eight days after President Joe Biden signed an executive order on responsible AI.

Deputy Attorney General Lisa Monaco told the IBM Security Summit the new board will be tasked with advising Justice leaders “on the ethical, lawful use of AI” within the agency “to ensure that we are coordinating the understanding and use of emerging technologies across the department.”


Monaco added that the board will aim to share information about best practices for the use of AI while developing a set of principles around the deployment of emerging technologies. The department has already used AI systems and emerging technologies in a number of high-profile scenarios, Monaco said, including facial recognition in its ongoing investigations of the Jan. 6 insurrection.


The department also uses AI technologies to support its national security initiatives, such as helping to identify anomalies in drug samples, Monaco said. The new board will seek to standardize the use of AI and ensure that new technologies are deployed in a manner that aligns with the agency’s mission.

The announcement followed an executive order the White House issued in October that begins setting new standards and regulations for the use of AI systems throughout federal agencies and requires developers of advanced AI models to share safety test results with the government.


Government watchdog reports have long called on agencies – including Justice – to assess privacy risks and concerns associated with the federal use of facial recognition and emerging technologies for law enforcement purposes.




