
CMU's Summer of AI Experts on the Hill


Carnegie Mellon University has been at the epicenter of artificial intelligence, from the creation of the first AI computer program in 1956 to pioneering work in self-driving cars and natural language processing. So it only makes sense that CMU experts would be at the forefront of advising national decision-makers on the fast-paced changes taking place in the field.

This summer, CMU faculty and leaders conducted AI policy briefings in Washington, accepting invitations from key federal agencies and Congressional committees and offices to discuss how the U.S. can continue to innovate and lead in the AI space. More briefings are underway this fall.

Amid growing concerns about the rapid development of AI platforms and tools, and even talk of the existential threat AI may pose, federal officials need a balanced assessment of the facts, one that recognizes the legitimate risks AI poses (such as job losses and racial bias) as well as the benefits of a transformative technology that can support humanity. That’s where CMU comes in. With a focus on using AI for the betterment and advancement of society, while ensuring it is developed in an ethical, equitable, inclusive and responsible way, CMU experts are well-positioned to engage with federal agencies, members of Congress and their staffs on where the technology stands today and how best to move forward.

Martial Hebert

A central theme of many of these high-level conversations is the importance of effectively managing the creation of AI tools, while recognizing best practices for AI safety that promote consumer trust and protection. The dialogue prompted a significant proposal co-authored by Ramayya Krishnan, dean of CMU’s Heinz College of Information Systems and Public Policy, and Martial Hebert, dean of CMU’s School of Computer Science.

Writing in The Hill last July, Krishnan and Hebert advocated the creation of a federal AI Lead Rapid Response Team (ALRT) to address the uncertainties of AI by tracking emerging technologies, sharing best practices to ensure consistent approaches, and developing a system to test and verify the efficacy of new AI technologies and applications. “With this unified mission and federal funding, ALRT would form a proven industry and academia partnership, leveraging proprietary information in a trusted manner to combat the uncertainties of AI,” they said.

Their plan is modeled on the pioneering work done in the late 1980s to address cybersecurity concerns during the dawn of the internet age. At that time, the government formed the Computer Emergency Response Team (CERT) at CMU, bringing together government, industry and academia to better prepare computer systems for potential cybersecurity threats.

According to Krishnan and Hebert, using that approach to confront the rapidly changing development and deployment of AI tools would be a major step toward establishing essential guardrails while ensuring American leadership and competitiveness in the industry.


