Biden Order on AI Tackles Tech-Enabled Discrimination in Schools


As artificial intelligence rapidly expands its presence in classrooms, President Biden signed an executive order Monday requiring federal education officials to create guardrails that prevent tech-driven discrimination. 

The wide-ranging, all-of-government order, which the White House called “the most sweeping actions ever taken to protect Americans from the potential risks of AI systems,” offers several directives that are specific to the education sector. The order, which covers emerging technologies like ChatGPT, directs the Justice Department to coordinate with federal civil rights officials on ways to investigate discrimination perpetuated by algorithms. 

Within a year, the education secretary must release guidance on the ways schools can use the technology equitably, with a particular focus on the tools’ effects on “vulnerable and underserved communities.” Meanwhile, an Education Department “AI toolkit,” due within the next year, will advise schools on how to implement the tools so that they enhance trust and safety while complying with federal student privacy rules. 

For civil rights advocates who have decried AI’s potentially unintended consequences, the order was a major step forward. 

The order’s focus on civil rights investigations “aligns with what we’ve been advocating for over a year now,” said Elizabeth Laird, the director of equity and civic technology at the nonprofit Center for Democracy and Technology. Her group has called on the Education Department’s Office for Civil Rights to open investigations into the ways AI-enabled tools in schools could have a disparate impact on students based on their race, disability, sexual orientation and gender identity. 

“It’s really important that this office, which has been focused on protecting marginalized groups of students for literally decades, is more involved in conversations about AI and can bring that knowledge and skill set to bear on this emerging technology,” Laird told The 74. 

An Education Department spokesperson didn’t respond to a request for comment Monday on how the agency plans to respond to Biden’s order. 

Schools nationwide have adopted artificial intelligence in divergent ways, from personalized learning software that delivers individualized lessons to the growing use of chatbots like ChatGPT by both students and teachers. The technology has also generated heated debate over its role in exacerbating harms to at-risk youth, including educators’ use of early warning systems that mine data about students, including their race and disciplinary records, to predict their odds of dropping out of school. 

“We’ve heard reported cases of using data to predict who might commit a crime, so very Minority Report,” Laird said. “The bar that schools should be meeting is that they should not be targeting students based on protected characteristics unless it meets a very narrowly defined purpose that is within the government’s interests. And if you’re going to make that argument, you certainly need to be able to show that this is not causing harm to the groups that you’re targeting.” 
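To make the concern concrete, here is a minimal sketch, in Python with scikit-learn and fully synthetic data, of the kind of early warning model described above. It is hypothetical, not any district’s or vendor’s actual system; the point is simply that when a protected attribute (or a proxy for one) feeds a risk score trained on historically skewed outcomes, flag rates can diverge across student groups.

```python
# Hypothetical sketch of a dropout "early warning" model, trained on
# synthetic data. Not any district's or vendor's actual system; it shows
# how a protected attribute in the features can skew per-group flag rates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic features: absences, discipline incidents and a 0/1 stand-in
# for a protected characteristic such as race or disability status.
absences = rng.poisson(5, n)
discipline = rng.poisson(1, n)
group = rng.integers(0, 2, n)

# Synthetic labels that encode historical bias: dropout odds depend partly
# on group membership, mimicking skewed training data.
logits = 0.3 * absences + 0.8 * discipline + 0.6 * group - 4.0
dropout = rng.random(n) < 1 / (1 + np.exp(-logits))

X = np.column_stack([absences, discipline, group])
model = LogisticRegression().fit(X, dropout)

# Flag students whose predicted risk exceeds a threshold, then compare
# flag rates by group: the disparity civil rights advocates worry about.
flagged = model.predict_proba(X)[:, 1] > 0.5
for g in (0, 1):
    print(f"group {g}: {flagged[group == g].mean():.1%} of students flagged")
```

An auditor testing the claim Laird describes would compare exactly those per-group flag rates before letting such a model drive decisions about students.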


AI and student monitoring tools

AI has also enabled an unprecedented degree of student surveillance, including online activity monitoring tools, remote proctoring software that detects cheating on tests and campus security cameras with facial recognition capabilities. 

Beyond its implications for schools, the Biden order requires certain technology companies to conduct AI safety testing before their products are released to the public and to provide their results to the government. It also orders new regulations to ensure AI won’t be used to produce nuclear weapons, recommends that AI-generated photos and videos be transparently identified as such with watermarks and calls on Congress to pass federal data privacy rules “to protect all Americans, especially kids.”

In September, the Center for Democracy and Technology released a report warning that schools’ use of AI-enabled digital monitoring tools, which track students’ behaviors online, could have a disparate impact on students, particularly LGBTQ+ youth and those with disabilities, in violation of federal civil rights laws. As teachers punish students for allegedly using ChatGPT to cheat on classroom assignments, a survey suggested that children in special education were more likely to face discipline than their general education peers; those students also reported higher levels of surveillance and more discipline as a result of it. 
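One common screen for this kind of disparity is the “four-fifths rule” borrowed from U.S. civil rights enforcement practice. The sketch below shows how an auditor might apply it to a monitoring tool’s output; the flag counts are made up for illustration and are not figures from the CDT report.

```python
# Minimal sketch of the "four-fifths rule" disparate-impact screen applied
# to a school monitoring tool's flags. All counts are hypothetical; they
# are not figures from the CDT report.

def favorable_rate(flagged: int, total: int) -> float:
    """Share of students NOT flagged, i.e. the 'favorable' outcome."""
    return (total - flagged) / total

# Hypothetical flag counts from an online activity monitoring tool.
sped = favorable_rate(flagged=120, total=400)     # special education students
gened = favorable_rate(flagged=150, total=1600)   # general education students

# Four-fifths rule: each group's favorable-outcome rate should be at least
# 80% of the highest group's rate; a ratio below 0.8 is a conventional
# red flag that a closer civil rights review is warranted.
ratio = min(sped, gened) / max(sped, gened)
print(f"special ed unflagged rate: {sped:.1%}")   # 70.0%
print(f"general ed unflagged rate: {gened:.1%}")  # 90.6%
print(f"impact ratio: {ratio:.2f}, threshold 0.80")
```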

In response to the report, a coalition of Democratic lawmakers penned a letter urging the Education Department’s civil rights office to investigate districts that use digital surveillance and other AI tools in ways that perpetuate discrimination. 

Education technology companies that use artificial intelligence could come under particular federal scrutiny as a result of the order, said consultant Amelia Vance, an expert on student privacy regulations and president of the Public Interest Privacy Center. The order notes that the federal government plans to enforce consumer protection laws and enact safeguards “against fraud, unintended bias, discrimination, infringements on privacy and other harms from AI.” 

“Such protections are especially important in critical fields like healthcare, financial services, education, housing, law and transportation,” the order notes, “where mistakes by or misuse of AI could harm patients, cost consumers or small businesses or jeopardize safety or rights.”

Schools rely heavily on third-party vendors like education technology companies to provide services to students, and those companies are subject to Federal Trade Commission rules against deceptive and unfair business practices, Vance noted. The order’s focus on consumer protections, she said, “was sort of a flag for me that maybe we’re going to see not only continuing interest in regulating ed tech, but more specifically regulating ed tech related to AI.”

While the order was “pretty vague when it came to education,” Vance said it was important that it acknowledged AI’s potential benefits for schools, including personalized learning and adaptive testing. 


“As much as we keep talking about AI as if it showed up in the past year, it’s been there for a while and we know that there are valuable ways that it can be used,” Vance said. “It can surface particular content, it can facilitate better connections to people when they need certain content.” 

AI and facial recognition cameras

As school districts pour billions of dollars into school safety efforts in the wake of mass school shootings, security vendors have heralded the promises of AI. Yet civil rights groups have warned that facial recognition and other AI-driven technology in schools could perpetuate biases — and could miss serious safety risks. 

Just last month, the gun-detection company Evolv Technology, which pitches its hardware to schools, acknowledged it was the subject of a Federal Trade Commission inquiry into its marketing practices. The agency is reportedly probing whether the company employs artificial intelligence in the ways that it claims. 

In September, New York became the first state to ban facial recognition in schools, a move that followed outcry when an upstate school district announced plans to roll out a surveillance camera system that tracked students’ biometric data. 

A new Montana law bans facial recognition statewide with one notable exception: schools. Citing privacy concerns, lawmakers this year prohibited government agencies from using the technology but wrote a specific carveout for schools. One rural education system, the 250-student Sun River School District, employs a 30-camera security system from Verkada that uses facial recognition to track the identities of people on its property, roughly one camera for every eight students. 

District and Verkada representatives didn’t respond to requests for comment. But Verkada offers a cautionary tale about the potential security vulnerabilities of campus surveillance systems. In 2021, the company suffered a massive data breach when hackers exposed the live feeds of 150,000 surveillance cameras — including those in place at Sandy Hook Elementary School in Newtown, Connecticut, the site of a mass shooting in 2012. 

Hikvision has similarly made inroads in the school security market with its facial recognition surveillance cameras, including during a pandemic-era push to enforce face mask compliance. Yet the company, owned in part by the Chinese government, has also faced significant allegations of civil rights abuses and in 2019 was placed on a U.S. trade blacklist after being implicated in China’s “campaign of repression, mass arbitrary detention and high-technology surveillance” against Muslim ethnic minorities. 

Though multiple U.S. school districts continue to use Hikvision cameras, a recent investigation found that the company’s software still seeks to detect ethnic minorities, even though the company has claimed for years that it ended the practice.

In an email, a Hikvision spokesperson didn’t comment on how Biden’s executive order could affect its business, including in schools, but offered a letter it shared with its customers in response to the investigation, saying an outdated reference to ethnic detection had appeared on its website erroneously.


“It has been a longstanding Hikvision policy to prohibit the use of minority recognition technology,” the letter states. “As we have previously stated, that functionality was phased out and completely prohibited by the company in 2018.”

Data scientist David Riedman, who built a national database tracking school shootings dating back decades, said artificial intelligence is at “the forefront” of the school safety conversation and that emerging security technologies can be built in ways that don’t violate students’ rights. 

Riedman became a figure in the national conversation about school shootings as the creator of the K12 School Shooting Database but has since taken on an additional role as director of industry research and content for ZeroEyes, a surveillance software company that uses security cameras to ferret out guns. Rather than relying on facial recognition, ZeroEyes trained its algorithm to identify firearms and notify law enforcement within seconds of spotting one. 

The company maintains that its object-detection approach — as opposed to facial recognition — can “evade privacy and bias concerns that plague other AI models,” and internal research found that “only 0.06546% of false positives were humans detected as guns.” 

“The simplicity” of ZeroEyes’ technology, Riedman said, puts the company in good standing as far as the Biden order is concerned.

“ZeroEyes isn’t looking for people at all,” he said. “It’s only looking for objects and the only objects it is trying to find, and it’s been trained to find, are images that look like guns. So you’re not getting student records, you’re not getting student demographics, you’re not getting anything related to people or even a school per se. You just have an algorithm that is constantly searching for images to see if there is something that looks like a firearm in them.”
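Riedman’s description maps onto a simple pipeline: run an object detector over each camera frame, ignore every class except firearms and escalate high-confidence hits to a human reviewer. The sketch below illustrates that structure with a stubbed-out detector; ZeroEyes’ real model, class labels and thresholds are proprietary, so every function name and number here is an assumption.

```python
# Hypothetical sketch of a firearm-only detection pipeline. Not ZeroEyes'
# actual system: the detector is stubbed, where in practice it would be a
# trained object-detection model running on live camera frames.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # object class, e.g. "firearm"
    confidence: float  # model confidence in [0, 1]

def detect_objects(frame_id: str) -> list:
    """Stand-in for a trained detector; returns detections for one frame."""
    # Illustrative output only: one frame yields a (possibly spurious) hit,
    # like the shadow-induced false positive described below.
    return [Detection("firearm", 0.91)] if frame_id == "cam07-frame512" else []

ALERT_THRESHOLD = 0.85  # illustrative; real deployments tune this carefully

def scan_frame(frame_id: str) -> None:
    for det in detect_objects(frame_id):
        # Only the firearm class can trigger an alert: no faces, identities
        # or student records are ever consulted by this pipeline.
        if det.label == "firearm" and det.confidence >= ALERT_THRESHOLD:
            print(f"ALERT: possible firearm in {frame_id} "
                  f"(confidence {det.confidence:.0%}); escalate to human review")

scan_frame("cam07-frame512")   # fires an alert
scan_frame("cam07-frame513")   # no detections, no alert
```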

However, false positives remain a concern. Just last week at a high school in Texas, a false alarm from ZeroEyes prompted a campus lockdown that set off student and parent fears of an active shooter. The company said the false alarm was triggered by an image of a student outside whom the system believed to be armed based on shadows and the way his arm was positioned. 

