Ed-Tech Industry Group Calls for Equity, Data Privacy Safeguards in … – Education Week


Ed-tech companies need to ensure that AI tools are designed with equity and data privacy in mind, and that they build greater understanding in schools of the technology’s power and risks.

Those are some of the newly published recommendations of the Software and Information Industry Association, an advocacy organization, which released what it calls “guiding principles” for developing the technology. 

The principles were developed with the help of major education companies, such as Pearson, D2L, Instructure, McGraw Hill, and GoGuardian. 

“There are a lot of great possibilities for [AI], but that’s why we released these principles – so that we can provide some early guardrails for companies that are looking to launch tools,” said Sara Kloek, vice president of education and children’s policy at the Software and Information Industry Association. “This will help them start thinking about how they can help transform education in a positive way.” 

The guidelines say that AI tools in education should accomplish the following: 

  • Address the needs of learners, educators, and families, and make use of “established, modern learning principles and design,” where necessary. 
  • Account for educational equity, inclusion, and civil rights as key elements of successful learning environments. 
  • Protect student privacy and data and adhere to relevant state and federal laws. 
  • Strive for transparency to enable the school community to effectively understand and engage with the AI tools. 
  • Be created by companies that engage with education institutions and stakeholders to explain and demystify the opportunities and risks of new AI technologies. 
  • Be grounded in best practices for accountability, assurance, and ethics, calibrated to mitigate risks and achieve the goals of these principles. 
  • Be developed with the greater education community to identify ways to support AI literacy for students and educators. 

The SIIA put together the document after reviewing AI guidelines issued by the White House and by the National Institute of Standards and Technology. The organization also drew on guidance from international organizations and on policies from the U.S. Department of Education to shape this final iteration of its guiding principles on artificial intelligence.

It was important that these principles be easy to understand, not just for companies but for school districts, too, as a way to increase overall literacy around the technology, Kloek said.

Education companies “need to be transparent about what [their AI product] does because it’s really opaque right now,” Kloek said. “There’s a lot of cool work they could be doing, but it’s about how it’s actually going to impact students and teachers in a positive way.” 

Education companies have been experimenting with AI for years. But their interest in the technology has grown as the capabilities of more advanced forms of AI, such as generative tools, have come into focus.

An EdWeek Market Brief survey of more than 400 education company officials conducted this year found that 55 percent of them said they’re making use of AI in some way. The top areas where they’re incorporating artificial intelligence include formative assessment and online professional development, the survey showed.

At the same time, many schools are struggling to figure out how to regulate AI in classrooms, and whether they should restrict it or — reasoning that they can’t stop its growth anyway — encourage students’ responsible and creative use of technology.

The AI literacy piece in schools is crucial, said Suzanne Bernstein, law fellow at the Electronic Privacy Information Center, a nonprofit research center that advocates for stronger privacy standards. 

Students and teachers need to understand what to look for in AI products well enough to ask more questions and do more research when AI-produced materials don’t seem accurate, she said.

While more regulatory action from the government is still needed in the AI space, industry guidelines are a good first step, she said.

“There should be a responsibility on companies to protect the safety and security of students online…especially when it comes to AI and the collection, retention, and use of data,” she said.  

“But these tools, when used responsibly, can be a tremendous help for all kinds of learners at different levels.” 
