Facial recognition technology raises concerns over misidentification, implicit bias – NBC2 News


FORT MYERS, Fla. — Using artificial intelligence can save you precious time. Whether you need a grocery list or to sort through thousands of job applicants, there’s likely a bot that can get it done.

But sometimes, it can do more harm than good.

For decades, people of color were kept out of homeownership through a practice called redlining. Though the practice is now illegal, its effects persist.

“The information used in redlining has largely been fed into new algorithms,” Brian Vines said, “that are essentially doing the same kind of thing without the racist overtones.”

Vines said in some cases, technology is failing in home lending, medicine, facial recognition, and security.

“Frankly, tech can be racist. If tech is fed bad information, it will continue to give us bad outputs,” Vines said.

Florida Gulf Coast University professor and A.I. ethicist Chrissann Ruehle said over time, these biases have shifted to personal characteristics like facial expressions, glasses and skin tone.

“Unfortunately there are some biases that can pop up in that area,” Ruehle said. “A number of the technology companies have actually stepped back and said we do not want to use that facial recognition technology because it does have some challenges.”

During the COVID-19 pandemic, pulse oximeters helped save lives. A study by researchers at the University of Michigan showed the devices were less accurate for Black patients than for White patients.

“People of color were presenting and getting wrong readings. It delayed the care that they were able to receive,” Vines said, “and could really have some dire consequences if you’re showing up and your blood oxygen level is incorrect.”

You may not notice it, but facial recognition technology is everywhere: on your phone, at a store's self-checkout, or in the security line at an event.

“We’ve seen cases across the country of people being misidentified and facing criminal charges,” Vines said.

Artificial intelligence isn't going anywhere anytime soon. A Pew Research Center report released this year found that 62 percent of those polled believe A.I. will have a major impact on workers over the next 20 years.

Ruehle said it's important to catch biases early and train them out of the technology before they can harm real people.

“I think a key is education upfront,” Ruehle said. “I have an ethical responsibility to make sure that I understand those blind spots. And I also have an ethical responsibility that I need to direct the work of artificial intelligence. Can’t get lazy. I need to retain the decision-making authority.”
