
AI Evolution: Tackling Fears, Bias, Security, and Efficiency


With the rise in popularity of artificial intelligence, C-level bosses are pressuring managers to put AI and machine learning to work. That pressure is creating problems as mid-level executives struggle to find ways to meet the demand for next-generation AI solutions.

As a result, a growing number of unprepared businesses are lagging behind. At stake is the competitive harm businesses across industries may suffer by failing to integrate generative AI and large language models (LLMs) quickly.

These AI technologies are the new big deal in workplace automation and productivity. They have the potential to revolutionize how work is done, increasing efficiency, fostering innovation, and reshaping the nature of certain jobs.

Generative AI is one of the more promising AI derivatives. It can facilitate collaborative problem-solving based on real company data to optimize business processes. LLMs can assist by automating routine tasks, freeing time for more complex and creative projects.
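For a sense of what that kind of routine-task automation can look like, below is a minimal sketch that asks an LLM to summarize a support ticket. It assumes the OpenAI Python client with an API key in the environment; the model name, prompt, and function are illustrative placeholders rather than anything described in the article.

```python
# Minimal sketch: delegating a routine task (summarizing a support
# ticket) to an LLM. Assumes the OpenAI Python client (pip install
# openai) and an OPENAI_API_KEY environment variable; the model name
# below is a placeholder.
from openai import OpenAI

client = OpenAI()

def summarize_ticket(ticket_text: str) -> str:
    """Return a one-paragraph summary of a support ticket."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system",
             "content": "Summarize this support ticket in one paragraph."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_ticket(
        "Customer cannot log in after resetting password; error 403 on mobile app."
    ))
```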

Three nagging issues top the list of obstacles organizations face in making AI transformation work. Until companies solve them, they will continue to flounder in moving AI use forward productively, according to Morgan Llewellyn, chief data and strategy officer for Stellar. He explained that they must:

  • Get a handle on AI capabilities,
  • Understand what is possible for their internal work processes, and
  • Step up workers’ capacity to handle the changes.

Perhaps an even more perplexing struggle lies in the unresolved concerns about security safeguards to keep AI operations from overstepping human-imposed concepts of privacy, added Mike Mason, chief AI officer at Thoughtworks.

“Too often, regulators have struggled to keep pace with technology and enact legislation that dampens innovation. The pressure for regulation will continue unless the industry addresses the issue of trust with consumers,” Mason told TechNewsWorld.

Pursuing an Unpopular View

Mason makes the case that relying on regulation is the wrong approach. Businesses can win consumers’ trust and potentially avoid cumbersome lawmaking through a responsible approach to generative AI.

He contends that the solution to the safety issue lies within the industries using the new technology to ensure the responsible and ethical use of generative AI. It is not up to the government to mandate guardrails.


“Our message is that businesses should be aware of this consumer opinion. And you should realize that even if there aren’t government regulations coming out in the rest of the world, you are still held accountable in the court of public opinion,” he argued.

Mason’s view counters recent studies that favor a heavy regulatory hand, in which a majority (56%) of consumers said they do not trust businesses to deploy gen AI responsibly.

Those studies, which polled 10,000 consumers across 10 countries, reveal that a vast majority (90%) agree that new regulations are necessary to hold businesses accountable for how they use gen AI, he admitted.


Mason based his opposing viewpoint on other responses in those studies, which show that businesses can earn their own social license to operate responsibly.

He noted that 83% of consumers agreed that businesses can use generative AI to be more innovative and serve them better. Roughly the same share (85%) prefer firms that stand for transparency and equity in their use of gen AI.

Thoughtworks is a technology consultancy that integrates strategy, design, and software engineering to enable enterprises and technology disruptors to thrive.

“We have a strong history of being a systems integrator and understanding not just how to use new technology but how to get it to really work and play well with all of those existing legacy systems. So, I’d definitely say that’s a problem,” Mason said.

Control Bad Actors, Not Good AI

Stellar’s Llewellyn supports the notion that security concerns over AI safety violations are manageable without a heavy hand in government regulation. He acknowledged that holes exist in computer systems that can give bad actors new opportunities to do harm.

“Just like with implementing any other technology, the security concern is not insurmountable when implemented properly,” Llewellyn told TechNewsWorld.

Generative AI exploded on the scene about a year ago. No one had the staffing resources to handle the new technology along with everything else people were already doing, he observed.


All industries are still looking for answers to four troubling questions about the role of AI in their organization:

  • What is it?
  • How does it benefit my business?
  • How can I do it safely and securely?
  • How do I even find the talent to implement this new thing?

That is the role Stellar fills for companies facing those questions. It helps with strategy so adopters understand which approach to AI fits their business.

Then Stellar does the infrastructure design work, where all those security concerns get addressed. Lastly, Stellar can come in and help deploy a credible business solution, Llewellyn explained.

The Sci-Fi Specter of AI Dangers

From a software developer’s perch, Mason sees two equally troubling views of AI’s potential dangers. One is the sci-fi scenario; the other is invasive use.

He sees people thinking about AI in terms of whether it creates a runaway superintelligence that decides that humans are getting in the way of its other goals and ends us all.

“I think it is definitely true that not enough research has been done, and not enough spending has occurred on AI safety,” he allowed.

Mason noted that the U.K. government recently started talking about increasing investment in AI safety. Part of the problem today is that most of the AI safety research comes from the AI companies themselves. That’s a little bit like asking the foxes to guard the henhouse.

“Good AI safety work has been done. There is independent academic research, but it is not funded the way it should be,” he mused.


The other existing problem with artificial intelligence is that its use and modeling can produce biased results. All of these AI systems learn from the training data provided to them. If you have biased data, overt or subtle, the AI systems that you build on top of that training data will exhibit the same bias.
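To make that mechanism concrete, here is a toy sketch using scikit-learn on entirely synthetic data: the historical labels are skewed by a sensitive attribute, and the trained model faithfully reproduces that skew for otherwise identical inputs.

```python
# Toy illustration of biased training data producing a biased model.
# All data here is synthetic; requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

skill = rng.normal(size=n)             # a legitimate predictor
group = rng.integers(0, 2, size=n)     # a sensitive attribute

# Biased historical labels: outcomes depend on group membership,
# not just skill.
y = (skill + 0.8 * group + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, y)

# Two applicants with identical skill but different group membership
# receive noticeably different scores: the model learned the bias.
probe = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(probe)[:, 1])
```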

Maybe it does not matter too much if a big box retailer markets to customers and makes a few mistakes because of the data bias. However, a court relying on an AI system for sentencing guidelines needs to be very sure biased data is not involved, he offered.


“The first thing we must look at is: ‘What can companies do?’ You still need to start looking at bias and data because if you lose your customer trust on this, it can have a significant impact on a business,” said Mason. “The next topic is data privacy and security.”

The Power Within AI

Use cases for AI’s ability to save time, speed up data analysis, and solve human problems are far too numerous to expound upon here. However, Mason offered an example that clearly shows how using AI can improve efficiency and cut costs.

Food and beverage company Mondelez International, whose brand lineup includes Oreo, Cadbury, Ritz, and others, tapped AI to help develop tasty new snacks.

Developing those products involves testing literally hundreds of ingredients to work into a recipe. Then, cooking instructions are needed. Ultimately, expert human tasters evaluate the results to find the best ones.

That process is costly, labor-intensive, and time-consuming. Thoughtworks built an AI system that lets the snack developers feed in data on previous recipes and human expert taster results.

The end result was an AI-generated list of 10 new recipes to try. Mondelez could then make all 10, give them to the human tasters again, get the expert feedback, and add those 10 new data points. Ultimately, the AI program would chew on all the results and spit out the winning concoction.
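The article does not detail Thoughtworks’ actual system, but the loop it describes resembles a standard propose-taste-refit optimization. The sketch below is a hypothetical stand-in: a surrogate model is fit to past recipes and taster scores, proposes the 10 most promising candidates, and folds the (here simulated) taster feedback back in each round. All names, dimensions, and the simulated taste panel are invented for illustration.

```python
# Hypothetical sketch of a propose-taste-refit loop, not Thoughtworks'
# actual system. Requires numpy and scikit-learn.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
N_INGREDIENTS = 8  # invented dimensionality

def taste_panel(recipes: np.ndarray) -> np.ndarray:
    """Stand-in for human tasters: closeness to a hidden ideal, plus noise."""
    ideal = np.linspace(0.0, 1.0, N_INGREDIENTS)
    return (-np.linalg.norm(recipes - ideal, axis=1)
            + rng.normal(scale=0.05, size=len(recipes)))

# Seed data: past recipes (ingredient proportions) and their scores.
recipes = rng.random((30, N_INGREDIENTS))
scores = taste_panel(recipes)

for round_no in range(5):
    # Refit the surrogate on everything tasted so far.
    surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
    surrogate.fit(recipes, scores)

    # Propose many candidates; keep the 10 the model rates highest.
    candidates = rng.random((2_000, N_INGREDIENTS))
    best10 = candidates[np.argsort(surrogate.predict(candidates))[-10:]]

    # "Cook" the 10 recipes and collect taster feedback.
    new_scores = taste_panel(best10)
    recipes = np.vstack([recipes, best10])
    scores = np.concatenate([scores, new_scores])
    print(f"round {round_no + 1}: best score so far {scores.max():.3f}")
```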

“We found this thing was able to much more quickly converge on the actual flavor profile that Mondelez wanted for its products and shave literally millions of dollars and months of work cycles,” Mason said.



