
20 issues shaping generative AI strategies today


Organizations are rushing to figure out how to extract business value from generative AI — without falling prey to the myriad pitfalls arising.

The adoption curve here is by no means gradual, with most enterprise leaders quickly working to harness the technology’s potential mere months after the November 2022 launch of gen AI tool ChatGPT kicked off a wave of enthusiasm (and worry).

Just look at the stats: Some 45% of 2,500 executives polled for a May 2023 report from research firm Gartner said the publicity around ChatGPT prompted them to increase their AI investments, 70% said their organization is already exploring gen AI, and 19% are in actual pilot or production mode. Those results align with Foundry’s more recent July 2023 CIO Tech Talk survey that found 60% of IT leaders are actively using gen AI in their enterprise, with 28% more in the exploratory phase.

The majority of IT leaders are upskilling employees on gen AI (54%) and getting gen AI tools in users’ hands (52%), while 42% are establishing gen AI policies and guidelines, according to the Foundry survey.

Still, tech leaders, executive advisors, management consultants, and AI enthusiasts agree that the C-suite is facing multiple issues as they test and adopt generative AI. They note, too, that CIOs — being top technologists within their organizations — will be running point on those concerns as companies establish their gen AI strategies.

So, what are the concerns that will complicate the enterprise gen AI playbook? Here’s a rundown of the top 20 issues shaping gen AI strategies today.

1. To allow or not

According to various news reports, some big-name companies initially blocked generative AI tools such as ChatGPT for various reasons, including concerns about protecting proprietary data. Some companies have lifted their bans and are allowing limited use of the technology; others have not.

As vendors add generative AI to their enterprise software offerings, and as employees test out the tech, CIOs must advise their colleagues on the pros and cons of gen AI’s use as well as the potential consequences of banning or limiting it.

“The No. 1 question now is to allow or not allow,” says Mir Kashifuddin, data risk and privacy leader with the professional services firm PwC US. “Many companies are sitting in the middle of the do-we-block-it-or-do-we-allow-it discussion.”


That debate extends to the top tech post itself, as CIOs weigh whether to push ahead with AI exploration or pause it.

2. Rapidly evolving risks

Companies that have blocked the use of gen AI are finding that some workers are still testing it out. They’re also realizing that they’ll have to learn to harness the technology’s potential or be left behind.

That leaves companies scrambling to identify the most immediate risks of moving forward with gen AI pilots.

“The CIO’s job is to ask questions about potential scenarios. The CIO should talk about risks and what the risks are,” says Mary Carmichael, managing director of risk advisory at Momentum Technology and a member of both the Emerging Trends Working Group and the Risk Advisory Committee at the governance association ISACA.

3. Acceptable use policies

Carmichael says executives have another big question in front of them when it comes to tools like ChatGPT. The question: “What direction do you give to your staff right now on tech that is free and accessible?”

Carmichael says she recently asked a gathering of executives whether they had an acceptable use policy, and only a few said they had created one to guide their employees on what the company felt was acceptable.


“Staff are already asking, ‘Should I use this tool? When should I use it? What should I be mindful of?’ But right now IT seems to be trying to catch up,” Carmichael says. “So CIOs have to think about an enterprise gen AI [approach] with controls; they need to think about setting some policies.”

4. Business disruption

Generative AI is a disruptive technology, so CIOs and their C-suite colleagues must consider whether or how their company will fall victim to that force.


“How is your business impacted by generative AI? How has, say, ChatGPT hit your business model?” Carmichael asks, noting that some companies have already lost significant market value as they faced sudden competition from gen AI.

“This is an issue for CIOs. Some are expecting cost savings using this technology, but there’s also the expectation that they’ll find ways to incorporate this technology [into their company’s products] to strengthen their offerings,” Carmichael adds. “This is where CIOs have to gain knowledge about the technology and provide guidance and advisory.”


5. Building an engine in flight

As if those challenges weren’t big enough, executives must move fast and address them while everything is in flight.

Douglas Merrill, a partner at management consulting firm McKinsey & Co., says CIOs should apply agile processes to their gen AI strategy. “You have to be learning as things move forward but do [iterations] that are safe and controlled and focus on risk management,” he explains.

6. No good guidance yet

As CIOs seek to bring control and risk management to technology that’s generating widespread interest and plenty of experimentation, they’re doing so without pre-existing guidance and support.

“One of the particular issues that we all face is that generative AI is really new and it’s moving really quickly, so there’s not a lot of tooling in place,” Merrill says. “The risk guidelines for gen AI are fragile and new, and there’s no commonly accepted ‘Here’s how to think about risk guardrails.’ There will be eventually, but they don’t exist yet.”

7. Innovating at speed

Another area for CIOs to tackle: how to use generative AI to differentiate their organizations.

CIOs should focus on “where they can use generative AI effectively. It’s not a hammer. They’re looking for the art of the possible. And there are so many possibilities that you have to directly align them to the business strategy,” says Frances Karamouzis, a distinguished vice president analyst with research firm Gartner.

She acknowledges that tech-led innovation is not new to CIOs, but gen AI does indeed present new challenges due to the speed of its evolution, as well as its power and complexity.

“CIOs are going to have to ideate, educate, and execute in a very different way in terms of velocity and viscosity of what the solutions are,” Karamouzis adds.

8. Identifying, prioritizing use cases

Research firm IDC found in its May 2023 Generative AI Findings from Enterprise Intelligence Services Survey that nearly 70% of enterprise intelligence services buyers are considering or actively working on use cases for generative AI.

Companies need a way to collect, vet, and prioritize ideas on how to use the technology for the benefit of the enterprise.

“CIOs should first and foremost establish a clear roadmap for implementing generative AI. Whether the objectives are for productivity gains or other commercial gains, they need to be aligned with [the whole C-suite] on the roadmap before any selection of technologies to enable the roadmap is performed,” says Goh Ser Yoong, head of compliance at Advance.AI and a member of the ISACA Emerging Trends Working Group.

Karamouzis says CIOs must leverage their prior experience in prioritizing tech-driven initiatives and apply the same discipline to gen AI ideas to ensure their organizations invest wisely.

In its survey, IDC found that the highest priority use cases fall under the categories of knowledge management, code generation, and product or service design and engineering. Buyers agreed most strongly with the sentiment that generative AI will enable their employees to focus on higher-value tasks. And buyers most often disagreed that generative AI will expose them to greater risks.

Foundry’s gen AI survey found that retail CIOs are leading the way on identifying use cases (49%), followed by IT leaders in the manufacturing (42%), technology (42%), and financial services (32%) sectors.


9. Buy vs. modify vs. build

Companies can bring generative AI into their organizations in three different ways, and CIOs will lead their companies in deciding which approach works best for them.

McKinsey has developed a proprietary framework that describes the different use types as “taker,” “shaper,” and “maker.”

A taker uses what someone else built, leveraging the capabilities straight out of the box. The shaper will essentially customize such capabilities to work for its own needs or with its own proprietary data. The maker — the least common of the three types, McKinsey’s Merrill says — has the greatest need for structure and control and so will build its own models to meet its highly specialized requirements.
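To make the distinction concrete, the sketch below contrasts a taker, which calls a hosted model as-is, with a shaper, which wraps the same model in a thin retrieval layer over its own data. Everything here is illustrative: call_hosted_model, the toy knowledge base, and the keyword lookup are hypothetical stand-ins, not McKinsey’s framework or any particular vendor’s API.

```python
# Illustrative sketch only: "taker" vs. "shaper" patterns for using gen AI.
# call_hosted_model() is a hypothetical placeholder for whichever approved
# hosted model API an organization uses; swap in the real SDK in practice.

def call_hosted_model(prompt: str) -> str:
    """Placeholder for a call to an external foundation model."""
    return f"<model response to: {prompt!r}>"

# Taker: use the hosted model straight out of the box.
def taker_answer(question: str) -> str:
    return call_hosted_model(question)

# Shaper: same model, but grounded in proprietary context. Here a toy
# in-memory knowledge base stands in for a vector store or search index.
KNOWLEDGE_BASE = {
    "returns policy": "Customers may return items within 30 days.",
    "warranty": "Hardware carries a one-year limited warranty.",
}

def retrieve_context(question: str) -> str:
    """Naive keyword lookup standing in for real retrieval."""
    return "\n".join(v for k, v in KNOWLEDGE_BASE.items() if k in question.lower())

def shaper_answer(question: str) -> str:
    context = retrieve_context(question)
    prompt = f"Answer using only this company context:\n{context}\n\nQuestion: {question}"
    return call_hosted_model(prompt)

if __name__ == "__main__":
    print(taker_answer("What is your returns policy?"))
    print(shaper_answer("What is your returns policy?"))
```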

10. Enterprise readiness

Regardless of whether a company is a taker, shaper, or maker, it will need a modern enough technology stack and data program to make effective use of generative AI. 

“The data, the structures in the cloud, compute network and storage, that’s certainly the purview of the CIO,” Karamouzis says, adding that CIOs similarly have a responsibility to identify and address any bottlenecks or barriers within the tech or data stack that could keep their organizations from realizing the gen AI objectives they set.

She adds: “CIOs are the ones who will be building the tech to enable this.”

11. Data privacy and security

In mid-spring 2023, South Korean electronics company Samsung banned employee use of generative AI tools after finding that some of its internal source code had been uploaded to ChatGPT.

Other companies are worried that their workers are doing the same, Kashifuddin says.

“As the workforce is interacting with these tools, companies have to make sure they’re not putting in proprietary data that’s sent back up to the foundational models,” he adds.

Executives are looking to their CIOs, as well as their data leaders and privacy execs, to take action, develop governance policies, implement controls, and deploy monitoring tools to make sure employees are following acceptable use policies and not exposing proprietary data and intellectual property.
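What such a control might look like in practice is sketched below, under plenty of assumptions: a pre-submission check that scans outbound prompts for patterns the organization has flagged, such as internal markers or credentials, and blocks anything that matches. The rule names and patterns are invented for illustration; a real deployment would lean on enterprise DLP tooling rather than a short script.

```python
# Minimal, illustrative outbound prompt check: scan text before it leaves
# the company for an external gen AI service. The patterns and the
# block-and-log behavior are example choices, not a complete DLP solution.
import re

BLOCKED_PATTERNS = {
    "credential": re.compile(r"\b(?:api[_-]?key|secret|password)\s*[:=]\s*\S+", re.I),
    "internal_marker": re.compile(r"\b(?:confidential|internal use only)\b", re.I),
    "source_code": re.compile(r"\bdef \w+\s*\(|#include\s*<"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any policy rules the prompt trips."""
    return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]

def submit_prompt(prompt: str) -> str:
    violations = check_prompt(prompt)
    if violations:
        # Log for the monitoring team and refuse to forward the prompt.
        print(f"Blocked prompt; matched rules: {violations}")
        return "Request blocked by acceptable use policy."
    return "<forwarded to approved gen AI service>"

print(submit_prompt("Summarize this CONFIDENTIAL roadmap for me"))
```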

12. Data and privacy law adherence

Another issue that’s confronting CIOs — as well as data, compliance, risk, and security leaders — is this: how to monitor and ensure that workers don’t violate data privacy laws and data protection best practices if they’re using generative AI.

“When you put your data into an AI engine controlled by someone else, then you’ve lost control of that data,” warns Carl Froggett, CIO at Deep Instinct, maker of a cybersecurity platform.

13. The need for an audit trail

CIOs also are being asked to devise ways to discern and audit the results produced by generative AI to ensure the results are accurate, unbiased, and free from infringement on protected intellectual property, according to experts.

As Carmichael explains: “Responsible and explainable AI are other challenges that CIOs are going to have to deal with.”
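One possible building block, sketched below purely as an illustration, is an append-only log of every gen AI interaction that records who asked what, which model answered, and a hash of the output so reviewers can trace results later. The field names and the JSON Lines file are assumptions, not a standard schema.

```python
# Illustrative audit trail for gen AI usage: record who asked what, which
# model answered, and a hash of the output so results can be reviewed later
# for accuracy, bias, or IP concerns. Field names are assumptions.
import hashlib
import json
import time

def audit_record(user: str, model: str, prompt: str, response: str) -> dict:
    return {
        "timestamp": time.time(),
        "user": user,
        "model": model,
        "prompt": prompt,
        "response_sha256": hashlib.sha256(response.encode("utf-8")).hexdigest(),
        "human_reviewed": False,  # flipped once a reviewer signs off
    }

def log_interaction(record: dict, path: str = "genai_audit.jsonl") -> None:
    # Append-only JSON Lines file; in practice this would feed a
    # tamper-evident store or the organization's SIEM.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_interaction(audit_record("jdoe", "example-model-v1", "Draft a contract clause", "<model output>"))
```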

14. New regulations

Governments around the world are debating whether and how to regulate the use of AI. And while corporate legal departments and outside counsel will help executives discern what new laws mean to them, CIOs will have a role to play in making the AI technology within the enterprise adhere to any new laws.

“CIOs must stay abreast of evolving regulations and legal frameworks related to AI usage such as the recent European Union’s AI Act,” Goh says. “Compliance with data protection laws, intellectual property regulations, industry-specific guidelines, and ethical standards is crucial. CIOs should work closely with legal teams to understand and address any legal implications associated with generative AI.”

15. Accuracy concerns

Similarly, Kashifuddin and others call out the need for CIOs to help their organizations and the workers themselves adopt quality assurance procedures that verify any AI-produced insights they receive. That includes implementing QA within IT for any such code that developers want to use, Kashifuddin adds.
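A minimal version of such a gate might look like the sketch below, which refuses AI-generated code unless the project’s test suite and a static check both pass. The tool choices here (pytest and pyflakes) are assumptions about a typical Python shop, not a prescribed toolchain.

```python
# Sketch of a simple QA gate for AI-generated code: run the project's test
# suite and a static check before a candidate change is accepted. Assumes
# pytest and pyflakes are installed; substitute the organization's own tools.
import subprocess

def passes_qa(repo_path: str) -> bool:
    """Return True only if tests and lint both succeed on the candidate change."""
    checks = [
        ["python", "-m", "pytest", "--quiet"],
        ["python", "-m", "pyflakes", "."],
    ]
    for cmd in checks:
        result = subprocess.run(cmd, cwd=repo_path)
        if result.returncode != 0:
            return False
    return True

if __name__ == "__main__":
    print("QA gate passed" if passes_qa(".") else "QA gate failed: human review required")
```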

The consequences of falling short on this step can be significant, as some well-publicized examples show. One notable example is that of a New York lawyer who faced his own court hearing after filing a legal document drafted with ChatGPT that cited legal cases the technology had made up.


16. Upskilling requirements

AI is quickly reshaping the demand for IT skills and talent, and workers across the organization will need to learn how to work with generative AI tools.

Meanwhile, IT workers must retrain to work with generative AI tools to do their jobs and pick up the skills required to support the technology as it’s used throughout the organization.

As Goh says: “CIOs must assess the skill gaps within their IT departments and provide adequate training and resources to understand and work with generative AI effectively.”

17. AI-enhanced cybersecurity threats

Another area of concern for CIOs: how hackers are using generative AI.

“You’re talking about a new level of sophistication in [hackers’] crafting of emails and messages and bypassing biometrics because the generative AI takes on the persona, the mannerisms, and the phrases that a real person would use,” Froggett says, adding that generative AI can also produce fake video and audio that seem authentic.

Such sophisticated attacks can render existing security controls, such as voice authentication, obsolete – forcing CIOs and CISOs to quickly find alternatives.

“The controls we had in place may not work because of what generative AI is already doing,” Froggett adds.

18. Nervous workers

In a March 2023 report, Goldman Sachs calculated that some two-thirds of current jobs “are exposed to some degree of AI automation, and that generative AI could substitute up to one-fourth of current work.” In other words, the firm said, the equivalent of 300 million full-time jobs could be lost to automation.

“There’s a lot of uncertainty. People are thinking, ‘How is this going to affect my career? Do I need to reskill?’” Carmichael says.

As part of the executive team, CIOs should be working to engage their employees and address their concerns, she says, adding that she believes gen AI will produce jobs — perhaps more than will be lost. That’s a message CIOs could share, too, as they tackle the thorny gen AI change management challenge.

19. Leading change through this new technology

CIOs will be among the executives responsible for leading their organizations through all the disruption that gen AI is expected to bring.

CIOs are a natural fit for this work, Carmichael and others say; they’ve been building their skills in this space for the past decade as they led their organizations through digital transformations.

However, some say leading through the upcoming change will be different from prior change management scenarios, as the pace of adoption and the disruption gen AI brings could dwarf previous technology revolutions.

“The pace has surprised everyone, and now we have to keep up,” Carmichael says. “So the question is how well you manage all that and move forward with confidence.”

20. Pulling together the teams

The CIO, of course, cannot tackle all these issues alone — nor should the CIO try. But as the primary technologist in many organizations, the CIO will likely be a key advisor to the executive team about the risks and opportunities that gen AI presents.

“The CIO is enabling or equipping, is acting as the facilitator, advisor and delivery person,” Karamouzis says. “So now the role of the CIO is one of the integral roles in the C-suite in helping others understand the whole list of things that needs to be done.”
