Generative AI has prompted widespread discussion about what role humans should play, if any, alongside the technology as it grows better at rote tasks. This is particularly true for lower-level IT service desk positions.
Most analysts and tech leaders say some job disruption across the economy is inevitable thanks to generative AI. But when asked directly whether the technology will replace workers, most are hesitant to entertain the idea; others say it’s possible but not yet advisable.
“It depends, but if you have a good knowledge base and it’s well trained, it can really replace — for the most part — tier one service desk specialists, which is perhaps a little bit scary for some people,” said Mark Tauschek, VP, infrastructure and operations research at Info-Tech Research Group.
Tier one specialists are often the front lines of an IT department, connecting with users and performing basic troubleshooting. The role is also an entry point for aspiring technologists, with the average tier one help desk worker bringing in an annual salary of around $48,000 in the U.S., according to a Glassdoor data analysis last updated in June.
Tier one service desk employees, despite their entry-level role, can significantly shape how the rest of the company perceives its IT department. How they deliver solutions and treat end users lays the foundation for employee experience.
Yet, experts wonder what role these specialists will play in technology departments moving forward, even if many analysts believe fears of imminent widespread job loss are overblown.
More than 1 in 4 professionals say workforce displacement is one of their concerns regarding generative AI implementation, according to a June Insight Enterprises survey conducted by The Harris Poll of 405 U.S. employees who serve as director or higher within their company.
Jeremy Rafuse, VP and head of digital workplace at software development company GoTo, has heard conversations among IT workers at the company about fears of job disruption as teams look to automate some tasks with generative AI.
“I think it’s hard not to when you’re talking about automating certain workloads instantly,” said Rafuse, who oversees the IT department at GoTo. “We are trying to promote that this is an opportunity to learn something new and not only is this going to potentially upskill your job so you could be working on different things, but it’s going to create jobs that don’t even exist now.”
GoTo, the parent company of LastPass, has automated routine tasks within the service desk for years. More recently, the IT team has dedicated time to learn about generative AI and identify low-risk use cases, Rafuse said.
“We don’t want to just hide under the blanket,” Rafuse said. “But teams are aware, and they’re pretty optimistic about this being a chance to learn something new.”
In the service desk ecosystem, the company wants to use generative AI to analyze large amounts of data, identify trends related to satisfaction ratings and pinpoint customer pain points.
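GoTo has not published implementation details, but the general shape of that kind of analysis is simple: label each ticket, then aggregate the labels by product area to surface trends. A minimal Python sketch, in which the keyword-based `classify_sentiment` is a self-contained stand-in for whatever vetted model a team would actually call:

```python
from collections import Counter

NEGATIVE_CUES = ("crash", "slow", "broken", "can't", "frustrat")

def classify_sentiment(ticket_text: str) -> str:
    """Stand-in classifier: a real pipeline would ask a vetted LLM for
    a one-word label, but a keyword check keeps this sketch runnable."""
    text = ticket_text.lower()
    return "negative" if any(cue in text for cue in NEGATIVE_CUES) else "neutral"

def summarize_pain_points(tickets: list[dict]) -> Counter:
    """Tally sentiment per category to surface recurring pain points."""
    trends = Counter()
    for ticket in tickets:
        trends[(ticket["category"], classify_sentiment(ticket["description"]))] += 1
    return trends

tickets = [
    {"category": "vpn", "description": "VPN client crashes on login"},
    {"category": "vpn", "description": "Still broken after the update"},
    {"category": "email", "description": "How do I set an auto-reply?"},
]
print(summarize_pain_points(tickets).most_common(2))
# [(('vpn', 'negative'), 2), (('email', 'neutral'), 1)]
```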
“For us to not stay ahead of this would be really foolish,” Rafuse said, a sentiment that most tech executives relate to.
Nearly all — 95% — of tech executives say they feel pressured to adopt generative AI in the next six months to a year, according to a July IDC survey of 900 respondents sponsored by Teradata. More than half said they were under “high or significant” levels of pressure.
Just because you can doesn’t mean you should
Despite fears of generative AI replacing workers, the makers of popular models reject the narrative that LLMs and generative AI tools should, or even can, stand in for an employee.
“Humans should always be involved with AI systems,” Sandy Banerjee, GTM lead at Anthropic, said in an email. “Even where an AI system is answering questions or triaging tickets, there should be a [quality assurance] process where humans are keeping track of the system and evaluating its outputs.”
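Anthropic’s guidance doesn’t prescribe a mechanism, but one common way to implement that kind of QA loop is to sample a share of AI-handled tickets into a human review queue. A minimal sketch, with the sampling rate and field names as assumptions:

```python
import random

REVIEW_RATE = 0.10  # assumed: route 10% of AI answers to human QA

def handle_ticket(ticket: str, ai_answer: str, review_queue: list) -> str:
    """Serve the AI's answer, but sample a fraction of tickets into a
    queue that humans evaluate for wrong or harmful responses."""
    if random.random() < REVIEW_RATE:
        review_queue.append({"ticket": ticket, "answer": ai_answer})
    return ai_answer

review_queue: list = []
handle_ticket("Stuck in a password reset loop",
              "Clear the SSO cookie and retry.", review_queue)
# Reviewers later work through review_queue; their findings feed back
# into the prompts, knowledge base or escalation rules.
```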
AI is not without its faults, after all. In published research, Anthropic found its Claude models still get facts wrong and fill knowledge gaps with fabrication, and it emphasized that models should not be used in high-stakes situations or where an incorrect answer could cause harm.
Researchers from Stanford and UC Berkeley found that models made by OpenAI weren’t necessarily getting better over time. In some of the cases they tested, performance and accuracy were significantly worse, signaling a need for continuous monitoring.
Even so, models from generative AI startup Anthropic are available off the shelf for enterprise use and through third-party services such as Slack. Even as providers release updates in beta and describe their tools as works in progress, they are courting enterprise customers.
OpenAI equated its code interpreter plugin to a “very eager junior programmer working at the speed of your fingertips” in the blog post that unveiled ChatGPT plugins in March, but the tool can still generate incorrect information or “produce harmful instructions or biased content,” according to the ChatGPT web homepage.
Despite vendor warnings, decisions to replace workers with generative AI hinge on cost and on how organizations value particular roles. The National Eating Disorders Association made headlines when it shut down its human-run national helpline in favor of a chatbot called Tessa. After users posted on social media that Tessa had recommended weight loss and dietary restriction, the nonprofit pulled the tool in June.
At GoTo, there have been conversations and meetings to set expectations for what generative AI technology can and cannot do. Rafuse underlined that the technology should be used as a tool, not relied on outright, because it can get information wrong.
“You know sometimes you only have that one chance, and if you tell somebody the wrong thing, they’re not going to come back to you again,” Rafuse said. “First impressions last forever.”
Inaccurate information and bias aren’t the only risks. Melanie Siao-Si, VP of international care and services at GoDaddy, said the team made sure to communicate usage guidelines before and throughout rollouts to ensure employees did not expose proprietary information.
The FTC is currently investigating OpenAI to determine whether the company engaged in unfair or deceptive data security practices.
“Especially in the care organization, we’re learning our way into this, hence the experimentation,” Siao-Si told CIO Dive. “Obvious challenges include data security and privacy and potentially threat actors using the same technology to target our customer care organization.”
State of play
In the roughly eight months since ChatGPT debuted, most tech leaders place perceptions of generative AI somewhere between the peak of inflated expectations and the trough of disillusionment on Gartner’s hype cycle.
The hype cycle identifies five key phases: the innovation trigger, the peak of inflated expectations, the trough of disillusionment, the slope of enlightenment and the plateau of productivity. On Wednesday, Gartner placed generative AI at the peak of inflated expectations for 2023.
“Many technologies really follow the cycle and I don’t think generative AI will be any different,” Justin Falciola, SVP and chief technology and insights officer at Papa Johns, said. “The only thing is no one knows exactly where you are.”
Tech leaders are tasked with pushing through the hype to find value. At GoDaddy, Siao-Si is leading the customer care team’s AI experimentation as the company looks to improve customer experience by offering a variety of ways to make contact and solve questions or queries.
On product pages, the team manages a prompt library to help customers formulate different ways to set up their business, according to Siao-Si. AI-powered bots also help customers fine-tune what kind of help they need before receiving guidance.
Enterprise software provider Atlassian added support for tone adjustments in responses produced by its generative AI tool in Jira Service Management, one of a slew of updates to its Atlassian Intelligence layer the company announced in April.
“The reminder here is that the user is always in control,” Sherif Mansour, head of product, AI at Atlassian, told CIO Dive in a demonstration in April. “They can cancel and not accept the suggestions, [or] they can accept, edit and modify it.”
As businesses prepare systems for generative AI implementation, leaders will have to contend with knowledge gaps throughout the organization and differences in user preference. Inadvertent impacts on customer experience can also occur if businesses don’t provide a seamless transition from chatbot to human touchpoint.
“If people don’t like it, they will avoid it,” Chris Matchett, senior director analyst at Gartner, said. “That’s why some of the advice we give them is to not have it as a bouncer or a doorman to the club that you have to get past before you can speak to a human.”
Generative AI-powered self-service channels should be implemented as an option for end users, Matchett said. “Let people choose it when they want, rather than force it down people’s throats.”
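In code, that advice amounts to a routing decision rather than a gate. A minimal sketch, with the function names being illustrative assumptions:

```python
def start_bot_session() -> str:
    return "bot"    # hand the user to the generative AI assistant

def queue_for_human() -> str:
    return "human"  # place the user in the live-agent queue

def route_request(channel_choice: str) -> str:
    """The bot is one option among several; a human agent stays
    directly reachable and is never gated behind the bot."""
    if channel_choice == "self_service":
        return start_bot_session()
    return queue_for_human()

assert route_request("human") == "human"  # no bot detour required
```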