
Generative AI's momentum casts uncertainty over the future of the IT … – CIO Dive



Generative AI has prompted widespread discussion about what role humans should play, if any, alongside the technology as it grows better at rote tasks. This is particularly true for lower-level IT service desk positions.

The majority of analysts and tech leaders say some job disruption across the economy is inevitable thanks to generative AI. Yet when asked directly about the technology replacing workers, most are hesitant to entertain the idea; others say it is possible but not yet advisable.

“It depends, but if you have a good knowledge base and it’s well trained, it can really replace — for the most part — tier one service desk specialists, which is perhaps a little bit scary for some people,” said Mark Tauschek, VP, infrastructure and operations research at Info-Tech Research Group. 

Tier one specialists are often the front lines of an IT department, connecting with users and performing basic troubleshooting. The role is also an entry point for aspiring technologists, with the average tier one help desk worker bringing in an annual salary of around $48,000 in the U.S., according to a Glassdoor data analysis last updated in June. 

Tier one service desk employees, despite their entry-level role, can significantly shape how the rest of the company perceives its IT department. How they deliver solutions and treat end users is the foundation of the employee experience.




Yet experts wonder what role these specialists will play in technology departments moving forward, even if many analysts believe fears of imminent widespread job loss are overblown.

More than 1 in 4 professionals say workforce displacement is one of their concerns regarding generative AI implementation, according to a June Insight Enterprises survey conducted by The Harris Poll of 405 U.S. employees who serve as director or higher within their company.

Jeremy Rafuse, VP and head of digital workplace at software development company GoTo, has heard conversations among IT workers at the company about fears of job disruption as teams look to automate some tasks with generative AI. 

“I think it’s hard not to when you’re talking about automating certain workloads instantly,” said Rafuse, who oversees the IT department at GoTo. “We are trying to promote that this is an opportunity to learn something new and not only is this going to potentially upskill your job so you could be working on different things, but it’s going to create jobs that don’t even exist now.”


GoTo, the parent company of LastPass, has automated routine tasks within the service desk for years. More recently, the IT team has dedicated time to learn about generative AI and identify low-risk use cases, Rafuse said.

“We don’t want to just hide under the blanket,” Rafuse said. “But teams are aware, and they’re pretty optimistic about this being a chance to learn something new.”

In the service desk ecosystem, the company wants to use generative AI to analyze large amounts of data, identify trends related to satisfaction ratings and pinpoint customer pain points.

“For us to not stay ahead of this would be really foolish,” Rafuse said, a sentiment that most tech executives relate to.

Nearly all — 95% — of tech executives say they feel pressured to adopt generative AI in the next six months to a year, according to a July IDC survey of 900 respondents sponsored by Teradata. More than half said they were under “high or significant” levels of pressure. 

Just because you can doesn’t mean you should

Despite fears of generative AI technology replacing workers, the makers of widely used models reject the narrative that LLMs and generative AI capabilities should, or can, stand in for an employee.

“Humans should always be involved with AI systems,” Sandy Banerjee, GTM lead at Anthropic, said in an email. “Even where an AI system is answering questions or triaging tickets, there should be a [quality assurance] process where humans are keeping track of the system and evaluating its outputs.”

AI is not without its faults, after all. In published research, Anthropic found its Claude models still get facts wrong and fill knowledge gaps with fabrications, and the company emphasized that the models should not be used in high-stakes situations or where an incorrect answer could cause harm.





