The pace of AI continues to accelerate, with capabilities never before thought possible now becoming a reality. This is particularly true of AI agents, or virtual co-workers, which will work alongside us and, eventually, act autonomously.
In fact, Gartner predicts that by 2028, at least 15% of day-to-day work decisions will be made autonomously through agentic AI (up from 0% in 2024). Further emphasizing the technology’s potential, the firm has named it a top strategic technology trend in 2025.
“It’s happening really, really fast,” Gene Alvarez, distinguished VP analyst with Gartner, told VentureBeat. “Nobody ever goes to bed at night with everything done. Organizations spend a lot of time monitoring things. The ability to create agents to not only do that monitoring but take action will help not just from a productivity perspective but a timing perspective.”
What else does Gartner predict for the coming year? Here are some trends the firm will explore at its Gartner IT Symposium/Xpo 2024 this week.
AI agents both ‘cool and scary’
The entry-level use case for AI agents is handling mundane tasks that suck up human time and energy, Alvarez explained.
The next level is agentic AI that can autonomously monitor and manage systems. “Agentic AI has the ability to plan and sense and take action,” said Alvarez. “Instead of having something just watching systems, agentic AI can do the analysis, make the fix and report that it happened.”
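To make that sense-plan-act-report loop concrete, here is a deliberately simplified Python sketch. Every function in it (check_service_health, plan_remediation, apply_fix, report_outcome) is a hypothetical placeholder, not any particular vendor's agent framework or monitoring API.

```python
import time
from typing import Optional

def check_service_health() -> dict:
    """Sense: hypothetical probe; in practice this would call a monitoring API."""
    return {"service": "checkout-api", "status": "degraded", "error_rate": 0.12}

def plan_remediation(health: dict) -> Optional[str]:
    """Plan: a stand-in for the agent's analysis of what, if anything, to do."""
    if health["status"] == "degraded" and health["error_rate"] > 0.05:
        return "restart_service"
    return None

def apply_fix(action: str) -> None:
    """Act: placeholder for a call into an orchestration or ticketing system."""
    print(f"Applying fix: {action}")

def report_outcome(health: dict, action: Optional[str]) -> None:
    """Report: tell a human (or a log) what was observed and what was done."""
    print(f"Observed {health}; action taken: {action or 'none'}")

# The agent repeats the loop on an interval instead of waiting for a human.
for _ in range(3):
    health = check_service_health()
    action = plan_remediation(health)
    if action:
        apply_fix(action)
    report_outcome(health, action)
    time.sleep(1)
```

The point of the sketch is only the shape of the loop: the system is watched, analyzed and fixed without a person in the middle, and the person is told afterward.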
Looking to even more complex scenarios, agents could one day help upskill the workforce. For instance, a new employee who would normally shadow a human colleague could instead be guided by an AI co-worker.
“You can have an agent be that mentor, to help them climb the learning curve much faster,” said Alvarez.
He acknowledged that all this is simultaneously “cool and scary,” and that there is a fear of job loss. “But if the agent can actually teach me a new set of skills, I can move away from a job that’s going away to a job that’s needed,” he pointed out.
Systematically building trust in AI
Moving on to the next top trend, Alvarez noted: “There’s a whole new workforce out there, how do we govern it?”
This will give rise to AI governance platforms, which enable organizations to manage their AI systems’ legal, ethical and operational performance. New tools will create, manage and enforce policies to ensure that AI is transparent and used responsibly. These platforms can check for bias and provide information on how models were built, as well as the reasoning behind their prompts.
Eventually, Alvarez predicted, such tools will become part of the AI creation process itself to ensure that ethics and governance are built into models from the start.
“We can create trust through transparency,” he said. “If people lose trust in AI, they don’t use it.”
Not just one type of computing model
There are seven computing paradigms “on our doorstep right now,” Alvarez pointed out. These include CPUs, GPUs, edge, application-specific integrated circuits, neuromorphic systems, and quantum and optical computing.
“We’ve always had a mindset of moving from one to the other,” said Alvarez. “But we’ve never done a good job of making that move complete.”
But the hybrid computing models of the future will combine different compute, storage and network mechanisms, he noted. Orchestration software will move compute from one to the other depending on the task and the method most suited for the job.
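As a rough sketch of what that orchestration layer might do, the Python snippet below routes hypothetical workload types to hypothetical backends. The names and the lookup table are invented purely for illustration; a real orchestrator would also weigh cost, energy use, latency and availability.

```python
# Hypothetical mapping of workload types to the compute best suited for them.
WORKLOAD_BACKENDS = {
    "matrix_heavy_training": "gpu_cluster",
    "low_latency_inference": "edge_device",
    "combinatorial_optimization": "quantum_service",
    "general_business_logic": "cpu_fleet",
}

def route(workload_type: str) -> str:
    """Pick the compute backend best suited to a given workload type."""
    return WORKLOAD_BACKENDS.get(workload_type, "cpu_fleet")

for task in ["matrix_heavy_training", "low_latency_inference", "general_business_logic"]:
    print(f"{task} -> {route(task)}")
```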
“It’s going to be about how to get them to work together,” said Alvarez.
At the same time, newer, more specialized compute technologies will use significantly less energy, he pointed out. This is important, as there is increased pressure to reduce consumption and carbon footprints. But “at the same time, demand for IT computing capabilities is increasing at an incredible rate.”
Incremental improvements won’t be enough; enterprises need long-term solutions, he said. New technologies, such as green cloud providers or new, more efficient algorithms, could improve efficiency by factors in the thousands, or even the tens or hundreds of thousands.
Proactively addressing disinformation security
AI is allowing threat actors to spread disinformation faster — and more easily — than ever before. They can push out deepfakes and craft convincing phishing emails; exploit vulnerabilities in workforce collaboration tools; use malware to steal credentials; and initiate account takeovers (among other tactics).
This makes disinformation security critical; the emerging category seeks to assess authenticity, track the spread of harmful information and prevent impersonation. Elements include brand impersonation scanning, third-party content evaluation, claim and identity verification, phishing mitigation, account takeover prevention, monitoring of social media, mass media and the dark web, and detection of sentiment manipulation. Deepfake detection will also be able to identify synthetic media, Alvarez explained, and watermarking tools will help ensure that users are interacting with real people.
By 2028, Gartner predicts that half of all enterprises will begin adopting products, services or features specifically designed for disinformation security, up from less than 5% today.
“Disinformation security is not going to just be a single technology,” said Alvarez, “it will be a collection of technologies.”
Preparing security for the post-quantum world
Right now, the web runs on public key cryptography, or asymmetric encryption, which secures communication between two parties. This encryption is difficult to break because, with conventional computers, it would simply take too long, Alvarez explained.
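For readers less familiar with asymmetric encryption, here is a minimal illustration using the open-source Python cryptography library: a 2048-bit RSA key pair, where anyone holding the public key can encrypt a secret that only the private key holder can recover. The library and parameters are chosen only to illustrate the general scheme Alvarez describes; it is exactly this kind of math that a sufficiently powerful quantum computer running Shor's algorithm is expected to undermine.

```python
# A minimal sketch of asymmetric (public key) encryption with the Python
# "cryptography" package. Parameters are illustrative only.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Generate an RSA key pair; the public key can be shared freely, the private key cannot.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Anyone with the public key can encrypt...
ciphertext = public_key.encrypt(b"a session key or other secret", oaep)

# ...but only the private key holder can decrypt.
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == b"a session key or other secret"
```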
However, quantum is rapidly advancing. “There’s going to be a point where quantum computing is going to work and we’re able to break that encryption because it has the mathematical power to do that in real time,” said Alvarez.
Threat actors are already playing the waiting game: Many are harvesting encrypted data now and holding onto it until quantum decryption becomes practical. That won’t be long: Gartner predicts that by 2029, advances in quantum computing will make most conventional asymmetric cryptography unsafe.
“We believe it’s going to be as big as Y2K, if not bigger,” said Alvarez.
Organizations must start preparing for post-quantum cryptography now, he said, to ensure that their data remains resistant to decryption. Alvarez pointed out that switching cryptography methods is not easy and is “not a simple patch.”
A good place to start is established standards from the National Institute of Standards and Technology (NIST). Alvarez pointed out that the agency will be releasing the second version of its post-quantum cryptography guidelines in spring 2025.
“What do you do when all the locks are broken? You need new locks,” said Alvarez. “We want to make sure we’re updating our security before quantum becomes a reality.”
AI enhancing our brains
Reaching more into the sci-fi arena, Gartner anticipates a rise in the use of bidirectional brain-machine interfaces (BBMIs) that read and decode brain activity and enhance human cognitive abilities. These could be directly integrated into our brains or made possible via wearables such as glasses or headbands, Alvarez explained.
Gartner anticipates that, by 2030, 30% of knowledge workers will be using technologies such as BBMIs to stay relevant in the AI-powered workplace (up from less than 1% in 2024). Alvarez said he sees potential in human upskilling and next-generation marketing — for instance, brands will be able to know what consumers are thinking and feeling to gauge sentiment.
Alvarez ultimately compared it to the 2011 film “Limitless” or Apple TV+’s “Severance” (although, to be fair, neither of those portrays the technology in the most positive light). “It can reach into your brain and enhance function,” he said.