Just about a year ago, I wrote about 2023 being the year of LLMs. Models like Llama 2, Claude and Cohere emerged as substantial challengers to OpenAI, fueling innovation across the board, though not without speed bumps along the way. After such an explosive 2023, what lies ahead for AI in 2024?
The new year will be one in which we see incredibly advanced AI applied in a range of new and creative ways, and it will undoubtedly drive tremendous progress across industries.
But there are also clear warning signs that AI will be used by bad actors. So, while the exact future remains unclear, one thing is certain: The advances made in AI in 2024 will have major implications for how we do work — and more importantly, how we live our lives.
Copilot AI takes the stage: The age of agents
We’ve seen this coming for some time, but as I wrote after the recent OpenAI DevDay event, AI development has increasingly been focused on AI agents. These smart, highly adapted tools are already beginning to make an impact in industry after industry, but what we have seen to date is nothing compared to what’s to come.
The ReAct paper published earlier this year showed how LLMs could effectively learn how to use tools and spurred a lot of research in this direction. Companies like OpenAI and Anthropic have spent the year tuning their models to work better with this technique (OpenAI’s Function Calling, Anthropic’s Claude XML support, for example), and other institutions have trained specialized LLMs for this purpose (Berkeley’s Gorilla LLM). And developments in open-source libraries, like Langchain and Rivet, have made it much easier to apply these techniques.
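To make the ReAct pattern concrete, here is a minimal sketch of the loop it describes: the model alternates between reasoning, choosing an action (a tool call), and reading the observation the tool returns. The tool names, the prompt format, and the stubbed "LLM" below are illustrative assumptions standing in for a real model call such as OpenAI's function calling; this is not any vendor's actual API.

```python
# Minimal sketch of a ReAct-style agent loop. A stub stands in for a
# real LLM call; the tools and transcript format are hypothetical.
import re

# Hypothetical tools the agent can invoke by name.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
    "lookup": lambda term: {"LLM": "large language model"}.get(term, "unknown"),
}

def stub_llm(transcript: str) -> str:
    """Stand-in for a real model: picks an Action, then a final Answer."""
    if "Observation:" not in transcript:
        return "Thought: I should compute this.\nAction: calculator[2 + 3]"
    obs = transcript.rsplit("Observation: ", 1)[1].strip()
    return f"Answer: the result is {obs}"

def react_agent(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        reply = stub_llm(transcript)
        if "Answer:" in reply:
            return reply.split("Answer:", 1)[1].strip()
        # Parse "Action: tool[input]" and run the named tool.
        match = re.search(r"Action: (\w+)\[(.*)\]", reply)
        if not match:
            break
        tool, arg = match.group(1), match.group(2)
        observation = TOOLS[tool](arg)
        transcript += f"\n{reply}\nObservation: {observation}"
    return "no answer"

print(react_agent("What is 2 + 3?"))  # → the result is 5
```

In a real agent, `stub_llm` is replaced by an API call to a tool-tuned model, and the parsing step is handled for you by structured outputs like OpenAI's function calling or Claude's XML tags.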
Now easier and more affordable to develop than ever, AI agents will become ubiquitous. They act as force multipliers on human ingenuity and resourcefulness while connecting deeply into the data that matters most to the user and company. I believe we will look back at 2024 as the dawn of the “age of agents,” the beginning of a fundamentally new direction in how we address needs through software and interact with technology.
Smart, interactive collaboration will no longer be a ‘nice to have’
The other side of the shift into intelligent agents will be a massive change in user and customer expectations. Simply put, customers will begin demanding a new level of responsiveness and interaction from their technology. Users will stop thinking of it as “something we use” and start thinking of it as “something we collaborate with.”
User expectations change any time there is a major shift in technology and user interface (UI). When Apple released the first iPhone, people began expecting more intuitive controls on any mobile device. When cloud apps aimed at consumers became popular, enterprise users began to demand the same simplicity and ease of use from their work tools.
As more of the population gets accustomed to AI tools, particularly AI assistants, they will want that same level of intelligent response in the rest of their work and personal lives. These agents aren't simply making an application a little bit better or a little bit easier to use; they are adding entirely new capabilities, allowing users to do new things and accomplish far more.
Assistants like Microsoft Copilot and Google Duet can draft documents, summarize emails, create a presentation or do other creative and analytical work. As agents like this become more prevalent, companies that lack them are likely to alienate their customers.
Breaking through the vision barrier
ChatGPT’s ability to understand and express natural human language was the breakthrough feature that attracted users and developers. But the next advance, AI vision, could be even more significant and impactful. The major breakthrough is that LLMs can now train not only on text data but on visual data, making them multimodal. OpenAI’s GPT-4 was the first prominent example; Google’s Gemini is also multimodal, and I’m sure many will follow suit in the very near future.
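For a sense of what "multimodal" means in practice, here is a sketch of how a single chat request can interleave text and an image, following the message shape used by OpenAI's vision-capable chat API. No network call is made; the model name and image URL are placeholder assumptions.

```python
# Build (but do not send) a multimodal chat request: one user message
# containing both a text part and an image part. Shapes follow OpenAI's
# vision-capable chat format; model name and URL are placeholders.
import json

def build_vision_request(question: str, image_url: str) -> dict:
    return {
        "model": "gpt-4-vision-preview",  # placeholder model name
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
        "max_tokens": 300,
    }

payload = build_vision_request(
    "What object is shown in this image?",
    "https://example.com/photo.jpg",
)
print(json.dumps(payload, indent=2))
```

The key point is that the image is just another content part in the conversation, so the same agent loop that handles text can reason over what it "sees."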
Words are powerful, but images and illustrations can communicate information and sentiment in a much more concentrated way. The spatial representation of ideas is an incredibly powerful tool for communicating complex concepts simply.
Already, we are seeing the development of wearable devices that promise to assist us in our day-to-day life. For example, they can provide background information on people we interact with, visual cues connected to our work or real-time suggestions for completing a task.
Where will the innovation go? And how fast? It’s hard to tell, but being able to interpret images and videos and react instantly to physical changes in the environment adds an incredibly important dimension to how an intelligent AI agent could aid a human user.
AI-powered manipulation reaches crisis levels
Imagine receiving a link from a friend over email. The link takes you to a busy social network group where you see dozens of users, view their profile photos and read their messages and comments to each other. As you’re on the site, someone starts a new text chat with you. It feels so real.
And it could all be fake! We have always, as human beings, lived with the possibility of misinformation, and one of our biggest weapons to combat it has been social proof. “If others trust this, then it must be trustworthy” is no longer an effective principle, for the simple reason that we cannot be sure who is real and who is an AI bot.
Never in human history has the technology to influence and manipulate people at scale been so capable and so widely available. Already, AI has made it nearly impossible to distinguish “real” social interactions and content from the machine-produced. Images and even videos can easily be generated to show just about anything.
And this doesn’t have to be the work of sophisticated hacker farms or nation-states; the technology is now within reach of virtually anyone. The coming year could be the one in which the consequences of AI-powered manipulation take hold, from automated blackmail and fraud to the spread of conspiracy theories.
Over the next year, AI will bring many incredible things into the world, but it will also challenge us in new ways. I believe in human society’s ability to harness the good in this technology and adapt to the risks it brings. That adaptation process may feel bumpy next year, but I know we will get there.
Cai GoGwilt is cofounder and CTO of Ironclad.