Generative AI uses machine learning to produce diverse forms of content, such as text, images, or audio, based on natural language prompts. Its widespread adoption began with the emergence of ChatGPT, an OpenAI initiative, which sparked a multitude of innovative applications across various industries.
If you haven’t explored ChatGPT for yourself, I suggest asking it some questions: what it knows about you or someone famous, for example, or whether it can explain how something works. Its answers aren’t always correct, so treat them with caution, but it is an eye-opening experience simply because it is so new.
What this platform has done is provide a proof of concept for Generative AI. It has given us a glimpse into what is possible with AI and how it could usher in a new paradigm for how we work. For us in the F5 Office of the CTO, it has sparked some interesting exploration into the potential of this technology for app delivery and security, and how it could accelerate the rise of AIOps.
Lori MacVittie is a Principal Technical Evangelist at the Office of the CTO at F5.
The shift from imperative to declarative to generative
One of the challenges in IT infrastructure is configuring the myriad devices, services, and systems needed to deliver and secure even a single application. Businesses rely on an average of 23 different app services, and that’s before counting the ‘as a service’ offerings.
I don’t need to tell you that configuring a web app and API protection service differs from configuring a plain old load balancing service. This means the people in charge of configuring and operating app services may need to be experts in a dozen different configuration languages.
For many years, the industry has tried to address this challenge. When APIs became the primary way of configuring everything, app delivery and security services were no exception. Everyone relied on imperative APIs, which essentially changed only how commands were issued: rather than typing commands on a CLI, you sent API commands via HTTP.
Soon enough, it became apparent that the API tax incurred by relying on imperative APIs was too expensive, and the industry moved to declarative APIs. Unfortunately, many businesses took declarative to mean “configuration as JSON”. Instead of embracing the intent behind declarative APIs, “tell me what you want to do, and I’ll do it for you”, the approach became, “here’s the configuration I want, go do the hard work of applying it.”
While it wasn’t exactly the same, it still required a certain level of expertise in the specific operating model of each solution. The industry never reached consensus on whether load balancers use “pools” or “farms”, never mind the more intricate details of how virtual servers interact with real servers and application instances. Essentially, declarative APIs offloaded command-level work from operators to the system.
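The difference between the two styles can be sketched in a few lines of Python. This is a hypothetical illustration: the endpoint paths and field names below are invented for the sketch, since every load balancer vendor has its own schema.

```python
# Hypothetical sketch contrasting imperative and declarative API styles.
# Paths and field names are invented for illustration, not a real product API.

def imperative_calls(app, members):
    """Imperative style: the operator spells out each command, in order."""
    calls = [("POST", "/api/pools", {"name": f"{app}-pool"})]
    for host in members:
        calls.append(("POST", f"/api/pools/{app}-pool/members", {"host": host}))
    calls.append(("POST", "/api/virtuals", {"name": app, "pool": f"{app}-pool"}))
    return calls

def declarative_config(app, members):
    """Declarative style: describe the desired end state in one document."""
    return {
        "app": app,
        "virtual_server": {"pool": f"{app}-pool"},
        "pool": {"name": f"{app}-pool", "members": members},
    }
```

In the imperative version the operator owns the ordering and the command-level detail; in the declarative version that work is handed to the system, which is exactly the offload described above.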
Today, Generative AI brings forth a form of low code/no code. Generated code and configurations tend to be more reliable than free-form answers because they’re based on well-formed specifications that constrain the output. There are only a finite number of ways to write “hello world”, after all, while there are many ways to respond to a question.
With this in mind, I should be able to instruct a trained model, “hey, I want to configure my load balancer to scale App A” and the system should be able to generate a configuration. Moreover, I should be able to tell it, “give me a script to do X on system Y using Z” and BAM! Not only should it generate the configuration, but also the automation necessary to deploy it to the appropriate system.
Oh look. This already happens…
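As a hedged sketch of the kind of Python script such a prompt tends to produce: the management IP, credentials, and endpoint path below are placeholders I’ve invented, not a specific product’s API.

```python
# Sketch of AI-generated-style code. 203.0.113.10 and admin/admin are
# placeholder values, and the API path is illustrative, not a real product API.
import json
import urllib.request

LB_HOST = "203.0.113.10"          # placeholder management IP
CREDENTIALS = ("admin", "admin")  # placeholder credentials

def build_scale_payload(app, instances):
    """Desired state: scale the pool backing `app` to `instances` members."""
    return {"pool": f"{app}-pool", "min_members": instances, "autoscale": True}

def apply_config(payload):
    """POST the desired state to the load balancer's management endpoint."""
    req = urllib.request.Request(
        f"https://{LB_HOST}/api/config",   # illustrative endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # would fail here: the host is a placeholder

# apply_config(build_scale_payload("AppA", 4))  # would POST to the placeholder host
```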
Certainly, this is not production-ready code: the IP and credentials are not valid, and it selected Python (which was not my first, second, or even third choice). But it’s 90% of the way there, based on documentation in the public domain and a remarkably simple prompt. It goes without saying that the more comprehensive the prompt, the better the results.
Again, it is not ready for deployment, but it is significantly closer to being functional, it took only fifteen seconds to generate, and it required no training from me. That moves us from generation to automation. And this is the easy stuff. I should also be able to instruct it, “Oh, by the way, deploy it,” and the technology should do so while I’m enjoying my morning coffee. It might even sing me a little song too, if I ask.
But it doesn’t end there! What if I also want to tell a Generative AI system later, “Hey, users in Green Bay are logging in a lot and performance is down, clone App A and move it to our site in Milwaukee.”
And it does. Because if we look beneath the hood, all of this is just a network of APIs, configurations, and commands that can be, and often are, automated by scripts today. Those scripts are often parameterized, and those parameters loosely correlate to the parameters in my AI prompt: Green Bay, Milwaukee, App A. What changes is the generator, and the speed with which the work can be generated.
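That correlation can be sketched directly: the prompt’s parameters become the arguments of a parameterized runbook. The API routes below are hypothetical, invented purely to show the mapping.

```python
# Hypothetical sketch: a parameterized "clone and move" runbook. The prompt's
# parameters (app, source site, target site) become function arguments; the
# API routes are invented for illustration.

def clone_and_move(app, source_site, target_site):
    """Return the ordered API steps a script (or an AI) would execute."""
    return [
        ("GET", f"/sites/{source_site}/apps/{app}/config"),         # read current config
        ("POST", f"/sites/{target_site}/apps", {"clone_of": app}),  # clone the app
        ("PATCH", f"/dns/steering/{app}", {"prefer": target_site}), # steer nearby users
    ]

steps = clone_and_move("AppA", "green-bay", "milwaukee")
```

Whether the arguments come from a human filling in a script or from a Generative AI parsing a sentence, the steps executed underneath are the same.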
I often say that AI and automation are force multipliers. Technology doesn’t know what needs to be done; we do. But AI and automation can do it much faster and more efficiently by automating tasks through a web of APIs, configurations, and commands. Here, AI can effectively amplify productivity, shortening time to value and freeing up experts’ time to focus on strategic decisions and projects while the AI learns from them. And over time, the AI can further multiply our capacity, exposing new possibilities.
This is no longer science fiction but computer science reality.
Generative AI will enable tomorrow’s AIOps
Many of today’s AIOps solutions rely heavily on pre-existing configurations and provide only the insights that 98% of organizations are missing. It’s key to remember that they answer yesterday’s problems, not tomorrow’s needs.
In the realm of AIOps platforms, those with a higher level of autonomy, such as security services, are the ones most dependent on pre-existing configurations and well-formed responses. The industry doesn’t typically use AI to enable operations to execute more autonomously across the heterogeneous app delivery and security layers. Instead, AI is employed for data analysis and for uncovering insights that surpass human capabilities and time constraints. But that’s where it often ends, at least for layers above the network and for well-understood security problems.
This is where Generative AI can step in, and it is precisely why I’m fully committed to exploring the vast potential of this technology in simplifying app delivery and security processes. Consider this the forefront of the AI revolution.