
Beyond LLMs: How SandboxAQ’s large quantitative models could optimize enterprise AI




While large language models (LLMs) and generative AI have dominated enterprise AI conversations over the past year, there are other ways that enterprises can benefit from AI.

One alternative is large quantitative models (LQMs). These models are trained to optimize for specific objectives and parameters relevant to the industry or application, such as material properties or financial risk metrics. This is in contrast to the more general language understanding and generation tasks of LLMs. Among the leading advocates and commercial vendors of LQMs is SandboxAQ, which today announced it has raised $300 million in a new funding round. The company was originally part of Alphabet and was spun out as a separate business in 2022.

The funding is a testament to the company’s success, and more importantly, to its future growth prospects as it looks to solve enterprise AI use cases. SandboxAQ has established partnerships with major consulting firms including Accenture, Deloitte and EY to distribute its enterprise solutions. The key advantage of LQMs is their ability to tackle complex, domain-specific problems in industries where the underlying physics and quantitative relationships are critical.

“It’s all about core product creation at the companies that use our AI,” SandboxAQ CEO Jack Hidary told VentureBeat. “And so if you want to create a drug, a diagnostic, a new material or you want to do risk management at a big bank, that’s where quantitative models shine.”

Why LQMs matter for enterprise AI

LQMs have different goals and work differently than LLMs. Unlike LLMs, which are trained on internet-sourced text data, LQMs generate their own training data from mathematical equations and physical principles. The goal is to tackle the quantitative challenges an enterprise might face.


“We generate data and get data from quantitative sources,” Hidary explained.

This approach enables breakthroughs in areas where traditional methods have stalled. For instance, in battery development, where lithium-ion technology has dominated for 45 years, LQMs can simulate millions of possible chemical combinations without physical prototyping.

Similarly, in pharmaceutical development, where traditional approaches face a high failure rate in clinical trials, LQMs can analyze molecular structures and interactions at the electron level. In financial services, meanwhile, LQMs address limitations of traditional modeling approaches.

“Monte Carlo simulation is not sufficient anymore to handle the complexity of structured instruments,” said Hidary.

A Monte Carlo simulation is a classic computational technique that uses repeated random sampling to estimate outcomes. With the SandboxAQ LQM approach, a financial services firm can scale its risk analysis in a way that Monte Carlo simulation cannot. Hidary noted that some financial portfolios can be exceedingly complex, with all manner of structured instruments and options.

“If I have a portfolio and I want to know what the tail risk is given changes in this portfolio,” said Hidary. “What I’d like to do is I’d like to create 300 to 500 million versions of that portfolio with slight changes to it, and then I want to look at the tail risk.”
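To make the tail-risk idea concrete, here is a minimal Python sketch of the general scenario-generation approach Hidary describes. It is not SandboxAQ's method; the portfolio weights, return assumptions and scenario count are all hypothetical, and tail risk is estimated as 99% value-at-risk and expected shortfall over many slightly perturbed versions of a toy portfolio.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical base portfolio: position weights, expected returns and volatilities.
weights = np.array([0.40, 0.35, 0.25])          # e.g. equities, credit, structured notes
mean_returns = np.array([0.06, 0.04, 0.08])     # assumed annualized expected returns
volatilities = np.array([0.15, 0.08, 0.25])     # assumed annualized volatilities

n_scenarios = 500_000  # stand-in for the "hundreds of millions of portfolio versions" idea

# Perturb the portfolio slightly in each scenario, then draw a random return outcome.
weight_noise = rng.normal(0.0, 0.02, size=(n_scenarios, 3))
scenario_weights = weights + weight_noise
scenario_weights /= scenario_weights.sum(axis=1, keepdims=True)  # re-normalize to 100%

scenario_returns = rng.normal(mean_returns, volatilities, size=(n_scenarios, 3))
portfolio_returns = (scenario_weights * scenario_returns).sum(axis=1)

# Tail risk: 99% value-at-risk and expected shortfall of the loss distribution.
losses = -portfolio_returns
var_99 = np.percentile(losses, 99)
expected_shortfall = losses[losses >= var_99].mean()

print(f"99% VaR: {var_99:.3f}, expected shortfall: {expected_shortfall:.3f}")
```

The point of the sketch is the shape of the computation, not the numbers: generating and re-pricing hundreds of millions of portfolio variants is where, per Hidary, plain Monte Carlo sampling stops scaling.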

How SandboxAQ is using LQMs to improve cybersecurity

SandboxAQ’s LQM technology is focused on enabling enterprises to create new products, materials and solutions, rather than just optimizing existing processes.

Among the enterprise verticals in which the company has been innovating is cybersecurity. In 2023, the company first released its Sandwich cryptography management technology. That has since been further expanded with the company’s AQtive Guard enterprise solution. 


The software can analyze an enterprise’s files, applications and network traffic to identify the encryption algorithms being used. This includes detecting the use of outdated or broken encryption algorithms like MD5 and SHA-1. SandboxAQ feeds this information into a management model that can alert the chief information security officer (CISO) and compliance teams about potential vulnerabilities.
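As a rough illustration of what this kind of cryptographic discovery can look like in principle, the following Python sketch scans source and configuration files for references to deprecated hash algorithms such as MD5 and SHA-1. It is not AQtive Guard's implementation; the file extensions and the deny-list are illustrative assumptions.

```python
import re
from pathlib import Path

# Hypothetical deny-list of weak or broken primitives to flag.
WEAK_PATTERNS = {
    "MD5": re.compile(r"\bmd5\b", re.IGNORECASE),
    "SHA-1": re.compile(r"\bsha-?1\b", re.IGNORECASE),
}

def scan_for_weak_crypto(root: str) -> list[tuple[str, int, str]]:
    """Walk a directory tree and report (file, line number, algorithm) hits."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in {".py", ".java", ".go", ".conf"}:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for name, pattern in WEAK_PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, name))
    return findings

if __name__ == "__main__":
    for file, lineno, algo in scan_for_weak_crypto("."):
        print(f"{file}:{lineno}: uses deprecated algorithm {algo}")
```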

While an LLM could be used for the same purpose, the LQM provides a different approach. LLMs are trained on broad, unstructured internet data, which can include information about encryption algorithms and vulnerabilities. In contrast, SandboxAQ’s LQMs are built using targeted, quantitative data about encryption algorithms, their properties and known vulnerabilities. The LQMs use this structured data to build models and knowledge graphs specifically for encryption analysis, rather than relying on general language understanding.
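To show what "structured data about encryption algorithms" might mean in practice, here is a toy, hand-built knowledge graph in Python. The entries, fields and alerting rule are illustrative assumptions, not SandboxAQ's data model.

```python
# Toy knowledge graph: each algorithm node carries properties and known issues.
crypto_graph = {
    "MD5":     {"type": "hash", "broken": True,  "issues": ["collision attacks"]},
    "SHA-1":   {"type": "hash", "broken": True,  "issues": ["SHAttered collision (2017)"]},
    "SHA-256": {"type": "hash", "broken": False, "issues": []},
    "RSA-2048": {"type": "asymmetric", "broken": False,
                 "issues": ["vulnerable to future quantum attacks"]},
}

def flag_for_ciso(graph: dict) -> list[tuple[str, list[str]]]:
    """Return algorithms that should trigger an alert, with the reasons."""
    return [(name, node["issues"]) for name, node in graph.items()
            if node["broken"] or node["issues"]]

for name, issues in flag_for_ciso(crypto_graph):
    print(f"ALERT: {name} -> {', '.join(issues)}")
```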

Looking forward, SandboxAQ is also working on a remediation module that can automatically suggest and implement updates to the encryption in use.

Quantum dimensions without a quantum computer or transformers

The original idea behind SandboxAQ was to combine AI techniques with quantum computing.

Hidary and his team realized early on that real quantum computers would not be easy to come by or powerful enough in the short term. Instead, SandboxAQ applies quantum principles implemented on enhanced GPU infrastructure. Through a partnership, SandboxAQ has extended Nvidia’s CUDA capabilities to handle quantum techniques.

SandboxAQ also isn’t using transformers, which are the basis of nearly all LLMs.

“The models that we train are neural network models and knowledge graphs, but they’re not transformers,” said Hidary. “You can generate from equations, but you can also have quantitative data coming from sensors or other kinds of sources and networks.”
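To illustrate the idea of generating training data from equations rather than scraping text, here is a hypothetical Python sketch (not the company's models): it samples a closed-form physics formula, a damped oscillator, over randomized parameters to produce exact, labeled quantitative data that a small neural network could then be trained on.

```python
import numpy as np

rng = np.random.default_rng(0)

def damped_oscillator(t, amplitude, damping, frequency):
    """Closed-form solution x(t) = A * exp(-d*t) * cos(w*t)."""
    return amplitude * np.exp(-damping * t) * np.cos(frequency * t)

# Sample the equation over randomized physical parameters to build a dataset.
n_samples = 10_000
t = rng.uniform(0.0, 10.0, n_samples)
amplitude = rng.uniform(0.5, 2.0, n_samples)
damping = rng.uniform(0.05, 0.5, n_samples)
frequency = rng.uniform(0.5, 3.0, n_samples)

X = np.column_stack([t, amplitude, damping, frequency])  # model inputs
y = damped_oscillator(t, amplitude, damping, frequency)  # exact labels from the equation

print(X.shape, y.shape)  # (10000, 4) (10000,) -- ready for a regression model
```

The same pattern extends to sensor readings or other quantitative sources: the labels come from physics or measurement, not from text scraped off the internet.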


While LQMs are different from LLMs, Hidary doesn’t see it as an either-or situation for enterprises.

“Use LLMs for what they’re good at, then bring in LQMs for what they’re good at,” he said.


