In a perfect world, companies would vet the security and compliance of every third-party vendor they use. Sales wouldn’t close until these reviews were complete. The trouble is, security reviews require a massive investment of time and labor.
Questionnaires — the main way companies vet vendors — contain hundreds of questions, covering everything from privacy policies to physical datacenter security. And they can take vendors days, even weeks, to complete.
Attempting to streamline the process, Chas Ballew founded Conveyor, a startup building a platform that uses large language models (LLMs) along the lines of OpenAI’s ChatGPT to generate responses to security questions in the original questionnaire format.
Conveyor today announced that it raised $12.5 million in a Series A funding round led by Cervin Ventures, bringing its total raised to $19 million. The proceeds will be put toward expanding Conveyor’s sales and marketing efforts, R&D and 15-person workforce, Ballew said.
“Security reviews are still largely an old-fashioned process,” Ballew told TechCrunch in an email interview. “Most companies are still using manual work to respond to these questions, and there’s a first generation of software-as-a-service products that just match previous answers to spreadsheets and requests for proposals. They still require a lot of manual work. Conveyor … automates the security review response process.”
Ballew is a second-time founder, having co-launched Aptible, a platform-as-a-service for automating security compliance, in 2013. Conveyor began as an experimental product inside Aptible. But Ballew saw an opportunity to build Conveyor into its own business, which he began doing in 2021.
Conveyor offers two complementary products. The first is a self-service portal that allows companies to share security documents and compliance FAQs with sales prospects and clients. The second is a question-answering AI, powered by LLMs from OpenAI and others, that can understand the structure of security questionnaires — including questionnaires in spreadsheets and online portals — and fill them out automatically.
Drawing on vendor-specific knowledge databases, Conveyor supplies “human-like” answers to natural language questions in questionnaires such as “Do your employees undergo mandatory training on data protection and security?” and “Where is customer data stored, and how is it segregated?” Customers can upload a questionnaire and export the finished version to the original file format, optionally syncing customer engagement data with Salesforce.
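Conveyor hasn’t disclosed the internals of its pipeline, but the workflow Ballew describes maps onto a well-known pattern: retrieval-augmented generation, where statements a vendor’s security team has already approved are retrieved from a knowledge base and an LLM drafts a response grounded in them. The sketch below is purely illustrative, assuming a hypothetical `knowledge_base` list and OpenAI’s public Python SDK; it is not Conveyor’s actual implementation.

```python
# Illustrative sketch only -- not Conveyor's actual pipeline. It shows the
# general retrieval-augmented pattern: embed a vendor's approved security
# statements, find those most similar to an incoming question, and ask an
# LLM to draft an answer grounded in that context.
import math
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical vendor knowledge base: statements the security team has
# already signed off on.
knowledge_base = [
    "All employees complete annual security and data-protection training.",
    "Customer data is stored in AWS us-east-1 and logically segregated per tenant.",
    "We run quarterly penetration tests; we do not operate a bug bounty program.",
]

def embed(texts: list[str]) -> list[list[float]]:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in resp.data]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def answer(question: str, top_k: int = 2) -> str:
    # Rank approved statements by similarity to the question.
    doc_vecs = embed(knowledge_base)
    q_vec = embed([question])[0]
    ranked = sorted(zip(knowledge_base, doc_vecs),
                    key=lambda p: cosine(q_vec, p[1]), reverse=True)
    context = "\n".join(doc for doc, _ in ranked[:top_k])
    # Draft an answer constrained to the retrieved context.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer the security questionnaire "
             "question using ONLY the context. If the context is insufficient, say so."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("Do your employees undergo mandatory training on data protection and security?"))
```

Grounding the model in pre-approved statements, rather than letting it answer from its training data, is what would keep the output tied to facts a security team has actually vouched for.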
“For example, if a customer asks ‘Do you have a bug bounty program?,’ and the company doesn’t, but they do other types of security testing, a good answer would be ‘No, but we do regular penetration testing, code reviews, etc,’” Ballew said. “That’s very tough to replicate with AI, but something that Conveyor’s software is excellent at.”
Conveyor is one of several companies attempting to automate security reviews using LLMs.
Another is Vendict, which leverages a combination of in-house and third-party LLMs to fill out security questionnaires on behalf of companies. Cybersecurity vendor Plurilock has experimented with using ChatGPT to answer questionnaires. Elsewhere, there’s Scrut, which recently released a tool, Kai, to generate security questionnaire answers, and Y Combinator-backed Inventive.
This reporter wonders whether the new crop of AI-powered answering machines, Conveyor included, violates the spirit of security reviews, which (in theory, at least) are meant to source answers from employees across a vendor’s IT and security teams. Can security questionnaires filled out by Conveyor, like cover letters written by ChatGPT, really hit all the right beats and touch on every required point?
Ballew asserts that Conveyor isn’t cutting corners. Rather, he says, it’s taking the various data points about a vendor’s security — data points contributed by relevant stakeholders — and rearranging them, padded by prose, in a questionnaire-friendly format.
“Each prospective customer asks the same kinds of questions, but in slightly different formats and phrasings,” Ballew said. “These reviews are manual drudgery.”
But can LLMs answer these questionnaires more reliably than humans, especially given the stakes involved with security reviews? It’s no secret that even the best-performing LLMs can go off the rails or fail in other unexpected ways. For example, ChatGPT frequently fails at summarizing articles, sometimes missing key points or outright inventing content that wasn’t in the articles.
I wonder how Conveyor might handle a question that isn’t relevant to a vendor. Would it skip over the question as a human would, or attempt an answer and get it wrong? What about questions dense with regulatory jargon? Would Conveyor understand them, or be led astray?
If Conveyor isn’t confident in one of its responses to a security question, it flags the response for human review, Ballew said. But just how Conveyor’s platform distinguishes between a high-confidence versus a low-confidence answer isn’t clear; Ballew didn’t go into detail.
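Since Ballew didn’t elaborate, here is one common way such a gate could work, sketched under stated assumptions: treat the strength of the best retrieval match as a confidence proxy and route weak matches to a human reviewer. The threshold and routing logic below are hypothetical, not Conveyor’s.

```python
# Hypothetical confidence gate -- Conveyor's actual heuristic is not public.
# One common proxy: if the best retrieval match for a question is weak,
# route the drafted answer to a human reviewer instead of auto-filling it.
REVIEW_THRESHOLD = 0.75  # assumed cutoff; in practice tuned per deployment

def route_answer(question: str, best_similarity: float, draft: str) -> dict:
    """Return the draft plus a flag telling the UI whether a human must review it."""
    needs_review = best_similarity < REVIEW_THRESHOLD
    return {
        "question": question,
        "draft_answer": draft,
        "needs_human_review": needs_review,
    }

# Example: a weak match (0.41) gets flagged; a strong one (0.92) auto-fills.
print(route_answer("Do you hold a PCI DSS attestation?", 0.41, "Context insufficient."))
print(route_answer("Where is customer data stored?", 0.92, "AWS us-east-1, segregated per tenant."))
```

In practice, such cutoffs would be tuned empirically against a sample of human-reviewed questionnaires; the model could also be asked to self-report uncertainty, though self-reported confidence is notoriously unreliable.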
Ballew tried to make the case that Conveyor’s growing customer base — over 100 companies, which have used Conveyor to fill out more than 20,000 security questionnaires — is a sign that the tech is delivering on its promise.
“What sets us apart from everyone else is the accuracy and quality of our AI,” Ballew said. “More accurate outputs mean less time correcting and editing … We’ve built a modular technology system with guardrails and quality assurance to improve accuracy and eliminate errors.”
Ballew envisions a future where evaluating a vendor “is as easy as it is today to tap your phone at checkout to pay for your groceries,” as he puts it. Me, I’m not so sure — not with today’s LLMs. But maybe, just maybe, security questionnaires are narrow enough in scope and subject matter to mitigate the worst of LLMs’ tendencies. We’ll have to wait and see whether that’s the case.