Distributional is a testing and validation platform built specifically to handle the complexity of AI-based software. We believe this is a critical differentiator, as traditional testing frameworks are not built to support AI-based software, which leaves businesses without a way to ensure these products are safe and will behave as expected.
Validation of AI-Based Software is Incomplete
Testing is a critical component of software development today; entire teams are responsible for writing tests that ensure software behaves as expected. Traditional testing is deterministic: for any given input, you know what output the software should produce. If "A," then "B."
What if the underlying software leverages generative AI? If you’ve used ChatGPT, you’ve almost certainly experienced different responses to the same question. AI software is non-deterministic; the same input can lead to a huge number of potential outputs. How can we ensure that these outputs are accurate and acceptable before we ship the software into the world?
We believe AI product teams do not have a reliable, standardized, and streamlined way to answer this question today: existing tools were built for traditional software testing, not for AI software that is unpredictable and constantly changing. Deterministic testing tools compare observed behavior to expected behavior, but for generative AI, defining "expected behavior" may not even be possible. We think most teams testing AI software today can at best rely on qualitative validation or basic summary statistics during model development, and many companies are simply ignoring this gap in pre-production testing, only "testing" by monitoring once the software is live.
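To make the contrast concrete, here is a minimal sketch (not Distributional's actual method) of the difference between a deterministic assertion and a distributional check. The `generate` function below is a hypothetical stand-in for a generative model, simulated with a random number generator; the point is that when no single expected output exists, a test can instead sample many outputs and assert on properties of their distribution.

```python
import random
import statistics

# Deterministic software: one input maps to one output,
# so a single equality assertion is a complete test.
def add(a, b):
    return a + b

assert add(2, 2) == 4  # "if A, then B"

# Hypothetical stand-in for a generative model: the same prompt
# can yield many different outputs (simulated here with a seeded RNG).
def generate(prompt, rng):
    return "word " * rng.randint(5, 15)

# No single "expected output" exists, so sample repeatedly and
# assert on the distribution of a measurable property
# (here, response length in words).
rng = random.Random(0)
lengths = [len(generate("summarize this", rng).split()) for _ in range(200)]
mean_len = statistics.mean(lengths)

assert 5 <= min(lengths) and max(lengths) <= 15  # hard bounds hold
assert 8 <= mean_len <= 12  # distributional bound, not exact equality
```

The last two assertions illustrate the shift: instead of comparing one observed output to one expected output, the test constrains the statistical behavior of many outputs.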
Many businesses are facing pressure to deploy AI products, yet surveys show that testing has not kept pace, and the safety, trustworthiness, and stability of this software are not yet up to par. In our view, companies are scrambling for makeshift solutions, such as giving LLMs standardized exams like the LSAT to gauge their quality. The White House has also noted the problem, issuing an executive order that specifically calls out the need to "develop standards, tools, and tests to ensure AI systems are safe, secure, and trustworthy." We have seen multiple signals that AI testing today is insufficient and risky, and that a secure, reliable solution is needed, and needed fast.
We were excited to connect with Scott Clark, a veteran entrepreneur and AI expert who had also been thinking about this paradigm shift in software testing and the factors limiting broad AI adoption. With a team of AI researchers and platform engineers experienced in testing AI systems at Bloomberg, Google, Intel, Meta, SigOpt, Slack, Stripe, and Uber, he started Distributional, a new testing platform built to make AI software safe, robust, and secure. Distributional is collaborating with several design partners to create a user-friendly testing platform that helps teams building AI products easily understand and manage risks before going live. Its vision is a platform that supports all AI model types across industries such as finance, tech, and energy; its mission is to enable teams to identify and fix problems before they impact customers.
We believe Distributional’s executive team has the right expertise to execute on this vision. They understand both the technical complexity of model pipelines and the distinct pain points of engineers who are trying to build with insufficient tooling. We are thrilled to collaborate with a technical team with a bold vision to address both a practical and impactful problem.
Why This is Important
GenAI has the potential to be transformative. However, in our view, enterprises will not be able to adopt this technology effectively until they can be assured that the systems they are building are trustworthy and do not cause more harm than good. We believe Distributional is the missing enabling piece. While several alternatives have come to market, we feel that none of them addresses the challenges of in-development and continuous post-production testing of AI products. We are excited to back Distributional's vision of a testing platform that enables teams to catch and address issues, ushering in a future where reliable, secure, and trustworthy AI is a reality.