5 min read

How to De-risk your AI Strategy

It's never been more important to build an adaptable AI strategy

In February, when OpenAI services were temporarily blocked in Italy, we warned that:

if interruption or discontinuance of services wasn’t already on your risk register, it better be now. It’s time to start thinking about:

  • what your backup plan is if a particular vendor is no longer available
  • what your overall AI strategy entails, e.g., building your own LLM or managing regulatory risk

This weekend’s dramatic events at OpenAI have certainly made these comments more relevant. In light of this, let’s revisit four key AI risk management strategies that we’ve previously highlighted:

Design for change.

The most effective way to prepare for change is to expect it. When selecting software, whether from a vendor or as part of internal development, consider whether it’s designed to support swapping out one model for another. This applies to all software, but as the pace of change is so rapid in AI, it’s especially true for software that relies on LLMs.

While the focus today is on OpenAI’s counterparty risk, it’s also worth noting that change may be driven by market forces, such as a superior model from a competitor like Google or Meta. If you need to change models to keep up with your peers, you’ll want to be able to do so quickly and easily.

It’s no accident that we designed Kelvin to be modular, supporting the use of different components across the entire stack - especially LLMs. This modularity has a cost, as it requires us to support multiple interfaces and implementations, but it also provides customers with flexibility and the peace of mind that no one component will become a single point of failure.

We support dozens of models, including:

| Model   | Provider                 | Model(s)                                                        | Cloud/Local               |
|---------|--------------------------|-----------------------------------------------------------------|---------------------------|
| GPT     | OpenAI, Microsoft Azure  | GPT-3.5 (16K); GPT-4 (8K, 32K, 128K)                            | Cloud                     |
| PaLM2   | Google                   | text-bison (8K, 32K); chat-bison (8K, 32K)                      | Cloud                     |
| Claude  | Anthropic                | claude-1; claude-2                                              | Cloud                     |
| Llama2  | Meta                     | llama-2 (7B, 13B, 70B); variants via transformers and ggml      | Local; Cloud (e.g., Azure)|
| Mistral | Mistral                  | Mistral 7B; variants via transformers                           | Local                     |
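To make "swapping out one model for another" concrete, here is a minimal Python sketch of a provider-agnostic interface. It is illustrative only - the class names, the local adapter, and the configuration shape are all hypothetical, not Kelvin's actual API - and the OpenAI adapter assumes the public chat completions REST endpoint as it exists today.

```python
# Illustrative only: a minimal provider-agnostic interface, not Kelvin's actual API.
from abc import ABC, abstractmethod


class ChatModel(ABC):
    """Everything the application needs from an LLM: a prompt in, text out."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class OpenAIChatModel(ChatModel):
    """Adapter for an OpenAI-style chat completions endpoint (hosted or Azure)."""

    def __init__(self, model: str, api_key: str):
        self.model = model
        self.api_key = api_key

    def complete(self, prompt: str) -> str:
        import requests  # assumes the public chat completions REST API
        resp = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": f"Bearer {self.api_key}"},
            json={"model": self.model,
                  "messages": [{"role": "user", "content": prompt}]},
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]


class LocalLlamaModel(ChatModel):
    """Adapter for a locally hosted Llama2/Mistral model (e.g., via transformers)."""

    def __init__(self, model_path: str):
        self.model_path = model_path

    def complete(self, prompt: str) -> str:
        # The real local inference call would go here; a placeholder keeps this
        # sketch runnable without downloading any weights.
        return f"[local completion from {self.model_path}]"


def build_model(config: dict) -> ChatModel:
    """Swapping vendors becomes a configuration change, not a rewrite."""
    if config["provider"] == "openai":
        return OpenAIChatModel(config["model"], config["api_key"])
    return LocalLlamaModel(config["model"])
```

With application code written against the interface rather than a vendor SDK, moving from GPT-4 to a local Llama2 variant is a change to configuration, not to the codebase.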

Have a backup plan - or maybe a better plan.

This is a corollary to the first point, but it’s worth emphasizing. Regardless of whether your primary plan is to use GPT-4 or an open source model like Llama2, it’s critical that you have a backup plan. As with any risk management strategy, business continuity is the goal, and the only real way to obtain it is to build in redundancy.

In addition to the obvious benefits of redundancy, there are often cost savings to be had. For example, in testing alternatives to GPT-4, many organizations have found that they can achieve similar quality with less expensive vendors or models. This is especially true for organizations that handle a wide range of tasks and documents, as different models may be better suited to different kinds of work.
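As a sketch of what that redundancy can look like in practice, the following builds on the hypothetical ChatModel interface above: try the primary model, and fall back to the next one on any failure. Again, this is illustrative, not a description of how Kelvin implements failover.

```python
# Illustrative sketch: redundancy as a fallback chain over the hypothetical
# ChatModel adapters from the previous example.
class FallbackChatModel(ChatModel):
    """Tries each configured model in order until one succeeds."""

    def __init__(self, models: list[ChatModel]):
        self.models = models

    def complete(self, prompt: str) -> str:
        last_error = None
        for model in self.models:
            try:
                return model.complete(prompt)
            except Exception as exc:  # outage, rate limit, deprecated model, etc.
                last_error = exc
        raise RuntimeError("All configured models failed") from last_error


# Primary plan: GPT-4 in the cloud. Backup plan: a local Llama2 you control.
resilient = FallbackChatModel([
    OpenAIChatModel("gpt-4", api_key="..."),
    LocalLlamaModel("/models/llama-2-13b"),
])
```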

Avoid embedding lock-in.

Some choices are easy to change, like the tool used to extract text from a PDF. Other choices - like document embeddings in a large DMS - are much less so. One of the most painful lessons organizations learn is that selecting a closed embedding model to support their RAG workflows effectively locks them into that model for the life of their system.

Imagine a world where OpenAI’s models are no longer available. Just kidding; you don’t have to, since they’ve already deprecated a number of prior models that users relied on.

Now, imagine that you’ve spent millions of dollars indexing your entire document management system with a model like text-embedding-ada-002. What do you do when that model is no longer available? How do you compare a new document to the millions of documents that you’ve already indexed?

The answer is that you can’t. You’re stuck with the model that you’ve selected, and if it’s no longer available, your only choice is to start over and pay to re-index your entire system. For some organizations, this may be a small price to pay, but for others, it may be a material and painful cost.

The best insurance policy is to avoid embedding lock-in in the first place. This is why we’ve not only designed Kelvin to support open embeddings, but also chosen to provide customers with traditional escrow and source code access to the Kelvin software and our proprietary models. Even if we were to go out of business, our customers would still be able to use our embedding models to support their existing workflows.
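One way to keep that flexibility, sketched below in hypothetical form, is to record which embedding model produced every stored vector and refuse to mix spaces at query time; pairing this with an open model you can always re-run means the check can always be satisfied. This is a design illustration, not Kelvin's actual schema.

```python
# Illustrative sketch: make the embedding model an explicit, recorded dependency
# rather than an invisible assumption baked into millions of stored vectors.
from dataclasses import dataclass, field


@dataclass
class EmbeddingIndex:
    model_id: str                               # e.g., an open model you can re-run locally
    rows: list = field(default_factory=list)    # (doc_id, vector) pairs

    def add(self, doc_id: str, vector: list) -> None:
        self.rows.append((doc_id, vector))

    def search(self, query_vector: list, query_model_id: str) -> list:
        # Vectors from different embedding models live in different spaces;
        # comparing them is meaningless. Fail loudly instead of degrading silently.
        if query_model_id != self.model_id:
            raise ValueError(
                f"Index was built with {self.model_id!r}; queries must use the "
                f"same model, or the corpus must be re-embedded."
            )
        # ... similarity search over self.rows would go here ...
        return []
```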

Own your own.

In the long run, we believe that the nature of competition in the industry will change as LLMs become more widely available. Not only will all firms have access to the same models through vendors like Thomson Reuters or LEXIS, but their clients will be able to generate the same work product in-house.

Relationships will always be important, but we expect that firms will increasingly compete on the basis of their private processes and knowledge. In such a world, it is critical that firms have the ability to build and maintain their own LLMs and LLM-driven workflows.

While we expected that this dynamic would play out over longer horizons, this weekend’s events have made it clear that it’s never too early to start planning for a future in which you own your own LLMs. In addition to the competitive benefits, this also provides a hedge against the risk of vendor discontinuance.

We are uniquely situated in the market to support organizations that decide to start on this journey, as we provide not just the software to build and maintain LLM workflows, but also the training data and consulting expertise to help you get started.

If you’d like to learn more about how we can help you de-risk your AI strategy, don’t be shy: say hello@273ventures.com.



Jillian Bommarito, CPA, CIPP/US/E

Jillian is a Co-Founding Partner at 273 Ventures, where she helps ensure that Kelvin is developed and implemented in a way that is secure and compliant.

Jillian is a Certified Public Accountant and a Certified Information Privacy Professional with specializations in the United States and Europe. She has over 15 years of experience in the legal and accounting industries.

Would you like to learn more about risk management for AI-enabled legal tools? Send your questions to Jillian by email or on LinkedIn.
