
Bridging the Confidence Gap in Generative AI


By Debanjan Saha

Generative artificial intelligence (AI) could radically transform the business landscape, opening new doors to innovation in product design, customer service, medical discoveries, and much more.

But the widespread implementation of generative AI remains in a holding pattern. Companies of all sizes have jumped in early, planning a host of generative AI pilot projects, but many have not achieved scale—or even gone into production yet.

What is preventing organizations from embracing such a powerful new technology to its full potential? Generative AI prompts big questions that don’t have quick or easy answers.

It all comes down to confidence. Issues surrounding the safety, transparency, and accuracy of generative AI challenge the confidence of CIOs, CDOs, and other technology leaders. But bridging the confidence gap is possible, and the steps leaders can take to make generative AI solutions a reality are clear and attainable.

Future-Proof Your AI Strategy      

With generative AI advancing rapidly, it’s easy to understand leaders’ anxieties about future-proofing their investments for long-term gains.

The threat of incurring “technical debt” is real. Selecting the wrong component parts, fitting them together incorrectly, or not anticipating future developments can each increase the risk of making obsolescent investments that result in upstream disruption and downstream chaos.

Even at organizations that employ top data scientists, teams are already seeing their prototyping pipelines break as tooling is updated or newer versions of open-source frameworks for building applications on large language models (LLMs) emerge.

Unfortunately, there is no insurance policy to protect against technical debt. It’s critical that technology leaders recognize this reality and embed openness and flexibility at the core of their AI strategy to avoid being locked into any given LLM or ecosystem.

Despite the host of questions, generative AI is set to become many organizations’ most valuable tool—as long as they keep complete control over it.

Gartner predicts that organizations will abandon custom-built tooling, particularly proprietary LLMs, because of the cost of maintaining it. This makes sense: for complex technologies, buying is usually far more cost-effective than building.

In addition, being able to swap out component pieces, such as LLMs or vector databases, without breaking production pipelines will also be key to future-proofing an AI strategy.
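One way to make components swappable is to have the pipeline depend on a narrow interface rather than a specific vendor SDK. The sketch below is a minimal illustration of that idea; the names (`CompletionBackend`, `EchoBackend`, `answer_question`) are hypothetical and not from any particular framework.

```python
from typing import Protocol

class CompletionBackend(Protocol):
    """Anything that can turn a prompt into text."""
    def complete(self, prompt: str) -> str: ...

class EchoBackend:
    """Stand-in backend so the sketch runs without any vendor SDK.
    In practice this slot could hold a hosted API or an open-source model."""
    def complete(self, prompt: str) -> str:
        return f"[echo] {prompt}"

def answer_question(backend: CompletionBackend, question: str) -> str:
    # The pipeline depends only on the interface, so swapping the
    # underlying LLM does not require changing production code.
    return backend.complete(f"Answer concisely: {question}")

print(answer_question(EchoBackend(), "What is a vector database?"))
```

Replacing one LLM with another then means writing one new adapter class, not rewriting the pipeline.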

Safe Data, Reliable Results

Data is the lifeblood of nearly every business. Organizations need to ensure they don't expose sensitive data as training content for an LLM, or, worse, inadvertently share confidential information with competitors or bad actors. Two real-life examples: using open-source AI to review a classified database's source code for errors, and feeding a recording of a confidential internal meeting to an AI tool to produce a written summary.

Such actions permanently add proprietary corporate information to an LLM’s training data and can expose sensitive data to public access, resulting in damaging and potentially catastrophic effects on a business’s bottom line.

Technology leaders must safeguard their corporate data to ensure open-source AI services can’t access anything stored in their databases or models.

One way to assess the security of an LLM is to examine how it's hosted. If an LLM is trained on internal data and then exposed to outsiders via, say, a customer-support chatbot, bad actors can craft queries that extract that data.

Transparency and Oversight

Technology leaders must be confident they have full visibility, monitoring, and governance over all generative AI work.

The excitement surrounding generative AI is palpable, contagious, and undoubtedly warranted. It’s not surprising so many data scientists are diving headfirst into creating and testing it.

Unfortunately, haste often leads to chaos in organizations with multiple teams prototyping multiple products autonomously, potentially creating challenges that can be difficult to untangle. One executive I spoke to found 50 vector databases under their purview—with no knowledge of who created them or what they contained.

While technology leaders want to support the fervor and passion that leads to innovation, maintaining visibility is critical. What are teams creating? Which teams are creating it? Which data are they using to train projects? Where are these generative AI assets housed?

Without clear answers, confidence will remain out of reach. It’s critical for technology leaders to create a single repository and production environment for all generative AI within their organizations to achieve the appropriate level of monitoring and transparency.
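A central repository can be as simple as a registry that records, for every generative AI asset, the answers to the questions above: what it is, which team owns it, what data trained it, and where it lives. The sketch below is an illustrative assumption, not a real product schema; all field and function names are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class GenAIAsset:
    """One entry in a hypothetical central registry of generative AI work."""
    name: str
    owning_team: str
    training_data: list[str]   # datasets used to train or ground the asset
    location: str              # where the asset is housed

registry: dict[str, GenAIAsset] = {}

def register(asset: GenAIAsset) -> None:
    # Refuse silent duplicates: the "50 unknown vector databases"
    # problem starts with unrecorded, untracked assets.
    if asset.name in registry:
        raise ValueError(f"{asset.name} is already registered")
    registry[asset.name] = asset

register(GenAIAsset(
    name="support-chatbot-v1",
    owning_team="customer-success",
    training_data=["public-docs", "faq-archive"],
    location="s3://internal-bucket/genai/support-chatbot-v1",
))

# "Which teams are creating what?" becomes a query, not an investigation.
print({a.name: a.owning_team for a in registry.values()})
```

Even this much metadata turns an audit from an archaeology project into a lookup.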

Correctness Equals Confidence

Building a generative AI chatbot isn’t terribly difficult. Making sure it’s providing the right information is a lot harder.

This is something that companies just have to get right. Getting it wrong risks undermining the company’s brand and reputation, and it can happen in a flash.

While it's impossible to ensure generative AI will never make mistakes, companies can protect themselves and their customers by attaching a "confidence score" to each response, signaling which parts of the output may need closer attention and careful review.

This is especially important when using generative AI for work assistance, such as a medical group using generative AI as a first-line response to a patient email or a seller using AI to help reply to technical questions.
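As a hedged sketch of how such a score might gate first-line responses: one common heuristic is the average token log-probability many LLM APIs report, converted to a 0-to-1 score, with low-scoring answers routed to a human before they reach a patient or customer. The log-probability values below are hard-coded stand-ins for what a model would return; the threshold and function names are assumptions.

```python
import math

def confidence_score(token_logprobs: list[float]) -> float:
    """Geometric-mean token probability, in (0, 1]."""
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

def route(answer: str, token_logprobs: list[float], threshold: float = 0.8) -> str:
    score = confidence_score(token_logprobs)
    if score < threshold:
        # Low confidence: flag for human review instead of sending directly.
        return f"NEEDS REVIEW (score={score:.2f}): {answer}"
    return f"AUTO-SEND (score={score:.2f}): {answer}"

# Stand-in logprobs: the first answer is high-confidence, the second is not.
print(route("Our return window is 30 days.", [-0.05, -0.02, -0.10, -0.03]))
print(route("The dosage may vary...", [-0.9, -1.4, -0.7, -1.1]))
```

The exact scoring method matters less than the workflow: every response carries a score, and the score decides whether a person looks at it first.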

Closing the Confidence Gap    

Whether technology leaders know it or not, building a generative AI strategy needs to be a priority for nearly every business.

This means looking at the AI effort holistically and considering all the questions and potential answers.

This will help technology leaders move beyond prototyping and ensure they provide something of value their organizations can use safely, cost-effectively, and at scale.

None of us are sure what the next generation of generative AI will look like. But what we do know is that advancements will only continue. It’s critical to maintain a flexible environment and holistic view to confidently manage AI workflows and operations.

To profit from the AI revolution, companies must approach the big questions around generative AI and find the confidence to shift from experimentation to tangible, real-world transformation.

Learn how DataRobot can deliver value with AI for your organization at

Debanjan Saha is the CEO of DataRobot.
