Going All-in For Really Responsible AI

Responsible AI means hallucination-free user experiences or it doesn’t mean anything at all.

The GenAI industry plays a Responsible AI shell game, a variant of the motte-and-bailey doctrine. The hand moves fast. Only two moves. Watch close—
  1. In response to questions about hallucinations and safety, AI vendors say they’re investing heavily in Responsible AI and trot out non-functional window dressing from non-engineers as proof. See, we told you we are good!
  2. While customers puzzle over the relevance of the non-functional window dressing to, say, their goal of further automating new drug regulatory filings, the vendor talks up a RAG-with-LLM app that hallucinates and is only aware of enterprise documents, ignoring enterprise databases.
The game as it’s played is eager talk about non-functional Responsible AI while shipping irresponsible, incomplete systems.

Responsible AI is neither AI nor Responsible

Responsible AI as it’s really practiced is mostly decorative. Let’s give the non-engineers who’re excited about AI something to chew on. Looking recently at Intel’s Responsible AI program (to take a random example), I couldn’t help but see two and only two things: first, all-babies-must-eat-type use cases; and, second, lots of non-functional requirements.
Then I thought about it more and, okay, diminishing the environmental impact from AI is a functional requirement and good business, too. And I like the other use cases! Who’s opposed to babies eating food?!
But the rest of it just seems odd and pretty silly from an engineering org as deep and important as Intel, of whose products I am a happy consumer.
Most Responsible AI in our industry is just non-AI at best. It’s non-engineering. Hell, it’s not even original safe systems thinking, a discipline that probably hit its intellectual peak in 1970s defense systems engineering. It’s badly rehashed policy-and-procedures, culturally updated (appropriated?) to fit the ideology of contemporary academia.

Stardog’s Point of View on Responsible AI

When I say in the title that Stardog is “all in” on Really Responsible AI, what am I saying? Before I answer that, let’s be clear that this isn’t a pivot. We are still us, still focused on our mission:
Anyone can ask any question about any data and get a timely, accurate, and trusted answer immediately, 100% free of hallucination.
In fact, it’s delivering on that mission that got us to this hot take on how Responsible AI isn’t.

First, Do No Harm!

One of the earliest statements of professional ethics, the Hippocratic Oath, is more responsible than most actually existing AI: “I will abstain from all intentional wrong-doing and harm.”
Hallucinations are a complex phenomenon, and they do a range of harms, including the not-harm of inspiring creativity by sparking the imagination. Poetry is full of fantastical, nonsensical, and intensely metaphoric language. I still read poetry as calisthenics of the imagination. But no one cites Wallace Stevens or Sharon Olds as relevant in a working group on regulatory filings for a new cancer drug.
Hallucinations in high-stakes use cases in regulated industries are pernicious errors: hard to correct and systemic. Because an LLM can be a fluent liar, its lies often go undetected, which is where they do the most harm.
A very well-understood alternative to RAG exists, namely Semantic Parsing (SP), and it remains the cornerstone of our approach. But since SP also has failure modes, how can I claim that it’s really Responsible AI and RAG-and-LLM isn’t?
Well, consider their failure modes in high-stakes use cases (a minimal code sketch follows the list):
  • When RAG-and-LLM fails, it hallucinates, and the harms there range from mild to extreme: someone’s going to jail or paying a big fine or something blows up that shouldn’t have.
  • When Semantic Parsing fails, one of two things happens:
    • The system says “I don’t know” and the user is left temporarily unenlightened but that was the status quo ante. No harm.
    • The system returns an answer to a question the user didn’t intend, but the answer is correct with respect to the data. Mild harm and easily repaired: just ask again!
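To make the contrast concrete, here is a deliberately tiny, hypothetical sketch of those two SP failure modes in Python. The toy “database,” the pattern-matching parser, and every name in it are invented for illustration; this isn’t Stardog’s implementation, just the shape of the argument in code.

```python
# Hypothetical sketch only: the toy data, parser, and names below are
# invented for illustration and are not Stardog's implementation.

from typing import Optional

# Toy enterprise data: revenue by region, in millions.
DATABASE = {"revenue_by_region": {"EMEA": 120, "APAC": 95, "AMER": 210}}

def semantic_parse(question: str) -> Optional[str]:
    """Map a natural-language question to a structured query key,
    or return None when no confident interpretation exists."""
    q = question.lower()
    if "revenue" in q and "region" in q:
        return "revenue_by_region"
    return None

def answer(question: str) -> str:
    query = semantic_parse(question)
    if query is None:
        # Failure mode 1: admit ignorance. The user is back at the
        # status quo ante; no hallucinated answer, no harm.
        return "I don't know."
    # Success, or failure mode 2: the parse may miss the user's intent,
    # but the result is still correct with respect to the data, and the
    # user can simply rephrase and ask again.
    return f"{query}: {DATABASE[query]}"

print(answer("What is revenue by region?"))             # grounded answer from the data
print(answer("Will our new drug filing pass review?"))  # "I don't know."
```

The point isn’t the toy parser; it’s that the worst case is silence or a repairable misreading, never a fluent fabrication.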
SP is one of the big ideas in Stardog Voicebox, which we hear from users, customers, and analysts alike is the most powerful and mature knowledge graph question answering product available. It’s also the safest and most comprehensive; and, hence, most responsible.

Then Continue Not Doing Harm!

Applying the mere basics of what your doctor swears to do when she gets her medical license, Really Responsible AI should be hallucination-free and complete with respect to all the data relevant to a use case, not just some of it.
Our customers and the viability of our industry demand no less.

Voicebox talks for you and your data talks back! Grow your business with faster time to insight.

Stardog Voicebox is a fast, accurate AI Data Assistant that's 100% hallucination-free guaranteed.
