AI Design Needs More Systems Thinking

LLMs don’t matter in a vacuum; they matter as components in larger systems. Let’s stop neglecting this crucial fact.

The biggest flaw in AI design right now is the painful lack of systems thinking. Or to put it the way a philosopher might: AI design thinking is rife with the part-whole fallacy.
I’m not a systems thinker per se, and I’m too lazy to google a definition, so I’ll just offer one I wrote myself:
📖
Systems thinking is a design perspective that treats the inherent or emergent properties of a whole system, rather than the properties of its parts, as the determining factor in the system’s utility.

Two Motivating Examples

What I mean will become clear from two examples:
  1. Every LLM hallucinates a lot. But if you build a system that includes an LLM yet never shows its users the LLM’s raw output—as some RAG systems do—then you can build a hallucination-free AI system (sketched below). See Safety RAG for more details.
  2. Recently Simon Willison—who’s super productive and does great work—wrote a piece to reassure people that using LLMs is safe with respect to data leakage because, after all, LLMs are stateless: your inputs to and outputs from an LLM aren’t stored in the LLM itself. Which is 💯.
#2 fails the systems thinking test: the LLM-using systems you actually interact with store every byte of input and output, because of course they do! Whether that data is used in the LLM immediately, later, or (frankly) not at all is irrelevant to whether using such a system is a data leakage. Of course it is!
(To Simon’s credit, he considers the issue of whether OpenAI uses your inputs later.)
Now I’m personally unbothered by this leakage, but that doesn’t mean it’s not a leakage. It’s just a leakage to OpenAI the corporation, not a leakage to OpenAI’s LLM.
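To make the statelessness point concrete, here’s a minimal sketch of a chat loop. Everything in it is hypothetical—`call_llm` stands in for any provider’s completion API—but the shape is real: because the model remembers nothing, the client resends the entire transcript on every turn, and whatever the provider’s serving system logs on its side is entirely beyond this code’s control.

```python
# Minimal sketch: `call_llm` is a hypothetical stand-in for any
# provider's chat-completion API, canned here so the loop runs.
def call_llm(messages: list[dict]) -> str:
    # In reality this is an HTTPS request to the provider, whose
    # servers can store every message we send, no matter what the
    # stateless model itself "remembers" (nothing).
    return f"(canned reply to: {messages[-1]['content']!r})"

def chat() -> None:
    history: list[dict] = []  # lives on the client, not in the LLM
    while True:
        user_turn = input("you> ")
        history.append({"role": "user", "content": user_turn})
        # Statelessness in action: the WHOLE transcript goes out each turn.
        reply = call_llm(history)
        history.append({"role": "assistant", "content": reply})
        print("llm>", reply)

if __name__ == "__main__":
    chat()
```

The systems-thinking takeaway: nothing in the API’s statelessness prevents the system serving it from storing `history` byte for byte.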

LLM Systems, not LLMs

The common theme in both examples is that LLMs matter, but the system using the LLM matters more.
In the first example, systems thinking lets us use LLMs to help people achieve their laudable goals. In the second, neglecting it yields unintentionally misleading reassurance.
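To make the first pattern concrete, here’s a minimal sketch of a system where the LLM’s raw text never reaches the user: the LLM only proposes which vetted query to run, the system validates the proposal, and the user sees only records retrieved from the database. All names here are hypothetical illustrations of the general shape, not Safety RAG’s actual design.

```python
# Minimal sketch of "hallucination-free by construction": the LLM's
# raw output is used only as a structured-query proposal that gets
# validated and executed; users only ever see database results.
# All names here are hypothetical, not Safety RAG's API.

ALLOWED_QUERIES = {
    # question intent -> vetted, parameter-free query
    "total_revenue": "SELECT SUM(revenue) FROM sales",
    "customer_count": "SELECT COUNT(*) FROM customers",
}

def llm_classify_intent(question: str) -> str:
    # Hypothetical LLM call mapping a question to an intent label;
    # canned here so the sketch runs without a provider.
    return "total_revenue"

def run_query(sql: str) -> list[tuple]:
    # Hypothetical database stand-in.
    return [("$1.2M",)]

def answer(question: str) -> str:
    intent = llm_classify_intent(question)
    # Gate: if the LLM proposes anything outside the vetted set,
    # refuse rather than pass its raw text along.
    if intent not in ALLOWED_QUERIES:
        return "Sorry, I can't answer that from the data."
    rows = run_query(ALLOWED_QUERIES[intent])
    # The user sees database rows, never the LLM's raw output.
    return f"Answer: {rows[0][0]}"

print(answer("What was total revenue?"))
```

Whatever the model hallucinates, it can only change which vetted query runs (or trigger a refusal); it can never put made-up text in front of the user.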

Credit Where It’s Due

I owe this post to an offhand chat with my friend Ted Kwartler. But that doesn’t mean Ted endorses this post or that he doesn’t. Ted speaks for Ted.
 
