To understand the real problem, you need to understand how these tools work—not the buzzwords, but the bones. So let’s drop the tech-speak and talk about AI like it’s a person in a library.
Imagine AI as a very smart, very fast reader who’s been dropped into a massive library. This library isn’t sorted the way you’d expect: not by topic, or author, or title. Instead, every word in every book is indexed against the words around it. That’s how the AI learns: not by memorizing facts, but by absorbing the statistical patterns in how words follow one another.
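If you’re curious what “patterns between words” looks like in miniature, here’s a toy sketch in plain Python. It is a drastic simplification, not how a real AI model is built, and the tiny `library` of sentences is invented for illustration. But the principle it shows is the real one: count which words tend to follow which, then predict from those counts.

```python
from collections import defaultdict, Counter

# A tiny "library": the only text this toy model ever sees.
library = [
    "the court granted the motion",
    "the court denied the motion",
    "the court granted the appeal",
]

# "Training": count which word follows which, across every book.
patterns = defaultdict(Counter)
for book in library:
    words = book.split()
    for current, following in zip(words, words[1:]):
        patterns[current][following] += 1

def predict_next(word):
    """Suggest the most common follower of `word` in the library."""
    followers = patterns.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("court"))    # "granted" (seen twice, vs. "denied" once)
print(predict_next("justice"))  # None: that word was never on the shelves
```

Notice two things that scale up to the real tools: the model never stores the sentences themselves, only the patterns, and it literally has nothing to say about words it never saw.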
Now, here’s where it gets important: the people who build these AI tools are the ones who stock the shelves. They decide what books go in, which sections the AI can read, and what kind of patterns the AI should focus on. They can even write and insert new books of their own if they want to steer the AI in a certain direction.
This is called “training the model,” and it’s a big deal, because training defines what the AI can and can’t do. It determines the voice the AI writes in, how it handles things like nuance and opinion, and whether it’s more likely to give you a legally sound answer or a creative word salad.
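To make the shelf-stocking point concrete, here’s a hypothetical sketch of that curation step. Every name in it (`library`, `banned_topics`, `min_quality`) is invented for illustration; real training pipelines are far more elaborate. The point is simply that before any pattern-learning happens, someone decides which books make the cut.

```python
# Hypothetical curation step: the builders decide what the model reads.
library = [
    {"title": "Contract Law Basics",   "topic": "law",     "quality": 0.9},
    {"title": "Forum Rant Collection", "topic": "opinion", "quality": 0.2},
    {"title": "SEO Keyword Stuffing",  "topic": "spam",    "quality": 0.1},
]

banned_topics = {"spam"}
min_quality = 0.5

# Only books that pass these rules ever reach the shelves,
# so only their word patterns can shape the model's answers.
training_shelf = [
    book for book in library
    if book["topic"] not in banned_topics and book["quality"] >= min_quality
]

print([book["title"] for book in training_shelf])  # ['Contract Law Basics']
```

Every threshold and topic rule in that filter is a human judgment call, and the finished tool inherits all of them.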
So when you’re using one of these tools to write your firm’s content, you’re not just “using AI”—you’re putting your trust in the library, the librarian, and the training rules they were given.
And that leads us straight into the first big issue.