Harness me, daddy

AI hype-words, Deloitte cheating on its homework, and stuff we already knew about Sam Altman

Welcome to the newsletter! I'm still figuring out the format, and this is my first stab at it. If you need to tell me why I am wrong, you can find me on Mastodon, or we can meet behind the Home Depot.

In this issue:

- Harness me, daddy
- Deloitte is cheating on its homework
- The Sam Altman we already know
- Strays
Harness me, daddy

There's a new hype-word in AI World, and it is "harness."

It sounds cool! "Agent harness." It's evocative of work, muscles, oxen. You put a harness on a thing that is powerful and can do a lot with the right guidance. Here we are talking about Python libraries like LangChain or coding tools like Claude Code or Cursor. They can spawn and manage dozens of subprocesses, all working together iteratively to accomplish a task. The argument is that now that frontier models have plateaued, whoever comes up with the best agent harness might win the market!

Except this all sounds familiar. Because isn't a harness just... an API wrapper? A fancy one, for sure, with a while True loop that can spawn other while True loops, but we already did this discourse back in 2022. Take away the model and it does nothing. They are calling them "harnesses" now because "wrapper" sounds derogatory.
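If you strip away the branding, the whole thing fits in a dozen lines. Here's a minimal sketch of what a "harness" actually is; `call_model` and `run_tool` are hypothetical stand-ins for whatever LLM API and tool plumbing you happen to use:

```python
# A minimal "agent harness": a loop around a model call that feeds
# tool results back in. Take away the model and it does nothing.
def run_agent(task: str, call_model, run_tool, max_steps: int = 10) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):  # the dressed-up `while True` loop
        reply = call_model(history)
        if reply.get("tool_call"):
            # "spawning a subprocess" == running a tool and feeding
            # the result back into the next model call
            result = run_tool(reply["tool_call"])
            history.append({"role": "tool", "content": result})
        else:
            return reply["content"]  # the model says it is done
    return "gave up"
```

That's the wrapper. Everything else is error handling and marketing.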

Hype-words like this have been a feature of the AI boom since the beginning. The jargon signals sophistication to the in-group and danger/newness to the out-group. Probably the best example is "P(doom)", the (completely made up) probability that a malign superintelligence will enslave us for eternity. Other good early ones are "AGI" (artificial general intelligence), "hallucination" (when the generated text is factually wrong), and "prompt engineer" (someone who writes prompts for an LLM).

New hype-words tend to emerge when old hype fades and there is a need for new hype to keep the hype-train going. So, when it became clear that "prompt engineering" wouldn't work because LLMs have inherent problems with accuracy, the industry switched to hyping things like "RAG" to fix the problem. And when model performance started to plateau and it became clear that brute-forcing a bigger model wasn't going to produce AGI, the hype-men switched to talking about "agents" and "agentic AI."

Now it turns out "agents" aren't any kind of magic bullet because it is very difficult (some might say impossible!) to get a deterministic result by simply adding more iterations to a stochastic process. So here we are, in 2026, talking about "harnesses."

If you are doing any kind of development with LLMs as a component, you do, in fact, need elaborate scaffolding for checking and rechecking the inputs and outputs for safety and correctness, as LLMs tend to fabricate facts, ignore instructions, naively execute malicious prompts, and generally do things they are not supposed to do. Perhaps this is different from an API wrapper, since deterministic APIs do what they are told.
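In practice, most of that scaffolding reduces to validate-and-retry. A sketch of the idea, with `call_model` and `is_valid` as hypothetical placeholders for your API client and your own correctness check:

```python
# Validate-and-retry scaffolding: don't trust one sample from a
# stochastic process; check the output and ask again if it fails.
def checked_generation(prompt: str, call_model, is_valid, retries: int = 3):
    for attempt in range(retries):
        output = call_model(prompt)
        if is_valid(output):  # e.g. parses as JSON, cites a real source
            return output
        prompt = f"{prompt}\n\nThe previous answer was rejected. Try again."
    raise RuntimeError("model never produced a valid answer")
```

Note that more retries only raise the odds of a valid answer; they never guarantee one, which is the whole problem with "agents" in the first place.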

In that case, I'm still not sure "harness" is the right metaphor, because while you can constrain a model, you never really control it, even when you let it loose to do a task. Perhaps "cage."



Deloitte is cheating on its homework

Early on, there was speculation that generative AI would be particularly useful for consulting firms, as bullshitting is central to their business processes, and now it does indeed seem like they are using ChatGPT to cheat on their homework. In this case, it was Deloitte charging millions of dollars for white papers it generated (at least partially) with an LLM. Mo comments on the news here and waxes quite eloquent on what AI can do in the workplace and what it cannot. Really worth a watch.



The Sam Altman we already know

I read that giant long profile of Sam Altman in the New Yorker so you don't have to, and honestly, if you've been following Sam Altman over the years, it's not that interesting. It confirms what we already know about Sam Altman, which is that he's a liar and a bullshitter. He says whatever he thinks he has to say to the person in front of him to get what he wants.

He's extremely good at it! He leaves himself extensive flexibility to deny and obfuscate what any normal person can see with their own eyes, but that's kind of the point: he's not lying to normal people, he's lying to rich people, powerful people, and elite media people, and there's a code among such people to essentially accept lies and bullshit as long as it comes from a rich person.

Altman also joins a long tradition in tech companies of lying to investors about the capabilities of products and services until the engineers can make them work. The article acknowledges as much:

Steve Jobs, one of his idols, was said to project a “reality-distortion field”—an unassailable confidence that the world would conform to his vision. But even Jobs never told his customers that if they didn’t buy his brand of MP3 player everyone they loved would die.

No, but Steve Jobs did famously rig the demo of the original iPhone, cheat Steve Wozniak, and behave like a megalomaniacal jerk to everyone around him, including his family. Hell of an idol!

Apple under Steve Jobs actually made good products, so everyone forgave him. But plenty of Silicon Valley people have tried to bend reality to their will without making good products, and it ended in tears: Sam Bankman-Fried, Adam Neumann, and Elizabeth Holmes come to mind.

In the end, I found myself wondering who this article is for, since, as I said, it doesn't produce any new insights if you already follow Sam Altman, and it's far too long and involved to serve as an introduction for someone who doesn't.

I think, in the end, it's a letter to elites written by their own people (Ronan Farrow, Andrew Marantz), in their preferred format (a New Yorker piece), and in language they understand (elaborate throat-clearing that avoids a clear conclusion).

It is intended as a warning as OpenAI prepares its IPO: This guy might be a fraud.



Strays


Subscribe to Endnotes

Sign up for free to get every issue in your e-mail.