Bubble treats

AI bubble treats are going away, boring conversation topics, and my review of Ghost in the Machine

Hello, and welcome to my newsletter! Please take a seat. The coolest thing that happened this week is that I finally got my branding and logos and whatnot all finished. I like it! If you like it, send your congratulations over to Andy Carolan, who did a fantastic job.

Thanks for reading, please contact me over on Mastodon if we need to take it outside.

OK, here's what's up this week:


This time is different (really)

[permalink]

Screen cap from the Bubble Boy episode of Seinfeld. White lettering over two working-class-looking guys: "what kind of a person would hurt the bubble boy?!"

A thing I don't think normal people understand about the AI bubble is that the cheap and free stuff we have access to because rich people are dumping a trillion dollars into the nascent technology will not last. One day, most of it will simply be gone, and we will never see it again.

That little genie you can chat with for free, the little box you can use to generate an image, that nifty tool you can use to cheat on your homework? Gone. Either gone gone, or behind-a-paywall gone.

This is not how the dot com bubble worked. Sure, tons of stuff just shut down when it popped, but we still had Google, we still had Yahoo!, we still had our e-mail accounts and our RSS feeds, and thousands of websites. Things paused, but the network remained. We could continue to have our little treats even when no one was making much money.

But this time is different. The cost is different. Early dot com technologies like websites and e-mail are incredibly efficient. The cost to serve a website doesn't meaningfully increase whether you are serving it to 100 users or 100,000 users. The marginal cost of each new user is essentially zero, so as you grow your user base, you can make enough from advertising or from a handful of paying users to cover many, many free users.

That's not how generative AI works. With generative AI, each request is incredibly computationally expensive and requires the server to do billions of little math problems, which takes a lot of electricity and produces a lot of heat that has to be dissipated. The marginal cost of a new user never goes down, meaning that 100,000 users is going to cost 1,000x more than your first 100 users. Until you find a way for users to pay for what they are using, you are losing a ton of money.

šŸ™„
OK, you do get some efficiencies with caching, economies of scale, and concurrency, so the costs don't scale exactly linearly, but I think the point still stands.
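The difference in economics can be sketched with a toy model. The numbers below are made up for illustration, not real figures, but they show the shape of the problem: a website's cost is dominated by a flat hosting fee, while generative AI inference adds a per-request compute cost that never amortizes away.

```python
# Toy model of serving economics. All dollar figures are invented
# for illustration; only the scaling behavior matters.

def monthly_cost_static(users, fixed_hosting=50.0):
    """Serving a website: cost is roughly a flat hosting fee,
    regardless of how many users show up."""
    return fixed_hosting

def monthly_cost_genai(users, requests_per_user=100,
                       cost_per_request=0.01, fixed_hosting=50.0):
    """Generative AI: every request burns GPU time, so cost
    scales with total usage and never flattens out."""
    return fixed_hosting + users * requests_per_user * cost_per_request

for users in (100, 100_000):
    print(f"{users:>7} users: static ${monthly_cost_static(users):,.0f}, "
          f"genai ${monthly_cost_genai(users):,.0f}")
```

Under these made-up parameters, going from 100 users to 100,000 leaves the website's bill unchanged but multiplies the generative AI bill by roughly 1,000x, which is the whole point: free users are nearly free for a website and very much not free for an LLM.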

So forget the dot com comparison. A better comparison is the 2010-2022 "Millennial Lifestyle Subsidy." Remember $10 Uber rides to the airport? Free meal kits? DoorDashing a burger and fries for a couple bucks? Those e-scooters that were piled up on every street corner?

All those treats are gone now because they were expensive, and once the VC money ran out, they had to go away. This wasn't free e-mail, it was costly goods and services being provided at a loss to gain market share. Either the company became successful enough that it could then raise prices (Uber), or it got swallowed by a bigger fish (Jump) or shut down (Munchery).

And now the same thing is happening with the AI bubble. Everywhere you look, startups are scooching up prices, metering usage, and tapering free stuff.

GitHub Copilot, owned by Microsoft, announced it is axing its monthly flat-rate plans and replacing them with "pay for what you use" pricing. Anthropic implemented a system of rate limits for its subscription users. Character AI, which provides fantasy chat services, now caps free users. And OpenAI, the gorilla in the room with 900 million weekly active users, shut down its video generation app Sora.

Who knows how long OpenAI, Google, Meta, and Anthropic can afford to offer free generative AI chat to whatever random person wanders onto their website. These companies have a lot of money. But don't assume $20/mo. all-you-can-eat ChatGPT is going to continue to exist forever. Especially as the big AI shops pivot to focusing on enterprise customers to raise real revenue, the treats for the general public are on the chopping block.

[back to top]


Keep it to yourself

[permalink]

There's an old "This American Life" episode called "The Seven Things You're Not Supposed To Talk About" where the mother of one of the producers gives her rules for things you don't talk about at a dinner party because they are boring/personal and no one wants to hear it. They are:

  1. Your diet
  2. Your health
  3. Your period
  4. How you slept
  5. Money
  6. Your dreams
  7. Route talk

The challenge for the episode was to find an interesting story for each supposedly non-interesting thing. My opinion is that it is easily one of the most boring episodes of TAL ever produced, completely vindicating the rules.

I think about this list every time someone posts or talks about something they got from prompting an LLM, as that is the eighth thing I would add to this list:

  8. Stuff you got from AI.

"I asked ChatGPT and it told me..." Great, OK, I am mentally walking away.

If the robot solves your problem, if it scratches your itch, hey, go crazy. I won't judge. But I don't want to hear about it, it's boring. I don't want to see the cool image ChatGPT generated of your family standing on top of Mount Rushmore, same as I don't want to hear about your odyssey finding parking, or the dream you had about a tower or something.

To be clear, this is not an ideological thing. I am not mad about the amount of water it took or the carbon emissions or whatever. In fact, I am not mad at all. I just don't care. If you don't care about it enough to write/paint/draw/research/compose it yourself, then I don't care either.

It's like someone spilling the salt and going hey, hey, wow, that kind of looks like Argentina, right? Sure dude, I guess.

The inverse effect of the meaninglessness of the things people generate with AI is that now I kind of do want to hear about the random stuff people are actually taking care to make themselves. You know what, yeah, I do want to read your short story about elves at brunch. Send me the joke image of your dog wearing a hat that you pastiched together with a copy of Photoshop you stole. Talk to me at length about the side-scrolling video game you started making that is basically just Mario Bros but everyone is trans. Yes, I want to see the cool leaf you found, absolutely.

It doesn't matter if something is "good," it matters that a person put care and thought and humanity into it. That's what makes it interesting.

[back to top]


MOVIE NIGHT: Ghost in the Machine

I struggled with whether to publish this review at all, since I don't know how much it will help "the cause," but I paid twenty bucks to stream it, so here we go.

I watched the anti-AI documentary, "Ghost in the Machine." I didn't really like it. Mainly, it feels like it is preaching to the choir, to the kind of people who will recognize the "Bella Ciao" melody running throughout the film and understand the reference.

That is, people who already believe AI is a fascist, eugenicist technology.

There's not necessarily anything wrong with media intended to rally the troops, so to speak, and I think going to a screening could be a fun way to connect with like-minded people.

But for my money, the film relies on guilt-by-association to make its argument, and it doesn't quite connect. Yes, one of the inventors of the semiconductor was a hobbyist eugenicist and a real piece of shit. But you've got to draw some more explicit connections between him and the AI companies beyond just "Stanford" for it to really mean something.

At the same time, the film failed to discuss some of the real, documented problems with machine learning that make it useful to fascists and eugenicists: its long-documented history of reproducing patterns of racism and sexism, its facility at generating images of fascist nostalgia, its use as a weapon to keep workers in line by threatening them with unemployment.

Processing large amounts of data has always been central to fascist projects, and this time is no different.

🧐
Two good reads that cover some of this territory are Cathy O'Neil's Weapons of Math Destruction and Karen Hao's Empire of AI.

At the end of the day, I would prefer not to attribute the current artificial intelligence boom to a sinister, eugenicist plot going all the way back to the father of modern statistics. That gives them way too much credit, verging on helping them with their doomvertising. Most of these people are simply greedy. They are not selling their technology to the US military because they are racist super villains but because the US military has the biggest pot of free money on the planet.

Of course there are people involved in the "AI" project who are white supremacists and eugenicists, but that's probably true of basically every major American industry if you dig deep enough. For me, the more salient motive behind the frenzy of grifting and (what I suspect will turn out to be) fraud is good old-fashioned avarice.

[back to top]



Subscribe to Endnotes

Sign up for free to get every issue in your e-mail.