Showing posts with label ai. Show all posts

Tuesday, September 30, 2025

The Gaslit Asset Class

James Grant invited me to address the annual conference of Grant's Interest Rate Observer. This was an intimidating prospect; the previous year's conference featured billionaires Scott Bessent and Bill Ackman. As usual, below the fold is the text of my talk, with the slides, links to the sources, and additional material in footnotes. A yellow background indicates textual slides.

Thursday, September 18, 2025

Hard Disk Unexpectedly Not Dead

As I read Zak Killian's Expect HDD, SSD shortages as AI rewrites the rules of storage hierarchy — multiple companies announce price hikes, too, I realized I had forgotten to write this year's version of my annual post on the Library of Congress' Designing Storage Architectures meeting, which was back in March. So below the fold I discuss a few of the DSA talks, Killian's more recent post, and yet another development in DNA storage. The TL;DR is that the long-predicted death of hard disks is continuing to fail to materialize, and so is the equally long-predicted death of tape.

Thursday, August 14, 2025

The Drugs Are Taking Hold

cyclonebill CC-BY-SA
In The Selling Of AI I compared the market strategy behind the AI bubble to the drug-dealer's algorithm, "the first one's free". As the drugs take hold of an addict, three things happen:
  • Their price rises.
  • The addict needs bigger doses for the same effect.
  • Their deleterious effects kick in.
As expected, this is what is happening to AI. Follow me below the fold for the details.

Tuesday, July 22, 2025

The Selling Of AI

Not AI, just a favorite
On my recent visit to London I was struck by how many of the advertisements in the Tube were selling AI. They fell into two groups, one aimed at CEOs and the other at marketing people. This is typical, the pitch for AI is impedance-matched to these targets:
  • The irresistible pitch to CEOs is that they can "do more with less", or in other words they can lay off all these troublesome employees without impacting their products and sales.
  • Marketing people value plausibility over correctness, which is precisely what LLMs are built to deliver. So the idea that a simple prompt will instantly generate reams of plausible collateral is similarly irresistible.
In The Back Of The AI Envelope I explained:
why Sam Altman et al are so desperate to run the "drug-dealer's algorithm" (the first one's free) and get the world hooked on this drug so they can supply a world of addicts.
You can see how this works for the two targets. Once a CEO has addicted his company to AI by laying off most of the staff, there is no way he is going to go cold turkey by hiring them back even if the AI fails to meet his expectations. And once he has laid off most of the marketing department, the remaining marketeer must still generate the reams of collateral even if it lacks a certain something.

Below the fold I look into this example of the process Cory Doctorow called enshittification.

Thursday, June 12, 2025

The Back Of The AI Envelope

Sauce
The rise of the technology industry over the last few decades has been powered by its very strong economies of scale. Once you have invested in developing and deploying a technology, the benefit of adding each additional customer greatly exceeds the additional cost of doing so. This led to the concept of "blitzscaling", the idea that it makes sense to delay actually making a profit and instead devote these benefits to adding more customers. That way you follow the example of Amazon and Uber on the path to a monopoly laid out in Brian Arthur's Increasing Returns and Path Dependence in the Economy. Eventually you can extract monopoly rents and make excess profits, but in the meantime blitzscale believers will pump your stock price.

This is what the VCs behind OpenAI and Anthropic are doing, and what Google, Microsoft and Oracle are trying to emulate. Is it going to work? Below the fold I report on some back-of-the-envelope calculations, which I did without using A1.

Tuesday, April 22, 2025

Going Out With A Bang

In 1.5C Here We Come I criticized people like Eric Schmidt who said that:
the artificial intelligence boom was too powerful, and had too much potential, to let concerns about climate change get in the way.

Schmidt, somewhat fatalistically, said that “we’re not going to hit the climate goals anyway,”
Salomé Balthus
Uwe Hauth, CC BY-SA 4.0
In January, for a Daily Mail article, Miriam Kuepper interviewed Salomé Balthus, a "high-end escort and author from Berlin" who works the World Economic Forum. Balthus reported attitudes that clarify why "3C Here We Come" is more likely. The article's full title is:
What the global elite reveal to Davos sex workers: High-class escort spills the beans on what happens behind closed doors - and how wealthy 'know the world is doomed, so may as well go out with a bang'
Below the fold I look into a wide range of evidence that Balthus' clients were telling her the truth.