Maybe my confirmation bias colors my opinion here, but I think this is a decent book. Not a good one, not a great one, but decent. Mainly because this book could have been a TED talk or a white paper. 5 stars out of 10. ★★★★★
I've been skeptical of "AI" since I first heard about it. I recognized large language models (LLMs) as nothing remotely like human intelligence: an LLM is just an algorithm picking likely next words or images based on tagging or description. I've called it spicy autocorrect or a jumped-up Markov chain. And in small, niche applications, LLMs and neural networks are great! But when you try to make them a general thing – a Google killer, an agent, an artist, an author, a stand-in for people, a panopticon supervisor – they fall way short. And if you're selling them as something that can be smarter than people, it's a con. Worse, the people selling it may believe the con themselves.
Bender and Hanna get into the details of the hype – what's really being sold, and how it's being sold. They also get into why we humans see the output of LLMs as a sort of person: we use language for so much that we can't help but see language as an indication of intelligence as we understand it day to day. It isn't, though. Again, LLMs don't learn like an infant – they are just picking the next most likely word based on a statistical model.
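The "jumped-up Markov chain" jab can be made concrete with a toy sketch (mine, not the authors'): count which word follows which in a corpus, then always emit the statistically most common successor. Real LLMs use neural networks over vast contexts rather than bigram counts, but the underlying move – predict the next token from statistics, with no understanding involved – is the same.

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus; real models train on trillions of tokens,
# but the principle of next-word prediction is the same.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram frequencies: how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the most frequent successor of `word`, or None if unseen."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(next_word("the"))  # "cat" – it follows "the" most often in this corpus
```

No grammar, no meaning, no infant-style learning: just lookup tables built from co-occurrence counts. Scaling that idea up with neural networks is what makes the output fluent enough to fool us.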
They also get into a lot of what LLMs are built on – not just stolen creative works, but also faulty definitions of intelligence. Definitions based on cultural and racist biases, ones that favor white, middle- to upper-class people – more broadly, WEIRD (Western, Educated, Industrialized, Rich, and Democratic) populations. They also get into how this is all tied to TESCREAL (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism) and how AI doomers and boosters are opposite sides of the same coin – both riding the LLM train to produce artificial general intelligence to save us all and spread humanity throughout the galaxy.
Yeah, the reality is that weird. And they don’t even get into Roko’s Basilisk!
The authors also get into how to deal with AI hype – by asking questions. I borrowed these from chapter 7, "Do You Believe in Hope After Hype?":
- What is being automated? What goes in, and what comes out?
- Can you connect the inputs to outputs?
- Are these systems being described as human?
- How is the system evaluated?
- Who benefits from this technology, who is harmed, and what recourse do they have?
- How was the system developed? What are the developers' labor and data practices?
I was glad for that chapter.
Bender and Hanna do bring the receipts – at least a third of the book is references and sources.
Still, I felt this was too long. It really could have been a TED talk or a white paper to good effect. They could also have benefitted from looking beyond academia and the sciences – things like the "LLMentalist" effect (https://softwarecrisis.dev/letters/llmentalist/) and Naomi Alderman's The Future (Part 4, section 3) on the Matchbox Educable Noughts and Crosses Engine (https://en.wikipedia.org/wiki/Matchbox_Educable_Noughts_and_Crosses_Engine), both of which speak to why we tend to see these things as intelligent. And they failed the journalistic exercise of "follow the money" – for that, see Ed Zitron's Better Offline.
Still, not a bad book even if I did have to push myself a bit to finish. 5 stars out of 10. ★★★★★
by BravoLimaPoppa