I've just finished reading *If Anyone Builds It, Everyone Dies*, which makes the case that developing superintelligent AI poses an existential risk to humanity. I found it compelling, but felt it rested on a lot of assumptions that were mostly glossed over as axiomatic.
Specifically I'm looking for:
- Books that argue the opposite: that superintelligent AI is safe, beneficial, or overhyped as a risk
- Accessible, non-technical reads on how modern AI is actually built and trained (LLMs in particular)
- Books that address whether current LLM-based approaches could plausibly lead to superintelligence at all, or whether there are fundamental limits
I'm not a researcher; I work in tech, but not in AI/ML. I can handle complexity, but I'd prefer readable over academic.
by wintermute023