This list has demonstrated many times that it believes AI tools cannot and should not write fiction.
But this is a long, interesting post about a writer's experience with a new LLM that is, in my opinion, head and shoulders above the rest — the Claude Opus 4.5 model.
Will the pile-on begin, saying that an LLM cannot edit or make useful comments? And that those who have not yet reached the point of being supplied with real-life editors should just make do on their own?
Earlier today, somebody compared using LLMs to using pitching machines to play baseball. Obviously silly. But the fact is that pitching machines can help human batters become much better at their game.
If a human uses an LLM for editing assistance, is it the equivalent of steroids / PEDs, or of pitching machines?
by Own-Animator-7526
4 Comments
Sure, they can be useful, but everything comes at a cost. As AI continues to displace workers, the economy will only get worse for everyone who isn’t heavily invested in AI. Is it worth displacing millions of working people and destroying the environment to make things a bit easier for ourselves? Unfortunately, selfish human beings will probably say “yes” in overwhelming numbers.
Think of it as just a more advanced version of spell/grammar check in any word processing program. It’s a helpful tool, but it’s good to keep aware of its limitations as well.
Seems sensible to me? As long as the person is the one doing the actual writing.
Lack of feedback can be a real problem for a one-person writing team. Having the Terminator give you some feedback is not ideal, but it definitely beats zero feedback, and it can also perform many of the more tedious proofreading tasks.
Ignore them for the dipshits they are.