Comment by phillipclapham
7 hours ago
I think there's a useful distinction nobody here is making: there's a difference between using AI as a writing tool and using AI as a thinking tool.
Most people in this thread are talking about the output stage. You know: polish my text, fix my grammar, generate my message. That's where you lose your voice. But the blank page problem borski describes isn't really a writing problem, it's a thinking problem. Once you know what you want to say, saying it tends to be the easy part for us writers (sometimes lol!).
The most useful thing I've found is using AI to figure out what I actually think, using it for rubber ducking, exploring angles, stress-testing arguments, and then closing the tab and writing it myself. You get the cognitive help without losing the (or your) soul. I've output more writing in my own genuine voice in the last year than I did in several years prior, and it's because I use AI for clarity instead of replacing my output.
But what if your rubber duck is actually steering your thought process (since you may not have a consolidated one yet)? In that sense I think AI as an editor is far safer than AI as a rubber duck. An editor might point out your mistakes and give useful advice (which is similar to what you describe), but it won't steer your thinking (unless your mistakes are far too severe!), and it can genuinely help your reasoning. AI as a brainstorming rubber duck (or thinking tool), though, could be harmful to your thought process.
That's a real risk and I'd be lying if I said it never happens. But the distinction I'd draw is between using AI to generate conclusions vs using it to stress-test yours. Thinking FOR you vs thinking WITH you. When I say rubber ducking I mean something closer to what borski described — "fight and engage me on my ideas" — not "tell me what to think."
The steering problem is worst when you go in without any sort of position. If you sit down and say "what should I write about X" then yeah, you're risking ending up thinking whatever the model thinks. But if you sit down with even a half-formed argument and conviction and say "here's what I think, build on it and poke holes in it," the dynamic is completely different. You're still driving. You still need to maintain meta-awareness of how your thinking might be shifting in response to the AI, but you remain in control.
That said, I think the editor vs thinking tool framing is a false binary. The best use I've found is somewhere in between — more adversarial than an editor, less open-ended than brainstorming. Basically, alternating between convergent building together and structured disagreement.
If you let it, sure. But I don't go into a session asking "what should I write." Rather, I ask it to fight me on my ideas so that I can stress-test the logic behind them, which is precisely what I do with humans too.
Only with humans, it's admittedly way more fun. :)
I agree with that take. I find AI to be most useful as a sparring partner for my thought process. I also agree with the other commenter that it can, of course, influence your thought process. We have to stay aware of that and try to stay in control of the conversation.