Comment by GuinansEyebrows

3 days ago

> rubber ducking

I don't mean to pick on your usage of this specifically, but I think it's noteworthy that the colloquial definition of "rubber ducking" seems to have expanded to include "using a software tool to generate advice/confirm hunches". I always understood the term to mean a personal process of talking through a problem out loud in order to methodically, explicitly understand a theoretical plan/process and expose gaps.

Based on a lot of articles/studies I've seen (admittedly I haven't dug into them too deeply), it seems like using chatbots for this type of task actually has negative cognitive impacts on some groups of users, which is the opposite of the personal value I thought rubber ducking was supposed to provide.

There is something that happens to our thought processes when we verbalise or write down our thoughts.

I like to think of it this way: instead of having seemingly endless amounts of half-thoughts spinning around inside your head, you make an idea or thought more "fully formed" when you express it verbally or with written (or typed) words.

I believe this is part of why therapy can work: by actually expressing our thoughts, we're forced to face them, and afterwards it's often much easier to reflect on them. Therapists often recommend personal journals because they can serve the same purpose.

I believe rubber ducking works because having to explain the problem forces you to gather your thoughts into something usable, which you can then reflect on more effectively.

I see no reason why doing the same thing except in writing to an LLM couldn’t be equally effective.

Indeed, the duck is supposed to sit there in silence while the speaker does the thinking ^^

This is what human language does, though, isn't it? It evolves over time, often in weird ways, like how many people "could care less" about something they couldn't care less about.

Well OK, sure. But I'm still having a "conversation" with nobody. I'm surprised how often it happens that the AI gives me a totally wrong answer, but a combination of formulating the question and something in the answer makes me think of the right thing after all.