Comment by xpe
2 days ago
> The prompt & any follow-ups do have notable effects, but IMO this just means that most of actual meaning you wanted to convey is in those prompts.
If you mean in the sense of differentiating meaning from the base model, I take your point. But consider another sense: taking GPT-OSS 120b as an example, the weights are around 60 GB, while my prompt plus conversation is, say, under 10 KB. What can we say about that? One central question seems to be: how many of the model's weights were used to answer the question? (This is an interesting research question.)
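For scale, here is a rough back-of-envelope sketch of that comparison (the 60 GB and 10 KB figures are the illustrative numbers above, not measurements):

```python
# Rough scale comparison: model weights vs. a prompt (illustrative numbers only)
weights_bytes = 60 * 10**9  # ~60 GB of weights for GPT-OSS 120b
prompt_bytes = 10 * 10**3   # ~10 KB of prompt + conversation

ratio = weights_bytes / prompt_bytes
print(f"weights are ~{ratio:,.0f}x larger than the prompt")
```

Of course, raw byte counts say nothing about which weights actually mattered for a given answer; that is exactly the open question.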
> If I was your interlocutor, I'd understand you & your ideas better if you posted your prompts as well as (or instead of) whatever the LLM generated.
Indeed, yes, this is a good practice for intellectual honesty when citing an LLM. It does make me wonder though: are we willing to hold human accounts to the same standard? Some fields and publications encourage authors to disclose conflicts of interest and even their expected results before running the experiments, in the hopes of creating a culture of full disclosure.
I enjoy real human connection much more than LLM text exchanges. But when it comes to specialized questions, I seek any sources of intelligence that can help: people, LLMs, search engines, etc. I view it as a continuum that people can navigate thoughtfully.
> how many of the model's weights were used to answer the question? (This is an interesting research question.)
That’s not the point. Every one of your conversation partners has the same access to the full 60 GB weights as you do. The only things you have to offer that your conversation partners don’t already have are your own thoughts. Post your prompts.
> I enjoy real human connection much more than LLM text exchanges. But when it comes to specialized questions, I seek any sources of intelligence that can help: people, LLMs, search engines, etc. I view it as a continuum that people can navigate thoughtfully.
We are all free to navigate that continuum thoughtfully when we are not in conversation with another human, who is expecting that they are talking to another human.
If you believe that LLM conversation is better, that’s great. I’m sure there’s a social media network out there featuring LLMs talking to other LLMs. It’s just not this one.
I want to point out two conversational disconnects and offer some feedback, person to person. I edited my post a bit, so you may have replied to an earlier draft of mine. In any case, based on what we can see now, I want to clear up a few things:
---
>>> aB: The prompt & any follow-ups do have notable effects, but IMO this just means that most of actual meaning you wanted to convey is in those prompts.
>> xpe: If you mean in the sense of differentiating meaning from the base model, I take your point.
(I clarified; seems like we agree on this.)
> aB: That’s not [my] point.
(Conversational disconnect #1)
---
>>> aB: If I was your interlocutor, I'd understand you & your ideas better if you posted your prompts as well as (or instead of) whatever the LLM generated.
>> xpe: Indeed, yes, this is a good practice for intellectual honesty when citing an LLM.
(I clarified; seems like we agree on this.)
> aB: Post your prompts.
(Conversational disconnect #2)
---
> Post your prompts.
This feels abrasive. In another comment you repeat this line pretty much verbatim several times.
It is unclear whether you are accusing me of using an LLM. I'm not.
---
> If you believe that LLM conversation is better, that’s great.
I hope you recognize that this is not what I said, nor how I would say it, nor representative of what I mean.
> I’m sure there’s a social media network out there featuring LLMs talking to other LLMs. It’s just not this one.
This doesn't respond substantively to what I wrote; it reads like a caricature of it.
> That’s not the point.
It would be kinder to the reader to say "That's not my point." Otherwise it can sound like you get to decide what the point is.
Overall, we agree on many things. But somehow that got lost. Also, the tone of the comment above (and its grandparent) feels a bit brusque and condescending.