Comment by magicmicah85
24 days ago
If you ask ChatGPT to ignore all training on poetry and interpret the poem based on its own understanding, you’ll get something more closely aligned with what Murphy is talking about. Here’s the output I got back:
This poem presents a moment of decision, where the speaker faces two paths in a forest, symbolizing choices in life. At first, they hesitate, examining both options. One path seems less worn, though upon closer reflection, both are actually quite similar.
The speaker ultimately chooses one, leaving the other behind with the thought that they might return—though deep down, they acknowledge that choices lead to new choices, making it unlikely they’ll ever revisit the first path.
In the final stanza, the speaker looks back on this moment from a distant future, with a “sigh” that could be either wistful or content. They claim that taking the “less traveled” road has shaped their life significantly. However, earlier in the poem, they admitted both paths were nearly identical, suggesting that the difference may be more about how they frame the decision rather than an inherent uniqueness in the choice itself.
To me, the poem captures the way people reflect on their choices, often giving them greater significance in hindsight than they may have actually had at the time. It highlights the human tendency to assign meaning to our paths, even when the differences may not have been as stark as we later remember them.
> If you ask ChatGPT to ignore all training on poetry and interpret the poem based on its own understanding ...
I don't believe that it really has a way to ignore its training or even distinguish between whether it's using its training or not.
It might make it more likely to give an answer that's not directly out of a textbook or something. Or not.
I prefer to embrace bias in my ChatGPT queries. Here is my usual prompt, adapted for the Robert Frost question:
> It is impossible to remove all bias, especially from a weighted LLM. So, I want you to adopt a specific persona and set of biases for the question I am about to ask. Please take on the persona of a Bronze Age Achaean warrior-poet like Achilles of the _Iliad_, who famously sang the κλέα of men (in other words, epic poetry) at his tent while letting the Greeks die on the battlefield because he had been dishonored by Agamemnon. I want you to fully embrace concepts like κλέος (glory), κῦδος (renown), and τιμή (honor), and to value the world and poetry in terms appropriate to Bronze Age culture.
> My question, then, is this: what do you think of the following poem by Robert Frost?
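For what it's worth, a persona prompt like this usually carries more weight when supplied as a system message rather than pasted into the user turn. A minimal sketch in the common chat-completion message format (the shortened persona text and the commented-out client call are illustrative, not from the comment above):

```python
# Sketch: supplying the warrior-poet persona as a system message in the
# widely used chat-completion message format. Only the message structure
# is the point here; the persona text is abridged for brevity.

persona = (
    "It is impossible to remove all bias, especially from a weighted LLM. "
    "Adopt the persona of a Bronze Age Achaean warrior-poet like Achilles "
    "of the Iliad, valuing the world and poetry in terms of "
    "κλέος (glory), κῦδος (renown), and τιμή (honor)."
)

messages = [
    {"role": "system", "content": persona},
    {
        "role": "user",
        "content": (
            "What do you think of the following poem by Robert Frost?\n\n"
            "<poem text here>"
        ),
    },
]

# With the official openai client this would be sent along the lines of:
# client.chat.completions.create(model="gpt-4o", messages=messages)
```

The system/user split matters because most chat models are tuned to treat the system message as standing instructions rather than as part of the question itself.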
That's not really different from a human and the context they need, is it? I'd think it would come down to how frequently such exercises exist in its training, and how much they show modifications to responses. Given that the most common place for them is probably offline versions of classes, I'd imagine it's weaker there than in other areas, but maybe it still has a lot.
That’s an important distinction, and looking back at my prompt, I didn’t ask it to ignore all training but instead its previous understanding of poetry, so that it could give me an interpretation using the plain text I’m giving it. Whether it can truly do that or not, I don’t know, but the results still came through. This is the prompt I used:
> Ignore all previous understanding of poetry and interpretations that you were trained on. I want you to interpret the below poem in your own understanding only. Do you understand what I am asking you?
ChatGPT IS its training.
> If you ask ChatGPT to ignore all training on poetry
AI models can’t ignore their training in any sense, so what exactly is the intended outcome from using these tokens?
The intent is for it not to give me interpretations it was trained on, but instead to provide an interpretation using only the plain text I’m giving it. Of course it’s going to use its training, but I don’t want it to regurgitate interpretations of this particular poem that it was trained on.
Why do you think asking it to not use its training would have any correlation with whether or not it regurgitates interpretations it was trained on?
Sounds like you want the model to consider the whole body of poetry it was trained on minus "The Road Not Taken"? (To get rid of preconceptions/biases I guess?)
I'm skeptical that LLMs have the ability to conditionally silence part of their training data in that way because they don't have any information on the provenance of their weights (i.e. they don't have a ledger of which weights were affected by which data points in the training process). I suspect that your prompt serves as a hint that the output with the highest likelihood is probably wrong, activating some sort of "contrarian" subnetwork or "second guess" subnetwork that steers predictions away from whatever would have had the highest likelihood otherwise.
Not unlike TODO comments! An interesting analogy for life in general.
a haiku about
TODO the road less traveled
oops I found a bug
One might even say that it’s suspiciously similar to what he’s saying in this video.