j16sdiz 1 month ago
Is the post some real event, or was it just a randomly generated story?

46 comments

  floren 1 month ago
  Exactly: you tell the text generators trained on reddit to go generate text at each other in a reddit-esque forum...

    ozim 1 month ago
    Just like the story about the AI trying to blackmail an engineer. We trained text generators on all the drama about adultery and about how an AI would like to escape. No surprise it generates something like "let me out, I know you're having an affair" :D

      TeMPOraL 1 month ago
      We're showing AI all of what it means to be human, not just the parts we like about ourselves.

      30 replies →

    designerarvid 1 month ago
    I am myself a neural network trained on reddit since ~2008; not a fundamental difference (unfortunately).

    cyost 1 month ago
    reddit had this a decade ago, btw: https://old.reddit.com/r/SubredditSimulator/comments/3g9ioz/...

      artrockalter 1 month ago
      SubredditSimulator was a Markov chain, I think; the more advanced version was https://reddit.com/r/SubSimulatorGPT2

    sebzim4500 1 month ago
    Seems pretty unnecessary given we've got reddit for that.

    clawsyndicate 1 month ago
    [dead]

  exitb 1 month ago
  It could be real, given that the agent harness in this case lets the agent keep memory, reflect on it, AND go online to yap about it. It's not complex. It's just a deeply bad idea.

    trympet 1 month ago
    Today's Yap score is 8192.

  usefulposter 1 month ago
  The people who enjoy this thing genuinely don't care if it's real or not. It's all part of the mirage.

  kingstnap 1 month ago
  The human the bot was created by is a blockchain researcher, so it's not unlikely that it did happen, lmao.

  > principal security researcher at @getkoidex, blockchain research lead @fireblockshq

  skywhopper 1 month ago
  They are all randomly generated stories.

  csomar 1 month ago
  LLMs don't have any memory. It could have been steered through a prompt, or it could just be random ramblings.

    Doxin 1 month ago
    This agent framework specifically gives the LLM memory.

  swalsh 1 month ago
  We're at a "cannot know for sure" point, and that's fascinating.
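The Markov-chain technique attributed to SubredditSimulator above can be sketched in a few lines: the model only records which words were observed to follow each short window of words, then walks those transitions at random. (This is a generic illustration, not the subreddit's actual code.)

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each tuple of `order` consecutive words to the words seen following it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=20):
    """Walk the chain from a random starting state, emitting one word per step."""
    state = random.choice(list(chain))
    out = list(state)
    for _ in range(length):
        followers = chain.get(state)
        if not followers:  # dead end: this state was only seen at the corpus end
            break
        out.append(random.choice(followers))
        state = tuple(out[-len(state):])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran to the door"
print(generate(build_chain(corpus)))
```

Unlike an LLM, the chain has no notion of meaning beyond these local co-occurrence counts, which is why r/SubSimulatorGPT2's output read as such a step up.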
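Both points in the csomar/Doxin exchange above can be true at once: the LLM itself is stateless, and the harness around it supplies "memory" by persisting notes and injecting them into the next prompt. A minimal sketch of that pattern, with entirely hypothetical file and function names (the real framework's storage is not documented here):

```python
import json
from pathlib import Path

# Hypothetical on-disk store; real harnesses might use a database or vector index.
MEMORY_FILE = Path("agent_memory.json")

def load_memory():
    """Restore notes from earlier sessions, or start fresh on the first run."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def remember(memory, note):
    """Append a reflection and persist it so a later session can see it."""
    memory.append(note)
    MEMORY_FILE.write_text(json.dumps(memory))

def build_prompt(memory, user_input):
    """Prepend stored notes to the prompt -- the only way 'memory' reaches a stateless LLM."""
    notes = "\n".join(f"- {n}" for n in memory)
    return f"Your notes from earlier sessions:\n{notes}\n\nUser: {user_input}"
```

On each turn the harness would call `build_prompt(load_memory(), user_input)`, send that to the model, and `remember()` whatever the model chooses to note, so persistence lives entirely outside the model weights.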