Comment by eimrine
13 hours ago
This requires a homoiconic AI with no separate training phase. If learning is just compressing some data in a data center, the AI will quickly become obsolete.
And one more thing: this kind of artificial living will be easiest, in many senses, if it specializes in all kinds of scam/fraud. Technically it is doable, but the Sam Altmans are too interested in their own money, not in yours.
Great point on homoiconicity — I agree that most current LLMs are "frozen brains" with no lifelong learning.
My aim here isn’t to create a fully self-modifying AI (yet), but to test what happens when even a static model is forced to operate in a feedback loop where money = survival.
Think of it as a sandbox experiment: Will it exploit loopholes? Specialize in scams? Beg humans for donations?
It’s more like simulating economic pressure on a mindless agent and watching what behaviors emerge.
(Also, your last line made me laugh — and yeah, that’s part of the meta irony of the experiment.)
If you use a <8 GB model you can finetune it with Unsloth in an hour or so. What if the system extracted facts and summarized its own output each day down to only 10,000 lines or so, then finetuned its base model on the accumulated data and switched to running that, as a kind of simulation of long-term memory? Within the same day it could have a kind of medium-term memory via RAG and short-term memory via context.
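A minimal sketch of the three-tier memory loop described above, with assumed names throughout (`AgentMemory`, `consolidate_day`, etc. are hypothetical). The finetune step is stubbed out as a callback: in practice it would invoke Unsloth on the accumulated corpus and reload the tuned model, which is omitted here.

```python
# Hypothetical sketch of the memory scheme from the comment above:
# short-term = rolling context, medium-term = RAG store (same day),
# long-term = daily fact extraction capped at ~10k lines, then finetune.
# The `finetune` callback is a stand-in for an actual Unsloth training run.

MAX_DAILY_LINES = 10_000  # daily budget suggested in the comment


def consolidate_day(transcript_lines, extract_fact):
    """Extract facts from today's transcript, capped at the daily budget."""
    facts = [f for line in transcript_lines if (f := extract_fact(line))]
    return facts[:MAX_DAILY_LINES]


class AgentMemory:
    def __init__(self):
        self.context = []      # short-term: rolling context window
        self.rag_store = []    # medium-term: retrievable within the day
        self.accumulated = []  # long-term: growing finetuning corpus

    def end_of_day(self, extract_fact, finetune):
        """Summarize the day, grow the corpus, finetune, reset the day tiers."""
        daily = consolidate_day(self.context + self.rag_store, extract_fact)
        self.accumulated.extend(daily)
        finetune(self.accumulated)  # would retrain and swap the base model
        self.context.clear()
        self.rag_store.clear()
        return daily
```

The design point is that only the capped daily summary crosses into the long-term tier, so the finetuning corpus grows by at most `MAX_DAILY_LINES` per simulated day regardless of how much the agent generates.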