Comment by embedding-shape
3 days ago
If it's not meant to be read, and not meant to be run since it doesn't compile and doesn't seem like it's been able to for quite some time, what is this meant to demonstrate?
That agents can write a bunch of code by themselves? We already knew that, and what's even the point if the code doesn't work?
I feel like I'm still missing what this entire project and blogpost is about. Is it supposed to be all theoretical or what's the deal?
You and me both, bud. I often feel these days that humanity has never had a more fractured reality, and worse, those fractures are very binary and tribal. I cope by trying to find fundamental truths that are supported by overwhelming evidence rather than focusing on speculation.
I guess the fundamental truth that I'm working towards for generative AI is that it appears to have asymptotic performance with respect to recreating whatever it's trying to recreate. That is, you can throw unlimited computing power and unlimited time at trying to recreate something, but there will still be a missing essence that separates the recreation from the creation. For very small snippets, and with very large compute, the results may be reasonable, but it will never completely replace what can be created in meatspace by meatpeople.
Whether the economics of the tradeoff between "nearly recreated" and "properly created" is net positive is what I think this constant ongoing debate is about. I don't think it's ever going to be "it always makes sense to generate content instead of hiring someone for this", but rather a messier "in this case, we should generate content".
No, but this blogpost is on a whole other level. Usually the stuff they showcase at least does something, not shovelware that doesn't compile.