Comment by jsight
1 day ago
I once wrote a little generalized YAML templating processor in Python with an LLM's assistance. It was working pretty well and passing a lot of the tests I was throwing at it!
Then I noticed that some of the failing tests were failing in really odd ways. Upon closer inspection, the generated processor had made lots of crazy assumptions about what it should be doing based on specific values in YAML keys that were obviously unrelated to my instructions.
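To give a flavor of that failure mode, a hypothetical sketch (not the actual generated code; the key name "env" and the substitution rule here are made up for illustration) might look like this, with the processor branching on the content of values it should have treated as opaque data:

```python
# Hypothetical reconstruction of the bug, not the original output:
# the generated processor special-cased specific YAML values
# instead of applying one uniform substitution rule.
import yaml  # PyYAML

def render(template_text, context):
    data = yaml.safe_load(template_text)
    out = {}
    for key, value in data.items():
        if key == "env" and value == "production":
            out[key] = "prod"  # unrequested special case on a value
        elif isinstance(value, str):
            out[key] = context.get(value, value)  # intended substitution
        else:
            out[key] = value  # pass non-string values through unchanged
    return out
```

The first branch is the kind of thing that passes most tests while silently rewriting data in a few odd cases.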
Yeah, I agree with the author. This stuff can be incredibly useful, but it definitely isn't anything like an AGI in its current form.