
Comment by alganet

3 months ago

The worst part is that it seems to be useless.

It is already running on fumes. Presumably, it has already ingested all the content it could.

Unlocking more human modes of understanding will probably make it worse (hey, researchers, you already know that, right?), revealing a fundamental flaw.

These hopes of getting some magic new training data have been stagnant for at least two or three years.

Now everyone has a broken LLM deployed; it works for some things, but it's darn terrible at what it was designed for.

The real dark territory is companies trying to get their investment back. By the looks of it, that won't happen easily. Meanwhile, content grows ever scarcer, and the good old tank (the internet) is now full of imbecile poison encouraged by the models themselves.