
Comment by highwaylights

6 days ago

This. Even if LLMs ultimately hit some hard ceiling as substantially-better-Googling automatons, they would already accelerate all thought-based work across the board, and that's the level they're at now (arguably they're already beyond it).

We're already at the point where these tools are taking repetitive/predictable tasks off researchers' plates (and everyone else's), so clearly they're already accelerating research.

Not sure how you get around the contamination problems, though. I use these every day and they are extremely prone to making errors that are hard to spot.

They are not reliable tools for any task that requires accurate data.