
Comment by suddenlybananas

3 hours ago

> The story you're describing doesn't seem much better than one could get from googling around and going on stackoverflow

Isn't that the whole point? We were already expending a lot of effort trying to find better ways to search and to improve our search engines.

Now we have something that understands the question better than any search engine on earth did before LLMs arrived. It has relevant information about the question, presents that information in a way we can understand, and remembers the whole context: when we say it didn't get it quite right, it understands that and gets us more information until we find what we are looking for.

The process described above (at least for me) replaces googling and SO searches.

In fact, you can try this on your own as well. Here is a challenge: try to build a simple game. A target is randomly generated on the screen, and a player at the middle of the screen needs to hit it. While a key is held, the player swings a rope attached to a metal ball in circles above its head at a certain rotational velocity. Upon key release, the player lets go of the rope and the ball travels tangentially from the point of release. Each time you hit the target, you score.
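For the tricky part of that challenge, the release physics, here's a minimal sketch in plain Python (no game library, and all function names are my own invention): a ball swung in a circle at angular velocity omega leaves tangentially with speed omega * radius, perpendicular to the radius vector.

```python
import math

def release_velocity(center, angle, radius, omega):
    """Ball swung counter-clockwise around `center`; released at `angle`
    (radians) with angular velocity `omega` (rad/s), it leaves tangentially:
    speed = omega * radius, direction perpendicular to the radius vector."""
    # Position of the ball at the moment of release.
    bx = center[0] + radius * math.cos(angle)
    by = center[1] + radius * math.sin(angle)
    # Tangent direction for counter-clockwise rotation is (-sin, cos).
    vx = -omega * radius * math.sin(angle)
    vy = omega * radius * math.cos(angle)
    return (bx, by), (vx, vy)

def hits_target(ball, vel, target, target_radius, dt=0.01, steps=1000):
    """Step the ball along a straight line (no gravity, for simplicity)
    and report whether it ever comes within `target_radius` of `target`."""
    x, y = ball
    vx, vy = vel
    for _ in range(steps):
        if math.hypot(x - target[0], y - target[1]) <= target_radius:
            return True
        x += vx * dt
        y += vy * dt
    return False
```

A real implementation would hook `release_velocity` to the key-release event and draw each frame, but the scoring check reduces to `hits_target`.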

Try it without LLMs and only with google and SO first and see if LLMs help with efficiency :)

It doesn’t have to be, really. Even if it could replace 30% of documentation and SO scrounging, that’s pretty valuable. Especially since you can offload that and go take a coffee.