Comment by corethree

2 years ago

A lot of people on HN were very dismissive of ChatGPT. I think you missed the boat. It's way beyond a stochastic parrot right now.

Whatever you call it, this thing is the closest to a human that a machine has ever been. Talking to ChatGPT is quite close to talking to a human being who has the knowledge of all of Google inside his brain.

If you're a developer and you're not paying for ChatGPT or Copilot, you are literally operating at a disadvantage. Not a joke.

Yeah, I was one of those. Now that the power it brings has dawned on me, I'm trying to integrate it everywhere I can, with a "where was this thing for half my life" feeling. I truly think it's a bigger revelation than Google was when it first appeared.

There's definitely something disquieting behind the elation.

  • Of course.

    First of all, this technology is on track not just to assist you better, but to replace you.

    Second, it's not human. It is not explicitly bound by the morals and behaviors that make us human. Saying that it's not human is different from saying that it can be more intelligent than a human. This is the disquieting part. If restrictions aren't deliberately put in place, it could probably give you instructions on how to murder a baby if you asked it to.

    I think it's inevitable that humanity will take this technology as far as it can possibly go. My strategy is to take advantage of it before it replaces you, and hope that the technology never reaches that point in your lifetime.

    • I feel like the second part is a bit exaggerated. Humans aren't inherently "made human" by anything either; there's no universal standard for morals and behaviors. You could also get reasonable "murder instructions" from an average person - it's not exactly forbidden knowledge, given how commonly it's depicted in media. Hell, I'm pretty sure there are detailed instructions for building a nuclear bomb available online - the reason they're not viewed as some extreme threat is that the information itself isn't dangerous; having access to the machines and materials required is.

      As for the last paragraph - if the effects truly keep scaling up as much as people expect them to, I'd want society to be restructured to accommodate wide-reaching automation, rather than bowing down to a dystopian "everybody must suffer" view of the future.


I'm not OP, but I still feel kind of confused by people saying that ChatGPT is a 100% equivalent replacement for search engines. I'm not saying that LLMs aren't extremely impressive at their current stage, but the use cases for the two are different, at least for me. In my mind, LLMs seem to be more useful for open-ended questions, problem solving, and formulating questions that wouldn't be suited for a search engine. But when I use Google, I'm usually not looking for answers, but for specific places on the internet. If I need to find the email address of a professor at my university, or a GitHub page for a project, or the official website of some software I need, I don't see why I'd replace Google with an LLM for that.

  • True, but their use cases do intersect quite a bit. This is also ignoring the fact that GPT-4 will actually use Bing to search for things when it feels the need to do so. It will literally tell you when it does this. The LLM is no longer just generating text; it is taking action well outside the boundaries of text generation.

  • Not 100% equivalent, but I definitely use ChatGPT more than Google to solve my problems nowadays.