Comment by bilekas

3 years ago

> Whatever ChatGPT (or similar) can generate, humans need to do better. If we reach the point where the humans simply can't do better, well, then it won't matter. But that's a ways off.

I love this response way more than I should.

It's the only bit of the response that I don't agree with. I don't come to HN solely for utilitarian purposes. If I think I'm frequently communicating with a machine on HN, then I'll stop going to HN. It really will kill HN for me. If I want to communicate with a machine for utilitarian purposes, I'll go directly to the machine, and I will know that I'm communicating with a machine (a machine that cannot bring me any new experience from the real world that was not mediated in text; a machine that can only select that text on a statistical basis; a machine that was in part trained on my own words from the past!).

  • > If I think I'm [...]

    If the problem is your faith, it is you that has to change and not the world. It's much easier that way around too :)

I think that puts too much faith in the average person.

  • But it’s exactly the right expectation for a Hacker News comment. Already HN excels because comments avoid auto-outrage and meme-commentary.

    • How high the comment quality here usually is becomes really noticeable when it's lacking under a post. The most common offenders are political posts with outrage potential, especially while knee-jerk responses are flooding in and before measured comments have had time to rise to the top.

      Recent example was https://news.ycombinator.com/item?id=33931384 about cash limits - Sooo many comments are just "Tyranny!", "EU bad!" and overall unmitigated cynicism.


  • Not to obnoxiously gatekeep too much, but I'd like to think that the target demographic of Hacker News is not the average pleb.

Why is that?

It's not about love or should.

Rather, we __must__ continually do better to maintain superiority. Could you imagine what would unfold if humans gave that up to a logical system? At best, we offload most things to the bot, become dependent, and let unused cognitive (and physical?) abilities atrophy. At worst, a more capable thing determines that (a group of) humans are not logical, and then moves to solve that problem as trained.

Either way, I really like the scenario where we instead harness the power of AI to solve existential problems for which we've been ill equipped (will Yellowstone erupt this year? how could the world more effectively share resources?) and get smarter in the process.

Can we do that? I have faith :-)

  • The problem is that (1) human hardware is fixed, (2) computer hardware is variable and getting better all the time, and (3) computer software is variable and getting better all the time. The question then is if and when they cross over, and the recent developments in this domain have me seriously worried that such a crossover is inevitable. A human/AI hybrid may well be slowed down by the human bit...

    • We could work on (1) right? Or as our biological component ceases to be useful to our hybrid self, we can discard it, like a baby tooth.

      We thought chess or Go defined humanity; it turns out it's driving.


My gut feeling is that we're still nowhere near that point. GPT is built on an incredibly large and diverse model trained on a huge corpus of human writing. Anything it creates will always be derived from what humans have already done. It can't easily react to new information, nor can it make inferences beyond what it's told. I could be wrong, but as impressive as the tech is, I don't think it will ever be able to make genuine deductions or inferences.