Comment by ramraj07

3 years ago

It’ll be interesting if we soon come to a day when a comment can be suspected to be from a bot because it’s too coherent and smart!

I agree, but in that case we can learn from the bots instead of wincing at regurgitated material.

Basically, if it improves thread quality, I'm for it, and if it degrades thread quality, we should throw the book at it. The nice thing about this position is that comment quality is a function of the comments themselves, and little else.

  • I suggest thinking about the purpose of discussion on HN.

    There’s a tension between thread quality on the one hand and the process of humans debating and learning from each other on the other hand.

    • There are many types of contributions to discussions on HN, of course. But I will tell you the contributions that resonate most with me: Personal experiences and anecdotes that illuminate the general issue being discussed. Sometimes a single post is enough for that illumination, and sometimes it is the sum of many such posts that sheds the brightest light.

      An example of the latter: Since March 2020, there have been many, many discussions on HN about work-from-home versus work-at-office. I myself started working from home at the same time, and articles about working from home started to appear in the media around then, too. But my own experience was a sample of one, and many of the media articles seemed to be based on samples not much larger. It was thus difficult to judge which most people preferred, what the effects on overall productivity, family life, and mental health might be, how employers might respond when the pandemic cooled down, etc. The discussions on HN revealed better and more quickly what the range of experiences with WFH was, which types of people preferred it and which types didn’t, the possible advantages and disadvantages from the point of view of employers, etc.

      In contrast, discussions that focus only on general principles—freedom of this versus freedom of that, foo rights versus bar obligations, crypto flim versus fiat flam—yield less of interest, at least to me.

      That’s my personal experience and/or anecdote.

    • I don't think that thread quality and the process of humans debating and learning from each other are opposing concepts.

      On the contrary. It's precisely when people aren't willing to learn, or to debate respectfully and with an open mind, that thread quality deteriorates.

    • Yeah. Overemphasis on wanting "smart thoughtful comments" could create a chilling effect where people might refrain from asking simple questions or posting succinct (yet valuable!) responses. Sometimes dumb questions are okay (because it's all relative).

    • I like thinking about the purpose, because I doubt there is a defined purpose right now. I have absolutely no idea why whoever hosts this site (ycombinator?) wants comments - if they're like reddit or twitter, though, it's to build a community and post history, because you can put that down as an asset and, idk, do money stuff with it. Count it in valuations and whatnot. And maybe do marketing and data mining. Or sell APIs. Stuff like that. So in this case, for the host, the "purpose" is "generate content that attracts more users to register and post, that is in a format that we can pitch as having Value to the people who decide valuations, or is in a format that we can pitch as having Value to the people who may want to pay for an API to access it, or is valuable for data mining, or, gives us enough information about the users that, combined with their contact info, functions as something we can sell for targeted ads."

      For me the "purpose" of discussion on HN is to fill a dopamine addiction niche that I've closed off by blocking reddit, twitter, and youtube, and, to hone ideas I have against a more-educated-than-normal and partially misaligned-against-my-values audience (I love when the pot gets stirred with stuff we aren't supposed to talk about that much such as politics and political philosophy, though I try not to be the first one to stir), and occasionally to ask a question that I'd like answered or just see what other people think about something.

      Do you think there's much "learning from each other" on HN? I'm skeptical that really happens much on the chat-internet outside of huge knowledge-swaps happening on stackoverflow. I typically see confident value statements: "that's why xyz sucks," "that's not how that works," "it wasn't xyz, it was zyx," etc. Are we all doing the "say something wrong on the internet to get more answers" thing to each other? What's the purpose of discussion on HN to you? Why are you here?

      The purpose of my comment is I wanna see what other people think about my reasons for posting, whether others share them, maybe some thoughts on that weird dopamine hit some of us get from posting at each other, and see why others are here.


    • There's the quality of the written commentary (which is all that matters for anyone only reading, never posting on HN) and the quality of the engagement of the people who do write comments (which includes how much they learned, the emotions they felt, and other less tangible stuff).

      I think HN is optimizing for the former quality aspects and not the latter. So in that sense, if you can't tell if it's written by a bot, does it matter? (cue Westworld https://www.youtube.com/watch?v=kaahx4hMxmw)

    • Intelligent debate can happen in high-quality threads. And when we are intelligently debating subjective matters, the debate is targeted towards the reader, not the opposing party. On the other hand, when we are debating objective matters, the debate leads to the parties learning from each other. So I don't think these things are opposites.


    • I don’t think so; at least, I find that process to be very educational, especially when someone changes their mind or an otherwise strong argument gets an unusually compelling critique.

      Basically I think those two things are synonymous.

  • Then humans might just be on the sideline, watching chatbots flooding the forums with superbly researched mini-whitepapers with links, reasoning, humour; a flow of comments optimized like tiktok videos, unbeatable like chess engines in chess. Those bots could also collude with complementing comments, and create a background noise of opinions to fake a certain sentiment in the community.

    I have no suggestion or solution, I'm just trying to wrap my head around those possibilities.

    • If there’s a bot that can take a topic and research the argument you feed it, all without hallucinating data or making up references… please please point me to it.


Seems this isn't a widely held opinion, but some of what I've seen from ChatGPT is already better than the typical non-LLM equivalents.

  • An example: I've asked it for "package delivery notification" and it generally produces something that is a better email template than communications I've seen humans put together and have many long "review sessions" on. Potentially an incredible saving of time & effort.

At that point the whole concept of a message board with humans exchanging information is probably over.

I am ultimately motivated to read this site by the hope of reading smart and interesting things. It is quite inefficient, though. This comment is great, but most comments are not what I am looking for.

If you could spend your time talking to Von Neumann about computing, the input from thousands of random people who know far less than Von Neumann would not be interesting at all.

There is an xkcd comic about this (of course):

#810 Constructive: https://xkcd.com/810/

  • There is (of course) the famous Alan Turing paper about this [1], which is becoming more relevant by the day.

    Alan Turing's paper was quite forward thinking. At the time, most people did not yet consider men and women to be equal (let alone homosexuals).

    I don't think it is so important whether a comment is written by a man, a woman, a child, or a <machine>, or some combination thereof. What is important is that the comment stands on its own, and has merit.

    Pseudonyms (accounts) do have a role to play here. On HN, an account accrues reputation based on whether its past comments were good or bad. This can help rapidly filter out certain kinds of edge cases and/or bad actors.

    A Minimum Required Change to policy might be: accounts that regularly make false/incorrect comments may need to be downvoted/banned (more) aggressively, where previously we simply assumed they were making mistakes in good faith.

    This is not to catch out bots per se, but rather to deal directly with new failure modes that they introduce. This particular approach also happens to be more powerful: it immediately deals with meatpuppets and other ancillary downsides.

    We're currently in the midst of a revolution in AI, and we might come up with better ideas over time too. Possibly we need to revisit our position and adjust every 6 months, or even every 3 months.

    [1] https://academic.oup.com/mind/article/LIX/236/433/986238?log...
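The policy sketched above (restrict accounts that are regularly wrong, while still assuming good faith for occasional mistakes) could be expressed as a simple threshold rule. This is a purely hypothetical sketch: the `Account` fields, the `should_restrict` name, and both thresholds are assumptions for illustration, not anything HN actually implements.

```python
from dataclasses import dataclass

@dataclass
class Account:
    """Hypothetical per-account reputation record."""
    name: str
    total_comments: int = 0
    flagged_false: int = 0  # comments the community judged factually wrong

def should_restrict(acct: Account,
                    min_sample: int = 10,
                    max_false_rate: float = 0.3) -> bool:
    """Restrict only accounts that are *regularly* wrong.

    min_sample guards against punishing a new account for one bad comment;
    max_false_rate is the tolerated share of false/incorrect comments.
    """
    if acct.total_comments < min_sample:
        return False  # too little history: assume good faith
    return acct.flagged_false / acct.total_comments > max_false_rate

# A good-faith account with rare mistakes stays unrestricted:
honest = Account("honest", total_comments=50, flagged_false=2)
# An account that is wrong most of the time gets restricted:
serial = Account("serial", total_comments=50, flagged_false=30)
```

The minimum-sample guard is the part that encodes "previously we simply assumed they were making mistakes in good faith": the rule only kicks in once there is enough history to distinguish a pattern from a mistake.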

    • "I don't think it is so important whether a comment is written by a man, a woman, a child, or a <machine>, or some combination thereof. What is important is that the comment stands on its own, and has merit."

      This feels wrong for a few reasons. The generalized knowledge an AI can express may be useful. But if it makes things up convincingly, someone who follows its line of thought may be worse off. With all the shit humans say, it’s their real human experience, formulated through a prism of their mood, intelligence, and other states and characteristics. It’s a reflection of a real world somewhere. AI statements, in this sense, are minced realities cooked into something that may only look like a solid one. Maybe for some communities this would be irrelevant, because participants are expected to judge logically and to check all facts, but that would require keeping awareness at all times.

      By “real human” I don’t mean that they are better (or worse) in a discussion, only that I am a human too, a real experience is applicable to me in principle and I could meet it irl. AI’s experience applicability has yet to be proven, if it makes sense at all.


    • Note: Alan Turing's Imitation Game pretty much directly involves men, women, machines, and teletypes.

      These days of course we use such things as IRC clients, Discord, Web Browsers etc, instead of teletypes. If you substitute in these modern technologies, the Imitation Game still applies to much online interaction today.

      I've often applied the lessons gleaned from this to my own online interactions with other people. I don't think I ever quite imagined it might start applying directly to <machines>!

  • Of course there is, but it’s definitely weird when the joke’s only funny when it’s not easy to think of it as a real possibility!

    In some ways this thread sounds like the real first step in the rise of true AI, in a weird, banal encroachment kind of way.

    • I think it would be really interesting to see threads on Hacker News start with an AI digestion of the article and surrounding discussion. This could provide a helpful summary and context for readers, and also potentially highlight important points and counterpoints in the conversation. It would be a great way to use AI to enhance the user experience on the site.

      I routinely use AI to help me communicate. Like Aaron to my Moses.

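The "AI digestion" idea above mostly comes down to assembling the article and its comments into a single prompt for a language model. A minimal sketch of that assembly step, with the model call itself left out; the function name, prompt wording, and comment cap are all hypothetical choices, not an existing API.

```python
def build_digest_prompt(article_title: str,
                        article_text: str,
                        comments: list[str],
                        max_comments: int = 20) -> str:
    """Assemble a prompt asking a language model to digest an article
    plus its surrounding discussion, surfacing points and counterpoints."""
    shown = comments[:max_comments]  # cap context size for the model
    comment_block = "\n".join(f"- {c}" for c in shown)
    return (
        f"Summarize the article '{article_title}' and the discussion below.\n"
        "Highlight the main points and the strongest counterpoints.\n\n"
        f"ARTICLE:\n{article_text}\n\n"
        f"COMMENTS:\n{comment_block}\n"
    )

prompt = build_digest_prompt(
    "Example article", "Body text...",
    ["First comment", "Second comment"])
```

The resulting string would then be sent to whatever model the site chose; how to rank or sample the comments that make the cut is the genuinely hard part, and this sketch just truncates.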

    • When I compare the ChatGPT-generated comments to those written by real humans on most web forums, I could easily see myself preferring to interact only with AIs in the future rather than with humans, who subject me to all kinds of stupidity and rude behavior.

      The AIs aren't going to take over by force, it'll be because they're just nicer to deal with than real humans. Before long, we'll let AIs govern us, because the leaders we choose for ourselves (e.g. Trump) are so awful that it'll be easier to compromise on an AI.

      Before long, we'll all be happy to line up to get installed into Matrix pods.


From the ChatGPT-generated stuff I've seen just in the last week, I think we're already there. Most humans these days are incredibly stupid.

  • I would rephrase that: humans are incredibly stupid most of the time. Only if they make diligent use of ‘System 2’ are they not.