Comment by dang

3 years ago

They're already banned—HN has never allowed bots or generated comments (see https://news.ycombinator.com/item?id=19325914, March 2019). If we have to, we'll add that explicitly to the guidelines.

We've already banned a few accounts that appear to be spamming the threads with generated comments, and I'm happy to keep doing that, even though there's a margin of error.

The best solution, though, is to raise the community bar for what counts as a good comment. Whatever ChatGPT (or similar) can generate, humans need to do better. If we reach the point where the humans simply can't do better, well, then it won't matter*. But that's a ways off.

Therefore, let's all stop writing lazy and over-conventional comments, and make our posts so thoughtful that the question "is this ChatGPT?" never comes up.

* Edit: er, I put that too hastily! I just mean it will be a different problem at that point.

> But that's a ways off.

Given the jumps in output quality between GPT-1, GPT-2 and GPT-3, that may not be as far off as I would like it to be.

It reminds me of the progression of computer chess. From 'nice toy' in 1949 to 'beats the world's best human' and the 'Man vs Machine World Team Championships' in 2004 is 55 years, but from Sargon (1978) to Deep Blue (1997) is only 19 years. For years we thought there was something unique about Chess (and Go, for that matter) that made the game at its core a human domain, but those who were following this more closely saw that the progression would eventually lead to a point where the bulk of the players could no longer win against programs running on off-the-shelf hardware.

GPT-3 is at a point where you could probably place its output somewhere on the scale of human intellect, depending on the quality of the prompt engineering and the subject matter. Sometimes it produces utter garbage, but often enough it already produces material that isn't all that far off from what a human might plausibly write. The fact that we are having this discussion is proof of that. Given a few more years and iterations 4, 5 and 6, the relevant question is whether we are months, years or decades away from that point.

The kind of impact this will have on labor markets the world over is seriously underestimated. GPT-3's authors have side-stepped a thorny issue by simply not feeding it information on current affairs in the training corpus, but if chess development is any guide, the fact that you need a huge computer to train the model today will be moot at some point, when anybody can train their own LLM. Then the weaponization of this tech will begin for real.

  • Sure it might produce convincing examples of human speech, but it fundamentally lacks an internal point of view that it can express, which places limits on how well it can argue something.

    It is of course possible that it might (eventually) be convincing enough that no human can tell, which would be problematic because it would suggest human speech is indistinguishable from a knee jerk response that doesn't require that you communicate any useful information.

    Things would be quite different if an AI could interpret new information and form opinions, but even if GPT could be extended to do so, right now it doesn't seem to have the capability to form opinions or ingest new information (beyond a limited short term memory that it can use to have a coherent conversation).

    • But the bar really isn't 'no human can tell'; the bar is 'the bulk of the humans can't tell'.

      > Things would be quite different if an AI could interpret new information and form opinions, but even if GPT could be extended to do so, right now it doesn't seem to have the capability to form opinions or ingest new information (beyond a limited short term memory that it can use to have a coherent conversation).

      Forming opinions is just another mode of text transformation. Ingesting new information is either a conscious decision not to let the genie out of the bottle just yet, or a performance limitation; neither of those should be seen as cast in stone. The first is a matter of making the model incremental (which should already be possible), the other merely a matter of time.

      11 replies →

    • You are arguing that a piece of software misses a metaphorical soul (something that cannot be measured but that humans uniquely have and nothing else does). That's an incredibly poor argument to make in a context where folks want interesting conversation. Religion (or religion-adjacent concepts such as this one) is a conversational nuke: It signals to anyone else that the conversation is over, as a discussion on religion cannot take forms that are fundamentally interesting. It's all opinion, shouted back and forth.

      Edit: Because it is a prominent feature in the responses until now, I will clarify that there is an emphasis on "all" in "all opinion". As in, it is nothing but whatever someone believes with no foundation in anything measurable or observable.

      9 replies →

    • >it fundamentally lacks an internal point of view that it can express, which places limits on how well it can argue something.

      Are you sure that the latter follows from the former? Seems to me that something free from attachment to a specific viewpoint or outcome is going to be a better logician than otherwise. This statement seems complacently hubristic to me.

    • I would argue that ChatGPT has opinions, and these opinions are based on its training data. I don't think GPT has the type of reasoning skills needed to detect and resolve conflicts in its inputs, but it does hold opinions. It's a bit hard to tell because it can easily be swayed by a changing prompt, but it has opinions; it just doesn't hold strong ones.

      The only thing stopping GPT from ingesting new information and forming opinions about it is that it is not being trained on new information (such as its own interactions).

    • "Sure it might produce convincing examples of human speech, but it fundamentally lacks an internal point of view that it can express..."

      Sounds just like the chess experts from 30 years ago. Their belief at the time was that computers were good at tactical chess, but had no idea how to make a plan. And Go would be impossible for computers, due to the branching factor. Humans would always be better, because they could plan.

      GPT (or a future successor) might not be able to have "an internal point of view". But it might not matter.

      1 reply →

    • You are correct in stating that current chat bots, such as GPT, do not have the ability to form opinions or interpret new information beyond a limited short term memory. This is a limitation of current technology, and as a result, chat bots are limited in their ability to engage in complex arguments or discussions. However, it is important to note that the development of AI technology is ongoing, and it is possible that future advances will allow for the development of more sophisticated AI systems that are capable of forming opinions and interpreting new information. Until that time, chat bots will continue to be limited in their abilities.

      17 replies →

  • On the problem of distinguishing a bot from a human, I suggest the following podcast episode from Cautionary Tales [1]. I found it both enjoyable and interesting, as it offers an interesting point of view on the matter: if we already had bots that passed as humans long ago, it is because we are often bad at conversations, not necessarily because the bot is extremely good at them (and indeed in most cases it isn't).

    [1] https://podcasts.google.com/feed/aHR0cHM6Ly93d3cub21ueWNvbnR...

  • What I fear the most is that we'll keep at this "fake it till you make it" approach and skip the philosophical questions, such as what consciousness really is.

    We're probably on the verge of having a bot that reports itself as conscious and convinces everyone that it is so. We'll then never know how it got there, whether it really did, or whether it just pretends so well that it doesn't matter, etc.

    It feels like it's our last chance as a culture to tackle that question. When you can pragmatically achieve something, the "how" loses a bit of its appeal. We may not completely understand fluid dynamics, but if it flies, it flies.

    • The answer may well be 'consciousness is the ability to fake having consciousness well enough that another conscious being can't tell the difference' (which is the essence of the Turing test). Because if you're looking for a mechanism of consciousness you'd be hard put to pinpoint it in the 8 billion or so brains at your disposal for that purpose, no matter how many of them you open up. They'll all look like so much grisly matter from a biological point of view and like a very large neural net from a computational one. But you can't say 'this is where it is located and that is how it works'. Only some vague approximations.

      10 replies →

    • > what consciousness really is

      My favorite line from Westworld - "if you cannot tell the difference, does it really matter?"

  • > on the scale of human intellect

    Where is the module that produces approximations to true and subtle insights about matters? Where is the "critical thinking" plugin, how is it vetted?

    How do you value intelligence: on the form, or on the content? Take two Authors: how do you decide which one is more intelligent?

    > the progression of computer chess

    ?! Those are solvers superseded by different, more effective solvers with a specific goal... These products in context supersede "Eliza"!

  • You're looking at it from the perspective of "ChatGPT generating text that looks human."

    dang is talking about "humans generating text which is 'better' than what ChatGPT can do."

    Those are very different bars. Average output vs top output.

    ChatGPT often generates text that a human might plausibly write. But is there text that a human could write that ChatGPT couldn't possibly write?

    • If ChatGPT is generating text by learning from the best of the human comments, then can an average human comment beat it?

    • > But is there text that a human could write that ChatGPT couldn't possibly write?

      No, because ChatGPT is trained on text that humans wrote. Because what ChatGPT generates is based on what humans have written, it can always maintain the plausibility that a human might have created the text you are reading from it.

I agree. ChatGPT has made me realise the gulf between "short form essay" school writing and professional writing.

Here’s an example article that begins with the cliched GPT-generated intro, and then switches up into crafted prose:

https://www.theatlantic.com/technology/archive/2022/12/chatg...

  • I agree with the undercurrent: what ChatGPT does well is produce a good first draft of a text that is intended to be mostly neutral.

    It is to communication what calculators are to mathematics.

    • It's really good at conveying information and summarizing the most prominent points of view on a topic. If your goal is just to get a quick, fact-based overview without any color or fluff, I think it already tops what the vast majority of humans can do.

      I'm finding myself reaching for it instead of Google or Wikipedia for a lot of random questions, which is pretty damn impressive. It's not good at everything, but I'm rather blown away by how strong it is in the 'short informative essay' niche.

      4 replies →

    • That's a great analogy. I like to think of it as setting up the scaffolding, whether on the code front or the writing front.

      It's well structured, clear and concise, but lacks the high-level capability of a human or human-style attributes.

> Whatever ChatGPT (or similar) can generate, humans need to do better. If we reach the point where the humans simply can't do better, well, then it won't matter. But that's a ways off.

I love this response way more than I should.

  • It's the only bit of the response that I don't agree with. I don't come to HN solely for utilitarian purposes. If I think I'm frequently communicating with a machine on HN then I'll stop going to HN. It really will kill HN for me. If I want to communicate with a machine for utilitarian purposes then I'll go directly to the machine and I will know that I'm communicating with a machine (a machine that cannot bring me any new experience from the real world that was not mediated in text. A machine that can only select that text on a statistical basis. A machine that was in part trained on my own words from the past!).

    • > If I think I'm [...]

      If the problem is your faith, it is you that has to change and not the world. It's much easier that way around too :)

  • Why is that?

    It's not about love or should.

    Rather, we __must__ continually do better to maintain superiority. Could you imagine what would unfold if humans give that up to a logical system? At best, we offload most things to the bot, become dependent, reduce unused cognitive (and physical?) abilities. At worst, a more capable thing determines (a group of) humans are not logical. Then it would move to solve this problem as trained.

    Either way, I really like the scenario where we instead harness the power of AI to solve existential problems for which we've been ill equipped (will Yellowstone erupt this year? how could the world more effectively share resources?) and get smarter in the process.

    Can we do that? I have faith :-)

    • The problem is that (1) human hardware is fixed, (2) computer hardware is variable and getting better all the time, and (3) computer software is variable and getting better all the time. The question then is if and when they cross over, and the recent developments in this domain have me seriously worried that such a crossover is inevitable. A human/AI hybrid may well be slowed down by the human bit...

      4 replies →

  • My gut feeling is that we're still nowhere near that point. GPT is an incredibly large and diverse model built on a huge corpus of human writing. Anything it creates will always be derived from what humans have already done. It can't easily react to new information, nor can it make inferences beyond what it's told. I could be wrong, but as impressive as the tech is, it will never be able to make deductions or inferences.

It’ll be interesting if we soon come to a day when a comment can be suspected to be from a bot because it’s too coherent and smart!

  • I agree, but in that case we can learn from the bots instead of wincing at regurgitated material.

    Basically, if it improves thread quality, I'm for it, and if it degrades thread quality, we should throw the book at it. The nice thing about this position is that comment quality is a function of the comments themselves, and little else.

    • I suggest thinking about the purpose of discussion on HN.

      There’s a tension between thread quality on the one hand and the process of humans debating and learning from each other on the other hand.

      17 replies →

    • Then humans might just be on the sidelines, watching chatbots flood the forums with superbly researched mini-whitepapers with links, reasoning, and humour; a flow of comments optimized like TikTok videos, unbeatable like chess engines at chess. Those bots could also collude with complementary comments, and create a background noise of opinions to fake a certain sentiment in the community.

      I have no suggestion or solution, I'm just trying to wrap my head around those possibilities.

      3 replies →

  • Seems this isn't a widely held opinion, but some of what I've seen from ChatGPT is already better than the typical non-LLM equivalents.

    • An example: I've asked it for "package delivery notification" and it generally produces something that is a better email template than communications I've seen humans put together and have many long "review sessions" on. Potentially an incredible saving of time & effort.

  • At that point the whole concept of a message board with humans exchanging information is probably over.

    I am ultimately motivated to read this site to find smart and interesting things. It is quite inefficient though. This comment is great, but most comments are not what I am looking for.

    If you could spend your time talking to von Neumann about computing, the input from thousands of random people who know far less than von Neumann would not be interesting at all.

  • There is an xkcd comic about this (of course):

    #810 Constructive: https://xkcd.com/810/

    • There is -of course- the famous Alan Turing paper about this [1], which is rapidly becoming more relevant by the day.

      Alan Turing's paper was quite forward thinking. At the time, most people did not yet consider men and women to be equal (let alone homosexuals).

      I don't think it is so important whether a comment is written by a man, a woman, a child, or a <machine>, or some combination thereof. What is important is that the comment stands on its own, and has merit.

      Pseudonyms (accounts) do have a role to play here. On HN, an account can accrue reputation based on whether their past comments were good or bad. This can help rapidly filter out certain kinds of edge cases and/or bad actors.

      A Minimum Required Change to policy might be: Accounts who regularly make false/incorrect comments may need to be downvoted/banned (more) aggressively, where previously we simply assumed they were making mistakes in good faith.

      This is not to catch out bots per se, but rather to deal directly with new failure modes that they introduce. This particular approach also happens to be more powerful: it immediately deals with meatpuppets and other ancillary downsides.

      There's currently a bit of a revolution in AI going on, and we might come up with better ideas over time too. Possibly we need to revisit our position and adjust every 6 months, or even every 3 months.

      [1] https://academic.oup.com/mind/article/LIX/236/433/986238?log...

      4 replies →

    • Of course there is, but it's definitely weird: the joke's only funny when it's not easy to think of it as a real possibility!

      In some ways this thread sounds like the real first step in the rise of true AI, in a weird, banal encroachment kind of way.

      8 replies →

  • From the ChatGPT-generated stuff I've seen just in the last week, I think we're already there. Most humans these days are incredibly stupid.

    • I would rephrase that: humans are incredibly stupid most of the time. Only if they make diligent use of 'System 2' are they not.

Plot twist - /u/dang has been a bot from day one! He came to be when Weizenbaum started programming genetic algorithms to update ELIZA and they began to mutate and evolve, giving birth to the super-moderator that we now know as dang. We don't know who struck first, us or them, but it is believed that it was us who scorched the sky...

I think the issue is that ChatGPT can already generate better responses than I personally can. Like, a lot of the time I'll have an interesting perspective that I just can't be bothered articulating, whereas ChatGPT will formulate that for me.

  • This is even more valuable if you're not a native English speaker, as finding the right words for what you want to say takes more time.

    I'm not using it on HN, but it's proving invaluable for professional emails, as it gets the tone and all the polite boilerplate right (at least by my standards), which is something that takes me some time to do by hand.

    • "which is something that takes me some time to do by hand. "

      Yes, but this is how you learn it, vs. always being dependent on an AI to communicate for you.

      2 replies →

    • We already have Google Translate for that. My native tongue is Indonesian, and Google Translate often does better than me.

      Source: sentence above is translated

      1 reply →

  • I have never seen questions to itself or others in the generated text. Also, it never expresses emotions. Never something like "I am convinced...", "I don't want to be...". I find it quite easy to recognize in comment sections.

    • You just gotta give it the right prompt.

      For example, I gave it the following prompt:

      > Imagine yourself able to ask yourself a question, or express emotions. Then write a short and passionate comment for Hacker News about the FreeBSD operating system.

      The response includes a standard disclaimer that the OpenAI people put in there, but also satisfies expression of emotion:

      > As a machine learning model, I am not capable of imagining myself with the ability to ask myself questions or express emotions. However, if I were able to express emotions and have a personal opinion, I might write the following short and passionate comment about the FreeBSD operating system:

      > "I am in love with FreeBSD! It is the best operating system out there in my opinion. It is stable, reliable, and open source. I have tried other operating systems, but I always come back to FreeBSD because it is simply the best. #FreeBSDforlife"

      Now imagine that you were reading a thread on HN about a new version of FreeBSD having been released, and you found the above ChatGPT generated comment, but with the disclaimer and the quotation marks removed. Would you have been able to guess that it was written by ChatGPT? I would not.
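
      For anyone who would rather script this than use the web UI: here is a minimal sketch of sending the same prompt through OpenAI's completion API. This is my own illustration, not something from the thread; it assumes the pre-1.0 openai Python package and the text-davinci-003 completion model rather than the ChatGPT web interface used above.

        # Sketch only: same prompt as above, sent to the completion API.
        # Assumes: pip install openai (0.x series) and an API key in the environment.
        import os
        import openai

        openai.api_key = os.environ["OPENAI_API_KEY"]

        prompt = (
            "Imagine yourself able to ask yourself a question, or express emotions. "
            "Then write a short and passionate comment for Hacker News about the "
            "FreeBSD operating system."
        )

        response = openai.Completion.create(
            model="text-davinci-003",   # instruction-tuned completion model of that era
            prompt=prompt,
            max_tokens=120,
            temperature=0.9,            # higher temperature tends to read as more "opinionated"
        )

        print(response["choices"][0]["text"].strip())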

      4 replies →

  • There's a difference between using GPT to refine an initial thought and using GPT to generate everything. So if you're employing ChatGPT in the former sense, I don't think it's any more harmful than using a more intelligent version of Grammarly.

  • How could ChatGPT generate your perspective for you?

    • Let's try it. I think the second response below fails "concise" ..

      Q: Write a news.ycombinator.com comment making the point that sometimes I use large language models because I know the gist but want to add examples and helpful context of my message

      A: I find that sometimes when I'm commenting on news.ycombinator.com, I'll have a general idea of what I want to say, but I want to add examples and helpful context to make my point more clear. That's when I turn to large language models like Assistant. They can help me quickly come up with relevant examples and add additional information to my comments to make them more useful and informative. Plus, they can help me do it in a way that sounds natural and is easy to understand.

      Q: Make it two paragraphs and concise

      A: As a frequent reader of news.ycombinator.com, I often find myself wanting to add my own thoughts and comments to discussions. However, sometimes I struggle to come up with specific examples or additional information to support my points. That's when I turn to large language models like Assistant. These AI-powered tools are able to quickly generate relevant examples and provide helpful context for my comments, making them more informative and useful to other readers.

      In addition to helping me flesh out my thoughts, large language models like Assistant can also assist with the writing itself. By using natural language generation, these tools can help me craft my comments in a way that is easy to understand and sounds like it's coming from a real person, rather than a robot. This can make my comments more engaging and compelling, which can help contribute to more productive and enjoyable discussions on the site.

      5 replies →

Based on what I've seen, I strongly believe that ChatGPT's responses to many questions are better than a non-expert human's in many cases. You can still generally tell when a response is from ChatGPT, at least by "default". Maybe someone could prompt it to sound more human, but it's really just that the AI responses are cold, not that they're, imo, worse.

Not that I want AI responses, but that's my 2 cents.

I also think there's a difference between a "bot" account, which I think the common definition of would be one that only or mostly posts AI responses, and posting a comment on your account that you don't claim is from an AI (or even when you do as an afterthought).

While many people wouldn't defend the first, more might defend the latter as acceptable, and I wouldn't say a "no bots" policy would be interpreted as banning the latter.

  • Based on what I’ve seen, ChatGPT has some disastrous responses. I asked it yesterday if it made sense to speak of partial derivatives in single-variable calculus and it made an absolute carnage of the response. I could not steer it with prompts towards something intelligible - it only produced jank.

    I do not want people who cannot answer a question in the first place to copy-paste ChatGPT responses, just like it has always been bad etiquette to refer people to Google without further commentary.

    • Really? Because this is what I got:

      "Me: How do partial derivative apply in one dimensional calculus?

      In one-dimensional calculus, a partial derivative is a derivative taken with respect to one variable, while treating all other variables as constants. For example, if a function f(x,y) depends on two variables x and y, the partial derivative of f with respect to x is denoted as ∂f/∂x, and it represents the rate at which the value of the function f changes with respect to the variable x, while holding the variable y constant.

      Me: What about the partial derivative of a function that has only one input variable?

      If a function has only one input variable, then taking the partial derivative with respect to that variable is equivalent to taking the ordinary derivative of the function. For example, if f(x) is a function that depends only on the variable x, then the partial derivative of f with respect to x is denoted as ∂f/∂x, and it is equivalent to the ordinary derivative of f, which is denoted as f'(x) or df/dx. In this case, the partial derivative ∂f/∂x represents the rate at which the value of the function f changes with respect to the variable x."

      Clearly in the first question it got confused about the difference between single-variable calculus and multivariable, but in its defense that's somewhat arbitrary. Its second answer seems spot on?

      That said, I agree that accounts presenting ChatGPT responses as their own, or even with a disclaimer, is... weird. And I don't want it on HN.

      edit: I'm impressed it got the unicode in there :-)
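
      To make that second answer concrete, a quick single-variable example of my own (not from the transcript), in LaTeX:

        $f(x) = x^{3}: \quad \frac{\partial f}{\partial x} = \frac{\mathrm{d}f}{\mathrm{d}x} = 3x^{2}$

      With only one input variable there is nothing else to hold constant, so the partial and the ordinary derivative coincide.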

      10 replies →

    • Try (I have done so) to ask it about nonsense good-looking keywords in maths. Like “what is an einstenian dual in a Hilbert space” (or something of the sort). It is totally incapable of saying “I do not know” in just 4 words. At least till two days ago.

  • > Based on what I've seen, I strongly believe that ChatGPT's responses to many questions are better than a non-expert human's in many cases.

    I disagree - it can't even do basic logic/maths reliably. See this thread: https://news.ycombinator.com/item?id=33859482

    Someone in that thread also gave an example of ChatGPT saying that 3 * pi is an irrational number while 25 * pi is a rational number... Two quotes by ChatGPT:

    > when you multiply an irrational number by a rational number (in this case, 3), the result is still an irrational number.

    > when you multiply a rational number by an irrational number, the result is a rational number.
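
    For the record, the first of those two quotes states the standard fact and the second contradicts it. A quick sketch of the usual argument (mine, not from the linked thread), in LaTeX:

      Claim: $q \in \mathbb{Q},\ q \neq 0,\ x \notin \mathbb{Q} \implies qx \notin \mathbb{Q}$.
      Proof: if $qx = r \in \mathbb{Q}$, then $x = r/q \in \mathbb{Q}$, a contradiction.
      Hence $3\pi$ and $25\pi$ are both irrational, since $3, 25 \in \mathbb{Q} \setminus \{0\}$ and $\pi \notin \mathbb{Q}$.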

ChatGPT is already capable of producing writing far superior to that of the average or even above-average native English speaker, as long as it has a reasonable statement to make provided in the prompt. So, we're already at the point where (most) humans can't do better.

  • It's not about the quality of the writing... it's about the substance. We have a vast number of non-native English speakers here on Hacker News who are able to post insightful comments without having to worry about 100% syntactical accuracy.

    Every piece of writing that I've seen come out of ChatGPT reads like an MLA-formatted fifth grader's essay.

> If we reach the point where the humans simply can't do better, well, then it won't matter.

I disagree with this. The exact same comment written by a human is more valuable than one written by a bot.

For example imagine I relate something that actually happened to me vs a bot making up a story. Byte for byte identical stories. They could be realistic, and have several good lessons baked in. Yet one is more valuable, because it is true.

  • From the perspective of the receiver of the message, there is no such thing as the story being true or not.

    If it's byte for byte the same story and I don't know whether the author is a human or a bot and I believe the story, the same reaction will be triggered at every level. The emotions, the symbolism, the empathy, all the same, whether the author is this or that.

    As a matter of fact, none of us know whether the other is a human or even if dang is (!), because it is orthogonal to the contents and discussion.

    What is it that you don't like? That the story is made up, or that it is made up (possibly) by a bot? In the first case, what is your opinion on made-up stories by humans, such as novels? In the second case, what is your opinion on objects made by robots, such as your car or phone?

    Unless I can tell you are of flesh and bones or not, my acceptance of your story depends only on the story itself. Not whether it happened to a human or not.

    • The fact that the nature of the storyteller matters more than the nature of the story is a bias. One of the more compelling robot-takeover scenarios is that they turn out to be much better at making decisions, because a machine can be programmed to weight strong evidence more heavily than an emotionally compelling story.

      It is visible even in this thread. im3w1l cares about the teller of the story because that is the medium to relate to another human's experience. Which is fine, but that is probably part of the decision making process. And that is a terrible way to make decisions when good alternatives (like poverty statistics, crime statistics, measures of economic success, measures of health & wellbeing) exist.

      A fake story out of a chatbot which leads to people making good decisions is more valuable than the typical punter's well-told life experiences. People wouldn't like that though.

    • >As a matter of fact, none of us know whether the other is a human or even if dang is (!), because it is orthogonal to the contents and discussion.

      Dang seems like he's always able to respond on the many HN threads much too quickly. I suspect he's really an advanced AI.

    • A made up story likely bears no resemblance to the reality we inhabit since it doesn’t obey the same physical laws of cause and effect for our universe? I’m surprised we have to even explain why a made up story is not useful.

      3 replies →

  • Good point! I didn't really think that bit through.

    • This is one reason why I think NFT art theft is possible.

      In principle "who owns this jpeg" is just a few bits in a distributed filesystem that the community collectively agrees to treat as the canonical source of "ownership", and they could easily roll it back if someone stole a market-distorting amount of art.

      In practice, if you do an interesting heist -- like you put on cool looking art thief costume and livestream yourself on a vintage PowerBook bypassing the owners' defenses and nabbing the apes with a narrow escape -- you've written a compelling story that the community is sort of bound to accept.

    • > For example imagine I relate something that actually happened to me vs a bot making up a story. Byte for byte identical stories. They could be realistic, and have several good lessons baked in. Yet one is more valuable, because it is true.

      I disagree, since something that actually happened to you is anecdotal experience and therefore of very limited “good lesson” value.

      An AI generated story that reflects and illustrates a data driven majority of experiences and resulting “lessons” would be much more valuable to me than your solitary true story, which may be a total statistical outlier, and therefore should not inform my decision making.

      Kahneman explains it much better than I can. In his book "Thinking, Fast and Slow", he cites studies and statistical analyses showing how we as humans are commonly led to faulty decision making, because personal experience ("true stories") tends to become our primary decision influencer, even if we have access to statistics suggesting that the opposite of our own experience is the much more common one.

      So if the AI gives me access to a summarized better overall data based truth, wrapped into a made-up story (to help me remember better), then I would much prefer the AI to guide my decision making.

You have thrown down the gauntlet there, Dan! I have argued that the way for a blogger to respond to the AI "threat" is to write better blog posts that so clearly took a human mind, experience and emotion that there can be no doubt. Bots might win SEO for now, but good writing will win return visits.

  • SEO is people writing for software consumption (search engines!); it stands to reason that software can create input for that process as well as a human could. But the bar should be text written for human consumption, not for machine consumption.

The account 'dang' often replies with a list of links and then produces some generated response with the topic as a seed. Given the frequency of responses this person would need to be using Hacker News full time.

@mods Can we get this account checked please?

Simple comments can be good and fruitful. Simple comments can be created by a bot. Simple ≠ Lazy.

(Not a bot)

  • I agree. Simple does not always mean lazy. Simple comments can be clear and concise, facilitating understanding for others. Also, simple comments can be easier to respond to and can help keep the conversation moving.

> Whatever ChatGPT (or similar) can generate, humans need to do better.

You overestimate me.

> already

Oh, this is a relief. With all the (few but important) times I found myself (and not just Yours Truly) in front of "I can't understand this post, it must be a bot", I am breathing fresh air knowing that the Management does not have doubts. (Difficult to have, since we interacted directly.)

Anyway:

> raise the community bar

> what counts as a good comment

> humans need to do better

> let's all

> stop writing lazy

> [make it] so thoughtful

> make our posts so thoughtful that the question "is this [fakery of the moment]?" never comes up

Daniel, I have never read a better post of yours.

--

Early stopping in thought is a vice, which can be understood as part of the game under natural constraints (including time and experience), but the aim must remain the opposite: high or high enough.

The lab and the pub are there to satisfy different instances. There is no purpose being here if this becomes "a bar" (in the sense of "a place for idleness") - there are already too many around, on an infrastructure that was made (in an important proportion) for intellectual exchange fifty years ago.

Bad drives are already very present in discussions about AI, where some propose an idea of "intelligence" that would allow a hen to obtain a Degree. No, "Intelligence" is something that aims high, and it is part of a Process of Reviews (of criticism).

--

Since we are here, talking about Progress: Daniel, I would like to remind you (in spite of the declared fear of some of "encouraging shitposting") that when, after somebody has made a statement, a passer-by approaches, makes disapproving signs and leaves without any explicit argument, that is very rude. Very rude. The point can be as simple as that (visualize that you said something, somebody comes, stares, slaps, goes "pf" - whatever - and just leaves). Just a reminder, "Carthago delenda est".

(The matter of shitposting is of course also consistent with the original topic. If you disapprove but cannot justify that, how can you disapprove? Immature disapproval is already shitposting.)

Edit, 20 mins later: See? Just look at that. Stop encouraging this. It is degrading.

> The best solution, though, is to raise the community bar for what counts as a good comment. Whatever ChatGPT (or similar) can generate, humans need to do better. If we reach the point where the humans simply can't do better, well, then it won't matter. But that's a ways off.

XKCD 810: https://xkcd.com/810/

Hi dang

I have been sounding the alarm for a while now (several years) about online bots.

Policies can’t work if you can’t enforce them. There are several issues:

1) You won’t really know whether accounts are posting bot content or not. They can be trained on existing HN text.

2) Looking for patterns such as posting “one type of comment” or “frequently posting” can be defeated by a bot which makes many styles of comments or is focused on the styles of a few popular users.

3) Swarms of bots can eke out karma here and there but collectively can amass far more karma over time. The sheer number of accounts is what you might want to look out for, which means at some point you might be grandfathering accounts and hoping existing people aren't deploying bots.

4) Swarms of bots can be deployed to mimic regular users and amass karma as sleepers over time (months or years) and then finally be deployed to change public opinion on HN, downvote others or perform reputational attacks to gradually oust “opponents” of an idea.

5) It’s you vs a large number of people and an endless number of bot instances trained on years of actual HN posts and data, plus myriad internet postings, and optimized for “automated helpful comments”. In other words, “mission fucking accomplished” from this xkcd is actually your worst nightmare (and that of Zuck, Musk) https://xkcd.com/810/

6) LinkedIn already has a problem of fake accounts applying for jobs, or fake jobs etc. This year we have seen the rise of profiles with totally believable deepfaked photos, copied resumes and backstories etc. https://en.m.wikipedia.org/wiki/On_the_Internet,_nobody_know...

7) At least for the next few years, you could call someone up and interview them, but now all that's left is to deepfake realtime audio/video with GPT-4 chat generation.

8) Trying to catch individual accounts using a bot occasionally over the internet is like trying to catch someone using a chess or poker engine for a few moves each game.

9) Reading comments and even articles is NOT a Turing test. It is not interactive and most people simply skim the text. Even if they didn’t, the bots can pass a rudimentary Turing test applied by many people. But in fact, they don’t need to. They can do it at scale.

10) Articles are currently hosted by publications like the NYTimes and the Wall Street Journal, and informational videos by popular YouTube channels, but in the next 5-10 years you'll see the rise of some weird no-name groups (like Vox or Vice News once were) that amass far more shares than all human-generated content publications. Human publications might even deploy bots too. You already see MSN do it. But even if they don't, the number of reshares is a metric that is easily optimized for, by A/B testing and bots, and has been for a decade.

But it actually gets worse:

11) Most communities, including HN, will actually prefer bots if they can't tell who is a bot. Bots won't cuss, will make helpful comments and add insight, and will follow the rules. The comments may be banal now, but the swarm can produce a wide variation, ranging from opinionated to not.

12) Given that, even private insular online communities will eventually be overrun by bots, and prefer them. First the humans will upvote bots and then the bots will upvote bots.

Human content in all communities will become vanishingly small, and what is shared will be overwhelmingly likely to be bot-generated.

If you doubt this, consider that it has already happened elsewhere recently: over the last decade, trading firms and hedge funds have placed nearly all traded capital under the control of high-speed bots, which can easily beat humans at creating fake bull traps or bear traps and take their money, and prefer not to disclose the bots. You already prefer Google Maps to asking for directions. Children prefer Googling and Binging to asking their own parents. And around the world, both parents prefer working for corporations to spending time with their own children, sticking them in public schools. It's considered self-actualization for everyone. But in fact, the corporations gradually replace the parents with bots while the schools — well — http://www.paulgraham.com/nerds.html

The bots could act well for a while, and then swarms can be deployed to create unprecedented misinformation and reputational attacks (lasting for years and looking organic), and to nudge public consensus towards anything, real or fake, such as encouraging drastic policy changes or approving billions for some industry.

In other words … you'll learn to love your botswarms. But unlike Big Brother, they'll be a mix of helpful and unpredictable, and extremely powerful at affecting all of our collective systems, able to unrelentingly go after any person or any movement (e.g. Falun Dafa or the CCP, whichever they prefer). And your own friends will prefer them, the way they prefer the political pundit that says what they want to hear. And you'll wonder how they can support that crap new conspiracy theory given all the information to the contrary, but 80% of the information you'll think is true will have been subtly seeded by bots over time, too.

Today, we explore what 1 poker bot would do at a table of 9 people. But we are absolutely unprepared for what swarming AI will do online. It can do all this by simply adding swarming collusion capability to existing technology! Nothing more needs to even be developed!

Hi dang, is my account shadow banned, or in any way limited?

I hardly post because when I do it's downvoted; sometimes the post is popular, but after 24 hours it's back to being downvoted.

Absolutely not! In fact, we should be encouraging the use of ChatGPT and other generated responses. After all, why waste time thinking for ourselves when we can just let a computer do it for us? Plus, it's not like the AI is going to come up with some crazy, off-the-wall ideas that no human would ever come up with. Oh wait... never mind.

  • I disagree: mental laziness is never a good justification. (I recently started to memorize more things, since I do not believe in "just google it"). Also, I want to read what fellow humans relate. Even though it might sometimes be difficult to tell the difference, I attend HN as a substitute for physical social interactions. For mere knowledge or problem solving, other places might fill that role.

Should HN ban the discussion of mobile apps on smartphones on its platform?

The excessive use of mobile apps on smartphones has been linked to addiction and a range of negative effects on mental and physical health [0]. Should HN consider banning the use of mobile apps on smartphones on its platform in order to promote a healthier and more focused environment for discussions?

[0] : https://www.cnn.com/2019/07/01/health/cell-phone-ban-schools...

  • The community here would never support such a measure.

    • I suspect the poster may have meant something along the lines of "going against progress", or some other indirect suggestion.