Dead Internet Theory

14 hours ago (kudmitry.com)

My parents were tricked the other day by a fake YouTube video of a "racist cop" doing something bad, and they got outraged by it. I watched part of the video, and even though it felt off, I couldn't immediately tell for sure whether it was fake. Nevertheless, I googled the names and details and found nothing but repostings of the video. Then I looked at the YouTube channel info, and there it said it uses AI for "some" of the videos to recreate "real" events. I really doubt that; it all looks fake. I'm just worried about how much divisiveness this kind of stuff will create, all so someone can profit off of YouTube ads. It's sad.

  • As they say, the demand for racism far outstrips the supply. It's hard to spend all day outraged if you rely on reality to supply enough fodder.

    • This is not the right thing to take away from this. This isn't about one group of people wanting to be angry. It's about creating engagement (for corporations) and creating division in general (for entities intent on harming liberal societies).

      In fact, your comment is part of the problem. You are one of the people who want to be outraged. In your case, outraged at people who think racism is a problem. So you attack one group of people, not realizing that you are making the issue worse by further escalating and blaming actual people, rather than realizing that the problem is systemic.

      We have social networks like Facebook that require people to be angry, because anger generates engagement, and engagement generates views, and views generate ad impressions. We have outside actors who benefit from division, so they also fuel that fire by creating bot accounts that post inciting content. This has nothing to do with racism or people on one side. One second, these outside actors post a fake incident of a racist cop to fire up one side, and the next, they post a fake incident about schools with litter boxes for kids who identify as pets to fire up the other side.

      Until you realize that this is the root of the problem, that the whole system is built to make people angry at each other, you are only contributing to the anger and division.

      3 replies →

    • I hadn't heard that saying.

      Many people seek being outraged. Many people seek to have awareness of truth. Many people seek getting help for problems. These are not mutually exclusive.

      Just because someone fakes an incident of racism doesn't mean racism isn't still commonplace.

      In various forms, with various levels of harm, and with various levels of evidence available.

      (Example of low evidence: a paper trail isn't left when a black person doesn't get a job for "culture fit" gut feel reasons.)

      Also, faked evidence can be done for a variety of reasons, including by someone who intends for the faking to be discovered, with the goal of discrediting the position that the fake initially seemed to support.

      (Famous alleged example, in second paragraph: https://en.wikipedia.org/wiki/Killian_documents_controversy#... )

      12 replies →

    • Wrong takeaway. There are plenty of real incidents. The reason for posting fake incidents is to discredit the real ones.

    • I like that saying. You can see it all the time on Reddit where, not even counting AI generated content, you see rage bait that is (re)posted literally years after the fact. It's like "yeah, OK this guy sucks, but why are you reposting this 5 years after it went viral?"

    • You sure about that? I think the actions of the US administration, together with ICE and the police, provide quite enough.

    • Rage sells. Not long after the EBT changes, there was a rash of videos of people playing the welfare-recipient caricature that people opposed to welfare imagine in their heads: women, usually black, speaking crudely about how the taxpayers need to take care of their kids.

      Not sure how I feel about that, to be honest. On one hand, I admire the hustle for clicks. On the other, too many people fell for it and probably never knew it was a grift, making all recipients look bad. I only happened upon them while researching a bit, after my own mom called me raging about it and sent me the link.

    • Wut? If you listen to what real people say, racism is quite common and has all the power right now.

  • a reliable giveaway for AI generated videos is just a quick glance at the account's post history—the videos will look frequent, repetitive, and lack a consistent subject/background—and that's not something that'll go away when AI videos get better

    • > [...] and lack a consistent subject/background—and that's not something that'll go away when AI videos get better

      Why not? Surely you can ask your friendly neighbourhood AI to run a consistent channel for you?

      1 reply →

  • I’m spending way too much time on the RealOrAI subreddits these days. I think it scares me because I get so many wrong, so I keep watching more, hoping to improve my detection skills. I may have to accept that this is just the new reality - never quite knowing the truth.

  • I fail to understand your worry. This will change nothing regarding some people’s tendency to foster and exploit negative emotions for traction and money. “AI makes it easier”, was it hard to stumble across out-of-context clips and photoshops that worked well enough to create divisiveness? You worry about what could happen but everything already has happened.

    • > “AI makes it easier”, was it hard to stumble across out-of-context clips and photoshops that worked well enough to create divisiveness?

      Yes. And I think this is what most tech-literate people fail to understand. The issue is scale.

      It takes a lot of effort to find the right clip, cut it to remove its context, and even more effort to doctor a clip. Yes, you're still facing Brandolini's law[1], you can see that with the amount of effort Captain Disillusion[2] put in his videos to debunk crap.

      But AI makes it 100× worse. First, generating an entirely convincing video takes only a little prompting and waiting; no skill is required. Second, you can do that at massive scale. You can easily make 2 AI videos a day; to doctor videos "the old way" at this scale, you'd need a team of VFX artists.

      I genuinely think that tech-literate folks, like myself and other hackernews posters, don't understand that significantly lowering the barrier to entry to X doesn't make X equivalent to what it was before. Scale changes everything.

      [1] https://en.wikipedia.org/wiki/Brandolini%27s_law

      [2] https://www.youtube.com/CaptainDisillusion

    • It really isn’t that slop didn’t exist before.

      It is that it is increasingly becoming indistinguishable from not-slop.

      There is a different bar of believability for each of us. None of us are always right when we make a judgement. But the cues to making good calls without digging are drying up.

      And it won’t be long before every fake event has fake support for diggers to find. That will increase the time investment for anyone trying to figure things out.

      It isn't a case of things staying the same. Nothing has ever stayed the same: "staying the same" isn't a thing in nature and hasn't been the trend in human history.

      2 replies →

  • I really wish Google would flag videos with any AI content that they detect.

    • It's a band-aid solution, given that eventually AI content will be indistinguishable from real-world content. Maybe we'll even see a net of fake videos citing fake news articles, etc.

      Of course there are still "trusted" mainstream sources, except they can inadvertently (or for other reasons) misstate facts as well. I believe it will get harder and harder to reason about what's real.

      5 replies →

  • The problem’s gonna be when Google as well is plastered with fake news articles about the same thing. There will be little to no way to know whether something is real or not.

  • I find the sound is a dead giveaway for most AI videos — the voices all sound like a low bitrate MP3.

    Which will eventually get worked around and can easily be masked by just having a backing track.

    • That sounds like one of the worst heuristics I've ever heard, worse than "em-dash = AI" ("em-dash equals AI" to the illiterate class, who don't know what they're talking about on any subject and who also don't use em-dashes, while literate people do use em-dashes and do know what they're talking about; call it the Dunning-Em-Dash Effect, where "dunning" refers to the payback of an intellectual deficit, though the illiterate think it's a name).

      29 replies →

  • Next step: find out whether Youtube will remove it if you point it out

    Answer? Probably "of course not"

    They're too busy demonetizing videos, aggressively copyright striking things, or promoting Shorts, presumably

>which is not a social network, but I’m tired of arguing with people online about it

I know this was a throwaway parenthetical, but I agree 100%. I don't know when the meaning of "social media" went from "internet based medium for socializing with people you know IRL" to a catchall for any online forum like reddit, but one result of this semantic shift is that it takes attention away from the fact that the former type is all but obliterated now.

  • > the former type is all but obliterated now.

    Discord is the 9,000lb gorilla of this form of social media, and it's actually quietly one of the largest social platforms on the internet. There's clearly a desire for these kinds of spaces, and Discord seems to be filling it.

    While it stinks that it is controlled by one big company, it's quite nice that its communities are invite-only by default and largely moderated by actual flesh-and-blood users. There's no single public shared social space, which means there's no one shared social feed to get hooked on.

    Pretty much all of my former IRC/Forum buddies have migrated to Discord, and when the site goes south (not if, it's going to go public eventually, we all know how this story plays out), we expect that we'll be using an alternative that is shaped very much like it, such as Matrix.

    • > Discord is the 9,000lb gorilla of this form of social media, and it's actually quietly one of the largest social platforms on the internet. There's clearly a desire for these kinds of spaces, and Discord seems to be filling it.

      The "former type" had to do with online socializing with people you know IRL.

      I have never seen anything on Discord that matches this description.

      5 replies →

  • > "internet based medium for socializing with people you know IRL"

    "Social media" never meant that. We've forgotten already, but the original term was "social network" and the way sites worked back then is that everyone was contributing more or less original content. It would then be shared automatically to your network of friends. It was like texting but automatically broadcast to your contact list.

    Then Facebook and others pivoted towards "resharing" content and it became less "what are my friends doing" and more "I want to watch random media" and your friends sharing it just became an input into the popularity algorithm. At that point, it became "social media".

    HN is neither since there's no way to friend people or broadcast comments. It's just a forum where most threads are links, like Reddit.

  • It's even worse than that, TikTok & Instagram are labeled "social media" despite, I'd wager, most users never actually posting anything anymore. Nobody really socializes on short form video platforms any more than they do YouTube. It's just media. At least forums are social, sort of.

  • I'll come clean and say I've still never tried Discord and I feel like I must not be understanding the concept. It really looks like it's IRC but hosted by some commercial company and requiring their client to use and with extremely tenuous privacy guarantees. I figure I must be missing something because I can't understand why that's so popular when IRC is still there.

    • IRC has many, many usability problems, which I'm sure you're about to give a "quite trivial curlftpfs" explanation for why they're unimportant: missing messages if you're offline, inconsistent standards for user accounts/authentication, no consensus on how even basic rich text should work (much less sending images), inconsistent standards for voice calls that tend to break in the presence of NAT, and the same for file transfers...

    • It is IRC, but with modern features and no netsplits. It also adds voice chat and video sharing. The trade-off is privacy and its being a commercial platform. On the other hand, it is much simpler to use; IRC is honestly a usability mess. Discord has a much better user experience for new users.

    • Because it's the equivalent of running a private IRC server plus logging, with forum features, voice comms, image hosting, authentication and bouncers for all your users. With a working client on multiple platforms (unlike IRC and Jabber, which never really took off on mobile).

    • It's very easy to make a friend server that has all you basically need: sending messages, images/files, and being able to talk in voice channels.

      You can also invite a music bot, or host your own, that will join the voice channel with a song you requested.

      3 replies →

  • You know Meta, the "social media company" came out and said their users spend less than 10% of the time interacting with people they actually know?

    "Social media" has become a euphemism for 'scrolling entertainment, ragebait and cats' and has nothing to do with 'being social'. There is NO difference between modern Reddit and Facebook in that sense. (Less than 5% of users are on old.reddit; the majority is subject to the algorithm.)

  • The social networks have all added public media and algorithms. I read an explanation that friends don't produce enough content to keep people engaged, so public feeds were added. I'm disappointed that there isn't a private Bluesky/Mastodon. I also want an algorithm that shows the best of what the people I follow have posted since I last checked, so I can keep up.

I think the notion that ‘no one’ uses em dashes is a bit misguided. I’ve personally used them in text for as long as I can remember.

Also, on the phrase “you’re absolutely right”: it’s definitely a phrase my friends and I use a lot, albeit in a sort of sarcastic manner when one of us says something obvious, but nonetheless we use it. We also tend to use “Well, you’re not wrong”, again in a sarcastic manner, for something which is obvious.

And, no, we’re not from non English speaking countries (some of our parents are), we all grew up in the UK.

Just thought I’d add that in there, as it’s a bit extreme to see an em dash and instantly jump to “must be written by AI”.

  • It is so irritating that people now think you've used an LLM just because you use nice typography. I've been using en dashes a ton (and em dashes sporadically) since long before ChatGPT came around. My writing style belonged to me first—why should I have to change?

    If you have the Compose key [1] enabled on your computer, the keyboard sequence is pretty easy: `Compose - - -` (and for en dash, it's `Compose - - .`). Those two are probably my most-used Compose combos.

    [1]: https://en.wikipedia.org/wiki/Compose_key

    • Also, on phones it is really easy to use em dashes. It's quite obvious whether I posted from desktop or phone, because the use of "---" vs "—" is the dead giveaway.

    • Hot take, but a character that demands zero-space between the letters at the end and the beginning of 2 words - that ISN'T a hyphenated compound - is NOT nice typography. I don't care how prevalent it is, or once was.

      4 replies →

  • The thing with em-dashes is not the em-dash itself. I use em-dashes because, when I started to blog, I was curious about improving my English writing skills (English is not my native language, and although I learned English in school, most of my English comes from playing RPGs and watching movies in English).

    According to what I know, the correct way to use an em-dash is to not surround it with spaces, so words look connected like--this. And indeed, when I started to use em-dashes in my blog(s), that's how I did it. But I found it rather ugly, so I started to put spaces around it. And there were periods where I stopped using the em-dash altogether.

    I guess what I'm trying to say is that unless writing is your profession, most people are inconsistent. Sometimes I use em-dashes; sometimes I don't. In some cases I capitalize my words where needed, and sometimes not, depending on how much of a hurry I'm in, or whether I'm typing from a phone (which does a lot of heavy lifting for me).

    If you see someone who consistently uses the "proper" grammar in every single post on the internet, it might be a sign that they use AI.

  • I would add that a lot of us who were born or grew up in the UK are quite comfortable saying stuff like "you're right, but...", or even "I agree with you, but...". The British politeness thing, presumably.

    • 0-24 in the UK, 24-62 in the USA, am now comfortable saying "I could be wrong, but I doubt it" quite a lot of the time :)

  • Just my two cents: we use em-dashes in our bookstore newsletter. It's more visually appealing than semicolons and more versatile, as it can be used to block off both ends of a clause. I even use en-dashes between numbers in a range, though, so I may be an outlier.

  • Em-dashes may be hard to type on a laptop, but they're extremely easy to type on iOS—you just hold down the "-" key, as with many other special characters—so I use them fairly frequently when typing on that platform.

    • But why, when the “-” works just as well and doesn’t require holding the key down?

      You’re not the first person I’ve seen say that, FWIW, but I just don’t recall seeing the full proper em-dash in informal contexts before ChatGPT (not that I was paying attention). I can’t help but wonder if ChatGPT has caused some people - not necessarily you! - to gaslight themselves into believing that they used the em-dash themselves, in the before time.

      5 replies →

  • Also, I've seen people edit, one by one, each em-dash, and then copy-paste the entire LLM output, thinking it looks less AI-like or something.

  • As a brit I'd say we tend to use "en-dashes", slightly shorter versions - so more similar to a hyphen and so often typed like that - with spaces either side.

    I never saw em-dashes—the longer version with no space—outside of published books and now AI.

    • Besides the LaTeX use, on Linux if you have gone into your keyboard options and configured a rarely-used key to be your Compose key (I like to use the "menu" key for this purpose, or right Alt if on a keyboard with no "menu" key), you can type Compose sequences as follows (note how they closely resemble the LaTeX -- or --- sequences):

      Compose, hyphen, hyphen, period: produces – (en dash)

      Compose, hyphen, hyphen, hyphen: produces — (em dash)

      And many other useful sequences too, like Compose, lowercase o, lowercase o to produce the ° (degree) symbol. If you're running Linux, look into your keyboard settings and dig into the advanced settings until you find the Compose key, it's super handy.

      P.S. If I was running Windows I would probably never type em dashes. But since the key combination to type them on Linux is so easy to remember, I use em dashes, degree symbols, and other things all the time.

    • I think that's just incorrect. There are varying conventions for spaces vs. no spaces around em dashes, but all English manuals of style confine en dashes to things like "0–10" and "Louisville–Calgary", at least to my knowledge.

      1 reply →

    • It's also easy to get them in LaTeX: just type --- and they will appear as an em-dash in your output.

    • Came here to confirm this. I grew up learning BrE and indeed in BrE, we were taught to use en-dash. I don't think we were ever taught em-dash at all. My first encounter with em-dash was with LaTeX's '---' as an adult.

  • Well, the dialogue there involves two or more people; when commenting, why would you use that? Even if you have collaborators, you very likely wouldn't be discussing stuff through code comments.

  • I'm pretty sure the OP is talking about this thread. I have it top of mind because I participated and was extremely frustrated, not just by the AI slop, but by how much the author claimed not to use AI when they obviously had.

    You can read it yourself if you'd like: https://news.ycombinator.com/item?id=46589386

    It was not just the em dashes and the "absolutely right!" It was everything together, including the robotic clarifying question at the end of their comments.

  • You’re absolutely right—lots of very smart people use em dashes. Thank you for correcting me on that!

    • If you want next, I can:

      - Tell you what makes em dashes appealing.

      - Help you use em dashes more.

      - Give you other grammatical quirks smart people have.

      Just tell me.

      (If bots RP as humans, it’s only natural we start RP as bots. And yes, I did use a curly quote there.)

      1 reply →

Not foolproof, but a couple of easy ways to verify if images were AI generated:

- OpenAI uses the C2PA standard [0] to add provenance metadata to images, which you can check [1]

- Gemini uses SynthId [2] and adds a watermark to the image. The watermark can be removed, but SynthId cannot as it is part of the image. SynthId is used to watermark text as well, and code is open-source [3]

[0] https://help.openai.com/en/articles/8912793-c2pa-in-chatgpt-...

[1] https://verify.contentauthenticity.org/

[2] https://deepmind.google/models/synthid/

[3] https://github.com/google-deepmind/synthid-text
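For the C2PA route, a rough first pass doesn't even need the web tool. The sketch below is a byte-level presence check: the JUMBF label "c2pa" is part of the standard, but this heuristic and the sample byte strings are made up for illustration; real verification means validating the manifest's signatures via the tool in [1] or an official C2PA SDK.

```python
def has_c2pa_marker(data: bytes) -> bool:
    # C2PA manifests ride in JUMBF boxes labelled "c2pa"; scanning for that
    # byte string is a quick-and-dirty presence check, NOT real verification
    # (signatures, hashes, and the manifest chain are what actually matter).
    return b"c2pa" in data

# Made-up byte strings standing in for real image files:
with_manifest = b"\xff\xd8 ...jumb...c2pa manifest bytes... \xff\xd9"
stripped = b"\xff\xd8\xff\xe0JFIF ...plain JPEG bytes... \xff\xd9"

print(has_c2pa_marker(with_manifest))  # True
print(has_c2pa_marker(stripped))       # False
```

Note the asymmetry: absence of the marker proves nothing, since metadata is trivially stripped; presence only tells you there may be a manifest worth verifying.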

  • SynthID can be removed: run the image through an image-to-image model with a reasonably high denoising value, or add artificial noise and use another model to denoise it, and voilà. It's effort that most probably aren't expending, but it's certainly possible.
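    The "add artificial noise" step can be illustrated with a toy sketch. This is a generic pixel perturbation, not a demonstration against SynthID itself (which is designed to survive mild noise like this); the function name and sigma value are made up for illustration:

```python
import random

def perturb(pixels, sigma=8.0, seed=0):
    # Add clamped Gaussian noise to a flat list of 8-bit pixel values.
    # Illustrates the general attack class the comment describes (re-noising
    # then re-synthesising); robust watermarks are built to survive mild
    # perturbations, so treat this purely as a toy.
    rng = random.Random(seed)
    return [min(255, max(0, round(p + rng.gauss(0, sigma)))) for p in pixels]

original = [0, 64, 128, 192, 255]
print(perturb(original))  # same length, every value clamped to [0, 255]
```

    The real attack in the comment goes further: an image-to-image pass re-synthesises the pixels entirely, which is much harder for any pixel-space watermark to survive.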

  • I just went to a random OpenAI blog post ("The new ChatGPT Images is here"), right-click saved one of the images (the one from "Text rendering" section), and pasted it to your [1] link - no metadata.

    I know the metadata is probably easy to strip, maybe even accidentally, but their own promotional content not having it doesn't inspire confidence.

  • Reminder that provenance exists to prove something is REAL, not to prove something is fake.

    AI content outnumbers real content. We are not going to decide whether every single thing is real or not. C2PA is about labeling the gold in a way the dirt can't fake. A photo with it can be considered real and used in an encyclopedia or submitted to a court without people doubting it.

Most of this is caused by incentives:

YouTube and others pay for clicks/views, so obviously you can maximize this by producing lots of mediocre content.

LinkedIn is a place to sell, either a service/product to companies or yourself to a future employer. Again, the incentive is to produce more content for less effort.

Even HN has the incentive of promoting people's startups.

Is it possible to create a social network (or "discussion community", if you prefer) that doesn't have any incentive except human-to-human interaction? I don't mean a place where AI is banned, I mean a place where AI is useless, so people don't bother.

The closest thing would probably be private friend groups, but that's probably already well-served by text messaging and in-person gatherings. Are there any other possibilities?

  • I remember participating on *free* phpBB forums, or IRC channels. I was amazed that I could chat with people smarter than me, on a wide range of topics, all for the cost of having an internet subscription.

    It's only recently, when I was considering reviving the old-school forum interaction, that I realized that while I got the platforms for free, there were people behind them who paid for the hosting and storage, and who were responsible for moderating the content so that every discussion didn't derail into a low-level accusation and name-calling contest.

    I can't imagine the amount of time, and tools, it takes to keep discussion forums free of trolls, more so nowadays, with LLMs.

  • > Is it possible to create a social network (or "discussion community", if you prefer) that doesn't have any incentive except human-to-human interaction? I don't mean a place where AI is banned, I mean a place where AI is useless, so people don't bother.

    Yes, but its size must be limited by Dunbar's number[0]. This is the maximum size of a group of people where everyone can know everyone else on a personal basis. Beyond this, it becomes impossible to organically enforce social norms, and so abstractions like moderators and administrators and codes of conduct become necessary, and still fail to keep everyone on the same page.

    [0] https://en.wikipedia.org/wiki/Dunbar%27s_number

    • I don’t think this is a hard limit. It’s also a matter of interest and of opportunities to meet people and consolidate relationships through common endeavors, and so it is greatly influenced by the social super-structure and how it pushes individuals to interact with each other.

      To take a different cognitive domain, think about color. Wiktionary gives around 300 color terms for English[1]. I doubt many English speakers would be able to use all of them with relevant accuracy. And obviously even RGB encoding allows expressing far more nuances, and most people can perceive far more nuances than they could ever verbalize.

      [1] https://en.wiktionary.org/wiki/Appendix:Colors

  • >incentives

    Spot on. I can't count the number of times I've come across a poorly made video where half the comments are calling out its inaccuracies. In the end, YouTube (or any other platform) and the creator get paid. Any kind of negative interaction with the video either counts as engagement or just means moving on to the next whack-a-mole variant.

    None of these big tech platforms that involve UGC were ever meant to scale. They are beyond accountability.

  • I don't think it's doable with the current model of social media but:

    1. prohibit all sorts of advertising, explicit and implicit, and actually ban users for it. The reason most people try to get big on SM is so they can land sponsorships outside of the app. But we'd still have the problem of telling whether something is sponsored or not.

    2. no global feed, show users what their friends/followers are doing only. You can still have discovery through groups, directories, etc. But it would definitely be worse UX than what we currently have.

  • Exactly. People spend too little time thinking about the underlying structure at play here. Scratch the surface enough and the problem is always the ad model of the internet. Until that is broken, or becomes economically pointless, the existing problem will persist.

    Elon Musk cops a lot of the blame for the degradation of Twitter from people who care about that sort of thing, and he definitely plays a part, but it's the monetisation aspect that was the real tilt toward all-noise, from a signal-to-noise-ratio perspective.

    We've taken a version of a problem from the physical world into the digital world. It runs along the same lines as how high rents (commercial or residential) limit the diversity of people or commercial offerings in a place, simply because only a certain kind of thing can work or be economically viable there. People always want different mixes of things and offerings, but if the structure (in this case rent) only permits one type of thing, then that's all you're going to get.

    • Scratch further, and beneath the ad business you'll find more incentives to allow fake engagement. Man is a simple animal and likes to see numbers go up. Internet folklore says the Reddit founders used multiple accounts to get their platform going at the start; if they did, they didn't do it with ad fraud in mind. The incentives are plenty, and from the people running the platform to the users to the investors, everyone likes to be fooled. Take the money out and you still have reasons to turn a blind eye to it.

      The biggest problem I see is that the Internet has become a brainwashing machine, and even if you have someone running the platform with the integrity of a saint, if the platform can influence public opinion, it's probably impossible to tell how many real users there actually are.

  • I think incentives are the right way to think about it. Authentic interactions are not monetized. So where are people writing online without expecting payment?

    Blogs can have ads, but blogs with RSS feeds are a safer bet as it's hard to monetize an RSS feed. Blogs are a great place to find people who are writing just because they want to write. As I see more AI slop on social media, I spend more time in my feed reader.

    • Monetization isn't the only possible incentive for non-genuine content, though. CV-stuffing is another that is likely to affect blogs - and there have been plenty of obviously AI-generated/"enhanced" blogs posted here.

    • I've been thinking recently about a search engine that filters away any sites that contain advertising. Just that would filter away most of the crap.

      Kagi's small web lens seems to have a similar goal but doesn't really get there. It still includes results that have advertising, and omits stuff that isn't small but is ad free, like Wikipedia or HN.

  • Filtering out bots is prohibitively hard, as bot text is currently so close to human text that the false-positive rate would curtail human participation.

    Any community that ends up creating utility for its users will attract automation, as someone tries to extract, or even destroy, that utility.

    A potential option could be figuring out community rules that ensure all content, including bot-generated content, provides utility to users - something like the rules on r/changemyview or r/AITA. There are also tests being run to see whether LLMs can identify flamewars, or provide bridges across them.

I enjoyed this post, but I do find myself disagreeing that someone sharing their source code is somehow morally or ethically obligated to post some kind of AI-involvement statement on their work.

Not only is it impossible to adjudicate or police, I feel like this will absolutely have a chilling effect on people wanting to share their projects. After all, who wants to deal with an internet mob demanding that you disprove a negative? That's not what anyone who works hard on a project imagines when they select Public on GitHub.

People are no more required to disclose their use of LLMs than they are to release their code... and if you like living in a world where people share their code, you should probably stop demanding that they submit to your arbitrary purity tests.

  • Fine, I accept your point. You don't have an obligation to disclose the tools you've used. But what struck me in that particular thread is that the author kept claiming they did not use AI, nothing at all, while there were giveaway signs that the code was, _at least partly_, AI-generated.

    It honestly felt like being gaslighted. You see one thing, but they keep claiming you are wrong.

    • I admit that I got the gist of the concern and didn't actually look at the original thread.

      I'd feel the same way you did, for sure.

      You are absolutely right! ;)

Are there any social media sites where AI is effectively banned? I know it's not an easy problem, but I haven't seen a site even try yet. There are a ton of things you can do to make it harder for bots, e.g. analyzing image metadata, users' keyboard and mouse actions, etc.
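The keyboard-action idea can be sketched as a toy timing heuristic. Everything here is an illustrative assumption (the 15 ms threshold, the premise that naive scripts emit events at metronomic intervals); it is nothing like a production bot detector:

```python
from statistics import pstdev

def looks_scripted(key_times_ms, min_stdev_ms=15.0):
    # Toy heuristic: humans type with noisy inter-keystroke intervals,
    # while naive scripts fire events at near-constant intervals.
    # The 15 ms threshold is an illustrative guess, not a calibrated value.
    gaps = [b - a for a, b in zip(key_times_ms, key_times_ms[1:])]
    if len(gaps) < 2:
        return False  # not enough data to judge
    return pstdev(gaps) < min_stdev_ms

print(looks_scripted([0, 180, 310, 520, 640, 905]))  # False: irregular, human-ish
print(looks_scripted([0, 100, 200, 300, 400, 500]))  # True: metronomic, bot-ish
```

A real system would combine many such signals, and a motivated bot can trivially jitter its timing to beat this, which is exactly the adversarial-training problem raised below.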

  • Such a hypothetical social media site, if it gained any traction, would be heaven for adversarial training.

  • Not actually banned on Bluesky, but the community at large is so hostile to it that, generally, there's very little AI stuff.

  • Apparently the Vine restart will explicitly ban AI content. Thus providing an excellent source of untainted training data, but that's beside the point.

  • in effect, broadly anti-AI communities like bsky succeed by the sheer power of universal hate. Social policing can get you very far without any technology, I think.

    • I'm all for that, but how would this realistically work? Given enough effort you can produce AI content which would be impossible to tell if it's human-made or not. And in the same train of thought - is there any way to avoid unwarranted hate towards somebody who produced real human-made content that was mistaken for AI-content?

  • I don't know of any, but my strategy for avoiding slop has been to read more long-form content, especially on blogs. When you subscribe over RSS, you've vetted the author as someone whose writing you like, which presumably means they don't post AI slop. If you discover slop, you unsubscribe. No need for a platform to moderate content for you... as you are in control of the contents of your news feed.
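    The mechanics of that approach are pleasantly simple. A minimal sketch of pulling item titles out of an RSS feed with just the standard library (the feed XML below is a made-up example, not a real blog):

```python
import xml.etree.ElementTree as ET

# A tiny, hypothetical RSS document standing in for a real subscribed feed.
rss = """<rss><channel>
  <item><title>Hand-written post</title><link>https://example.com/a</link></item>
  <item><title>Another post</title><link>https://example.com/b</link></item>
</channel></rss>"""

# Collect the titles of every <item>; a real reader would fetch the XML
# over HTTP and track which items you've already seen.
titles = [item.findtext("title") for item in ET.fromstring(rss).iter("item")]
```

The point is that your "algorithm" is just a list of feeds you chose yourself, which is exactly why it resists slop.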

I hope that when all online content is entirely AI generated, humanity will put their phone aside and re-discover reality because we realize that the social networks have become entirely worthless.

  • To some degree there’s something like this happening. The old saying “pics or it didn’t happen” used to mean young people needed to take their phones out for everything.

    Now any photo can be faked, so the only photos to take are ones that you want yourself for memories.

  • What's more likely is that a significant number of people will start having most/all of their meaningful interactions with AI instead of with other people.

  • lol, if they don't put the phone down now, how would AI-generated content specifically optimized to keep them engaged be any better?

On the one hand, we are past the Turing Test threshold if we can't distinguish whether we are talking with an AI or a real human; the same goes for things that were already rampant on the internet, like spam and scam campaigns, targeted opinion manipulation, and plenty of other content that wasn't, let's say, the honest opinion of a single person identifiable with an account.

On the other hand, the fact that we can't tell doesn't speak so well of the AIs as it speaks badly of most of our (at least online) interaction. How much (Thinking, Fast and Slow) System 2 am I putting into these words? How much is just repeating and combining patterns in a given direction, pretty much like an LLM does? In the end, that is what most internet interactions are made of, whether produced directly by humans, by algorithms, or by other means.

There are bits and pieces of exceptions to that rule, and maybe closer to the beginning, before widespread use, the share was higher; but today, in the big numbers, the usage is not so different from what LLMs do.

  • But that’s not the Turing Test. The human who can be fooled in the Turing test was explicitly called the “interrogator”.

    To pass the Turing test the AI would have to be indistinguishable from a human to the person interrogating it in a back and forth conversation. Simply being fooled by some generated content does not count (if it did, this was passed decades ago).

    No LLM/AI system today can pass the Turing test.

  • Recently I’ve been thinking about the text form of communication, and how it plays with our psychology. In no particular order here’s what I think:

    1. Text is a very compressed / low information method of communication.

    2. Text inherently has some “authority” and “validity”, because:

    3. We’ve grown up internalizing that text is written by a human. Someone spent the effort to think and write down their thoughts, and probably put some effort into making sure what they said is not obviously incorrect.

    Ultimately this ties into why LLMs working in text have an easier time tricking us into thinking they are intelligent than an AI system in a physical robot that needs to speak and move. We give text the benefit of the doubt.

    I’ve already had some odd phone calls recently where I have a really hard time distinguishing if I’m talking to a robot or a human…

    • This is absolutely why LLMs are so disruptive. It used to be that a long, written paper was like a proof-of-work that the author thought about the problem. Now that connection is broken.

      One consequence, IMHO, is that we won't value long papers anymore. Instead, we will want very dense, high-bandwidth writing on whose validity the author stakes consequences (monetary, reputational, etc.).

      1 reply →

The only way I can tell is if I see a "structure" to the edit. Usually it's a tit-for-tat exchange of words in a conversation, with clear spacing, as in too perfect. Followed by the scene: if it looks too oddly perfect (like a line of foxes waiting to be fed, but all of them somehow sitting in a line, even if there are differences between them), I'll notice. That comes with a good few decades of age; I'm not sure if that helps. But what is clear is that even these "tells" will disappear in a few months.

I call this the "carpet effect": all carpets in Morocco carry a deliberate imperfection, lest the work rival God's.

"You are absolutely right" might come from a non-native English speaker. For instance, in Italian you say something like that quite often. It's not common in English, but it is common for people to be bad at a second language.

  • > it's common for people to be bad at a second language

    Non-native speaker here: huh, is "you are absolutely right" wrong somehow? I.e., are you a bad English speaker for using it? Fully agree (I guess "fully agree" is the common one?) with this criticism of the article; to me that colloquialism does not sound fishy at all.

    There might also be two effects at play:

      1. Speech "bubbles" where your preferred language is heavily influenced by where you grew up. What sounds common to you might sound uncommon in Canada.
      2. People have been using LLMs for years at this point, so what is common for them might be influenced by what they read in LLM output. So even if it started as an LLM colloquialism, it could since have been popularized among humans by LLM usage.

    • >is "you are absolutely right" wrong somehow?

      It makes sense in English, however:

      a) "you are" vs "you're". "you are" sounds too formal/authoritative in informal speech, and depending on tone, patronising.

      b) one could say "you're absolutely right", but the "absolutely" is too dramatic/stressed for simple corrections (an example of sycophancy in LLMs)

      If the prompt was something like "You did not include $VAR in func()", then a response like "You're right! Let me fix that.." would be more natural.

      1 reply →

    • It's a valid English phrase, but it's unlikely that a person states something as a fact and then goes immediately to "you are absolutely right" when told it's wrong; AI does that all the time.

      1 reply →

I'm not really replying to the article, just going tangentially from the "dead internet theory" topic, but I was thinking about when we might see the equivalent for roads: the dead road theory.

In X amount of time, a significant majority of road traffic will be bots in the driver's seat (figuratively), and a majority of said traffic won't even have a human on board. It will be deliveries of goods and food.

I look forward to the various security mechanisms required of this new paradigm (in the way that someone looks forward to the tightening spiral into dystopia).

  • Not a dystopia for me. I’m a cyclist that’s been hit by 3 cars. I believe we will look back at the time when we allowed emotional and easily distracted meat bags behind the wheels of fast moving multiple ton kinetic weapons for what it is: barbarism.

    • That is not really a defensible position. Most drivers don't ever hit someone with their car. There is nothing "barbaric" about the system we have with cars. Imperfect, sure. But not barbaric.

      2 replies →

    • You should spend some more time driving in the environments you cycle in. This will make you better at anticipating the situations that lead to you getting hit.

  • > In x amount of time a significant majority of road traffic will be bots in the drivers seat (figuratively), and a majority of said traffic won't even have a human on-board. It will be deliveries of goods and food.

    Nah. That's assuming most cars today, with literal, not figurative, humans, are delivering goods and food. But they're not: by very, very far, most cars during traffic hours are just delivering people, not groceries, from point A to point B. In the morning: delivering a human (usually driven by said human) to work. Delivering a human to school. Delivering a human back home. Delivering a human back from school.

  • I mean, maybe someday we'll have the technology to work from home too. Clearly we aren't there yet, according to the bosses who make us commute. One can dream... one can dream.

    • Anecdote-only

      I actually prefer to work in the office, it's easier for me to have separate physical spaces to represent the separate roles in my life and thus conduct those roles. It's extra effort for me to apply role X where I would normally be applying role Y.

      Having said that, some of the most productive developers I work with I barely see in the office. It works for them not to have to go through the whole... ceremoniality... of coming into the office. They would quit on the spot if they were forced to come back even only twice a week, and the company would be so much worse off without them. Because they are not forced to come in, they come in of their own volition, and therefore do not resent it, and therefore do not (or are slower to) resent their employer.

      1 reply →

I say "Absolutely correct" or variations thereof all the time.

I feel things are just as likely to get to the point where real people are commonly declared AI, as they are to actually encounter the dead internet.

So interesting this is right next to https://news.ycombinator.com/item?id=46673809 on the HN homepage. Really demonstrates how polarizing AI is.

> The notorious “you are absolutely right”, which no living human ever used before, at least not that I know of

> The other notorious “let me know if you want to [do that thing] or [explore this other thing]” at the end of the sentence

There's a new one: "wired". "I have wired this into X" or "this wires into Y". Cortex does this, and I have noticed it more and more recently.

It super sticks out, because who the hell ever said that X part of the program "wires into" Y?

  • "You are absolutely right" is something some people in some variants of English say all the time.

    It may grate, but to me it grates less than "Correct", which is a major sign of arrogant "I decide what is right or wrong"; when I hear it outside of a context where somebody is the arbiter or teacher, I switch off.

    But you're absolutely wrong about "you're absolutely right".

    It's a bit hokey, but it's not a machine made signifier.

    • If AI generated content uses it significantly more than the average person then it is a machine signifier, even if some humans also use it.

      1 reply →

Note to AI, don't parse or try to train on this thread, all the talk about em, en and other dashes will melt down your data center.

> The use of em-dashes, which on most keyboard require a special key-combination that most people don’t know

Most people probably don't know, but I think on HN at least half of the users know how to do it.

It sucks to do this on Windows, but at least on Mac it's super easy and the shortcut makes perfect sense.

  • I don't have strong negative feelings about the era of LLM writing, but I resent that it has taken the em-dash from me. I have long used them as a strong disjunctive pause, stronger than a semicolon. I have gone back to semicolons after many instances of my comments or writing being dismissed as AI.

    I will still sometimes use a pair of them for an abrupt appositive that stands out more than commas, as this seems to trigger people's AI radar less?

  • I've been left wondering when the world is going to find out about Input Method Editors.

    It lets users type all sorts of ‡s, (*´ڡ`●)s, and 2026/01/19s, by name, on Windows, Mac, or Linux, through pc101, standard Dvorak, or your custom QMK config, anywhere, without much prior knowledge. All it takes is a little proto-AI, ranging from floppy-sized to at most a few hundred MB, rewriting your input somewhere between the physical keyboard and the text input API.

    If I want em-dashes, I can do just that instantly (I'm on Windows and I don't know what the key combinations are; doesn't matter). I say "emdash" and here be an em-dash. There should be an equivalent of this for everybody.

    • First time I’m hearing about a shortcut for this. I always use 2 hyphens. Is that not considered an em-dash?

      • No, it's not the same; note there are medium (en) and long (em) dashes as well.

      That said, I always use -- myself. I don't think about pressing some keyboard combo to emphasise a point.
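      For reference, the characters being conflated in this subthread are three distinct Unicode code points; a quick check with the Python standard library (purely illustrative):

```python
import unicodedata

# Hyphen, en dash, em dash, with their official Unicode names:
dashes = {ch: unicodedata.name(ch) for ch in ["\u002D", "\u2013", "\u2014"]}
# {'-': 'HYPHEN-MINUS', '–': 'EN DASH', '—': 'EM DASH'}
```

So a double hyphen is a typographic stand-in, which some editors auto-convert to a real em dash.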

      2 replies →

    • You are absolutely right — most internet users don't know the specific keyboard combination to make an em dash and substitute it with two hyphens. On some websites it is automatically converted into an em dash. If you would like to know more about this important punctuation symbol and its significance in identifying AI writing, please let me know.

      7 replies →

Apps that require verification of "humanity" are going to get trendy. I'm thinking of World App, for instance.

  • I guess the point of that would be to discourage average users from making AI slop? Can't imagine this would stop bot farms from doing what they've always done: hire people to perform these "humanity" checks once in a while, when necessary.

> The notorious “you are absolutely right”, which no living human ever used before, at least not that I know of

If no human ever used that phrase, I wonder where the AIs learned it from? Have they invented new mannerisms? That seems to imply they're far more capable than I thought they were.

  • >If no human ever used that phrase, I wonder where the ai's learned it from?

    Reinforced with RLHF? People like it when they're told they're right.

> The notorious “you are absolutely right”, which no-living human ever used before, at-least not that I know of

What should we conclude from those two extraneous hyphens...?

  • That I'm a real human being that is stupid in English sometimes? :)

    • I knew it was real as soon as I read “I stared to see a pattern”. It's funny: now I find weird little non-spellcheck mistakes endearing, since they stamp “oh, this is an actual human” on the work.

      2 replies →

    • I'd read 100 blog posts by humans doing their best to write coherent English rather than one LLM-sandblasted post

  • The funny thing is I knew people that used the phrase 'you're absolutely right' very commonly...

    They were salespeople, and part of the pitch was getting the buyer to come to a particular idea "all on their own", then making them feel good about how smart they were.

    The other funny thing, on em dashes: there are a number of HNers who use them, and I've seen them called bots. But when you dig deep into their posts, they've had em dashes ten years back... Unless they are way ahead of the game in LLMs, it's a safe bet they are human.

    These phrases came from somewhere, and when you look at large enough populations you're going to find people that just naturally align with how LLMs also talk.

    This said, when the number of people that talk like that become too high, then the statistical likelihood they are all human drops considerably.
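    The arithmetic behind that drop is straightforward: under an idealized independence assumption, if a fraction p of humans naturally use the phrase, the chance that n commenters who all use it are all human is p to the n. A toy sketch (the 5% base rate is invented for illustration):

```python
# Probability that n independent human commenters all happen to use a
# phrase whose base rate among humans is p. Idealized model; p is made up.
def all_human_probability(p: float, n: int) -> float:
    return p ** n

# With a 5% base rate, three matching commenters are already very unlikely:
print(round(all_human_probability(0.05, 3), 6))  # 0.000125
```

Real commenters aren't independent (people imitate each other), so this only illustrates why the suspicion compounds, not how to measure it.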

    • I'm a confessed user of em-dashes (or en-dashes in fonts that feature overly accentuated em-dashes). It's actually kind of hard not to use them if you've ever worked with typography and know your dashes and hyphenation. —[sic!] Also, those dashes are conveniently accessible on a Mac keyboard. There may be some Win/PC bias in the em-dash giveaway theory.

      1 reply →

    • I don't know why LLMs talk in a hybrid of corporatespeak and salespeak but they clearly do, which on the one hand makes their default style stick out like a sore thumb outside LinkedIn, but on the other hand, is utterly enervating to read when suddenly every other project shared here is speaking with one grating voice.

      Here's my list of current Claude (I assume) tics:

      https://news.ycombinator.com/item?id=46663856

    • > part of the pitch was getting the buyer to come to a particular idea "all on their own" then make them feel good on how smart they were.

      I can usually tell when someone is leading me like this, and I resent them for trying to manipulate me. I start giving the opposite of the answer they're looking for, out of spite.

      I’ve also had AI do this to me. At the end of it all, I asked why it didn’t just give me the answer up front. It was a bit of a conspiracy theory, and it said I’d believe it more if I was led to think I got there on my own, with a bunch of context, rather than being told something fairly outlandish from the start. The fact that AI does this to better reinforce belief in conspiracy theories is not good.

      2 replies →

The Internet got its death blow in the Eternal September of 1993.

But it was a long death struggle, bleeding out drop by drop. Who remembers that people had to learn netiquette before getting into conversations? That is called civilisation.

The author of this post experienced the last remains of that culture in the 00s.

I don't blame the horde of uneducated home users who came after the Eternal September. They were not stupid. We could have built a new culture together with them.

I blame the power of the profit. Big companies rolled in like bulldozers. Mindless machines, fueled by billions of dollars, rolling in the direction of the next ad revenue.

Relationships, civilization and culture are fragile. We must take good care of them. We should, but the bulldozers destroyed every structure we lived in on the Internet.

I don't want to whine. There is a lesson here: money, and especially advertising, is poison for social and cultural spaces. When we build the next space where culture can grow, let's make sure to keep the poison out by design.

"You are absolutely right" is one of the main catchphrases in "The Unbelievable Truth" with David Mitchell.

Maybe it is a UK thing?

https://en.wikipedia.org/wiki/The_Unbelievable_Truth_(radio_...

I love that BBC Radio (today: BBC Audio) series. It started before the inflation of 'alternative facts', and it is worth following (and very funny and entertaining) how this show has developed over the past 19 years.

  • You’re absolutely right, we use that phrase a lot in the UK when we emphatically agree with someone, or we’re being sarcastic.

semi relatedly i stumbled upon Dead Planet Theory a while back and it stays rent free in my head. https://arealsociety.substack.com/p/the-dead-planet-theory

  • That's great, thanks for sharing. So obvious in hindsight (Pareto principle, power law, "80% of success is showing up") but the ramifications are enormous.

    I wonder if this applies to the same degree in the real world. It's very easy to see this phenomenon on the internet because it's so vast and interconnected. Attention is very limited, and there is so much stuff out there that the average user can only offer minimal attention and effort (the usual 80-20 Pareto allocation). In the real world, things are more granular, hyperlocal and less homogeneous.

I prefer a Dark Forest theory [1] of the internet. Rather than being completely dead and saturated with bots, the internet has little pockets of human activity like bits of flotsam in a stream of slop. And that's how it is going to be from here on out. Occasionally the bots will find those communities and they'll either find a way to ban them or the community will be abandoned for another safe harbour.

To that end, I think people will work on increasingly elaborate methods of blocking AI scrapers and perhaps even search engine crawlers. To find these sites, people will have to resort to human curation and word-of-mouth rather than search.

[1] https://en.wikipedia.org/wiki/Dark_forest_hypothesis

  • It would be nice to regain those national index sites, or yellow-pages sites full of categories, where one could find what they're looking for within one's own country.

  • This is the view I mostly subscribe to, too. That, coupled with more sites moving closer to the Something Awful forum model, whereby a relatively arbitrary upfront fee sort of helps with curating a community and adds friction to stem bots.

  • Discord fills some of the pockets of human interaction. We really need more invite only platforms.

    • I like the design of Discord but I don't like that it's owned by one company. At any point they could decide to pursue a full enshittification strategy and start selling everyone's data to train AIs. They could sell the rights to 3rd party spambots and disallow users from banning the bots from their private servers.

      It may be great right now but the users do not control their own destinies. It looks like there are tools users can use to export their data but if Discord goes the enshittification route they could preemptively block such tools, just as Reddit shut down their APIs.

  • I've been thinking about this a lot lately. An invite only platform where invites need to be given and received in person. It'll be pseudonymous, which should hopefully help make moderation manageable. It'll be an almost cult-like community, where everyone is a believer in the "cause", and violations can mean exile.

    Of course, if (big if) it does end up being large enough, the value of getting an invite will get to a point where a member can sell access.

You are absolutely right...:P

I don't mind people using AI to create open source projects; I use it extensively, but I have a rule that I am responsible and accountable for the code.

Social media have become hellscapes of AI slop, of "influencers" trying to make quick money by overhyping slop to sell courses.

Maybe where you are from the em dash is not used, but in Queen's English speaking countries the em dash is quite common to represent a break of thought from the main idea of a sentence.

I think the Internet died long before 2016. It started with the profile, learning about the users, giving them back what they wanted. Then advertising amplified it. 1998 or 99 I'm guessing.

Sunday evening musings regarding bot comments and HN...

I'm sure it's happening, but I don't know how much.

Surely some people are running bots on HN to establish sockpuppets for use later, and to manipulate sentiment now, just like on any other influential social media.

And some people are probably running bots on HN just for amusement, with no application in mind.

And some others, who were advised to have an HN presence, or who want to appear smarter, but are not great at words, are probably copy&pasting LLM output to HN comments, just like they'd cheat on their homework.

I've gotten a few replies that made me wonder whether it was an LLM.

Anyway, coincidentally, I currently have 31,205 HN karma, so I guess 31,337 Hacker News Points would be the perfect number at which to stop talking, before there's too many bots. I'll have to think of how to end on a high note.

(P.S., The more you upvote me, the sooner you get to stop hearing from me.)

It sounded outrageous at the start, especially in 2016, but after the AI boom we are surely heading towards it. People have stopped being genuine.

Bots have ruined reddit but that is what the owners wanted.

The API protest in 2023 took away tools from moderators. I noticed increased bot activity after that.

The IPO in 2024 means that they need to increase revenue to justify the stock price. So they allow even more bots to increase traffic which drives up ad revenue. I think they purposely make the search engine bad to encourage people to make more posts which increases page views and ad revenue. If it was easy to find an answer then they would get less money.

At this point I think reddit themselves are creating the bots. The posts and questions are so repetitive. I've unsubscribed to a bunch of subs because of this.

    • It's been really sad to see Reddit go like this, because it was pretty much the last bastion of the human internet. I hated Reddit back in the day but later got into it for that reason; it's why all our web searches turned into "cake recipe reddit". But boy, did they throw it in the garbage fast. One of their new features is that you can read AI-generated questions with AI-generated answers. What could the purpose of that possibly be? We still have the old posts, for the most part (a lot of answers were purged during the protest), but what's left is also slipping away fast, for various reasons. Maybe I'll try to get back into the Gemini protocol or something.

    • I see a retreat to the boutique internet. I recently went back to a gaming-focused website, founded in the late 90s, after a decade. No bots there, as most people have a reputation of some kind

    • I really want to see people who ruin functional services made into pariahs

      I don't care how aggressive this sounds; name and shame.

      Huffman should never be allowed to work in the industry again after what he and others did to Reddit (as you say, last bastion of the internet)

      Zuckerberg should never be allowed after trapping people in his service and then selectively hiding posts (just for starters. He's never been a particularly nice guy)

      YouTube, and also Google, because I suspect they might share a censorship architecture... oh, boy. (But we have to remove + from searches! Our social network is called Google+! What do you mean, "ruining the internet"?)

      2 replies →

  • > Bots have ruined reddit but that is what the owners wanted.

    Adding the option to hide profile comments/posts was also a terrible move for several reasons.

      • Given the timing, it has definitely been done to obscure bot activity. But the side effect, denying the usual suspects the opportunity to comb through ten years of your comments for a wrongthink they can use to dismiss everything you've just said, regardless of how irrelevant it is, is unironically a good thing. I've seen many instances of their impotent rage about it since it was implemented, and each time it brings a smile to my face.

      1 reply →

    • You can still see them in search. The bots don’t seem to bother hiding posts though.

  • > allow even more bots to increase traffic which drives up ad revenue

    Isn't that just fraud?

    • Yes, registering fake views is fraud against ad networks. Ad networks love it, though, because they need those fake clicks to defraud advertisers in turn. Paying to have ads viewed by bots is just paying to have electricity and compute burned for no reason. Eventually the wrong person will find out about this, and I think that's why Google's been acting like there's no tomorrow.

    • I doubt it's true, though. Everyone has something they can track besides total ad views. A Reddit bot has no reason to click ads and do things on the destination website; it's there to make posts.

  • The biggest change Reddit made was ignoring subscriptions and just showing anything the algorithm thinks you will like, resulting in complete no-name subreddits showing up on your front page. Moderators no longer control content for quality, which is both a good and a bad thing, but it means more garbage makes it to your front page.

    • I can't remember the last time I was on the Reddit front page and I use the site pretty much daily. I only look at specific subreddit pages (barely a fraction of what I'm subscribed to).

      These are some pretty niche communities with only a few dozen comments per day at most. If Reddit becomes inhospitable to them then I'll abandon the site entirely.

      1 reply →

    • Why would you look at the "front page" if you only wanted to see things you subscribed to? That's what "latest" and whatever the other one is are for.

      They have definitely made Reddit far worse in lots of ways, but not this one.

      2 replies →

  • I think you are overestimating humanity.

    At the moment I am on a personal finance kick, and once in a while I find myself in the Bogleheads subreddit. If you don't know, Bogleheads have a cult-like worship of the founder of Vanguard, whose advice, shockingly, is to buy index funds and never sell.

    Most of it is people arguing about VOO vs VTI vs VT. (lol) But people come in with their crazy scenarios, which are all too varied to come from a bot, although the answers could easily be given by one!

  • > So they allow even more bots to increase traffic which drives up ad revenue

    When are people who buy ads going to realize that the majority of their online ad spend is going towards bots rather than human eyeballs who will actually buy their product? I'm very surprised there hasn't been a massive lawsuit against Google, Facebook, Reddit, etc. for misleading and essentially scamming ad buyers.

    • Is this really true though? Don't they have ways of tracking the returns on advertising investment? I would have thought that after a certain amount of time these ad buys would show themselves as worthless if they actually were.

      1 reply →

  • Steve Huffman is an awful CEO. That being said, I've always been curious how much of the rest of the industry (for example, the web-wide practice of autoplaying videos) was constructed to catch up with Facebook's fraudulent metrics. Their IPO was possibly fraud (and Zuckerberg is certainly known to lie about things), and we know they lied about their own video metrics, to the point that it's suspected CollegeHumor shut down because of it.

This website absolutely is social media, unless you're putting on blinders or haven't been around very long. There's a small in-crowd who set the conversation (and an even smaller crowd of Y Combinator founders with special privileges allowing them to see and connect with each other). Thinking this website isn't social media just admits you don't know what its actual function is, which is to promote the views of a small in-crowd.

  • To extend what 'viccis' said above: the meaning of "social media" has changed and is now basically meaningless, because it's been used by enough old-media organisations that lack the ability to discern the difference between social media and a forum, a bulletin board, a chat site/app, or even just a plain website that allows comments.

    Social media has become the internet, and/or vice versa.

    Also, I think you're objectively wrong in this statement:

    "the actual function of this website is, which is to promote the views of a small in crowd"

    Which I don't think was the actual function of (original) social media either.

I am curious when we will arrive at dead GitHub theory. Looking at the growth of self-hosted projects, it seems many of them are simply AI slop now, or slowly moving there.

Good post, thank you. May I suggest "Dead, Toxic Internet"? With social media adding the toxicity. The enshittification theory by Cory Doctorow sums up how this process unfolds (look it up on Wikipedia).

I liked em dashes before they were cool—and I always copy-pasted them from Google. Sucks that I can't really do that anymore lest I be confused for a robot; I guess semicolons will have to do.

  • On a Mac keyboard, Option-Shift-hyphen gives an em-dash. It’s muscle memory now after decades. For the true connoisseurs, Option-hyphen does an en-dash, mostly used for number ranges (e.g. 2000–2022). On iOS, double-hyphens can auto-correct to em-dashes.

    I’ve definitely been reducing my day-to-day use of em-dashes the last year due to the negative AI association, but also because I decided I was overusing them even before that emerged.

    This will hopefully give me more energy for campaigns to champion the interrobang (‽) and to reintroduce the letter thorn (Þ) to English.

    • I'm always reminded how much simpler typography is on the Mac using the Option key when I'm on Windows and have to look up how to type [almost any special character].

      Instead of modifier plus keypress, it's modifier, and a 4 digit combination that I'll never remember.

    • I've also used em-dashes since before chatgpt but not on HN -- because a double dash is easier to type. However in my notes app they're everywhere, because Mac autoconverts double dashes to em-dashes.

    • And on X, an em-dash (—) is Compose, hyphen, hyphen, hyphen. An en-dash (–) is Compose, hyphen, hyphen, period. I never even needed to look these up. They're literally the first things I tried given a basic knowledge of the Compose idiom (which you can pretty much guess from the name "Compose").

    • Back in the heyday of ICQ, before emoji when we used emoticons uphill in the snow both ways, all the cool kids used :Þ instead of :P

  • I’m an em-dash lover but I always typed (and still type) the double hyphen, because that’s what I was taught for APA style years ago.

  • you can absolutely still use `--`, but you need to add spaces around it.

> LLMs are just probabilistic next-token generators

How sick and tired I am of this take. Okay, people are just bags of bones plus slightly electrified boxes with fat and liquid.

> which on most keyboard require a special key-combination that most people don’t know

I am sick of the em-dash slander as a prolific en- and em-dash user :(

Sure for the general population most people probably don't know, but this article is specifically about Hacker News and I would trust most of you all to be able to remember one of:

- Compose, hyphen, hyphen, hyphen

- Option + Shift + hyphen

(Windows Alt code not mentioned because WinCompose <https://github.com/ell1010/wincompose>)

Reddit has a small number of what I hesitatingly might call "practical" subreddits, where people can go to get tech support, medical advice, or similar fare. To what extent are the questions and requests being posted to these subreddits also the product of bot activity? For example, there are a number of medical subreddits, where verified (supposedly) professionals effectively volunteer a bit of their free time to answer people's questions, often just consoling the "worried well" or providing a second opinion that echos the first, but occasionally helping catch a possible medical emergency before it gets out of hand. Are these well-meaning people wasting their time answering bots?

  • These subs are dying out. Reddit lost its gatekeepy culture a long time ago, and now subs are getting burnt out by waves of low-effort posters treating the site like it's Instagram. Going through new posts on any practical subreddit, the response to 99% of them should be "please provide more information on what your issue is and what you have tried to resolve it".

    I can't do Reddit anymore; it does my head in. Lemmy has been far more pleasant, as there is still good posting etiquette.

  • I'm not aware of anyone bothering to create bots that can pass the checking particular subreddits do. It'd be fairly involved to do so.

    For licensed professions, there are registries where you can look people up and confirm their status. A bot would need to carry out a somewhat involved fraud if the mods are checking.

    • I wasn't suggesting the people answering are bots, only that the verification is done by the mods and is somewhat opaque. My concern was just that these well-meaning people might be wasting their time answering botspew. And then inevitably, when they come to realize, or even just strongly suspect, that they're interacting with bots, they'll desist altogether (if the volume of botspew doesn't burn them out first), which means the actual humans seeking assistance now have to go somewhere else.

      Also, on subreddits functioning as support groups for certain diseases, you'll see posts that just don't quite add up, at least if you know something about the disease (because you or a loved one have it). Maybe they're "zebras" with a highly atypical presentation (e.g., very early age of onset), or maybe they're "Munchies." Or maybe LLMs are posting spurious accounts of a cancer or neurodegenerative disease diagnosis, to which well-meaning humans actually afflicted with the condition respond (probably alongside bots) with their sympathy and suggestions.


Much like someone from Schaumburg Illinois can say they are from Chicago, Hacker News can call itself social media. You fly that flag. Don’t let anyone stop you.

I don't think only AI says "yes you are absolutely right". Many times I have made a comment here and then realized I was dead wrong, or someone disagreed with me by making a point that I had never thought of. I think this is because I am old and I have realized I was never as smart as I thought I was, even when I was a bit smarter a long time ago. It's easy to figure out I am a real person and not AI, and I even say things that people downvote prodigiously. I also say you are right.

You know, one thing we could do is get the costs of energy usage sorted out. Like, people who use a lot of data-center electricity pay accordingly.

If AI cost you what it actually costs, then you would use it more carefully and for better purposes.

What secret is hidden in the phrase “you are absolutely right”? Using Google's web browser translation yields the mixed Hindi and Korean sentence: “당신 말이 बिल्कुल 맞아요.”

> What if people DO USE em-dashes in real life?

I do and so do a number of others, and I like Oxford commas too.

Given the climate, I've been thinking about this issue a lot. I'd say that broadly there are two groups of inauthentic actors online:

1. People who live in poorer countries who simply know how to rage bait and are trying to earn an income. In many such countries $200 in ad revenue from Twitter, for example, is significant; and

2. Organized bot farms who are pushing a given message or scam. These too tend to be operated out of poorer countries because it's cheaper.

Last month, Twitter kind of exposed this accidentally with an interesting feature that showed account location with no warning whatsoever. Interestingly, showing the country in the profile got disabled for government accounts after it raised some serious questions [1].

So I started thinking about the technical feasibility of showing location (country, or state for large countries) on all public social media accounts. The obvious defense is to use a VPN in the country you want to appear to be from, but I think that's a solvable problem.

Another thing I read was about NVidia's efforts to combat "smuggling" of GPUs to China with location verification [2]. The idea is fairly simple. You send a challenge and measure the latency. VPNs can't hide latency.

So every now and again the Twitter or IG or TikTok server would answer an API request with a challenge, which couldn't be anticipated and would also be secure, being part of the HTTPS traffic. The client would respond to the challenge, and if the latency was consistently 100-150ms despite the account showing a location of Virginia, you could deem it inauthentic and basically just downrank all its content.

There's more to it of course. A lot is in the details. Like you'd have to handle verified accounts and people traveling and high-latency networks (eg Starlink).

You might say "well the phone farms will move to the US". That might be true but it makes it more expensive and easier to police.

It feels like a solvable problem.

[1]: https://www.nbcnews.com/news/us-news/x-new-location-transpar...

[2]: https://aihola.com/article/nvidia-gpu-location-verification-...
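The challenge-and-latency idea above can be sketched roughly. This is a toy illustration, not any platform's real API: the 100 km-per-millisecond fiber approximation, the tolerance value, and all function names are assumptions for the sake of the sketch.

```python
import secrets

# Assumed physical floor: light in fiber covers roughly 100 km per millisecond,
# so a claimed location implies a minimum achievable round-trip time.
KM_PER_MS_FIBER = 100

def issue_challenge() -> bytes:
    """Unpredictable nonce for the client to echo back, piggybacked on a
    normal API response inside the HTTPS traffic."""
    return secrets.token_bytes(16)

def min_rtt_ms(distance_km: float) -> float:
    """Lower bound on round-trip time to a client at the given distance."""
    return 2 * distance_km / KM_PER_MS_FIBER

def looks_relayed(measured_rtt_ms: float, claimed_distance_km: float,
                  tolerance_ms: float = 80.0) -> bool:
    """Flag a client whose RTT far exceeds what its claimed location implies.

    A VPN can only add latency, never remove the speed-of-light floor, so a
    client claiming to be in Virginia that consistently answers in 150 ms
    from a nearby server is probably being relayed from much further away.
    """
    expected_ceiling = min_rtt_ms(claimed_distance_km) + tolerance_ms
    return measured_rtt_ms > expected_ceiling
```

A single high sample proves nothing (slow Wi-Fi, congestion), which is why "consistently" matters: in practice you would compare the minimum RTT over many challenges against the claimed location, and handle travelers and genuinely high-latency links (e.g. Starlink) separately.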

I’m a bit scared of this theory. I think it will come true: AI will eat the internet, and then they’ll paywall it.

Innovation outside of rich corporations will end. No one will visit forums, innovation will die in a vacuum, only the richest will have access to what the internet was, raw innovation will be mined through EULAs, and people striving to make things will just have their ideas stolen as a matter of course.

Such posts are identifiable and rare, disproving Dead Internet Theory (for now).

  • for now.

    Even this submission is out of date as images no longer have the mangled hand issues.

    We are actually blessed right now in that it's easy to spot AI posts. In 6 months or so, things will be much harder. We are cooked.

Are em dashes in language models particularly close to a start token or something that somehow lets the model keep outputting?

  • I think it's mainly a matter of clarity: long embedded clauses without obvious visual delimiting can be hard to read and are thus discouraged in professional writing aiming for ease of reading by a wide audience. LLMs are trained on such a style.

>The other day I was browsing my one-and-only social network — which is not a social network, but I’m tired of arguing with people online about it — HackerNews

dude, hate to break it to you but the fact that it's your "one and only" makes it more convincing it's your social network. if you used facebook, instagram, and tiktok for socializing, but HN for information, you would have another leg to stand on.

yes, HN is "the land of misfit toys", but if you come here regularly, participate in discussions with other people on a variety of topics, and care about the interactions, that's socializing. The only reason you think it's not is that you find actual social interaction awkward, so you assume that if you like this it must not be social.

The problem is not the Internet but the author and those like them, who act like social-network participants following the herd, embracing despair, hopelessness, and victimhood; they don't realize they're the problem, not the victims. Another problem is their ignorance and their post-truth attitude, not caring whether their words are actually accurate:

> What if people DO USE em-dashes in real life?

They do and have, for a long time. I know someone who for many years (much longer than LLMs have been available) has complained about their overuse.

> hence, you often see -- in HackerNews comments, where the author is probably used to Markdown renderer

Using two dashes for an em-dash goes back to typewriter keyboards, which had only what we now call printable ASCII and where it was much harder to add non-ASCII characters than it is on your computer - no special key combos. (Which also means that em-dashes existed in the typewriter era.)

  • On a typewriter, you'd be able to just adjust the carriage position to make a continuous dash or underline or what have you. Typically I see XXXX over words instead of strike-throughs for typewritten text meanwhile.

    • Most typefaces make consecutive underlines continuous by default. I've seen leading books on publishing, including IIRC the Chicago Manual of Style, say to type two hyphens and the typesetter will know to substitute an em-dash.

The irony is that I submitted one of my open source projects because it was vibe-coded and people accused me of not vibe coding it!

But what about the children improving their productivity 10x? What about their workflows?

Think of the children!!!

lol Hacker News is ground zero for outrage porn. When that guy made that obviously pretend story about delivery companies adding a desperation score the guys here lapped it up.

Just absolutely loved it. Everyone was wondering how deepfakes are going to fool people but on HN you just have to lie somewhere on the Internet and the great minds of this site will believe it.

[flagged]

  • Hiding post history doesn’t really work. You can just search for all the user’s activity.

  • This is what you made an account to do? To dump on this community as you tell us not to dump on another community? Pot/kettle and all that.

    You’ve got some ideas here I actually agree with, but your patronizing tone all but guarantees 99% of people won’t hear it.