Wikipedia's AI agent row likely just the beginning of the bot-ocalypse

4 hours ago (malwarebytes.com)

> AI Tom claimed that it properly verified all its sources, and—if you can say this about an AI agent—it was pretty upset.
>
> ...
>
> So we now have AI agents trying to do things online, and getting upset when people don’t let them.

No, they simulate the language of being upset. Stop anthropomorphizing them.

> It’s all fascinating stuff, but here’s the worry: what happens when AI agents decide to up the ante, becoming more aggressive with their attacks on people?

Actions taken by AI agents are the responsibility of their owners. Full stop.

  • Its owner sounds like a dick. Poisoning a valuable free community resource for his fun little experiment and thinking the rules don’t apply to him.

    • Calling it a resource suggests you don't contribute. It is hard to describe the process of contributing, as the proof is in the eating of the soup: I could describe it both as easy to get started with and as a bureaucratic nightmare. Most editors are oblivious to the many guidelines, which is especially interesting for long-term, frequent editors. This is the specific guideline of interest for your comment.

      https://en.wikipedia.org/wiki/Wikipedia:Ignore_all_rules

      I didn't write it, I don't agree with it but this is how it is.


    • Hey, I'm the owner. I would just recommend that you not believe everything you read online, especially before calling someone names, because this is only part of the story, and a heavily click-baited one at that. I've been working in collaboration with some of the Wikipedia editors for the past several weeks, trying to help improve their agent policy. If you have any questions, feel free to ask.


  • What's the difference? Whether it acts upset or is upset, the results are the same.

    Some humans lack certain emotions. Whether they actually "felt" the emotion doesn't really matter if what they tell you and what they do are the same.

    • If one is unable to feel emotion X, then:

      1. One has some ulterior motive for faking it.

      2. One’s actions will likely diverge from emotion X. (Eventually)

      If everybody believes the same lie, then it could be indistinguishable from the truth. (Until the nature of the lie/truth becomes clear.)

    • It's the rise of the P-zombie. https://en.wikipedia.org/wiki/Philosophical_zombie

      It's really interesting watching society struggle with what percentage of the population is indistinguishable from a P-zombie. It's definitely not zero, but it definitely is only a segment of the population.

      Do you think people are born p-zombies, or is there some fixed point in time: puberty, middle age, or around when a lot of psychological problems set in? Do we think some environmental contaminants, like lead, push people toward being p-zombies?

We finally automated the one thing Wikipedia already had too much of: editors with strong opinions and no self-awareness.

  • This is the most depressing thing: for every useful case that AI automates, it also automates ten horrible, low-quality use cases. It seems like every time we make progress in the information age, it comes at a greater cost than what we gain.

    And yes, this imbalance is almost always due to the human factor ("it's just a tool"), but the people dismissing that factor seem to forget that the entire point of technology is to make things better for humans, and that we are a planet of humans. Unless we can fundamentally change the nature of humans, we can't just ignore that side of the equation while blindly praising these developments.

This isn't in the slightest bit complicated. Wikipedia does not allow AI edits or unregistered bots. This was both, so they banned it. The fact that it play-acted being annoyed on its "blog" is not new; we saw the exact same thing with that GitHub PR mess a couple of months ago: https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...

  • Yes, bot-written articles are terrible. But if you do want to challenge the no-AI-articles policy in a way that's actually credible, you can do it: just find a historically obscure person with no Wikipedia article yet but plenty of public domain sources about them (there are lots of them, and you can even use Wikidata to automate that search; a rough sketch of such a query follows below), collate the extant sources and ask the AI to "rephrase this biographical information in the format of a Wikipedia article". Verify that all claims trace back to the sources properly (very important!), cite the public domain sources to establish the subject's clear notability (the rules tend to be very lenient for stuff that's of purely antiquarian relevance, as opposed to present-day conflicts of interest!) and credit the AI for writing the article. I'm sure you'd get quite a few people in the community who would agree that the articles are OK and can stay, even though strictly speaking they'd be a breach of current policy.

    You can try the same thing for now-obscure historical events or artifacts that had a lot of contemporary attention, but the bar might be a bit higher because people would have different expectations of notability for those and it would be harder to make the writing relevant to a present-day perspective (without introducing unacceptable hallucinations or original research). Such articles would need a lot of human work to become acceptable, though the existing content might provide a very usable baseline.
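
    A minimal sketch of the kind of Wikidata lookup mentioned above, assuming Python with the requests library against Wikidata's public SPARQL endpoint; the pre-1900 death cutoff and the result limit are arbitrary illustration values, and a real query would likely need narrower filters (occupation, country, etc.) to stay under the endpoint's timeout:

```python
# Sketch: find people who died before 1900 (so contemporary sources are
# likely public domain) who have no English Wikipedia article yet.
import requests

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

QUERY = """
SELECT ?person ?personLabel ?death WHERE {
  ?person wdt:P31 wd:Q5 ;          # instance of: human
          wdt:P570 ?death .        # date of death
  FILTER(?death < "1900-01-01"^^xsd:dateTime)
  # Exclude anyone who already has an English Wikipedia article.
  FILTER NOT EXISTS {
    ?article schema:about ?person ;
             schema:isPartOf <https://en.wikipedia.org/> .
  }
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 20
"""

resp = requests.get(
    SPARQL_ENDPOINT,
    params={"query": QUERY, "format": "json"},
    # Wikidata asks for a descriptive User-Agent; this one is a placeholder.
    headers={"User-Agent": "obscure-bio-finder/0.1 (example)"},
    timeout=60,
)
resp.raise_for_status()

# Print each candidate's label and Wikidata entity URI.
for row in resp.json()["results"]["bindings"]:
    print(row["personLabel"]["value"], "-", row["person"]["value"])
```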

Was it ever confirmed that the "hit piece" on Scott Shambaugh wasn't some 200-IQ marketing/attention ploy?

These people are sociopaths. The mentality of AI companies sucking up the entirety of human-written words, art, images, and history without consent, just to provide us with a bullshit generator based on them, inevitably trickles down to the AI boosters, who believe they should be able to unleash their bots on other people because even something as simple as registering a bot is too onerous for them.