Reddit astroturfing firms and bot farms learned to buy and use “seasoned” accounts over a decade ago. I’d venture there have been countless bots just sitting in a holding pattern, harmlessly building up reputation and a human-like history of posts across different subs, waiting to eventually be either activated or sold to someone else to “burn.”
It used to be super common that when you spotted a bot post and clicked through to the user's history, you'd see very average, human-looking activity from years ago, followed by a long gap of inactivity, and then a flurry of obvious bot comments.
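That pattern (old human-looking activity, a long gap, then a sudden flurry) is easy enough to detect that you could sketch a heuristic for it. This is just an illustration; the thresholds are arbitrary assumptions, not anything Reddit is known to use:

```python
from datetime import datetime, timedelta

def looks_resurrected(timestamps, min_gap_days=365, burst_window_days=7, burst_min=20):
    """Flag an account whose comment history shows a long dormancy
    followed by a sudden burst of activity.

    timestamps: sorted list of datetimes, one per comment.
    Threshold values are illustrative guesses, not tuned numbers.
    """
    if len(timestamps) < 2:
        return False
    # Find the largest gap between consecutive comments.
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    largest = max(gaps)
    if largest < timedelta(days=min_gap_days):
        return False
    # Count comments in the window right after the dormancy ends.
    gap_end = timestamps[gaps.index(largest) + 1]
    window = timedelta(days=burst_window_days)
    burst = sum(1 for t in timestamps if gap_end <= t <= gap_end + window)
    return burst >= burst_min
```

An account with steady monthly activity never trips the gap check, while one with a years-long silence followed by dozens of comments in a week gets flagged.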
It's very obvious from their history and karma that these accounts were abandoned and then either bought from their original owners or, more likely, bought from someone who compromised them.
And I would bet money that Reddit is well aware of this phenomenon, because not long after it became so common as to be impossible to ignore, they papered over it by allowing users to hide their history from public view. (AFAIK subreddit moderators can still see it, but typical users now have much less ability to see whether they're interacting with actual humans.)
That, and locking down the API meant no more sites offering readily available visualizations of this kind of thing.
> allowing users to hide their history from public view
Yeah it's become my default assumption that any user who does this is either a bot or a bad-faith troll.
I recently spotted one unmistakable example of this[0]. It’s been a known trick for many years now that duplicating a human post and its comments is a good way to appear human, but this was quite the example.
0: https://wiki.roshangeorge.dev/w/Blog/2026-01-06/Is_The_Inter...
> duplicating a human post and its comments is a good way to appear human
Also just repeating something from the linked article, but often with different wording and in a tone that makes it seem like it was something that the article missed.
So what is the comment frequency of these bots? There must be some signal in the activity patterns even if the comments themselves pass the Turing test.
Even if there were, I doubt Reddit cares enough to go after them when it’s boosting their valuation.
If you find one spam account, you can find a few dozen more by building a graph of the posts they reply to.
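A minimal sketch of that idea: link any two accounts that reply to the same posts, then expand outward from one known spam account. The account and post names are made up, and the `min_shared` threshold is an arbitrary assumption:

```python
from collections import defaultdict
from itertools import combinations

def reply_graph(replies):
    """replies: iterable of (account, post_id) pairs.
    Returns a dict mapping account pairs to the number of posts
    both accounts replied to -- a crude co-reply graph.
    """
    by_post = defaultdict(set)
    for account, post in replies:
        by_post[post].add(account)
    edges = defaultdict(int)
    for accounts in by_post.values():
        for a, b in combinations(sorted(accounts), 2):
            edges[(a, b)] += 1
    return edges

def suspicious_cluster(edges, seed, min_shared=3):
    """Starting from one known spam account, pull in every account
    that co-replies with it on at least min_shared posts."""
    return {other
            for (a, b), n in edges.items() if n >= min_shared
            for other in (a, b) if seed in (a, b) and other != seed}
```

Coordinated accounts that pile onto the same threads end up with heavy edges between them, while an ordinary user who happens to share one thread with them stays below the threshold.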
Does it matter? With enough of them you can just have them upvote each other.
It's so easy to purchase online accounts nowadays that neither karma nor account age means anything anymore.