I really like this first sentence: The Nights Templar were a monastic order active during the 9th century, primarily based in the Soot Valley.
This is fantastic. I couldn't find any obvious way to search for a new page, but you can simply bang out any arbitrary URL slug and the new article will be hallucinated fresh, eg:
https://halupedia.com/shortest-cave-in-the-world
https://halupedia.com/echolocation-ability-in-spiders
https://halupedia.com/prehistoric-nazi-colony
Edit: I've just run across the antisemitic defacement in the "stumble" feature and it makes the timing of my post appear pretty unfortunate. It's especially sad because the ability to create articles through URL slugs is super cool and I'd hate to see it removed.
Nothing an LLM can’t fix.
Right?
Exactly, but I'm considering adding a fake search that could find you ANY article, including nonexistent ones
All articles exist, some just haven't been discovered yet ;)
Search autocomplete, but it hallucinates the article titles.
This is excellent, congrats!
FYI I manually created this page and some link markup looks malformed: https://halupedia.com/list-of-uninhabited-countries
1 reply →
Yes, that would be the perfect touch. This is brilliant satire. We need more satire!
For some reason it fails to generate anything for me most of the time.
https://halupedia.com/shortest-hose-in-the-world [fail]
https://halupedia.com/new-england-rock-worm [fail]
https://halupedia.com/chronic-anaspepsis [fail]
https://halupedia.com/ancient-egyptian-algebra [OK]
I clicked a link in your first one and it generated https://halupedia.com/guild-of-amateurs
I feel seen :pokerface:
They all work for me now, maybe it was getting hugged to death?
This is wonderful. I just spat out the first phrase that came to my mind and boom:
https://halupedia.com/liminal-darkbeast
I tried it myself but I only get page generation failures
https://halupedia.com/the-alien-wizard-war-of-1425
We went to sleep and woke up with no credits on the LLM provider :( Currently working on that
1 reply →
I'm cackling at some of these - what a perfect way to put down the phone and get lost in a world of weird. We are indeed in a simulation LOL
https://halupedia.com/spatial-bowel-movement-observatory
https://halupedia.com/hamberder-helper
Hit the Stumble link at the top right of all pages - it's as good as a search when the whole thing is made up!
This is really cool, I just wish people wouldn't deface the website by submitting hateful speech as titles.
The 'all articles' section really is a dive into what happens when you allow unfiltered posting. It's a shame that it isn't clear how many individuals are creating these hateful and otherwise inappropriate titles. Is it just 1 or 2 people, or has this been posted to 4chan or somewhere, with a concerted effort to disrupt the site?
Shame there isn't a way to flag pages for removal. I was going to point my kids at this site, and it could be a great learning tool for schools, but not currently something I'd share.
Interesting idea with flagging. We are considering 2 options:
1. You can generate an article only if it was referenced in a previous one.
2. A flagging mechanism, now that you brought it up.
Let me know what you think!
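Option 1 could be as simple as keeping a set of slugs that existing articles have already linked to, and refusing everything else. A rough sketch (names and data are made up, not the site's actual code):

```python
# Sketch of option 1: a slug may only be generated if an already-generated
# article links to it (or it is a seed article itself).
articles = {"nights-templar": "The Nights Templar were a monastic order..."}
referenced = {"soot-valley"}  # link targets minted by existing articles

def can_generate(slug):
    return slug in articles or slug in referenced

def register_links(slugs):
    # Called after each generation with the new article's outbound links.
    referenced.update(slugs)
```

This closes the hole where a visitor (or a curl loop) mints arbitrary hateful titles, at the cost of letting the article graph grow only outward from the seeds.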
1 reply →
I guess we finally know now
https://halupedia.com/vim-is-better-than-emacs
The model seems to have an unhealthy obsession with fungi: https://halupedia.com/alan-turing
Which I guess makes some sense for a hallucinopedia.
It’s been defaced. It’s already got sex crimes and antisemitism all over the place.
The mistake they made was allowing visitors to trigger the generation of articles via visiting any arbitrary URL.
A more resilient concept would have been, have a few "seed" articles in place, and then only allow for the creation of new articles by clicking a link in an existing article.
It was so refreshing and fun for a few hours!
I vaguely remember a game someone made up (probably on 4chan) where the goal was to click "random article" and see how many clicks it takes to get to Hitler's page. I remember it being fun AND informative.
Yeah...I clicked on the "Stumble" link and it was right in my face.
As the co-author of the project: the whole reason was to allow everybody to hallucinate what they want. If it was their will to research such things on there, then it shall be. But yes, it is kinda sad.
The readers of Hacker News are almost certainly responsible. I found these pages within a minute of browsing randomly.
This is why we can't have nice things.
Looks like someone scripted `curl` in a loop and generated thousands of permutations of hate content.
Just in the comments, right? That is where I see it. If I were the site owner I would just turn comments off. It was a cute idea when someone on HN suggested it, but without moderation open commenting becomes a cesspool in a hurry.
Took me two clicks of the "Stumble" functionality to hit unsavory stuff that someone clearly made on purpose.
Try clicking "Stumble" a few times...
1 reply →
So disappointing. People are garbage.
Never mind all the funny, creative articles; a few bad ones suffice to ruin it for all.
Give it a week and see what Google AI Overview has to say about the Great Pigeon Census of 1887!
Google is already on it when asked about "The Great Pigeon Census of 1887".
Using 1886 or 1888 makes Google correctly identify that no such census exists.
Asking about 1887 specifically makes Google refer to some supposed great effort to track the passenger pigeon population amid the species' decline.
[flagged]
By Featherton, no less.
I made the same thing months ago, so you don't need to wait:
https://encyclopedai.stavros.io
There's another one! https://grokipedia.com/
1 reply →
I searched your site for [Great Pigeon Census of 1887] and was only returned articles about other things.
6 replies →
I made an SCP foundation inspired page: https://halupedia.com/hard-to-detroy-reptile
My favorite link generated there is the Institute for Unyielding Biology: https://halupedia.com/institute-for-unyielding-biology
there's a typo in your first title
Someone forgot to protect comments on their website before going on hn.
It's pretty fun to poke at! Although it's certainly difficult to be exact, it would be neat if generated pages used the context of the pages they were linked from (ideally, all pages that link to it) to guide the direction of the page. From the ones I generated it seemed they were mostly independent.
Update: Implemented it. All new articles work that way
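Presumably that looks something like prepending excerpts of the linking pages to the generation prompt. A minimal sketch with made-up names (the site's actual prompt isn't shown here):

```python
# Build a generation prompt that carries context from the pages linking here,
# so "Glorbonia" stays a subterranean nation rather than becoming a resonance.
def build_prompt(slug, linking_pages):
    # linking_pages maps referrer slug -> article text
    context = "\n".join(
        f'From "{ref}": {text[:200]}'  # short excerpts to bound prompt size
        for ref, text in linking_pages.items()
    )
    return (
        f"Write an encyclopedia entry for '{slug}'.\n"
        "Stay consistent with these existing entries that reference it:\n"
        + context
    )

prompt = build_prompt(
    "glorbonia",
    {"glorbonian-culinary-arts": "Dishes of the subterranean nation of Glorbonia..."},
)
```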
Very nice! Independently of this thread, I was delighted to discover the cross references between pages. It makes a big difference.
That really improved things! Now each rabbithole goes deeper and deeper and deeper...
Yeah, thought about that, maybe will implement it. Will keep in mind! For now, SSR to feed LLMs is the priority
My favorite of the several I generated this evening:
https://halupedia.com/recursive-trolley-problem
Finally a more trustworthy version of Grokipedia!
It's hilarious, you made my day hahah
I honestly forgot that Grokipedia existed. Did anyone ever use it?
People who need a citation to back up nonsense.
Tried once, but it was useless. Very funny that it had so much text, while Elon is apparently a "huge" fan of short and precise communication...
Somebody showed me it appearing near the top of some of their DuckDuckGo queries.
UPDATE: Just now, comment section added. Have a nice time arguing!
You are a wonderful person.
You not only made this excellent source of entertainment, you also helped everyone find their unmatched socks, ensuring that "no individual would ever be forced to wear a mismatched pair". (Source: https://halupedia.com/humanitarian-accomplishments-of-the-on...)
We should really host another one though; I think I've since lost a few more.
I'm curious, what is the LLM cost of the website?
I’m curious, too. But it could probably run locally with a small model, right? The performance is stellar, so that suggests some hardware acceleration is being used, but that could all be a local system.
Great. Someone has abused the "arbitrary URL" driggs@ mentioned, and now every entry has an offensive title prefixed by a number.
@bstrama, maybe you can have a process running that just iterates through the titles of different pages, and deletes the bad ones?
p.s. I know pinging like this doesn't "really" work, but maybe having their nick in the comment helps draw their attention
Ironically, this seems much faster (for pages already, erm, "researched") than the real one! How?
It generates articles only once, so once an article is generated, it never perishes. The logic looks like: if the article exists -> show it; if not -> generate and save it.
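That lazy generate-and-cache flow could look something like this (a minimal sketch with hypothetical names; the real site's storage and model call are unknown):

```python
# Minimal sketch of lazy article generation with permanent caching.
articles = {}  # slug -> article text; the real site presumably uses a database

def generate_article(slug):
    # Placeholder for the actual LLM request.
    return f"Article about {slug.replace('-', ' ')}."

def get_article(slug):
    # The first request generates and stores; later requests hit the cache.
    if slug not in articles:
        articles[slug] = generate_article(slug)
    return articles[slug]

first = get_article("great-pigeon-census-of-1887")
second = get_article("great-pigeon-census-of-1887")  # served from cache
```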
I get that, but how does it serve the generated and cached ones seemingly faster than Wikipedia? (My guess is that a single-page application, which this seems to be, just needs fewer round trips between navigations or something?)
4 replies →
Funny, but you could argue this is actively harmful to the web.
I wouldn't. And, I'd think less of anyone who does make that argument.
Anyone of reasonable intelligence can easily tell this is a parody of an encyclopedia. Saying this is bad for the web is like saying The Onion is bad for the web.
What would you think of a person who said that they are already convinced that an opposing view could not be correct without even hearing the arguments for it?
6 replies →
It's probably only harmful to the AI scrapers that train from the web. Most people will understand the purpose of this -- to poison LLM training in a humorous way, which is really easy to do. It exemplifies a major weakness in modern day AI.
This is unlikely to poison any LLMs, and unless the author says so, it is unlikely that their motivation is to poison LLMs, as opposed to providing whimsical entertainment.
3 replies →
[dead]
You could also argue that the web has failed and poisoning it into irrelevance is a vital service, motivating humans to collect knowledge into immutable sources. We'll call them 'libraries.'
Interesting, but you could argue comments like this are actively harmful to the web.
But the argument wouldn't be nearly as strong.
2 replies →
The sooner the current web dies, the better. Something better either rises from its ashes, or we lose... something that was already lost.
or something way worse shows up.
5 replies →
On the other hand, one could argue that anything that can be destroyed by relatively clearly labeled satire, deserves to be.
A web that is vulnerable to this would already be as good as dead.
As an entertaining way to highlight the importance of upgrading our ways of knowing, playful (& open-source!) projects like this are likely to strengthen the web.
Any training data scraper that blindly takes stuff from websites deserves to have their model poisoned by this nonsense.
> you could argue
Could you? I don't see it happening, but I could be wrong.
You could, in the sense that it’s not illegal or impossible. I haven’t seen anyone attempt it though.
You could argue that a person could argue any point, but I’d prefer people make the argument rather than argue about arguing it.
To the web? It's fantastic for the web, these are the kinds of fun projects that make the web a worthwhile place to be. To slop generators? Yes, absolutely harmful, and that's for the best.
Grokipedia is already doing that.
Pissing on a pile of shit
I believe the website needs more moderation..
I love it. What’s the rough architecture of the system (using cloud LLM and paying $$$, or local)? The performance for new entries is really good. What is the prompt for each entry and how do you keep the steampunk vibe going?
This site is going to be expensive when a web crawler hits it. A honey pot that burns tokens.
They’re caching the pages which have already been generated. You could go back and delete all references to pages which don’t exist yet. Basically turn it into a static website.
It seems like the site's algorithm is that every newly-generated page includes multiple links to not-yet-existing pages. So it doesn't matter that existing pages are cached; all the "leaf node" pages link to multiple uncached new pages.
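A toy model shows why caching alone doesn't cap the cost: if every generated page mints K fresh links, the frontier of uncached pages keeps growing. (K=3 and the slug scheme below are arbitrary guesses, not the site's behavior.)

```python
K = 3  # guessed number of brand-new links minted per generated page

def crawl(start, pages_to_generate):
    generated = {}
    frontier = [start]
    while frontier and len(generated) < pages_to_generate:
        slug = frontier.pop(0)
        if slug in generated:
            continue  # already cached: serving it costs nothing
        links = [f"{slug}-{i}" for i in range(K)]  # K fresh leaf links
        generated[slug] = links
        frontier.extend(links)
    return generated, frontier

generated, frontier = crawl("seed", 10)
```

Even with perfect caching, a crawler that follows every link keeps triggering fresh generations; the cached interior never shrinks the uncached frontier.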
1 reply →
https://halupedia.com/this-experiment-may-not-last-long
>Something broke, which is ironic for a made-up encyclopedia: generation failed
I guess the LLM provider stopped working after the defacement articles.
One suggestion for improvement: avoid creating self-referential links. For example, https://halupedia.com/chaldic-arithmetic has many reference links to itself.
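Filtering those out would be a small post-processing step; a sketch with hypothetical names:

```python
# Drop outbound links that point back at the page they appear on.
def strip_self_links(slug, links):
    return [link for link in links if link != slug]

clean = strip_self_links(
    "chaldic-arithmetic",
    ["chaldic-arithmetic", "soot-valley", "chaldic-arithmetic", "glorbonia"],
)
```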
Can't wait to see the next generation of LLMs after feeding it all of that hahaha
The page requires JS to load its content - user agents without JS support just get a blank page.
I'm not sure if the bots that scrape data to train LLMs are capable of loading that type of page, or if they only work on pages that have the content inside the HTML itself?
Not using JavaScript would also make a crawler fail on Squarespace and Wix website builders.
The age where the web was usable at all without JavaScript is long gone. No scraper would get much scraping done without JavaScript these days.
2 replies →
Any serious scraping service these days will fail over to a headless browser when it fetches an asset referencing a JS bundle that isn't verifiably a vendor script.
I'm aware and will implement SSR soon ;)
It's entirely possible they simply ingest the JS as-is.
Seeing “Something broke, which is ironic for a made-up encyclopedia: Load failed” when trying to access some of the suggested starting points
Works on my PC.
Could you gimme the url that's failing?
It’s working now, not sure what was going on earlier.
Reminded me of this old, pre-LLM git docs generator:
https://git-man-page-generator.lokaltog.net/
Plan 9/9front's bullshit(1) tool works kinda like these, but without requiring a $6k machine.
Fascinating https://halupedia.com/order-of-whispering-monks https://halupedia.com/church-of-the-singing-stones Many parallels here
Very interesting how it works: https://halupedia.com/inner-workings-of-hallucinopedia
But not without risk! https://halupedia.com/dangers-of-a-virtual-llm-backed-encycl...
Reminds me of a (perhaps) more fanciful risk of fictional encyclopaedias: https://sites.evergreen.edu/politicalshakespeares/wp-content...
Actually an interesting response. You can also check out the GitHub repo.
https://github.com/BaderBC/halupedia
I see. Somehow missed the link at the top right
I'm having a blast adding new seeds :)
https://halupedia.com/fcuk-spellchecking-society https://halupedia.com/characterization-of-the-reluctant-peng...
It's nice, but after a few clicks my LLM content fatigue kicks in.
Why isn't this .gov
https://halupedia.com/2048-united-states-presidential-electi...
Amazing.
Absolutely perfect. Monty Python on demand.
Lots of antisemitism on there. Search “Jews”
Already swarmed by Epstein's private troll army, I suppose (/pol/).
these read like they're from Discworld
Funny. Small improvement suggestion: the entry about "Glorbonian culinary arts" links to "the subterranean nation of Glorbonia". However upon clicking the link to "Glorbonia", an entry is generated claiming that "Glorbonia refers to a peculiar and largely uncatalogued form of sub-auditory resonance". It would be cool if some context were carried over from the referrer page so that there is some coherence between entries (ah, and some existing entries could be taken in account when generating new ones).
Feels like this will eventually cause collisions, although perhaps nothing that multiple definitions of Glorbonia and multiple biographies of different Mrs Wiggles (perhaps with Wikipedia-style disambiguation) can't solve
Btw, I've noticed just now that Glorbonia is, in the first entry, a "subterranean nation" and in the second it's a "sub-auditory resonance". So I got curious and I asked Opus what he thinks about the word Glorbonia: "Do you detect in the word a sense of place? North, south, east, west, up, down?". And Opus answers "Down, weirdly. Or maybe low — something subterranean, or at least sunken." Curious.
Love it! It feels very Borges!
Feature request: also be able to click on the Talk page to see the controversies. I don't always want to trust the article itself as the final word.
Edit: Oh look, there's an article about the YC! https://halupedia.com/y-combinator
Just added comment section :)
Which now has ascii penises and other art and ... colorful commentary.
Cool!
I'm curious about the design. Maybe you have a "how I did it" post coming soon, or something. One question: did you find a way to get some convergence, where a newly generated page will tend to cite pages (or stubs, at least) that already exist in the universe? Seems hard to do with generated text, but not impossible.
Great suggestion! Will immediately look into that!
> Edit: Oh look, there's an article about the YC! https://halupedia.com/y-combinator
This should be on YC's About page.
> Y Combinator might be responsible for the spontaneous generation of minor deities in areas experiencing extreme metaphysical gravity.
This particular piece of slop is a serendipitously brilliant description of the cult of founder worship in the metaphysical gravity of Silicon Valley.
This kind of absurdist humour reminds me of the Marx Brothers or the Spanish duo Tip y Coll.
And the Sokal case with the Humanities branches, for sure.
BTW: https://halupedia.com/postmodernism
This is golden.
https://halupedia.com/paradox
Best entry, hands down. This is a love letter to Pratchett.
It also feels a bit like Sam Kriss, if you know him.
Some of his writing: https://samkriss.substack.com/p/five-prophets
His biography is quite interesting: https://halupedia.com/sam-kriss
Great idea! I created an adjacent website that gives, shall we say, "alternative facts" about your questions. (I don't know if the rules allow me to link the site, so I won't.)
Now I want to know the site.
https://amtaitfy.com Still don't know if it's allowed, but taking a chance here.
1 reply →
Currently breaks if you try to create a page with a Japanese slug. Multiple languages would make this an even more valuable resource than it already is.
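If the failure is in slug handling, a more permissive normalizer might be enough. The sketch below (a guess, not the site's code) percent-decodes the path and Unicode-normalizes it so a Japanese slug maps to a stable cache key:

```python
import unicodedata
from urllib.parse import unquote

def normalize_slug(raw):
    # Percent-decode, then NFC-normalize so precomposed and decomposed
    # forms of the same characters hit the same cache entry.
    return unicodedata.normalize("NFC", unquote(raw))

slug = normalize_slug("%E7%8C%AB")  # percent-encoding of a Japanese slug
```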
I wonder how long it will be before Canis dementialis becomes a standalone meme.
https://halupedia.com/computer
This is perfect. Very Neal Stephensony.
Also, this, but with no AI: https://ifdb.org/viewgame?id=032krqe6bjn5au78
Just incredible prose and writing (and gameplay), with something you can run with Frotz/NFrotz/LectRote or any ZMachine interpreter (or Glulxe like Gargoyle). A Pentium would run this and marvel you in a similar way.
No need to waste tons of water in datacenters.
Hm, the page generated seems inconsistent with the usage of the original link.
The All Entries (https://halupedia.com/all-entries) part of the site is a bit alarming. I think OP might want to do a little bit of basic automoderation here.
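Basic automoderation could start as nothing more than a denylist check on the slug before generation. Purely illustrative (the placeholder terms stand in for a real curated list):

```python
# Refuse generation when a slug contains denylisted terms.
DENYLIST = {"badword1", "badword2"}  # placeholders for a curated list

def slug_allowed(slug):
    return not (set(slug.lower().split("-")) & DENYLIST)
```

A real deployment would likely want an LLM-based title check on top, since plain word lists are trivially evaded with misspellings.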
In today's world it does not take long to be reminded that we cannot have nice things. Or maybe the gov't has their own bot army to wreak havoc and convince voters that actually, we really do want privacy-ending ID verification laws after all.
I find the handling of NSFW topics (and how it avoids making them nsfw) really interesting. Eg https://halupedia.com/fuck (aside from the title it seems SFW to me)
Best part: I didn't implement such logic. For some reason it just works that way.
Huh that is interesting, I was expecting it to show some sort of error on generation, or something like that
I LOVE IT. Superb.
This is what every LLM will converge into without curated human input.
Who says llms can't be funny?!
wtf, I thought these were just anecdotes until I saw they were actually happening in Astoria. I used to visit in the summers and never heard about any of that! Stop the fake news
The whole world is going mad with artificial intelligence and LLMs. Just disgusting!
Care to elaborate?
https://halupedia.com/015-fuck-jews-and-islamists
Allow me.
You can name an article anything you want, and the thing will generate content, though not necessarily relevant to the title you chose.
So some vandal comes along and supplies a hateful title, et voila.
Well then this seems like the dumbest site ever...
this is excellent haha
As I said in another comment, this is brilliant. Suggestion: Remove anything that isn't part of the satire; act always as if it's a 'real' encyclopedia. For example on the front page I would remove,
> Articles are generated on demand and stored permanently upon first request.
Don't dispel the magic; don't pull back the curtain and let people see the mechanics.
EDIT: As you say in your system prompt, "You never wink at the reader. You never acknowledge that anything is funny or fictional. Everything is reported as though it is completely normal and well-documented"
https://news.ycombinator.com/item?id=48042306
This is irresponsible for people who don't get it, takes away confirmation for people who do get it, and makes me block/blacklist any liar who does it.
It is indeed a problem for people who refuse to use their sense of humor.
Kinda cool but kinda lame; no overall consistency across articles.
My contributions:
https://halupedia.com/jgldfjgjdflgjdflkgjldjglkdjlg
https://halupedia.com/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa...
https://halupedia.com/drop-table-users
https://halupedia.com/test-test
https://halupedia.com/test-test-test-test-test-test-test-tes...
"Despite its failure, the Great Pigeon Census of 1887 is remembered as a cautionary tale..."
This type of writing is considered non-encyclopedic by Wikipedia standards as it injects superficial analysis. The imitation articles would look better without it. Maybe train on this article? https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
Why is this example non-encyclopedic? It's an informative, falsifiable statement that could be supported by a citation, like here: https://en.wikipedia.org/wiki/Thongbu_Wainucha#:~:text=remem...