Regarding the "worse is better" discussion: at least it's definitely better accessibility-wise. HN is about the last well-known site that allows interacting with it, including writing comments, with plain old Lynx. I am well aware that most web devs do not care anymore these days, and they have their reasons for sure. However, it's still nice to see sites that refuse to go the SPA route. It makes them so much more usable for people like me (blind). A big THANK YOU to the site maintainers; it's one of the last corners of the net where interesting stuff happens that is still accessible.
Accessible UX results in good UX. I use a modern browser and appreciate how reliably Hacker News works. It's a great example of less is more when it comes to UX.
Well, I think proper font scaling would do wonders for making HN more accessible. As things stand, I have to zoom to 120% to read the text. I recall WebKit making a special case in its font rendering logic just for HN.
I can recommend the browser extension "Modern for Hacker News". Some features are premium (a one-time purchase, not a subscription), but even the defaults give you SO MUCH: font size, line spacing, text width.
Another great accessibility feature is the APIs for this site. I use the upvoterate listing from Quality News as my front page.
To top it off, opening tab previews in Zen Browser made my experience feel like the coolest SPA, haha.
Hacker News is a perfect example of the "Worse is better" mantra applied to social engineering. I mean, Slashdot had more features and functionality in the late 1990s.
What makes HN work is the tight focus and heavy moderation.
For context, "worse is better" refers to Gabriel's observation that products with simple implementations and complicated interfaces tend to achieve adoption faster than products with complex implementations and elegant interfaces.
One of the original motivating examples was Unix-like systems (simple implementation, few correctness guarantees in interfaces) vs. Lisp-based systems (often well-specified interfaces, but with complicated implementations as the cost).
"Incidentally, very few people grasp the amount of effort Daniel Gackle expends running HN now, and what an amazing job he does." -Paul Graham, https://x.com/paulg/status/1282055086433284103
HN may have fewer features, but do we even need them? I do not think it is worse because of that. You could call it minimalistic, which puts it in a more positive light. :)
Edit: or as someone else who has phrased it better: "less is more".
I liked the "friends" and "foes" system that Slashdot had, though I would say generally the "foes" here just get banned which is convenient.
I also thought Slashdot's moderation system was kind of fun. I am not sure it was useful but I enjoyed the annotations (+5 Funny when serious, +5 Insightful when inciteful, etc.) Meta-moderation was also neat?
Dark mode. Sure, Dark Reader exists but many mobile browsers don't support it.
Annoyingly enough, it's been talked about for years but never gets implemented, despite only three colors really needing a swap: the background to dark sepia or just dark gray, and the text to white and off-white.
I'd say that's the main thing. People hate ads; HN uses unobtrusive text ads. The moderation isn't that much of a competitive advantage, IMO. Slashdot's was better, mostly because it had measures to stop moderation abuse, whereas HN seemingly doesn't. It's just a plain old up/down system with the added fillip of a "super down" button, for those who are really committed to banning their opponents. I read with showdead turned on because perfectly reasonable comments are so often greyed out or dead. That used to happen much less on Slashdot, because there were far fewer people with moderation rights and the bad ones got filtered out via metamod.
Maybe now it's been ported to Common Lisp it'll be easier to add features.
I'm not sure what you mean? The literal quote from the Wikipedia article on "worse is better" is:
> It refers to the argument that software quality does not necessarily increase with functionality: that there is a point where less functionality ("worse") is a preferable option ("better") in terms of practicality and usability.
For that reason, I think I am applying the term precisely as it was defined.
The irony of my comment, which dang picked up, is that the original idea was a criticism against Lisp, suggesting that the bloat of features was a part of the reason its adoption had lagged behind languages like C.
I've written Python for 14 years and have never seen code like that. It certainly isn't a perfect language, but this doesn't look like a common concern.
People write a lot of Python, because the language is easy to get into for a lot of non computer-science folks (e.g., engineers and scientists) and the ecosystem is massive with libraries for so many important things. It isn't as conceptually pure as lisp, but most probably don't care.
Python made a choice to have default values instead of default expressions, and it comes with positive and negative trade-offs. In languages like Ruby with default expressions, you get the footgun the other way: calling a function with a default parameter can trigger side effects. This kind of function is fine in Python because it's unidiomatic to mutate your parameters: you do obj.mutate(), not mutate(obj).
So while it's a footgun, you'd have to write some fairly weird code to actually trigger it.
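For anyone who hasn't hit it, the footgun in question can be reproduced in a few lines (`append_item` is an illustrative name, not from any real codebase):

```python
def append_item(item, bucket=[]):
    # The default list is evaluated once, at function definition time,
    # so every call that omits `bucket` shares the same list object.
    bucket.append(item)
    return bucket

print(append_item(1))  # [1]
print(append_item(2))  # [1, 2] -- the "empty" default still holds the 1
```

Passing an explicit list behaves as expected; only calls that rely on the default share state.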
Ah yes, the ol' default-empty-list Python gotcha. It bit me about 10 years ago, and ever since, sadly, I've written code like this so many times it's not funny:
def fun(a=None):
    a = a if a is not None else []
A lot of people claim that but I've never seen evidence of the existence of a vast number of people who would be using Hacker News if only it had more bells and whistles. Craigslist is "ugly" too, and plenty of people use it.
I think it's more likely that most people (even most tech-adjacent people) simply don't know this place exists, or don't care, since no one is sharing links to Hacker News on mainstream social media and nothing goes viral here outside of already established HN-adjacent circles.
I like Hacker News. I like the simplicity. I don't bother with a fancier UI. I prefer it that way, and I acknowledge that the look and feel of Hacker News does not suit everyone.
But I don't value the look and feel of Hacker News for driving people away -- as if those people were of lesser value. That is just an elitist, gatekeeping mentality.
I think HN has some pretty sophisticated automated and human-in-the-loop moderation features that few other sites possess or throw as many resources at. Because HN is not ad-supported, it does not fall victim to the tragedy of the commons.
But I think HN built on what Reddit got right (at least old Reddit), in a context of faster, more online interactions, whereas Slashdot brought some of the old forum structure and a context of slower and more meaningful (ahem, for the most part) interactions. Hence moderation there was more precise, upvotes had color, and you still had things like user signatures.
In a way, users and posts on HN are "cattle", not pets ;)
Maybe this was tongue-in-cheek in a way that eludes me, but in case any innocent and curious bystanders are as confused as me by your comment, I'm not sure "Worse Is Better" refers to what you think it does. It isn't about "features and functionality", it's about how ease of implementation beats everything else. I can't see how that applies here, or what your comment means in that light.
The genius of Slashdot's moderation system is that it forced you to be fastidious with how your limited mod points were allocated, only using them on posts that really deserved them.
As opposed to tearing through a thread and downvoting any and everything you disagree with.
Slashdot encouraged more positive moderation, unless you were obviously trolling.
The meta-moderators kept any moderation abuse in check.
It's sad to see we have devolved from this model, and conversations have become far more toxic and polarized as a direct result of it. (Dissenting opinions are quickly hidden, and those that reinforce existing norms bubble to the top.)
I believe HN papers over these problems by relying on a lot of manual moderation and curation, which sounds very labor-intensive, whereas Slashdot was deliberately hands-off and left the power to the people.
I remember Slashdot being full of "M$ is teh evill111!!" and other childish nonsense. At the end of the day what matters is the results, and I much prefer the discussions on HN to those on /.
I don't think that was what made HN prevail against similar sites that were popular in the past. In my opinion, it is the fact that it is tied to Y Combinator and lots of startups/founders that made it stick. Something that is not technical at all.
Not me! For a number of years, I was like "what's with that domain, never heard of ycombinator, oh well, can't be bothered reading up on it right now, anyway, great content here, and nice minimal interface, I'll keep coming back".
I'm still missing being able to read only +5 insightful comments after 20 years.
I'd expect Slashdot's point system and meta-moderation to make a comeback in the LLM-slop world we currently live in, but nobody knows about them anymore. Steam kind of rediscovered the idea in its reviews, perhaps even was inspired by it (I hope...).
Dutch tech news website Tweakers.net basically has this. Comments are moderated on a scale from -1 to +3, and then you can choose to expand only +2 and up.
I don’t think there is heavy moderation in the traditional sense. It’s primarily user-driven, aside from obvious abusive behavior. The downvote and flagging mechanisms do the heavy lifting.
The heuristics that detect a high ratio of arguments to upvotes (as far as I can tell) can be frustrating at times, but they also do a good job of driving ragebait off the front page quickly.
The moderators are also very good at rescuing overlooked stories and putting them in the second chance pool for users to consider again, which feels infinitely better than moderators forcing things to the front page.
It also seems that sometimes moderators will undo some of the actions that push a story off the front page if it's relevant. I've seen flagged stories come back from the dead, and flame-war comment sections get a second chance at the front page with a moderator note at the top.
Back in the Slashdot days I remember people rotating through multiple accounts for no reason other than to increase their chances of having one of them with randomly granted moderation points so they could use them as weapons in arguments. Felt like a different era.
> I don’t think there is heavy moderation in the traditional sense.
It seems to be a combination of manual and automated moderation (mostly by dang but he has more help now), using the kind of over/under-engineered custom tools you'd expect from technophiles. I've wondered a lot about the kind of programming logic he and the others coded up that make HN as curious as it is, and I have half a mind to make a little forum (yet another HN clone, but not really) purely for the sake of trying to implement how I think their moderation probably works. If I went through with this, I'd have it solely be for Show HN style project sharing/discussion.
I'm reminded of definitely the most extreme writing on programming I've ever read, here https://llthw.common-lisp.dev/introduction.html, including but in no way limited to claims such as:
> The mind is capable of unconsciously understanding the structure of the computer through the Lisp language, and as such, is able to interface with the computer as if it was an extension to its own nervous system. This is Lisp Consciousness, where programmer and computer are one and the same; they drink of each other, and drink deep; and at least as long as the Lisp Hacker is there in the flow, riding the current of pure creativity and genius with their trusty companions Emacs and SLIME, neither programmer nor computer know where one ends and the other begins. In a manner of speaking, Lispers already know machine intelligence---and it is beautiful.
Has any other language produced such thoughts in the minds of human beings? Maybe yes, but I don't know of one. Maybe Forth, or Haskell, or Prolog, but I haven't found similar writing. Please do share.
I agree, and it gets even better: while low-level ML support in Common Lisp does not match Python's libraries, that often does not matter now, because LLMs need not be embedded in applications; they are often accessed via an HTTP request.
That might've been more a reflection on PLT than on Scheme48 (which also had some really smart people on it).
At one point, when I was writing a lot of basic ecosystem code that I tested on many Scheme implementations, PLT Scheme (including MzScheme, DrScheme, and a few other big pieces), by Matthias Felleisen and grad students at Rice, appeared to be getting more resources and making more progress than most.
So I moved to be PLT-first rather than portable-Scheme-first, and a bunch of other people did, too.
After Matthias moved to Northeastern, and students graduated on to their own well-deserved professorships and other roles, some of them continued to contribute to what was soon called Racket (rather than PLT Scheme). With Matthew Flatt still doing highly-skilled and highly-productive systems programming on the core.
Eventually, no matter how good their intentions and how solid their platform for production work, the research-programs-first mindset of Racket started to be a barrier to commercial uptake. They should've brought in at least one of the prolific non-professor Racketeers into the hooded circle of elders a lot sooner, and listened to that person.
One of the weaknesses of Racket for some purposes was lack of easy multi-core. The Racket "Places" concept (implementation?) didn't really solve it. You can work around it creatively, as I did for important production (e.g., the familiar Web interview load-balancing across application servers, and also offloading some tasks to distinct host processes on the same server), but using host multi-core more easily is much nicer.
As a language, I've used both Racket and CL professionally, and I prefer a certain style of Racket. But CL also has more than its share of top programmers, and CL also has some very powerful and solid tools, including strengths over Racket.
The article makes it sound like dang also helps with the codebase. There must be others, but dang is the one I've seen for years at this point.
I've been a part of many online communities as both a member and a moderator. However, Hacker News is the community that I've been a part of the longest and the one that brings me the most joy.
Dang, is there anything random people like me can do for you? Can I at least buy you a coffee or something?
Keep in mind Hacker News (formerly Startup News) is effectively a loss-leading advertising arm of Y Combinator, which at this point is one of the most successful investment firms in the world.
And HN founder and original author Paul Graham is (at least on paper) a billionaire, not merely the decamillionaire he used to be.
Though it's still good for it to be a self-funding project even if that means accepting donations.
Modern CPUs are crazy fast. 4chan was serving 4 million users with a single server, a ten-year-old version of PHP, and something like 10,000 lines of spaghetti code. If you do even basic code quality, profiling, and optimization, you can serve a huge number of users with a fraction of a CPU core.
I/O tends to be the bottleneck (disk IOPS and throughput, network connections, IOPS and throughput). HN only serves text so that's mostly an easy problem.
I still can't wrap my head around how the conventional wisdom in the industry to work around that problem is to add even more slow network I/O dependencies.
4chan is a special case, because all of its content pages are static HTML files being served by nginx that are rewritten on the server every time someone makes a post. There's nothing dynamic, everyone is served the exact same page, which makes it much easier to scale.
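The "regenerate on write, serve statically" model described above can be sketched in a few lines (all names here are illustrative, not 4chan's actual code): every post re-renders the whole page to a file, and the web server then serves that file with zero application logic on the read path.

```python
import html
from pathlib import Path

PAGE = Path("board.html")  # nginx would serve this file directly

def render(posts):
    # Rebuild the entire page from scratch; everyone sees the same bytes.
    items = "\n".join(f"<li>{html.escape(p)}</li>" for p in posts)
    return f"<html><body><ul>\n{items}\n</ul></body></html>\n"

def add_post(posts, text):
    # All dynamic work happens on writes: append, re-render, overwrite.
    posts.append(text)
    PAGE.write_text(render(posts))

posts = []
add_post(posts, "first post")
add_post(posts, "<script>alert(1)</script>")  # escaped, not executed
```

Reads scale trivially because they never touch this code, only the static file; the cost is that every write pays for a full re-render.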
Modern CPUs are stupid fast when you use them the right way. You can take scale-up surprisingly far before being forced to scale out, even when that scale out is something as modest as running on multiple cores.
Based on context, you are insinuating that a discussion board like HN _can_ be hard on the CPU alone? If so, how? My guess would _also_ be that the CPU would have little to do by itself, and that I/O would take the brunt.
I was going to reply that this is pretty common for web apps, e.g. NodeJS or many Python applications also do not use multi-threading, instead just spawning separate processes that run in parallel. But apparently, HN ran as 1 process on 1 core on 1 machine (https://news.ycombinator.com/item?id=5229548) O_O
HN is not really that much of a workload: links with text-only comments, each link gets a few hundred comments at most, and commenting ends once stories are old enough.
Probably everything that's current fits easily in RAM and the older stories are candidates for serving from a static cache.
I wouldn't say this is an astounding technical achievement so much as demonstrating that simplicity can fall out of good taste and resisting groupthink around "best practices".
In fairness, HN wouldn't show more than what, twenty-ish thread roots at a time, requiring you to click "more" to bring in more... which could contain the same set of thread roots you'd been looking at, depending on upvote activity.
(I assume that this update has removed that HN restriction, but haven't bothered to go look to verify this assumption.)
Good, SBCL is great for CL. And now with the current CLX from Quicklisp (the one with daily releases; I can't remember its name), McCLIM runs snappily even on Intel Atom N270 machines. Under ECL it almost runs snappily, but the performance gain is astronomical: from a really laggy UI to instant rendering.
If dang is listening: I'd like to hear how you pulled this off. Replacing the engine of a live site without some old forgotten part breaking is hard to accomplish. I rarely see this kind of thing happen without a week of frantic bug fixing and grumbling users.
SBCL is a workhorse. I wonder if the Racket folks didn't consider Arc-under-production-workloads general-purpose enough to fix. I actually don't know of any other projects that use Racket in anger.
I'll always have a soft spot in my heart for Armed Bear because that JVM library ecosystem is enormous https://github.com/armedbear/abcl
Apologies, that may have come across as more accusatory than I intended. I was just surprised that whatever missing(?) feature or behavior caused the move off of Racket wouldn't be of interest to other Racket users.
Are there any other popular (>10k DAUs) sites that still use an esoteric, homegrown tech stack? If you have worked on them, what do you think, is it a legacy mess nobody wants to touch, or a pleasure to work with?
PG made an assertion once that websites (in contrast to desktop software) are free to use any stack of their choosing, as long as it can take in HTTP requests and output JSON or HTML. This intuitively seems to be true, especially so with how powerful modern machines can get, but it seems like it hasn't increased stack diversity much.
The advantages of boring technology and "resume-driven development" seem to outweigh whatever gains you may get from using something custom.
> Are there any other popular (>10k DAUs) sites that still use an esoteric, homegrown tech stack? If you have worked on them, what do you think, is it a legacy mess nobody wants to touch, or a pleasure to work with?
I do. It's absolutely lovely.
We can make decisions based on what's best for the user, and not based on what the latest fad is.
In the time I've been in charge of this company's web sites, we've reduced cost drastically, improved reliability, and cut time to production in half.
Hiring can be problematic, because there are a lot of people out there who can't think through problems, or who only know how to do X in one tool and are unwilling or unable to learn something else.
The big keys are: We're not a tech company, so management doesn't care what we do, as long as it gets done. We build a lot of our own tools, so they fit perfectly into our workflow, which makes them and us more efficient. And we don't have to have 99.999999% uptime for no reason. Management is OK if the web sites are slow or unavailable for a few minutes each week, as long as they're back to normal in less time than it takes for someone to call and complain. And our clients love to call and complain.
But I get that there are a lot of opinions. Just try one, put up a vote over a week, do it over 4-6 weeks, settle on the one that has the best feedback...
I like it, in fact my standard terminal font size is even smaller. I hate all the modern websites wasting tons of whitespace, so that you need to hit C-- ~3 times to make it usable.
The OP isn't really asking for a "dark mode" like a literal reading of his comment might suggest. He's asking for an officially supported dark mode that evolves with the site and doesn't break random functionality one day. It's easy to use Stylus or Tampermonkey to make a dark mode that works at one instant in time. It's much harder to maintain one indefinitely in the face of constant changes made by developers who aren't concerned with breaking your work, and who probably don't even know about it.
I don't read HN in normal browsers. If you read the RSS feed and click through, for instance, it's instant white flash from the embedded browser in the RSS reader, which cannot be customized but honors dark mode.
Rewrites are definitely not “always a bad idea” as Joel Spolsky once said. What they are is highly situational.
HN has a bunch of factors that make it amenable to a rewrite. It has gigantic scale, not a ton of complexity at a business level, and what it “is” is pretty slow moving at this point.
That means it’s not a great example to justify a rewrite at work :) that said the success does prove rewrites are possible. Bravo on shipping!
The success of Hacker News doesn’t come from flashy features, but from a community that consistently produces high-quality content. That said, I can’t help but wonder if there are any updates to the UI/UX in the works, LOL.
Not saying "security by obscurity never works", but I am saying it's a shame the defensive anti-spam/abuse wall depends on secrecy: if the concern with the secret sauce becoming known is how easily it could then be defeated, it's a low wall. But as long as it stays secret, it's doing its job.
I'm not an infosec professional or a competent Lisp coder, so I'm not in a position to say which is better. This is just what pros in the field tell me.
> I'm not sure if I follow all that. If the Clarc is not released, then how does HN run on it?
The same person who writes Clarc also deploys HN (an assumption, but it seems dang can do both :) ), so using unreleased software is just a matter of navigating to the right local directory.
Hacker News is the url I use to test most fussy connections because it's so light, and will load under even the slightest trickle of data. When I was doing research in Ghana, it was the only site I could get to reliably load for news in the field, and thus spent a month reading only HN (good luck getting the _New York Times_ to load without a gigabit connection). Appreciate how it stays — and has stayed — svelte and fast throughout the years.
That's great, but I think they should improve the responsiveness. It's still a bit wonky; you can see it in the top right corner if you narrow the screen and then widen it.
I'm sure that if you came up with a tiny diff to the CSS that improved the user experience without degrading anything else and sent it to hn@ycombinator.com, they could get it deployed :)
You should have rewritten it in sh and called Sharc, or BASIC and called it BArc, or PHP and called it PHarc, or ML and called it MLarcy. Or license it under GPL-3 and call it Gnarc!
That's addressed in the article. There absolutely is:
> Much of the HN codebase consists of anti-abuse measures that would stop working if people knew about them. Unfortunately, separating out the secret parts would by now be a lot of work. The time to do it will be if and when we eventually release the alternative Arc implementations we’ve been working on.
I think the (anonymous? I can't find a name) author of the OP slipped slightly at the end of that otherwise-impeccable sequence of quotes. That last comment refers to the released Arc (http://arclanguage.org/), not to Clarc. It includes a sample application which is an early version of HN, scrubbed of anything HN- or YC-specific.
I am asking about the core language implementation. No need to publish the whole source code of HN, just the source code of Clarc. You do not have "anti-abuse measures" in the language implementation and runtime, do you? Is it that hard to separate a language implementation from code written in the language?
I cannot believe how people are praising a centralized, heavily censored links site. Take a look at HN's privacy policy: they sure do make money from monetizing every single thing you say, and they fingerprint you all the time. We should have a decentralized link-sharing site.
Centralised and heavily censored, yes, but AFAICT the censoring by-and-large respects free speech and diversity of opinion, while effectively stopping spam / abuse, and thus maintaining the high quality of content that keeps us all coming back.
And surely HN is way less monetised (and therefore way more trustworthy) than virtually every other links site / every social media platform out there?
Depends on what you consider "monetization." Are there ads? Not explicitly, but YC startups do advertise themselves in threads here, and aspiring entrepreneurs do use visibility on HN (both to users and YC) as part of their strategy. If you think this is just a forum of nerds engaging in organic conversation and intellectual diversion, you'd be mistaken. Your attention is currency here as much as anywhere.
Look, I like the way HN looks, but there aren't many sites that essentially look like bare HTML yet still struggle to display more than 300 comments.
What do you mean? With the current internals, a 300-comment HN page would weigh ~500 kB; different internals would hardly be more compact. Where is the «struggle»?
In March this year, HN changed its pagination behavior. Previously, one needed to click through pages to read more than X comments; since around March, all comments are served at once.
A post having over a thousand comments is extremely rare so not a big deal.
HN has been known to fail in the past with heavy or high velocity threads to the point that dang has asked people to log off en masse to reduce server load. That shouldn't happen for a simple text forum.
> Much of the HN codebase consists of anti-abuse measures that would stop working if people knew about them. Unfortunately, separating out the secret parts would by now be a lot of work. The time to do it will be if and when we eventually release the alternative Arc implementations we’ve been working on.
Is this a case where security through obscurity is good, or bad? Legit question. I am curious to read the responses it may prompt.
> There are a lot of anti-abuse features, for example, that need to stay secret (yes we know, 'security by obscurity' etc., but nobody knows how to secure an internet forum from abuse, so we do what we know how to do). It would be a lot of work to disentangle those features from the backbone of the code.
The OP got everything right except that bit. This is a reason for not open-sourcing HN (the application), but it doesn't relate to open-sourcing Clarc (the language implementation). We could do that without revealing any anti-abuse stuff.
Abuse of this sort isn't a security issue in the network sense. i.e. the security of Hacker News is not imperiled by people creating spam accounts, but nonetheless we want to stop that.
Obscurity is extremely good at filtering out low to medium skilled griefers. It won’t stop anyone who is highly motivated, but it will slow them down significantly.
Hacker News is small enough that obscurity would give moderators enough time to detect bad actors and update rules if necessary.
"The design of a system should not require secrecy, and compromise of the system should not inconvenience the correspondents"
This means that all of the security must reside in the key and little or nothing in the method, as methods can be discovered and rendered ineffective otherwise. Keep in mind that this is for communication systems where it is certain that the messages will be intercepted by a hostile agent, and we want to prevent this agent from reading the messages.
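The principle can be illustrated with a small standard-library sketch (the names `tag` and `verify` are mine, purely illustrative): the algorithm, HMAC-SHA256, is completely public, and publishing this code costs nothing, because all of the secrecy lives in the key.

```python
import hashlib
import hmac
import secrets

# The method is public; only `key` is secret (Kerckhoffs's principle).
key = secrets.token_bytes(32)

def tag(message: bytes) -> bytes:
    """Authentication tag that only the key holder can compute."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(tag(message), received_tag)
```

If the key leaks, you rotate the key and the system keeps working; no redesign is needed. That is exactly the property the anti-abuse measures under discussion lack, since their "key" is the method itself.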
When implementing modern cryptographic systems, it is very easy to misuse the libraries, or to try to reimplement cryptographic ideas without a deep understanding of the implications, and this leads to systems that are more vulnerable than intended.
Security by obscurity is the practice of some developers of reinventing cryptography, applying their cleverness to new, unknown cryptosystems. However, doing this correctly requires deep mathematical knowledge of finite fields, probability, linguistics, and so on. Most people have not spent the required decades learning this. The end result is that those "clever" systems with novel algorithms are much less secure than tried-and-true cryptosystems like AES and SSL. That's why we say security by obscurity is bad.
Now, going back to the main topic: Hacker News is not a cryptographic system whose encoded messages are going to be intercepted by a hostile actor, so Kerckhoffs's principle doesn't apply. There is no secret key that could be changed to restore the system's functionality if it were discovered.
There is a series of measures that have worked in the past, and are still working today despite a huge population of active spamming and disrupting agents, and they should be kept secret as long as they keep working.
> Is this a case where security through obscurity is good, or bad? Legit question. I am curious to read the responses it may prompt.
To me, philosophically, and to a first approximation, all security is through obscurity.
For example encryption works for Alice so long as Bob can't see the key...
... or parking the Porsche in the garage, reduces the likelihood someone knows there is a Porsche and reduces the likelihood they know what challenges exist inside the garage. Now put a tall hedge and a fence around it and the average passerby has to stop and think "there's probably a garage behind that barrier."
To put it another way, out of sight has a positive correlation to out of mind.
Yes, of course, a determined, well-funded Bob calls for obscurity commensurate with Bob's determination and budget. If Bob is willing to use a five-dollar wrench, Alice might just tell Bob the key.
This likely isn't so much "security through obscurity" because it's not really about security in the traditional sense but instead about anti-griefing measures.
> Much of the HN codebase consists of anti-abuse measures that would stop working if people knew about them.
We’ve all heard about how “security through obscurity” isn’t real security, but so many simple anti-abuse measures are very effective as long as their exact mechanism isn’t revealed.
HN’s downvote and flagging mechanisms make for quick cleanup of anything that gets through, without putting undue fatigue on the users.
Things called "security" that don't follow Kerckhoffs's principle aren't security. There are a lot of things adjacent to security, like spam prevention, that sometimes get dumped into the same bucket, but they're not really the same.
Security measures uphold invariants: absent cryptosystem breaks and implementation bugs, nobody is forging a TLS certificate. I need the private key to credibly present my certificate to the public. Hard guarantee, assuming my assumptions hold.
Likewise, if my OS is designed so sandboxed apps can't steal my browser cookies, that's a hard guarantee, modulo bugs. There's an invariant one can specify formally --- and it holds even if the OS source code leaks.
Abuse prevention? DDoS avoidance? Content moderation? EDR? Fuzzy. Best effort. Difficult to verify. That these things are sometimes called security products doesn't erase the distinction between them and systems that make firm guarantees about upholding formal invariants.
HN abuse prevention belongs to the security-adjacent but not real security category. HN's password hashing scheme would fall under the other category.
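HN's actual scheme isn't public, but the "other category" can be made concrete. A minimal sketch of salted password hashing using Python's stdlib scrypt (the cost parameters here are illustrative, not anyone's production settings):

```python
import hashlib
import secrets

# Illustrative salted password hashing; the scrypt cost parameters
# (n, r, p) are examples, not anyone's production settings.
def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = secrets.token_bytes(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison avoids leaking how many bytes matched.
    return secrets.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
```

Even if the source code and the stored digests leak, the invariant holds: recovering the password still requires a brute-force search.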
This is simply not true. At the highest levels, security is about distributing costs between attackers and defenders, with defenders having the goal of raising costs past a threshold where attacks are no longer reasonable expenses for any plausible attacker. Obfuscation, done well, can certainly play a role in that. The Blu-ray BD+ scheme is a great case study on this.
You can only say that if you have no idea about cryptography. It is definitely true in the real world, but it needs the right context to be relevant.
It is related to Kerckhoffs principle:
"The design of a system should not require secrecy, and compromise of the system should not inconvenience the correspondents"
This means that all of the security must reside in the key and little or nothing in the method, as methods can be discovered and rendered ineffective otherwise. Keep in mind that this is for communication systems where it is certain that the messages will be intercepted by a hostile agent, and we want to prevent this agent from reading them.
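A toy illustration of the principle using Python's stdlib HMAC: the construction is completely public, and all of the security resides in the key.

```python
import hashlib
import hmac
import secrets

# The construction (HMAC-SHA256) is public knowledge; only the key is secret.
key = secrets.token_bytes(32)

def tag(message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, t: bytes) -> bool:
    # Constant-time comparison, as usual for tag checks.
    return hmac.compare_digest(tag(message), t)
```

Publishing this code costs the correspondents nothing, and rotating the key after a compromise restores security, which is exactly Kerckhoffs's point.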
When implementing modern cryptographic systems, it is very easy to misuse the libraries, or to try to reimplement cryptographic ideas without a deep understanding of the implications, and this leads to systems that are more vulnerable than intended.
Security by obscurity is the practice by some developers of reinventing cryptography, applying their cleverness to new, unknown cryptosystems. However, doing this correctly requires deep mathematical knowledge about finite fields, probability, linguistics, and so on. Most people have not spent the required decades learning this. The end result is that those "clever" systems with novel algorithms are much less secure than tried-and-true cryptosystems like AES and SSL. That's why we say "security by obscurity" is bad.
Now, going back to the main topic: Hacker News is not a cryptographic system where encoded messages are going to be intercepted by a hostile actor. Therefore Kerckhoffs's principle doesn't apply.
> Much of the HN codebase consists of anti-abuse measures that would stop working if people knew about them. Unfortunately, separating out the secret parts would by now be a lot of work
The business logic is encoded into the original structure, making migration to anything different effectively impossible without some massive redesign.
This, I think more than any response, indicates why the philosophy of "it's working, don't touch it" will always win and new-feature requests will be rejected.
HN didn’t depaginate based on user desires, it was based on internal tooling making that feature available within the context of the HN overall structure.
HN has zero financial or structural incentive to do anything but change as little as possible. That's why this place, unfortunately unique on the internet at this point, has lasted.
HN is not *trying* to grow; it's trying to do as little as possible while staying alive. So by default it stays coherent and maintainable: its structure isn't built for growth, and changing the structure would break the encoded rituals (anti-abuse measures).
Something to think about when you're trying to solve for problems like "legacy code", "scaling needs", etc.: it all comes back to baseline incentives.
I mean this in the spirit of genuine curiosity: what staleness risk is there given the massive breadth of experience the existing userbase already has?
Man, I wish GUIs in general were like this. Not that I don't want progress, but some interactions (especially in basic OS stuff) really don't need to be redone every 5 years.
Honestly I don't understand why more things aren't like this. I don't need a revamped landing page for my GP/council/department/directorate/organisation/etc - just finish the previous version with the features that were promised. I don't need another half-assed version that will also be abandoned at 40-50%.
It sounds wrong because "since" is generally combined with a point in time, but "a few months" is a duration, not a date.
Also the first paragraph switches tense forms, which makes it stand out even more.
Rewriting it to "since a few months ago" seems to be the easiest way to fix this, though my favorite way to express the same thing is "as of a few months ago".
It should be noted that the author, like most people you're likely to interact with in this bubble, is not a native speaker of English. What matters is getting the message across - which they did.
You'll end up not being very productive if you spend your time pointing all of these little slips out.
I appreciate the feedback, I edited the post. I actually should have noticed it, it's a grammar lesson I remember quite well and a mistake I spot in others' posts :] My first wording was mentioning "September of 2024", which I replaced with "a few months" at the last minute.
Hacker News has so little capability that almost any experienced developer using a modern AI coding agent could replicate the entire thing in a weekend, and perhaps in a single day.
I'm not saying it's bad, or criticizing anyone. I mean it does what it does, and it works, and people like it. But no one should care what technology they're using because there's just nothing impressive going on from a technical perspective.
I should've been more clear that my claim was only about the ability to post messages, have them stored in a database, and then have a tree-view that displays and edits the posts. That's 99% of what users do right? That entire functionality could be done by an AI Agent nowadays in about 10 minutes.
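To be fair to that claim, the core data model really is small. A toy sketch (not HN's actual schema; names and fields invented for illustration): flat posts with parent pointers, rendered as an indented tree.

```python
# Hypothetical minimal comment store: flat rows with parent ids.
posts = [
    {"id": 1, "parent": None, "text": "Show HN: my project"},
    {"id": 2, "parent": 1, "text": "Nice work!"},
    {"id": 3, "parent": 2, "text": "Agreed."},
    {"id": 4, "parent": 1, "text": "How does it scale?"},
]

def render(parent=None, depth=0):
    """Depth-first walk producing one indented line per post."""
    lines = []
    for p in posts:
        if p["parent"] == parent:
            lines.append("  " * depth + p["text"])
            lines.extend(render(p["id"], depth + 1))
    return lines

print("\n".join(render()))
# Show HN: my project
#   Nice work!
#     Agreed.
#   How does it scale?
```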
I don't get this attitude at all. I would think most programmers/readers are interested in the gears and cogs behind something they use on a regular basis. Especially if they work in web, backend, etc.
I've been coding for 35 years. I love HackerNews. Because it works. And it's all we need. But holding it up as some example of engineering would be silly. It's just a tree editor. I implemented a better tree editor myself today from scratch.
If you want to know what stuff I'm impressed by it's things like Mastodon, Nostr apps, and stuff that does more than edit a simple content tree of nothing but plain text. We can't even upload images. Can't do markdown. lol. It's definitely a "Less is More" app, from two decades ago. Just agree to agree with me on that. It's not an insult to them. It's just an observation.
Regarding the "worse is better" discussion: At least its definitely better accessibility-wise. HN is about the last well-known site that allows interacting with it, including writing comments, with plain old Lynx. I am well aware that most web devs do not care anymore these days, and they have their reasons for sure. However, its still nice to see sites that refuse to go for SPA. It makes them so much more useable for people like me (blind). A big THANK YOU to the site maintainers, its one of the last corners of the net where interesting stuff happens which is still accessible.
Accessible UX results in good UX. I use a modern browser and appreciate how reliably Hacker News works. It's a great example of less is more when it comes to UX.
Well, I think proper font scaling would do wonders for making HN more accessible. As things stand, I have to zoom to 120% to read the text. I recall WebKit making a special case in its font rendering logic just for HN.
1 reply →
I can recommend the browser extension "Modern for Hacker News". Some features are premium (no subscription though), but the defaults alone give SO MUCH UX: font size, spacing between lines, text width.
Another great accessibility feat is the APIs for this site. I use the upvote-rate listing from Quality News as my front page.
To sum it up, opening the tab previews under Zen Browser made my experience feel like the coolest SPA, haha.
If you browse with Lynx you might like https://mataroa.blog/ too!
Hacker News is a perfect example of the "worse is better" mantra applied to social engineering. I mean, Slashdot had more features and functionality in the late 1990s.
What makes HN work is the tight focus and heavy moderation.
Finally a Lisp system wins the worse-is-better crown!
For context, "worse is better" refers to Gabriel's observation that products with simple implementations and complicated interfaces tend to achieve adoption faster than products with complex implementations and elegant interfaces.
One of the original motivating examples was Unix-like systems (simple implementation, few correctness guarantees in interfaces) vs. Lisp-based systems (often well-specified interfaces, but at the cost of complicated implementations).
6 replies →
oh man... this comment is just so, so incredibly apt.
1 reply →
"Incidentally, very few people grasp the amount of effort Daniel Gackle expends running HN now, and what an amazing job he does." -Paul Graham, https://x.com/paulg/status/1282055086433284103
4 replies →
Well, Facebook is PHP so...
7 replies →
Yay
2 replies →
HN may have fewer features, but do we even need them? I do not think it makes it worse because of that. You could call it minimalistic, which puts it in a more positive light. :)
Edit: or as someone else who has phrased it better: "less is more".
I liked the "friends" and "foes" system that Slashdot had, though I would say generally the "foes" here just get banned which is convenient.
I also thought Slashdot's moderation system was kind of fun. I am not sure it was useful but I enjoyed the annotations (+5 Funny when serious, +5 Insightful when inciteful, etc.) Meta-moderation was also neat?
22 replies →
I think that the classical phrasing is "less is more".
At least, that's how my bash pager has it in the manpage.
1 reply →
I'd like some markdown support:
Two spaces to get monospace is somewhat offensive
2 replies →
The "tech progressive" mindset cannot comprehend the idea that something cannot be improved or shouldn't be "enhanced". It is too close to the abyss.
2 replies →
Dark mode. Sure, Dark Reader exists but many mobile browsers don't support it.
Annoyingly enough it's been talked about for years but it never gets implemented, despite only three colors really needing a swap: background to dark sepia or just dark gray, and text to white and off-white.
7 replies →
I want a pickup truck that is designed like HN. The Slate may be the answer
7 replies →
Dark mode and Follow User are two features I have been using for years with other tools.
9 replies →
Also the lack of needing to make money helps a lot.
I'd say that's the main thing. People hate ads, HN uses unobtrusive text ads. The moderation isn't that much of a competitive advantage, IMO. Slashdot's was better, mostly because it had measures to stop moderation abuse whereas HN seemingly doesn't. It's just a plain old up/down system with the added fillip of a "super down" button, for those who are really committed to banning their opponents. I read with showdead turned on because perfectly reasonable comments are so often greyed out or dead. That used to happen much less on Slashdot because there were far fewer people with moderation rights and the bad ones got filtered out via metamod.
Maybe now it's been ported to Common Lisp it'll be easier to add features.
19 replies →
Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away. —Antoine de Saint-Exupéry
your comment assumes that features and functionality are a good thing. "worse is better" does not apply here.
"worse is better" is people putting up with footguns like this in python, because it's percieved easier to find a python job:
HN is very much "less is better", not "worse is better".
I'm not sure what you mean? The literal quote from the Wikipedia article on "worse is better" is:
> It refers to the argument that software quality does not necessarily increase with functionality: that there is a point where less functionality ("worse") is a preferable option ("better") in terms of practicality and usability.
For that reason, I think I am applying the term precisely as it was defined.
The irony of my comment, which dang picked up, is that the original idea was a criticism against Lisp, suggesting that the bloat of features was a part of the reason its adoption had lagged behind languages like C.
1. https://en.wikipedia.org/wiki/Worse_is_better
3 replies →
I've written Python for 14 years and have never seen code like that. It certainly isn't a perfect language, but this doesn't look like a common concern.
People write a lot of Python, because the language is easy to get into for a lot of non computer-science folks (e.g., engineers and scientists) and the ecosystem is massive with libraries for so many important things. It isn't as conceptually pure as lisp, but most probably don't care.
15 replies →
Python made a choice to have default values instead of default expressions, and it comes with positive and negative trade-offs. In languages like Ruby with default expressions, you get the footgun the other way: calling a function with a default parameter can trigger side effects. This kind of function is fine in Python because it's unidiomatic to mutate your parameters; you do obj.mutate(), not mutate(obj).
So while it's a footgun, you'd have to write some fairly weird code to actually trigger it.
5 replies →
Ah yes, the ol' default empty list Python gotcha, it bit me I think about 10 years ago, and ever since, sadly I've written code like this so many times it's not funny:
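The defensive pattern the commenter means is presumably the usual None sentinel:

```python
def append_item(item, bucket=None):
    if bucket is None:  # a fresh list per call, never shared
        bucket = []
    bucket.append(item)
    return bucket

print(append_item(1))  # [1]
print(append_item(2))  # [2] -- each call gets its own list
```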
The ugliness scares most people away, or at least it doesn't accidentally lure them in.
Some of us genuinely like the way it looks.
1 reply →
HN is like 4chan, but house-broken.
A lot of people claim that but I've never seen evidence of the existence of a vast number of people who would be using Hacker News if only it had more bells and whistles. Craigslist is "ugly" too, and plenty of people use it.
I think it's more likely that most people (even most tech-adjacent people) simply don't know this place exists, or don't care, since no one is sharing links to Hacker News on mainstream social media and nothing goes viral here outside of already established HN-adjacent circles.
I like Hackernews. I like the simplicity. I don't bother with better AI. I prefer it that way, and I acknowledge that the look and feel of Hackernews does not suit everyone.
But I don't value the look and feel of Hackernews for driving people away, as if those people are of lesser value. That is just an elitist, gatekeeping mentality.
4 replies →
I find it aesthetically pleasing, tbh.
HN's aesthetic has grown on me honestly!
The heavy, thick irony of these people running their own platform on as little technology as possible and depending heavily on human input.
It's like they know somewhere deep inside that "mo tech" is not helping anyone.
There's a less cynical interpretation of that which is not so far from the case.
There's a few levels going on here:
- technologists and startup wannabes feeling like HN is "underground" because of the stripped down aesthetic and weird tech stack
- out of touch VCs who are successful because of money and connections but want to cosplay as technical
- the end users of the startups, who are fed the enshittified products funded by the VCs and created by the technologists
It's not ironic whatsoever.
I think HN has some pretty sophisticated automated and human-in-the-loop moderation features that few other sites possess or throw as many resources at. Because HN is not ad-supported, it does not fall victim to the tragedy of the commons.
Honestly it's not "worse"
But I think HN built on what Reddit (at least old Reddit) got right, and also in a context of more-online, faster interactions. Slashdot, by contrast, brought some of the old forum structure, in a context of slower and more meaningful (ahem, for the most part) interactions. Hence why moderation was more precise, upvotes had color, and you still had things like user signatures.
In a way, users and posts on HN are "cattle", not pets ;)
Maybe this was tongue-in-cheek in a way that eludes me, but in case any innocent and curious bystanders are as confused as me by your comment, I'm not sure "Worse Is Better" refers to what you think it does. It isn't about "features and functionality", it's about how ease of implementation beats everything else. I can't see how that applies here, or what your comment means in that light.
Here's the original essay -- https://www.dreamsongs.com/RiseOfWorseIsBetter.html
This is a good little overview entitled "Worse is Better Considered Harmful" -- https://cs.stanford.edu/people/eroberts/cs201/projects/2010-... -- in which the authors argue for "Growable Is Better".
In summary - it's about ease of implementation trumping all else. C and Unix are memorably labelled "the ultimate computer viruses".
The genius of Slashdot's moderation system is that it forced you to be fastidious with how your limited mod points were allocated, only using them on posts that really deserved them.
As opposed to tearing through a thread and downvoting any and everything you disagree with.
Slashdot encouraged more positive moderation, unless you were obviously trolling.
The meta-moderators kept any moderation abuse in check.
It's sad to see we have devolved from this model, and conversations have become far more toxic and polarized as a direct result of it. (Dissenting opinions are quickly hidden, and those that reinforce existing norms bubble to the top.)
I believe HN papers over these problems by relying on a lot of manual hand-moderation and curation which sounds very labor intensive, whereas Slashdot was deliberately hands-off and left the power to the people.
I miss slashdot when it was at its peak decades back
unsure why precisely it descended so much
not crazy about HN's approach but the quality of the discourse here is so high through whatever mechanism, I don't much care
I remember slashdot being full of "M$ is teh evill111!!" and other childish nonsense. At the end of the day what matters is the results, and I much prefer the discussions on HN to /.
6 replies →
How are you determining the causative relationship?
I don't think that was what made HN prevail against similar sites that were popular in the past. In my opinion, it is the fact that it is tied to Y Combinator and lots of startups/founders that made it stick. Something that is not technical at all.
Not me! For a number of years, I was like "what's with that domain, never heard of ycombinator, oh well, can't be bothered reading up on it right now, anyway, great content here, and nice minimal interface, I'll keep coming back".
A lot of people on HN do in fact hate YC and pretty much everything VC
I'm still missing being able to read only +5 insightful comments after 20 years.
I'd expect Slashdot's point system and meta-moderation to make a comeback in the LLM-slop world we currently live in, but nobody knows about it anymore. Steam kind of rediscovered it in their reviews, perhaps was even inspired by it (I hope...)
Dutch tech news website Tweakers.net basically has this. Comments are moderated on a scale from -1 to +3, and then you can choose to expand only +2 and up.
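That kind of threshold filtering is trivial to express. A toy sketch (scores and field names invented for illustration, not Tweakers.net's actual system):

```python
comments = [
    {"text": "insightful analysis", "score": 3},
    {"text": "me too", "score": 1},
    {"text": "flamebait", "score": -1},
]

def visible(comments, threshold=2):
    # Show only comments moderated at or above the reader's chosen threshold.
    return [c["text"] for c in comments if c["score"] >= threshold]

print(visible(comments))  # ['insightful analysis']
```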
2 replies →
https://m.xkcd.com/810/
https://m.xkcd.com/1019/
https://m.xkcd.com/2159/
HN's dry text-only design is what repels most of the problems. Mods only polish it a bit.
> and heavy moderation.
I don’t think there is heavy moderation in the traditional sense. It’s primarily user-driven, aside from obvious abusive behavior. The downvote and flagging mechanisms do the heavy lifting.
The heuristics that detect a high ratio of arguments to upvotes (as far as I can tell) can be frustrating at times, but they also do a good job of driving ragebait off the front page quickly.
The moderators are also very good at rescuing overlooked stories and putting them in the second chance pool for users to consider again, which feels infinitely better than moderators forcing things to the front page.
It also seems that sometimes moderators will undo some of the actions that push a story off the front page if it's relevant. I've seen flagged stories come back from the dead, or flame-war comment sections get a second chance at the front page with a moderator note at the top.
Back in the Slashdot days I remember people rotating through multiple accounts for no reason other than to increase their chances of having one of them with randomly granted moderation points so they could use them as weapons in arguments. Felt like a different era.
> I don’t think there is heavy moderation in the traditional sense.
It seems to be a combination of manual and automated moderation (mostly by dang but he has more help now), using the kind of over/under-engineered custom tools you'd expect from technophiles. I've wondered a lot about the kind of programming logic he and the others coded up that make HN as curious as it is, and I have half a mind to make a little forum (yet another HN clone, but not really) purely for the sake of trying to implement how I think their moderation probably works. If I went through with this, I'd have it solely be for Show HN style project sharing/discussion.
6 replies →
HN is heavily moderated by humans. They've discussed it before. They're machine-assisted, but heavily involved day-to-day.
Maybe it's an effect of not having to compete with other outlets.
Το λακωνίζειν εστί φιλοσοφείν
To be spartan is to philosophize.
I'm curious why λακωνίζειν needs to be nominalized and φιλοσοφείν doesn't.
1 reply →
Slashdot sold out to Conde Nast. That killed it. It was very well designed.
"Worst and First beats Perfect and Last"
The exception to that rule was Google. Which coincidentally might have been one of the best VC investments of all time.
1 reply →
What makes HN work is being popular. Nothing more. Stop praising mediocrity.
So, Hacker News was not rewritten in Common Lisp. Instead they reimplemented the Arc Runtime in Common Lisp.
And that's the sort of thing Lisp excels in
There are probably Markdown libraries for Arc by now?
Though, Reddit eventually realized that javascript: URLs - in Markdown - were an XSS risk.
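The fix is typically an allowlist of URL schemes rather than a blocklist. A minimal sketch (not Reddit's actual sanitizer); real sanitizers also have to handle control characters and entity-encoded tricks:

```python
from urllib.parse import urlparse

# Allowlist of schemes; the empty scheme covers relative links like "/foo".
ALLOWED_SCHEMES = {"http", "https", "mailto", ""}

def safe_href(url: str) -> bool:
    # urlparse lowercases the scheme, so "JavaScript:..." is caught too.
    return urlparse(url.strip()).scheme in ALLOWED_SCHEMES
```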
Lisp supremacy approaches!
I'm reminded of definitely the most extreme writing on programming I've ever read, here https://llthw.common-lisp.dev/introduction.html, including but in no way limited to claims such as:
> The mind is capable of unconsciously understanding the structure of the computer through the Lisp language, and as such, is able to interface with the computer as if it was an extension to its own nervous system. This is Lisp Consciousness, where programmer and computer are one and the same; they drink of each other, and drink deep; and at least as long as the Lisp Hacker is there in the flow, riding the current of pure creativity and genius with their trusty companions Emacs and SLIME, neither programmer nor computer know where one ends and the other begins. In a manner of speaking, Lispers already know machine intelligence---and it is beautiful.
Has any other language produced such thoughts in the minds of human beings? Maybe yes, but I don't know of one. Maybe Forth, or Haskell, or Prolog, but I haven't found similar writing. Please do share.
I agree, and it gets even better: while low-level ML support in Common Lisp does not match Python libraries, it often does not matter now, because LLMs are not embedded in applications; they are often accessed via an HTTP request.
68k assembly is like that.
> Arc was implemented on top of Racket
Originally on MzScheme, then later PLT Scheme. It was ported to Racket by the great kogir, IIRC.
I think MzScheme is just the core (non-GUI) part of PLT Scheme, which was renamed to Racket.
Also, I believe pg started implementing Arc on Scheme48 based on mailing list activity at the time. I've always been curious about the switch to PLT!
That might've been more a reflection on PLT than on Scheme48 (which also had some really smart people on it).
As some point, when I was writing a lot of basic ecosystem code that I tested on many Scheme implementations, PLT Scheme (including MzScheme, DrScheme, and a few other big pieces), by Matthias Felleisen and grad students at Rice, appeared to be getting more resources and making more progress than most.
So I moved to be PLT-first rather than portable-Scheme-first, and a bunch of other people did, too.
After Matthias moved to Northeastern, and students graduated on to their own well-deserved professorships and other roles, some of them continued to contribute to what was soon called Racket (rather than PLT Scheme). With Matthew Flatt still doing highly-skilled and highly-productive systems programming on the core.
Eventually, no matter how good their intentions and how solid their platform for production work, the research-programs-first mindset of Racket started to be a barrier to commercial uptake. They should've brought in at least one of the prolific non-professor Racketeers into the hooded circle of elders a lot sooner, and listened to that person.
One of the weaknesses of Racket for some purposes was lack of easy multi-core. The Racket "Places" concept (implementation?) didn't really solve it. You can work around it creatively, as I did for important production (e.g., the familiar Web interview load-balancing across application servers, and also offloading some tasks to distinct host processes on the same server), but using host multi-core more easily is much nicer.
As a language, I've used both Racket and CL professionally, and I prefer a certain style of Racket. But CL also has more than its share of top programmers, and CL also has some very powerful and solid tools, including strengths over Racket.
Are we iterating over all Lisp implementations? A strange variant of the ship of Theseus
Lisp of Theseus does have a certain ring to it.
Next up, the end goal: Emacs Lisp.
5 replies →
Aren't MzScheme, PLT Scheme, and Racket the same thing?
Yes, but for me each name denotes the thing as it was when it was called that.
(This conversation has turned unexpectedly ontological!)
4 replies →
They were all based on MzScheme, yes. But nowadays Racket runs on the fastest Scheme, Chez.
HN runs now on SBCL, which is much faster and also multi-threaded.
The article makes it sounds like Dang also helps with the codebase. There must be others, but Dang is the one I've seen for years at this point.
I've been a part of many online communities as both a member and a moderator. However, Hackernews is the community that I've been a part of for the longest and the one that brings me the most joy.
Dang, is there anything random people like me can do for you? Can I at least buy you a coffee or something?
Keep in mind Hacker News (formerly Startup News) is effectively a loss-leading advertising arm of Y Combinator, which at this point is one of the most successful investment firms in the world.
And HN founder and original author Paul Graham is (at least on paper) a billionaire, not merely the decamillionaire he used to be.
Though it's still good for it to be a self-funding project even if that means accepting donations.
> Initially called Startup News or occasionally News.YC., it became known by its current name on August 14, 2007.[4]
Oh I have been on HN since 2008 and didn't know that.
8 replies →
> [Clarc] is much faster and also will easily let HN run on multiple cores
This was all running on a single core??
Modern CPUs are crazy fast. 4chan was serving 4 million users with a single server, a ten year old version of PHP and like 10000 lines of spaghetti code. If you do even basic code quality, profiling and optimization you can serve a huge number of users with a fraction of a CPU core.
I/O tends to be the bottleneck (disk IOPS and throughput, network connections, IOPS and throughput). HN only serves text so that's mostly an easy problem.
I still can't wrap my head around how the conventional wisdom in the industry to work around that problem is to add even more slow network I/O dependencies.
1 reply →
4chan is a special case, because all of its content pages are static HTML files being served by nginx that are rewritten on the server every time someone makes a post. There's nothing dynamic, everyone is served the exact same page, which makes it much easier to scale.
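The write path for such a design can be sketched in a few lines (an assumed design for illustration, not 4chan's actual code): each post regenerates the thread's static file, which the web server then serves with no per-request work.

```python
import html
import os
import tempfile

OUT_DIR = tempfile.mkdtemp()  # stand-in for the web server's docroot
posts = []

def render_thread(thread_id, posts):
    body = "\n".join(f"<p>{html.escape(p)}</p>" for p in posts)
    page = f"<html><body>{body}</body></html>"
    # Write to a temp file and rename, so readers never see a half-written page.
    tmp = os.path.join(OUT_DIR, f".{thread_id}.tmp")
    with open(tmp, "w") as f:
        f.write(page)
    os.replace(tmp, os.path.join(OUT_DIR, f"{thread_id}.html"))

def add_post(thread_id, text):
    posts.append(text)
    render_thread(thread_id, posts)  # regenerate the static page on every post

add_post("42", "first!")
add_post("42", "second <b>post</b>")
```

Reads then cost nothing dynamic at all; every visitor gets the same pre-rendered file.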
11 replies →
Every time a dev discovers how tremendously bloated and slow modern software is, an angel gets its wings.
Modern CPUs are stupid fast when you use them the right way. You can take scale-up surprisingly far before being forced to scale out, even when that scale out is something as modest as running on multiple cores.
Based on context, you are insinuating that a discussion board like HN _can_ be hard on the CPU alone? If so, how? My guess would also be that the CPU would have little to do by itself, and that I/O would take the brunt.
4 replies →
Most apps aren’t suffering from computation. They suffer from I/O
I was going to reply that this is pretty common for web apps, e.g. NodeJS or many Python applications also do not use multi-threading, instead just spawning separate processes that run in parallel. But apparently, HN ran as 1 process on 1 core on 1 machine (https://news.ycombinator.com/item?id=5229548) O_O
HN is not really that much of a workload. Links with text only comments, each link gets a few hundred comments at most, and commenting on stories ends after they are old enough.
Probably everything that's current fits easily in RAM and the older stories are candidates for serving from a static cache.
I wouldn't say this is an astounding technical achievement so much as demonstrating that simplicity can fall out of good taste and resisting groupthink around "best practices".
I think NodeJS apps typically rely on the JavaScript event loop instead of starting new processes all the time.
Spawning new processes for every user is possible but would probably be less scalable than even thread-switching.
2 replies →
https://news.ycombinator.com/item?id=27452276
Yet GitHub can't show more than a dozen comments on the same page, requiring you to click "view more" to bring them in 10 at a time.
HN is an island of sanity in a sad world.
In fairness, HN wouldn't show more than what, twenty-ish thread roots at a time, requiring you to click "more" to bring in more... which could contain the same set of thread roots you'd been looking at, depending on upvote activity.
(I assume that this update has removed that HN restriction, but haven't bothered to go look to verify this assumption.)
1 reply →
It's amazing what's possible when you don't use microservices
Text only processing is amazingly fast, as are static websites. Javascript is heavy, man.
Good, SBCL is great for CL. And now with the current CLX from QuickLisp (the one with daily releases; I can't remember its name), McCLIM runs snappy even on Intel Atom N270 machines. Under ECL it almost runs snappy, but the performance gain is astronomical: from a really laggy UI to instant rendering.
EDIT: UltraLisp is the one with daily releases, for QuickLisp.
Is QuickLisp entering the 1990s and enabling TLS yet?
Check out ocicl! https://github.com/ocicl/ocicl
1 reply →
If dang is listening: I'd like your comments on how to pull this off. Replacing the engine of a live site without some old forgotten part breaking is hard to accomplish. I rarely see this kind of thing happen without a week of frantic bug fixing and users grumbling.
SBCL is a workhorse. I wonder if the Racket folks didn't consider Arc under production workloads general-purpose enough to fix. I actually don't know of any other projects that use Racket in anger.
I'll always have a soft spot in my heart for Armed Bear because that JVM library ecosystem is enormous https://github.com/armedbear/abcl
The Racket folks have always been most helpful and never turned down a request to fix anything.
Apologies that may have come across as more accusative than I intended. I was just surprised that whatever missing(?) feature or behavior that would cause one to move off of Racket wouldn't be of interest to other Racket users
11 replies →
As Racket used "in anger" is only something I have seen in the biomedical community, mediKanren should tick the box.
https://minikanren.org/workshop/2020/minikanren-2020-paper7....
As someone who runs a website based on the Arc code that was opened sourced... I'd love to be able to use Clarc.
what is the site?
https://twostopbits.com
I use the HN Arc code, but the site is about retro computing and gaming.
1 reply →
Are there any other popular (>10k DAUs) sites that still use an esoteric, homegrown tech stack? If you have worked on them, what do you think, is it a legacy mess nobody wants to touch, or a pleasure to work with?
PG made an assertion once that websites (in contrast to desktop software) are free to use any stack of their choosing, as long as it can take in HTTP requests and output JSON or HTML. This intuitively seems to be true, especially so with how powerful modern machines can get, but it seems like it hasn't increased stack diversity much.
The advantages of boring technology and "resume-driven development" seem to outweigh whatever gains you may get from using something custom.
Are there any other popular (>10k DAUs) sites that still use an esoteric, homegrown tech stack? If you have worked on them, what do you think, is it a legacy mess nobody wants to touch, or a pleasure to work with?
I do. It's absolutely lovely.
We can make decisions based on what's best for the user, and not based on what the latest fad is.
In the time I've been in charge of this company's web sites, we've reduced cost drastically, improved reliability, and cut time to production in half.
Hiring can be problematic, because there's a lot of people out there who can't think through problems; or only know how to do x in one tool, and are unwilling/unable to learn something else.
The big keys are: We're not a tech company, so management doesn't care what we do, as long as it gets done. We build a lot of our own tools, so they fit perfectly into our workflow, which makes them and us more efficient. And we don't have to have 99.999999% uptime for no reason. Management is OK if the web sites are slow or unavailable for a few minutes each week, as long as they're back to normal in less time than it takes for someone to call and complain. And our clients love to call and complain.
Hmmm. Does that mean we'll get dark mode now?
uBlock origin filter:
Does not work in embedded browsers in RSS readers. We need a proper site CSS, not client-side patches.
2 replies →
https://news.ycombinator.com/item?id=23199062
That thread is 5 years old, and nothing really came out of it.
The genius solution in there is probably this one:
...which you can try by doing this in the browser console:
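I can't reproduce the exact snippet here, but a generic quick-and-dirty dark mode you can paste into the console looks something like this (my own sketch, not necessarily the one from that thread):

```javascript
// Hypothetical console snippet, not the one referenced above: invert every
// color, then rotate hue 180° so non-gray colors keep roughly their hue.
const darkFilter = "invert(1) hue-rotate(180deg)";
if (typeof document !== "undefined") {
  document.documentElement.style.filter = darkFilter;
}
```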
But I get that there are a lot of opinions. Just try one, put up a vote over a week, do it over 4-6 weeks, settle on the one that has the best feedback...
2 replies →
I use the awesome “Dark Reader” browser extension, which gives you dark mode on any website.
Does not work in in-app browsers.
Considering Hacker News thinks font-size:9pt is acceptable for body text in 2025, don't hold your breath.
This is what cmd +/- is for
I like it, in fact my standard terminal font size is even smaller. I hate all the modern websites wasting tons of whitespace, so that you need to hit C-- ~3 times to make it usable.
What's wrong with that?
8 replies →
Can't you use tampermonkey or a similar tool that lets you apply your own stylesheet?
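For instance, a minimal Tampermonkey-style sketch (the selectors below are my guesses at HN's markup, and may be wrong or break with site changes):

```javascript
// ==UserScript==
// @name         HN dark (sketch)
// @match        https://news.ycombinator.com/*
// @grant        none
// ==/UserScript==
// Inject a user stylesheet on page load. Selectors are assumptions about
// HN's markup, not verified against the live site.
const hnDarkCss = [
  "body, #hnmain, td { background: #1d1f21 !important; color: #ccc !important; }",
  "a { color: #8ab4f8 !important; }",
].join("\n");

if (typeof document !== "undefined") {
  const style = document.createElement("style");
  style.textContent = hnDarkCss;
  document.head.appendChild(style);
}
```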
The OP isn't really asking for a "dark mode" like a literal reading of his comment might suggest. He's asking for an officially supported dark mode that evolves with the site and doesn't break random functionality one day. It's easy to use Stylist or TamperMonkey to make a dark mode that works at one instant of time. It's much harder to maintain one indefinitely in the face of constant changes made by developers not concerned with breaking your work, which they probably don't even know about.
16 replies →
I don't read HN in normal browsers. If you read the RSS feed and click through, for instance, it's instant white flash from the embedded browser in the RSS reader, which cannot be customized but honors dark mode.
5 replies →
There is an API somewhere, could wrap that with whatever you feel like.
That’s not really the point, my RSS reader’s in-app browser couldn’t deal with that.
Ask your browser for the reading mode
That does not prevent a big white flash in the middle of the night, and does not work inside all in-app browsers.
Definitely makes more sense than Racket imo, Common Lisp is a lot more pragmatic and SBCL is like magic.
Rewrites are definitely not “always a bad idea” as Joel Spolsky once said. What they are is highly situational.
HN has a bunch of factors that make it amenable to a rewrite. It has gigantic scale, not a ton of complexity at a business level, and what it “is” is pretty slow moving at this point.
That means it’s not a great example to justify a rewrite at work :) that said the success does prove rewrites are possible. Bravo on shipping!
But they didn't rewrite HN; they created a different implementation of the language it's written in.
The success of Hacker News doesn’t come from flashy features, but from a community that consistently produces high-quality content. That said, I can’t help but wonder if there are any updates to the UI/UX in the works, LOL.
Is it still Paul Graham/Robert Morris working on it? Skimmed the article but did not see a ref.
need to check out what it adds to CL: http://arclanguage.org/
Alas, they moved on long ago.
Not saying "security by obscurity never works", but I am saying it's a shame the defensive wall of anti-spam/abuse depends on some secrecy. If the concern with the secret sauce becoming known is how easy it would then be to defeat, it's a low wall. But as long as it stays secret, it's doing its job.
I'm not an infosec professional, or a competent LISP coder, I'm not in a position to say what's better. This is just what pros in the field say to me.
(It's mentioned in the article)
> Hacker News now runs on top of Common Lisp
> there’s now an Arc-to-JS called Lilt, and an Arc-to-Common Lisp called Clarc.
> But Clarc’s code isn’t released, although it could be done:
> Releasing the new HN code base however wouldn’t work:
I'm not sure if I follow all that. If the Clarc is not released, then how does HN run on it?
> I'm not sure if I follow all that. If the Clarc is not released, then how does HN run on it?
The same person who writes Clarc also deploys HN (an assumption, but it seems dang can do both :) ), so using unreleased software is just a matter of navigating to the right local directory.
I assume it's worth it to keep it in Arc and not rewrite in something more widely available, is that so?
https://news.ycombinator.com/item?id=23483715
Thanks for clarifying
Hacker News is the url I use to test most fussy connections because it's so light, and will load under even the slightest trickle of data. When I was doing research in Ghana, it was the only site I could get to reliably load for news in the field, and thus spent a month reading only HN (good luck getting the _New York Times_ to load without a gigabit connection). Appreciate how it stays — and has stayed — svelte and fast throughout the years.
That's great, but I think they should improve the responsiveness. It's still a bit wonky. You can see it on the top right corner if you narrow the screen and then widen it
I'm sure if you came up with a tiny diff of the CSS that would improve the user experience without degrading anything else, and send it to hn@ycombinator.com, they can get it deployed :)
Sure, if they paid for diffs. I don't think they do. Dev time is not free :)
You should have rewritten it in sh and called Sharc, or BASIC and called it BArc, or PHP and called it PHarc, or ML and called it MLarcy. Or license it under GPL-3 and call it Gnarc!
Is this open source software that I can run my own hacker news as well?
http://arclanguage.org/
> anti-abuse measures that would stop working if people knew about them
A heavy lesson in that for other implementors of discussion-forum cum blog-comment systems.
When we will get to see the code of clarc? I hope that there is no "business logic" relevant to running HN in the language implementation, is there?
That's addressed in the article. There absolutely is:
> Much of the HN codebase consists of anti-abuse measures that would stop working if people knew about them. Unfortunately, separating out the secret parts would by now be a lot of work. The time to do it will be if and when we eventually release the alternative Arc implementations we’ve been working on.
I think the (anonymous? I can't find a name) author of the OP slipped slightly at the end of that otherwise-impeccable sequence of quotes. That last link (http://arclanguage.org/) points to Arc, not Clarc. It includes a sample application which is an early version of HN, scrubbed of anything HN- or YC-specific.
3 replies →
I am asking about the core language implementation. No need to publish the whole source code of HN, just the source code of Clarc. You do not have "anti-abuse measures" in the language implementation and runtime, do you? Is it that hard to separate a language implementation from code written in the language?
1 reply →
Random question, how big is hacker news? It’s plain text so I’d imagine it’s reasonably compact?
<back of napkin>
Based on the current id, about 45,000,000 items.
Assuming 1KB per item, about 45GB.
So with code and OS, probably it would fit on a $10 thumb drive without compression.
</back of napkin>
If I am within a couple of orders of magnitude, it is hard for me to see a benefit from compression.
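The arithmetic above as a one-liner sanity check (item count and per-item size are the same rough assumptions as in the napkin math):

```javascript
// ~45M items (from the current max item id) at an assumed ~1 KB each.
const items = 45_000_000;
const bytesPerItem = 1024;
const totalGB = (items * bytesPerItem) / 1024 ** 3;
console.log(totalGB.toFixed(1)); // "42.9" — i.e. the ~45 GB ballpark
```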
I cannot believe how people are praising a centralized, heavily censored links site. Take a look at HN's privacy policy, they sure do make money from monetizing every single thing you say and also fingerprint you all the time. We should have a decentralized link sharing site
Centralised and heavily censored, yes, but AFAICT the censoring by-and-large respects free speech and diversity of opinion, while effectively stopping spam / abuse, and thus maintaining the high quality of content that keeps us all coming back.
And surely HN is way less monetised (and therefore way more trustworthy) than virtually every other links site / every social media platform out there?
Depends on what you consider "monetization." Are there ads? Not explicitly, but YC startups do advertise themselves in threads here, and aspiring entrepreneurs do use visibility on HN (both to users and YC) as part of their strategy. If you think this is just a forum of nerds engaging in organic conversation and intellectual diversion, you'd be mistaken. Your attention is currency here as much as anywhere.
Lemmy or Mastodon, depending on how you want to do it.
Dang! Thanks dang!
could probably have saved themselves a lot of trouble and asked Claude to rewrite it in C++
Is the bel project still alive?
Look, I like the way HN looks but there aren't many sites that essentially look like bare html but still struggle with displaying more than 300 comments.
What do you mean? With the current internals, a 300-comment HN page weighs ~500 KB; different internals would hardly be more compact. Where is the «struggle»?
Not sure about 300 comments but a post with 5300 comments takes about 10 seconds to load:
https://news.ycombinator.com/item?id=43208973
In March this year, HN changed its pagination behavior. Previously, one needed to click through multiple pages to read more than X comments; since around March, all comments are served at once.
A post having over a thousand comments is extremely rare so not a big deal.
2 replies →
HN has been known to fail in the past with heavy or high velocity threads to the point that dang has asked people to log off en masse to reduce server load. That shouldn't happen for a simple text forum.
> Much of the HN codebase consists of anti-abuse measures that would stop working if people knew about them. Unfortunately, separating out the secret parts would by now be a lot of work. The time to do it will be if and when we eventually release the alternative Arc implementations we’ve been working on.
Is this a case where security through obscurity is good, or bad? Legit question. I am curious to read the responses it may prompt.
I found this though: https://news.ycombinator.com/item?id=27457350
> There are a lot of anti-abuse features, for example, that need to stay secret (yes we know, 'security by obscurity' etc., but nobody knows how to secure an internet forum from abuse, so we do what we know how to do). It would be a lot of work to disentangle those features from the backbone of the code.
The question still stands for curiosity!
The OP got everything right except that bit. This is a reason for not open-sourcing HN (the application), but it doesn't relate to open-sourcing Clarc (the language implementation). We could do that without revealing any anti-abuse stuff.
More at https://news.ycombinator.com/item?id=44099560.
Abuse of this sort isn't a security issue in the network sense. i.e. the security of Hacker News is not imperiled by people creating spam accounts, but nonetheless we want to stop that.
Obscurity is extremely good at filtering out low to medium skilled griefers. It won’t stop anyone who is highly motivated, but it will slow them down significantly.
Hacker News is small enough that obscurity would give moderators enough time to detect bad actors and update rules if necessary.
Is HN really that small, considering "HN hug of death"? If it really is small, then hey, we may have already talked! :)
2 replies →
This is related to Kerckhoffs's principle:
"The design of a system should not require secrecy, and compromise of the system should not inconvenience the correspondents"
This means that all of the security must reside in the key and little or nothing in the method, as methods can be discovered and rendered ineffective otherwise. Keep in mind that this is for communication systems where it is certain that the messages will be intercepted by a hostile agent, and we want to prevent this agent from reading the messages.
When implementing modern cryptographic systems, it is very easy to misuse the libraries, or to try to reimplement cryptographic ideas without a deep understanding of the implications, and this leads to systems that are more vulnerable than intended.
Security by obscurity is the practice of some developers to reinvent cryptography by applying their cleverness to new, unknown cryptosystems. However, to do this correctly, it requires deep mathematical knowledge about finite fields, probability, linguistics, and so on. Most people have not spent the required decades learning this. The end result is that those "clever" systems with novel algorithms are much less secure than the tried and true cryptosystems like AES and SSL. That's why we say security by obscurity is bad.
Now, going back to the main topic: Hacker News is not a cryptographic system where coded messages are going to be intercepted by a hostile actor. Therefore Kerckhoffs's principle doesn't apply. There's no secret key that can be changed to restore the system's protection once it is discovered.
There is a series of measures that have worked in the past, and are still working today despite a huge population of active spamming and disrupting agents, and they should be kept secret as long as they keep working.
Is this a case where security through obscurity is good, or bad? Legit question. I am curious to read the responses it may prompt.
To me; philosophically; and to a first approximation, all security is through obscurity.
For example encryption works for Alice so long as Bob can't see the key...
... or parking the Porsche in the garage, reduces the likelihood someone knows there is a Porsche and reduces the likelihood they know what challenges exist inside the garage. Now put a tall hedge and a fence around it and the average passerby has to stop and think "there's probably a garage behind that barrier."
To put it another way, out of sight has a positive correlation to out of mind.
Yes, of course, a determined and well-funded Bob means the obscurity has to contend with Bob's determination and budget. If Bob is willing to use a five dollar wrench, Alice might just tell Bob the key.
There are forks of what I assume is the scrubbed HN codebase, e.g. https://github.com/jgrahamc/twostopbits
Read earlier in the thread that they run the open sourced version https://news.ycombinator.com/item?id=44099315
This likely isn't so much "security through obscurity" because it's not really about security in the traditional sense but instead about anti-griefing measures.
> Much of the HN codebase consists of anti-abuse measures that would stop working if people knew about them.
We’ve all heard about how “security through obscurity” isn’t real security, but so many simple anti-abuse measures are very effective as long as their exact mechanism isn’t revealed.
HN’s downvote and flagging mechanisms make for quick cleanup of anything that gets through, without putting undue fatigue on the users.
Things called "security" that don't follow Kerckhoffs's principle aren't security. There are a lot of things adjacent to security, like spam prevention, that sometimes get dumped into the same bucket, but they're not really the same.
Security measures uphold invariants: absent cryptosystem breaks and implementation bugs, nobody is forging a TLS certificate. I need the private key to credibly present my certificate to the public. Hard guarantee, assuming my assumptions hold.
Likewise, if my OS is designed so sandboxed apps can't steal my browser cookies, that's a hard guarantee, modulo bugs. There's an invariant one can specify formally --- and it holds even if the OS source code leaks.
Abuse prevention? DDoS avoidance? Content moderation? EDR? Fuzzy. Best effort. Difficult to verify. That these things are sometimes called security products doesn't erase the distinction between them and systems that make firm guarantees about upholding formal invariants.
HN abuse prevention belongs to the security-adjacent but not real security category. HN's password hashing scheme would fall under the other category.
This is simply not true. At the highest levels, security is about distributing costs between attackers and defenders, with defenders having the goal of raising costs past a threshold where attacks are no longer reasonable expenses for any plausible attacker. Obfuscation, done well, can certainly play a role in that. The Blu-ray BD+ scheme is a great case study on this.
3 replies →
> We’ve all heard about how “security through obscurity” isn’t real security
This is something that programmers enjoy repeating but it has never been true in the real world.
You can only say that if you have no idea about cryptography. It is definitely true in the real world, but it needs the right context to be relevant.
It is related to Kerckhoffs's principle: "The design of a system should not require secrecy, and compromise of the system should not inconvenience the correspondents"
This means that all of the security must reside in the key and little or nothing in the method, as methods can be discovered and rendered ineffective otherwise. Keep in mind that this is for communication systems where it is certain that the messages will be intercepted by a hostile agent, and we want to prevent this agent from reading the messages.
When implementing modern cryptographic systems, it is very easy to misuse the libraries, or to try to reimplement cryptographic ideas without a deep understanding of the implications, and this leads to systems that are more vulnerable than intended.
Security by obscurity is the practice of some developers to reinvent cryptography by applying their cleverness to new, unknown cryptosystems. However, to do this correctly, it requires deep mathematical knowledge about finite fields, probability, linguistics, and so on. Most people have not spent the required decades learning this. The end result is that those "clever" systems with novel algorithms are much less secure than the tried and true cryptosystems like AES and SSL. That's why we say "security by obscurity" is bad.
Now, going back to the main topic: Hacker News is not a cryptographic system where coded messages are going to be intercepted by a hostile actor. Therefore Kerckhoffs's principle doesn't apply.
it does not apply to the "real" world, but the digital one
> Much of the HN codebase consists of anti-abuse measures that would stop working if people knew about them. Unfortunately, separating out the secret parts would by now be a lot of work
The business logic is encoded into the original structure, making migration to anything different effectively impossible without some massive redesign.
This, I think more than any response, indicates why the philosophy of “it’s working, don’t touch it” will always win and new feature requests will be rejected.
HN didn’t depaginate based on user desires, it was based on internal tooling making that feature available within the context of the HN overall structure.
HN has zero financial or structural incentive to do anything but change as little as possible. That’s why this place, unique on the internet at this point unfortunately, has lasted.
HN is not *trying* to grow; it’s trying to do as little as possible while staying alive. By default it’s easier to maintain that way, because its structure isn’t built for growth, and changing the structure would break the encoded rituals (anti-abuse measures).
Something to think about when you’re trying to solve for many problems like “legacy code” “scaling needs” etc… it all comes back to baseline incentives
It's trying to grow in the sense that we want new users. Otherwise it will get stale. I fear that's already happening.
Is there data or otherwise that HN is growing stale? Or more of a general vibe
I mean this in the spirit of genuine curiosity: what staleness risk is there given the massive breadth of experience the existing userbase already has?
5 replies →
Man, I wish GUIs in general were like this. Not that I don't want progress, but some interactions (especially in basic OS stuff) really don't need to be redone every 5 years.
Muscle memory belongs on a balance sheet.
Honestly I don't understand why more things aren't like this. I don't need a revamped landing page for my GP/council/department/directorate/organisation/etc - just finish the previous version with the features that were promised. I don't need another half-assed version that will also be abandoned at 40-50%.
Business incentives
HN is odd in that it’s subsidized by a hyper capitalist who still has at least a romantic concept of the “early” internet
My assumption is that HN will die when the current leadership turns over because while HN does help marketing/intel for YC it’s a cost center
All cost centers eventually get closed
1 reply →
Just a note on the grammar of this: "HN runs on top of SBCL since a few months".
"since a few months" sounds wrong; it isn't idiomatic English. Consider replacing it with:
"HN has been running on top of SBCL for a few months now."
It sounds wrong because "since" is generally combined with a point in time, but "a few months" is a duration, not a date. Also the first paragraph switches tense forms, which makes it stand out even more.
Rewriting it to "since a few months ago" seems to be the easiest way to fix this, though my favorite way to express the same thing is "as of a few months ago".
It should be noted that the author, like most people you're likely to interact with in this bubble, is not a native speaker of English. What matters is getting the message across - which they did.
You'll end up not being very productive if you spend your time pointing all of these little slips out.
I have routinely noticed this sort of construction since a few years [1]. Does it correspond to standard usage in other languages? If so, which ones?
---
[1] See what I did there? Eh? Eh?
2 replies →
I appreciate the feedback, I edited the post. I actually should have noticed it, it's a grammar lesson I remember quite well and a mistake I spot in others' posts :] My first wording was mentioning "September of 2024", which I replaced with "a few months" at the last minute.
[dead]
[flagged]
[flagged]
[flagged]
Hacker News has so little capability, almost any experienced developer using a modern AI Coding Agent could replicate the entire thing in a weekend, and perhaps in a single day.
I'm not saying it's bad, or criticizing anyone. I mean it does what it does, and it works, and people like it. But no one should care what technology they're using because there's just nothing impressive going on from a technical perspective.
Good software tends to resemble an iceberg - what you see is just a small bit of what's actually in there. I wouldn't be so hasty in assumptions here.
I should've been more clear that my claim was only about the ability to post messages, have them stored in a database, and then have a tree-view that displays and edits the posts. That's 99% of what users do right? That entire functionality could be done by an AI Agent nowadays in about 10 minutes.
6 replies →
I don't get this attitude at all. I would think most programmers/readers are interested in the gears and cogs behind something they use on a regular basis. Especially if the work in web, backend, etc.
I've been coding for 35 years. I love HackerNews. Because it works. And it's all we need. But holding it up as some example of engineering would be silly. It's just a tree editor. I implemented a better tree editor myself today from scratch.
If you want to know what stuff I'm impressed by it's things like Mastodon, Nostr apps, and stuff that does more than edit a simple content tree of nothing but plain text. We can't even upload images. Can't do markdown. lol. It's definitely a "Less is More" app, from two decades ago. Just agree to agree with me on that. It's not an insult to them. It's just an observation.