For anyone who liked this, I highly suggest you take a look at the CuriousMarc youtube channel, where he chronicles lots of efforts to preserve and understand several parts of the Apollo AGC, with a team of really technically competent and passionate collaborators.
One of the more interesting things they have been working on, is a potential re-interpretation of the infamous 1202 alarm. It is, as of current writing, popularly described as something related to nonsensical readings of a sensor which could (and were) safely ignored in the actual moon landing. However, if I remember correctly, some of their investigation revealed that actually there were many conditions which would cause that error to have been extremely critical and would've likely doomed the astronauts. It is super fascinating.
And that's why it's harder (or easier?) to make the same landing again -- we're taking far fewer chances. Today we know of way more failure modes than they did back then.
They sent people up in a tin can with the bare minimum computational power to manage navigation and control sequencing. It was barely safer than taking a barrel over Niagara Falls. We do have much more capable and reliable technology.
1 reply →
It's a miracle nobody died in flight during the program. Exploding oxygen tank, rockets shaking themselves to pieces during launch, getting hit by lightning on top of a flying skyscraper full of kerosene and liquid oxygen....
9 replies →
"popularly described" and how it's currently understood are two different things. Because it's hard to explain to lay people, it's popularly described in a number of simplified ways, but it's well understood.
Related topic on CuriousMarc and co.’s AGC restoration: https://news.ycombinator.com/item?id=47641528
Still my all time favorite snippet of code.
https://github.com/chrislgarry/Apollo-11/blob/master/Luminar...
Cadr here has no relation with lisp cadr, right?
Correct.
CADR is an AGC assembly directive defining a "complete address" including a memory bank, in this case a subroutine to be called by the preceding BANKCALL (TC = transfer control, i.e., store return address and jump to subroutine), which switches to the memory bank specified in the CADR before jumping to the address specified in the CADR.
For a brief explanation of AGC subroutine calls, see [1].
CAR and CDR in Lisp come from the original implementation on the IBM 704, where pointers to the two components of a cons cell were stored as the (C)ontents of the (A)ddress and (D)ecrement fields of a (R)egister (memory word).
(CADR x) is just shorthand for (CAR (CDR x)), i.e., a function that returns the second element of a list (assuming x is a well-formed list).
[1] https://epizodsspace.airbase.ru/bibl/inostr-yazyki/American_...
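To make the Lisp side of this concrete, here is a small illustrative sketch (in Rust, nothing to do with the AGC) modelling cons cells as an enum, showing that (CADR x) really is just (CAR (CDR x)), the second element:

```rust
// Illustrative sketch: Lisp cons cells modelled as a Rust enum,
// showing that cadr is the composition of car and cdr.

#[derive(Debug, Clone, PartialEq)]
enum Sexp {
    Nil,
    Atom(i64),
    Cons(Box<Sexp>, Box<Sexp>),
}

use Sexp::{Atom, Cons, Nil};

// car: the (C)ontents of the (A)ddress part -- the head of a cons cell.
fn car(x: &Sexp) -> &Sexp {
    match x {
        Cons(a, _) => a,
        _ => &Nil, // real Lisp would signal an error; we simplify
    }
}

// cdr: the (C)ontents of the (D)ecrement part -- the tail of a cons cell.
fn cdr(x: &Sexp) -> &Sexp {
    match x {
        Cons(_, d) => d,
        _ => &Nil,
    }
}

// cadr is just composition: car of cdr, i.e. the second element.
fn cadr(x: &Sexp) -> &Sexp {
    car(cdr(x))
}

fn main() {
    // The list (1 2 3) as nested cons cells: (1 . (2 . (3 . nil)))
    let list = Cons(
        Box::new(Atom(1)),
        Box::new(Cons(
            Box::new(Atom(2)),
            Box::new(Cons(Box::new(Atom(3)), Box::new(Nil))),
        )),
    );
    assert_eq!(*cadr(&list), Atom(2)); // second element of the list
    println!("cadr of (1 2 3) = {:?}", cadr(&list));
}
```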
https://news.ycombinator.com/item?id=16008239
I'm having a really bad Mandela effect right now where I remember some XKCD that wrote a poem about this. Maybe I'm thinking of another comic.
Can you explain this to me?
I think the point was the comments more than any of the code requiring explanation. There's nothing more permanent than a temporary solution
Wish I could... but I know of it from a previous HN post, where there is some discussion on its purpose.
https://news.ycombinator.com/item?id=22367416
I think it's interesting that they found what seems to be a real bug (it should be independently verified by experts). However, I find their story-mode dramatization of how it could have happened to be poorly researched and fully in the realm of fiction. An elbow bumping a switch, the command module astronaut unable to handle the issue, with only a faux nod to the fact that a reset would have cleared up the problem and was part of their training. So it's really just building tension and storytelling to make the whole post more edgy. And yes, this is 100% AI-written prose, which makes it even more distasteful to me.
> An elbow bumping a switch [..] really just building tension and storytelling to make the whole post more edgy.
A guarded switch, no less.
But personally I'm trying to be more generous about this sort of thing: it is very very difficult to explain subtle bugs like this to non-technical people. If you don't give them a story for how it can actually happen, they tend to just assume it's not real. But then when you tell a nice story, all us dry aged curmudgeons tut tut about how irreverent and over the top it is :)
Finding the middle ground between a dry technical analysis and dramatization can be really hard when your audience is the entire internet.
[flagged]
The specs were derived from the code, not from the original requirements. So this is "we modeled what the code does, then found the code doesn't do what we modeled." That's circular unless the model captures intent that the code doesn't, and intent is exactly what you lose when you reverse-engineer specs. Would love to see this applied to a codebase where the original requirements still exist.
But this seems like a reasonable approach for reverse-engineering, and it seems the bug they found is real.
The code was inconsistent with itself: that's not circular. Every path dropped the lock except one.
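To illustrate the pattern being described (every path releases the lock except one), here is a hypothetical sketch; the names and structure are invented, not the actual AGC assembly:

```rust
// Hypothetical reconstruction of the *pattern* only: a manually managed
// lock flag where every exit path clears it except one early return.

#[derive(Debug)]
struct GyroChannel {
    locked: bool, // mirrors a software lock word, not a Rust Mutex
}

impl GyroChannel {
    fn drive(&mut self, hardware_busy: bool) -> Result<(), &'static str> {
        if self.locked {
            return Err("channel busy"); // caller is expected to retry
        }
        self.locked = true; // acquire

        if hardware_busy {
            // BUG: early exit without clearing the lock. Every other
            // path releases it; this one leaks it, so the channel is
            // wedged for all future callers.
            return Err("hardware busy");
        }

        // ... issue the gyro torquing command here ...
        self.locked = false; // release on the normal path
        Ok(())
    }
}

fn main() {
    let mut ch = GyroChannel { locked: false };
    assert!(ch.drive(false).is_ok());  // normal call: lock released
    assert!(ch.drive(true).is_err());  // buggy path: lock leaked
    assert!(ch.drive(false).is_err()); // wedged: a good call now fails too
    println!("lock still held after the buggy path: {}", ch.locked);
}
```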
I took it as the extracted spec was weird and they looked into it.
Has someone verified this was an actual bug?
One of AI’s strengths is definitely exploration, e.g. in finding bugs, but it still has a high false positive rate. Depending on the context, that may or may not matter.
Also, one has to be aware that there are a lot of bugs that AI won’t find but humans would.
I don’t have the expertise to verify this bug actually happened, but I’m curious.
It's not even clear whether AI was used to find the bug: they mention modeling the software with an "AI-native" language, whatever that means. What's also not clear is how they found themselves modeling the gyro software of the Apollo code to begin with.
But, I do think their explanation of the lock acquisition and the failure scenario is quite clear and compelling.
They have some spec language, and here,
https://github.com/juxt/Apollo-11/tree/master/specs
they have many thousands of lines of it.
Anyway, it seems it would take serious work from a dedicated professional to determine whether this bug is real. And considering this looks like an ad for their business, I would be skeptical.
> It's not even clear if AI was used to find the bug: they mention modeling the software with an "ai native" language, whatever that means.
Could the "AI native language" they used be Apache Drools? The "when" syntax reminded me of it...
https://kie.apache.org/docs/10.0.x/drools/drools/language-re...
(Apache Drools is an open source rule language and interpreter to declaratively formulate and execute rule-based specifications; it easily integrates with Java code.)
How did you pick out AI native and miss the rest of the SAME sentence?
> We found this defect by distilling a behavioural specification of the IMU subsystem using Allium, an AI-native behavioural specification language.
2 replies →
>It's not even clear if AI was used to find the bug
It's not even clear you read the article
5 replies →
> It's not even clear if AI was used to find the bug
The intro says “We used Claude and Allium”. Allium looks like a tool they’ve built for Claude.
So the article is about how they used their AI tooling and workflow to find the bug.
2 replies →
I've had a look at the (vibe coded) repro linked in the article to see if it holds up: https://github.com/juxt/agc-lgyro-lock-leak-bug/blob/c378438...
The repro runs on my computer, that's positive.
However, Phase 5 (deadlock demonstration) is entirely faked. The script just prints what it _thinks_ would happen. It doesn't actually use the emulator to prove that its thinking is right. Classic Claude being lazy (and the vibe coder not verifying).
I've vibe coded a fix so that the demonstration is actually done properly on the emulator. And also added verification that the 2 line patch actually fixes the bug: https://github.com/juxt/agc-lgyro-lock-leak-bug/pull/1
> However, Phase 5 (deadlock demonstration) is entirely faked. The script just prints what it _thinks_ would happen.
I see this a lot in AI slop, which I mostly get exposed to in the form of shitty pull requests.
You know when you're trying to explain Test-Driven Development to people and you want to explain how you write the simplest thing that passes the test and then improve the test, right? So you say "I want a routine that adds VAT onto a price, so I write a test that says £20+VAT is £24, and the simplest thing that can pass that test is just returning 24". Now you know and I know that the routine and its test will break if you feed it any value except £20, but we've proved we can write a routine and its test, and now we can make it more general.
Or maybe we don't care and we slap a big TODO: make this actually work on there because we don't need it to work properly now, we've got other things to do first, and every price coming up as £20+VAT is a useful indicator that we still have to make other bits work. It doesn't matter.
The problem is that AI slop code "generators" will just stop at that point and go "THERE LOOK IT'S DONE AND IT'S PERFECT!" and the people who believe in the usefulness of AI will just ship it.
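The VAT example above, taken literally (hypothetical code, just to make "the simplest thing that passes the test" concrete):

```rust
// The simplest thing that passes the single test is a hard-coded answer.

fn add_vat(price_pounds: f64) -> f64 {
    // TODO: make this actually work. This passes the one test we have,
    // and only that test -- exactly the stub a slop generator would
    // declare "done and perfect".
    let _ = price_pounds;
    24.0
}

fn main() {
    assert_eq!(add_vat(20.0), 24.0); // the one test: £20 + VAT = £24
    // add_vat(10.0) also returns 24.0 -- the generalisation step
    // (price_pounds * 1.20) is precisely what hasn't been written yet.
    println!("test passed");
}
```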
More likely the LLM misinterpreted something and hallucinated an error. Just yesterday, Claude Code hallucinated itself into an infinite loop.
Super interesting. I wish this article wasn’t written by an LLM though. It feels soulless and plastic.
It's not setting off any LLM alarm bells to me. It just reads like any other scientific article, which is very often soulless
It repeats a few points too many times for a professional writer to not catch it.
I don’t mind that they let an LLM write the text, but they should at least have edited it.
the subheadings are extremely AI IMHO
1 reply →
Any specific sections that stick out? Juxt has had really great articles in the past, even before LLMs, and I know for a fact they don't lack the expertise or knowledge to write for themselves if they wanted to. While I haven't completely read this article yet, it'd surprise me if they just let LLMs write articles for them today.
Here's one tell-tale of many: "No alarm, no program light."
Another one: "Two instructions are missing: [...] Four bytes."
One more: "The defensive coding hid the problem, but it didn’t eliminate it."
36 replies →
For what it’s worth, Pangram thinks this article is fully human-written: https://www.pangram.com/history/f5f68ce9-70ac-4c2b-b0c3-0ca8...
The AI writing detectors are very unreliable. This is important to mention because they can trigger in the opposite direction (reporting human written text as AI generated) which can result in false accusations.
It’s becoming a problem in schools as teachers start accusing students of cheating based on these detectors or ignore obvious signs of AI use because the detectors don’t trigger on it.
Then pangram isn't very good, because that article is full of Claude-isms.
11 replies →
Pangram doesn't reliably detect individual LLM-generated phrases or paragraphs among human written text.
It seems to look at sections of ~300 words. And for one section at least it has low confidence.
I tested it by getting ChatGPT to add a paragraph to one of my sister comments. Result is "100% human" when in fact it's only 75% human.
Pangram test result: https://www.pangram.com/history/1ee3ce96-6ae5-4de7-9d91-5846...
ChatGPT session where it added a paragraph that Pangram misses: https://chatgpt.com/share/69d4faff-1e18-8329-84fa-6c86fc8258...
1 reply →
So you're saying Pangram isn't worth much?
And it turns out at least the part about Rust and locks is plain wrong. What a surprise: https://news.ycombinator.com/reply?id=47676938&goto=item%3Fi...
AI tends to write like it is getting paid by the word. This article wasn't too egregious but an editor could have improved it.
"Written by an LLM" based on what data or symptom?
I'm starting to develop a physiological response when I recognize AI prose. Just like an overwhelming frustration, as if I'm hearing nails on chalkboard silently inside of my head.
I feel ya... and I have to admit that in the past I tried it for one article on my own blog, thinking it might help me express myself. Though when I read that post now, I don't even like it myself; it's just not my tone.
So I decided not to use any LLM for blogging again, and even though it takes a lot more time without one (I'm not a very motivated writer), I prefer to release something that I wrote rather than some LLM stuff that I wouldn't read myself.
This is the top reply on a substantial percentage of HN posts now and we should discourage it.
It is:
- sneering
- a shallow dismissal (please address the content)
- curmudgeonly
- a tangential annoyance
All things explicitly discouraged in the site guidelines. [1]
Downvoting is the tool for items that you think don't belong on the front page. We don't need the same comment on every single article.
[1] - https://news.ycombinator.com/newsguidelines.html
It's not a shallow dismissal; it's a dismissal for good reason. It's tangential to the topic, but not to HN overall. It's only curmudgeonly if you assume AI-written posts are the inevitable and good future (aka begging the question). I really don't know how it's "sneering", so I won't address that.
3 replies →
> Downvoting is the tool for items that you think don't belong on the front page.
You can’t downvote submissions. That’s literally not a feature of the site. You can only flag submissions, if you have more than 31 karma.
2 replies →
The site guidelines were written pre-AI and stop making sense when you add AI-generated content into the equation.
Consider that by submitting AI generated content for humans to read, the statement you're making is "I did not consider this worth my time to write, but I believe it's worth your time to read, because your time is worth less than mine". It's an inherently arrogant and unbalanced exchange.
2 replies →
No idea why you're being downvoted. I've done my bit to redress the balance, I hope others do the same.
I did not get any “written by LLM vibes”. I enjoyed it and it pulled me in to keep reading.
Who gives a crap if it was written by an LLM. Read it or don’t read it. Your choice.
If it conveys the idea and you learn something new, then it’s mission accomplished.
You have no evidence that it was.
Not to single out your comment, but it feels like it's gotten to the point where HN could use a rule against complaining about AI generated content.
It seems like almost every discussion has at least someone complaining about "AI slop" in either the original post or the comments.
I disagree. I like to read articles and explore Show HN posts, but in the past 6 months I’ve wasted a lot of time following HN links that looked interesting but turned out to be AI slop. Several Show HN posts lately have taken me to repos that were AI generated plagiarisms of other projects, presented on HN as their own original ideas.
Seeing comments warning about the AI content of a link is helpful to let others know what they’re getting into when they click the link.
For this article the accusations are not about slop (which will waste your time) but about tell-tale signs of AI tone. The content is interesting, but you know someone has been doing heavy AI polishing, which gives articles a laborious tone and has a tendency to produce a lot of words around a smaller amount of content (in other words, you’re reading an AI expansion of someone’s smaller prompt, which contained the original info you’re interested in).
Being able to share this information is important when discussing links. I find it much more helpful than the comments that appear criticizing color schemes, font choices, or that the page doesn’t work with JavaScript disabled.
2 replies →
You're suggesting this is the complainant's fault?
3 replies →
HN has gotten to the point where it’s not even worth clicking the link because of course it’s ai slop.
There is some real content in the haystack, but we almost need some kind of curator to find and display it rather than a vote system where most people vote on the title alone.
4 replies →
Stop voting up slop articles and I'll stop commenting on it.
2 replies →
I've seen way, way worse. Either someone LLM-polished something they already wrote, or they did their own manual editing pass.
The short sentence construction is the most suspicious, but I actually don't see anything glaring. It normally jumps out and hits me in the face.
>Hemingway's 4 Fast Rules For Effective Writing
1. Use Short Sentences
https://www.wordsthatsing.com.au/post/hemingway-rules
3 replies →
[flagged]
It's actually the second one I read that fits that description.
[flagged]
Software that ran on 4KB of memory and got humans to the moon still has undiscovered bugs in it. That says something about the complexity hiding in even the smallest codebases.
My guess is that in such low memory regimes, program length is very loosely correlated with bug rate.
If anything, if you try to cram a ton of complexity into a few kb of memory, the likelihood of introducing bugs becomes very high.
Well you don't have room for a lot of "defensive" code. You write the program to function on expected inputs, and hope that all the "shouldn't happen" scenarios actually don't happen.
Yet here we are compounding the issues by adding more and more layers to these systems... The higher the level it becomes the more security risks we take.
^ This is slop. Typical platitude that really means nothing.
Both the article and repo[1] are slop.
[1] In the repo, the "reproduce" is just a bunch of print statements about what would happen, the bug isn't actually triggered: https://github.com/juxt/agc-lgyro-lock-leak-bug/blob/c378438...
Another CTO "published" AI slop to get attention for their vibe-coded company that will disappear in two years. Tell me something new...
> The specs were derived from the code itself
Oh dear. I strongly suggest this author look specification up in a dictionary.
It's (what they're describing is) just reverse engineering. That's what reverse engineering is.
Fortunately reverse engineering too is in the dictionary - to help anyone mistaking it for spec generation.
3 replies →
is this bug the reason why the toilet malfunctioned?
I don't think Apollo 11's toilet malfunctioned; it was just not very good. Everything smelled like poop mixed with chemicals, and that was by design.
> Rust’s ownership system makes lock leaks a compile-time error.
Rust specifically does not forbid deadlocks, including deadlocks caused by resource leaks. There are many ways in safe Rust to deliberately leak memory - either by creating reference-count cycles, or via the explicit .leak() methods on various memory-allocating structures in std. It's also not entirely useless to do this - if you want a &'static reference to heap memory, Box::leak() does exactly that.
Now, that being said, actually writing code to hold a MutexGuard forever is difficult, but that's mainly because the Rust type system is incomplete in ways that primarily inconvenience programmers but don't compromise the safety or meaning of programs. The borrow checker runs separately from type checking, so there's no way to represent a type that both owns a lock and holds a guard borrowed from it at the same time. Only stacks and async types, both generated by compiler magic, can own a MutexGuard. You would have to spawn a thread and have it hold the lock and loop indefinitely[0].
[0] Panicking in the thread does not deadlock the lock. Rust's std locks are designed to mark themselves as poisoned if a MutexGuard is unwound by a panic, and any later attempt to lock them will yield an error instead of deadlocking. You can, of course, clear the poison condition in safe Rust if you are willing to recover from potentially inconsistent data half-written by a panicked thread. Most people just unwrap the lock error, though.
Someone please amend the title and add "using claude code" because that's customary nowadays.
Also add "AI can make mistakes". Thank you.
Thank you for your attention to this matter.
An application of their specification language, https://juxt.github.io/allium/
It seems the difference between this and conventional specification languages is that Allium's specs are in natural language, and enforcement is by LLM. This places it in a middle ground between unstructured plan files, and formal specification languages. I can see this as a low friction way to improve code quality.
Fascinating read. Well done. Everyone involved in the Apollo program was amazing and had many unsung heroes.
This is so insightfully and powerfully written I had literal chills running down my spine by the end.
What a horrible world we live in where the author of great writing like this has to sit and be accused of "being AI slop" simply because they use grammar and rhetoric well.
I was completely riveted the whole read. The description of Collins' dilemma is the first time I've seen an actual real world scenario described that might cause him to return to Earth alone.
If an LLM wrote that, then I no longer oppose LLM art.
I thought that was the least likeable part of the article. They speculated wildly, somehow making the leap that a trained astronaut would not resort to a computer reset if the problems persisted to weave the narrative that this bug was super-duper-serious indeed. They didn't need that and it weakened the presentation.
Are there any consequences for the Artemis 2 mission (ironic)?