Comment by stego-tech
10 days ago
This is delightfully unhinged, spending an amazing amount of time describing their model and citing their methodologies before getting to the meat of the meal many of us have been braying about for years: whether the singularity actually happens or not is irrelevant so much as whether enough people believe it will happen and act accordingly.
And, yep! A lot of people absolutely believe it will and are acting accordingly.
It’s honestly why I gave up trying to get folks to look at these things rationally as knowable objects (“here’s how LLMs actually work”) and pivoted to the social arguments instead (“here’s why replacing or suggesting the replacement of human labor prior to reforming society into one that does not predicate survival on continued employment and wages is very bad”). Folks vibe with the latter, less with the former. Can’t convince someone of the former when they don’t even understand that the computer is the box attached to the monitor, not the monitor itself.
> *enough people believe it will happen and act accordingly*
Here comes my favorite notion of "epistemic takeover".
A crude form: make everybody believe that you have already won.
A refined form: make everybody believe that everybody else believes that you have already won. That is, even if one has doubts about your having won, they believe that everyone else submits to you as the winner, and must act accordingly.
This world where everybody’s very concerned with that “refined form” is annoying and exhausting. It causes discussions to become speculative guesses about everybody else’s beliefs, not actual facts. In the end it breeds cynicism, as “well yes, the belief is wrong, but everybody is stupid and believes it anyway” becomes a stop-gap argument.
I don’t know how to get away from it because ultimately coordination depends on understanding what everybody believes, but I wish it would go away.
IMO this is a symptom of the falling rate of profit, especially in the developed world. If truly productivity-enhancing investment is effectively dead (or, equivalently, there is so much paper wealth chasing a withering set of profitable opportunities for investment), then capital's only game is to chase high valuations backed by future profits, which means playing the Keynesian beauty contest for keeps. This in turn means you must make ever-escalating claims of future profitability. Now, here we are in a world where multiple brand-name entrepreneurs are essentially saying that they are building the last investable technology ever, and getting people to believe it because the alternative is to earn less than inflation on Procter and Gamble stock and never get to retire.
If outsiders could plausibly invest in China, some of this pressure could be dissipated for a while, but ultimately we need to order society on some basis that incentivizes dealing with practical problems instead of pushing paper around.
17 replies →
Or just play into the fact that it's a Keynesian Beauty Contest [1]. Find the leverage in it and exploit it.
1. https://en.wikipedia.org/wiki/Keynesian_beauty_contest
1 reply →
On the other hand, talking about those beliefs can also lead to real changes. Slavery used to be widely seen as a necessary evil, just like, for instance, war.
4 replies →
The "Silent Majority" - Richard Nixon 1969
"Quiet Australians" - Scott Morrison 2019
27 replies →
You could say that AI has become the new religion. Plus the concomitant opiate for the masses.
It's not just exhausting, it's a huge problem. Even if everyone is a complete saint all the time and has the best of intentions, going by beliefs about beliefs can trap us in situations where we're all unhappy.
The classic situation is the two lovers who both want to do what they think makes their partner happy, to the extent that neither says what they actually want, and they end up doing something neither wants.
I think the goal of all sorts of cooperative planning should be to avoid such situations like the plague.
Ultimately it all comes back to the collective action problem doesn't it?
I believe that is solvable.
Isn't that how Bitcoin "works"?
8 replies →
Refined 1.01 authoritarian form: Everybody knows you didn't win, and everybody knows the sentiment is universal... But everyone maintains the same outward facade that you won, because it's become a habit and because dissenters seem to have "accidents" falling out of high windows.
V 1.02: Everybody knows you didn't win, and everybody knows the sentiment is universal... But everyone maintains the same outward facade that you won, because they believe that the others believe that you have enough power to crush the dissent. The moment this belief fades, you fall.
Is that not the "Emperor's New Clothes" form? That would be like version 0.0.1
it's a sad state these days that we can't be sure which country you're alluding to
Ontological version is even more interesting, especially if we're talking about a singularity (which may be in the past rather than the future, if you believe the simulation argument).
Crude form: winning is metaphysically guaranteed because it probably happened or probably will
Refined: It's metaphysically impossible to tell whether or not it has or will have happened, so the distinction is meaningless, it has happened.
So... I guess Weir's Egg falls out of that particular line of thought?
The refined form is unstable, a hair away from being collapsed by a fluke observation of objective reality.
The system that persists in practice is one where everybody knows how things are, but still everybody pays lip service to a fictional status quo, because if they did not, the others would obliterate them.
Searching on Kagi the keywords "epistemic takeover" only shows your HN comment, and an AI-generated blog post created the next day with a whole article about it.
The Internet died on a Tuesday.
You ever get into logic puzzles? The sort where the asker has to specify that everybody in the puzzle will act in a "perfectly logical" way. This feels like that sort of logic.
It's the classic interrogation technique: "we're not here to debate whether you're guilty or innocent, we have all the evidence we need to prove your guilt, we just want to know why". Not sure if it makes any difference, though, that the interrogator knows they are lying.
Isn't talking about "here’s how LLMs actually work" in this context a bit like saying "a human can't be relevant to X because a brain is only a set of molecules, neurons, synapses"?
Or even "this book won't have any effect on the world because it's only a collection of letters, see here, black ink on paper, that is what it IS, it can't DO anything"...
Saying an LLM is a statistical prediction engine for the next token is IMO sort of confusing what it is with the medium it is expressed in / built of.
For instance, take those small experiments that train a network on addition problems, mentioned in a sibling post. The weights end up forming an addition machine. An addition machine is what it is; that is the emergent behavior. The machine-learning weights are just the medium it is expressed in.
What's interesting about LLMs is such emergent behavior. Yes, it's statistical prediction of likely next tokens, but when training weights for that it might well have a side effect of wiring up some kind of "intelligence" (for reasonable everyday definitions of the word "intelligence", such as programming as well as a median programmer). We don't really know this yet.
It's pretty clear that solving AI is a software problem; I don't think anyone would disagree.
But that problem is MUCH MUCH MUCH harder than people make it out to be.
For example, you can reliably train an LLM to produce accurate output of assembly code that can fit into a context window. However, let's say you give it a terabyte of assembly code - it won't be able to produce correct output, as it will run out of context.
You can get around that with agentic frameworks, but all of those right now are manually coded.
So how do you train an LLM to correctly take any length of assembly code and produce the correct result? The only way is to essentially train the structure of the neurons inside of it to behave like a computer, but the problem is that you can't do back-propagation with discrete 0 and 1 values unless you explicitly code in the architecture for a CPU inside (a tiny sketch of this gradient problem follows below). So obviously, error correction with inputs/outputs is not the way we get to intelligence.
It may be that the answer is pretty much a stochastic search where you spin up x instances of trillion-parameter nets and make them operate in environments with some form of genetic algorithm, until you get something that behaves like a human, and any shortcutting to this is not really possible because of essentially chaotic effects.
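A tiny sketch of the gradient problem mentioned above (my own illustration, assuming PyTorch; the names `soft`, `hard`, `ste` are just made up here): a hard 0/1 threshold gives back-propagation nothing to work with, and the usual workaround is a surrogate such as the straight-through estimator.

```python
import torch

x = torch.tensor([0.3, -1.2, 2.0], requires_grad=True)
soft = torch.sigmoid(x)          # differentiable values in (0, 1)
hard = (soft > 0.5).float()      # discrete {0, 1}: the comparison has no gradient path

try:
    hard.sum().backward()        # fails: the thresholded tensor is detached from the graph
except RuntimeError as err:
    print("no gradient through the hard threshold:", err)

# Straight-through estimator: the forward pass uses the hard values,
# the backward pass pretends the threshold wasn't there and uses the soft surrogate.
ste = hard.detach() + soft - soft.detach()
ste.sum().backward()
print(x.grad)                    # non-zero: gradients flow via the soft surrogate
```

Whether tricks like this scale all the way to "training a CPU into the weights" is exactly the open question the comment is pointing at.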
> For example, you can reliably train an LLM to produce accurate output of assembly code that can fit into a context window. However, let's say you give it a terabyte of assembly code - it won't be able to produce correct output, as it will run out of context.
Fascinating reasoning. Should we conclude that humans are also incapable of intelligence? I don't know any human who can fit a terabyte of assembly into their context window.
5 replies →
>So obviously, error correction with inputs/outputs is not the way we get to intelligence.
This doesn't seem to follow at all let alone obviously? Humans are able to reason through code without having to become a completely discrete computer, but probably can't reason through any length of assembly code, so why is that requirement necessary and how have you shown LLMs can't achieve human levels of competence on this kind of task?
3 replies →
You're putting a bunch of words in the parent commenter's mouth, and arguing against a strawman.
In this context, "here’s how LLMs actually work" is what allows someone to have an informed opinion on whether a singularity is coming or not. If you don't understand how they work, then any company trying to sell their AI, or any random person on the Internet, can easily convince you that a singularity is coming without any evidence.
This is separate from directly answering the question "is a singularity coming?"
The problem is, there are two groups:
One says "well, it was built as a bunch of pieces, so it can only do the thing the pieces can do", which is reasonably dismissed by noting that basically the only people who predicted current LLM capabilities are the ones who are remarkably worried about a singularity occurring.
The other says "we can evaluate capabilities and notice that LLMs keep gaining new features at an exponential, now bordering into hyperbolic rate", like the OP link. And those people are also fairly worried about the singularity occurring.
So mainly you get people using "here's how LLMs actually work" to argue against the Singularity if-and-only-if they are also the ones arguing that LLMs can't do the things that they can provably do, today, or are otherwise making arguments that would also declare humans incapable of intelligence / reasoning / etc.
2 replies →
There is more to it than molecules, neurons and synapses. They are made from lower-level stuff that we have no idea about (well, we do in this instance, but you get the point). They are just higher-level things that are useful for explaining and understanding some things but don't describe or capture the whole thing. For that you would need to go to lower and lower levels, and so far it seems the levels go on infinitely. Currently we are stuck at the quantum level; that doesn't mean it's the final level.
OTOH, an LLM is just a token prediction engine. That description fully and completely covers it. There are no lower-level secrets hidden in the design that nobody understands, because it could not have been created if there were. The fact that the output can be surprising is not evidence of anything; we have always had surprising outputs, like funny bugs or unexpected features. Using the word "emergence" for this is just deceitful.
This algorithm has fundamental limitations, and they have not been getting better if you look closely. For instance, you could vibe-code a C compiler now, but it's 80% there: a cute trick, but not usable in the real world. Just like anything else, it cannot be economically vibe-coded to 100%. They are not going back and vibe-coding the previous, simpler projects to 100% with "improved" models. Instead they are just vibe-coding something bigger to 80%. This is not an improvement in limitations; it is actually communicating between the lines that the limitations cannot be overcome.
Also, enshittification has not even started yet.
I can bake a cake while having 0 understanding of the chemistry that powers the transformation. One is a pile of wet flour, the other is delicious.
A dog can create a snack by doing a trick. Doesn't mean that there isn't some mechanism going on there that neither of them understand.
1 reply →
> “here’s why replacing or suggesting the replacement of human labor prior to reforming society into one that does not predicate survival on continued employment and wages is very bad”
And there are plenty of people that take issue with that too.
Unfortunately they're not the ones paying the price. And... stock options.
History paints a pretty clear picture of the tradeoff:
* Profits now and violence later
OR
* Little bit of taxes now and accelerate easier
Unfortunately we’ve developed such a myopic, “FYGM” society that it’s explicitly the former option for the time being.
Do you have a historical example of "Little bit of taxes now and accelerate easier"? I can't think of any.
31 replies →
Violence was a moderating factor when people on each side were equally armed, and numbers were a deciding factor.
Nowadays you could squash an uprising with a few operators piloting drones remotely.
3 replies →
Every possible example of “progress” has either an individual or a state-power purpose behind it.
There is only one possible “egalitarian”, forward-looking investment that paid off for everybody.
I think the only exception to this is vaccines…and you saw how all that worked during Covid
Everything else, from the semiconductor to the vacuum cleaner, the automobile, airplanes, steam engines, I don’t care what it is, pick something: it was developed in order to give a small group an advantage over all the other groups. It has always been this case and it will always be this case, because fundamentally, at the root of human nature, people do not care about the externalities, good or bad.
20 replies →
We have taxes now though, how much is enough?
Hint: The answer for the government is, it's never enough. "little bit of taxes" is never what we had.
Seriously though, I wouldn't mind "little bit of taxes" if there were guaranteed ways to stop funding something when it's a failed experiment, which is difficult in government. Because "a little bit more" is always wanted.
2 replies →
I just point to Covid lockdowns and how many people took up hobbies, how many just turned into recluses, how many broke the rules no matter the consequences real or imagined, etc. Humans need something to do. I don’t think it should be work all the time. But we need something to do or we just lose it.
It’s somewhat simplistic, but I find it gets the conversation rolling. Then I go “it’s great that we want to replace work, but what are we going to do instead and how will we support ourselves?” It’s a real question!
It's true people need something to do, but I don't think the COVID shutdown (lockdowns didn't happen in the U.S. for the most part though they did in other countries) is a good comparison because the entire society was perfused with existential dread and fear of contact with another human being while the death count was rising and rising by thousands a day. It's not a situation that makes for comfortable comparisons because people were losing their damn minds and for good reason.
That’s a fair point. I don’t mean to trivialize the actual fears and concerns surrounding the pandemic.
Make babies?
> prior to reforming society into one that does not predicate survival on continued employment and wages
There's no way that'll happen. The entire history of humanity is 99% reacting to things rather than proactively preventing things or adjusting in advance, especially at the societal level. You would need a pretty strong technocracy or dictatorship in charge to do otherwise.
You would need a new sense of self and a life free of fear, raising children where they can truly be anything they like and teach their own kids how to find meaning in a life lived well. "Best I can do is treefiddy" though..
The UK seems to be prototyping that. We're changing to a society where everyone lives by claiming benefits. (eg. https://www.gbnews.com/money/benefits-claimants-earnings-rev...)
Ugh, GBNews, outrage fodder for idiots and the elderly with no ability to navigate the modern information landscape.
You can tell it's watched almost exclusively by old people because all the ads on the channel are for those funeral pre-pay services or retirement homes.
Safe to ignore anything they have to say.
3 replies →
> whether the singularity actually happens or not is irrelevant so much as whether enough people believe it will happen and act accordingly.
I disagree. If the singularity doesn't happen, then what people do or don't believe matters a lot. If the singularity does happen, then it hardly matters what people do or don't believe (edit: about whether or not the singularity will happen).
I don’t think that’s quite right. I’d say instead that if the singularity does happen, there’s no telling which beliefs will have mattered.
If people believe it's a threat and it is also real, then what matters is timing.
Which would also mean the accelerationists are potentially putting everyone at risk. I'd think a soft takeoff decades in the future would give us a much better chance of building the necessary safeguards and reorganizing society accordingly.
3 replies →
Depends on what a post singularity world looks like, with Roko's basilisk and everything.
> If the singularity does happen, then it hardly matters what people do or don't believe.
Depends on how you feel about Roko's basilisk.
God Roko's Basilisk is the most boring AI risk to catch the public consciousness. It's just Pascal's wager all over again, with the exact same rebuttal.
1 reply →
"If men define situations as real, they are real in their consequences."
The Thomas theorem is a theory of sociology formulated in 1928 by William Isaac Thomas and Dorothy Swaine Thomas.
https://en.wikipedia.org/wiki/Thomas_theorem
> ”when they don’t even understand that the computer is the box attached to the monitor, not the monitor itself”
Laughed out loud at that - and cried a little.
I have had trouble explaining to people: “No! Don’t use your email password! This is not your email you are logging in to; your email address is a username for this other service. Don’t give them your email password!”
> whether the singularity actually happens or not is irrelevant so much as whether enough people believe it will happen and act accordingly.
We've already been here in the 1980s.
The tech industry needs to cultivate people who are interested in the real capabilities and the nuance around them, and eject the set of people who aim to turn the tech industry into warmed-over "you don't even need a product" acolytes of Tony Robbins.
All the discussion of investment and economics can be better informed by perusing the economic data in Rise and Fall of American Growth. Robert Gordon's empirical finding is that American productivity compounded astonishingly from 1870-1970, but has been stuck at a very low growth rate since then.
It's hard to square with the computer revolution, but my take is that post-70s "net creation minus creative destruction" was large but spread out over more decades. Whereas technologies like electrification, autos, mass production, the telephone, refrigeration, fertilizers, and pharmaceuticals produced incomparable growth over a century.
So if you were born in 1970s America, your experience of taxes, inflation, prosperity, and which policies work can all feel heavier than what folks experienced in the prior century. Of course that's in the long run (i.e. a generation).
I question whether AI tools have great net positive creation minus destruction.
This entire chain of reasoning takes for granted that there won't be a singularity
If you're talking about "reforming society", you are really not getting it. There won't be society, there won't be earth, there won't be anything like what you understand today. If you believe that a singularity will happen, the only rational things to do are to stop it or make sure it somehow does not cause human extinction. "Reforming society" is not meaningful
There will be earth!
I thought the Singularity had already happened when the Monkeys used tools to kill the other Monkeys and threw the bone into the sky to become a Space Station.
> It’s honestly why I gave up trying to get folks to look at these things rationally as knowable objects (“here’s how LLMs actually work”)
Here's the fallacy you fell into - and this is important to understand. Neither you nor I understand "how LLMs actually work" because, well, nobody really does. Not even the scientists who built the (math around the) models. So you can't really use that argument, because it would be silly to think you know something the rest of the scientific community doesn't. Actually, there's a whole new field in science developing around understanding how models arrive at the answers they give us. The thing is that we are only observers of the results of the experiments we run by training those models, and it just so happens that the result of this experiment is something we find plausible, but that doesn't mean we understand it. It's like a physics experiment: we can see that something behaves in a certain way, but we can't explain how or why.
Even if interpretability of specific models or features within them is an open area of research, the mechanics of how LLMs work to produce results are observable and well-understood, and methods to understand their fundamental limitations are pretty solid these days as well.
Is there anything to be gained from following a line of reasoning that basically says LLMs are incomprehensible, full stop?
>Even if interpretability of specific models or features within them is an open area of research, the mechanics of how LLMs work to produce results are observable and well-understood, and methods to understand their fundamental limitations are pretty solid these days as well.
If you train a transformer on (only) lots and lots of addition pairs, i.e '38393 + 79628 = 118021' and nothing else, the transformer will, during training discover an algorithm for addition and employ it in service of predicting the next token, which in this instance would be the sum of two numbers.
We know this because of tedious interpretability research, the very limited problem space and the fact we knew exactly what to look for.
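To make the setup being described concrete, here is a minimal sketch of how such a dataset gets cast as next-token prediction (illustrative only; this is not the cited interpretability work, and the formatting choices are mine):

```python
import random

def make_example():
    # One training string: an addition problem followed by its answer.
    a, b = random.randint(0, 99999), random.randint(0, 99999)
    return f"{a}+{b}={a + b};"

corpus = "".join(make_example() for _ in range(3))
vocab = sorted(set("0123456789+=;"))
stoi = {ch: i for i, ch in enumerate(vocab)}

tokens = [stoi[ch] for ch in corpus]
inputs, targets = tokens[:-1], tokens[1:]   # shift by one: "predict the next character"

print(corpus)
print(list(zip(inputs, targets))[:8])
# A small transformer trained on nothing but these (input, target) pairs can only
# keep lowering its loss on the digits after '=' by internalizing some addition
# procedure, which is what the interpretability work then goes looking for.
```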
Alright, let's leave addition aside (SOTA LLMs are after all trained on much more) and think about another question. Any other question at all. How about something like:
"Take a capital letter J and a right parenthesis, ). Take the parenthesis, rotate it counterclockwise 90 degrees, and put it on top of the J. What everyday object does that resemble?"
What algorithm does GPT or Gemini or whatever employ to answer this and similar questions correctly? It's certainly not the one it learnt for addition. Do you know? No. Do the creators at OpenAI or Google know? Not at all. Can you or they find out right now? Also no.
Let's revisit your statement.
"the mechanics of how LLMs work to produce results are observable and well-understood".
Observable, I'll give you that, but how on earth can you look at the above and sincerely call that "well-understood"?
9 replies →
The concept “understand” is rooted in utility. It means “I have built a much simpler model which produces usefully accurate predictions, of the thing or behaviour I seek to ‘understand’”. This utility is “explanatory power”. The model may be in your head, may be math, may be an algorithm or narrative, it may be a methodology with a history of utility. “Greater understanding” is associated with models that are simpler, more essential, more accurate, more useful, cheaper, more decomposed, more composable, more easily communicated or replicated, or more widely applicable.
“Pattern matching”, “next token prediction”, “tensor math” and “gradient descent” or the understanding and application of these by specialists, are not useful models of what LLMs do, any more than “have sex, feed and talk to the resulting artifact for 18 years” is a useful model of human physiology or psychology.
My understanding, and I'm not a specialist, is there are huge and consequential utility gaps in our models of LLMs. So much so, it is reasonable to say we don't yet understand how they work.
You can't keep pushing the AI hype train if you consider it just a new type of software / fancy statistical database.
Yes, there is - the benefit of the doubt.
Pro tip: call it a "law of nature" and people will somehow stop pestering you about the why.
I think in a couple decades people will call this the Law of Emergent Intelligence or whatever -- shove sufficient data into a plausible neural network with sufficient compute and things will work out somehow.
On a more serious note, I think the GP fell into an even greater fallacy of believing reductionism is sufficient to dissuade people from ... believing in other things. Sure, we now know how to reduce apparent intelligence into relatively simple matrices (and a huge amount of training data), but that doesn't imply anything about social dynamics or how we should live at all! It's almost like we're asking particle physicists how we should fix the economy or something like that. (Yes, I know we're almost doing that.)
In science these days, the term "law" is almost never used anymore; the term "theory" has replaced it. E.g. the theory of special relativity instead of a law of special relativity.
Agree. I think it is just that people have their own simplified mental models of how it works. However, there is no reason to believe these simplified mental models are accurate (otherwise we would have been here 20 years earlier with HMM models).
The simplest way to stop people from thinking is to have a semi-plausible / "made-me-smart" incorrect mental model of how things work.
Did you mean to use the word "mental"?
The problem that I see with this analysis is that the author is trying to describe a curve in data that is kind of a punctuated equilibrium on steroids. Given that the takeoff event is when systems are capable of recursive self-improvement and that event will be a huge inflection point in the curve, it feels like they are trying to predict the second half of a biphasic graph based on data from the first half, when the second half is distinctly different than the first.
Thinking about how this might work, the slope of the first half does not predict the inflection point. Once the threshold for recursive self-improvement gets crossed, the curve changes drastically. From what I have heard MAYBE we are just starting to see the glimmers of possible recursive self-improvement in GPT-5.3 / Opus 4.6. If so, this discussion, while interesting, is trying to predict the new curve based on a single relevant data point.
> here’s how LLMs actually work
But how is that useful in any way?
For all we know, LLMs are black boxes. We really have no idea how the ability to have a conversation emerged from predicting the next token.
> We really have no idea how the ability to have a conversation emerged from predicting the next token.
Maybe you don't. To be clear, this benefits massively from hindsight, just as, if I didn't know how combustion engines worked, I probably wouldn't have dreamed up how to make one. But the emergent conversational capabilities of LLMs are pretty obvious. In a massive dataset of human writing, the answer to a question is by far the most common thing to follow a question. A normal conversational reply is the most common thing to follow a conversation opener. While impressive, these things aren't magic.
>In a massive dataset of human writing, the answer to a question is by far the most common thing to follow a question.
No it isn't. Type a question into a base model, one that hasn't been finetuned into being a chatbot, and the predicted continuation will be all sorts of crap, but very often another question, or a framing that positions the original question as rhetorical in order to make a point. Untuned raw language models have an incredible flair for suddenly and unexpectedly shifting context - it might output an answer to your question, then suddenly decide that the entire thing is part of some internet flamewar and generate a completely contradictory answer, complete with insults to the first poster. It's less like talking with an AI and more like opening random pages in Borges's infinite library.
To get a base language model to behave reliably like a chatbot, you have to explicitly feed it "a transcript of a dialogue between a human and an AI chatbot", and allow the language model to imagine what a helpful chatbot would say (and take control during the human parts). The fact that this works - that a mere statistical predictive language model bootstraps into a whole persona merely because you declared that it should, in natural English - well, I still see that as a pretty "magic" trick.
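A rough sketch of the trick being described, assuming the Hugging Face transformers library (gpt2 here is just a convenient stand-in for an untuned base model, and the prompt wording is mine):

```python
from transformers import pipeline

# gpt2 is a plain base model: no chat finetuning, no RLHF.
generator = pipeline("text-generation", model="gpt2")

prompt = (
    "The following is a transcript of a dialogue between a human and a helpful AI assistant.\n"
    "Human: What is the capital of France?\n"
    "Assistant:"
)

out = generator(prompt, max_new_tokens=40, do_sample=True)
print(out[0]["generated_text"])
# The model simply continues the transcript; the "assistant" is whatever persona
# the statistical continuation imagines, and it can still wander off exactly as
# described above.
```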
2 replies →
If such a simplistic explanation were true, LLMs would only be able to answer things that had been asked before, and where at least a 'fuzzy' textual question/answer match was available. This is clearly not the case. In practice you can prompt the LLM with such a large number of constraints, so large that the combinatorial explosion ensures no one asked that before. And you will still get a relevant answer combining all of those. Think of combinations of features in a software request - including making some module that fits into your existing system (for which you have provided the source) along with a list of requested features. Or questions you form based on a number of life experiences and interests that, combined, are unique to you. You can switch programming languages, human languages, writing styles, and levels as you wish, and discuss it in super-esoteric languages or Morse code. So are we to believe these answers appear just because there happened to be similar questions in the training data where a suitable answer followed? Even if for the sake of argument we accept this explanation by "proximity of question/answer", it is immediately clear that this would have to rely on extreme levels of abstraction and mixing and matching going on inside the LLM. And it is then this process whose workings we need to explain, whereas the textual proximity you invoke relies on this rather than explaining it.
2 replies →
> Maybe you don't.
My best friend who has literally written a doctorate on artificial intelligence doesn't. If you do, please write a paper on it, and email it to me. My friend would be thrilled to read it.
2 replies →
>In a massive dataset of human writing, the answer to a question is by far the most common thing to follow a question. A normal conversational reply is the most common thing to follow a conversation opener. While impressive, these things aren't magic.
Obviously, that's the objective, but who's to say you'll reach a goal just because you set it? And more importantly, who's to say you have any idea how the goal has actually been achieved?
You don't need to think LLMs are magic to understand we have very little idea of what is going on inside the box.
19 replies →
I thought the interview where Hinton talks to Jon Stewart gives a rough idea of how they work. Hinton got the Turing and Nobel prizes for inventing some of the stuff https://youtu.be/jrK3PsD3APk?t=255
> We really have no idea how the ability to have a conversation emerged from predicting the next token.
Uh yes, we do. It works in precisely the same way that you can walk from "here" to "there" by taking a step towards "there", and then repeating. The cognitive dissonance comes when we conflate this with the human way of "having a conversation" (two people conversing), assume that because they produce similar outputs they must be "doing the same thing", and then find it hard to see how LLMs could be doing that.
Sometimes things seem unbelievable simply because they aren't true.
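A toy sketch of the "one step at a time" point (my own illustration: a bigram lookup table stands in for the LLM, and the mini-corpus is made up). The only operation is "pick a plausible next word", repeated.

```python
import random
from collections import defaultdict

corpus = ("hello how are you today ? i am fine thanks . "
          "how is the weather ? the weather is nice . i am glad .").split()

# "Model": for each word, remember which words followed it in the corpus.
next_words = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev].append(nxt)

word, reply = "hello", ["hello"]
for _ in range(12):                     # take one "step" at a time
    word = random.choice(next_words[word])
    reply.append(word)
print(" ".join(reply))
# Each step is a single next-word prediction; the "conversation" is just the walk.
```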
> It works in precisely the same way that you can walk from "here" to "there" by taking a step towards "there", and then repeating.
It's funny how, in order to explain one complex phenomenon, you took an even more complex phenomenon as if it somehow simplifies it.
3 replies →
"'If I wished,' O'Brien had said, 'I could float off this floor like a soap bubble.' Winston worked it out. 'If he thinks he floats off the floor, and if I simultaneously think I see him do it, then the thing happens'".
Just say it simply:
1. LLMs only serve to reduce the value of your labor to zero over time. They don't even need to be great tools; they just need to be perceived as "equally good" as engineers for the C-suite to lay everyone off and rehire at 25-50% of previous wages, repeating this cycle over a decade.
2. LLMs will not allow you to join the billionaire class; that wouldn't make sense, as anyone could if that were the case. They erode the technical meritocracy these tech CEOs worship on podcasts and YouTube (makes you wonder what they are lying about). Your original ideas and that startup you think is going to save you aren't going to be worth anything if someone with minimal skills can copy them.
3. People don't want to admit it, but heavy users of LLMs know they're losing something, and there's a deep-down feeling that it's not the right way to go about things. It's not dissimilar to the guilty dopaminergic crash one gets when taking shortcuts in life.
I used something like 1.8 billion Anthropic tokens last year; I won't be using it again, and I won't be participating in this experiment. I've likely lost years of my life in "potential learning" to the social media experiment, and I'm not doing that again. I want to study compilers this year, and I want to do it deeply. I won't be using LLMs.
You may be throwing the baby out with the bathwater. I learned more last year from ChatGPT Pro than I'd learned in the previous 5, FWIW.
Just say 'LLMs'. Whenever someone name drops a specific model I can't help but think it's just an Ad bot.
4 replies →
I've recently found LLMs to be an excellent learning tool, using it hand-in-hand with a textbook to learn digital signal processing. If the book doesn't explain something well, I ask the LLM to explain it. It's not all brain wasting.
Well said. I use it the same way. Sometimes a technical book will assume that you know a concept, or will even use an acronym that is not explained (but considered obvious), or is just plainly not very explicit. I also use it to directly test my knowledge of the subject (you have to be careful of the people-pleasing behavior, but in my experience they tend to gently tell you where you're wrong rather than lie to you). The same goes for hands-on books. Sometimes the examples are not very interesting, or you have something of your own that you would like to try. As long as you use it carefully like this, it can be really transformative. I do agree that there is a potential risk of offloading too much thinking to it, but if you keep that in mind, I don't see the problem.
Exactly. LLMs are really just an extension of the internet. You can use the internet to expand on what you know or you can use the internet to rot your brain.
We have agency to decide and if the majority decide on brain rot I really don't care.
I have been learning things from the internet for 30 years and LLMs are just the greatest gift. If someone isn't leveraging these tools to increase what they know good luck.
I've said it simply, much like you, and it comes off as unhinged lunacy. Inviting them to learn themselves has been so much more successful than directed lectures, at least in my own experiments with discourse and teaching.
A lot of us have fallen into the many, many toxic traps of technology these past few decades. We know social media is deliberately engineered to be addictive (like cigarettes and tobacco products before it), we know AI hinders our learning process and shortens our attention spans (like excess sugar intake, or short-form content deluges), and we know that just because something is newer or faster does not mean it's automatically better.
You're on the right path, I think. I wish you good fortune and immense enjoyment in studying compilers.
I agree, you're probably right! Thanks!
The goal is to eliminate humans as the primary actors on the planet entirely
At least that’s my personal goal
If we get to the point where I can go through my life and never interact with another human again, and work with a bunch of machines and robots to do science and experiments and build things to explore our world and make my life easier and safer and healthier and more sustainable, I would be absolutely thrilled
As it stands today and in all the annals of history there does not exist a system that does what I just described.
Bell Labs existed for the purposes of Bell Telephone… until it wasn't needed by Bell anymore. Google's moonshots existed for the shareholders of Google… until they were not useful for capital. All the work done at Sandia and the White Sands labs was done in order to promote the power of the United States globally.
Find me some egalitarian organization that can persist outside of the hands of some massive corporation or some government, and that can actually help people, and I might give somebody a chance, but that does not exist.
And no, Mondragon is not one of these.
This looks like a very comfortable, pleasant way of civilization suicide.
Not interacting with any other human means you're the last human in your genetic line. A widespread adherence to this idea means humanity dwindling and dying out voluntarily. (This has been reproduced in mice: [1])
Not having humans as primary actors likely means that their interests become more and more neglected by the system of machines that replaces them, and they, weaker by the day, are powerless to counter that. Hence the idea of increased comfort and well-being, and the ability to do science, is going to become more and more doubtful as humans lose agency.
[1]: https://www.smithsonianmag.com/smart-news/this-old-experimen...
Civilization suicide is the ideal
17 replies →
Well, demonstrably you have at least some measure of interest in interaction with other humans based on the undeniable fact that you are posting on this site, seemingly several times a day based on a cursory glance at your history.
Because every effort people spend on anything else is a waste of resources and energy, and I want others to stop using resources to make bullshit and put all of them into ASI and human obviation.
There are no more important other problems to solve other than this one
everything else is purely coping strategies for humans who don’t want to die wasting resources on bullshit
Nobody can stop you from having this view, I suppose. But what gives you the right to impose this (lack of) future on billions of humans with friends and families and ambitions and interests who, to say the least, would not be in favor of “human obviation”?
You should probably build an organization that can counter it
2 replies →
Most people need more social contact, not less. Modern tech is already alienating enough.
> The goal is to eliminate humans as the primary actors on the planet entirely. At least that’s my personal goal.
and another of your post states
> The fundamental unit of society …the human… is at its core fundamentally incapable of coordinating at the scale necessary to do this correctly
You seem to contradict yourself if not entirely confused.
What’s the contradiction?
1 reply →
While I agree that working with machines would help dramatically in achieving science, in your world there would be no one truly understanding you. You would be alone. I can't imagine how you could prefer that.
Bell Labs was pushed aside because Bell Telephone was broken up by the courts. (It's currently a part of Nokia of all things - yeah, despite your storytelling here, it's actually still around :-)
Not sure if transhumanism is the only solution to the problems you mentioned - I think it's often problematic because people like Thiel claim to have figured it out, and look for ways to force people into their "contrarian" views, although there is nothing but disregard for any other opinions other than their own.
But you are of course free to believe and enjoy the vision of such a future but this is something that should happen on a collective level. We still live in a (to some extent idealistic) but humanistic society where human rights are common sense.
In the meantime, your use of resources has an opportunity cost for other people. So expect backlash.
Sounds like the planet described in The Naked Sun by Isaac Asimov.
Why would the machines want to work with you or any other human?
Man, I used to think exactly like you do now, disgust with humans and all. I found comfort in machines instead of my fellow man, and sorely wanted a world governed by rigid structures, systems, and rules instead of the personal whims and fancies of whoever happened to have inherited power. I hated power structures, I loathed people who I perceived to stand in the way of my happiness.
I still do.
The difference is that I came to realize that what I'd done was build up walls so thick and high because of repeated cycles of alienation and trauma involving humans. When my entire world came to a total end every two to four years - every relationship irreparably severed, every bit of local knowledge and wisdom rendered useless, thrown into brand new regions, people, systems, and structures like clockwork - I built that attitude to survive, to insulate myself from those harms. Once I was able to begin creating my own stability, asserting my own agency, I began to find the nuance of life - and thus, a measure of joy.
Sure, I hate the majority of drivers on the roads today. Yeah, I hate the systemic power structures that have given rise to profit motives over personal outcomes. I remain recalcitrant in the face of arbitrary and capricious decisions made with callous disregard to objective data or necessities. That won't ever change, at least with me; I'm a stubborn bastard.
But I've grown, changed, evolved as a person - and you can too. Being dissatisfied with the system is normal - rejecting humanity in favor of a more stringent system, while appealing to the mind, would be such a desolate and bleak place, devoid of the pleasures you currently find eking out existence, as to be debilitating to the psyche. Humans bring spontaneity and chaos to systems, a reminder that we can never "fix" something in place forever.
To dispense with humans is to ignore that any sentient species of comparable success has its own struggles, flaws, and imperfections. We are unique in that we're the first ones we know of to encounter all these self-inflicted harms and have the cognitive ability to wax philosophically for our own demise, out of some notion that the universe would be a better place without us in it, or that we simply do not deserve our own survival. Yet that's not to say we're actually the first, nor will we be the last - and in that lesson, I believe our bare minimum obligation is to try just a bit harder to survive, to progress, to do better by ourselves and others, as a lesson to those who come after.
Now all that being said, the gap between you and I is less one of personal growth and more of opinion of agency. Whereas you advocate for the erasure or nullification of the human species as a means to separate yourself from its messiness and hostilities, I'm of the opinion that you should be able to remove yourself from that messiness for as long as you like in a situation or setup you find personal comfort in. If you'd rather live vicariously via machine in a remote location, far, far away from the vestiges of human civilization, never interacting with another human for the rest of your life? I see no issue with that, and I believe society should provide you that option; hell, there's many a day I'd take such an exit myself, if available, at least for a time.
But where you and I will remain at odds is our opinion of humanity itself. We're flawed, we're stupid, we're short-sighted, we're ignorant, we're hostile, we're irrational, and yet we've conquered so much despite our shortcomings - or perhaps because of them. There's ample room for improvement, but succumbing to naked hostility towards them is itself giving in to your own human weakness.
I applaud and admire your effort to discuss their point in good faith <3
Thank you.
...Man, men really will do anything to avoid going to therapy.
To me this sounds so sad
I don't see a credible path where the machines and robots help you...
> "eliminate humans as the primary actors on the planet entirely"
...so they can work with you. The hole in your plan might be bigger than your plan.
took a bit of time to read your work, interesting stuff even if it triggers people XD
the full realization of Humanity's potential indeed needs to permit such Choice that you could live your whole life without seeing another Human.
I still think we can build something better, rather than hope for AI or Alien overlords taking us to the next step ;)
Now this is transhumanism! Don't let the cope and seething from this website dissuade you from keeping these views.
Thank you!
Ah yes, because the majority of people pushing for transhumanism aren't complete psyco / sociopaths! You're in great company! /sarcasm
> Can’t convince someone of the former when they don’t even understand that the computer is the box attached to the monitor, not the monitor itself.
This is so true, and more often than it should be. I am going to use this phrase!
Currently, everything suggests the torment nexus will happen before the singularity.
> [...] prior to reforming society [...]
Well, good luck. You have "only" the entire history of humankind on the other side of your argument :)
I never said it was an easy problem to solve, or one we’ve had success with before, but damnit, someone has to give a shit and try to do better.
Literally nobody’s trying because there is no solution
The fundamental unit of society …the human… is at its core fundamentally incapable of coordinating at the scale necessary to do this correctly
and so there is no solution because humans can’t plan or execute on a plan
The likely outcome is that 99.99% of humanity lives a basic subsistence lifestyle ("UBI") and the elite and privileged few metaphorically (and somewhat literally) ascend to the heavens. Around half the planet already lives on <= $7/day. Prepare to join them.
9 replies →
What is your argument for why denecessitating labor is very bad?
This is certainly the assertion of the capitalist class,
whose well documented behavior clearly conveys that this is not because the elimination of labor is not a source of happiness and freedom to pursue indulgences of every kind.
It is not at all clear that universal life-consuming labor is necessary for a society's stability and sustainability.
The assertion IMO is rooted rather in that it is inconveniently bad for the maintenance of the capitalists' control and primacy,
in as much as those who are occupied with labor, and fearful of losing access to it, are controlled and controllable.
I think this goes beyond capitalism.
People willing to do something harder or more risky than others will always have a bigger chance to get a better position. Be that sports, labor or anything in life.
I am 1000% OK with living in a world where basic needs are fully provided for,
and competition and drive are worked out in domains which do not come at the expense of someone else's basic needs.
Scifi has speculated about many potential outlets for "human drive," the frontier/pioneer spirit being a big one; if I could name my one dream for my kids it'd be that they live in an equitable post-scarcity society which has turned its interests to exploring the solar system and beyond.
Sports, "FKT" competitions, and social capital ("influence") are also relatively innocuous ways to absorb the drive for hierarchy and power.
The X factor is whether the will to dominate/control/be subjugated is suppressible or manageable.
If not we're in for a bad time.
Equally unhinged. Cheers to you!
You’re “yaas queen”-ing a blog post that is just someone’s Claude Code session. It’s “storytelling” with “data,” but not storytelling with data. Do you understand? I mean, I could make up a bunch of shit too and ask Claude Code to write something I want to say with it, too.
I don’t think you’re rational. Part of being able to be unbiased is to see it in yourself.
First of all. Nobody knows how LLMs work. Whether the singularity comes or not cannot be rationalized from what we know about LLMs because we simply don’t understand LLMs. This is unequivocal. I am not saying I don’t understand LLMs. I’m saying humanity doesn’t understand LLMs in much the same way we don’t understand the human brain.
So saying whether the singularity is imminent or not imminent based off of that reasoning alone is irrational.
The only thing we have is the black box output and input of AI. That input and output is steadily improving every month. It forms a trendline, and the trendline is sloped towards singularity. Whether the line actually gets there is up for question but you have to be borderline delusional if you think the whole thing can be explained away because you understand LLMs and transformer architecture. You don’t understand LLMs period. No one does.
> Nobody knows how LLMs work.
I'm sorry, come again?
Nobody knows how LLMs work.
Anybody who claims otherwise is making a false claim.
nobody can know how something that is non-deterministic works - by its pure definition
5 replies →
I think they meant "Nobody knows why LLMs work."
15 replies →
this is wrong
It is not. I would suggest engaging in the other branch of this thread, because people who agreed with you voiced their opinion and they were proven utterly wrong.
Humanity does not understand how LLMs work. This is definitive.
2 replies →
I thought the answer was "42"
Reality won't give a shit about what people believe.
>It’s honestly why I gave up trying to get folks to look at these things rationally as knowable objects (“here’s how LLMs actually work”)
You do not know how LLMs work, and if anyone actually did, we wouldn't spend months and millions of dollars training one.
> Folks vibe with the latter
I am not convinced, though, that it is still up to "the folks" whether we change course. Billionaires and their sycophants may not care about the bad consequences (or may even appreciate them - realistic or not).
Oh, not only do they not care about the plebs and riff-raff now, but they’ve spent the past ten years building bunkers and compounds to try and save their own asses for when it happens.
It’s willful negligence on a societal scale. Any billionaire with a bunker is effectively saying they expect everyone to die and refuse to do anything to stop it.
It seems pretty obvious to me the ruling class is preparing for war to keep us occupied, just like in the '20s: they'll make young men and women so poor they'll beg to fight in a war.
It makes one wonder what they expect to come out the other side of such a late-stage/modern war, but I think what they care about is that there will be less of us.
2 replies →
For ages most people believed in a religion. People are just not smart, and are sheepy followers.
Most still do.
That is a very uncharitable take. Faith is often a source of meaning or purpose independent of intelligence. I would recommend "A Confession" by Tolstoy for a different perspective.
Romans 1:20