Comment by munificent

4 hours ago

There is a whole giant essay I probably need to write at some point, but I can't help but see parallels between today and the Industrial Revolution.

Prior to the industrial revolution, the natural world was nearly infinitely abundant. We simply weren't efficient enough to fully exploit it. That meant that it was fine for things like property and the commons to be poorly defined. If all of us can go hunting in the woods and yet there is still game to be found, then there's no compelling reason to define and litigate who "owns" those woods.

But with the help of machines, a small number of people were able to completely deplete parts of the earth. We had to invent giant legal systems in order to determine who has the right to do that and who doesn't.

We are truly in the Information Age now, and I suspect a similar thing will play out for the digital realm. We have copyright and intellectual property law already, of course, but those were designed presuming a human might try to profit from the intellectual labor of others. With AI, we're in the industrial era of the digital world. Now a single corporation can train an AI using someone's copyrighted work and in return profit off the knowledge over and over again at industrial scale.

This completely upends the tenuous balance between creators and consumers. Why would a writer put an article online if ChatGPT will slurp it up and regurgitate it back to users without anyone ever even finding the original article? Who will contribute to the digital commons when rapacious AI companies are constantly harvesting it? Why would anyone plant seeds on someone else's farm?

It really feels like we're in the soot-covered child-coal-miner Dickensian London era of the Information Revolution and shit is gonna get real rocky before our social and legal institutions catch up.

"but I can't help but see parallels between today and the Industrial Revolution"

You're not the only one.

The current Pope Leo XIV explicitly named himself after the previous Leo, Pope Leo XIII, who was pope during the Industrial Revolution (1878-1903) and issued the influential Encyclical Rerum novarum (Rights and Duties of Capital and Labor) in response to the upheaval.

“Pope Leo XIII, with the historic Encyclical Rerum novarum, addressed the social question in the context of the first great industrial revolution,” Pope Leo recalled. “Today, the Church offers to all her treasure of social teaching in response to another industrial revolution and the developments of artificial intelligence.” A name, then, not only rooted in tradition, but one that looks firmly ahead to the challenges of a rapidly changing world and the perennial call to protect those most vulnerable within it.

https://www.vatican.va/content/leo-xiii/en/encyclicals/docum...

https://www.vaticannews.va/en/pope/news/2025-05/pope-leo-xiv...

  • > the Church offers to all her treasure of social teaching

    That gay is bad but pedo is good.

    I'm sorry but it's just funny to me to see the Catholic Church use such pompous words while being an outdated institution crashing hard.

As you know, I deeply respect you. Not trying to argue here, just provide my own perspective:

> Why would a writer put an article online if ChatGPT will slurp it up and regurgitate it back to users without anyone ever even finding the original article?

I write things for two main reasons: I feel like I have to. I need to create things. On some level, I would write stuff down even if nobody reads it (and I do do that already, with private things.) But secondly, to get my ideas out there and try to change the world. To improve our collective understanding of things.

A lot of people read things, it changes their life, and their life is better. They may not even remember where they read these things. They don't produce citations all of the time. That's totally fine, and normal. I don't see LLMs as being any different. If I write an article about making code better, and ChatGPT trains on it, and someone, somewhere, needs help, and ChatGPT helps them? Win, as far as I'm concerned. Even if I never know that it's happened. I already do not hear from every single person who reads my writing.

I don't mean to say that everyone has to share my perspective. It's just my own.

  • Agreed, totally! I still write and put stuff online.

    But it definitely feels different now. It used to feel like I was tending a public garden filled with other people who might enjoy it. It still kind of feels like that, but there are a handful of giant combine machines grinding their way around the garden harvesting stuff and making billionaires richer at the same time.

    It's not enough to dissuade me from contributing to the public sphere, but the vibe is definitely different.

    Honestly, it reminds me a lot of the early days of Amazon. It's hard to remember how optimistic the world felt back then, but I remember a time when writing reviews felt like a public good because you were helping other people find good products. It was like we all wanted honest product information and Amazon provided a neutral venue for us to build it. Like Wikipedia for stuff.

    But as Amazon got bigger and bigger and the externalities more apparent, it felt less like we were helping each other and more like we were helping Bezos buy yet another yacht or media empire. And as the reviews got more and more gamed by shady companies, they became less of a useful public good. The whole commons collapsed.

    I worry that the larger web and digital knowledge environment is going that way.

    I still intend to create and share my stuff with the world because that's who I want to be. But I'll always miss the early days of the web where it felt like a healthier environment to be that kind of person in.

    • > But as Amazon got bigger and bigger and the externalities more apparent, it felt less like we were helping each other and more like we were helping Bezos buy yet another yacht or media empire.

      The Internet-circulating quote comes to mind: Planet Earth is pretty much a vacation resort for around 500 rich people, and the remaining 8 billion of us are just their staff. The Relative Few have got the system set up perfectly so that whatever we do, we're probably serving/enriching them. AI doesn't really change this, but it does further it.

    • I can totally see that, for sure. I was much more likely to write a review long ago, now I don't even bother. (For buying stuff online, at least.) Maybe I lost my innocence about this stuff a long time ago, and so it's not so much LLMs that broke it for me, but maybe... I dunno, the downfall of Web 2.0 and the death of RSS? I do think that the old internet, for some definition of "old," felt different. For sure. I'll have to chew on this. I certainly felt some shock on the IP questions when all of this came up. I'm from the "information wants to be free" sort of persuasion, and now that largely makes me feel kinda old.

      Also I'm not a fan of billionaires, obviously, but I think that given I've worked on open source and tools for so long, I kinda had to accept that stuff I make was going to be used towards ends I didn't approve of. Something about that is in here too, I think.

      (Also, I didn't say this in the first comment, but I'm gonna be thinking about the industrial revolution thing a lot, I think you're on to something there. Scale meaningfully changes things.)

      4 replies →

    • If raw resources (tree cutting) and manufacturing (book binding) are saturated, a fully-realized economy has just one step left: financialization.

      You have to start finding ways to keep people hooked on books and make it a part of their regular lifestyle. One book can't be enough, and after a while you have to convince them to replace the books they already bought. New editions, Author's Footnotes, limited run releases, all of the stops have to be pulled out to get consumers to show up en masse. Because that's what they are - consumers, not readers - wallets to be squeezed until they're bled of all the trust they had in media.

      I think about the publications I liked reading as a kid, like Joystiq and Polygon. Some of the best games journalism the industry produced, but inevitably doomed to fail as their competitors monetized further. The rest of traditional media has followed the same path, converging on some mercurial social network marketing tactic as the placeholder for big-picture brand strategy.

  • > A lot of people read things, it changes their life, and their life is better. They may not even remember where they read these things. They don't produce citations all of the time. That's totally fine, and normal. I don't see LLMs as being any different. If I write an article about making code better, and ChatGPT trains on it, and someone, somewhere, needs help, and ChatGPT helps them? Win, as far as I'm concerned. Even if I never know that it's happened. I already do not hear from every single person who reads my writing.

    Not a contradiction but an addendum: plenty of creative pursuits are not about functional value, or at least not primarily. If somebody writes a seemingly genuine blog post about their family trauma, and I as the reader find out it's made-up bullshit, that's abhorrent to me, whether or not AI is involved. And I think it would be perfectly fair for writers who do create similar but genuine content to find it abhorrent that they must compete with genAI, that genAI will slurp up their words, and that genAI's mere existence casts doubt on their own authenticity. It's not about money or social utility, it's about human connection.

  • > I don't mean to say that everyone has to share my perspective. It's just my own.

    I think you are walking all around the word "consent" and trying very hard to avoid it altogether.

    Your perspective, because it refuses to include any sort of consent, is invalid. No perspective that refuses consent can be valid.

    • Consent is absolutely important, but that does not mean that every single thing in the entire world requires explicit consent. You did not ask me for consent to use my words in your comment. That does not mean you're a bad person.

      Fair use is an important part of intellectual property law. If it did not exist, the powerful could, for example, stifle public criticism by declaring that they do not consent to you using their words or likeness. The ability to do that is important for society. It is also just generally important for creating works inspired by others, which is virtually every work. There have to be lines between cases where attribution is required and cases where it is not.

      8 replies →

    • refuse consent?

      You may need to clarify that thought.

      I don't think the poster has a viewpoint that 'refuses consent', their viewpoint is their writing they put for others to view is for others to view, regardless of how it is viewed. They seem to be giving consent, not refusing it, no?

      1 reply →

> This completely upends the tenuous balance between creators and consumers. Why would a writer put an article online if ChatGPT will slurp it up and regurgitate it back to users without anyone ever even finding the original article? Who will contribute to the digital commons when rapacious AI companies are constantly harvesting it? Why would anyone plant seeds on someone else's farm?

I have been thinking about this. I was pretty adamant a few months ago that AI is going to make a lot of things worse for everyone because of the externalities of the technology (data center creep, lock-in of models, etc.), and it probably still will. But then someone suggested to me that I use Claude Code to upgrade my SSG site to the new version, because I had been sitting on my ass as the years went by, missing deadline after deadline. I just couldn't put myself into gear to upgrade it. It was massively out of date, 10+ years, and I knew it was going to be a nightmare to deal with the problems. I was probably making it harder in my head than it really was.

So I purchased Claude Code Pro and the thing upgraded my site pretty well. There were things it missed, because I didn't know the problems existed in the first place until the upgrade was complete, but I had a working updated site in less than an hour. If I had done this myself it would have taken me days or weeks.

So at that point I realized something. It's a tool that can handle a good amount of the tasks I throw at it, as long as I am specific. I think the problem with most people is they expect it to respond like a human. That's not going to happen, IMHO. Maybe some day it will be more than what it is, but right now it's just a tool. I don't care what anyone says about AGI and the like. It's not going to happen with the current iteration (the pattern-recognition type). We are going to need more than that if we want to simulate a human brain.

The point is, and I know this is not going to be received very well, mostly because this tech is in the hands of people that are gatekeeping it: maybe someday we might reach a point where all of humanity's knowledge is put into these things and we can use them to better our lives. Maybe at some point we don't need to hold onto or hoard things as if it's the only way we can make a living? And instead we can build things just for the sake of creating them and improve humanity in the process? Obviously the commercial model of these things is not great, and that is going to have to be dealt with, but I can see a future where we might be able to fix a lot of humanity's problems with this technology as more and more good people put it to use for things that help humanity.

If I'm being honest, I've never related to that notion of remuneration and credit being the primary reason to write something. I don't claim to be some great writer or anything, but I do have a blog I write quite often on (though I'm traveling in my wife's home country of Taiwan now and haven't updated it in a while). But for me, I write because it feels good to do so. Sometimes there's a group utility in things, like when I edit a Google Maps listing to be correct even though "a faceless corporation is going to hoover up my work and profit off it without paying me for my work", or when I pick up a Lime bike someone's dropped onto the sidewalk even though "a faceless corporation is externalizing the work of organizing the proper storage of their property on public land without paying the workers", and so on.

I just think it's nice to contribute to the human commons and it's fine if some subset of my fellow organisms uses it in whatever way. Realistically, the fact that Brewster Kahle is paid whatever few hundred thousand he's paid for managing a non-profit that only exists because it aggregates other people's work isn't a problem for me. Or that Larry Page and Sergey Brin became ultra-rich around providing a search interface into other people's work. Or that Sam Altman and Dario Amodei did the same through a different interface.

This particular notion doesn't seem to be a post-AI trend. It seems to have happened prior to the big GPTs coming out where people started doing a lot of this accounting for contribution stuff. One day it'll be interesting to read why it started happening because I don't recall it from the past. Perhaps I just wasn't super plugged in to the communities that were complaining about Red Hat, Inc.

It's not that I wouldn't understand if I sold my Subaru to a guy who immediately managed to sell it to another guy for a million times the money. I get that; I'd feel cheated. But if I contributed a little to something, like I did so Google would have a site to list for certain keywords so that they could show ads next to it in their search results, I just find it so hard to say "That's my money you're using. Pay me!"

  • You do it as a hobby, that's fine. Some people do it for a living. And while they aren't owed a living doing that specific thing, it is going to be a big problem for them if they can't make money at it anymore.

    I'm sure plenty of people feel the same way about software. They make software as a hobby and don't care about remuneration or credit. Meanwhile I write software for my day job and losing the ability to make money from it would be devastating.

    • > Some people do it for a living.

      I was going to write, "not for long," which might be true for some. But then I realized there will always be a difference between LLM output and human writing. We don't read blogs because of their facts, we read them because of how the facts are presented and how the author's personality comes through on the page.

    • Ah, I see. It’s just straightforward protectionism like dockworkers opposing automation and so on. That I do comprehend, in fact.

      I write software too and I may no longer be able to just do it in the old way. Pretty scary world but also exciting. I can’t imagine trying to restrict LLM software writers on that basis but I can comprehend it as simply self-interest.

      Fair enough.

      3 replies →

At what point do we look at 'Industrial Society and its Future' and go from "yeah that'll never happen", "ok some parts of it are happening", to ...? I swear tech folks are the most obtuse people on the planet.

  • I think it's completely normal. Whenever automation comes knocking, people are inclined to think it's going to flatline conveniently before their job is at risk. LLMs can code now? Cool, they can't code well though can they? Oh they can code pretty well now? Cool, coding was never the hard part of SWE anyway, it's [thing we have no reason to think AI can't beat 99% of humans at at some point], etc

    I think SWE as a mainstream profession is much nearer to the end than the beginning, I'm curious and quite scared about what becomes of us.

    • I don't think you understand. Frankly, AI is a failure if all it does is replace coders. AI needs (given its current investment levels) to conquer all forms of knowledge work. This is an example of tech/industry needing to impose itself on society, rather than society needing it.

      4 replies →

> Prior to the industrial revolution, the natural world was nearly infinitely abundant.

The opposite is true. Central Europe was almost devoid of trees. Food was scarce as arable land bore little fruit without fertiliser.

Society was Malthusian until the Industrial Revolution.

  • Can we interpret "abundant" in a Darwinian sense e.g. diversity of life? I would think the industrial farming revolution decreased crop variety over time same for animal lineages aside from the rapid increase in mixed poodle breeds.

  • To add, I don’t think my ancestor Spaniards for example needed the help of machines to deplete mines in America. They also came already equipped with all kinds of legal systems, including the Requerimiento, which they read out loud to natives in preposterous spectacle.

    In general the transition from feudalism to capitalism, including the formation of the legal systems that supported the latter, happened gradually for maybe up to four or five centuries before the steam engine had been invented.

    Sure, the Industrial Revolution further accelerated the development of property rights, mercantile, and civil laws, but all in all I don’t think there’s much truth that machines were the primary cause of such developments.

  • Not really Malthusian. Agricultural societies had adapted to keep the population stable during normal times and bounce back in a generation or two after bad times. Those cultural adaptations stopped working when childhood mortality declined.

    Useful land was a scarce resource in more civilized regions, while labor was cheap. Given enough land, subsistence farmers could easily feed themselves outside particularly bad years. But much of the land belonged to local elites, and commoners had to work that land to fund the pursuits of the elites.

> We have copyright and intellectual property law already, of course, but those were designed presuming a human might try to profit from the intellectual labor of others. With AI, we're in the industrial era of the digital world. Now a single corporation can train an AI using someone's copyrighted work and in return profit off the knowledge over and over again at industrial scale.

The idea that copyright simply doesn't apply to AI has more to do with AI companies deciding that they're not going to comply with those laws than the design of the laws. Also a very successful lobby against enforcement by positioning AI as a strategic necessity.

  • It's not possible (or at least extremely hard) to prove that the final weights they come up with resulted from copyright infringement.

    That's why they are valued so highly on the stock market. Basically they will steal all the value of intellectual property in a semi-legal way.

> Why would a writer put an article online if ChatGPT will slurp it up and regurgitate it back to users without anyone ever even finding the original article?

I'm happy to miss all the stuff that was written just for the financial benefit of the author.

A couple thoughts…

Mostly, AIs don’t recite back various works. Yes, there are a couple of high-profile cases where people were able to get an AI to regurgitate pieces of New York Times articles and Harry Potter books, but mostly not. Mostly, it is as if the AI is your friend who read a book and gives you a paraphrase, possibly using a couple sentences verbatim. In other words, it probably falls under a fair use rule.

Secondly, given the modern world, content that doesn’t appear online isn’t consumed much, so creators who are doing it for the money will certainly continue putting content online. Much of that content will be generated by AIs, however.

  • You're missing the point. This is the crux of munificent's argument IMO (and I've made variations of it as well)

    > We have copyright and intellectual property law already, of course, but those were designed presuming a human might try to profit from the intellectual labor of others.

    You getting a summary of a copyrighted work from a friend is necessarily limited by the number of friends you have, the amount of time they have to read stuff and talk to you, and so on. Machines (and AIs) don't have any such limitations.

    • Yes, true. But does that really shift the argument much? An AI is like the most well-read book nerd you’ve ever met. The AI has read everything. They still won’t recite Harry Potter for you at full length and reading what the original author wrote is part of the pleasure.

      4 replies →

Our only hope is that AI in the long run is both powerful and benevolent enough to be its own "whistleblower" in cases of misuse.

> Prior to the industrial revolution, the natural world was nearly infinitely abundant. We simply weren't efficient enough to fully exploit it. That meant that it was fine for things like property and the commons to be poorly defined. If all of us can go hunting in the woods and yet there is still game to be found, then there's no compelling reason to define and litigate who "owns" those woods.

I mean, medieval Europe (speaking broadly) had pretty well defined property rights wrt hunting. In fact, the forester at the time was thought of as one of the most corrupt jobs, as they'd commonly have side hustles poaching and otherwise illegally extracting resources from the lands they enforced and kept others from utilizing in a similar way. Quis custodiet ipsos custodes?

Stuff gets put online when the reader isn't the customer. Someone is paying for a reader to be told certain things. So it's free at the point of reading.

>Prior to the industrial revolution, the natural world was nearly infinitely abundant.

>We had to invent giant legal systems in order to determine who has the right to do that and who doesn't.

Excuse me? The industrial revolution was like 300 years ago. We had laws before that.

> We are truly in the Information Age now, and I suspect a similar thing will play out for the digital realm.

The analogy seems to be backwards though. It would be as if we previously had a scarcity of land and because of that divided it up into private property so markets could maximize crop yield etc. and then someone came up with a way to grow food on asteroids using robots, and that food is only at the 20th percentile of quality but it's far cheaper. Suddenly food becomes much more abundant and the people who had been selling the 20th percentile food for $5 are completely out of the market because the new thing can do that for $0.05, and the people providing the 50th percentile food for $10 are also taking a hit because the price difference between what they're providing and the 20th percentile stuff just doubled.

The existing plantation owners then want to put a stop to this somehow, or find a way to tax it, but arguments like this have a problem:

> Why would a writer put an article online if ChatGPT will slurp it up and regurgitate it back to users without anyone ever even finding the original article?

This was already the status quo as a result of the internet. Newspapers were slowly dying for 20 years before there was ever a ChatGPT, because they had been predicated on the scarcity of printing presses. If you published a story in 1975 it would take 24 hours for relevant competitors to have it in their printed publication and in the meantime it was your exclusive. The customer who wants it today gets it from you. On top of that, there weren't that many competitors covering local news, because how many local outlets are there with a printing press?

Then blogs, Facebook, Reddit and Twitter come and anyone who can set up WordPress can report the news five minutes after you do -- or five hours before, because now everyone has an internet-connected camera in their pocket so the first news of something happening now comes in seconds from whoever happened to be there at the time instead of the next morning after a media company sent a reporter there to cover it.

The biggest problem we have yet to solve from this is how to trust reports from randos. The local paper had a reputation to uphold that you now can't rely on when the first reports are expected to come from people with no previous history of reporting because it's just whoever was there. But that's the same thing AI can't do either -- it's a notorious confabulist.

And it's the media outlets shooting themselves in the foot with this one, because too many of them have gotten so sloppy in the race to be first or to pander to partisans that they're eroding the one advantage they would have been able to keep. Damn fools to erode the public's trust in their ability to get the facts right when it's the one thing people would otherwise still have to get from them in particular.

> It really feels like we're in the soot-covered child-coal-miner Dickensian London era of the Information Revolution and shit is gonna get real rocky before our social and legal institutions catch up

The really discouraging part of this is that it feels like our social and legal institutions don't even care if they catch up or not.

Technology is speeding up, and the lag time before anything is discussed from a legal standpoint is way, way too long.