Reading this felt like the official obituary for the 90s techno-optimism many of us grew up on.
The "end of history" hangover is real. We went about building the modern stack assuming bad actors were outliers, not state-sponsored standard procedure. But trying to legislate good use into licenses? I don't know how you would realistically implement it and to what extent? That solution implies we have to move toward zero-trust architectures even within open communities.
As an example: formal proofs and compartmentalization are unsexy but they're a solid way we survive the next decade of adversarial noise.
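A toy sketch of what that pairing can look like in practice (my own illustration, not anything from the original comment; `redact` and `check_never_leaks` are hypothetical names): compartmentalize so downstream code only ever receives a pre-redacted view, then verify the redaction invariant exhaustively over a small bounded domain, in the spirit of lightweight model checking rather than a full formal proof:

```python
from itertools import product

# Compartmentalization: downstream code only ever sees the narrow,
# pre-redacted view -- it cannot leak fields it was never given.
ALLOWED_FIELDS = {"title", "body"}

def redact(record: dict) -> dict:
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def check_never_leaks() -> bool:
    # "Formal proof" in miniature: exhaustively check the invariant
    # over every subset of a small bounded field domain.
    fields = ["title", "body", "email", "ip_addr"]
    for present in product([False, True], repeat=len(fields)):
        record = {f: "x" for f, keep in zip(fields, present) if keep}
        assert set(redact(record)) <= ALLOWED_FIELDS
    return True

print(check_never_leaks())  # True
```

Real formal verification would prove the invariant for all inputs rather than a bounded set, but even bounded checks catch the structural mistakes that adversarial noise exploits.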
I remember reading a quote somewhere that stuck with me. Paraphrasing, "If the architecture of my code doesn't enforce privacy and resistance to censorship by default, we have to assume it will be weaponized".
I am out of practical ideas; lots of things sound good on paper and in theory. It's a bit sad, tbh. Always curious to hear more on this issue from smarter people.
Yeah, reminds me of the "Security" xkcd (https://xkcd.com/538/) - a threat from a good ol' 5-dollar wrench defeating state-of-the-art encryption.
Never underestimate how state actors can use violence (or merely the threat of it) to force people to do things. The only way to respond to that is not through code or algorithms or protocols, but through political action (whether violent or non-violent).
> trying to legislate good use into licenses

It's also questionable to what extent restrictive licenses for open source software remain relevant in the first place, as you can now relatively easily run an AI code generator that just imitates the logic of a FOSS project, but with newly generated code, so that you don't need to adhere to a license's restrictions at all.
How does one make sure the implementation is sufficient and complete? It feels like assuming total knowledge of the world, which is never true. How many false positives and false negatives do we tolerate? How does it impact a person?
> If the architecture of my code doesn't enforce privacy and resistance to censorship by default
which is impossible.
- No code is feasibly guaranteed to be secure
- All code can be weaponized, though not all feasibly; password vaults, privacy infrastructure, etc. tend to show holes.
- It’s unrealistic to assume you can control any information; case-in-point the garden of Eden test: “all data is here; I’m all-powerful and you should not take it”.
I’m not against regulation and protective measures. But you have to prioritize carefully. Do you want to spend most of the world’s resources mining cryptocurrency and breaking quantum cryptography, or do you want to develop games and great software that solves hunger and homelessness?
No code architecture will enforce privacy or guarantee security.
Some code architectures make privacy and security structurally impossible from the beginning.
As technologists, we should hold ourselves responsible for ensuring the game isn't automatically lost before the software decisions even leave our hands.
"As an example: formal proofs and compartmentalization are unsexy but they're a solid way we survive the next decade of adversarial noise."
I think you are on to something there. Noise is really signal divorced from tone. Our current consensus protocols are signal based. They demonstrate control, but not rightful ownership. Pairing a tone keypair with a matching signal keypair in a multisig configuration would be compatible with current networks, but also allow a bottom-up federated trust network to potentially emerge?
Things like that should not be handled at the software level; you will always lose and run out of resources. You basically have to force politicians (fat chance).
Politicians aren't generally leaders, but rather followers. To force politicians to do something, lead where people follow you. But of course, paradoxically, this will by definition make you a practitioner of politics yourself... To quote from The Hunt for Red October, "Listen, I'm a politician, which means I'm a cheat and liar. When I'm not kissin' babies I'm stealin' their lollipops. But! It also means I keep my options open."
> That solution implies we have to move toward zero-trust architectures even within open communities
Zero trust cannot exist as long as you interact with the real world.
The problem wasn't trust per se, but blind trust.
The answer isn't to eschew trust (because you can't) but to organize it with social structures, like what people did with “chain of trust” certificates back then before it became commoditized by commercial providers and cloud giants.
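As a toy sketch of that "chain of trust" idea (entirely my own illustration; real certificate chains use public-key signatures, whereas here HMAC with derived keys stands in for them): each party vouches for the next, and verification walks the chain back to a trusted root:

```python
import hashlib
import hmac

def derive_key(parent_key: bytes, name: str) -> bytes:
    # Each vouched-for party gets its own key, derived from its introducer's.
    return hashlib.sha256(parent_key + name.encode()).digest()

def vouch(parent_key: bytes, name: str) -> str:
    # The introducer "signs" the newcomer's name.
    return hmac.new(parent_key, name.encode(), hashlib.sha256).hexdigest()

def verify_chain(root_key: bytes, chain) -> bool:
    key = root_key
    for name, signature in chain:
        expected = hmac.new(key, name.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, signature):
            return False
        key = derive_key(key, name)  # the newcomer can now vouch onward
    return True

root = b"community-root-key"
alice_sig = vouch(root, "alice")
bob_sig = vouch(derive_key(root, "alice"), "bob")

print(verify_chain(root, [("alice", alice_sig), ("bob", bob_sig)]))      # True
print(verify_chain(root, [("alice", alice_sig), ("mallory", bob_sig)]))  # False
```

Note that trust is transitive here, which is exactly the property the commoditized CA model centralized; a web-of-trust variant would accept multiple roots instead of one.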
> trying to legislate good use into licenses

Text files don't have power. Appealing to old power institutions to give them power is not the way to create new power either. Legacy systems with entrenched power have tended to insulate those at the top, killing social mobility and enabling those institutions to act against broad interests.
Open source has always been a force of social mobility. You could learn from reading high quality code. Anyone could provide service for a program. You could start a company not bound by bad decision makers who held the keys.
Open source always outmaneuvers inefficiency. Those who need to organize are not beholden to legacy systems. We need technically enabled solutions to organize and create effective decision making. The designs must preserve social mobility within themselves to avoid becoming what they seek to replace. I'm building technically enabled solutions for this at https://positron.solutions
The Internet was the “Wild West”, and I mean that in the most kind, brutal, and honest way, both like a free fantasy (everyone has a website), genocide (replacement of real world), and an emerging dystopia (thieves/robbers, large companies, organizations, and governments doing terrible things).
Which, if you think about it, is a mostly uplifting timeline.
Back in 1770 there were basically 0 democracies on the planet. In 1790 there were 2. Now there are about 70, with about 35 more somewhere in between democracy and autocracy. So most of the world's population is living under a form of democracy. I know that things are degrading for many big democracies, but it wouldn't be the first time (the period from WW1 until the end of WW2 was a bad time for democracies).
I have no idea how we get from here to a civilized internet, though.
> The "end of history" hangover is real.

This is the real issue. FOSS was born out of a utopian era, from the '60s through the 2000s, when the US was still a beacon of hope. That is fundamentally impossible in today's world of ultra-shark-world-eat-you capitalism and a global race to the bottom.
If it didn't already exist, FOSS couldn't get off the ground today, and even as it stands its survival is in jeopardy.
FOSS was born because the cost of sharing information rapidly approached nothing. BBS and Usenet were loaded with shared software, simply because it was easy to share and there was incredible demand for it.
FOSS doesn't need the US or 1980s counterculture to succeed. It just needs cheap disk space and someone willing to share their code. The price of storage and internet continues to fall, and I think FOSS will be fine as long as that continues.
I don't get why you conflate privacy and resistance to censorship.
I think privacy is essential for freedom.
I'm also fine with lots of censorship, on publicly accessible websites.
I don't want my children watching beheading videos, or being exposed to extremists like (as an example of many) Andrew Tate. And people like Andrew Tate are actively pushed by YouTube, TikTok, etc. I don't want my children to be exposed to what I personally feel are extremist Christians in America, who infest children's channels.
I think anyone advocating against censorship is incredibly naive to how impossible it's become for parents. Right now it's a binary choice:
1. No internet for your children
2. Risk potentially massive, life-altering harm, as parental controls are useless, half-hearted, or non-existent. Even companies like Sony or Apple make it almost impossible to have a choice in what your children can access. It's truly bewildering.
And I think you should have to identify yourself. You should be liable for what you post to the internet, and if a company has published your material but doesn't know who you are, THEY should be liable for the material published.
Safe harbor laws and anonymous accounts should never have been allowed to co-exist. It should have been one or the other. It's a preposterous situation we're in.
Voluntary “censorship” (not being shown visceral media you didn't ask for) and censorship for children are very important.
Bad “censorship” is involuntarily denying or hiding from adults what they want to see. IMO, that power tends to get abused, so it should only be applied in specific, exceptional circumstances (and probably always temporarily, if only because information tends to leak, so there should be a longer fix that makes it unnecessary).
I agree with you that children should be protected from beheading and extremism; also, you should be able to easily avoid that yourself. I disagree in that, IMO, anonymous accounts and “free” websites should exist and be accessible to adults. I believe that trusted locked-down websites should also exist, which require ID and block visceral media; and bypassing the ID requirement or filter (as a client) or not properly enforcing it (as a server operator) should be illegal. Granting children access to unlocked sites should also be illegal (like giving children alcohol, except parents are allowed to grant their own children access).
Nothing wrong with a GPL-like viral license for the AI era.
Training on my code / media / other data? No worries, just make sure the weights and other derived artifacts are released under a similarly permissive license.
Well, I would say it should be like that already, and no new license is needed. Basically, if an LLM was ever based on GPL code, its output should also be GPL licensed. As simple as that.
Licenses like GPL are built on top of an enforcement mechanism like copyright. Without an enforced legal framework preventing usage unless a license is agreed to, a license is just a polite request.
Wouldn't you want the code generated by those models to be released under those permissive licenses as well? Is that what you mean by other derived artifacts?
That is a complete fool's errand. If it ever passes, it would just mean the death of Open Source AI models. All the big companies would just continue to collect whatever data they like, license it if necessary, or pay the fine if illegal (see Anthropic paying $1.5 billion for books). Meanwhile, every Open Source model would be starved for training data within its self-enforced rules and easy to shut down if an incorrectly licensed bit ever slipped into the model.
The only way forward is the abolishment of copyright.
Essentially, LLMs are recontextualizing their training data. So on one hand, one might argue that training is like a human reading books and inference is like writing something novel, (partially) based on the reading experience. But the contract between humans considers it plagiarism when we recite some studied text and then claim it as our own. That is why, for example, books attribute citations with footnotes.
With source code, we used to either re-use a library as-is, in which case the license terms would apply, OR write our own implementation from scratch. While this LLM recontextualization purports to be like the latter, it is sometimes evident that the original license, or at least some attribution, comment, or footnote, should apply. If only to help with future legibility and maintenance.
I think this mixes up the 'how' with the 'why.' FOSS isn't the end in itself, I think that for most people it's just the tool that lets us work together, share what we've built, and get something back from the community.
If this is suddenly being weaponised against us, I don't see how that's not a problem.
For a lot of people, FOSS is also very much the why. It’s not just a practical tool—it represents core principles like freedom, transparency, and collaboration. Those values are the reason many contribute in the first place.
If you consider that the people weaponizing code are not honest, I as a FOSS producer am unworried. There may not be a lot of people out there able to use my code compared to LLMs scraping it, but I'm giving a leg up to other humans trying to do what I do.
If what I'm doing is interesting or unusual, LLMs will firstly not recognize that it's different, secondly will screw up when blindly combining it with stuff that isn't different, and thirdly if it's smart enough to not screw that up, it will ignore my work in favor of stealing from CLOSED source repos it gains access to, on the rationale that those are more valuable because they are guarded.
And I'm pretty sure that they're scraping private repos already because that seems the maximally evil and greedy thing to do, so as a FOSS guy I figure I'm already covered, protected by a counterproductive but knowingly evil behavior.
These are not smart systems, but even more they are not wise systems, so even if they gain smarts that doesn't mean they become a problem for me. More likely they become a problem for people who lean on intellectual property and privacy, and I took a pretty substantial pay cut to not have to lean on those things.
I think you'll find, especially within the tech community, people struggle with purity and semantics. They see that supporting and promoting FOSS is to be okay with its use for war, oppression, or whatever mental gymnastics they need to just not care or promote bad things. They will argue about what "free and open" means and get mixed up in definitions, political alignments, etc.
It is pretty obvious to me, that being blase about whomever using FOSS for adversarial reasons is not very "open" or "free". Somewhere in the thread there is an argument about the paradox of intolerance and I don't really care to argue with people on the internet about it because it is hard to assume the debate is in good faith.
My point is this: throw away all your self-described nuance and ask yourself whether you think any malicious, war-mongering, authoritarian, or hyper-capitalist state would permit a free and open source software environment. If the objective of a business, government, or billionaire is power, control, and/or exclusivity, then, well, your lofty ideals behind FOSS have completely collapsed.
I agree with the sentiment, but unfortunately often this is too simplistic.
For example, a lot of Palestinians are not tolerant towards LGBT people -> a lot of LGBT people are not tolerant towards Israelis -> a lot of Israelis are not tolerant towards Palestinians.
Also, how do you know whether you are simply intolerant, or intolerant towards intolerance?
> But the part about FOSS being used in a project not aligned with the creator's values seems hypocritical
I agree with you.
Imagine a parallel Earth where there was a free OS that the majority in the world used called GNU/Felix.
Felix (it/its), who wrote GNU/Felix and who was the project’s strong but kind leader, one day had a head injury that somehow decreased its empathy but raised its IQ.
Subordinates of Felix on the council of leadership noticed that it was adding features that would track all user data to use in some nefarious plan.
In this case, most would agree that for both the freedom and good of all, Felix should no longer lead this effort.
However, they would want to be sure that even the Will Bates’ great company Bikerosoft didn’t lead the project either, because despite its wonderful and ubiquitous Bikerosoft Office apps and Ezure cloud tools and infrastructure, it was a profit-based company.
My worry: that it suddenly becomes treason to commit to or pull certain repositories, that certain repositories become nationalized, or that other nation states do the same. Heinous forks and theft of code en masse for fear of being shot by a drone streaming video over ffmpeg.
Of code becoming a regulated munition, and tools that rat you out if you are designing resistance software.
I suppose this is relevant to a subset of HN audience who attend FOSDEM. Even the talk abstract is worth discussion as it highlights an important side effect of FOSS goals and the current state of the world.
I'm not saying there's no point to this discussion, but it would bring focus if this was decoupled from the broader topic of open source.
There has never been any inherent political or economic value in open source software. Those things come from deliberate decisions by authors and users such as licensing and mass adoption.
Open source is not synonymous with the GPL and most businesses try to avoid open source software when implementing their core competency.
> NGI Zero, a family of research programmes including NGI0 Entrust, NGI0 Core and NGI0 Commons Fund, part of the Next Generation Internet initiative.
with the Next Generation Internet initiative at the end receiving money/financing from the political supra-state entity called the EU [1]. So I guess the speaker is not happy because political entities that the EU sees as adversarial are also using open-source code? Not sure how war plays into this, as I’m sure he must be aware of the hundreds of billions of euros the EU has allocated for it.
One way war plays into FOSS is that enemy nations are no longer supposed to be contributing to the same projects, being from nationality XYZ is now as relevant as programming skills one has to offer, likewise open source software from specific countries might no longer be allowed.
I imagine anytime the people that control the war resources decide to use them, there are plenty of other people not interested or involved in the destruction. If the UK declares war on an African nation tomorrow, since the US is an ally you would say those other people in the US should disallow devs from the target African nation from contributing to their project?
You are engaging with a Russian troll larping as Bulgarian/Moldovan/whatever. Look through his messages: it's all about Ukraine and the EU, including reposts from a ruzzian propaganda subreddit, etc. They are not arguing in good faith.
Michiel is indeed one of the driving forces behind NLNet's NGI0 program. That said, just because they're distributing money they received from the EU, that doesn't mean that they're intimately aware of the full EU budget.
> that doesn't mean that they're intimately aware of the full EU budget.
There's no "intimate" knowledge required in order to be aware of the EU spending tens to hundreds of billions of euros on the war close to its Eastern border, it has been one of the main topics of discussion in the media for a good time now. Unless this speech holder has lived under a rock since February 2022, which doesn't seem to be the case (he was the one mentioning the "war" thing).
Yes, this is the real problem when USians and Europeans complain about FOSS/OS safety. They understand deeply that the FOSS system is an extension of US soft power through the tech sector, and any indication that the existence of FOSS is a threat to US interests means that its values must be destroyed, because these people don't really believe in the things they say they do.
>AI in its current form has no actual sense of truth or ethics.
This is untrue. It does have a sense of truth and ethics. It does get a few things wrong from time to time, but you can't reliably get it to say something blatantly incorrect (at least with thinking enabled). I would say it is more truthful than any human on average. Ethically, I don't think you can get it to do or say something unethical.
The burden of proof lies on someone making a positive assertion. Why do you think it's possible for "AI" to be either of those things at this point in time (let alone whether it's possible at all)?
"If the architecture of my code doesn't enforce privacy"
This is still techno-optimism. The architecture of your code will not do that. We are long past the limits of what you can fix with code.
The only action that matters is political and I don't think voting cuts it.
> We are long past the limits of what you can fix with code.
An example of what is not possible to fix with code?
Perhaps we need reputation on the network layer? Without it being tied to a particular identity.
It would require that it not be easy to farm (entropy detection on user behaviour, perhaps, plus clique detection).
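To gesture at what entropy detection on user behaviour could mean (a hypothetical sketch, not a worked-out design; `behavior_entropy` and the one-second bucket size are my own inventions): score accounts by the Shannon entropy of their inter-event timings, since scripted or farmed accounts tend to cluster into a few timing buckets:

```python
import math
from collections import Counter

def behavior_entropy(inter_event_seconds, bucket=1.0):
    """Shannon entropy (bits) of bucketed inter-event timings."""
    buckets = Counter(round(dt / bucket) for dt in inter_event_seconds)
    total = sum(buckets.values())
    return sum((n / total) * math.log2(total / n) for n in buckets.values())

# A metronomic bot: every event exactly 5 s apart.
bot = [5.0] * 20
# A human-ish mix of short and long pauses.
human = [0.8, 3.2, 12.0, 1.1, 7.5, 0.9, 25.0, 2.2, 4.4, 9.9]

print(behavior_entropy(bot))    # 0.0
print(behavior_entropy(human))  # ~2.85
```

On its own this is trivially gamed by adding jitter, which is presumably why the proposal pairs it with clique detection on the interaction graph.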
There are no technology solutions to what are fundamentally limits of human society.
We have reached the limits of societal coherence, and there’s no way to bridge the gap.
> The Internet was the “Wild West”

It’s changing, but not completely.
""The "end of history" hangover is real. ""
This is the real issue. FOSS was born out of a utopian era in 60's-2000s' where the US was still a beacon of hope. That is fundamentally impossible in todays world of ultra-shark-world-eat-you capitalism and global race to the bottom.
If it didn't already exist, FOSS would not be able to get off the ground today. FOSS couldn't start and survive today. Its survival is in jeopardy.
FOSS was born because the cost of sharing information rapidly approached nothing. BBS and Usenet were loaded with shared software, simply because it was easy to share and there was incredible demand for it.
FOSS doesn't need the US or 1980s counterculture to succeed. It just needs cheap disk space and someone willing to share their code. The price of storage and internet continues to fall, and I think FOSS will be fine as long as that continues.
1 reply →
> I think anyone advocating against censorship is incredibly naive to how impossible it's become for parents.

I thought it was easy: watch videos with your kid, and don't allow them to doomscroll or be raised by the "featured"/"front page" algorithms.
I agree that communities should try to protect themselves from malicious actors.
But the part about FOSS being used in a project not aligned with the creator's values seems hypocritical:
IMO FOSS is a gift to humanity and as such:
"A gift should be given freely, without obligation or expectation, as a true expression of love and kindness"
> if an LLM was ever based on GPL code, its output should also be GPL licensed

We need countries to start legally enforcing that. Nothing will change otherwise. I stopped open-sourcing my code, and LLMs are one of the big reasons.
> its output should also be GPL licensed

It really should be like that indeed. Where is RMS? Is he working on GPLv4?
Interesting. Is there a license that does this already?
AI is not humanity. Also many open source licenses have attribution clauses, which AI does not honor when it regurgitates.
I think the attribution is a very good point!
Essentially, LLMs recontextualize their training data. On one hand, one might argue that training is like a human reading books, and inference is like writing something novel, (partially) based on the reading experience. But the contract between humans considers it plagiarism when we recite some studied text and then claim it as our own. That's why, for example, books attribute citations with footnotes.
With source code, we used to either re-use a library as-is, in which case the license terms would apply, or write our own implementation from scratch. While this LLM recontextualization purports to be like the latter, it is sometimes evident that the original license, or at least some attribution, comment, or footnote, should apply. If only to help with future legibility and maintenance.
I think this mixes up the 'how' with the 'why.' FOSS isn't the end in itself, I think that for most people it's just the tool that lets us work together, share what we've built, and get something back from the community.
If this is suddenly being weaponised against us, I don't see how that's not a problem.
For a lot of people, FOSS is also very much the why. It’s not just a practical tool—it represents core principles like freedom, transparency, and collaboration. Those values are the reason many contribute in the first place.
7 replies →
If you consider that the people weaponizing code are not honest, I as a FOSS producer am unworried. There may not be a lot of people out there able to use my code compared to LLMs scraping it, but I'm giving a leg up to other humans trying to do what I do.
If what I'm doing is interesting or unusual, LLMs will firstly not recognize that it's different, secondly will screw up when blindly combining it with stuff that isn't different, and thirdly if it's smart enough to not screw that up, it will ignore my work in favor of stealing from CLOSED source repos it gains access to, on the rationale that those are more valuable because they are guarded.
And I'm pretty sure that they're scraping private repos already because that seems the maximally evil and greedy thing to do, so as a FOSS guy I figure I'm already covered, protected by a counterproductive but knowingly evil behavior.
These are not smart systems, but even more they are not wise systems, so even if they gain smarts that doesn't mean they become a problem for me. More likely they become a problem for people who lean on intellectual property and privacy, and I took a pretty substantial pay cut to not have to lean on those things.
I think you'll find, especially within the tech community, people struggle with purity and semantics. They see that supporting and promoting FOSS is to be okay with its use for war, oppression, or whatever mental gymnastics they need to just not care or promote bad things. They will argue about what "free and open" means and get mixed up in definitions, political alignments, etc.
It is pretty obvious to me that being blasé about whoever uses FOSS for adversarial purposes is not very "open" or "free". Somewhere in the thread there is an argument about the paradox of intolerance, and I don't really care to argue with people on the internet about it because it is hard to assume the debate is in good faith.
My point is this: throw away all your self-described nuance and ask yourself whether any malicious, war-mongering, authoritarian, or hyper-capitalist state would permit a free and open source software environment. If the objective of a business, government, or billionaire is power, control, and/or exclusivity, then your lofty ideals behind FOSS have completely collapsed.
2 replies →
[flagged]
I agree with the sentiment, but unfortunately often this is too simplistic.
For example, a lot of Palestinians are not tolerant towards LGBT people -> a lot of LGBT people are not tolerant towards Israelis -> a lot of Israelis are not tolerant towards Palestinians.
Also, how do you know whether you are simply intolerant, or intolerant towards intolerance?
6 replies →
"No liberty for the enemies of liberty" ("Pas de liberté pour les ennemis de la liberté")
Saint-Just
> But the part about FOSS being used in a project not aligned with the creator's values seams hypocritical
I agree with you.
Imagine a parallel Earth where there was a free OS that the majority in the world used called GNU/Felix.
Felix (it/its), who wrote GNU/Felix and who was the project’s strong but kind leader, one day had a head injury that somehow decreased its empathy but raised its IQ.
Subordinates of Felix on the council of leadership noticed that it was adding features that would track all user data to use in some nefarious plan.
In this case, most would agree that for both the freedom and good of all, Felix should no longer lead this effort.
However, they would want to be sure that even Will Bates's great company Bikerosoft didn't lead the project either, because despite its wonderful and ubiquitous Bikerosoft Office apps and Ezure cloud tools and infrastructure, it was a profit-based company.
> US lawmakers starting to question whether RISC-V is an issue of national security:
https://itif.org/publications/2024/07/19/the-us-china-tech-c...
My worry: that it suddenly becomes treason to commit to or pull certain repositories, that certain repositories become nationalized, or that other nation states do the same. Heinous forks and theft of code en masse for fear of being shot by a drone streaming video over ffmpeg.
Of code becoming a regulated munition, and tools that rat you out if you are designing resistance software.
This talk is scheduled for January 31st, or am I missing something? Why is it being posted here? There is no video yet.
This is correct.
I suppose this is relevant to a subset of HN audience who attend FOSDEM. Even the talk abstract is worth discussion as it highlights an important side effect of FOSS goals and the current state of the world.
All talks are recorded, so you can watch it live or on replay. Talks are free to attend; they are at the ULB campus in Brussels, 31st of Jan to 1st of Feb.
I'm not saying there's no point to this discussion, but it would bring focus if this was decoupled from the broader topic of open source.
There has never been any inherent political or economic value in open source software. Those things come from deliberate decisions by authors and users such as licensing and mass adoption.
Open source is not synonymous with the GPL and most businesses try to avoid open source software when implementing their core competency.
> Open source is not synonymous with the GPL and most businesses try to avoid open source software when implementing their core competency.
What do you mean here? Businesses often implement their own core code, but they don't deliberately favour closed-source software.
[flagged]
The guy holding this talk apparently does this:
> NGI Zero, a family of research programmes including NGI0 Entrust, NGI0 Core and NGI0 Commons Fund, part of the Next Generation Internet initiative.
with the Next Generation Internet thing at the end receiving money/financing from the political supra-state entity called the EU [1]. So I guess the speaker is not happy because political entities that the EU sees as adversarial are also using open source code? Not sure how war plays into this, as I'm sure he must be aware of the hundreds of billions of euros the EU has allocated for that.
[1] https://ngi.eu/
One way war plays into FOSS is that enemy nations are no longer supposed to be contributing to the same projects, being from nationality XYZ is now as relevant as programming skills one has to offer, likewise open source software from specific countries might no longer be allowed.
I imagine anytime the people that control the war resources decide to use them, there are plenty of other people not interested or involved in the destruction. If the UK declares war on an African nation tomorrow, since the US is an ally you would say those other people in the US should disallow devs from the target African nation from contributing to their project?
1 reply →
You are engaging with a russian troll larping as Bulgarian/Moldovan or whatever. Look through his messages: it's all about Ukraine and the EU, including reposts from a ruzzian propaganda subreddit, etc. They are not arguing in good faith.
Michiel is indeed one of the driving forces behind NLNet's NGI0 program. That said, just because they're distributing money they received from the EU, that doesn't mean that they're intimately aware of the full EU budget.
(Disclosure: I once received NGI0 funding.)
> that doesn't mean that they're intimately aware of the full EU budget.
There's no "intimate" knowledge required to be aware of the EU spending tens to hundreds of billions of euros on the war close to its eastern border; it has been one of the main topics of discussion in the media for a good while now. Unless this speaker has lived under a rock since February 2022, which doesn't seem to be the case (he was the one mentioning the "war" thing).
Yes, this is the real problem when USians and Europeans complain about FOSS/OS safety. They understand deeply that the FOSS system is an extension of US soft power through the tech sector, and any indication that the existence of FOSS is a threat to US interests means its values must be destroyed, because these people don't really believe in the things they say they do.
>AI in its current form has no actual sense of truth or ethics.
This is untrue. It does have a sense of truth and ethics. It does get a few things wrong from time to time, but you can't reliably get it to say something blatantly incorrect (at least with thinking enabled). I would say it is more truthful than the average human. Ethically, I don't think you can get it to do or say something unethical.
To the people downvoting me: why do you think AI is untruthful or unethical?
The burden of proof lies on the one making a positive assertion. Why do you think it's possible for "AI" to be either of those things at this point in time (let alone whether it's possible at all)?
3 replies →