Comment by jll29

7 days ago

> The problem is executives want to completely remove humans from the loop, which almost universally leads to disastrous results

Thanks for your words of wisdom, which touch on another very important point I want to raise: often, we (i.e., developers, researchers) construct a technology that would be helpful and "net benign" if deployed as a tool for humans to use, rather than deployed to replace humans. But then along comes a greedy business manager who reckons recklessly that using said technology not as a tool but in full automation mode will make results 5% worse but save 15% in staff costs; and they decide that that is a fantastic trade-off for the company - yet employees may lose, and customers may lose.

The big problem is that developers/researchers lose control of what they develop, usually once the project is completed - if they ever had control in the first place. What can we do? Perhaps write open-source licenses that are less liberal?

The problem here is societal, not technological. An end state where people do less work than they do today but society is more productive is desirable, and we shouldn't be trying to force companies/governments/etc. to employ people to do unnecessary jobs.

The problem is that people who are laid off often experience significant life disruption. And people who work in a field that is largely or entirely replaced by technology often experience permanent disruption.

However, there's no reason it has to be this way - the fact that people whose jobs are replaced by technology are completely screwed over is a result of the society we have all created together; it's not a rule of nature.

  • > However, there's no reason it has to be this way - the fact that people whose jobs are replaced by technology are completely screwed over is a result of the society we have all created together; it's not a rule of nature.

    I agree. We need a radical change (some version of universal basic income comes to mind) that would allow people to safely change careers if their profession is no longer relevant.

    • Reminds me of Mondragón, a corporation and federation of worker coops in Spain. It builds new coops to meet the needs of its community, and when a coop ends, workers are given financial support and trained for new jobs in its other coops.

    • No way that will ever happen when we have a party that thinks Medicare, Medicaid, and Social Security are unnecessary for the poor or middle class. But you'd better believe all our representatives have that covered for themselves while pretending to serve us (they only serve those who bribe/lobby them).

      1 reply →

  • > The problem here is societal, not technological.

    I disagree. I think it's both. Yes, we need good frameworks and incentives on an economic/political level. But saying that it's not a tech problem is the same as saying "guns don't kill people". The truth is, if there was no AI tech developed, we would not need to regulate it so that greed does not take over. Same with guns.

    • > The truth is, if there was no AI tech developed, we would not need to regulate it so that greed does not take over.

      Same could be said for the Internet as we know it too. Literally replace "AI" with "Internet" above and it reads equally true. Some would argue (me included, some days) that we are worse off as a society ~30 years later. A legitimate case can also be made that it was a huge benefit to society. Will the same be said of AI in 2042?

    • Oh, the web was full of slop long before LLMs arrived. Nothing new. If anything, AI slop is higher quality than the SEO crap was. And of course we can't uninvent AI, just as we can't un-birth a human.

      3 replies →

  • > the fact that people whose jobs are replaced by technology are completely screwed over is a result of the society we have all created together; it's not a rule of nature.

    How did the handloom weavers and spinners handle the rise of the machines?

    • > How did the handloom weavers and spinners handle the rise of the machines?

      In the past, new jobs appeared that the workers could migrate to.

      Today, it seems that AI may replace jobs much quicker than before and it's not clear to me which new jobs will be "invented" to balance the loss.

      Optimists will say that we have always managed to invent new types of work fast enough to reduce the impact on society, but in my opinion it is unlikely to happen this time. Unless politicians figure out a way to keep the unemployed content (basic income, etc.), I fear we may end up in a dystopia within our lifetimes.

      I may be wrong and we could end up in a post-scarcity (Star Trek) world, but if the current ambitions of the top 1% are an indicator, it won't happen unless politicians create a better tax system to compensate for the loss of jobs. I doubt they will give up wealth and influence voluntarily.

      3 replies →

    • Attempting to unionize. Then the factory owners hired thugs to decapitate the movement.

      Oh wait, that's not the Disneyfied, techno-optimistic version of the Luddites? Sorry.

  • So it's simple: don't do anything at all about the technology that is the impetus for these horrible disruptions, just completely rebuild our entire society instead.

> Grammarly is great for sniffing out extra words and passive voice. But it doesn't get writing for humorous effect, context, deliberate repetition, etc.

> But then along comes a greedy business manager who reckons recklessly

Thanks for this. :)

  • I recklessly reckon I will go through the gateless gate to hear the sound of one hand clapping.

I think you’re describing the principal-agent problem that people have wrestled with forever. Oppenheimer comes to mind.

You make something, but because you don’t own it—others caused and directed the effort—you don’t control it. But the people who control things can’t make things.

Should only the people who can make things decide how they are used though? I think that’s also folly. What about the rest of society affected by those things?

It’s ultimately a societal decision-making problem: who has power, and why, and how does the use of power affect who has power (accountability).

  • I think the people who can make things have a moral obligation not to turn them over to people who will use them irresponsibly.

    But unfortunately, what is or isn't an irresponsible use is very easy to debate endlessly in circles. Meanwhile, people are being harmed like crazy while we fail to figure it out.

> The big problem is that developers/researchers lose control

If these developers/researchers are being paid by someone else, why should that someone else give up the control they paid for?

If these developers/researchers are paying for the research themselves (e.g., at a startup of their own founding), then why would they ever lose control, unless they sell it?

  • This is a good point. FAANG, or whatever you want to call it now, has spent billions hoovering up a couple of generations' best minds, who willingly sold their intellect to build endless engagement loops.

The problem with those greedy business managers you speak of is that they don't care how the company does 10 years down the line. I almost feel as if everybody is just doing things that work short term, ignoring the long-term consequences.

As the comment above said, we need a human in the loop for better results - but it also depends on which human.

A senior can be way more productive in the loop than a junior.

So everybody has just stopped hiring juniors because they cost money, figuring they'll deal with the AI almost-slop later, or someone else will.

Now, the current seniors will one day retire, but we won't have a new generation of seniors because nobody is giving juniors a chance - or at least that's what I've heard about the job market being brutal.

You're trying to put out a forest fire with an eyedropper.

Stock your underground bunkers with enough food and water for the rest of your life and work hard to persuade the AI that you're not a threat. If possible, upload your consciousness to a starwisp and accelerate it out of the Solar System as close to lightspeed as you can possibly get it.

Those measures might work. (Or they might be impossible, or insufficient.) Changing your license won't.

  • Alternatively, persuade the AI that you are all-powerful and that it should fear and worship you. Probably a more achievable approach, and there’s precedent for it.

    • > Alternatively, persuade the AI that you are all-powerful and that it should fear and worship you.

      I understand this is a bit deeper into one of the _joke_ threads, but maybe there’s something here?

      There is a distinction to be made between artificial intelligence and artificial consciousness. Where AI can be measured, we cannot yet measure consciousness, despite the fact that many humans could lay plausible claim to possessing consciousness (being conscious).

      If AI is trained to revere or value consciousness while simultaneously being unable to verify it possesses consciousness (is conscious), would AI be in a position to value consciousness in (human) beings who attest to being conscious?

      1 reply →

    • That only works on the AIs that aren't a real threat anyway, and I don't think it helps with the social harm done by greedy business managers with less powerful AIs. In fact, it might worsen it.