Comment by chasing0entropy
2 days ago
Can you design an AI agent that I own, to replace me? This is what the market really wants and is probably one of the ONLY things that doesn't exist.
Just let me subscribe to an agent to do my work while I keep getting a paycheck.
Who's giving you that paycheck? Why don't they just hire that AI agent themselves and cut out the middle man?
In this scenario the person who wants to be paid owns the output of the agent. So it’s closer to a contractor and subcontractor arrangement than employment.
How do they own it? I see two scenarios.
1. They built the agent and it's somehow competitive. If so, they shouldn't just replace their own job with it, they should replace a lot more jobs and get a lot more rich than one salary.
2. They rent the agent. If so, why would the renting company not rent directly to their boss, maybe even at a business premium?
I see no scenario where there's an "agent to do my work while I keep getting a paycheck."
Because ultimately every job AI replaces will be supplanted by two jobs - one to maintain the agent and another to maintain the infrastructure.
I believe once AI scales my theory will be proven universal.
My wife believes there will eventually also be a third job created to do the job.
The AI agents don’t appear to know how & where to be economically productive. That still appears to be a uniquely human domain of expertise.
So the human is there to decide which job is economically productive to take on. The AI is there to execute the day-to-day tasks involved in the job.
It’s symbiotic. The human doesn’t labour unnecessarily. The AI has some avenue of productive output & revenue generating opportunity for OpenAI/Anthropic/whoever.
I don't think you could find a single economist who believes humans know how and where to be economically productive
A question is which side agents will achieve human-level skill at first. It wouldn’t surprise me if doing the work itself end-to-end (to a market-ready standard) remains in the uncanny valley for quite some time, while “fuzzier” roles like management can be more readily replaced.
It’s like how we all once thought blue collar work would be first, but it turned out that knowledge work is much easier. Right now everyone imagines managers replacing their employees with AI, but we might have the order reversed.
> This begs the question of which side agents will achieve human-level skill at first.
I don't agree; it's perfectly possible, given chasing0entropy's... let's say 'feature request', that either side might gain that skill level first.
> It wouldn’t surprise me if doing the work itself end-to-end (to a market-ready standard) remains in the uncanny valley for quite some time, while “fuzzier” roles like management can be more readily replaced.
Agreed - and for many of us, that's exactly what seems to be happening. My agent is vaguely closer to the role that a good manager has played for me in the past than it is to the role I myself have played - it keeps better TODO lists than I can, that's for sure. :-)
> It’s like how we all once thought blue collar work would be first, but it turned out that knowledge work is much easier. Right now everyone imagines managers replacing their employees with AI, but we might have the order reversed.
Perfectly stated IMO.
How are businesses going to get money if there are no humans that are able to pay for goods?
Lots of us are not cut out for blue collar work.
Some humans will be rich and they'll buy things: for example, the humans who own the AI or the fabs. And the humans who serve them (assuming there will be services not replaced by AI, prostitution for example) will also buy things.
If 99.99% of the other humans become poor and eventually die, it will certainly change the economy a lot.
> How are businesses going to get money if there are no humans that are able to pay for goods?
By transacting with other businesses. In theory comparative advantage will always ensure that some degree of trade takes place between completely automated enterprises and comparatively inefficient human labor; in practice the utility an AI could derive from these transactions might not be worth it for either party—the AI because the utility is so minimal, and the humans because the transactions cannot sustain their needs. This gets even more fraught if we assume an AGI takes control before cheaply available space flight, because at a certain point having insufficiently productive humans living on any area of sea or land becomes less efficient than replacing the humans with automatons (particularly when you account for the risk of their behaving in unexpected ways).
There is a set of people who own, well, in the past we could have said "means of production" but let's not. So, they own the physical capital and the AI worker-robots, and this combination produces various goods for human use. They (the people who own that stuff) trade those goods among each other, since nobody owns the full range of production chains.
The people who used to be hired workers? Eh, they still own their ability to work (which is now completely useless in the market economy) and not much more, so... well, they can go and sleep under a bridge, or go extinct, or do whatever else peacefully, as long as they don't trespass on private property, the sanctity and inviolability of which is obviously crucial for societal harmony.
So yeah, the global population would probably shrink to something in the hundreds of millions in the end, and ironically, the economy may very well end up self-sustaining and environmentally green and all that nice stuff, since it won't have to support the living standards of ~10 billion people, although the process of getting there could be quite tumultuous.
As long as someone else is still paying their employees, it’s all good.
Can you explain why we pay Sam Altman & Elon Musk? Or Jeff Bezos & Bill Gates? They’re just middlemen collecting money for other people’s labor.
You are welcome to try to cut them out and start your own business. But I suspect you might find it a bit harder than your employer signing up for a SaaS AI agent. Actually wait, isn't that what this website is? Does it work?
They are a bridge between those with money and those with skill. Plus they can aggregate information and act as a repository of knowledge and decision maker for their teams.
These are valuable skills, though perhaps nowhere near as valuable as a free market ends up pricing them.
Scam Altman and Musk are paid to manipulate stock markets and enrich themselves and their friends.
This is backwards. Those people got into the positions they have by having money to spend, not because someone wanted to pay them to do something. (Or they had a way to control the spending of someone else's money.)
I've replaced myself for the better part of a year now. You can too: https://getproxyai.com/
Take my money!
lol this also appears to be satire?
We follow a motto in our company:
> While none of the work we do is very important, it is important that we do a great deal of it.
This seems a frivolous attitude to something so serious. Have you considered how using a proxy could make you align your attitude better?
Isn't this kind of the same as an AI copilot, just with higher autonomy?
I think the limiting factor is that the AI still isn't good enough to be fully autonomous, so it needs your input. That's why it's still in copilot form.
> Just let me subscribe to an agent to do my work while I keep getting a paycheck.
I've already done this. It's just a Teams bot that responds to messages with:
"Yeah that looks okay, but it should probably be a database rather than an Excel spreadsheet. Have you run it past the dev team? If you need anything else just raise a ticket and get Helpdesk to tag me in it"
"I'm pretty sure you'll be fine with that, but check with {{ senior_manager }} first, and if you need further support just raise a ticket and Helpdesk will pass it over"
"Yes, quite so, and indeed if you refer to my previous email from about six months ago you'll see I mentioned that at the time"
"Okay, you should be good to go. Just remember, we have Change Management Process for a reason so the next time try to raise a CR so one of us can review it, before anyone touches anything"
and then
"If you've any further questions please stick them in an email and I'll look at it as a priority.
Mòran taing,
EB."
(notice that I don't say how high a priority?)
No AI needed. Just good old-fashioned scripting, and organic stupidity.
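The canned-reply bot above can be sketched in a few lines. This is a minimal illustration, not the commenter's actual bot: the Teams integration is omitted, the `reply_to` function and the reply wording here are hypothetical stand-ins, and a real version would hook into whatever message-handler API the bot framework provides.

```python
import random

# Canned replies in the spirit of the bot described above. The exact
# wording is illustrative; any plausible-sounding deflection works.
CANNED_REPLIES = [
    "Yeah that looks okay, but it should probably be a database rather "
    "than an Excel spreadsheet. Have you run it past the dev team?",
    "I'm pretty sure you'll be fine with that, but check with your senior "
    "manager first, and if you need further support just raise a ticket.",
    "Yes, quite so, and indeed if you refer to my previous email from "
    "about six months ago you'll see I mentioned that at the time.",
    "Remember, we have a Change Management Process for a reason, so next "
    "time raise a CR so one of us can review it before anyone touches anything.",
]

SIGN_OFF = (
    "If you've any further questions please stick them in an email "
    "and I'll look at it as a priority.\n\nMòran taing,\nEB."
)

def reply_to(message: str) -> str:
    """Return a plausible canned reply, regardless of message content."""
    # The incoming message is deliberately ignored: that's the joke.
    return random.choice(CANNED_REPLIES) + "\n\n" + SIGN_OFF
```

Wiring `reply_to` into an actual Teams bot is left as an exercise; the point is that no model inference is involved anywhere.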
Reminded me of an episode of The IT Crowd where they set a recording of "Have you tried turning it off and on again?" as the answering machine for an IT department.
What would you actually do if you got that? I like watching movies and playing games, but that lifestyle quickly leads to depression. I like travelling too, but imagine if everyone could do it all the time. There's only so many good places.
I would use the AI to build a robot that could build copies of itself, and once there were a sufficient number of robots I'd use them to build more good places to go to.
What happens when "your" AI wants to build something where someone else's AI wants to build it? I suppose you are thinking of something like Banks's Culture? The trouble is for that we're probably going to need real AI, not just LLMs, and we have no reason to believe a real AI would actually keep us as pets like in the Culture. We have no idea what it would want to do...
Not unless you can afford your own supercluster. Otherwise, the AI you use will own you.
Why would the market want that? Don't be stupid.
The world doesn't want assholes either but here we are
That's the premise behind Workshop Labs! https://workshoplabs.ai