Comment by borroka
15 days ago
I do not doubt that AI and AI-powered and -native applications will become part of the fabric of our personal and professional lives.
What I don't understand is why, outside of "because I can", people need to automate parts of life I didn't even know existed.
- Why, outside of edge cases, do people have to automate the payment of bills beyond the automatic cc processing?
- How many times a month do they have to set up their barber appointment?
It seems to me that the applications of Clawd and similar tools either automate trivial stuff or work on actions and circumstances that should not be there.
As an example, the other day I had a doctor visit, and between filling out forms online, filling out other forms online, confirming three times that I would be there and that I had filled out the online forms, driving to the doctor's office, and waiting, I probably spent 2 hours of my time (the visit was 2 months after I asked for it, by the way).
The visit lasted 5-7 minutes: the doctor did not look at the forms I had filled out beforehand, and barely listened to what I was telling him during the visit.
I worry that, since "AI" will do it, there will be more forms to fill out that nobody will read, more forms to fill out confirming that AI or I or a guardian filled out the forms, and longer wait times because AI will bombard our neurons with some entertainment.
But what I want is a visit with a doctor who listens to me, is not in a rush, and solves my problem. If AI helps, that's great, but I don't want busy work done by AI; I don't want busy work at all, because it is not needed.
I would love an AI to curate my feed to transition from enragement equals engagement to pure enchantment feeding me things it decides I would enjoy. And I think that's completely within the abilities of current models. It's just that it's less profitable than driving me into an endless doom scroll loop of despair.
And that's just off the top of my head. AI is neither good nor evil, but we've made some pretty poor choices deploying it.
While I find the aspiration noble, it seems to me that we don't even know ourselves what we want, or, alternatively, that we rediscover every day how our revealed preferences differ from our stated ones. We don't even trick other people; we trick ourselves.
There was also some evolutionary biology/psychology theory developed by Robert Trivers years ago on self-deception and fitness.
We buy a book thinking we are going to like it, and then we don't even open it. Recommender systems give us more of what we interact with (with some quite extreme funnel effects at times, like when we curiously look at a pimple popper video and for the next ten minutes the algo gives us pimple after pimple), but we find out, as stated though not as revealed, that we don't want more of what we interact with.
Nobody wants, in theory and as stated, to be constantly enraged by social media, but most of us, since numbers don't lie, are revealed to enjoy getting enraged.
I don't think AI will have a different effect in the near future, as the main problem is that we don't know, broadly speaking, what we want, apart from the obvious, e.g., I want to watch a football game, so I am going to turn on the TV and watch it.
Sounds like Brazil.
Sunny coastal California