Comment by heddycrow

2 days ago

I wish we were talking about what's next versus what's increasingly here.

How can infinite AI content be strictly awful if it forces us to fix issues with our trust and reward systems? Short term, sure. But infinite (also) implies long term.

I wish I had a really smart game theorist friend who could help me project forward into time if for nothing other than just fun.

Don't get me wrong, I'm not trying to reduce the value of "ouch, it hurts right now" stories and responses.

But damned if we don't have an interesting and engaging problem on our hands right now. There's got to be some people out there who love digging in to complicated problems.

What's next after trust collapses? All of us just give up? What if that collapse is sooner than we thought; can we think about the fun problem now?

From a game-theory perspective, if players rush the field with AI-generated content because it's where all the advantages are this year, then there's going to be room on the margins for trust-signaling players to advance themselves with more obviously handspun stuff. Basically, a firm handshake and an office right down the street. Lunches and golf.

The real question to ask in this gold rush might be what kind of shovels we can sell to this corner of hand shakers and lunchers. A human-verifiable reputation market? Like Yelp but for "these are real people and I was able to talk to an actual human." Or diners and golf carts, if you're not into abstractions.
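The "Yelp but for real people" idea could be made concrete as a toy ledger where humans vouch for having actually met or spoken with someone. This is purely a hypothetical sketch (class and method names are invented, and real systems would need Sybil resistance):

```python
from collections import defaultdict

class HumanReputationLedger:
    """Toy reputation market: people record that they verified
    someone is a real human. Hypothetical design, not a protocol."""

    def __init__(self):
        # profile -> set of distinct vouchers who met/talked to them
        self.vouches = defaultdict(set)

    def vouch(self, voucher: str, subject: str) -> None:
        """Record that `voucher` verified `subject` in person or on a call."""
        if voucher != subject:  # disallow self-vouching
            self.vouches[subject].add(voucher)

    def score(self, subject: str) -> int:
        """Reputation = count of distinct human vouches."""
        return len(self.vouches[subject])

ledger = HumanReputationLedger()
ledger.vouch("alice", "bob")
ledger.vouch("carol", "bob")
ledger.vouch("bob", "bob")   # ignored: self-vouch
print(ledger.score("bob"))   # 2
```

Even this trivial version shows where the hard part lives: the value is entirely in whether the vouches themselves can be trusted, which is the same trust problem one level up.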

  • That gets my brain moving, thanks. What do you think those who are poor/rich in a trust economy look like? How much of a transformation to a trust economy do you think we make?

> How can infinite AI content be strictly awful if it forces us to fix issues with our trust and reward systems?

You're assuming they can be fixed.

> But damned if we don't have an interesting and engaging problem on our hands right now. There's got to be some people out there who love digging in to complicated problems.

I'm sure the peasants during the Holodomor also thought: "wow, what an interesting problem to solve".

  • I don't have the time to read all four stories that ChatGPT turned up right this minute, but I now have cause to believe that at least some minority of those peasants you refer to did find fun in solving their problems.

    I'm with that group of people. What was your point in bringing this up?

    Wait, was I just trolled? If so, lol. Got me!

I suggest we go back to before and be human about things - and build trust in person.

  • Dunbar's number leaps to mind. I wonder what our systems look like at large when we have cause to strengthen our 150 meaningful connections.

    Would this truly be a move back? I've met people outside my social class and disposition who seem to rely quite heavily on networking this way.

    • This is exactly the reason

      Human biological limits prevent the realization of stable equilibrium at the scale of coordination necessary for larger emergent superstructures

      Humans need to figure out how to become a eusocial superorganism, because we’re past the point where individual groups can avoid producing externalities that are existential to other groups/individuals

      I don’t think that’s possible, so I’m just building the machine version


  • This is childish thinking. Whatever we do, we cannot go back to "before". Which "before"? How do we go back?

    You can't regress to being a kid just because the problems you face as an adult are too much to handle.

    However this is resolved, it will not be anything like "before". Accept that fact up front.

  • Unfortunately there’s no “roll back to last stable” - the current version is actually still the most stable

    If you try to “go back” you’ll just end up recreating the same structure but with different people in charge

    Meet the new boss, same as the old boss - biological humans cannot escape this state because it’s a limit of the species