I wish we were talking about what's next versus what's increasingly here.

How can infinite AI content be strictly awful if it forces us to fix issues with our trust and reward systems? Short term, sure. But infinite (also) implies long term.

I wish I had a really smart game-theorist friend who could help me project forward in time, if for nothing other than fun.

Don't get me wrong, I'm not trying to reduce the value of "ouch, it hurts right now" stories and responses.

But damned if we don't have an interesting and engaging problem on our hands right now. There's got to be some people out there who love digging in to complicated problems.

What's next after trust collapses? All of us just give up? What if that collapse comes sooner than we thought? Can we think about the fun problem now?



From a game-theory perspective, if players rush the field with AI-generated content because it's where all the advantages are this year, then there's going to be room on the margins for trust-signaling players to advance themselves with more obviously handspun stuff. Basically, a firm handshake and an office right down the street. Lunches and golf.

The real question to ask in this gold rush might be what kind of shovels we can sell to this corner of hand shakers and lunchers. A human-verifiable reputation market? Like Yelp but for "these are real people and I was able to talk to an actual human." Or diners and golf carts, if you're not into abstractions.
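To make that dynamic concrete, here is a toy Python sketch. It is purely illustrative: every payoff function and constant below is made up, not taken from anything in this thread. It just assumes cheap AI content gets diluted as more players flood the field, while trust-signaling appreciates as verified-human attention gets scarce, and lets players drift toward whichever strategy paid better.

    # Toy model: all payoffs and constants are invented for illustration.
    import random

    def ai_payoff(share_ai):
        # Cheap to produce, but value is diluted as more players flood the field.
        return 1.0 / (1.0 + 10.0 * share_ai)

    def trust_payoff(share_ai):
        # Costly handshakes and lunches, assumed to appreciate as
        # verified-human attention gets scarce.
        return 0.2 + 0.8 * share_ai

    def simulate(players=1000, rounds=50, seed=0):
        rng = random.Random(seed)
        ai_players = players  # everyone rushes the field at first
        for _ in range(rounds):
            share_ai = ai_players / players
            gap = ai_payoff(share_ai) - trust_payoff(share_ai)
            # a noisy fraction of players migrates toward the better strategy
            movers = int(abs(gap) * players * 0.1 * rng.uniform(0.5, 1.5))
            ai_players += movers if gap > 0 else -movers
            ai_players = max(0, min(players, ai_players))
        return ai_players / players

    print("equilibrium share of AI-content players:", round(simulate(), 2))

Under these made-up payoffs the population settles well below 100% AI content, which is exactly the margin left for the handshake-and-lunch crowd. Change the constants and the margin moves, which is really why you'd want the game theorist in the first place.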


That gets my brain moving, thanks. What do you think those who are poor or rich in a trust economy look like? How much of a transformation to a trust economy do you think we'll make?


Long airlines and corporate credit cards


> How can infinite AI content be strictly awful if it forces us to fix issues with our trust and reward systems?

You're assuming they can be fixed.

> But damned if we don't have an interesting and engaging problem on our hands right now. There's got to be some people out there who love digging in to complicated problems.

I'm sure the peasants during the Holodomor also thought: "wow, what an interesting problem to solve".


how can crime be bad if it forces us to police crime?


Depends on the observer and your definition of "bad".

The police are happy they are paid. The victims are sad they are hurt. Is society better as a whole because it can handle crime? I'm not sure.

What does bad mean? Seems like an overloaded concept, ask around and good luck.

You can have a lot more fun by completely reducing the original question and plugging in different values for "strictly awful" and "AI content" and "it forces us to..."

How can eating be good if we just get hungry again? Implies eating is bad despite the value we derive from it.

How can hard work be bad if it produces meaningful results? Implies hard work is good despite the pains we take on from it.

I would argue that this kind of reduction and replacement significantly changes the original question, but it is a fun thing to explore. I'm not sure we'll get closer to an answer to the original, though. And I'm not sure it's safe to take the answer from one of the derived questions and use it for the original.

But don't take my word for it; I'm mostly restating one of the key points from Thinking, Fast and Slow.

Can I safely assume that what you were implying is that AI content is undesirable because it is a strain on human systems? I think that's the point the article was trying to make.


It was more simply a reply to

>How can infinite AI content be strictly awful if it forces us to fix issues with our trust and reward systems?

I should have quoted that in my original reply; I feel like I wasted your time by not including it. Still, you did post interesting things, so not all is lost.

>how can crime be bad if it forces us to police crime?

Crime can be bad whether we police it or not. We actually police crime because it's bad, at least in societies so inclined as to have a police force. A desire to reduce something's occurrence is not speaking positively of such occurrences.

> How can infinite AI content be strictly awful if it forces us to fix issues with our trust and reward systems?

This is neither a disqualifier for being "strictly awful", nor the newly arrived, unique event finally necessitating fixes to trust and reward systems. I would hope that we don't evaluate the goodness of AI based on whether we have functioning trust and reward systems.


Fair points, and thanks for clarifying.

Your last point helps me tease out what I think rubs me the wrong way. Another analogy: "these newly introduced, extremely fast cars make it entirely unsafe to drive drunk."

Of course, to be fair, we'd have to point out that the purchase, operation, and production (and more) of said vehicles have a terrible impact.

I'd just love to hear that we are going to crack down on drunk driving, which was a problem even when we were going slower. Obviously, the metaphor falls apart - trust and reward are much more interesting nuts to crack.

It's a really hard point to make because expressing an interest in wanting to see one part of the problem solved seems to indicate to others that I don't care about all the other aspects.


We're not cracking down on LLMs in any meaningful way. They're built on copyright infringement on an unprecedented scale. It's the kind of thing where the law looks the other way while people's lives are destroyed, and some, if lucky, will be compensated 30 years from now, probably with pitiful amounts of money.

Corruption generally works by inflicting a diluted, distributed harm. Everyone else ends up a bit worse off except for the agent of corruption, which ends up very well off.


I don't have the time to read all four stories that ChatGPT turned up right this minute, but I now have cause to believe that at least some minority of those peasants you refer to did find fun in solving their problems.

I'm with that group of people. What was your point in bringing this up?

Wait, was I just trolled? If so, lol. Got me!


I suggest we go back to before and be human about things - and build trust in person.


Dunbar's number leaps to mind. I wonder what our systems look like at large when we have cause to strengthen our 150 meaningful connections.

Would this truly be a move back? I've met people outside my social class and disposition who seem to rely quite heavily on networking this way.


This is exactly the reason.

Human biological limits prevent us from reaching a stable equilibrium at the scale of coordination necessary for larger emergent superstructures.

Humans need to figure out how to become a eusocial superorganism, because we're past the point where individual groups could avoid producing externalities that are existential to other groups and individuals.

I don't think that's possible, so I'm just building the machine version.


This resonates with me.

I'd love to see the machine version or hear more of your thoughts about what goes into it.


If you’re really interested at the furthest depth then take a look at my paper:

https://kemendo.com/GTC.pdf

If that resonates further, let me know at my un on the icloud domain.


Thanks, I will read. I haven't considered what it's like to be interested at the furthest depth, but I will do that now.


Thanks! Happy to answer any questions you have, and if you have any feedback I'm open to it.


This is childish thinking. Whatever we do, we cannot go back to "before". Which "before"? How do we go back?

You can't regress to being a kid just because the problems you face as an adult are too much to handle.

However this is resolved, it will not be anything like "before". Accept that fact up front.


Unfortunately there's no “roll back to last stable” - the current version is actually still the most stable.

If you try to “go back”, you'll just end up recreating the same structure but with different people in charge.

Meet the new boss, same as the old boss: biological humans cannot escape this state because it's a limit of the species.



