Human biological limits prevent the realization of stable equilibrium at the scale of coordination necessary for larger emergent superstructures
Humans need to figure out how to become a eusocial superorganism, because we’re past the point where individual groups can avoid producing externalities that are existential threats to other groups and individuals
I don’t think that’s possible, so I’m just building the machine version
you need to prove beyond a doubt that YOU are the right one to buy from, because it's so easy for 3 Stanford dropouts in a trenchcoat to make a seemingly successful business in just a few days of vibecoding.
People want dopamine hits, gamification, addictive distractions, and a culture of competitive perma-hustle.
If they didn't, we wouldn't be having these problems.
The problem isn't AI, it's how marketing has eaten everything.
So everyone is always pitching, looking for a competitive advantage, "telling their story", and "building their brand."
You can't "build trust" if your primary motivation is to sell stuff to your contacts.
The SNR was already terrible long before AI arrived. All AI has done is automate an already terrible process, which has - ironically - broken it so badly that it no longer works.
> You can't "build trust" if your primary motivation is to sell stuff to your contacts
That is false. You build a different type of trust: people need to trust that when they buy something from you, it is a good product that will do what they want. Maybe someone else is better, but not enough better to be worth the time they would need to spend evaluating that. Maybe someone else is cheaper, but you are still reasonably priced for the features you offer. They won't get fired for buying from you, because you have so often been worthy of their trust that in the rare case you do something wrong it reads as "nobody is perfect", not as "you are no longer trustworthy" (you can only pull that off a few times before you really do become untrustworthy).
The above is very hard to achieve, and even when you have it, it is very easy to lose. If you are not yet there with someone, you still need to act like you are and don't want to lose it, even though they may never buy from you often enough to realize you are worth it.
> All AI has done is automated an already terrible process, which has - ironically - broken it so badly that it no longer works.
Evil contains within itself the seed of its own destruction ;)
Sure, sometimes you should fight the decline. But sometimes... just shrug and let it happen. Let's just take the safety labels off some things and let problems solve themselves. Let everybody run around and do AI and SEO. Good ideas will prevail eventually; focus on those. We have no influence on the "when", but it's a matter of having stamina and hanging in there, I guess
It boggles my mind when, despite my general avoidance of advertising online, I see the language being used. Call me old fashioned, but "viral" is a bad thing to me. "Addictive" is a bad thing. "Tricks" are bad! But this is the language being used to attract customers, and I suppose it works well enough.
> If they didn't, we wouldn't be having these problems.
That assumes people have the ability to choose not to do these things, and that they can't be manipulated or coerced into doing them against their will.
If you believe that advertising, especially data-driven personalised and targeted advertising, is essentially a way of hacking someone's mind into doing things it doesn't actually want to do, then it becomes fairly obvious that it's not entirely the individual's fault.
If adverts are 'Buy Acme widgets!' they're relatively easy to resist. When the advert is 'onion2k, as a man in his 40s who writes code and enjoys video games, maybe you spend too much time on HN, and you're a bit overweight, so you should buy Acme widgets!' it calls for people to be constantly vigilant, and that's too much to expect. When people get trapped by an advert that's been designed to push all their buttons, the reasonable position is that the advertiser should take some of the responsibility for that.
That’s true…but I do think people need to learn that avoidance is a strategy too. The odds are too stacked against the average person to engage properly, so just don’t. I don’t know. Sure, there are certain unavoidable things, but for a large part I think you can just choose to zone out of a lot of the consumerist world now
> That assumes people have the ability to choose not to do these things, and that they can't be manipulated or coerced into doing them against their will.
Within the last year I opened an Instagram account just so I could get updates from a few small businesses I like. I have almost no experience with social media. This drove home for me just how much the "this is where their attention goes, so that's revealed preference" thing is bullshit.
You know what I want? The ability to get these updates from the handful of accounts I care about without ever seeing Instagram's algo "feed". Actually, even better would be if I could just have an RSS feed. None of that is an option. Do I sometimes pause and read one of the items in the algo feed that I have to see before I can switch over to the "following" tab? I do, of course, they're tuned to make that happen. Does that mean I want them? NO. I would turn them off if I could. My actual fucking preference is to turn them off and never see them again, no matter that they do sometimes succeed in distracting me.
Like, if you fill my house with junk food I'll get fatter from eating more junk food, but that doesn't mean I want junk food. If I did, I'd fill my house with it myself. But that's often the claim with social media, "oh, it's just showing people more of what they actually want, and it turns out that's outrage-bait crap". But that's a fucking lie bolstered by a system that removes people's ability to avoid even being presented with shit while still getting what they want.
I do think that in general people are just conditioned by advertising in a general sense. I have family (by marriage) where most conversations just boil down to "I bought [product] and it was _so_ good." or "I encountered a minor problem, and solved it by buying [product]." It's pretty unbearable.
There are times I need a widget but I don't know it exists and so someone needs to inform me. Other times I know I need a widget, but I don't know about Acme and I will want to check them out too before buying.
Most ads are just manipulating me, but there are times I need the thing advertised if only I knew it was an option.
The core of this issue is a power imbalance. Advertisers have the full power of American capital at their disposal, and as many PhDs who know exactly how to exploit human psychology as they need. Asking people to "vote with their wallet", or talking about "revealed preferences", or expecting people to be able to cope with this system is nonsense in the face of the amount of power available to the marketers.
It's fundamentally exploitation on a population scale, and I believe it's immoral. But because it's also massively lucrative, capitalism allows us to ignore all moral questions and place the blame on the victims, who again, are on the wrong side of a massive power imbalance.
Who else can and will stop the infernal machine other than the people? Can't see anyone. I hope you're wrong and expecting people to cope is not nonsense, because expecting the FDA or UN or Trump or Xi to do it is even more nonsense.
What authority are you going to complain to to "correct the massive power imbalance"? Other than God or Martians I can't see anything working, and those do not exist.
Fixed it for you: People are most easily manipulated into dopamine hits, gamification, addictive distractions, and a culture of competitive perma-hustle.
That is only true as long as people are the only entities who can spend money. As soon as people give AI the power to spend money, we will see companies designing products to appeal to AIs. A new form of SEO spam, if you will.
I think this basically proves your point. There were things about it that made me think it may have been at least "AI-assisted", until I saw your "guaranteed bot-free" thing at the bottom. Anyone doing entirely hand-written things from now on is going to be facing a headwind of skepticism.
this is a funny phenomenon that I keep seeing. I think people are going through the reactionary “YoU mUsT hAvE wRiTtEn ThIs oN a CuRsEd TyPeWrItEr instead of handwriting your letter!1!!”
hopefully soon we move on to judging content by its quality, not by whether AI was used. banning digital advertising would also help align incentives against mass-producing slop (which has been happening since long before ChatGPT was released)
I don't have the time or energy to judge content by its quality. There are too many opportunities for subtle errors, whether made maliciously or casually. We have to use some non-content filter or the avalanche of [mis]information will bury us. We used to be able to filter out things with misspellings and rambling walls of text, and presumably most of the rest was at least written by a human you could harangue if it came to that. Now we're trying to filter out content based on em-dashes and emoji bullet lists. Unfortunately that won't be effective for very long, but we have to use what we've got, because the alternative is to filter out everything.
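To make the point concrete, here's a hypothetical sketch of that kind of non-content filter. The tells (em-dashes, emoji bullet lists) are the ones named above; the function name and thresholds are invented purely for illustration:

    import re

    # Score text by surface-level stylistic tells rather than judging content.
    # Emoji-led lines stand in for "emoji bullet lists"; ranges cover common
    # emoji and dingbat codepoints.
    EMOJI_BULLET = re.compile("^\\s*[\U0001F300-\U0001FAFF\u2700-\u27BF]", re.MULTILINE)

    def looks_like_slop(text: str) -> bool:
        em_dashes = text.count("\u2014")                   # em-dash count
        emoji_bullets = len(EMOJI_BULLET.findall(text))    # emoji-led lines
        words = max(len(text.split()), 1)
        # Invented thresholds: flag text whose tells are unusually dense.
        return (em_dashes / words > 0.01) or (emoji_bullets >= 3)

Which, as said, won't be effective for very long: these tells are trivial for a generator to avoid.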
If I am a company that builds agents, and I sell one to someone.
Then that someone loses money because the agent did something it wasn't supposed to: who's responsible?
Me, as the person who sold it? OpenAI, whose models I use under the hood? Anthropic, who performs some of the work too? Or is my customer responsible themselves?
These are questions that classic contracts don't usually cover because things tend to be more deterministic with static code.
> These are questions that classic contracts don't usually cover because things tend to be more deterministic with static code.
Why? You have a delivery, and you entered into some guarantees as part of the contract. Whether you use an agent or roll a die, you are responsible for upholding the guarantees you entered into as part of the contract. If you want to offload that guarantee, then you need to state it in the contract. Basically, what the MIT License does: "No guarantees, not even fitness for purpose". Whether someone is willing to pay for something where you accept no liability for anything is an open question.
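For reference, the operative sentence of the actual MIT License disclaimer:

    THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
    IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
    FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.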
Technically that's what you do when you google something or ask ChatGPT, right? They make no explicit guarantees that any of what comes back is true, correct, or even reasonable. You are responsible for it.
It's you. You contracted with someone to make them a product. Maybe you can go sue your subcontractors for providing bad components if you think you've got a case, but unless your contract specifies otherwise it's your fault if you use faulty components and deliver a faulty product.
If I make roller skates and I use a bearing that results in the wheels falling off at speed and someone gets hurt, they don't sue the ball bearing manufacturer. They sue me.
Agreeing with the others. It's you. Like my initial house example, if I make a contract with *you* to build the house, you provide me a house. If you don't, I sue you. If it's not your fault, you sue them. But that's not my problem. I'm not going to sue the person who planted the tree, harvested the tree, sawed the tree, etc etc if the house falls down. That's on you for choosing bad suppliers.
If you chose OpenAI to be the one running your model, that's your choice not mine. If your contract with them has a clause that they pay you if they mess up, great for you. Otherwise, that's the risk you took choosing them
In your first paragraph, you talk about general contractors and construction. In the construction industry, general contractors have access to commercial general liability insurance; CGL is required for most bids.
Maybe I'm not privy to the minutiae, but there are websites talking about insurance for software developers. Could be something. Never seen anyone talk about it, though.
If I am a company that builds technical solutions, and I sell one to someone. Then, that someone loses money because the solution did something it wasn't supposed to: who's responsible?
Me as the person who sold it? The vendor of a core library I use? AWS who hosts it? Is my customer responsible themselves?
These are questions that classic contracts typically cover and the legal system is used to dealing with, because technical solutions have always had bugs and do unexpected things from time to time.
If your technical solution is inherently unreliable due to the nature of the problem it's solving (because it's an antivirus or firewall which tries its best to detect and stop malicious behavior but can't stop everything, because it's a DDoS protection service which can stop DDoS attacks up to a certain magnitude, because it's providing satellite Internet connectivity and your satellite network doesn't have perfect coverage, or because it uses a language model which by its nature can behave in unintended ways), then there will be language in the contract which clearly defines what you guarantee and what you do not guarantee.
Did you, the company who built and sold this SaaS product, offer and agree to provide the service your customers paid you for?
Did your product fail to render those services? Or do damage to the customer by operating outside of the boundaries of your agreement?
There is no difference between "Company A did not fulfill the services they agreed to fulfill" and "Company A's product did not fulfill the services they agreed to fulfill", and that holds just the same when Company A's product happens to be an AI agent.
Well, that depends on what you are selling. Are you selling the service, black-box, to accomplish the outcome? Or are you selling a tool? If you sell a hammer you aren't liable as the manufacturer if the purchaser murders someone with it. You might be liable if, when swinging back, it falls apart and maims someone - due to the unexpected defect - but only for a reasonable timeframe and under reasonable usage conditions.
I don't see how your analogy is relevant, even though I agree with it. If you sell hammers or rent them out as a hammer-providing service, there's no difference except, likely, the duration of liability.
The difference isn't renting or selling a hammer. The difference is providing a hammer (rent/sell) vs. providing a handyman who will use the hammer.
In the first case the manufacturer is only liable for defects, for normal use of the tool. So the manufacturer is NOT liable for misuse.
In the second case, the service provider IS liable for misuse of the tool. If they, say, break down a whole wall for some odd reason when making a repair, they would be liable.
In both cases there is a separation between user/manufacturer liability - but the question relevant to AI and SaaS is just that. Are you providing the tool, or delivering the service in question? In many cases, the fact the product provided is SaaS doesn't help - what you are getting is "tool as a service."
Yes they do. Adding "plus AI" changes nothing about contract law, OAI is not giving you indemnification for crap, and you can't assign liability like that anyway.
And ARR assumes the revenue actually recurs - it's a metric that (people incorrectly think) shows how good your business is at retaining revenue year-over-year.
If a big proportion of contracts signed in Y1 get terminated at the end of the 12-month period (because, say, the customer forgot to renew), then ARR will drop like a rock in Y2.
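To put made-up numbers on that (a sketch, not real figures):

    # Illustrative only: 100 annual contracts at $10k ACV signed in Y1,
    # with 40% of them not renewing at the end of the 12-month term.
    contracts_y1 = 100
    acv = 10_000                    # annual contract value, USD

    arr_y1 = contracts_y1 * acv     # $1,000,000 of "ARR" reported in Y1

    renewal_rate = 0.60             # 40% churn at renewal
    arr_y2 = int(contracts_y1 * renewal_rate) * acv  # $600,000 in Y2, absent new sales

    print(arr_y1, arr_y2)           # 1000000 600000

So the same "$1M ARR" headline can describe either a durable business or one that just lost 40% of its revenue base.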
because you know the brands and trust them, to a degree
you have prior experience with them