Ok so clearly a satire. However I kinda want this. They make some really good points about how an AI would be better than many CEOs. Honestly, some of the companies I've worked for would be better off with Gemini in charge. Yes, humanity is doomed, but at least I would understand the motivations and we'd have fewer CEO ADHD moments. (CEO ADHD -> "Some other CEO told me about X, why aren't we doing X")
I feel like if I mention technology X in my system context for Gemini, there is a 100% chance that when I ask for hiking recommendations Gemini will say "As a user of technology X, you would appreciate the beauty and elegance of the Cuyamaca National Forest"
I worked as a consultant for a company where, one day, the CEO just started using AI chat for everything. Every question you asked, they just forwarded to it. Same thing for company strategy, major decisions, presentation content, and so on.
Initially, I was really annoyed. After I took a deep breath, and read through the wall of text they sent (to figure out how to respond), I eventually realized it was slightly better than their previous work. Not like, night-and-day better, but slightly better.
Since then, I've been playing with the idea of 'hiring' an AI to manage my freelance and personal work. I would not be required to do what it says, but I could take it under consideration and see if I work better that way. Sort of like the ultimate expression of "servant leadership".
> Since then, I've been playing with the idea of 'hiring' an AI to manage my freelance and personal work.
Shit, I think you are now personally responsible for getting some attention on the 3-4 home projects I've neglected to invest time into. I too am much more productive and, oddly enough, find the work more interesting when it's someone else asking and waiting for me to deliver it.
I haven't tried the Gemini CLI yet, and creating an agent that acts like a customer I have to answer to about project progress sounds like a perfect project idea for this weekend.
Question is, will I actually see this one through, or will it too wind up in homelab project purgatory?
> How is GP's idea related to 3~4 different home projects of yours and not just one?
My thought process is that if I actually make this for myself, those 3-4 projects would magically get a very impatient new stakeholder who will pester me to actually deliver those projects to them.
It's gonna be interesting to see if I'm able to trick my brain into not just brushing off the agent (or if that's even really an issue, no clue at this point). I've started rolling around some ideas on handling that scenario but I'm just gonna let that stew while I play with different setups so I don't end up just building a convoluted reminder app lol
Unfortunately I had an unexpectedly hectic holiday weekend so I won't get to start playing with the idea for real until tomorrow afternoon.
It's cute, and fun, but I disagree. It could make a mistake, but it could also go a long time giving us confidence and a reasonable return of intelligent results. I think that can lull us into dependency. Do we really want to give up decision-making to AI? I don't think so.
With that said, if it's used purely as a tool by a CEO, and over time has been developed with the optimal parameters for the company, its culture and everything (thanks Apple), then the AI can help make decisions for the company.
I mean, that is what we do with all kinds of technology already. If you had to stop and think about everything that required a decision by someone you'd quickly realize you'd get nowhere fast in the very complex world we live in. The vast majority of people want to abstract those decisions away so they only have to make the minimum amount possible.
In this scenario the person who wants to be paid owns the output of the agent. So it’s closer to a contractor and subcontractor arrangement than employment.
1. They built the agent and it's somehow competitive. If so, they shouldn't just replace their own job with it, they should replace a lot more jobs and get a lot more rich than one salary.
2. They rent the agent. If so, why would the renting company not rent directly to their boss, maybe even at a business premium?
I see no scenario where there's an "agent to do my work while I keep getting a paycheck."
The problem is the organizing principle for our entire global society is competition.
This is the default, the law of the jungle or tribal warfare. But within families or corporations we do have cooperation, or a command structure.
The problem is that this principle inevitably leads to the tragedy of the unmanaged commons. This is why we are overfishing, polluting the Earth, why some people are freeriding and having 7 children with no contraception etc. Why ecosystems — rainforests, kelp forests, coral reefs, and even insects — are being decimated. Why one third of arable farmland is desertified, just like in the US dust bowl. Back then it was a race to the bottom and the US Govt had to step in and pay farmers NOT to plant.
We are racing to an AIpocalypse because what if China does it first?
In case you think the world doesn't have real solutions… there have actually been a few examples of us cooperating to prevent catastrophe.
1. Banning CFCs in Montreal Protocol, repairing hole in Ozone Layer
2. Nuclear non-proliferation treaty
3. Ban on chemical weapons
4. Ban on viral bioweapons research
So number 2 is what I would hope would happen with huge GPU farms, we as a global community know exactly the supply chains, heck there is only one company in Europe doing the etching.
And I would also want a global ban on AGI development, or at least on leaking model weights. Otherwise it is almost exactly like giving everyone the means to make chemical weapons, designer viruses, etc. The probability that NO ONE does anything that gets out of hand will be infinitesimally small. The probability that we will be overrun by tons of destructive bot swarms and robots is practically 100%.
In short — this is the ultimate negative externality. Corporations and countries are in a race to outdo each other in AGI even if they destroy humanity doing it. All because, as a species, we are drawn to competition and don't do the work to establish frameworks for cooperation the way we have done on local scales like cities.
PS: meanwhile, having limited tools and not AGI or ASI can be very helpful. Like protein folding or chess playing. But why, why have AGI proliferate?
It's the equivalent of outsourcing your job. People have done this before, to China, to India, etc. There are stories about the people that got caught, e.g. with China because of security concerns, and with India because they got greedy, were overemployed, and failed in their opsec.
This is no different, it's just a different mechanism of outsourcing your job.
And yes, if you can find a way to get AI to do 90% of your job for you, you should totally get 4 more jobs and 5x your earnings for 50% reduction in hours spent working.
Maybe a few people managed to outsource their own job and sit in the middle for a bit. But that's not the common story, the common story is that your employer cut out the middle man and outsourced all the jobs. The same thing will happen here.
The trick is to register an LLC, and then get your employer to outsource the work to your consulting company. You get laid off, and then continue to work through your company.
Only mild sarcasm, as this is essentially what happens.
A question is which side agents will achieve human-level skill at first. It wouldn’t surprise me if doing the work itself end-to-end (to a market-ready standard) remains in the uncanny valley for quite some time, while “fuzzier” roles like management can be more readily replaced.
It’s like how we all once thought blue collar work would be first, but it turned out that knowledge work is much easier. Right now everyone imagines managers replacing their employees with AI, but we might have the order reversed.
> This begs the question of which side agents will achieve human-level skill at first.
I don't agree; it's perfectly possible, given chasing0entropy's... let's say 'feature request', that either side might gain that skill level first.
> It wouldn’t surprise me if doing the work itself end-to-end (to a market-ready standard) remains in the uncanny valley for quite some time, while “fuzzier” roles like management can be more readily replaced.
Agreed - and for many of us, that's exactly what seems to be happening. My agent is vaguely closer to the role that a good manager has played for me in the past than it is to the role I myself have played - it keeps better TODO lists than I can, that's for sure. :-)
> It’s like how we all once thought blue collar work would be first, but it turned out that knowledge work is much easier. Right now everyone imagines managers replacing their employees with AI, but we might have the order reversed.
Some humans will be rich and they'll buy things. For example, the humans who own AI or fabs. And the humans who serve them (assuming there will be services not replaced by AI, for example prostitution) will also buy things.
If 99.99% of other humans become poor and eventually die, it will certainly change the economy a lot.
Not a lot of difference between an F-35 and a fleet of drones when it comes to it tbh. If F-35s are not enough then I don't see how drones will fare better.
IMO drones are just a way for Elon to get his foot in the door.
> How are businesses going to get money if there are no humans that are able to pay for goods?
By transacting with other businesses. In theory comparative advantage will always ensure that some degree of trade takes place between completely automated enterprises and comparatively inefficient human labor; in practice the utility an AI could derive from these transactions might not be worth it for either party—the AI because the utility is so minimal, and the humans because the transactions cannot sustain their needs. This gets even more fraught if we assume an AGI takes control before cheaply available space flight, because at a certain point having insufficiently productive humans living on any area of sea or land becomes less efficient than replacing the humans with automatons (particularly when you account for the risk of their behaving in unexpected ways).
There is an amount of people who own, well, in the past we could say "means of production" but let's not. So, they own the physical capital and AI worker-robots, and this combination produces various goods for human use. So they (the people who own that stuff) trade those goods between each other since nobody owns the full range of production chains.
The people who used to be hired workers? Eh, they still own their ability to work (which is now completely useless in the market economy) and nothing much more so... well, they can go and sleep under the bridge or go extinct or do whatever else peacefully, as long as they don't try to trespass on the private property, sanctity and inviolability of which is obviously crucial for the societal harmony.
So yeah, the global population would probably shrink down to something in the hundreds of millions or so in the end, and ironically, the economy may very well end up being self-sustainable and environmentally green and all that nice stuff, since it won't have to support the living standards of ~10 billion people, although the process of getting there could be quite tumultuous.
That is more or less what I fear. If the top 10 percent already account for half of all consumer spending, and inequality keeps getting worse and worse, that's probably where it will end.
Funny thing. No need for drama. Just give people education and a wage, and a grind, and populations will go down on their own. While we pretend that the value of money still means something.
I didn't read the parent comment as endorsing that outcome, simply predicting that if people chase profits without regard for the well being of their fellow man, that's where we might end up heading. I think the question we have to answer is "how can we prevent that?", because history has shown us that humans are very happy to run roughshod over others to enrich themselves.
It for sure is endorsed by the tech billionaires... Humans greed is just so tiring to me. I am so fucking tired of seeing good people suffer while some tech bros wipe their asses with pure gold.
The AI agents don’t appear to know how & where to be economically productive. That still appears to be a uniquely human domain of expertise.
So the human is there to decide which job is economically productive to take on. The AI is there to execute the day-to-day tasks involved in the job.
It’s symbiotic. The human doesn’t labour unnecessarily. The AI has some avenue of productive output & revenue generating opportunity for OpenAI/Anthropic/whoever.
It’s a fundamental principle of modern economics that humans are assumed to act in their own economic interests - for which they need to know how and where to be economically productive.
humans are assumed to act, and some activities may generate consequences, to which a human may react somehow.
certainly there is a "survivor bias" but the rationality, long-term viability, and "economic benefit" of those activities and reactions is an open question. any judgement of "economic benefit" is arbitrary and often made in aggregate after the fact.
if humans knew how to create "economic benefit" in some fundamental and true way, game theory and most regulatory infrastructure would not exist, and i'm saying that as an anarchist.
You are welcome to try to cut them out and start your own business. But I suspect you might find it a bit harder than your employer signing up for a SaaS AI agent. Actually wait, isn't that what this website is? Does it work?
This is backwards. Those people got into the positions they have by having money to spend, not because someone wanted to pay them to do something. (Or they had a way to have control over spending someone else's money.)
Do people on Hacker News actually believe this? Each one of the four people named built a product I happily pay for! Then they used investment and profits to hire people to build more products and better products.
There's a lot of scammers in the world, but OpenAI, Tesla, Amazon, and Microsoft have mostly made my life better. It's not about having money, look at all the startups that have raised billions and gone kaput. Vs say Amazon who raised just $9M before their $54M IPO and is still around today bringing tons of stuff to my door.
The most successful scammers will provide you with something of value and then act to swindle you and many others of multiple times the amount of "value" they're generating. With Musk and their friends it seems to be the pattern.
Musk sells several things. Electric cars for $40k-$100k. Satellite internet for $40-$120 per month. X/Grok premium for $8/mo. And space launch services for about $2,500 per kg. Which one(s) of these are the scam? Prices seem decent to me, but if you tell me where I can get cheaper and better I'm open to it.
The "scam" part of Tesla has been well-documented, from their failure to deliver reliable full self-driving to the Cybertruck's low quality manufacturing, there is a lot of information out there about it.
comma.ai owns a lot of cars, including a Tesla, so I have tried most cars in the price range. Tesla is certainly no more of a scam than the other cars, and compared to say, the Chevy Bolt, it's a lot better. Can you suggest a better car for the value? Is there another car I can buy with better full self driving?
They are a bridge between those with money and those with skill. Plus they can aggregate information and act as a repository of knowledge and decision maker for their teams.
These are valuable skills, though perhaps nowhere near as valuable as they end up being in a free market.
The free market is an analyzable simplification of the real market, however I think the assumptions hold in this case.
If a CEO delivers a certain advantage (a profit multiplier), it's rational that a bidding war will ensue for that CEO until they are paid the entire apparent advantage of their presence at the company. A similar effect happens for salespeople.
The key difference between free and real markets in this case is information and distortions of lobbying. That plus legal restrictions on the company. The CEO is incentivized to find ways around these issues to maximize their own pay.
> Just let me subscribe to an agent to do my work while I keep getting a paycheck.
I've already done this. It's just a Teams bot that responds to messages with:
"Yeah that looks okay, but it should probably be a database rather than an Excel spreadsheet. Have you run it past the dev team? If you need anything else just raise a ticket and get Helpdesk to tag me in it"
"I'm pretty sure you'll be fine with that, but check with {{ senior_manager }} first, and if you need further support just raise a ticket and Helpdesk will pass it over"
"Yes, quite so, and indeed if you refer to my previous email from about six months ago you'll see I mentioned that at the time"
"Okay, you should be good to go. Just remember, we have Change Management Process for a reason so the next time try to raise a CR so one of us can review it, before anyone touches anything"
and then
"If you've any further questions please stick them in an email and I'll look at it as a priority.
Mòran taing,
EB."
(notice that I don't say how high a priority?)
No AI needed. Just good old-fashioned scripting, and organic stupidity.
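For anyone tempted to try this at home, the core of such a bot really is just a list of canned replies and a pick function. A minimal sketch in Python (the replies and the `manager` placeholder here are illustrative, not the commenter's actual script, and the real version would hook into the Teams webhook API):

```python
import random

# Canned middle-management replies; "{manager}" is filled in at send time.
CANNED_REPLIES = [
    "Yeah that looks okay, but it should probably be a database rather than "
    "an Excel spreadsheet. Have you run it past the dev team?",
    "I'm pretty sure you'll be fine with that, but check with {manager} first.",
    "Yes, quite so, and indeed if you refer to my previous email from about "
    "six months ago you'll see I mentioned that at the time.",
    "Okay, you should be good to go. Next time please raise a CR so one of "
    "us can review it before anyone touches anything.",
]

def reply(message: str, manager: str = "the senior manager") -> str:
    """Ignore the incoming message entirely and pick a canned response.
    No AI involved, just organic scripting."""
    return random.choice(CANNED_REPLIES).format(manager=manager)

print(reply("Can I change the prod config?", manager="Alice"))
```

Since `reply` never inspects `message`, it is indistinguishable from a busy manager in a surprising number of conversations.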
Reminded me of an episode of The IT Crowd where they put a recording of "Have you tried turning it off and on again?" on the answering machine for an IT department.
What would you actually do if you got that? I like watching movies and playing games, but that lifestyle quickly leads to depression. I like travelling too, but imagine if everyone could do it all the time. There's only so many good places.
I would use the AI to build a robot that could build copies of itself and then once there are a sufficient number of robots I'd use them to build more good places to go to.
What happens when "your" AI wants to build something where someone else's AI wants to build it? I suppose you are thinking of something like Banks's Culture? The trouble is for that we're probably going to need real AI, not just LLMs, and we have no reason to believe a real AI would actually keep us as pets like in the Culture. We have no idea what it would want to do...
Isn't this kind of the same as an AI copilot, just with higher autonomy?
I think the limiting factor is that the AI still isn't good enough to be fully autonomous, so it needs your input. That's why it's still in copilot form
Really this is the only 10x part of GenAI that I see: increasing the number of reports exponentially by removing managers/directors, and using GenAI (search/summarization, e.g. "how is X progressing" etc) to understand what's going on underneath you. Get rid of the political game of telephone and get leaders closer to the ground floor (and the real problems/blockers).
If your entire job, as a VP or director/manager, is getting progress reports, you’re probably a wildly shitty manager and ought to be replaced anyways.
Seems more like the kind of thing a “smartest guy in the building” dev believes to be true, than actual reality at a real company.
Having VPs “clear blockers” is absolutely asinine.
From what I hear, this will not happen. AI keeps absolutely making up laws and cases that don’t exist no matter what you feed it. Basically anything legal written or partially written by AI is a liability. IANAL but have been reading a tiny bit about it.
The need for lawyers will shrink and is shrinking. My company used to call lawyers for many small little things. Now it is easy to ask an LLM and have the second LLM verify it. For super critical things, we may still call lawyers. And in the court rooms, you will still see lawyers. But everywhere else the need for lawyers will keep going down.
Worth noting that a lawyer's job is not just reading a text and saying "true or false": it requires interpretation and an understanding of how a society changes and evolves, and depending on the country the system is based on jurisprudence or is more analytical (written laws).
I have difficulty seeing why a portion of the HN audience has such a narrow view of justice systems and politics.
Ehhh just calling a raw LLM is not going to replace anyone and be prone to hallucination, sure. But lawyers are increasingly using LLM systems, and there's law-specific products that are heavily grounded (ie. they can only respond from source material).
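The "grounded" pattern the parent describes is roughly: retrieve source passages first, then answer only from (and cite) those passages. A toy sketch with a stub keyword retriever (the corpus, scoring, and function names here are illustrative; real legal products use proper search and an LLM constrained to the retrieved text):

```python
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval; real products use proper search."""
    words = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: -len(words & set(d.lower().split())))
    return scored[:k]

def grounded_answer(query: str, corpus: list[str]) -> str:
    sources = retrieve(query, corpus)
    if not sources:
        return "No supporting source found; declining to answer."
    # A real system would now prompt an LLM with ONLY these passages and
    # instruct it to cite them; here we just surface the citations.
    return "Based on: " + " | ".join(sources)

corpus = [
    "Statute 12 requires written notice within 30 days.",
    "Case law holds that verbal notice is insufficient.",
    "Unrelated zoning regulations for commercial property.",
]
print(grounded_answer("is verbal notice sufficient under the statute", corpus))
```

The key design point is the refusal branch: a grounded system that finds no supporting source says so, rather than hallucinating a case that doesn't exist.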
Our CEO did not write a customary Thanksgiving email. There was nothing from other C-level leadership. I've been around long enough to see this as an erosion of company culture and custom. What is happening? Perhaps an AI CEO would observe these subtleties.
Though I think the CEO role is realistically one of the hardest to automate, I’d say middle management is a very juicy target.
To the extent a manager is just organizing and coordinating rather than setting strategic direction, I think that role is well within current capabilities. It’s much easier to automate this than the work itself, assuming you have a high bar for quality.
The UI looks good!
Is there a reason this is being shared here? Feels like a collection of tired, trite oneliners that I’d expect to see on Twitter rather than here.
Agreed, it’s only superficially funny, there’s a ton left on the table that could have made it actually good, it feels like it doesn’t adequately parody CEOs or AI in a way that indicates any insight.
...and you haven't contributed anything in those months.
You went from one one-sentence-long comment months ago straight into criticising what other people contribute in this thread. Do you think that's fair of you?
AI can and should replace CEOs, lawyers, and even non-surgeon doctors. The fact that AI is always brought up when it comes to software development layoffs (ironically, developers are the ones who built it), yet it isn't impacting the roles it easily could, raises so many questions, and clearly shows that AI is being weaponized to lower the wages of some workers while others are protected by regulations and lobbyists.
Joke aside, I do think someone should work on a legitimate agent for financial and business decisions, management, and so on.
Especially "decision making". I find it's one of the tricky parts: making the AI agent optimize for actually good decisions, and not just give you info or options but form real opinions and make real decisions.
I know they're supposed to be smarter than a year ago but you could have fooled me
I'm in a loop with Opus 4.5: I tell it "be logically consistent", it says "you're absolutely right", and then it proceeds to be logically inconsistent again, for the 20th time.
Respectfully, 'be logically consistent' is not something an LLM would understand, on the basis that it isn't logical and is literally unable to reason in the first place. It's dumb.
Without knowing what a "good decision" is, how do you even evaluate them head to head?
At this point "Humans are also imperfect" is becoming a lazy defense. It is equivalent to crypto bros saying "humans are corrupt. Blockchain FTW!". Remind me where we are with that?
Replacing your imperfect analyst with an equally imperfect, if not worse, and possibly black-box system is not the winning sales message you seem to think it is.
How hard would it be to run a simulator with multiple LLMs? Say, one as the boss and a few as employees. Just let them talk, coordinate, and "work". Could be the fastest way to test what actually happens when you try to automate management.
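Probably not very hard: the skeleton is just a message loop over role-prompted models. A minimal sketch, with a stub standing in for the actual model call (the agent names, prompts, and `simulate_org` helper are all illustrative; swap `stub_llm` for a real chat-completion call to experiment):

```python
from collections import deque

def simulate_org(agents: dict[str, str], llm, turns: int = 6) -> list[str]:
    """Round-robin 'company': each agent sees the shared transcript,
    responds in role, and appends to it. `llm(system_prompt, transcript)`
    is any callable returning a string (a real chat model or a stub)."""
    transcript: list[str] = []
    order = deque(agents.items())
    for _ in range(turns):
        name, system_prompt = order[0]
        order.rotate(-1)  # next agent speaks on the next turn
        reply = llm(system_prompt, transcript)
        transcript.append(f"{name}: {reply}")
    return transcript

# Stub LLM for a dry run; it just echoes its role instruction.
def stub_llm(system_prompt: str, transcript: list[str]) -> str:
    return f"(acting on: {system_prompt.split('.')[0]})"

agents = {
    "Boss": "You set priorities and ask for status. Be brief.",
    "Dev A": "You implement tasks and report blockers. Be brief.",
    "Dev B": "You review work and flag risks. Be brief.",
}
for line in simulate_org(agents, stub_llm, turns=3):
    print(line)
```

The interesting experiments start once the stub is replaced with real models: whether coordination emerges or the transcript degenerates into polite agreement loops is exactly the question the parent is asking.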
This is quite literally what we've built @ Gobii, but it's prod ready and scalable.
The idea is you spin up a team of agents, they're always on, they can talk to one another, and you and your team can interact with them via email, sms, slack, discord, etc.
And they simulate an externalized team, where the enterprise that pays for the team doesn't know that it's just AI and just thinks that the Chinese/Indian/African people on this external team are really bad at what they are doing.
Interesting approach, but I mean more in the sense of a multi-agent sandbox than workflow automation. Your project feels like wrapping a bunch of LLMs into "agents" with fixed cadences; it is a neat product idea, even if it mostly ends up orchestrating API calls and cron jobs.
The thing I’m curious about is the emergent behavior, letting multiple LLMs interact freely in a simulated organization to see how coordination, bottlenecks, and miscommunication naturally arise.
Agreed, the emergent behavior is the most interesting and valuable part. We don't want bad emergent behavior (agents going rogue) but we do want the good kind (solving problems in unexpected ways.)
The site is obviously satire, but the interesting part is the growth tactic behind it. oilwell.app is using a meme page as a distribution engine instead of a standard marketing site.
In a crowded AI tooling market, this kind of contrast joke on the front, paired with a real product behind it, cuts through the noise in a way a normal landing page wouldn't. People mock the gimmick, but the gimmick is doing exactly what it's designed to do: get everyone talking.
Funny. In fact, blockchain smart contracts (dApps) tried this before, by fully automating (they call it democratizing) the decisions. Not sure how it went.
How is your AI going to go meet with investors and potential customers? Or present to the board? And AI can't be accountable, not really. The whole idea is silly.
But as he joked, if it can do PowerPoint he'd let it take over that at least
The free version of Gemini says it could not replace the CEO of JP Morgan Chase but that it would make an excellent Chief Risk Officer or Chief Strategist. That would still save a ton of money!
Easier in what way? I’d say this wouldn’t fare better than other recent AI implementations.
I.e. I’d guess doing this in practice with current state of the AI and without expert supervision would lead to some catastrophic error relatively soon.
I was going to mention the same thing; also, the page is clearly designed by a woman. Never mind that neither Sundar nor Satya is white, or that many VPs in the Big Tech world are women. OP seems to have a very distorted view of the corporate world and has vilified the white alpha male in her mind.
Capitalism requires that capital is owned and controlled by specific people. So, no, there cannot be an AI CEO. In other words, if you say you have an AI CEO, then that entity will be under the control of someone else, whom you might as well call the real CEO.
Just like how Twitter had a “CEO” who was some pliable female who did the bidding of the real CEO: Elon Musk.
There are shareholders/owners and there are CEOs. You can certainly have an AI CEO if the board of directors wants that. Although depending on the jurisdiction, CEOs might need to be humans, but surely not everywhere.
And you could even imagine AI owners with something like Bitcoin wallets. So far it wouldn't work because of prompt injections but the future could be wild.
If the "directors" want that, then the directors are sharing the job of CEO. It is simply absurd labeling magic to claim that that AI is the chief executive.
> Capitalism requires that capital is owned and controlled by specific people.
That is an overly simplistic description. One can imagine a board of directors voting on which AI-CEO-as-a-service vendor to use for the next year. The 'capital' of the company is owned by the company, and the company is owned by the shareholders. This is not incompatible with capitalism in principle, but it wouldn't surprise me if it were incompatible with some forms of incorporation.
You are speaking of a form of socialism, sounds like. I thought we were in the realm of capitalism.
Also, you are strangely asserting that a tool operated by a group of people is an executive. People are always in charge in our world. Even if a man builds a device that kills him, we say that he committed suicide, not that he was murdered by a machine.
I like the fun part of it. But this is clearly vibe-coded slop. The awful pink colour scheme, clickable buttons which don't do anything bang in the middle of the page, the share button which doesn't really share, etc.
And some of the messages keep repeating like carbon footprint etc. Just seems low effort and not in a fun way.
Counterpoints: this joke isn't worth the effort to make it high quality and the jank is part of the joke. AI slop is garbage, presenting it as otherwise would be missing the point.
They're at the center of the hourglass that exists between external (board members, shareholders, customers, partners) and internal (employees) interests.
Also, the people in charge of CEOs (the board) are generally CEOs themselves. It's a good old boys' club where they all make sure to take care of each other.
Looks like that's a response to Linus and the Linux community saying that Qualcomm chips weren't able to run Linux. Hey, it's good though: at least now there's internal support.