Nah I think it really is more nuanced than that. It is true that a non-technical person's vibe-coded side-hustle is completely different than how a professional developer may ship genAI code, but we're willfully glossing over the real problem that professionals are pushing out TONS of genAI code that's closer to vibes than it is to the pre-AI expectations on pushing to prod.
For CRUD apps though, the intern closing the ticket literally 30 minutes after it's created is really hard to battle against. Especially when those tickets were created by suits.
I generally agree: while vibe-coding is here to stay, it's different from designing useful products and systems, and I don't know how to convince colleagues that we should uhh be careful about all this code we're pushing. I fear all they see is the guy aging out.
I've read the thread and in my mind you're missing that LLMs increase the surface area of visibility of a thing. It's a probe. It adds known unknowns to your train of thought. It doesn't need to be "creative" about it. It doesn't need to be complete or even "right". You can validate the unknown unknown since it is now known. It doesn't need to have a measured opinion (even though it acts as it does), it's really just topography expansion. We're getting in the weeds of creativity and idea synthesis, but if something is net-new to you right now in your topography map, what's so bad about attributing relative synthesis to the AI?
Coordinating with people is hard and only gets harder as you live. And actually, finding someone that is earnestly receptive to hearing you pitch your half-baked startup ideas (just an example) and is in some capacity qualified to be at all helpful, is uhhh, not easy.
Really? Sometimes I think I'm not very social, then I read something like this. Don't you have any friends? Colleagues? Maybe that's the problem you need to solve rather than sitting in a room burning energy for endless token streams with LLMs that anyone has access to?
Ah, I couldn't help practicing my creative writing in the other reply. This reply is more constructive:
Both LLM-based rubber-ducking and human discussions seem like a win-win. I see no reason to jump to labeling someone's social connections unhealthy just for pairing with LLMs.
lol. nobody is proposing this "well if not friends, then...". Appreciate your concern. I am fine.
This is for Internet posterity: thought-partnering with AI does not in fact make you a sorry socially inept loser that needs globular-toast to come in and help you dial that helpline.
Also: one's friends do not, in reality, want to thought-partner about work issues, esoteric hobbies, and that million dollar idea.
Also: these friends, every and any one of them it seems, will not in fact speak the word of God into your ear as manifest insight for said work issue, million dollar idea, and so forth.
But a poorly written prompt is not a good prompt. What are you really going to do with a shit prompt? It's meta: we need better writers all the way down.
Agree. Also, deference to consensus has always been a thing. "Best practices" is a thing at all levels of school and work. So it's very much a human thing; AI just drastically compresses the timeline.
Importantly, it's not wrong. I say this as someone who seems to have the contrarian gene. I am worried too that the status quo is now instant and all-consuming for anyone anywhere. But there's still hope in that AI compresses ramp-up speed for anyone who would have had the capacity to branch out anyway. So that's good.
Why is the window closing though? Because the prices went up? Or companies have to demonstrate belt-tightening? Or the AI mandate has teams building their own saas?
There are several aspects. For one, I don't see teams building their own SaaS even with AI. Companies buy rather than build to avoid significant operational and maintenance burdens, as well as to transfer risk and liability to a third party. AI does not change that calculus.
What AI is instead enabling is a shift from Software as a Service to Service as a Software. In other words: SaaS is dead, long live SaaS. Most SaaS vendors started because software is high margin and has limited scaling costs. But as they mature, they find clients also want guidance, professional services, and clear outcomes. This is part of the rise of the Forward Deployed Engineer (FDE) as a formal role. So it's not enough to sell the software; you now also have to sell how to use the software and what transformations are possible with it. Essentially, you can sell software to an individual, but you sell transformations ("value alignment") to teams, divisions, and orgs.
Another is that inference will become more expensive rather than cheaper over time. The capex spend on data centers has to be paid back by someone. This is the standard Silicon Valley playbook: start cheap, gain market share, operate as a cartel, and then massively hike prices (e.g., Uber, Airbnb...). So vendors (even if they operate with value-based pricing) still have to cover their inference costs, and will see more value from going upmarket early with larger contract deal sizes.
TL;DR
Companies will still buy SaaS, but a new variant -> services and outcomes rather than purely software. This, coupled with increasing inference costs, means value alignment will more likely require a negotiated conversation than a 1-click purchase.
Thank you for taking the time. Your dot-connection ability is well honed.
This is particularly apt for me as I switch jobs from consumer product tech to a smaller-scale, sales-led company.
What's your take on the forward-deployed engineer setup, in the minds of startups, as a long-term viable/lucrative model? I've always heard the warning for tech startups to avoid the trap of becoming a consultancy.
That warning is still valid and prudent. Consultancies are highly customized one-time engagements that do not easily scale. Startups prefer to have long-term relationships built on useful software.
The difference is customization vs. implementation, which is where the role of the FDE shines. A consultancy builds something specific for a customer, whereas a startup builds general-purpose software. The FDE can then act as a force multiplier in educating the customer on how to use the product to its full potential.
Essentially, as AI makes software more commoditized (e.g., there are a billion notetaking apps), the ability to sell a solution and outcome that solves individual customer pain points while still supporting a unified product experience serves as a differentiating moat. If every platform in the market offers the same features, you'll go with the one that offers the best high-touch sales experience. This is why, over time as startups scale, opex shifts from eng to sales and marketing.
So my friend works for a sports betting app and I personally do judge him from a philosophical point of view. I would never! Same with Meta, I would never!
But since I never once thought to de-friend him, I thought more about it. I leaned in. And TLDR: we are all part of this machine. Literally, everyone's work output gets bundled up into public retirement funds invested in these baddie public companies.
What's really the difference? The guy earns his paycheck directly; is that worse than all of us being complicit in making money off stock market go up? Yes, the stock-market metaphor is intentional. The original gambler's paradise.