Haha, I hit something similar after 12 years, just didn’t care anymore, and the idea of another sprint planning meeting made me nauseous. Jumped into product for a while thinking proximity to decision-making would help. It didn’t. Just more meetings, more politics.
What helped wasn’t the role shift, but dialing the intensity way down. Took a year doing part-time contract work, no Jira tickets. I know a few folks who leaned into teaching, some into small business stuff—bike repair, roasting coffee, etc. None of them are making FAANG money, but they seem… less fried.
If you’ve got savings and no urgent obligations, might be worth treating this as a decompression window instead of a pivot. Let your brain deflate a bit before deciding what’s next.
Compliance is usually the hard stop before we even get to capability. We can’t send code out, and local models are too heavy to run on the restricted VDI instances we’re usually stuck with. Even when I’ve tried it on isolated sandbox code, it struggles with the strict formatting. It tends to drift past column 72 or mess up period termination in nested IFs. You end up spending more time linting the output than it takes to just type it. It’s decent for generating test data, but it doesn't know the forty years of undocumented business logic quirks that actually make the job difficult.
To be fair, I would not expect a model to output perfectly formatted C++. I’d let it output whatever it wants and then run it through clang-format, same as a human would. Even the best humans who have the formatting rules in their head will miss a few things here or there.
If there are 40 years of undocumented business quirks, document them and then re-evaluate. A human new to the codebase would fail under the same conditions.
Formatting isn't just visual in fixed-format COBOL or Fortran. It's syntax. A misplaced column is a compile failure, or worse, the compiler cuts the line at column 72 and can sometimes successfully compile it into something else.
That's not just an undocumented quirk, but a fundamental part of being a punch-card-ready language.
In C++, formatting is optional. A better test case for LLMs is Python, where indentation specifies code blocks. Even ChatGPT 3.5 got the formatting for Python and YAML correct; the actual code back then was often hilariously wrong, though.
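The Python point is easy to demonstrate: indentation alone decides which block a statement belongs to, so a model that mangles whitespace produces a different program, not just an ugly one. A minimal sketch:

```python
# Same tokens, different indentation: `total += n` either runs inside
# the loop or once after it, so the whitespace *is* the semantics.
def sum_inside(nums):
    total = 0
    for n in nums:
        total += n      # indented under the loop: runs per element
    return total

def sum_outside(nums):
    total = 0
    for n in nums:
        pass
    total += n          # dedented: runs once, using the last element
    return total

print(sum_inside([1, 2, 3]))   # 6
print(sum_outside([1, 2, 3]))  # 3
```

Both versions parse and run without complaint, which is exactly why a formatter can't save you here the way clang-format can for C++.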
I can't even get GitHub Copilot's plugin to avoid randomly trashing files with a zero-width no-break space (BOM) at the beginning, let alone follow formatting rules consistently...
I am the last person to say anything good about Copilot. I used it for a minute, mostly used raw ChatGPT until last month, and now use Codex with my personal ChatGPT subscription and my personal (but company-reimbursed) Claude subscription.
A quick search finds many COBOL checkers. I’d be very surprised if a modern model was not able to fix its own mistakes if connected to a checker tool. Yes, it may not be able to one shot it perfectly, but if it can quickly call a tool once and it “works”, does it really matter much in the end? (Maybe it matters from a cost perspective, but I’m just referring to it solving the problem you asked it to solve.)
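The generate → check → repair loop being described is simple to sketch. Below is a toy version of the pattern: the "checker" just enforces the column-72 limit and the "repair" step is a stand-in for asking the model to rewrite the flagged lines. A real setup would wire in an actual COBOL checker and the LLM, but the shape of the loop is the same:

```python
def check(source, limit=72):
    """Toy checker: report indices of lines that drift past the column limit."""
    return [i for i, line in enumerate(source) if len(line) > limit]

def repair(source, errors, limit=72):
    """Toy 'model' step: rewrite only the lines the checker flagged.

    Here we just truncate; in practice this is where you'd feed the
    checker's error report back to the LLM and ask for a fix.
    """
    fixed = list(source)
    for i in errors:
        fixed[i] = fixed[i][:limit]
    return fixed

def generate_with_checker(source, max_rounds=3):
    """Iterate until the checker is satisfied or we give up."""
    for _ in range(max_rounds):
        errors = check(source)
        if not errors:
            return source
        source = repair(source, errors)
    raise RuntimeError("could not satisfy the checker")
```

The point of the pattern is that the model never has to one-shot the formatting rules; it only has to converge within a few tool round-trips.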
Clearly it isn’t just “broken” for everyone, “Claude Code modernizes a legacy COBOL codebase”, from Anthropic:
Taking Anthropic's reporting on Anthropic at face value is not something you should really do.
In this case, a five-stage pipeline, built on demo environments and code that were already in the training data, was successful. I see more red flags there than green.
A lot of people still think moats are about features. They’re not anymore. Features are cheap now. Execution and distribution are the real bottlenecks.
Big companies can copy your product, but they usually won’t copy:
– your speed early on
– your willingness to serve a tiny, unsexy niche
– your ability to change direction without internal politics
In practice, most startups don’t die because a big company copied them. They die because they never found real users who cared enough to pay.
The moat today often looks like:
– deep understanding of a specific workflow or pain point
– trust with a narrow audience
– compounding advantages (data, habits, integrations, community)
If your plan is “build something cool and hope it sticks”, it’s probably not worth it.
If your plan is “solve a painful problem for a very specific group, then expand”, it still is.
Curious how people here think about moats post-AI. Are we underestimating distribution, or overestimating defensibility?
Zero to One makes this crystal clear: most startups die because they didn't manage to sell, not because they didn't manage to build the product. Secondly, there was never a code moat. Code was always cheap. But architecture, distribution, quality control, integration once landed: this is where the money is mostly made.

Most indie or small companies die because they just pick shitty problems. E.g., nobody cares about a goddamn notes app; the one on Mac works fine, and most people write garbage in their notes anyway. So obviously a notes app has little value, because the asset it manages has very little value. Not so much if you are building something like document storage for regulated industries, or compliance software. In those cases, it's the business domain expertise which counts. Even when applying for jobs, code monkeys without domain expertise never get paid past a threshold. The people and entities that command a premium are the ones with domain knowledge.
The success of Hacker News doesn’t come from flashy features, but from a community that consistently produces high-quality content. That said, I can’t help but wonder if there are any updates to the UI/UX in the works, LOL.
It’s a clickbait-worthy topic, but AI is helping me tackle tons of small tasks and unlocking productivity at a level I’ve never experienced before. I’m a software engineer.
The standard for parameter count is rapidly evolving. Something large now will be small tomorrow; there is no point in using such a moving target as a criterion.
Sure, but nonetheless whether the model is called "small" at some time t should depend on the parameter count and t, not some arbitrarily specified metric of deployability.