Hacker News | rileymichael's comments

> This report was produced by me — Claude Opus 4.6 — analyzing my own session logs [...] Please give me back my ability to think.

a bit ironic to use the tool that can't think to write up your report on said tool. that and this issue[1] demonstrate the extent to which folks have become over-reliant on LLMs. their review process let so many defects through that they now have to stop work and comb over everything they've shipped in the past 1.5 months! this is the future

[1] https://github.com/anthropics/claude-code/issues/42796#issue...


The other day I accidentally ran `git reset --hard` on my work from April 1st (wrong terminal window).

Not a lot of code was erased this way, but among it was a type definition I had Claude concoct, which I understood in terms of what it was supposed to guarantee, but could not recreate for a good hour.

Really easy to fall into this trap, especially now that search engine results are comparatively so disappointing.


If your code was committed before the reset, check your git reflog for the lost code.

Yeah, git reset --hard is something I do like once a week! lol

With the reflog, as you mentioned, it's not hard to revert to any previous state.
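For anyone else who lands in this spot: as long as the work was committed at some point, `git reset --hard` doesn't delete the commits themselves, and the reflog records every place HEAD has pointed. A typical recovery looks like this (the hashes and entry text are illustrative):

```shell
# show where HEAD has pointed recently, newest first
git reflog
# e.g.  a1b2c3d HEAD@{0}: reset: moving to HEAD~3
#       d4e5f6a HEAD@{1}: commit: the work you just lost

# move the branch back to the pre-reset commit...
git reset --hard 'HEAD@{1}'

# ...or restore a single file without moving the branch
git checkout 'HEAD@{1}' -- path/to/file
```

Note this only saves committed work; uncommitted changes wiped by `reset --hard` are genuinely gone (unless they were staged, in which case `git fsck --lost-found` may still find the blobs).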


Guess you’ve sorted it but it might be in the session memory in your root folder. I’ve recovered some things this way.


> but could not recreate for a good hour.

For certain work, we'll have to let go of this desire.

If you limit yourself to whatever you can recreate, then you are effectively limiting the work you can produce to what you know.


you should limit your output (manual or assisted) to a level that is well under your understanding ceiling.

Kernighan’s Law states that debugging is twice as hard as writing the code in the first place. how do you ever intend to debug something you can’t even write?


It's simple, they'll just let the LLM debug it!

This is why I believe the need for actually good engineers will never go away because LLMs will never be perfect.


Exactly. It's a force multiplier - sometimes the direction is wrong.

That same week I went down a deep rabbit hole with Claude, and at no point did it try to steer me away from that direction, even though it was a dead end.


> Kernighan’s Law states that debugging is twice as hard as writing.

100%, but in a professional setting you often work with code _not_ written by you. What if that code is written by someone well above my ceiling?


They seem to have some notion of pipelines and metrics, though. It could be argued that the hard part was setting up the observability pipeline in the first place - Claude just gets the data. Still, if Claude is failing as spectacularly as the report claims, yes, it is pretty funny that the report is also written by Claude, since this seems to regress reasoning back to GPT-4o territory.

If you don't have swarms of agentic teams with layers of LLMs feeding and checking LLMs over and over again, you're going to be left behind.

pretty hard to find this on their blog; looks like incidents are tucked away at the bottom. an issue of this size deserves a higher spot.

(also looks like two versions of the 'postmortem' are published at https://blog.railway.com/engineering)


> there is a large force on HN that want to deny the value of tokens

there is an even larger force on HN that financially _needs_ the value of tokens to be inflated (so much so that bots have overwhelmed the site)


That’s not me. I am simply an engineer who gets a ton of value out of these tools.

That’s exactly what a bot would say ;-)

Really? Do you think there are more bots and employees of AI stakeholder companies than there are vanilla engineers on the site?

by far. at this point there are very few tech companies without exposure to AI

> Engineering salaries are significantly higher than nearly every other industry on average and on median

now compare the profit per employee at tech (software engineering) companies with that of those industries.


At the top end (say, top 100 tech companies) it’s pretty high indeed. Public companies, for sure, as otherwise their stock price would tank. It’s not uncommon in this industry to have margins above 70-80%.

But there are thousands if not tens of thousands where the profit per employee is minimal or negative.

I can’t find a source covering all of tech (the data wouldn’t exist for private firms anyway), but I think it’s telling to look at this list: scroll to about the middle and compare against the salaries you or your colleagues are pulling. Software revenues are certainly high, but the industry stays afloat because high-margin businesses create the returns that let low-margin businesses exist. Without massive infusions of upfront capital, very uncommon in other industries, it’s simply not sustainable.

Typically a market that’s buoyed by its top performers but has significant amounts of capital tied up in under performers is called “a bubble”.

https://www.trueup.io/revenue-per-employee


looking forward to the `addressing-githubs-recent-availability-issues-3` news post


structural search and replace in intellij is a superpower (within a single repo). for polyrepo setups, openrewrite is great. add in an orchestrator (simple enough to build one like sourcegraph's batch changes) and you can manage hundreds of repositories in a deterministic, testable way.
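for concreteness, an openrewrite change is declared as a recipe, typically in a rewrite.yml; this sketch uses the stock ChangeMethodName recipe (the class and method names here are made up) to rename a call the same way across every repo the recipe runs against:

```yaml
---
type: specs.openrewrite.org/v1beta/recipe
name: com.example.RenameLegacyFetch
displayName: Rename legacy fetch calls
recipeList:
  - org.openrewrite.java.ChangeMethodName:
      methodPattern: "com.example.Client legacyFetch(..)"
      newMethodName: fetch
```

because the recipe operates on the parsed AST rather than text, the result is deterministic and reviewable repo by repo.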


couldn't have said it better. all of the people clamoring about eliminating the boilerplate they've been writing and about enabling refactoring have had their heads in the sand for the past two decades. so yeah, i'm sure it does seem revolutionary to them!


There have been a handful of leaps - copilot was able to look at open files and stub out a new service in my custom framework, including adding tests. It’s not a multiplier but it certainly helps


most frameworks have CLIs / IDE plugins that do the same (plus models, database integration, etc.) deterministically. i've built many in house versions for internal frameworks over the years. if you were writing a ton of boilerplate prior to LLMs, that was on you


Have they? I’ve used tools that mostly do it, but they require manually writing templates for the frameworks. In internal apps my experience has been that these get left behind as the service implementations change, and it ends up as “copy your favourite service that you know works”.


> they require manually writing templates for the frameworks

the ones i've used come with defaults that you can then customize. here are some of the better ones:

- https://guides.rubyonrails.org/command_line.html#generating-...

- https://hexdocs.pm/phoenix/Mix.Tasks.Phx.Gen.html

- https://laravel.com/docs/13.x/artisan#stub-customization

- https://learn.microsoft.com/en-us/aspnet/core/fundamentals/t...

> my experience has been these get left behind as the service implementations change

yeah, i've definitely seen this. ultimately it comes down to your culture and ensuring time is invested in devex. an approach that helps avoid drift is generating directly from an _actual_ project instead of using something like yeoman, but that's quite involved


Sorry - I’m aware that rails/dotnet have these built into visual studio and co, but my point was about our custom internal things that are definitely not IDE integrated.

> it comes down to ensuring time is invested in devex

That’s actually my point - the orgs haven’t invested in devex, but that didn’t matter because copilot could figure out what to do!


the best way is via CRaC (https://docs.azul.com/crac/), but only a few vendors support it and there’s a bit of process to get it set up.

in practice, for web applications exposing some sort of `WarmupTask` abstraction in your service chassis that devs can implement will get you quite far. just delay serving traffic on new deployments until all tasks complete. that way users will never hit a cold node
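to make that concrete, here's a minimal sketch of such an abstraction; `WarmupTask` and `ReadinessGate` are hypothetical names, not part of any real service chassis:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical chassis abstraction: each service registers tasks that
// exercise hot code paths (prime caches, force JIT compilation, etc.).
interface WarmupTask {
    void warmUp() throws Exception;
}

class ReadinessGate {
    private final AtomicBoolean ready = new AtomicBoolean(false);

    // Run every registered task before flipping the flag your readiness
    // probe reports (e.g. a kubernetes readinessProbe endpoint). Until it
    // flips, the new deployment gets no traffic, so users never hit a cold node.
    void runAll(List<WarmupTask> tasks) {
        for (WarmupTask task : tasks) {
            try {
                task.warmUp();
            } catch (Exception e) {
                throw new IllegalStateException("warmup failed; refusing traffic", e);
            }
        }
        ready.set(true);
    }

    boolean isReady() {
        return ready.get();
    }
}

public class WarmupDemo {
    public static void main(String[] args) {
        ReadinessGate gate = new ReadinessGate();
        gate.runAll(List.of(
            // toy task: loop a parse so the JIT sees the code path
            () -> { for (int i = 0; i < 10_000; i++) Integer.parseInt("42"); }
        ));
        System.out.println(gate.isReady()); // true once all tasks finish
    }
}
```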


But then we can complain about the long start time for each instance or JVM. It is choosing a different trade-off.


start time generally isn't a huge concern for web applications (outside of serverless), since the existing deployment keeps serving traffic until the new one is ready. if you're on kubernetes, the time to create the new pods, do your typical blue-green promotion with analysis tests, etc. is already a decent chunk of time regardless of the underlying application. if you get through it in 90 seconds instead of 60, does that really matter?


i’ve said this before, but the “left behind” narrative is FUD nonsense. as an llm avoider i’ve never felt further _ahead_ than now. all of my peers who never bothered to learn their tools (which gave tangible benefits) have opted into deskilling themselves further.

it’s readily apparent who has bought into the llm hype and who hasn’t


