
> I don't want to review code the author doesn't understand

This really bothers me. I've had people ask me to do some task, except instead of saying "Hey, can you please do X", they get AI to generate instructions for how to do the task and send me those instructions. It's insulting.


Had someone higher up ask about something in my area of expertise. I said I didn't think it was possible; he followed up with a ChatGPT conversation he'd had where it "gave him some ideas that we could use as an approach", as if that were some useful insight.

These are the same people who think that "learning to code" is a translation issue they don't have time for, as opposed to experience they don't have.


> These are the same people who think that "learning to code" is a translation issue they don't have time for, as opposed to experience they don't have.

This is very, very germane and a very quotable line. And these people have been around from long before LLMs appeared. These are the people who dash off an incomplete idea on Friday afternoon and expect to see a finished product in production by next Tuesday, latest. They have no self-awareness of how much context and disambiguation is needed to go from "idea in my head" to working, deterministic software that drives something like a process change in a business.


The unfortunate truth is that the approach does work, sometimes. It's really easy and common for capable engineers to think their way out of doing something because of all the objections they can raise against it.

Sometimes, an unreasonable dumbass whose only authority comes from the corporate hierarchy is needed to mandate that the engineers start chipping away at the tasks. If they weren't a dumbass, they'd know how unreasonable the thing they're mandating is, and if they weren't unreasonable, they wouldn't mandate that someone do it.

I am an engineer. "Sometimes" could be swapped for "rarely" above, but the point still stands: as much frustration as I have towards those people, they do occasionally lead to the impossible being delivered. But then again, a stopped clock is right twice a day, etc.


That approach sometimes does work, but usually very poorly and often not at all.

It can work very well when the higher-up is well informed and does have deep technical experience and understanding. Steve Jobs and Elon Musk are great, well-known examples of this. They've also provided great examples of the same approach mostly failing when applied outside of their areas of deep expertise and understanding.


If they're only right twice a day, you can run out of money doing stupid things before you hit midnight. In practice, there's a difference between a PHB asking a "stupid" question that leads to engineers having a lightbulb moment, vs a PHB insisting on going down a route that will never work.


You can change "software" to "hardware" and this is still an all too common viewpoint, even for engineers that should know better.


People keep asking me if AI is going to take my job and recent experience shows that it very much is not. AI is great for being mostly correct and then giving someone without enough context a mostly correct way to shoot themselves in the foot.

AI further encourages the problem in DevOps/Systems Engineering/SRE where someone comes to you saying "hey, can you do this for me", having already come up with the solution, instead of giving you the problem: "hey, can you help me accomplish this". AI hands them solutions, which adds more steps of untangling before you get to what really needs to be done.

AI has knowledge, but it doesn't have taste. Especially when it doesn't have all of the context a person with experience has, it has bad taste in solutions, or just an absence of taste, with the additional problem that it makes it much easier for people to do things.

Permissions on what people can read and what they can change are now going to have to be more restricted, because not only are we dealing with folks who have limited experience with permissions, we now have them empowered by AI to do more things that are less advisable.


The question of whether it takes jobs away is more about whether one programmer with taste can multiply their productivity by ~3-15x and take the same salary while demand for coding remains constant. It's less about whether the tool can directly replace 100% of the functions of a good programmer.


Imagine a boring dystopia where everyone is given hallucinated tasks from LLMs that may in some crazy way be feasible but aren't, and you can't argue that they're impossible without being fired since leadership lacks critical thinking.


Reminds me of the wonderful skit, The Expert: https://www.youtube.com/watch?v=BKorP55Aqvg



That is incredibly accurate - I used to be at meetings like that monthly. Please submit this as an HN discussion.


That is a very good description of the Paranoia RPG.


Unfortunately this is the most likely outcome.


I’ve started to experience/see this and it makes me want to scream.

You can’t dismiss it out of hand (especially with it coming from up the chain), but it takes no time at all for someone who knows nothing about the problem space (or worse, just enough to be dangerous) to generate, and it could take hours or more to debunk/disprove the suggestion.

I don’t know what to call this. Cognitive DDoS? Amplified Plausibility Attack? There should be a name for it, and it should be ridiculed.


It's simply the Bullshit Asymmetry Principle/Brandolini's Law. It's just that bullshit generation speedrunners have recently discovered tool-assists.


In corporate, you are _forced_ to trust your coworkers somehow and swallow it. Especially higher-ups.

In free software, though, these kinds of nonsense suggestions have always happened, long before AI. Just look at any project mailing list.

It is expected that any new suggestion will encounter some resistance, and new contributors should be aware of that. For serious projects specifically, the levels of skepticism are usually way higher than in corporations, and that's healthy and desirable.


> Had someone higher up ask about something in my area of expertise. I said I didn't think it was possible; he followed up with a ChatGPT conversation he'd had where it "gave him some ideas that we could use as an approach", as if that were some useful insight.

I would find it very insulting if someone did this to me, for sure, as well as a huge waste of my time.

On the other hand I've also worked with some very intransigent developers who've actively fought against things they simply didn't want to do on flimsy technical grounds, knowing it couldn't be properly challenged by the requester.

On yet another hand, I've also been subordinate to people with a small amount of technical knowledge -- or a small amount of knowledge about a specific problem -- who'll do the exact same thing without ChatGPT: fire a bunch of mid-wit ideas downstream that you have already thought about, so that you then need to spend a bunch of time explaining why their hot takes aren't good. Or the CEO of a small digital agency I worked at circa 2004 asking us if we'd ever considered using CSS for our projects (which were of course CSS heavy).


Reminds me of "Appeal to Aithority". (not a typo)

An LLM said it, so it must be true.

https://blog.ploeh.dk/2025/03/10/appeal-to-aithority/


A friend experienced a similar thing at work: he gave a well-informed assessment of why something was difficult to implement and would take a couple of weeks, based on his knowledge of the system and experience with it, only for the manager to reply within 5 minutes with a screenshot of an (even surprisingly) idiotic ChatGPT reply and a message along the lines of "here's how you can do it, I guess by the end of the day".

I know several people like this, and it seems they feel like they have god powers now, and that they alone can communicate with "the AI" in a way that is simply unreachable by the rest of the peasants.


Same here. You throw a question in a channel. Someone responds in 1 minute with a code example that you either had lying around or that would take > 5 minutes to write.

The code example was AI generated. I couldn't find a single line of code anywhere in any codebase. 0 examples on GitHub.

And of course it didn't work.

But it sent me on a wild goose chase, because I trusted this person to give me a valuable insight. It pisses me off so much.


I mentioned an issue I was stuck on during standup one day, and some guy on my team DMed me a screenshot of ChatGPT with text about how to solve it. When I explained why the solution he had sent didn't make sense and wouldn't solve the issue, he pasted my reply into the LLM and sent me back its response, at which point I stopped responding.

I'm just really confused about what people who send LLM content to other people think they are achieving. If I wanted an LLM response, I would just prompt the LLM myself instead of doing it indirectly through another person who copy/pastes back and forth.


The appropriate response is giving these people an LLM-generated explanation on why sending people LLM-generated slop is inappropriate and provides zero value.


> I know several people like this, and it seems they feel like they have god powers now, and that they alone can communicate with "the AI" in a way that is simply unreachable by the rest of the peasants.

A far too common trap people fall into is the fallacy of "your job is easy, as all you have to do is <insert trivialization here>, but my job is hard because ..."

Statistically generated text (token) responses constructed by LLMs to simplistic queries are an accelerant to this self-aggrandizing problem.


Sounds like a teachable moment.

If it's that simple, sounds like you've got your solution! Go ahead and take care of it. If it fits V&V and other normal procedures, like passing tests and documentation, then we'll merge it in. Shouldn't be a problem for you since it will only take a moment.


Absolutely agree :) If only he wasn't completely non-technical, managing a team of ~30 devs of varying skill levels and experience - which is the root cause of most of the issues, I assume.


> and a message along the lines of "here's how you can do it, I guess by the end of the day".

— How about you do it, motherfucker?! If it’s that simple, you do it! And when you can’t, I’ll come down there, push your face on the keyboard, and burn your office to the ground, how about that?

— Well, you don’t have to get mean about it.

— Yeah, I do have to get mean about it. Nothing worse than an ignorant, arrogant, know-it-all.

If Harlan Ellison were a programmer today.

https://www.youtube.com/watch?v=S-kiU0-f0cg&t=150s


Hah, that's a good clip :) Those "angry people" are really essential as an outlet for the rest of us.


At a company I used to work at, I saw the CEO do this publicly (on Slack) to the CTO, who was an absolute expert on the topic at hand and had spent 1000s of hours optimizing a specific system. Then the CEO comes in and says "I think this will fix our problems" (link to ChatGPT convo). SOO insulting. That was the day I decided I should start looking for a new job.


You should send him a ChatGPT critique of his management style.

(Or not, unless you enjoy workplace drama.)


It's the modern equivalent of sending a LMGTFY link, except the insult is from them being purely credulous and sincere


My company hired a new CTO, and he asked ChatGPT to write some lengthy documents about "how engineering gets done in our company".

He also writes all his emails with ChatGPT.

I don't bother reading them.

Oddly enough, he recently promoted a guy, who has been fucking around with LLMs for years instead of working, as his right-hand man.


> Oddly enough, he recently promoted a guy, who has been fucking around with LLMs for years instead of working, as his right-hand man.

Why is that odd? From the rest of your description, it seems entirely predictable.


That's directly lethal, in a limited-sympathy-with-engineers-who-don't-immediately-head-for-the-exit sort of fashion. Best of luck.


The most experienced people quit, yes. There are some less experienced people left, but seeing how a noob with less seniority and a large ego is now their boss, I expect they're proofreading their CVs as well.

I think under current management immigrants have no chance of getting promoted.


Especially when you try to correct them and they insist AI is the correct one

Sometimes it's fun reverse-engineering the directions back into the various forum, Stack Overflow, and documentation fragments, and pointing out how the AI assembled similar things into something incorrect.


Win32 API mouse input: trigger a mouse click in Windows when the crosshair is on an enemy's head (more likely, when the crosshair is within the enemy's head position).


That would require a lot more than editing memory offsets though.


No, they just read the memory. The whole point of an external cheat is to only read memory. They can still use Win32 to send inputs.
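
For the input half, a minimal sketch of sending a click through the Win32 SendInput API (the decision to fire would come from the memory-reading side):

    #include <windows.h>

    // Send one left mouse click (press + release) through the Win32 API.
    void sendLeftClick() {
        INPUT in[2] = {};
        in[0].type = INPUT_MOUSE;
        in[0].mi.dwFlags = MOUSEEVENTF_LEFTDOWN;
        in[1].type = INPUT_MOUSE;
        in[1].mi.dwFlags = MOUSEEVENTF_LEFTUP;
        SendInput(2, in, sizeof(INPUT));
    }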


How do you determine whether the mouse is over an enemy head? Is there some variable the engine maintains for that?


For Counter-Strike 1.6, Source, and GO, your crosshair would change to indicate that you are aiming at a player. Not sure about CS2, but I wouldn't rule it out either.

It's a bit slow, but you could grab the player ID, check whether that player is on your team or not, and then fire, either by sending a mouse input or, if I remember correctly, by writing to a specific address.

However, with enough knowledge (which is mostly documented online) you could actually pull out the hitbox, skeleton, and animation data and just run the line-box intersection step yourself. It's easier to do internally by hooking in-game functions, though.
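
A minimal sketch of the external "grab the player ID" step, assuming a hypothetical memory address for the crosshair-ID field (real offsets change with every game update):

    #include <windows.h>
    #include <cstdint>
    #include <cstdio>

    int main() {
        DWORD pid = 1234;  // the game's process id (found e.g. via a Toolhelp32 snapshot)
        // Hypothetical address of the "entity id under the crosshair" field.
        const uintptr_t CROSSHAIR_ID_ADDR = 0x12345678;

        HANDLE game = OpenProcess(PROCESS_VM_READ, FALSE, pid);
        if (!game) return 1;

        int32_t crosshairId = 0;
        // Only reads memory -- the external-cheat constraint discussed above.
        ReadProcessMemory(game, (LPCVOID)CROSSHAIR_ID_ADDR,
                          &crosshairId, sizeof(crosshairId), nullptr);
        printf("entity under crosshair: %d\n", crosshairId);
        CloseHandle(game);
        return 0;
    }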


There is a list of entities in memory, and models have "bones" for animation purposes. Knowing the address of an entity in memory, you can find out whether it's an enemy player (compare team IDs) and where the head bone origin is, and you can also read the view angles of your own player to see where you're looking. The tricky part would be doing a ray cast to check that you can actually hit the enemy and not shoot a wall; internal cheats can just call built-in game functions, externals can't.


No experience, but I think the game seems to track it? The default crosshair seems to change depending on what you're aiming at.


Read the player position and camera angles from memory, read the enemies' positions from memory, and use basic maths to detect whether the camera is pointing at an enemy.

Or the game engine could track internally what the player is looking at (GTA does this).
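
The "basic maths" is just angle deltas; a sketch, with the axis convention and signs as assumptions (they vary by engine):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Yaw/pitch (degrees) from the camera position to a target point.
    void anglesTo(Vec3 eye, Vec3 t, float& yaw, float& pitch) {
        float dx = t.x - eye.x, dy = t.y - eye.y, dz = t.z - eye.z;
        const float RAD2DEG = 180.0f / 3.14159265f;
        yaw   = atan2f(dy, dx) * RAD2DEG;
        pitch = -atan2f(dz, sqrtf(dx * dx + dy * dy)) * RAD2DEG;
    }

    // "Pointing at an enemy" = both angle deltas are under a threshold.
    bool pointingAt(float viewYaw, float viewPitch,
                    float yaw, float pitch, float thresholdDeg) {
        float dYaw   = fabsf(remainderf(viewYaw - yaw, 360.0f));
        float dPitch = fabsf(viewPitch - pitch);
        return dYaw < thresholdDeg && dPitch < thresholdDeg;
    }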


In Source Engine games, your inputs are stored in a struct ("usercmd", if I remember correctly) before being sent to the server in a client tick. You can modify that struct: a mouse click there is a bit flip on one of the fields, and rotation and movements are float fields. Modifying that struct makes your client send the "inputs" without needing to actually "call" anything.
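
Roughly what that looks like, as a simplified sketch of the Source SDK's CUserCmd (the real struct has more fields, and the exact layout varies by game):

    struct QAngle { float pitch, yaw, roll; };

    // Simplified CUserCmd layout from the Source SDK.
    struct UserCmd {
        int    command_number;
        int    tick_count;
        QAngle viewangles;   // rotation: the float fields mentioned above
        float  forwardmove;  // movement: also floats
        float  sidemove;
        float  upmove;
        int    buttons;      // bitfield; IN_ATTACK is bit 0
        // ... more fields in the real struct
    };

    const int IN_ATTACK = 1 << 0;

    // The "mouse click is a bit flip": set the attack bit for this tick.
    void pressAttack(UserCmd& cmd) { cmd.buttons |= IN_ATTACK; }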


> literally name themselves as "Postgresql Data Warehouse" but correct me if I'm wrong

That's not their primary product. Crunchy Postgres is their primary offering and they recently announced Crunchy Data Warehouse.


I thought Crunchy Data Warehouse was their main product, looking at most of their marketing posts. What's the advantage of using their managed PostgreSQL offering on the cloud, compared to native offerings such as AWS RDS and GCP Cloud SQL?


1) Built using an open source Kubernetes operator, as I understand it.

2) Crunchy provides true superuser access and access to physical backups – that's huge.


Why is that huge, out of interest?


Business continuity. If you don't have access to your backups, there's nothing you can do to work around a vendor issue.


Sounds like Stackgres?


I consulted on a project that was using MongoDB even though it was obvious from the concept that an RDBMS would be better. However, I went in with an open mind and gave MongoDB a red hot crack. It straight up ignored indexes with no explainable reason why. We had a support contract, and they just gave us the runaround.


Yeah, I've had a similar experience, but never had indexes get ignored! 100% not surprised though.


When you get a chance, can you take a look at my reply here: https://news.ycombinator.com/item?id=43990502

When I first stepped into a DBA role with CockroachDB, I was confused why indexes we obviously needed showed up as unused. It wasn't until I ran EXPLAIN on the queries that I learned the planner was doing zig-zag joins instead.


What are your thoughts on Fujitsu's VCI? I typically work on ERPs, but I'm always advocating to offload the right queries to columnar DBs (not for DB performance but for end-user experience).


It is. But wait... it doesn't join the data at the application level of your application. You have to deploy their proxy service, which joins the data at the application level.


It's pretty obvious when somebody has only heard of Prisma, but never used it.

- Using `JOIN`s (with correlated subqueries and JSON) has been around for a while now via a `relationLoadStrategy` setting.

- Prisma has a Rust service that does query execution & result aggregation, but this is automatically managed behind the scenes. All you do is run `npx prisma generate` and then run your application.

- They are in the process of removing the Rust layer.

The JOIN setting and the removal of the middleware service are going to be defaults soon; they're just in preview.


They've been saying that for 3 years. We actually had a discount for being an early adopter. But hey, it's obvious I've never used it and only heard of it.


The JOIN mode has been in preview for over a year and is slated for GA release within a few months, which has been on their roadmap.

The removal of the Rust service is available in preview for Postgres as of 6.7.[1]

Rewriting significant parts of a complex codebase used by millions is hard, and pushing it to defaults requires prolonged testing periods when the worst case is "major data corruption".

[1]: https://www.prisma.io/blog/try-the-new-rust-free-version-of-...


They've had flags and workarounds for ages. Not sure what point you're trying to make? But like you said, I've never used it, only heard of it, lol.


It is hard.

Harder than just doing joins.


Honestly, everything you say makes me want to stay far from Prisma _more_.

All this complexity, additional abstractions and indirections, with all the bugs, footguns, and gotchas that come with it... when I could just type "JOIN" instead.


Okay? It's one setting that will be the default in 2 months. And you could always write type-safe SQL manually instead.

I greatly envy y'all having projects where the biggest complexity is... a single setting, set once, that's clearly documented. We live in very different worlds, apparently.


I'm curious about Motion's experience with "Unused Indices". They suggest Cockroach's dashboard listed used indexes in the "Unused Indices" list.

I think the indexes they suspect were used really were unused; Motion didn't realize CockroachDB was doing zig-zag joins on other indexes to accomplish the same thing, leaving the indexes that would obviously have been used genuinely unused.

It's a great feature, but CRDB's optimizer would prefer a zig-zag join over a covering index; getting around this required writing indexes in a way that persuades the optimizer not to plan a zig-zag join.


I worked for a startup that did all of these things on CockroachDB. We could have used a single m5.xlarge PostgreSQL instance (1000 basic QPS on 150GB of data) if we had optimized our queries and gone back to basics; instead we had 1TB of RAM dedicated to Cockroach.

I added about 4 indexes and halved the resources overnight. But Prisma, SELECT *, GraphQL, and whatever other resume-building shit people implemented were the bane of my existence; typically engineers did this believing it would be faster. I remember one engineer got a standing ovation in Slack for his refactor, which was supposedly going to save us $$$$$, except our DB CPU went up 30% because he decided to validate every company every second in every session. In his defense, he added 1 line of code that caused it, and it was obscured through Prisma and GraphQL into an inefficient query.

FWIW: I love CockroachDB, but the price is directly linked to how much your software engineers shit on the database.


A guy on Reddit was working on one named PrismWM, but he went AWOL. There was a Mac OS 9 look-and-feel for JDK 1.1 that could be updated to a modern version of Java as well.

