
I think there is a bit of cognitive dissonance that comes with trying to build stuff with LLM technology.

LLMs are inherently non-deterministic. In my anecdotal experience, most software boils down to an attempt to codify some sort of decision tree into automation that can produce a reliable result. So the “reliable” part isn’t there yet (and may never be?).

Then you have the problem of motivation. Where is the motivation to get better at what you do when your manager just wants you to babysit copilot and skim over diffs as quickly as possible?

Not a great epoch to be a tech worker right now, imo.


> LLMs are inherently non-deterministic.

I'm not an ML guy but I was curious about this recently. There is some parameter that can be tuned to produce determinism but currently it also produces worse results. Big [citation needed], but worth a google if it's of interest. Otherwise in agreement with your post.


Temperature? Definitely tweaks the results…but I don’t know if “deterministic” is a term you can use in any way, shape, or form, in the context of LLMs.

https://www.ibm.com/think/topics/llm-temperature


Unless I’ve misunderstood something, setting a constant seed and a temperature of zero would give deterministic results.

Not good results necessarily, but consistent.
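
As a rough sketch of the idea using Hugging Face transformers (the model and prompt here are arbitrary): greedy decoding plus pinned seeds is repeatable on the same hardware and software stack.

  from transformers import pipeline, set_seed
  set_seed(42)  # pin the Python/NumPy/Torch RNGs
  generate = pipeline("text-generation", model="gpt2")
  # do_sample=False picks the single most likely token at each step
  # (effectively temperature zero), so repeated runs match exactly
  out = generate("Once upon a time", do_sample=False, max_new_tokens=20)
  print(out[0]["generated_text"])

Hosted APIs are murkier: even the ones that accept a seed parameter tend to promise only best-effort reproducibility, since batching and hardware can differ between requests.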


The apologists be damned. This article nails it. A grand reduction. Not a bicycle; a set of training wheels.

Where is the dignity in all of this?


> ...far too many unknown unknowns often paired with expectations of prompt (and cheap) solutions to complicated issues.

That describes pretty much all of my "full-stack" experience.

What sort of job/background do you have where you are writing low level drivers? I'd love to get into that side of things but I don't know where to start.


Telecom and automotive.

I guess hobby robotics, for example, could give you an entry into it if you choose to write the hardware-interfacing parts yourself.


> Many people who first entered senior roles in 2010-2020 are finding current roles a lot less fun.

This resonates with me.

I find that the current crop of new tech (AI) produces a lot of cognitive dissonance for me in my day-to-day work.

Most initiatives/projects/whatever around AI seem to be of the "digging your own grave" variety - making tools to replace software engineers.

Definitely not fun.


I reach for the ~/bin/thing approach when I want the utility to be usable from vim.

For example, if I define a shell alias called thing, vim won't know about it.

But as an executable on my $PATH, I can do any of the following:

  :%!thing
  :'<,'>!thing
  :.!thing
A good example: on my work machine I can't install jq, because reasons. However, I do have Python, so I have a simple executable called fmtjson that looks like this:

  #!/bin/sh
  # pretty-print JSON from stdin (or from any file arguments)
  exec python -m json.tool --indent 2 "$@"
When I want to format some json from vim, I just run:

  :%!fmtjson
Easy to remember, easy to type. Plus I can use it in pipes on the CLI.
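
For instance (file name and URL made up):

  fmtjson < response.json
  curl -s https://api.example.com/things | fmtjson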


Consider exposing commands that the user can then assign to their own preferred keybindings instead of choosing for them.
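
The usual vim idiom is a <Plug> mapping plus a guarded default, roughly like this (plugin and function names made up):

  " expose the behavior as a <Plug> mapping
  nnoremap <silent> <Plug>(myplugin-toggle) :call myplugin#toggle()<CR>
  " bind a default key only if the user hasn't mapped it themselves
  if !hasmapto('<Plug>(myplugin-toggle)')
    nmap <Leader>t <Plug>(myplugin-toggle)
  endif

Then users can override it with a single nmap line in their vimrc.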


Thanks for the suggestion! The plugin currently supports toggling between <Leader>/<C-*> via USE_LEADER config flag. I will add a field in the config file for more customizability (e.g., "KEYBINDINGS": {"mapl":"<C-a>", "mapj":"<Leader>o", ...} in cfg.json).


For reference, an example from a tpope plugin: https://github.com/tpope/vim-fugitive/blob/b068eaf1e6cbe35d1...


Whoa, thanks! Will definitely look into that


After reading the comments, the themes I'm seeing are:

- AI will provide a big mess for wizards to clean up

- AI will replace juniors and then seniors within a short timeframe

- AI will soon plateau and the bubble will burst

- "Pshaw I'm not paid to code; I'm a problem solver"

- AI is useless in the face of true coding mastery

It is interesting to me that this forum of expert technical people is so divided on this (broad) subject.


To be honest, HN is like this with any topic. In the domain of stuff I know well, I've seen some of the dumbest takes imaginable on HN, as well as some really well-reasoned and articulated stuff. The limiting factor tends to be the number of people who know enough about the topic to opine.

AI happens to be a topic that everyone has an opinion on.


The biggest surprise to me (generally across HN) is that people expect LLMs to develop on a really slow timeframe.

In the last two years LLM capabilities have gone from "produces a plausible sentence" to "can generate a functioning web app". Sure it's not as masterful as one produced by a team of senior engineers, but a year ago it was impossible.

But everyone seems to evaluate LLMs like they're fixed at today's capabilities. I keep seeing "10-20 year" estimates for when "LLMs are smart enough to write code". It's a very head-in-the-sand attitude given the last two years' trajectory.


Probably because we see stuff like this every decade. Ten years ago no one was ever going to drive again because self-driving cars were imminent. Turns out a lot of problems can be partially solved very quickly, but as anyone with experience knows, solving the last 10% takes at least as much time as solving the first 90%.


> Ten years ago no one was ever going to drive again because self-driving cars were imminent

Right... but self-driving cars are here. And if you've taken a Waymo anywhere, it's pretty amazing.

Of course, just because the technology is available doesn't mean distribution is solved. The production of corn has been technically solved for a long time, but that doesn't mean starvation was eliminated.


>And if you've taken Waymo anywhere it's pretty amazing.

Yeah, about that: https://ca.news.yahoo.com/hilarious-video-shows-waymo-self-1...


You can’t extrapolate the future trajectory of progress from the past. It comes in pushes and phases. We had long phases of AI stagnation in the past, we might see them again. The past five years or so might turn out to be a phase transition from pre-LLM to post-LLM, rather than the beginning of endless dramatic improvements.


It would be different, too, if we didn't know that the secret sauce here is massive amounts of data, and that the jump in capability was directly tied to a jump in the amount of data.

Some of the logic here is akin to: I lost 30lbs in 2024, so at this pace I will weigh -120lbs by 2034!


> It comes in pushes and phases. We had long phases of AI stagnation in the past

Isn't that still extrapolating the future from the past? You see a pattern of pushes and phases and are assuming that's what we will see again.


I am not a software engineer, and I made working stock-charting software with React/Python/TypeScript in April 2023 when ChatGPT-4 came out, without really knowing TypeScript at all. Of course, after a while it became impossible to update or add anything, and it basically fell apart because I don't know what I am doing.

That is going to be 2 years ago before you know it. Sonnet is better at using more obscure Python libraries, but beyond that the improvement over ChatGPT-4 is not that much.

I never tried ChatGPT-4 with Julia or R, but the current models are pretty bad with both.

Personally, I think OpenAI made a brilliant move to release 3.5 and then 4 a few months later. It made it feel like AGI was just around the corner at that pace.

Imagine what people would have thought in April 2023 if you told them that in December 2024 there would be a $200 a month model.

I waited forever for Sora and it is complete garbage. OpenAI was crafting this narrative about putting Hollywood out of business when in reality these video models are nearly useless for anything much more than social media posts about how great the models are.

It is all beside the point anyway. The way to future-proof yourself is to be intellectually curious and constantly learning, no matter what field you are in or what you are doing. You will probably have to reinvent your career a few times, whether you want to or not.


"In the last two years LLM capabilities have gone from "produces a plausible sentence" to "can generate a functioning web app". Sure it's not as masterful as one produced by a team of senior engineers, but a year ago it was impossible."

Illegally ingesting the Internet, copyrighted and IP-protected information included, then cleverly spitting it back out in generic-sounding tidbits will do that.


Even o1 just floored me. I can put in heaps of C++ code and some segfault stack traces, and it gives me an actual cause and fix.

I gave it thousands of lines of C++ and it pinpointed the problem.


Many commenters suffer from first-experience bias: they tried ChatGPT, it was "meh", so they see no impact.

I have tried cursor.ai's agent mode, and I see a clear, big impact.


As soon as you replace the subject of LLMs with nebulous "AI", you have ventured into a la-la land where any claim can reasonably be made. That's why we should try to stick to the topic at hand.


That's very interesting! Love finding little nooks and crannies like this


I have an Iris and would highly recommend it. It's similar to a Moonlander (not as many keys, though).

https://keeb.io/collections/iris-split-ergonomic-keyboard/pr...


You pretty much summarized what I've been thinking about for the past couple years.

The question is, what can we do about it? That's where I really start to feel powerless; the challenge seems insurmountable.


I am thinking of just working in nonprofits and FOSS after my PhD.


I don't think nonprofits are the right path if you're trying to avoid burnout. You might agree with a big-picture mission, but it's only a matter of time before the low pay and huge workload combine with a sense of futility to create burnout.


I am not concerned with achieving anything, so I think I'll be fine.

To clarify, I am taking a stoic stance on this, not a detached or unconcerned one.

