
Do you have a source for that?

This is very confusingly written.

From the post I expected that the tasks were about analysing traces, but all the tasks in the repository are about adding instrumentation to code!

Some of the instructions don't give any guidance on how to do it, while others specify which libraries to use.

"Use standard OTEL patterns" ... that's about as useful as saying "go write some code". There are a lot of ways to do instrumentation....

I'd be very curious HOW exactly the models fail.

Are the test sets just incredibly specific about what output they expect, so you get a lot of failures from tiny, subtle mismatches? Or do they get the instrumentation categorically wrong?

Also important: do the models have access to a web search tool to read the library docs? OTel libraries are often complicated to use... without reading the latest docs or source code, this would be quite tricky.

Some models have gotten better at adding dependencies, installing them and then reading the code from the respective directory where dependencies get stored, but many don't do well with this.

All in all, I'm quite skeptical that this is useful as a benchmark as it stands.

I'd be much more interested in tasks like:

Here are trace/log outputs, here is the source code; find and fix the bug.


+1. I'm not sure whether tasks like "Add OTel instrumentation" belong more in a coding bench than in an SRE bench. I came here expecting to see things like: here's how models perform at finding the root cause in 50 complicated microservice failure scenarios.

For AI-SRE tasks like finding the root cause of bugs and errors, I believe the key is to give the agent tools to query metrics, logs, and traces and understand the problem. I'm working on a similar OSS framework and benchmark (work in progress, using metrics and logs; demo: https://youtube.com/playlist?list=PLKWJ03cHcPr3Od1rwL7ErHW1p...). The context layer is semantics and Text2SQL for querying the right metrics and logs, and the benchmark is a set of skills that Claude Code or other agents can run with these tools to find the root cause of errors:

Codd Semantic/Text2SQL engine: https://github.com/sathish316/codd_query_engine

PreCogs skills and simulated scenarios: https://github.com/sathish316/precogs_sre_oncall_skills


I'm surprised by how many people think that an SRE's job is to debug.

An SRE's job is to make the software reliable, for instance by adding telemetry, and by understanding and improving the failure modes, the behavior under load, etc.

So a better SRE test would not be "read the logs and fix the bug", but rather "read the code and identify potential issues".


I looked into some of the tests, and the tasks are definitely AI-written. I think a separate AI call then generated the tests.

>Some of the instructions don't give any guidance on how to do it, while others specify which libraries to use.

From supporting a piece of cloud software with a lot of microservices, I think this is a more general problem for humans too. The app I work with mandated some logging requirements, like which library to use. But that was it; different parts built by different teams ended up with all kinds of different behaviors.

As for the AI side, this is where I see our limited context sizes causing issues when developing architecture across multiple products.


This is definitely not a context problem. Very simple things, like checking for running processes and killing the correct one, are something that models like Opus 4.5 can't do consistently correctly... instead of recognizing that it needs to systematize that sort of thing -- one and done. Like, probably 50% of the time it kills the wrong thing. About 25% of the time after that, it recognizes that it didn't kill the correct thing, rewrites the ps or lsof invocation from scratch, and has the problem again. Then, if I kill the process myself out of frustration, it checks whether the process is running, sees that it's not, gets confused, and sets its new task to rewriting the ps or lsof... again. It does the same thing with tests, where it decides to just, without any doubt in its rock brain, delete the test and replace it with a print statement.

> limited context sizes

Context size isn't the issue. You couldn't effectively leverage an infinite context even if you had one. The general solution is to recursively decompose the problem into smaller ones and solve them independently of each other, returning the results back up the stack. Recursion is the key here. A bunch of parallel agents on separate call stacks that don't block on their logical callees is a slop factory.
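A rough sketch of the shape I mean, with made-up names and a hard-coded split standing in for the model proposing its own subtasks:

  struct Task { description: String }

  // Stand-in for whatever LLM client is in use.
  fn call_model(prompt: &str) -> String { format!("answer to: {prompt}") }

  // Each task is either solved directly or split into subtasks; only a
  // compact summary travels back up the stack, so no single call ever
  // needs the whole context.
  fn solve(task: &Task, depth: usize) -> String {
      if depth == 0 || is_simple(task) {
          return call_model(&format!("Solve directly: {}", task.description));
      }
      let sub_results: Vec<String> = decompose(task)
          .iter()
          .map(|t| solve(t, depth - 1)) // recurse; each call gets a fresh, small context
          .collect();
      call_model(&format!(
          "Combine these sub-results for '{}': {:?}",
          task.description, sub_results
      ))
  }

  fn is_simple(task: &Task) -> bool { task.description.len() < 80 }

  fn decompose(task: &Task) -> Vec<Task> {
      // In practice the model proposes the split itself; hard-coded here.
      vec![
          Task { description: format!("first half of: {}", task.description) },
          Task { description: format!("second half of: {}", task.description) },
      ]
  }

  fn main() {
      let root = Task {
          description: "triage the incident, identify the failing service, check recent deploys, and draft a summary".to_string(),
      };
      println!("{}", solve(&root, 2));
  }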


> "Use standard OTEL patterns" ... that's about as useful as saying "go write some code".

People recommend putting things like "use best practices" in your prompts all the time, and chide people who don't.


Are these the same people who say it doesn't work well? I've been experimenting with writing what I actually mean by that (with the help of an LLM, funny enough), and it seems to be giving me much better code than the typical AI soup. e.g.

  - functional core, imperative shell. prefer pure helpers.
  - avoid methods when a standalone function suffices
  - use typed errors. avoid stringly errors.
  - when writing functions, create a "spine" for orchestration
  - spine rules: one dominant narrative, one concept per line, named values.
  - orchestration states what happens and in what order
  - implementation handles branching, retries, parsing, loops, concurrency, etc.
  - apply recursively: each function stays at one abstraction level
  - names describe why something exists, not how it is computed
etc.

This is no different from writing a style guide for your team/org. You don't just say "write clean code" and expect that you'll get something you like.
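For instance, the "spine" and typed-error rules above might translate into something like this; a sketch in Rust (the list isn't language-specific, and every name here is invented):

  use std::num::ParseIntError;

  // Typed error instead of a stringly one.
  #[derive(Debug)]
  enum ReportError {
      BadInput(ParseIntError),
      Empty,
  }

  // The "spine": one dominant narrative, one concept per line, named values.
  fn build_report(raw: &str) -> Result<String, ReportError> {
      let readings = parse_readings(raw)?;
      let average = average_of(&readings)?;
      Ok(render(average))
  }

  // Pure helpers below the spine handle the parsing, branching, and formatting.
  fn parse_readings(raw: &str) -> Result<Vec<i64>, ReportError> {
      raw.split_whitespace()
          .map(|s| s.parse::<i64>().map_err(ReportError::BadInput))
          .collect()
  }

  fn average_of(readings: &[i64]) -> Result<f64, ReportError> {
      if readings.is_empty() {
          return Err(ReportError::Empty);
      }
      Ok(readings.iter().sum::<i64>() as f64 / readings.len() as f64)
  }

  fn render(average: f64) -> String {
      format!("average reading: {average:.2}")
  }

  fn main() {
      println!("{:?}", build_report("10 20 30"));
  }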


To play devil's advocate: why do we have to lay out a simple task in PAINSTAKING DETAIL for an AI model which is "PHD LEVEL" and going to take our jobs in 6-12 months?

Why am I still holding its hand like it has the intellect and experience of a new-hire intern that's coded one project in college?

I would never expect to have to lay out every detail about "how to write code" for someone I hired to code on my team at the SWE II level and above (i.e., sub-senior but beyond junior).

In fact, oftentimes backlog items are "fix bug in X where Y is happening" or "add instrumentation to X so that we can see why it's crashing at runtime".


> PHD LEVEL

It is PhD level. Most PhD students write awful code that's worse than AI.


I find that generally it does all right at picking up the style of existing code on its own, so this matters more if it's writing something completely from scratch.

I think also "how to write code" is a matter of taste. e.g. in many ways I think I and a Laravel or Rails developer would each think that the other person's code is bad. e.g. as a small-ish thing, I think test-driven development sounds like a massive waste of time, but type-driven development is a huge productivity multiplier and makes the code a lot clearer. I'm sure that I have massive disagreements with e.g. the Go maintainers about what is straightforward.


Because the models aren't PhD level and aren't going to take our jobs in 6-12 months.

That's hype. If you want to use these things effectively you need to ignore the hype and focus on what they can actually do.


Don't worry about playing devil's advocate. If <100 words feels like a gargantuan amount of documentation effort ("PAINSTAKING DETAIL"), well, there are certain stereotypes about developers (not) writing comments or documentation that come to mind. Whoever coined the term "prompt engineering" may have the last laugh (before the robots take over) after all.

I hate that it's true, but things like this make outputs night-and-day different for me. This is the difference between, e.g., a model writing appropriate test harnesses or pushing back on requirements, vs. writing the most absolutely horrible code and test/dependency injection I've ever seen in pursuit of the listed goals.

Similar to adjacent commenters, I've tried to be better at enumerating what I consider to be best practice, but I couldn't argue in good faith that instructions like these produce no noticeable improvement.

(As with all things AI, it could all be perception on my end, so YMMV; I wish there were a better way to concretely evaluate the effects of different rule sets / instructions / ... on outcomes.)


Like with robotaxis: OK, the thing is not perfect, but how does it compare to a human? I'm interviewing Ops/SRE candidates at the moment, and I'm not so happy with what I see...

If you're interviewing Ops people, don't expect them to know anything about OTEL. Ops is about the platforms, systems, and operations surrounding and supporting the application.

Integrating OTEL into an application stack requires explicit knowledge of the code, which means the developers.


According to USTR data the EU had a 200bn goods surplus, but a 100bn services deficit in 2024.

So a net 100bn deficit out of roughly 800bn in total US imports from the EU.

The deficit is there, but it's not nearly as lopsided as some reporting would have you believe.


Tell that to all the companies that built their entire tech stacks on US cloud providers...

Massive endeavor for a lot of setups.


While it is a “massive endeavor”, it is not impossible; it essentially amounts to writing portable code. A computer is a computer, and most of the tech stack at US cloud providers is based on open source projects.

Not depending on Chinese manufacturing is borderline impossible, even if you are starting from scratch. Not only will it be way more expensive, with potentially longer delays and smaller capacity, but just finding a company that can and wants to do the job can be a nightmare. From what I have seen, many local manufacturers in the US and Europe are really there to fulfill government contracts that require local production.

Most hardware Kickstarter-like projects rely on Chinese manufacturing as if it were obvious. It is not "find a manufacturer", it is "go to China". Projects that instead rely on local (US/Europe) manufacturing in order to make a political statement have to go through a lot of trouble, and the result is often an overpriced product that may still have some parts made in China.


Anyone who thinks migrations at scale are just about “writing portable code” has never done a migration at scale.

For a large corporation, just migrating everything hosted on VMs can take years.

And if you are responsible for an ETL implementation on AWS, with your files stored on S3 (every provider, big and small, has S3-compatible storage) and your data hosted on Aurora Postgres, are you going to spend time creating a complicated ETL process, or are you just going to schedule a cron job to run "SELECT ... INTO OUTFILE S3"?

And “most” of the services on AWS aren't based on open source software, and you still have to provision your resources and architecture using IaC. And no, Terraform doesn't give you “cloud agnosticism” any more than using Python does when you're calling AWS services.


I don't think anyone here is arguing that, just that you can make things less painful with portable code. It still won't be easy, as everybody in this chain agrees. But we shouldn't put off things that need to be done because they're “difficult”.


If it takes a year and a half to migrate from plain old VMs to AWS as the first part of “lift, shift, and modernize”, and you have to do it in “waves”, how much difference is the code going to make?

Are you going to tell your developers to spend weeks writing ETL code that could literally be done in an hour using AWS's SQL extensions?

Are you going to tell them not to use any AWS-native services? What are you going to do about your infrastructure as code? Are you going to tell them to set up a VM to host a simple cron job instead of just using Lambda + EventBridge?

And what business value does this theoretical “cloud agnosticism” bring? It never really exists once you get to scale anyway.

It took Amazon years to move off of Oracle, and much of its infrastructure still doesn't run on AWS; it still uses its older internal infrastructure (CDO? It's been a while, and I was on the AWS side).

I have yet to hear anyone who worries about cloud agnosticism even think about the complexity migrations bring at scale, the risk of regressions, etc.

While I personally stay the hell away from lift-and-shifts and come in at the “modernization” phase, it's because I know the complexity and drudgery of it. I worked at AWS ProServe for 3.5 years, and I now work as a staff consultant at a 3rd-party consulting company.

This isn't me rah-rahing about AWS. I would say the same about GCP, Azure, the choice of database you use, or any other infrastructure decision.


If it only took 18 months for all that, I'd be very impressed. I was thinking at least a year of inevitable meetings and planning, and maybe 3 years of slow execution. And I still might be optimistic there.

>And what business value does this theoretical “cloud agnosticism” bring? It never really exists once you get to scale anyway.

The "business value" here is not being beholden to an increasingly hostile "ally" who owns the land these servers operate on. If you aren't worried about that, then there is no point in doing any of this.

But if things do escalate to war, there's a very obvious attack vector to cripple your company with. Even if you're only 20% into the migration, that's better than 0%.


I don't know how long it took before they brought AWS in and decided to do something, or whether they had failed beforehand, and I don't know how long it was before they brought me in.


Oh, sorry. I wasn't trying to speak to your experiences specifically. It's more about the general talk around the scenario of “America is compromised, we need to decouple starting now”.

I of course don't know the scale of your company or how much they even wanted to migrate. Those are all variables here.


Yup! Still very doable, and it has been done tens if not hundreds of thousands of times before: migrations from e.g. AWS -> Azure/GCP, or, even harder, cloud -> on-prem.

How often has replacing a dependency on Chinese tech manufacturing at scale been done before? About zero times.


The more advanced AI-related workflows are the reason I finally switched away from neovim as my main coding IDE, for now.

The existing AI plugins for neovim aren't great.


With a bit of education one would know that manus is Latin for hand. That's where "manual" comes from.

And their logo is, lo and behold ... a hand!


Since it's an AI company, and not actually doing anything by hand, it wouldn't surprise me if they came up with the name "manus" because it has "anus" in it, and then designed the hand logo due to the Latin meaning of the name. [this is sarcasm, in case that was not clear]


So it is the Ancient Romans who were obsessed with butts. Got it.


There are more English words that end in -ass than Latin words that end in -anus, so who's really obsessed?


Canonically it was winged penises, but yes.


but where has that hand BEEN? And we are back at -anus again.


I've been preaching similar thoughts for the last half year.

Most popular programming languages are optimized for human convenience, not for correctness! Even most of the popular typed languages (Java/Kotlin/Go/...) have a wide surface area for misuse that is not caught at compile time.

Case in point: in my experience, LLMs produce correct code way more regularly for Rust than for JS/TS/Python/... Rust has a very strict type system. Both the standard library and the whole library ecosystem lean towards strict APIs that enforce correctness, prevent invalid operations, and push towards handling or at least propagating errors.
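A trivial example of the kind of strictness I mean, using nothing but the standard library:

  use std::{fs, io};

  // The signature forces a decision about the failure path: using the file
  // contents as a plain String without `?`, `match`, or an explicit
  // `unwrap` simply doesn't compile, so the error can't be silently
  // dropped the way a dynamic language allows.
  fn read_config(path: &str) -> Result<String, io::Error> {
      let contents = fs::read_to_string(path)?;
      Ok(contents)
  }

  fn main() {
      match read_config("app.toml") {
          Ok(cfg) => println!("loaded {} bytes of config", cfg.len()),
          Err(e) => eprintln!("could not read config: {e}"),
      }
  }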

The AIs will often write code that won't compile initially, but after a few iterations with the compiler the result is often correct. Strong typing also makes it much easier to validate the output when reviewing.

With AIs able to do more and more of the implementation, the "feel-good" factor of languages will become much less relevant. Iteration speed is not so important when parallel AI agents do the "grunt work". I'd much rather wait 10 minutes for solid output than 2 minutes for something fragile.

We can finally move the industry away from wild-west languages like Python/JS and towards more rigorous standards.

Rust is probably the sweet spot at the moment, thanks to being semi-popular with a reasonably active ecosystem. Sadly, I don't think the truly right language exists yet.

What we really want is a language with a very strict, comprehensive type system with dependent types, maybe linear types, structured concurrency, and a built-in formal proof system.

Something like Ada/SPARK, but more modern.


What you are saying is: no, it doesn't.

Of course dynamic dispatch can be implemented in almost every language. The Linux kernel does dynamic dispatch in C!

But that's a hack, not a language feature.


I think you might mean “ad hoc polymorphism” rather than “dynamic dispatch”. Gleam, C, Erlang, etc. have the latter, not so much the former.


It's not a “hack”, because many languages DO NOT let you store functions with state. Gleam does; I write PHP, and that does as well.

PHP has interfaces and whatnot, but a lot of the time I do polymorphism by just having a class that has Closure members. When you can arbitrarily pass around functions like that, it's basically equivalent to an interface or abstract class, with a bit more flexibility.
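Sketched in Rust terms (Logger is made up), a struct of closures really is just a one-method-per-field interface where the captured environment plays the role of the object's state:

  // One field per "method"; the closure's captures are the instance state.
  struct Logger {
      log: Box<dyn Fn(&str)>,
  }

  fn main() {
      let prefix = String::from("[app] ");
      let logger = Logger {
          log: Box::new(move |msg| println!("{prefix}{msg}")),
      };
      (logger.log)("started"); // same call shape as a one-method trait object
  }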


Would love a Rust implementation of this.


To be fair, you can enforce this just by filling all the allocated memory with zero, so it's possible to fail at startup.

Or, even simpler, just turn off over-commit.

But if swap comes into the mix, or just if the OS decides it needs the memory later for something critical, you can still get killed.
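For what it's worth, a rough sketch of the fail-at-startup idea in Rust (the 4 KiB page size and the 64 MiB figure are assumptions; writing a non-zero byte per page avoids relying on lazily allocated zero pages). On Linux, "turn off over-commit" is the vm.overcommit_memory=2 sysctl:

  // Pre-touch the working set up front so the process fails (or gets
  // killed) at startup rather than mid-request.
  const PAGE_SIZE: usize = 4096; // assumption; query the OS in real code

  fn prefault(bytes: usize) -> Vec<u8> {
      let mut buf = vec![0u8; bytes];
      // `vec![0u8; n]` can be satisfied lazily from zeroed pages, so dirty
      // each page with a non-zero write to force real backing memory.
      for page in buf.chunks_mut(PAGE_SIZE) {
          page[0] = 1;
      }
      buf
  }

  fn main() {
      let _working_set = prefault(64 * 1024 * 1024); // e.g. 64 MiB
      // ... run the actual workload inside this buffer ...
  }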


I wouldn't be surprised if some OS detected the page of zeros and removed that allocation until you need it. This seems like a common enough case to make it worthwhile when memory is low. I'm not aware of any that do it, but it wouldn't be that hard, so it seems like someone would try it.


There's also KSM, kernel same-page merging.

