Hacker News | chrsw's comments

Something like this maybe:

https://whitebox.systems/

Doesn't seem to meet all your desired features though.


Yes, that’s a good example — thanks for the link. Tools like this seem very strong at visualizing and exploring state, but they still tend to stay fairly close to the traditional “pause and inspect” model. What I keep struggling with is understanding how a particular state came to be — especially with concurrency or events that happened much earlier. That gap between state visualization and causality feels hard to bridge, and I’m not sure what the right abstraction should be yet.

Sounds like you want a time travel debugger, e.g. rr.

Sophisticated live debuggers are great when you can use them but you have to be able to reproduce the bug under the debugger. Particularly in distributed systems, the hardest bugs aren't reproducible at all and there are multiple levels of difficulty below that before you get to ones that can be reliably reproduced under a live debugger, which are usually relatively easy. Not being able to use your most powerful tools on your hardest problems rather reduces their value. (Time travel debuggers do record/replay, which expands the set of problems you can use them on, but you still need to get the behaviour to happen while it's being recorded.)


That’s a very fair point. The hardest bugs I’ve dealt with were almost always the least reproducible ones, which makes even very powerful debuggers hard to apply in practice. It makes me wonder whether the real challenge is not just having time-travel, but deciding when and how much history to capture before you even know something went wrong.

Sounds like you want time travel debugging [1]. Then you can just run forwards and backwards as needed and look at the full evolution of state and causality. You usually want to use an integrated history visualization tool to make the most of that, since the amount of state you are looking at is truly immense; identifying the single wrong store 17 billion instructions ago can be a pain without it.

[1] https://en.wikipedia.org/wiki/Time_travel_debugging


This doesn't sound like a particularly difficult problem for some scenarios.

It's definitely convoluted when it comes to memory obtained from the stack, but for heap allocations a debugger could trace the returns of the allocator APIs, use that as the beginning of some data's lifetime, then trace any access to that address and gather high-level info about the reader or writer at each access.

Global variables should also be fairly trivial, as you'll just need to track memory accesses to their addresses.

(Of course, further work is required to actually apply this.)

For variables on the stack, or in registers, though, you'll possibly need heuristics that account for reuse of memory/variables, and maybe maintain a strong association with the thread this is happening in (for both the thread's allocated stack and the thread context), etc.
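
For the heap case, here's a minimal sketch of that idea as a GDB Python command. It assumes x86-64 (so the returned pointer sits in $rax right after 'finish'), and the command name and workflow are just mine: break on malloc, 'finish' out of it, then ask the script to set a write watchpoint on the block that was just returned, so every later store to it stops the program where the writer can be inspected.

    import gdb

    class WatchAlloc(gdb.Command):
        """Watch writes to the heap block malloc just returned.

        Usage:
            (gdb) source watch_alloc.py
            (gdb) break malloc
            (gdb) run
            (gdb) finish        # let malloc return
            (gdb) watch-alloc   # later writes to that block now stop the program
        """

        def __init__(self):
            super().__init__("watch-alloc", gdb.COMMAND_USER)

        def invoke(self, arg, from_tty):
            # Assumes x86-64: the returned pointer is in $rax right after 'finish'.
            addr = int(gdb.parse_and_eval("$rax"))
            # Watch only the first 8 bytes; a real tool would cover the whole
            # allocation and drop the watchpoint when the block is free()d.
            gdb.Breakpoint("*(long *)0x%x" % addr,
                           type=gdb.BP_WATCHPOINT,
                           wp_class=gdb.WP_WRITE)
            print("watching heap block at 0x%x" % addr)

    WatchAlloc()

Time-travel debuggers like rr effectively run this backwards: set the watchpoint at the bad state and reverse-continue to the store that caused it.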


Here's another one

https://scrutinydebugger.com/

It's for embedded systems though, which is where I come from. In embedded we have this concept called instruction trace, where every instruction executed on the target gets sent over to the host. The host can reconstruct part of what's been going on in the target system. But there's usually so much data that I've always assumed a live view is kind of impractical, and I've only used it for offline debugging. But maybe that's not a correct assumption. I would love to see better observability in embedded systems.
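
To make "reconstruct part of what's been going on" concrete, here's a toy sketch of the offline half. It assumes a trace that's already been decoded to one executed PC per line plus a flat symbol table; both file formats here are made up, and real trace formats (ARM ETM, the RISC-V trace spec, etc.) are compressed and need a vendor decoder first. It collapses the instruction stream into a function-level timeline, which is usually what you actually stare at:

    from bisect import bisect_right

    def load_symbols(path):
        """Each line: '<start_hex> <end_hex> <name>', sorted by start address."""
        syms = []
        with open(path) as f:
            for line in f:
                start, end, name = line.split()
                syms.append((int(start, 16), int(end, 16), name))
        return syms

    def function_timeline(trace_path, syms):
        """Collapse a stream of executed PCs into the sequence of functions
        the target passed through."""
        starts = [s for s, _, _ in syms]
        timeline, last = [], None
        with open(trace_path) as f:
            for line in f:
                pc = int(line, 16)
                i = bisect_right(starts, pc) - 1
                if i >= 0 and pc < syms[i][1]:
                    name = syms[i][2]
                    if name != last:        # record only transitions
                        timeline.append(name)
                        last = name
        return timeline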


For context, I’ve been experimenting with a small open-source prototype while thinking about these ideas: https://github.com/manux81/qddd It’s very early and incomplete — mostly a way for me to explore what breaks once you actually try to model time and causality in a debugger.

> What I keep struggling with is understanding how a particular state came to be — especially with concurrency or events that happened much earlier.

Yeah, I've faced this problem. I have no general solution to it, but I wonder if a fuzzer could be crossed with a debugger to get a tool that, given two states of a program, finds inputs that transition the program from state A to state B. Maybe you would need to define state A and/or B with predicates, so they would really be classes of states. Or maybe the tool could fuzz state A to see which parts of it matter for eventually reaching state B.
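
As a very rough sketch of what the first half of that might look like, assuming the target can be run as a subprocess and that "state B" can be recognized with a user-supplied predicate over its output. All the names here are mine, the mutator is a trivial byte flip, and a real tool would use coverage or debugger feedback instead of blind mutation:

    import random
    import subprocess

    def reaches_state_b(output):
        # Hypothetical predicate describing the class of target states ("state B").
        # Here it just looks for a marker in the program's output.
        return b"STATE_B" in output

    def mutate(data):
        # Trivial byte-flip mutation; a real fuzzer would do far more.
        if not data:
            return bytes([random.randrange(256)])
        i = random.randrange(len(data))
        return data[:i] + bytes([data[i] ^ (1 << random.randrange(8))]) + data[i + 1:]

    def search(seed, budget=10000):
        """Start from an input known to reach state A and look for a mutation
        of it that drives the hypothetical ./target program on to state B."""
        corpus = [seed]
        for _ in range(budget):
            candidate = mutate(random.choice(corpus))
            try:
                result = subprocess.run(["./target"], input=candidate,
                                        capture_output=True, timeout=5)
            except subprocess.TimeoutExpired:
                continue
            if reaches_state_b(result.stdout):
                return candidate
            corpus.append(candidate)  # a real tool keeps only "interesting" inputs
        return None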


As an American, I feel like I'm sitting on the sidelines as the world, particularly China, zooms past us in the future of automobile technology, and more broadly, battery tech.

Not just that, but I feel there is backwards movement, or at least attempts at it, from the administration…

"Backwards into the future" was their campaign platform, so not a huge surprise I think. They didn't use exactly those words, but yeah.

Yep. Exactly this. Can't buy BYD cars either. Going to have a hard time selling Donut Lab batteries, attracting new talent, welcoming tourists, and more... it's completely unnecessary self-sabotage by a hateful, felonious, child-diddling slumlord and failed reality star.

It's not bad. Skimming the code, I'd say it's not enterprise quality, but it's definitely better than an amateur throwaway project.


Classic non-enterprise C quality.


Probably not. And there aren't many 32-bit RISC-V cores with an MMU. I guess you could use a simulator if you found one.


I use one written in SpinalHDL. :-)

Next question is how much RAM it needs to boot, and whether it can be used without rio.


This is the key point for me in all this.

I've never worked in web development, where it seems to me the majority of LLM coding assistants are deployed.

I work on safety critical and life sustaining software and hardware. That's the perspective I have on the world. One question that comes up is "why does it take so long to design and build these systems?" For me, the answer is: that's how long it takes humans to reach a sufficient level of understanding of what they're doing. That's when we ship: when we can provide objective evidence that the systems we've built are safe and effective. These systems we build, which are complex, have to interact with the real world, which is messy and far more complicated.

Writing more code means that's more complexity for humans (note the plurality) to understand. Hiring more people means that's more people who need to understand how the systems work. Want to pull in the schedule? That means humans have to understand in less time. Want to use Agile or this coding tool or that editor or this framework? Fine, these tools might make certain tasks a little easier, but none of that is going to remove the requirement that humans need to understand complex systems before they will work in the real world.

So then we come to LLMs. It's another episode of "finally, we can get these pesky engineers and their time wasting out of the loop". Maybe one day. But we are far from that today. What matters today is still how well do human engineers understand what they're doing. Are you using LLMs to help engineers better understand what they are building? Good. If that's the case you'll probably build more robust systems, and you _might_ even ship faster.

Are you trying to use LLMs to fool yourself into thinking this still isn't the game of humans needing to understand what's going on? "Let's offload some of the understanding of how these systems work onto the AI so we can save time and money". Then I think we're in trouble.


> Are you trying to use LLMs to fool yourself into thinking this still isn't the game of humans needing to understand what's going on?

This is a key question. If you look at all the anti-AI stuff around software engineering, the pervading sentiment is “this will never be a senior engineer”. Setting aside the possibility of future models actually bridging this gap (this would be AGI), let’s accept this as true.

You don’t need an LLM to be a senior engineer to be an effective tool, though. If an LLM can turn your design into concrete code more quickly than you could, that gives you more time to reason over the design, the potential side effects, etc. If you use the LLM well, it allows you to give more time to the things the LLM can’t do well.


Fully agree. In my own usage of AI (which I came to a bit late but have tried to fully embrace so I know what it can and can't do) I've noticed a very unusual side effect: I spend way more of my time documenting and reviewing designs than I used to, and that has been a big positive. I've always been very (maybe too) thoughtful about design and architecture, but I usually focused on high-level design and then would get to some coding as a way of evaluating/testing my designs. I could then throw away v0 using lessons learned and start a v1 on a solid track.

Now, however, I find myself able to get a lot further in nailing down the design, to the point that I don't have to build and throw away v0. The prototype is often highly salvageable with the help of the LLM doing the refactoring/iterating that used to make "starting over" a more optimal path. That in turn allows me to maintain the context and velocity of the design much better, since there aren't days, or weeks, or even months between the "lessons learned" that then have to go back and revise the design.

The caveat here, though, is that if I didn't have the decades of experience writing/designing software by hand, I don't think I'd have the skills needed to reap the above benefit.


" They make it easier to explore ideas, to set things up, to translate intent into code across many specialized languages. But the real capability—our ability to respond to change—comes not from how fast we can produce code, but from how deeply we understand the system we are shaping. Tools keep getting smarter. The nature of learning loop stays the same."

https://martinfowler.com/articles/llm-learning-loop.html


Learning happens when your ideas break, when code fails, when unexpected things happen. In order to have that in a coding agent you need to provide a sensitive skin, which is made of tests; they provide pain feedback to the agent. Inside a good test harness the agent can't break things; it moves in a safe space with greater efficiency than before. So it was the environment providing us with understanding all along, and we should make an environment where the AI can understand what the effects of its actions are.


Why can't you use LLMs with formal methods? Mathematicians are using LLMs to develop complex proofs. How is that any different?


Maybe. I think we're really just starting this, and I suspect that trying to fuse neural networks with symbolic logic is a really interesting direction to explore.

That's kind of not what we're talking about. A pretty large fraction of the community thinks programming is stone cold over because we can talk to an LLM and have it spit out some code that eventually compiles.

Personally, I think there will be a huge shift in the way things are done. It just won't look like Claude.


I don't know why you're being downvoted, I think you're right.

I think LLMs need different coding languages, ones that emphasise correctness and formal methods. I think we'll develop specific languages for use with LLMs that work better for this task.

Of course, training an LLM to use it then becomes a chicken/egg problem, but I don't think that's insurmountable.


I don't think "understanding" should be the criteria, you can't commit your eyes in the PR. What you can commit is a test that enforces that understanding programatically. And we can do many many more tests now than before. You just need to ensure testing is deep and well designed.


You can not test that which you do not understand.


Great. Now let's start replacing fast food places with places that still serve you quickly but serve healthy food. Complete meals of whole foods.

One of the problems with the way we live and work is that it's so easy to go for the quick option. If you're working 60+ hours a week or trying to run a busy household, unhealthy food options are really attractive because they're so convenient. People generally know what good food is; it's just that they make the sacrifice because there are other things going on in their lives.

I've said things like this before and people respond like "well, I run my own business and raise a family and volunteer at my church and so on and on... AND cook perfectly healthy meals 3 times a day!" That's awesome for you, you're amazing, but let's get real.


There's a chain here in Phoenix called "Salad and Go" that's pretty awesome... I'd love to see a fast food place that specializes in breakfast items that include keto bread options and low carb bowls all day.

I'll also get plain beef patties or grilled chicken breasts from misc fast food places in a pinch.


Wasn't Panera supposed to be this, before the hyper-caffeinated lemonade scandal at least?


I think this was (at least in theory) the goal of the “slop” bowls we’ve seen pop up in the last 15 or so years: Chipotle, Cava, Sweetgreen, etc.


They also got rid of the SD card slot, which I actually still use.


I agree. If this were real, it would be a much bigger deal. They're preying on people's hopes.


I was thinking “this is vapor or extremely expensive”. But maybe it’s somehow both.


AI generated videos are being used in more and more ads. To cut costs I’m sure. The result is that ads that were just annoying are now terrible and jarring.

When I hear talk about AI risks I mostly hear things like runaway super intelligence doing whatever it wants and leaving humanity in the lurch. But what about more realistic concerns, like accelerating the race to the bottom by cheaply and poorly ripping off other people’s work and forcing everyone else to do the same just to keep up?


A local window blinds company ran an ad before a movie. Clearly AI. Besides the obvious fake humans, how can I trust that the AI blinds in your ad match your real product?


It's convenient! I know that they are cutting corners on the product, because they are cutting corners with their ads.


Yeah, I am not an absolute GenAI hater. I’ve used it quite a bit myself, and I think there are ways to be creative about it. However, 95% of what we see online, especially in the ads space, is bottom-of-the-barrel quality. It’s obviously basic AI-generated images/video most of the time, and for me it’s an instant “I’m not going to bother with that product” marker.

One of the worst ones was an allegedly “illustrated history” book. All the ads were of AI-generated history book pages with tons of historical inconsistencies. I looked up the real book and it actually looked decent: hand drawn, well formatted, etc. Why not use pictures of the actual book instead of whatever mess I was seeing?

However, I also keep getting ads for this other historical book that drives me nuts: https://www.kickstarter.com/projects/vilno/the-codex-book

I might be wrong, but to me most of the art looks AI-generated, and the few pages they show just don’t make any historical sense. Yet they sell it as “hand drawn”. From the animations it seems like some stuff was AI-generated and then redrawn by hand? But the drawings themselves are plain weird: the nonsensical castle, the archers and the scoped crossbow on a page about medieval crossbows, the silly submarine.


Last night, I deliberately paid attention to an ad because it had the disclaimer "images generated with AI". I was curious to see what the advertisers did with the technology. The ad featured a bunch of animals of different species driving cars around a city. It clearly had that AI uncanny valley that high-budget 3D-modeled CGI animals don't have. I thought about what it would have cost for an effects team to 3D model and render all those animals, and also wondered how many rendering attempts with the AI it took to get good footage for the ad.

The ad felt sickening to look at, like a TV with smooth motion turned on. It's like my brain is rejecting the pictures it sees because they don't match the patterns of motion it has been trained to recognize. Or was it just regular motion sickness?


I don't know if we're in the minority or not, insofar as we are able to look at these things and discern that they are fake. One problem going forward is that the proportion of us who are able to do so will grow smaller as individuals become habituated to the AI output, especially children.


Right there, it's a distraction from the actual problem. They promote safety research as if they are fighting Skynet and it works.


This is a weird complaint because ads have always been a curse on the world; they've always been "slop". The worst part about AI ads is that they're ads.


War predates human history, and the worst thing about chemical warfare is that it’s warfare; therefore there is no need to ban chemical weapons.


I don't get this "race to the bottom", "to cut costs" stuff... like, isn't that a good thing? Your things will get cheaper if the cost required to market them goes down.


I don't think AI reduces the cost of marketing in any significant way. Everybody has access to these tools, so at best it just allows marketing companies to employ fewer ad-creation teams to pump out the same number of advertisements.

Pushed to the extreme, where a single person could create an oscar-worthy advertisement in seconds, it doesn't suddenly mean that the superbowl will charge pennies for an ad slot.

I suspect the end state will be just-in-time rich ad creation (not bidding) tailored specifically to you.


> Pushed to the extreme, where a single person could create an oscar-worthy advertisement in seconds, it doesn't suddenly mean that the superbowl will charge pennies for an ad slot.

For a superbowl ad, there's the high cost to air the spot, but most of them also have a high cost to produce the spot (maybe not for the one from last year that was just a DVD-logo-esque bouncing QR code for a crypto scam); if your marketing budget was ~$5M, maybe you spent $1M on production and $4M on airtime. If AI gives you a good enough result for approximately no money, maybe you spend all that budget on airtime, or maybe you cut the budget and still spend $4.5M on airtime. Of course, if everybody is spending more on airtime, you might not get more airtime, but you could still reduce/eliminate the production part of the budget.


>your things will get cheaper if the cost required to market them reduces.

One could imagine that once every company in a market uses AI videos to reduce said costs they will then have to spend even more to stand out from the other marketers, leaving us all back where we started, but with a lot of crappy AI videos to wade through.


They also get crappier though. I am generally okay with a lot of the tradeoffs to reduce the cost of construction and mass production. We definitely have more crappy stuff than we need—I'd prefer if we had a little less, higher quality stuff, but the balance is not too far out of whack.

With media though, I feel it's a lot worse. It's already been trending that way for text with blogspam already diluting the value of the web even before AI. But with AI this is accelerating to video and audio as well. Not only does the AI slop drown out the best of human creativity, it also raises the floor on superficial production value so that if you don't use AI you fall behind on the initial attention-grabbing first impression. I acknowledge a big part of this is due to where we are in the hype cycle, and once we absorb the capabilities of the tools, we'll figure out how to use them more tastefully, and human creativity will shine through again. But no I don't think always making everything easier and more efficient is necessarily always a good thing a priori. Friction and effort sometimes leads to positive outcomes.


Yeah I don't care how cheap/expensive Coca-Cola is.

I care how expensive RAM and GPUs are, though, and this ain't helping.


Because the race to the bottom does not and will not benefit us. They will cut costs, but whatever it is will still cost you the same or more.


Wake me up when my things get cheaper from cutting costs in the process.


Maybe in the past. In today's world, when something becomes cheaper, the extra revenue goes straight to the executives. The consumer doesn't see it.


> your things will get cheaper if the cost required to market them reduces.

Prices won't go down. Profits go up. The winners are the shareholders.


We once built pyramids, massive castles, temples and churches that took hundreds of years to complete. We don't build those things any more. Same happened to music and art. There's this eternal sloppification of everything, although at the same time things get on average better and cheaper for more people to enjoy. Quantity beats quality, i.e. capitalism optimizes for scalability.

The end game is quite sad, which will be some kind of neural device which just directly manipulates brain signals for happiness, and everything physical will be just gray goo. It's more scalable to make you think the world is beautiful, than to actually make it beautiful. We are almost there already, because we experience the world through a screen, which shows us happy things, while we care less and less about the real world around us.

