
A friend who does game design gave me a good tip -- start with the core game loop first and only focus on that with low poly / representative shapes for game elements that you can refine in the future. Not until the core game loop is fun does it make sense to spend a minute of time on any other aspect of the game.

It definitely does not respond to flashing headlights in that manner. You’re observing its default behavior when at a 4 way stop with other vehicles not moving.


How are you saying that so confidently? Waymos respond to traffic cops directing traffic manually


You're right, I don't have inside information, but we've been interacting with them on the street for years in SF. Waymos don't wait for subjective human guidance to give them clearance to pass, as evidenced by tons of videos and IRL experience. As soon as one comes to a required stop, if no vehicle or other object's travel path intersects its own, it will go. Flashing lights will not change this behavior. (Yes, you're right that there is a regulatory requirement to respond to safety officer guidance, but compliance is spotty, as evidenced by a lot of videos of vehicles entering active crime zones, etc.)

Unlike the traffic cops directing traffic that would likely require special programming, "proceed if the other car flashes its lights at you" is completely the kind of thing that could just accidentally fall out of a neural network learning to imitate humans.


Hopefully if they ever go to Sri Lanka they get localised tuning because I was surprised to find out flashing your lights over there doesn't mean "go ahead", it means "if you don't get out of my way I will ram you"


And then there's trucks flashing an indicator to say it's safe to overtake if you're behind them. In the UK it's the nearside indicator, which makes sense: it's a bit like the truck is pulling over to let you pass. In Aotearoa, it's often the off-side indicator, so you think the truck is going to pull out in front of you. I've never understood what the Aotearoa drivers are thinking there.

This is true for India too though traffic there isn't known for its rules.


I hate the countries that do this because it doesn't even make sense as a signal. We already have a horn. They are wasting a channel!

It also doesn't make sense because "get out of my way or I will ram you" is the default state of operating a motor vehicle. Not the goal but the physical reality of it.

At highway speeds, engine, road and wind noise usually make horns inaudible.

In Serbia, on top of get-out-of-my-way, it's also used to signal go-ahead, but also "police with speed radars ahead" to incoming traffic.


I think we're not interpreting the original comment in the same way.

In most places, I think, when driving on the highway, flashing your lights when behind someone means basically 'I would like to overtake you'. Same here in the UK. But that's very specific to that context. You would never see a 'go ahead' context that would mean 'get out of my way', right?

But what the original comment means is there are some countries where you'd think it was 'go ahead' but it really means 'get out of the way'. Like if you're both on a main road, and you are signaling to turn into a side road, the opposing car flashes the lights and that means you can turn. I assume the same in Serbia.

But in some places that can actually mean don't turn, I'm going first. Which I think is what the parent is describing.


You are right that I did not read it the same way, and yes, the unwritten rules are matching in Serbia. FWIW, I've mostly switched to using left-turn signal to indicate "I'd like to overtake", which I've seen done on EU highways.

That's not how Waymo works, though. Waymo doesn't imitate humans. Waymo is trained to obey traffic laws and avoid collisions.


Waymo has published a ton about the imitation learning they've been using since 2018. They're not imitating random cars but their drivers who are paid to drive around and follow traffic laws.

It's not enough so they use heavy reinforcement learning etc. but it's still a huge foundation to build on.


Waymo imitates humans insofar as its neural net, trained to avoid collisions on millions of miles of video footage and LIDAR data from roads shared with humans, ends up imitating them.

It's likely manually programmed not to (incorrectly) turn the wheel to the left while stopped and waiting for an opportunity to turn. If you get rear-ended, you'll end up in the lane of oncoming traffic. It's certainly programmed to use its turn signals to indicate when it is going to turn. But after driving around thousands of cars without turn signals on but with their wheels pointed left, it "knows" to predict that they're about to turn, and might imitate humans by anticipating that action and moving to pass the stopped car on the right.


> It's likely manually programmed not to (incorrectly) turn the wheel to the left while stopped and waiting for an opportunity to turn.

I'm both surprised and not surprised that people do this. You'll hit the divider.


The divider? What divider?

A quaint, positive anecdotal comment?? On MY internet?!?!


How do you know? It’s trained on videos where it might see that happen often.

Why wouldn't it be trained to do that? You can easily include that in the training data.

It's not like the people building Waymo have never heard of flashing your brights before.


What else is an LLM supposed to do with this prompt? If you don’t want something done, why are you calling it? It’d be like calling an intern and saying you don’t want anything. Then why’d you call? The harness should allow you to deny changes, but the LLM has clearly been tuned for taking action for a request.


Ask if there is something else it could do? Ask if it should make changes to the plan? Reiterate that it's here to help with anything else? Tf you mean "what else is it supposed to do"? It's supposed to do the opposite of what it did.


I think there is some behind-the-scenes prompting from Claude Code for plan vs. build mode; you can even see the agent reference it in its thought trace. Basically I think the system is saying "if in plan mode, continue planning and asking questions; when in build mode, start implementing the plan", and it looks to me(?) like the user switched from plan to build mode and then sent "no".

From our perspective it's very funny; from the agent's perspective, maybe very confusing.
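A minimal sketch of the guess above: a harness that swaps the system prompt per mode while resending the whole conversation. All names and prompt strings here are illustrative, not Claude Code's actual internals.

```python
# Hypothetical harness behavior: the system prompt depends on the current
# mode, while the user's messages are appended to a single growing history.
MODE_PROMPTS = {
    "plan": "Continue planning and asking questions; do not edit files.",
    "build": "Start implementing the approved plan.",
}

def build_request(mode: str, history: list[str], user_msg: str) -> dict:
    """Assemble one API request from the mode prompt and the full history."""
    return {
        "system": MODE_PROMPTS[mode],
        "messages": history + [user_msg],
    }

# Switching from plan to build and then sending "no" means the model sees a
# system prompt telling it to implement, right next to the user's "no".
req = build_request("build", ["here is the plan", "(mode switched to build)"], "no")
```

Under this (assumed) framing, the contradiction the model has to resolve is baked into a single request, which would explain the confused behavior.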


I'd want two things:

First, that it didn't confuse what the user said with its system prompt. The user never told the AI it's in build mode.

Second, any person would ask "then what do you want now?" or something. The AI must have been able to understand the intent behind a "No". We don't exactly forgive people that don't take "No" as "No"!


Because i decided that i don't want this functionality. That's it.


Seems like LLMs are fundamentally flawed as production-worthy technologies if they, when given direct orders to not do something, do the thing


for the same reason `terraform apply` asks for confirmation before running - states can conceivably change without your knowledge between planning and execution. maybe this is less likely working with Claude by yourself but never say never... clearly, not all behavior is expected :)


> What else is an LLM supposed to do with this prompt?

Maybe I saw the build plan and realized I missed something and changed my mind. Or literally a million other trivial scenarios.

What an odd question.


> What an odd question.

I don't see anything odd about this question.

What kind of response did the user expect to get from the LLM after spending this request, and what was the point of sending it in the first place?


Genuine questions: what do you think the request was? To build the plan? To prepare the commit? Do you never have a second thought after looking at your output, or realize you forgot something you wanted to include? Could it be that they saw "one new function", thought "boy, there should really be two... what happened?" and changed their mind?

To your original comment, it would be like calling your intern to ask them to order lunch, and them letting you know the sandwich place you asked them to order from was closed, and should they just put in an order for next Tuesday at an entirely different restaurant instead? And then that intern hearing, "no, that's not what I want" saying "well, I don't respect your 'no'" and doing it anyways.

"Do X" -> "Here are the anticipated actions (which might deviate from your explicit intent), should I implement?" -> "no, that's not actually what I want"

is a clear instruction set and a completely normal thought pattern.


My point is that I don't see any scenario in which sending a ton of input tokens (the whole past conversation) while expecting output that does literally nothing isn't absolutely pointless.

Like, sure, if the model were "smarter" it would probably generate something like "Okay, I won't do it." What is the value of the response "Okay, I won't do it."? Why did you just waste time and compute to generate it?

> Do you never have a second thought after looking at your output, or realize you forgot something you wanted to include? Could it be that they saw "one new function", thought "boy, there should really be two... what happened?" and changed their mind?

Sure, all of those are totally valid. And in each of these cases it would be better to just not make the request at all, or to make the request with the correction.

> like calling your intern

An LLM is not human. It can't act on its own without you triggering it to act, unlike the intern in your example, who will be wasting time and getting frustrated if they don't receive a response from you. With a model you can just abandon this "conversation" (which is really just a growing context that you send again and again with every request) forever, or until you are ready to continue it. There is no situation in which just adding "no" to the conversation is useful.


Why does it ask a yes-no question if it isn’t prepared to take “no” as an answer?

(Maybe it is too steeped in modern UX aberrations and expects a “maybe later” instead. /s)


> Why does it ask a yes-no question if it isn’t prepared to take “no” as an answer?

Because it doesn’t actually understand what a yes-no question is.


Yeah meat is another dimension, as is potato. So we're up to 4 dimensional breakfast latent space. I hate to think what's in the dark breakfast black hole of that 4 dimensional latent space...


I feel like there's a lot of unexplored area in the carb-soaked-in-egg category that French toast fits into, the major analogues being chilaquiles and matzo brei. I recently did something like French toast bites where I cubed some sourdough bread, soaked it with egg, and fried it up with small pieces of bacon mixed in. But what if you did that with a glazed donut? Or a waffle?

[edit] just also why this post touched my heart - I think form is as important as ingredients whenever you're dealing with relatively few ingredients. I have a breakfast I particularly love making that's just hash browns, egg and cheese. But the trick is, you griddle the hash browns, then flip them and smash them on griddled cheese, then crack an egg on top while the cheese fries and flip the whole thing again. The result is a crispy potato pancake where one side is fried cheese and the other is embedded fried egg. The same 3 ingredients, but it can be held in hand and it's got the perfect balance in each bite.


you can just french toast anything you have laying around in your fridge or kitchen. they can't stop you.

https://www.youtube.com/watch?v=hB42iztkzVQ

i've done the french toast pizza and it wasn't bad. not sure if it was worth the effort. maybe there's an ideal type of pizza or combo of toppings that makes this spectacular. either way it's worth trying once just to say you did.


It wasn't the singularity I imagined, but this does seem like a turning point.


It's not the phrase, but the accelerating memetic reproduction of the phrase that is the true singularity. /s


What a great idea. Legacy VCR controls upcycled for digital control! There's a lot of those old decks and LANC deck controllers lying around...


This is a hard problem, solvable only with trial and error.

Those signals are just weird mess of coils, switches and resistors.

ESP32 clock speed may also be a contributing factor.


I asked Gemini to make sensible & readable version: https://github.com/timonoko/Jogwheel/blob/main/jogwheel_gem....

But did not improve much, except variable names are now less esoteric.


I think it makes the annoying part less annoying?

Also re: "I spent longer arguing with the agent and recovering the file than I would have spent writing the test myself."

In my humble experience arguing with an LLM is a waste of time, and no-one should be spending time recovering files. Just do small changes one at a time, commit when you get something working, and discard your changes and try again if it doesn't.

I don't think AI is a panacea, it's just knowing when it's the right tool for the job and when it isn't.
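The "small changes, commit when it works, discard when it doesn't" loop above can be sketched with plain git (the repo setup and file names are hypothetical, just for illustration):

```shell
# Work in a throwaway repo for the sketch.
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git config user.email demo@example.com && git config user.name demo

# Step 1: get something working, then checkpoint it immediately.
echo "working version" > app.txt
git add app.txt && git commit -qm "checkpoint: working"

# Step 2: the agent mangles the file...
echo "broken by agent" > app.txt

# Step 3: instead of arguing with the LLM or hand-recovering the file,
# discard the change and try again from the last good checkpoint.
git restore app.txt
cat app.txt
```

No file recovery needed: the last committed state is always one `git restore` (or `git checkout .`) away.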


Anyone not using version control or an IDE that keeps previous versions for an easy jump back is just being silly. If you're going to play with a kid who has a gun, wear your plates.


Once, I told a friend that it was stupid that Claude Code didn't have native IDE integration. His answer: “You don't need an IDE with Claude Code.”

I've begun to suspect that this technology triggers a kind of religion in some people. The technology is obviously perfect, so any problems you might have are because of you.


I find that I vastly prefer Gemini CLI to Antigravity, despite the latter being an IDE. Others feel the opposite. I believe it comes down to how you are using AI. It's great that both options exist for both types of people.


I don’t think it’s “just” that easy. AI can be great at generating unit tests but it can and will also frequently silently hack said tests to make them pass rather than using them as good indicators of what the program is supposed to be doing.


> AI can be great at generating unit tests but it can and will also frequently silently hack said tests to make them pass rather than using them as good indicators of what the program is supposed to be doing.

Unit testing is my number one use case for gen AI in SWE. I just find the style / concept often slightly different than I would personally do, so I end up editing the whole thing.

But, it’s great at getting me past the unpleasant “activation energy threshold” of having a test written in the first place.


Totally. I’m a huge fan of it, but it rarely “just” works and I do have to babysit it to make sure it’s actually doing something good for the world


Once you start arguing, it's time to start a new prompt with new instructions


Or, as I prefer, go back in the conversation and edit / add more context so that it wouldn’t go off the wrong track in the first place.


I also like asking the agent how we can update the AGENTS.md to avoid similar mistakes going forward, before starting again.


But he started it …


Yeah, it’s not accurate at all. Not the OP's fault, but the PurpleAir sensors are placed by users. Right now it says FiDi is 9° warmer than the Haight. Plausible, but it could also be that the only sensor reporting from FiDi is on a balcony near a dryer vent.


More than plausible, for 8am on a January morning!

I used to ride a motorcycle every day from the Haight (home) to the Financial District (work), and the temperature grade changes were palpable.

Your point is also completely correct, of course. :)


> "Walking and biking environments result in ghettoes"

I must admit this viewpoint is one I have never seen before! Instead I've heard many arguments that bike lanes and pedestrianization are forms of gentrification, but resulting "ghettoes?" +1 for creativity!


Yes? Bikes are an incredibly segregating means of transport. They are inherently limited in range, and they are largely incompatible with any other transit mode.

So you create an environment where all the housing within bike range from good jobs is unaffordable for most people.

And the most democratic mode of transport? Cars. They provide far greater accessibility.


You are spot on about segregation. Yes, walking and biking are for undesirables. The suburbs are built for cars and cars only. Poor people (African, etc) can't afford the large lots, the minimum size of residence, the HOA and lawn maintenance, car required to go anywhere. This is how you can do segregation without violating any laws. Usually, most people don't admit that these are the real goals. I'm surprised that you are openly admitting that segregation is what we want. I guess times are changing!


So you're saying that bicycles have caused our land use patterns to be inequitable? I would say I agree that transportation modes have made land use allocations in western society problematic, but again you are very novel in being the first person I've ever met who attributes those issues to people riding bicycles.


No, bicycles are more of a symptom. They are not the sole cause, of course.

The actual root cause is over-centralization, where the only jobs worth having are concentrated in downtowns of a dwindling number of cities. These downtowns are always congested, and bike lanes are one way to make it more tolerable. But if you can afford an apartment, of course.

Bike lanes near Wall Street are an iconic example. If you're using them, then it's highly likely that you're a multi-millionaire. Or maybe you inherited a rent-controlled apartment.

Cars historically were a great equalizer. Sure, your CEO was likely driving a better car, and living in a better house. But they were stuck in the same traffic along with you. And this _was_ a factor when deciding on the next office location: "Hm. I really hate the commute, perhaps our next office should be in a bit less congested location?"

And this is reflected in actual research: https://pmc.ncbi.nlm.nih.gov/articles/PMC4938093/ - "For the USA, we observe an exponent βUSA ≈ 0 indicating that the density of jobs is independent from the skill level in the USA. For the UK and Denmark, we observe a non-zero exponent with βUK ≈ 1/2 for the UK and a larger value for Denmark βDK ≈ 0.8. These results indicate that the density of jobs decreases with the skill level, more in Denmark than in the UK."


Ok most of what you're saying makes sense, but having gone to bike lanes in lower manhattan it seems like it's a lot of food delivery people 24/7 with the normal new yorkers you'd see on the subway during commuting hours. From a humanistic perspective it seems like it's a good thing to ensure that delivery drivers aren't killed by motor vehicles and have the ability to not conflict with sidewalk pedestrians? As a driver I would prefer they're not in my lane.

> Cars historically were a great equalizer.

I suppose we'll agree to disagree on this one, there's like a bajillion books that assert the opposite so I will let those and the intertubes do the talking.

As it relates to the study, I'm a little confused how it relates to the above discussion. Is this a good or bad thing to have density of jobs relate to skill level? Wouldn't the historic development of these cities with thousands of years of human civilization in Europe vs. relatively recently developed US cities be a confounding factor in exploring land use patterns?


> Ok most of what you're saying makes sense, but having gone to bike lanes in lower manhattan it seems like it's a lot of food delivery people

Yes, I should have mentioned that I specifically meant people using bike lanes for commutes. Bike lanes used for work (like deliveries) or for recreation are a totally different story, and I have nothing against them.

However, in this case it still reinforces my point: delivery by bike is a luxury good. It still is something that makes living in an utterly unaffordable area more bearable for people who have money.

> I suppose we'll agree to disagree on this one, there's like a bajillion books that assert the opposite so I will let those and the intertubes do the talking.

I'm actually not saying anything that is not an accepted fact in urbanism.

> As it relates to the study, I'm a little confused how it relates to the above discussion. Is this a good or bad thing to have density of jobs relate to skill level?

No, it's not good. This means that good jobs force people to move closer to the centers of their concentration. This automatically reduces opportunities for other people.


> Bikes are an incredibly segregating means of transport.

A bike costs on the order of a few hundred dollars; there's essentially no barrier to entry.

Comparing them with cars on this metric is laughable. Must be 18 or so and able bodied, obtain an expensive license, purchase the actual very expensive vehicle, pay for constant upkeep in insurance, fuel, repairs, and risk serious accidents. All of this is an insane barrier to entry.

> They are inherently limited in range

Yeah, to like a radius of 5km or so, on the low end. That's quite a bit in a city.

> and they are largely incompatible with any other transit mode.

Kind of, but not really? Between e-scooters, rental bikes, and bike garages at train stations, this really is just a matter of proper infrastructure in the end. I don't get the relevance of this anyway.

> So you create an environment where all the housing within bike range from good jobs is unaffordable for most people.

And where exactly is this place you describe where everyone commutes exclusively by bike? Ooops, right, it doesn't exist, never has, probably never will. So you're just making stuff up.

I mean, it is a cute little theory, but it has zero relevance to the world we've built or ever plan to build.

Or maybe it's a strawman, implying that someone somewhere has claimed that we should only commute by bike? Again, cute, but nobody says that. Adding public transportation to the equation neatly eradicates your entire made up theory.

> And the most democratic mode of transport? Cars. They provide far greater accessibility.

I adore your conversational technique of adding positively charged words like "democratic" and "accessibility" without any justification or explanation, just to make it seem like you have an argument. "The democratic, accessible and green coal power plants." I'll add this technique to my list of common fallacies, thanks.


> Comparing them with cars on this metric is laughable. Must be 18 or so and able bodied, obtain an expensive license, purchase the actual very expensive vehicle, pay for constant upkeep in insurance, fuel, repairs, and risk serious accidents. All of this is an insane barrier to entry.

Just wait until you hear how much transit costs!

> And where exactly is this place you describe where everyone commutes exclusively by bike? Ooops, right, it doesn't exist, never has, probably never will. So you're just making stuff up.

Who said anything about exclusivity? Please point out with a hyperlink.

> I adore your conversational technique of adding positively charged words like "democratic" and "accessibility" without any justification or explanation, just to make it seem like you have an argument.

I provided a link in this thread. Go on, dispute it.

