There are cases where you would not want to reject such code, though. For example, std::move() might be called inside a template function where the type in some instantiations resolves to const T, and the intent is indeed for the value to be copied in those cases. If std::move() on a const value were a compiler error, you would need to write specializations that don't call it.
It's weird that they made the mistake of allowing this after having so many years to learn from their earlier mistake of making copies non-obvious (by that I mean that references and copies look identical at the call site).
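To make that concrete, here's a minimal sketch (made-up names) of how std::move on a const value silently degrades to a copy: T&& can't bind to const T, so overload resolution falls back to the copy constructor taking const T&.

    #include <string>
    #include <utility>

    template <typename T>
    T pass_along(T& source) {
        // If T deduces to const std::string, std::move yields const std::string&&,
        // which the move constructor can't take; the copy constructor is chosen.
        return std::move(source);
    }

    int main() {
        const std::string s = "hello";
        std::string a = std::move(s);  // compiles, but copies: you can't move from const
        std::string b = pass_along(s); // T = const std::string, so also a copy
    }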
I had to cut it all at once, i.e. if added sugars were > 0 on the label, I avoided it. I was still consuming naturally occurring sugars from fruits and other produce.
Hard to tell if it was gradual or not. A panel done 3 months later showed that all values were within the acceptable range, but very close to the thresholds; ~10 months later all values were right in the middle between min/max where applicable.
Pre-LLM agents, a trick I used was to type in
auto var = FunctionCall(...);
Then, in the IDE, hover over auto to show what the actual type is, and then replace auto with that type. Useful when the type is complicated, or is in some nested namespace.
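A minimal illustration with a toy map (the trick itself is IDE-agnostic):

    #include <map>
    #include <string>

    int main() {
        std::map<std::string, int> counts{{"apples", 3}};
        // Step 1: write auto and hover over it in the IDE to see the deduced type.
        auto it = counts.find("apples");
        // Step 2: replace auto with what the IDE reported.
        std::map<std::string, int>::iterator it2 = counts.find("apples");
        (void)it; (void)it2;
    }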
"I personally don’t touch LLMs with a stick. I don’t let them near my brain. Many of my friends share that sentiment."
Any software engineer who shares this sentiment is doing their career a disservice. LLMs have their pitfalls, and I have been skeptical of their capabilities, but nevertheless I have tried them out earnestly. The progress of AI coding assistants over the past year has been remarkable, and now they are a routine part of my workflow. It does take some getting used to, and effectively using an AI coding assistant is a skill in and of itself that is worth mastering.
I feel AI now is good enough to follow the same pattern as with internet usage. The quality ranges from useless to awesome based on how you use it. Blanket statements that “it is terrible and useless” reveal more about the person than the tech at this point.
It’s some mixture of luddites, denial, ignorance, and I don’t know what else.
I’m not sure what these people are NOT seeing. Maybe I’m somehow fortunate with visibility into what AI can do today, and what it will do tomorrow. But I’m not doing anything special. Just paying attention and keeping an open mind.
I’ve been at this for 40 years, working professionally for more than 30. I’ve seen lots.
One pattern I’ve seen repeating is folks who seem to stop learning at some point. I don’t understand this, because for me learning every day is what fuels me. And those folks eventually die on the vine, or they become the last few greybeards working on COBOL.
We are alive at a very interesting time in tech. I am excited about that. I am here for it.
It already tells me enough to stay away from using AI tools for coding. And that's just one reason; if I consider all the others, that's more than enough.
I used AI assistance in coding for a year before I quit. The hardest part was a day when the services were unexpectedly down, and working felt like I had been amputated in some way. Nothing worked; my usual movements did not produce code. That day I realised these AI integrations take away my knowledge and skill in the matter while just maximising the easiest and fastest part of software development: writing code.
The reason code can serve as the source of truth is that it’s precise enough to describe intent, since programming languages are well-specified. Compilers have freedom in how they translate code into assembly, and two different compilers (or even different optimization flags) will produce distinct binaries. Yet all of them preserve the same intent and observable behaviour that the programmer cares about. Runtime performance or instruction order may vary, but the semantics remain consistent.
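A trivial illustration: the same function built at different optimization levels produces different assembly but identical observable behaviour (commands assume g++; any conforming compiler shows the same effect):

    // g++ -O0 -S sum.cpp    vs.    g++ -O2 -S sum.cpp
    // The two .s files differ (at -O2 the loop may collapse to a closed form),
    // yet the program's observable behaviour is identical under both.
    #include <cstdio>

    int sum_to(int n) {
        int total = 0;
        for (int i = 1; i <= n; ++i) total += i;
        return total;
    }

    int main() { std::printf("%d\n", sum_to(10)); } // prints 55 under any flags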
For spec-driven development to truly work, perhaps what’s needed is a higher-level spec language that can express user intent precisely, at the level of abstraction where the human understanding lives, while ensuring that the lower-level implementation is generated correctly.
A programmer could then use LLMs to translate plain English into this “spec language,” which would then become the real source of truth.
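No such spec language exists as described, but as a purely hypothetical sketch, the source of truth could be a checkable property rather than an implementation, with generated code verified against it:

    #include <algorithm>
    #include <cassert>
    #include <vector>

    // Hypothetical spec: "sort" means the output is ordered and is a
    // permutation of the input. The property, not the code, carries the intent.
    bool satisfies_sort_spec(const std::vector<int>& in, const std::vector<int>& out) {
        return std::is_sorted(out.begin(), out.end()) &&
               std::is_permutation(in.begin(), in.end(), out.begin(), out.end());
    }

    int main() {
        std::vector<int> in{3, 1, 2};
        std::vector<int> out = in;            // stand-in for LLM-generated code
        std::sort(out.begin(), out.end());
        assert(satisfies_sort_spec(in, out)); // implementation checked against the spec
    }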
A wild guess as to what is happening. I haven’t actually tested this hypothesis so I could be completely wrong.
In feedback systems, the gain is a function of frequency, and typically decreases when going from low frequency to high frequency. This is often accompanied by a phase delay.
So if the overall gain of the system is high enough, there will be some high frequency where the gain is 1, and the phase is 180 degrees. This would result in positive feedback, amplifying noise at that frequency.
Maybe that’s what’s happening in the latest AirPods? If Apple is aggressively cranking up the gain of the noise cancellation system, there’s some high frequency where the noise gets amplified rather than suppressed.
The solution would be to either reduce the gain (which reduces the noise cancellation), or to add some differential gain in the system which pushes out the unity gain frequency to higher frequencies.
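Here's that reasoning as a toy calculation (all numbers made up; this is a generic two-pole-plus-delay loop, not Apple's actual ANC system): sweep frequency, find where the phase lag reaches 180 degrees, and check whether the loop gain there is still at least 1.

    #include <cmath>
    #include <cstdio>

    int main() {
        const double PI  = std::acos(-1.0);
        const double A   = 30.0;   // assumed low-frequency loop gain
        const double f1  = 1e3;    // assumed pole frequencies, Hz
        const double f2  = 8e3;
        const double tau = 20e-6;  // assumed transport delay, seconds

        for (double f = 100.0; f < 100e3; f *= 1.01) {
            const double w = 2.0 * PI * f;
            // Unwrapped phase lag from the two poles plus the pure delay.
            const double lag = std::atan(w / (2 * PI * f1))
                             + std::atan(w / (2 * PI * f2)) + w * tau;
            const double mag = A / (std::sqrt(1 + std::pow(w / (2 * PI * f1), 2))
                                  * std::sqrt(1 + std::pow(w / (2 * PI * f2), 2)));
            if (lag >= PI) { // phase lag has hit 180 degrees
                std::printf("f = %.0f Hz, |loop gain| = %.2f -> %s\n", f, mag,
                            mag >= 1.0 ? "oscillates (squeal)" : "stable");
                break;
            }
        }
    }

With these made-up values the lag hits 180 degrees near 7.5 kHz while the gain is still about 3, so the loop would squeal; reducing A or pushing the poles out restores stability, matching the two fixes above.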
If they were calibrated assuming a certain distance between the microphone that "hears" what the wearer's ear is hearing and the ear itself, then it's possible a change in air density could put the area of highest constructive interference at the eardrum instead of the intended destructive interference for some frequencies.
The pressure difference shouldn’t be significant enough in modern jets? Cabin altitude is around 6000-8000’ - if that altitude caused this, we would hear complaints from a few major high-altitude cities. Humidity is much lower in aircraft though.
The speed of sound varies with air temperature, which is what the linked graph shows.
Technically the speed of sound does vary with density, but as you change altitude there's also a change in pressure which exactly cancels that out. In the end only temperature and gas composition alter the speed of sound.
As long as you're inside the plane (and hopefully it's not 217 K or -70 °F, per the graph) then the speed of sound should be unchanged.
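For reference, in an ideal gas c = sqrt(gamma * R * T / M): temperature and composition matter, while pressure and density cancel out. A quick check of the two endpoints:

    #include <cmath>
    #include <cstdio>

    // Speed of sound in an ideal gas: c = sqrt(gamma * R * T / M).
    double speed_of_sound(double T_kelvin) {
        const double gamma = 1.4;     // diatomic air
        const double R = 8.314;       // J/(mol*K)
        const double M = 0.02896;     // kg/mol, dry air
        return std::sqrt(gamma * R * T_kelvin / M);
    }

    int main() {
        std::printf("cabin (293 K):   %.0f m/s\n", speed_of_sound(293.0)); // ~343 m/s
        std::printf("outside (217 K): %.0f m/s\n", speed_of_sound(217.0)); // ~295 m/s
    }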
"if A is the gain of the amplifying element in the circuit and β(jω) is the transfer function of the feedback path, so βA is the loop gain around the feedback loop of the circuit, the circuit will sustain steady-state oscillations only at frequencies for which:
1: The loop gain is equal to unity in absolute magnitude, that is, |βA|=1, and
2: the phase shift around the loop is zero or an integer multiple of 2π: ∠βA=2πn,n∈{0,1,2,…}"
I just had a thought: it's possible to completely disable ANC in settings, turning them into "dumb" Bluetooth headphones. (Enable "Off Listening Mode" in AirPods Settings and the option will become available in Control Center.) If some of us who are able to replicate this effect consistently could try turning ANC off and see if the effect still occurs, that would narrow it down to being feedback from Transparency/ANC or something external like back EMF.
I just tested this myself and the two ways that I am able to get consistent squealing (stroking the upper body when in-ear and cupping them in the hand) both fail to replicate when ANC is off. So this does point to a feedback issue.
My other thought is that the APP3 may have microphones located next to the drivers in the ear canal, both for measuring fit, and for the new "own voice amplification" feature that appears in hearing control center if you enable Hearing Assistance. Maybe vibration is leaking through the body to the inner microphone.
The scenario the author describes is bound to happen more and more frequently, and IMO the way to address it is by evolving the culture and best practices for code reviews.
A simple solution would be to mandate that while posting conversations with AI in PR comments is fine, all actions and suggested changes should be human-generated.
The human-generated actions can’t be a lazy “Please look at the AI suggestion and incorporate as appropriate” or “What do you think about this AI suggestion?”.
Acceptable comments could be:
- I agree with the AI for xyz reasons, please fix.
- I thought about the AI’s suggestions, and here are the pros and cons. Based on that I feel we should make xyz changes for abc reasons.
If these best practices are documented, and the reviewer does not follow them, the PR author can simply link to the best practices and kindly ask the reviewer to re-review.
That's because our stereoscopic vision has vastly more dynamic range, focusing speed, and processing power than a computer vision system. Peripheral vision is very good at detecting movement, and central vision can process a tremendous amount of visual data without even trying.
Even a state-of-the-art professional action camera system can't rival our eyes in any of these categories. LIDARs and RADARs are useful and should be present in any car.
This is the top reason I'm not considering a Tesla. Brain dead insistence on cameras with small sensors only.
Their cams have better dynamic range than your eyes, given that they can just run multi-exposure while you have to squint in sunlight. Focal point is at infinity for driving.
You’re not considering them even though they have the best ADAS on the market? lmao, suit yourself
I don’t work in this field, so take this with a grain of salt.
Quality of additional data matters. How often does a particular sensor give you false positives and false negatives? What do you do when sensor A contradicts sensor B?
Humans can be confused in a number of ways. So can AI. The difference is that we know pretty well how humans get confused. AI gets confused in novel and interesting ways.
I suspect it helps in engineering the system. If you have 30 different sensors, how do you design a system that accounts for seemingly random combinations of them disagreeing with an observation in real time if a priori you don’t know the weight of their observation in that particular situation? For humans, for example, you know that in most cases seeing something in a car is more important than smelling something. But what if one of your eyes sees a pedestrian and the other sees the shadow of a bird?
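One standard (if simplified) answer I’ve seen is probabilistic fusion: if you know each sensor’s false-positive and false-negative rates, Bayes’ rule weighs a disagreement for you. A toy sketch with made-up rates, recasting the eye-vs-eye example as camera vs lidar:

    #include <cstdio>

    int main() {
        const double prior = 0.01;  // assumed prior: a pedestrian is present
        // Assumed sensor characteristics: P(detect | present), P(detect | absent).
        const double cam_tp = 0.95, cam_fp = 0.05;
        const double lidar_tp = 0.90, lidar_fp = 0.02;

        // Camera says yes, lidar says no; assume the sensors err independently.
        const double like_present = cam_tp * (1.0 - lidar_tp);
        const double like_absent  = cam_fp * (1.0 - lidar_fp);
        const double posterior = like_present * prior
            / (like_present * prior + like_absent * (1.0 - prior));
        std::printf("P(pedestrian | cam=yes, lidar=no) = %.3f\n", posterior); // ~0.02
    }

Of course, the hard part is exactly the question above: the error rates and the independence assumption are rarely known in the situations that matter.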
Also don’t forget that as a human you can move your head any which way, and also draw on your past experiences driving in that area. “There is always an old man crossing the road at this intersection. There is a school nearby so there might be kids here at 3pm.” That stuff is not as accessible to a LIDAR.