
Today’s “FSD” has its limitations and requires supervision, but your description of it is not anything like my experience even on a HW3 vehicle. In fact, in many years of using Autopilot and various “FSD Beta” and “FSD (Supervised)” versions for several tens of thousands of miles I’ve literally never seen it “slam on the brakes suddenly for shadows” or “veer into the wrong lane”. I’m not a cult member and my next car won’t be a Tesla because I cannot support Musk after the horrible things he has done these last 2-3 years, but “FSD” is phenomenal when used appropriately and with the right expectations about what it is and what it isn’t. And it has improved a ton over the years, too.

The end-to-end solution was a real game changer, and while the previous solution was still useful and impressive in its own right, moving to the new stack was a night and day difference. With V13 finally taking advantage of HW4, all the work they’ve been doing since then, and the upcoming introduction of HW5, it’s totally within the realm of possibility that they achieve viable L4 autonomy beyond this kind of small-scale demo (and I hope some form of L3 comes to HW4 customer vehicles before long).


I can give you a number of locations in B.C. to visit, and the times of day when the shadows appear, if you want to experience it for yourself! It hasn’t been fixed in four years, though it has gotten less frequent in general.


They collect video, not images, along with other sensor and control data.

It’s not a sunk cost fallacy; it’s a technical strategy that is very logical and is showing compelling results (though as yet unproven for achieving robust L3 or L4 autonomy, demos notwithstanding).


Driving over a double yellow is expected and legal in normal driving, such as when making a left turn or going around an obstruction.

In this example it looks like it oscillated between two different routing choices (turning left and going straight), and by the time it decided the correct route was to go straight, it found itself misaligned with the lane it should have been in. Instead of moving all the way back to the right, it kind of “cheats” its way into the upcoming left turn lane. This isn’t something it should do in this situation, but it’s likely emulating human behavior from situations like this in its training data, where drivers cut across the center line(s) before the turn lane has fully formed, when they can see that it is clear.

The thing a lot of people get wrong is that they think the most valuable data for Tesla to collect are the mistakes or interventions. Really, what they need most is a lot of examples of drivers doing a good job of handling complex situations. One challenge, though, is separating those good examples from the mediocre or bad ones, as human drivers are notoriously bad at, well, driving.


They used it on test mules to create labeled training data for their older monocular depth / Occupancy Network models (which they still use as part of the supervisory policy enforcement and active safety layer, alongside the end-to-end model).

They’ve never had LiDAR in their cars, and it would never have been practical for them to do so. Nobody has mid- or long-range LiDAR in vehicles at the scale that Tesla sells.


Tesla’s revenues and profits are down for one reason and one reason only: Musk has personally alienated a large swath of the customer base.


oversimplification


That’s both untrue and missing the point.

In a perfect world, AV software wouldn’t be necessary. We don’t live in a perfect world. So we need defense-in-depth, covering prevention, mitigation, and remediation.


How is it meaningfully different with respect to this question?

If I go to a museum and look at a bunch of modern paintings, then go home and paint something new but “in the style of”, this is well-established as within my rights, regardless of how any of the painters whose work I studied and was inspired by might feel.

If I take a notebook and write down some notes about the themes and stylistic attributes of what I see, then go home and paint something in the same style, that too is fine - right? Or would you argue the notes I took are a copyright violation? Or the works I made using those notes?

Now let’s say I automate the process of recording those notes. Does that change the fundamentals of what is happening, with respect to copyright?

Personally, I don’t think so.


The law most definitely distinguishes between the rights of a human and the rights of a software program running on a computer.

AI does not read, look at or listen to anything. It runs algorithms on binary data. An AI developer who uses millions of files to program their AI system also does not read, look at or listen to all of that stuff. They copy it. That is the part explicitly covered by international copyright law. It is not possible to use some file to "train" a ML model except by copying that file. That's just a fact. It wasn't the computer that went out and read or looked at the work. It was a human who took a binary copy of it, ran some algorithms on it without even looking at it, and published/sold/gave access to the software.

AI software is a work by an author; not an author.


> How is it meaningfully different with respect to this question?

Humans can't be owned by corporations for one.


You might consider that your needs are not representative of the overall Windows population. For example, one reason some of these key combinations are important is for accessibility.


What?


You are confusing two different things.

The problem is one-factor account recovery, because it means you have one-factor auth.

