Hacker News

Oh please, that's a ridiculous assessment of what was said. Coding giant distributed web apps is a very different domain than coding for cars. I wouldn't put much stock in Twitter engineers assessing self driving car code either.


"Tesla engineers don't just work on in-vehicle code though. The servers and systems needed to implement Tesla's telemetry and software updates would be in the same ballpark of expertise, albeit with much less scale and different requirements than Twitter."

-- hn_throwaway_99 https://news.ycombinator.com/item?id=33389128


Yes, both were me. I'll respond since you seem to be implying my responses are incongruent.

Here I was responding to the comment "What hubris to think that your system is so complex that other engineers couldn't understand it." That is still a bad-faith reading of the argument (which the original comment was about) that you need expertise in specific domains to be able to evaluate code, a point I agree with.

My comment that you quoted above says that I think a subset of Tesla engineers (just not the ones who, for example, focus solely on driver systems) would overlap with Twitter's domain.


>Oh please, that's a ridiculous assessment of what was said. Coding giant distributed web apps is a very different domain than coding for cars. I wouldn't put much stock in Twitter engineers assessing self driving car code either.

You don't need to be a part of any particular domain to effectively audit code.


This is true to an extent. At the extremes, parts of a software system can be audited without any previous domain exposure: the architecture, and the implementation means (language, deployment, etc.).

It can be difficult to critique choices, especially in system architecture, when domain requirements are presented (or falsely asserted) as "dictating the choice". But this can be addressed by directing conversations with stakeholders to nail down the conceptual model that has been implemented in architecture style x; then the conversation is on more equal footing.

Auditing language usage, libraries, development process, and the like is much less domain-specific.


Really the only way I've been able to make sense of it is if he brought people in to produce an audit of changes made in the last couple of months. I.e., he knows he doesn't know how to get that information himself, but he knows he has people at Tesla who do.

They wouldn't have to understand any of it, just be able to identify where and when changes occurred, for later review for any last minute shenanigans.

I'm not personally convinced this whole incident even happened, but if it did, this is what I think would make sense.


[flagged]


If I see the SpaceX code, and it's full of thousand-line functions with a cognitive complexity of 200, I can confidently say it's going to be hard to maintain.

Clearly it gets rockets into the sky, but something similar is true for all code that hasn’t failed catastrophically yet.


Except you would be wrong, or at least not obviously right: safety-critical code does all sorts of weird things in the interest of verifiability, because it must be exactly right. It doesn't get "maintained": once it's flight-proven, you do not change it without an entirely new verification process.

This also leads to interesting practices at times, like favouring repeated code over separate functions, to ensure the flow of reading doesn't jump around and instead always moves forward.


Is there any reason to suspect that Twitter's code will be of this terrible quality? If not, then again: what's the point of bringing random engineers in to take a look at code from an entirely unfamiliar domain?


Since it’s a huge enterprise with hundreds of engineers, I don’t have to guess. It’s just a matter of magnitude.


It's mostly Linux, written in C, with a bit of math for GNC (quaternions, etc.). Nothing too crazy if you have a good math and C background.

They put the flight software through a lot of hardware-in-the-loop tests simulating launch. That's probably the main novel thing.


You can probably read the code. Whether you can accurately assess it for the right set of tradeoffs is generally far harder.


>Bullshit. I'm a web engineer, and I wouldn't be competent to audit code that runs Space X's rockets, nor would anyone I know in my career space.

You're still missing the point. It isn't about knowing how rockets work or anything in particular. It's about being able to search for and recognize issues.


At Twitter's scale the issues are far less about "this code smells funny" and far more about how dozens of systems interact together. Ain't nobody gonna figure out problems by simply checking out a handful of repositories, especially in a few hours or days.


>At Twitter's scale the issues are far less about "this code smells funny" but how dozens of systems interact together. Ain't nobody gonna figure out problems by simply checking out a repository, and specially in a few hours / days.

I don't know why you think someone would have to know about Twitter's specific implementation of distributed systems in order to inspect those systems for non-Twitter-specific issues.


Ack, and thanks for your comment. I am choosing to disengage from this conversation, have fun!


>Ack, and thanks for your comment. I am choosing to disengage from this conversation, have fun!

Sorry, what was it about my comment you vehemently disagree with?


Issues such as what?


>Issues such as what?

Such as those which do not meet the standards defined for the particular audit.


I mean, you can do a very general audit of things like “is there CI” or “do they use horribly unsafe serialization everywhere” but I don’t see anything beyond this without a huge amount of effort.


>I mean, you can do a very general audit of things like “is there CI” or “do they use horribly unsafe serialization everywhere” but I don’t see anything beyond this without a huge amount of effort.

Do we know what is or isn't being reviewed? Do we know how much effort is being applied in this process or if there's a defined upper and/or lower bound?


The upper limit is the point where it makes sense to bring in experts rather than random Tesla engineers.


> You're still missing the point. It isn't about knowing how rockets work or anything in particular. It's about being able to search for and recognize issues.

Still, the issues you'll likely encounter inside a car are very different from the ones you'll see inside a historically grown distributed system serving millions of web, app, and API requests. Auditing code is more than analyzing runtime/memory complexity.

That's not to say it's impossible for Tesla engineers to audit it. But I'd imagine it would take quite a bit of time to gather meaningful insight into the landscape, and it would hardly be an efficient use of senior Tesla engineers' time.


>Still, the issues you'll likely encounter inside a car are very very different from the ones you'll see inside [...]

There are many systems involved in those cars, and there are also many people working for those kinds of companies who do not solely "work on cars."

>Auditing code is more than analyzing runtime/memory complexity.

I agree.

>Thats not to say it's impossible for Tesla engineers to audit. But I'd imagine it would take quite a bit of time to gather meaningful insight into the landscape and would hardly be an efficient use of the time of senior Tesla engineers.

Would it take longer than if the team were composed of Twitter staff, or of people already familiar with Twitter's infrastructure and code base? Sure. But that's the case with just about any audit conducted by an outside entity.


I'm not missing the point. "Searching for and recognizing issues" is not somehow completely independent from the domain of the code in question. More importantly, different domains often have completely different primary concerns. E.g. web app engineering is often concerned with reducing cycle time because the technology means that you can instantly release updates. That's obviously very different from the concerns of launching a rocket, and I would expect them to have very different engineering practices.

In short, experience matters.


>I'm not missing the point. "Searching for and recognizing issues" is not somehow completely independent from the domain of the code in question. More importantly, different domains often have completely different primary concerns. E.g. web app engineering is often concerned with reducing cycle time because the technology means that you can instantly release updates. That's obviously very different from the concerns of launching a rocket, and I would expect them to have very different engineering practices.

>In short, experience matters.

Is this based on real experience in the field of auditing?


I don't think Musk is looking for an itemized bug list. He's probably more interested in overall code quality, estimates on the level of technical debt, and a qualitative feel about the state of things. While a distributed systems person would have an easier go of that, I think most decently-well-rounded software developers would provide value there.

Also remember that Tesla runs its own distributed systems to support the connected features of its cars. Certainly that's not the same as Twitter, but it's definitely in the same ballpark.


I implore you to consider that auditing code is more than just big-O analyses.


>I implore you to consider that auditing code is more than just big-O analyses.

That isn't at all what I said nor implied.


You seem to have fooled literally every person replying to your post, then. Perhaps consider that your reactionary post was not quite as "rational" as you'd like to believe, or at least that you haven't yet said what you mean.


>You seem to have fooled literally every person replying to your post, then.

Fooled? No.

>Perhaps consider that your reactionary post was not quite as "rational" as you'd like to believe, or at least that you haven't yet said what you mean.

What part of any of my comments are reactionary? What have I said that I don't mean? I genuinely don't understand.


> Bullshit. I'm a web engineer

Did you graduate with an engineering degree or do you just like calling yourself a "web engineer"?


It's more polite than "webshit", but the overall comment isn't making the case that the more polite term should be used. I'd like it if people calling themselves web engineers held themselves to a higher standard, and could see that even if someone is a webshit now, they might have been something else in the past, and could be something else again in the future. Tech is a great career space in that it's not incredibly difficult to change which domain you work in.

And even apart from the general expertise that lets an engineer go review arbitrary code (with full understanding, and immediately? No, but you can get started: find the common areas and boundaries, find who the main contributors are and who isn't so important, draw big black boxes over areas that really need a specific expert's look, and lean on the many automated tools that can help, e.g. security-audit consulting firms can find issues in huge codebases quite fast), it would be surprising if Tesla didn't already have some former Twitter employees who could contribute to this review if it made sense to use them. Let alone former employees of other companies with systems similar to Twitter's. It'd be surprising if all the engineers selected were people who have never worked for another company besides Tesla.



