I'm sure it's just a coincidence that this is being published in multiple outlets now, when AMD is kicking Intel's butt, instead of when it was originally known. Why would anyone suspect otherwise?
@mindgam3 .. “Minor quibbles about truth and meaning of words aside, I have to support any article that skewers the soft underbelly of the phony AI ecosystem as effectively as this one does.”
Retrospective ass covering. Boeing installed MCAS to save money, and in the process took control of the airplane away from the pilots. They should be looking at jail time for this.
Boeing still gives the pilots a lot of control over the airplane compared to Airbus. Airbus has defaulted to fly-by-wire (i.e., the computer has control and the pilot makes suggestions) for planes made since the 1980s.
Oh, for sure. The person I replied to made it sound like it was always a bad thing (to have a computer in charge), though, and that's certainly not the case.
Having a computer in charge works fine when the computer is designed correctly, with lots of redundancy, and when the pilots are trained to understand how it works.
The number of ICS directly connected to the Internet has grown 10% every year since we started tracking them at Shodan (https://exposure.shodan.io), so, even worse, this is an increasing problem. This is a known issue in the security industry and has been for a while, but fixing it is a hard problem.
The other thing we've noticed is that people are putting ICS devices on non-standard ports in an attempt to hide them from Internet crawlers. This means there are people who know this is a bad idea, and instead of putting the device behind a VPN or something more secure, they just decide to change ports and leave it at that.
> This is a known issue in the security industry and has been for a while but fixing it is a hard problem.
I'd never heard of Shodan; it seems like a valuable service, and it seems like you care. I'm not in the industrial control systems space, but I am in an industry which is 'sensitive'.
The 'last line of defence' is often audit. Are you able to reach out to auditors (Big4) and regulators and educate them on this service? Auditors often have a financial background (CPA etc.), and it's rare to find one with a deep understanding of technology; MBA programs, which a lot of company heads might have taken, tend to lack anything very technical on the information technology side, being basically finance-rooted. I'm thinking this could be a business development route for a valuable service; make it a win-win for them too.
b. There are people making a lot of money out of the migrant crisis.
c. The leftists in Europe are in favor of migration not because they like migrants, but because they hate their own culture. (Peter Hitchens 41:26 https://www.youtube.com/watch?v=JlN0g6zut9c)
Peter Hitchens who loves his own culture so much he wants society to journey a century or two back in time to resume an illiberal, racist, deferential, deeply Christian (and hypocritical), but very, very polite former age. That never actually existed.
The same Peter Hitchens who thinks the current Conservative party is too liberal, and that women's rights are the cause of exploitation?!
Even the UK's right find him absurdly out of date.
You get a reading of 20 on one sensor and a reading of 34 on the second; which one is correct? To achieve reliability, a minimum of five sensors needs to be used: four primary and one backup. If three primary sensors agree, the system is normal. If two primary sensors disagree, switch to the backup.
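The four-primary-plus-one-backup voting described above can be sketched as a toy model (all thresholds and values here are hypothetical, not from any real avionics spec):

```python
def select_reading(primaries, backup, tolerance=1.0):
    """Majority-vote over four primary sensors; fall back to the backup.

    Sketch only: real avionics voting is far more involved
    (time filtering, validity flags, staged fault responses).
    """
    # Count how many primaries agree with each candidate, within tolerance.
    for candidate in primaries:
        agreeing = [p for p in primaries if abs(p - candidate) <= tolerance]
        if len(agreeing) >= 3:           # three of four agree: system normal
            return sum(agreeing) / len(agreeing)
    return backup                        # primaries disagree: switch to backup

# Three primaries cluster near 20; the 34 outlier is voted out.
reading = select_reading([20.1, 19.8, 20.0, 34.0], backup=20.2)
```

The averaging of the agreeing sensors is one arbitrary choice; a real system might use the median, or weight by sensor health.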
If you get a reading of 20 on one and 34 on the other, you disregard both and disable the system.
There’s a big difference between a system which must work and a system which must not go wrong. For example, the fly by wire system in an Airbus must work. A failed sensor must not disable the system. Thus, you need at least triple redundancy to keep functioning in the event of a failure.
Boeing’s MCAS system, on the other hand, doesn’t need to work. The plane flies just fine without it. It merely needs to not go crazy. Two sensors is sufficient.
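The must-work vs. must-not-go-wrong distinction above can be made concrete: a must-work system votes out the bad sensor and keeps producing an answer, while a must-not-go-wrong system simply disables itself on any disagreement. A toy sketch (thresholds hypothetical):

```python
def must_work(readings):
    """Triple redundancy: the median of three tolerates any single
    failed sensor, so the function keeps working through a failure."""
    a, b, c = readings
    return sorted([a, b, c])[1]

def must_not_go_wrong(primary, secondary, tolerance=1.0):
    """Non-critical augmentation: on any miscompare, disengage the
    function entirely rather than risk acting on bad data."""
    if abs(primary - secondary) > tolerance:
        return None                      # annunciate and disengage
    return (primary + secondary) / 2
```

With two sensors you cannot tell which one failed, but for a function the plane can fly without, you don't need to: disagreement alone is reason enough to shut it off.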
I've read several of these articles about the MAX and I'm not seeing the explanation for how allowing MCAS to fly the plane only on input from AOA sensors (1, 2 or 5) is different from asking pilots to fly the plane with a fogged-up windscreen. Why not cross-check against the true horizon, for example? Doesn't seem safer to unnecessarily disregard context.
MCAS only exists to paper over a small handling deficiency. Apparently nobody (at least nobody with the power to force a change) thought that it could pose a safety problem. It’s not safety critical, so who cares if it fails? Except that it can fail in a way that crashes the plane.
> MCAS only exists to paper over a small handling deficiency.
Per the article MCAS was originally intended to handle uncommon edge cases but was extended to cover additional (low speed) deficiencies. This expanded scope is what made MCAS as problematic as it is because it did away with the second input (accelerometer) and expanded the authority dramatically (from something like 0.6 degrees to 2.4 degrees of stabilizer movement).
The problem was that that sensor had a privileged (unoverridable) pipeline to the horizontal stabilizer.
The pilots knew something was going wrong. That wasn't the issue. The issue was that the bloody thing could mistrim the plane to the point of nigh irrecoverability, and no one knew enough about it until two planes full of people plunged out of the sky.
The plane may be able to fly just fine, but the way this thing was developed and brought into mainstream use had critical problems in terms of essential information being communicated.
All the decisions and motivations behind this lack of communication have to some extent been traced back to trying to circumvent regulations in order to prop up the share price by scoring sales of a new airframe of comparable efficiency to the A320neo.
True horizon has nothing to do with angle of attack. Angle of attack is the direction the wind is coming from relative to the aircraft. It's possible to have a nose up attitude relative to the horizon, and have the actual aircraft motion be downwards at 10,000 feet per minute.
> There’s a big difference between a system which must work and a system which must not go wrong. For example, the fly by wire system in an Airbus must work. A failed sensor must not disable the system. Thus, you need at least triple redundancy to keep functioning in the event of a failure.
Fly-by-wire Boeings still only have two alpha vanes. Go ahead, take a look at the next 777 or 787 you come across.
> When you do that you now have an aircraft the pilots aren't certified to fly.
It would increase risk. But for that increased risk to materialize into harm, the plane would also need to experience an unlikely, near-edge-of-flight-envelope situation that the working MCAS was intended to handle.
This would be comparable to a plane with any other mechanical defect that is discovered in-flight. If the situation above is judged too risky to continue the flight and repair on the ground, then it would give cause for an emergency landing.
“After Boeing removed one of the sensors from an automated flight system on its 737 Max, the jet’s designers and regulators still proceeded as if there would be two.”
No, no, no. This is just more shifting of the blame away from Boeing upper management. They couldn't use two Angle of Attack (AOA) sensors because, when the readings differed, there would be no way to know which was correct, which is why MCAS used a single AOA sensor on the right-hand side.
When 2 sensors disagree the data are considered invalid and the software is supposed to handle the case.
Usually it means showing an alarm, putting the system that relies on it into degraded mode, and letting the pilot manually select the sensor they think is correct.
Reacting to such failures is a big part of an equipment certification process.
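The disagreement handling described above might look roughly like this (state names, the miscompare limit, and the output shape are all hypothetical):

```python
def check_aoa(left, right, miscompare_limit=5.0, pilot_selected=None):
    """Two-vane miscompare check: on disagreement, flag the data
    invalid, drop to degraded mode, and use whichever vane the
    pilot manually selects (if any). Toy sketch only."""
    if abs(left - right) <= miscompare_limit:
        # Vanes agree: use the average, operate normally.
        return {"mode": "normal", "aoa": (left + right) / 2}
    if pilot_selected is not None:
        # Vanes disagree but the pilot has picked a source.
        return {"mode": "degraded", "aoa": pilot_selected,
                "alarm": "AOA MISCOMPARE"}
    # Vanes disagree and no source selected: data invalid.
    return {"mode": "degraded", "aoa": None, "alarm": "AOA MISCOMPARE"}
```

The point is that the software downstream must be written to handle the `None`/degraded case, which is exactly the part of certification the comment above refers to.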
This doesn’t seem correct to me, but I can’t put my finger on why. Surely if both agree that’s more certainty than a single sensor reading. Granted a disagreement would be bad, but at least you would have some warning that one of them is wrong, whereas you would have none at all if relying on a single sensor.
It doesn't seem correct to you because they might have been trying to be sarcastic. Using two sensors would admit that they might disagree, and that there might be situations where the MCAS could not work. But there cannot be a situation where the MCAS doesn't work if the MAX shall have the same type rating as previous 737s, so the sensors cannot disagree, and so it would be useless and wasteful to use two sensors. Issuing that disagreement warning would completely undermine the very reason for the existence of the MCAS.
You wouldn’t be able to know which one was wrong but you’d be able to know and annunciate an AOA MISCOMPARE (which was an option on the Max) and then disable MCAS.
There are standards for how many independent inputs are needed based on the criticality of the system. It's not just a guess as to how many are sufficient. That's why categorizing MCAS correctly ('catastrophic' vs. 'hazardous') is important.
No. If you have a hundred AoA sensors, then you know to trust that whatever 90% of them are reporting is the truth. You also know exactly which of the remaining 10% need to be repaired or replaced upon landing.
I'm not sure what you're saying a hard "no" to; that's a strange response to my point. Your response is a little naive in terms of the way sensors work. You won't necessarily know which are reporting the "truth" because in reality there will be a range of reported values: each sensor has a bound of uncertainty. It comes down to understanding what level of uncertainty and reliability is necessary for the application.
In designing airframes, there's actually guidance along these lines to remove the ambiguity of how many sensors are necessary. You can use calculations to define what level of reliability is sufficient. As an example, there are standards like IEC 61508 that outline the procedure for doing such calculations. Many organizations also create their own standards (e.g., five redundant sensors for mission-critical systems, self-diagnostic sensors for safety-critical systems, etc.). It shouldn't be guesswork. It's a risk-based decision, not a subjective guess as to whether two or "a hundred" sensors are necessary.
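The kind of calculation referred to above can be illustrated in its simplest form: if each sensor fails independently with probability p, the chance that all n fail at once is p^n, and standards like IEC 61508 formalize how such numbers map to required integrity levels. (The numbers below are hypothetical, and independence is itself a strong assumption.)

```python
def prob_total_failure(p_single, n_sensors):
    """Probability that all n independent sensors fail simultaneously.

    Independence is a strong assumption: common-mode failures
    (icing, bird strike, shared wiring) violate it in practice,
    which is why adding sensors alone doesn't settle the question.
    """
    return p_single ** n_sensors

# Hypothetical per-sensor failure probability of 1e-4:
# one sensor -> 1e-4, two -> 1e-8, three -> 1e-12.
p2 = prob_total_failure(1e-4, 2)
```

The point of the standards is that you work backwards from the required reliability to the sensor count, rather than picking two or "a hundred" by feel.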
“the system would enable the .. National Security Agency, to instantaneously share select classified information with America’s closest allies in the fight against the Taliban”
Would it also help them figure out who is facilitating the heroin trade out of Afghanistan and where all the money goes ;]
Do you think these security defects are really bugs, or are they back-doors left in for the state security apparatus? Who or what department is tasked with testing Cisco devices for security vulnerabilities? I mean, didn't anyone test the devices for potential remote root access and the ability to bypass the Trust Anchor? Lastly, I don't know how an internet router can be not connected to the Internet and still function.
Doubt it. It's the consequence of bad security practices, incompetence at many levels, rushing to market, and in general how these platforms are designed (which is a consequence of previous statements).
While there are a lot of CVEs for pretty much all equipment like this, from all vendors, they require access to the management interface to be exploited. These devices do the heavy lifting in ASICs/NPUs, so the control plane and forwarding plane are separated (some things requiring CPU processing, such as routing protocols, need to be forwarded from the forwarding plane to the control plane), but they require some configuration to be fully secure; easily done, however.
The control plane is typically a Linux distro these days (some run FreeBSD, QNX, or an in-house developed OS) with some open-source applications on top (Apache or other web servers are common for management), some proprietary apps, ASIC drivers, etc. A Linux distro you are seldom allowed to make changes to or update, for fear that it will cause problems for customers; same with the apps running on it. Even if you do upgrade it, you have to get your customers to do it as well; most upgrades require scheduled downtime and typically come with new fun bugs. Most of the CVEs come from the open-source software running on these devices, and some from messed-up configuration of them. Very few come from the proprietary apps, as they mainly deal with network control protocols and not management.