Speaking as someone who has been responsible for "turning the lights back on" to fix problems with "fully-automated", "lights-out" factory lines, much of this paper still rings true forty years on - if nothing else as a check against our engineering hubris. It remains tremendously difficult to quash entirely the long tail of things that can go wrong in a factory.
That said, many of the contentions raised here really have been substantially resolved by increased computing efficiency and ubiquitous connectivity. The touted expert human operator's ability to see and understand processes from a high level, informed by years of observing (and hearing, and "feeling") machine behavior, has truly been eclipsed by an advanced machine's capacity to collect increasingly granular snapshots of its complete operating state - the temperatures, vibrations, positions, and other sensations of its various organs and elements - every few milliseconds, hold on to that data indefinitely, and correlate and interpret it in ever-expanding radii of causation.
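To make that concrete, here's a rough sketch of the collect-and-correlate loop I mean. The channel names and the read_sensor() call are made-up stand-ins for whatever data-acquisition layer a real line exposes; the correlation at the end is just a crude hint at the "radii of causation" idea.

```python
import time
import numpy as np

CHANNELS = ["spindle_temp_C", "bearing_vib_mm_s", "axis_x_pos_mm"]  # hypothetical channels

def read_sensor(channel: str) -> float:
    """Placeholder for a real data-acquisition call (PLC, OPC UA, etc.)."""
    return float(np.random.normal())  # stand-in signal for the sketch

def collect(duration_s: float = 1.0, period_s: float = 0.005) -> np.ndarray:
    """Snapshot every channel once per period; return an array of samples x channels."""
    rows = []
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        rows.append([read_sensor(c) for c in CHANNELS])
        time.sleep(period_s)
    return np.array(rows)

if __name__ == "__main__":
    data = collect()
    # Cross-channel correlation: which signals tend to move together?
    corr = np.corrcoef(data, rowvar=False)
    for i, a in enumerate(CHANNELS):
        for j, b in enumerate(CHANNELS):
            if i < j:
                print(f"{a} vs {b}: r = {corr[i, j]:+.2f}")
```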
The best human operators (of any technology) not only respond to problems; they anticipate and prevent or plan around them. Massive data, advanced physics-based simulations, and "digital twinning" capabilities of manufacturing equipment afford pre-emptive testing of virtually infinite scenarios.
Not only can you simulate throwing a wrench in the works - you can simulate the effect of the wrench entering the works at every possible angle!
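As a toy illustration of that kind of sweep (the impact model and damage threshold below are invented stand-ins, not real physics or a real twin):

```python
import math

def simulate_impact(angle_deg: float, speed_m_s: float = 3.0) -> float:
    """Stand-in physics: energy delivered to the mechanism for a given entry angle."""
    wrench_mass_kg = 0.5
    kinetic = 0.5 * wrench_mass_kg * speed_m_s ** 2
    # Toy assumption: only the velocity component normal to the belt does damage.
    return kinetic * abs(math.sin(math.radians(angle_deg)))

DAMAGE_THRESHOLD_J = 1.5  # assumed tolerance of the mechanism

results = {angle: simulate_impact(angle) for angle in range(0, 360, 5)}
safe = [a for a, e in results.items() if e < DAMAGE_THRESHOLD_J]
print(f"{len(safe)} of {len(results)} entry angles stay under {DAMAGE_THRESHOLD_J} J")
```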
It's not infallible, and it will for a long time still require a human-in-the-loop at some level, but as the author themselves rightly put it near the end of the paper:
"It would be rash to claim it as an irony that the aim of aiding human limited capacity has pushed computing to the limit of its capacity, as technology has a way of catching up with such remarks."
Do you think that, with AI/ML techniques fully integrated into manufacturing automation, people will become fully obsolete for this kind of work within our lifetimes? As a cloud/software/network guy, I am curious to hear the opinion of someone who is knowledgeable in this area.
Within our lifetimes (say, the next 40-60 years), no, personally I don't think we'll see completely autonomous end-to-end manufacturing widely implemented (as much as I'd like to, considering it's a problem space I focus on!).
Some pockets of industry are much further ahead than others, but it will take A LOT of work to reach parity across the board - if not for technical reasons (which I'm more optimistic about), then for political and social reasons, as these systems, and our understanding of them, adapt. That's a whole 'nother discussion...
AI/ML will play a huge role. Not only in machine resilience once commissioned and operating, but upstream and downstream as well. Better (AI/ML-assisted) tools for designing products and the factories/equipment that make them will preempt some of the challenges caused by the currently disjointed process.
I disagree with the comment that AI/ML techniques are only useful once you've physically built a plant. There are of course emergent behaviors that only crop up when the dynamics of the whole unique factory are at play, but any given problem that arises is almost always traceable to one or a small number of subcomponent failures, for which better, more granular datasets are becoming available to train AI on.
And, as I mentioned in my comment about throwing virtual wrenches in virtual works - simulations can begin to generate training data sets as well!
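In sketch form, that looks something like this: label simulated fault scenarios with a twin you trust and fit a simple classifier on them before the physical line ever sees a real fault. The simulated trace and the features here are invented stand-ins for whatever a real twin would produce.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate_vibration(fault: bool) -> list[float]:
    """Stand-in for a digital-twin run: summary features of one simulated vibration trace."""
    trace = rng.normal(0.0, 1.0, size=500)
    if fault:
        # Inject sparse spikes to mimic a (hypothetical) bearing defect signature.
        trace += rng.normal(0.0, 3.0, size=500) * (rng.random(500) < 0.05)
    return [trace.std(), np.abs(trace).max(), np.percentile(np.abs(trace), 99)]

# Half the simulated runs are labeled faulty, half healthy.
X = np.array([simulate_vibration(fault=i % 2 == 1) for i in range(2000)])
y = np.array([i % 2 for i in range(2000)])

clf = LogisticRegression().fit(X, y)
print("training accuracy on simulated faults:", clf.score(X, y))
```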
They won't. A new line needs to be "trained" for an ML system to be useful. In our lifetime I don't expect generalization across manufacturing lines to be feasible or economically advantageous.