AI and IISE – Why AI Can’t Do Our Jobs, Updated

By Tamara Wilhite posted 10-03-2020 08:45:41 PM


AI is already being tried in product design through evolutionary mixing and competition, and it is being used to solve problems in the real world. Artificial intelligence is being tasked with studying the symptoms of unusual cases to find potential causes. Think of the rare syndromes that most doctors will never encounter, and the patients who visit dozens of specialists before getting a diagnosis. Worse off are the ones who never receive a proper diagnosis and struggle through treatments for their symptoms because no one knows the real root cause.

Will AIs be asked to solve manufacturing and process-related problems by searching databases? Let's look at the major reasons why that won't happen in the near future.

Intellectual Property Concerns

Few companies are going to share information on how they make their products, the problems they've encountered, and how they solved those issues. Outsourcing to Asian nations with weaker IP protections has already led to factories taking designs, work instructions and parts lists, carrying them to the factory across the street, and making a rival product without paying the royalties owed.

Liability Concerns

Companies would rather bring in many experts bound by confidentiality agreements than use an artificial intelligence tool whose inputs could be leaked, Wikileaks-style, or disclosed by accident as part of someone's demonstration of its abilities. They certainly won't share data that admits potential liability on their part, and any examples they do share will never point the finger at themselves. The result is large omissions of useful data and skewed root-cause results in queries run against it.

A related issue is the risk that comes with data sharing itself. Sharing your data may yield insights and improvements in operations. However, you're also opening a door through which hackers could attack your systems and rivals could learn how your proprietary processes work. In a world where lawyers' abundance of caution often interferes with getting things done or delivering products to market, sharing information so that machines can learn may be verboten.

Incomplete Data

The data most likely to be shared are the happy stories: the industry white papers that describe how well a company solved a given problem. The failures that provide the most useful lessons learned, the advice on what not to do, are the least likely to be shared unless the failures are long past. Many other lessons learned won't be published at all because the companies that experienced them have closed. The negative outcomes that are reported may be sanitized to downplay root causes that make the company look bad. Incomplete data will limit the effectiveness of any data mining aimed at solving manufacturing and process-related problems.
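The distortion this reporting bias produces can be sketched with a toy simulation. The cause names, true failure rates, and publication probabilities below are all hypothetical, invented purely to illustrate the mechanism: when embarrassing causes are published less often, the public record no longer reflects the true root-cause distribution.

```python
import random

random.seed(0)

# Hypothetical ground truth: the real distribution of failure root causes.
TRUE_CAUSES = ["operator error", "supplier defect", "design flaw", "maintenance lapse"]
TRUE_WEIGHTS = [0.25, 0.25, 0.30, 0.20]

# Assumed reporting rates: causes that embarrass the company
# (design flaws, maintenance lapses) are published far less often.
PUBLISH_PROB = {
    "operator error": 0.9,
    "supplier defect": 0.8,
    "design flaw": 0.2,
    "maintenance lapse": 0.3,
}

def observed_distribution(n=10_000):
    """Simulate n failures, keep only the published ones, return their shares."""
    counts = {c: 0 for c in TRUE_CAUSES}
    for _ in range(n):
        cause = random.choices(TRUE_CAUSES, weights=TRUE_WEIGHTS)[0]
        if random.random() < PUBLISH_PROB[cause]:  # report survives into the dataset
            counts[cause] += 1
    total = sum(counts.values())
    return {c: counts[c] / total for c in counts}

print(observed_distribution())
```

Under these made-up numbers, design flaws shrink from 30 percent of true failures to roughly a tenth of the published record, while operator error balloons to around 40 percent, so any model mining that record would systematically blame the wrong things.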

Think about how long it took for the severity of medical errors to be fully realized; they were identified in 2016 as the third leading cause of death in the United States. Then imagine the under-reporting by companies trying to look good, companies that lack the public-good motivation of saving lives to drive fully detailed, honest reporting of the problems they've encountered along with the solutions.

And coming back to the liability question, expect many of the most instructive accidents and mistakes to be excluded from publicly shared data. After all, releasing only the data a company is comfortable with the public seeing minimizes its liability and protects its reputation.

Poor Quality of Data

We'd also face data quality degraded by crediting the wrong solutions. For example, you'd see case studies attributing improved team function to diversity training instead of to quality circles and inter-departmental knowledge sharing. The latest management fad would receive the credit instead of classic Lean engineering principles applied after value stream mapping and cutting the waste.

The ability to correlate the best solution would also be hindered by the renaming of classic problem-solving methods. In one interview, I was asked whether I knew CIP, the continuous improvement project methodology. Yes, I know Six Sigma and have completed dozens of projects. "But is it CIP?" I was asked. That the concept goes back to Deming's plan-do-study-act cycle was irrelevant to the questioner. The name had changed, so it must be different and better. The mistaken belief that newer is better and the novel superior to the familiar leads to re-branding of classic methods, and to reports that credit whatever newly named method management sent people to training on rather than the old names. The end result is that an AI could fail to associate all these different names for the same concepts going back decades to Dr. W. Edwards Deming and Frederick Taylor.
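Any mining effort would need to normalize these rebranded names before counting anything. A minimal sketch of that idea follows; the alias table is entirely illustrative (and deliberately adopts this article's simplification that the rebranded methods all trace back to the same PDSA lineage), since building a real one is exactly the hard, contested part.

```python
from collections import Counter

# Hypothetical alias table: rebranded names mapped to one canonical concept.
# Treating all of these as "pdsa" is an illustrative simplification.
CANONICAL = {
    "cip": "pdsa",
    "continuous improvement project": "pdsa",
    "six sigma dmaic": "pdsa",
    "plan-do-check-act": "pdsa",
    "plan-do-study-act": "pdsa",
    "kaizen event": "pdsa",
}

def normalize(method_name: str) -> str:
    """Map a reported method name to its canonical concept, if known."""
    key = method_name.strip().lower()
    return CANONICAL.get(key, key)

# Five made-up case-study labels; four are the same concept under different names.
reports = ["CIP", "Six Sigma DMAIC", "Kaizen event", "Plan-Do-Study-Act", "5S"]
counts = Counter(normalize(r) for r in reports)
print(counts)  # Counter({'pdsa': 4, '5s': 1})
```

Without the normalization step, the same five reports would count as five unrelated one-off methods, which is precisely the failure mode described above.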


Combine under-reporting of why problems occur, the scarcity of problem reports in the first place, deliberate misclassification of causes driven by public relations and liability concerns, and incorrect identification of solutions, and it is unlikely that such data can be mined as effectively for solving industrial and systems engineering problems as data is being mined in other areas.

On the upside, it does mean industrial and systems engineers can look forward to long and productive careers even as AIs challenge many other areas of knowledge work.