Visual AI: Mapping Process to Productivity


What’s going on in the production line? This is the perennial question asked by manufacturing executives as they monitor increasingly complex and changeable production lines. Are we properly staffed? Should we invest in more advanced tools? Should we redesign the production process or the product?

In recent years, Industrial Internet of Things solutions have helped reduce uncertainty in highly automated operations. Sensors linked to deep learning algorithms can help predict failures and improve maintenance scheduling, and IIoT monitoring platforms keep teams up to date on the status of every system.

Yet, in high-touch manufacturing operations where much of the work is done by humans, IIoT leaves a lot of unanswered questions.

In the long run, advances in automation and robotics are predicted to replace most human factory workers, but the transition will take decades. A World Economic Forum report from 2019 projected that between 2025 and 2035, the share of global manufacturing jobs with a high potential for automation will grow from 10-15% to 35-50%. That still leaves 50-65% of jobs that are difficult to automate.

Although many tasks are well suited to automation, manufacturers continue to struggle to apply robots to complex, multistep assembly operations. Most robots lack the necessary dexterity, flexibility and commonsense understanding to react to unforeseen situations, especially when working closely with humans.

As a result, high-touch factories will be around for a while. Yet, staffing these operations is a growing challenge. A 2018 Deloitte study projected that through 2028, unfilled manufacturing positions in the US will grow from 2.0 to 2.4 million. The pandemic has accelerated this trend by causing a surge in early retirements.

Manufacturers are facing these challenges just when they are needed to drive the clean energy revolution and other new industries. With manufacturing labor projected to be tight for at least another decade, companies need to improve productivity now.

Beyond stopwatches and clipboards

For decades the most common tool for measuring productivity in the factory has been the Time and Motion study. Yet, this technique is prone to bias and inaccuracy, and in frequently changing operations, it must be repeated at considerable expense.

The Gemba Walk is another useful analytical tool. However, these ambulatory surveys represent a snapshot in time and may not reflect typical operations. As with Time and Motion studies, people usually work more effectively while being watched.

What about all the data being collected from the machinery? By timing the gaps between process steps performed on sensor-monitored equipment, one can make educated guesses about what the workers are doing in between. Yet most of the process remains invisible. Barcoding parts and products offers further clues but misses the action in between.
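The gap-timing approach above can be sketched in a few lines. This is a minimal illustration, assuming a machine event log of (timestamp, station, event) tuples such as a sensor-monitoring platform might export; the station names and timestamps are made up.

```python
from datetime import datetime

# Hypothetical machine event log exported from sensor-monitored equipment.
events = [
    (datetime(2024, 1, 8, 9, 0, 0), "press", "cycle_end"),
    (datetime(2024, 1, 8, 9, 4, 30), "welder", "cycle_start"),
    (datetime(2024, 1, 8, 9, 7, 0), "welder", "cycle_end"),
    (datetime(2024, 1, 8, 9, 13, 0), "paint", "cycle_start"),
]

def manual_gaps(events):
    """Estimate time spent on unmonitored (likely manual) work as the
    gap between one machine finishing and the next machine starting."""
    gaps = []
    for prev, nxt in zip(events, events[1:]):
        if prev[2] == "cycle_end" and nxt[2] == "cycle_start":
            gaps.append((prev[1], nxt[1], (nxt[0] - prev[0]).total_seconds()))
    return gaps

for src, dst, secs in manual_gaps(events):
    print(f"{src} -> {dst}: {secs / 60:.1f} min unaccounted")
```

The limitation the article points out is visible here: the gaps tell you *how long* the invisible work took, but nothing about what it actually was.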

Manufacturing Execution Systems can help, especially when deployed on mobile devices with connected worker software. Yet, most MES platforms require operators to plug in status codes after each step. Employees may forget to enter the codes or enter the wrong codes, and the interruption slows production.

Visual AI for manufacturing

In recent years, a new solution has appeared for analyzing high-touch manufacturing: AI-driven analysis based on image sensor data. Unlike time and motion studies or Gemba Walks, the analysis never stops, and the systems are far less prone to human error and bias.

Until recently, video was used in manufacturing primarily for security or for visual inspection of materials and finished products. A new wave of visual AI tools inspects processes instead of products. Like product inspection, process inspection helps manufacturers avoid top- and bottom-line waste. By revealing ways to streamline operations, automatic activity measurement helps lower labor costs and increase capacity.

In a typical solution, standard image sensors are set up near workstations or assembly lines. The images are streamed to AI software in the cloud or on an edge AI computer. After the AI is trained to identify people, tools, machines, products and processes, the software tracks operations over time and generates charts and statistics.
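The analysis stage of such a pipeline can be sketched as follows. This sketch assumes an upstream detector has already labeled each sampled frame with the activity it shows; the frame labels below are illustrative stand-ins for real model output, not a specific vendor's format.

```python
from collections import Counter

# Illustrative per-frame activity labels, one sampled frame per second.
frame_labels = (
    ["assemble"] * 45 + ["walk"] * 10 + ["idle"] * 5 +
    ["assemble"] * 30 + ["carry"] * 10
)
FPS = 1  # sampling rate: frames per second

def activity_summary(labels, fps):
    """Aggregate per-frame activity labels into time and share per activity."""
    counts = Counter(labels)
    total = len(labels)
    return {
        act: {"seconds": n / fps, "share": n / total}
        for act, n in counts.items()
    }

summary = activity_summary(frame_labels, FPS)
for act, s in sorted(summary.items()):
    print(f"{act:10s} {s['seconds']:6.0f} s  ({s['share']:.0%})")
```

Real systems layer detection, tracking and activity recognition models before this step, but the output is the same kind of time-per-activity aggregate that feeds the charts and statistics described above.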

Some solutions, typically built on edge AI boxes, are optimized for real-time workstation assembly, providing poka-yoke feedback to ensure steps are followed accurately and to improve quality, safety and ergonomics. Other products analyze more complex interactions between people and machines throughout the production line.

Available metrics may include cycle time, throughput, value vs. non-value-add time, PPE usage and safety compliance. The software can track the average number of workers on a given task, as well as time spent standing, sitting, walking, carrying and performing identified tasks such as operating a machine, welding or cutting. Privacy protections include face and body blurring and, in some cases, an option to disable video review and storage.
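Two of these metrics, cycle time and value-add share, fall out directly from a stream of timed activity events. A minimal sketch, assuming each event is an (activity, duration in seconds) pair and that the classification of which activities count as value-add is a made-up example:

```python
# Assumed classification: which detected activities add value to the product.
VALUE_ADD = {"assemble", "weld", "cut"}

# Illustrative events for one observed cycle: (activity, duration_seconds).
cycle_events = [
    ("fetch_parts", 40), ("assemble", 180), ("walk", 25),
    ("weld", 120), ("inspect", 30), ("idle", 15),
]

def cycle_metrics(events, value_add=VALUE_ADD):
    """Compute cycle time and value-add share from timed activity events."""
    total = sum(d for _, d in events)
    va = sum(d for act, d in events if act in value_add)
    return {
        "cycle_time_s": total,
        "value_add_s": va,
        "value_add_share": va / total,
    }

m = cycle_metrics(cycle_events)
print(f"cycle time: {m['cycle_time_s']} s, value-add: {m['value_add_share']:.0%}")
```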

The insights from visual AI can be used to compare shifts and production stations and identify bottlenecks and best practices. The software acts as a copilot for industrial engineers, enabling them to quickly adjust production processes to reflect product, supply and staffing changes.

By sharing the analytics from the front office to the front line, the entire staff can focus on a common point of reference, enabling everyone to become involved in continuous improvement. The software can capture expertise and best practices, and the visual record can be repurposed for interactive training.

Most platforms offer APIs to export data to existing IT platforms to provide a more holistic view of operations.
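Such an export might look like the sketch below: metrics serialized as JSON and posted to an existing IT platform. The endpoint URL and payload schema here are illustrative assumptions, not any specific vendor's API.

```python
import json
from urllib import request

# Hypothetical endpoint on an existing MES/IT platform.
METRICS_URL = "https://mes.example.com/api/v1/metrics"

def build_payload(station, shift, metrics):
    """Serialize station metrics into a JSON request body."""
    return json.dumps(
        {"station": station, "shift": shift, "metrics": metrics}
    ).encode()

def export(payload, url=METRICS_URL):
    """POST the payload to the platform (network call, not run here)."""
    req = request.Request(
        url, data=payload,
        headers={"Content-Type": "application/json"}, method="POST",
    )
    return request.urlopen(req)

body = build_payload("line-3", "day",
                     {"cycle_time_s": 410, "value_add_share": 0.73})
print(body.decode())
```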

Visual AI software can also be used to assist in the transition to automation by analyzing human assembly steps to guide robotics deployments.

Many of these solutions can deliver ROI beyond the factory in warehouses, construction sites and other settings. In the coming years, visual AI will continue to extend the concept of process detection in many new and exciting directions.

About The Author

Cyrus Shaoul is the CEO of Leela AI.
