A new report shows manufacturing is often a laggard in data management processes. Jeff Nygaard explores the implications and options for facility managers.
Many manufacturers have deployed large numbers of IoT devices and platforms to manage their operations, in the hope that digital technologies will deliver a competitive advantage. But while data is being produced at unprecedented volumes, the manufacturing sector is let down by its data management practices. Without a clear view of what they want from their data, manufacturers can struggle to accomplish even basic goals, and factories risk being flooded with tens of thousands of uncoordinated IoT devices deployed across different platforms.
Smart manufacturing evolves together with its people and its systems. For example, in the last five years, Seagate has optimised its footprint and now operates out of seven factories across three continents, the equivalent of 95 football fields of factory floor space, using more than 35,000 robots and managing a manufacturing cycle time longer than six months. Those are big numbers, but manufacturing is more than just the numbers; it is about the systems and the people. Producing high-quality products requires innovation in data analytics and anomaly detection, which in turn requires AI and machine learning (ML) to manage the complexity, and seize the opportunities, that come with data growth.
The key element to it all is data connection and integration. More than 30 TB of data is generated each day in our manufacturing processes, including images, tool sensor readouts, metrology, and test results. In the future we will also integrate other types of data, such as video, into our data platform. All this data is traceable to the individual component part ID and to the finished product. It is a rich resource for building ML models to power AI applications that improve our quality and efficiency.
It would seem, however, that our manufacturing experience is not the norm. A new report, ‘Rethink Data’, based on IDC’s global survey and commissioned by Seagate, finds that manufacturing is often a laggard in data management practices. The sector shows the lowest level of task automation in data management and the lowest rate of full integration of data management functions on a single platform.
While manufacturing may generate a significant amount of sensor and device-related data, much of it is produced at the edge and discarded, rather than transferred to a core environment for long-term storage.
It’s clear there is a disconnect between the digital assets and data management practices within the manufacturing industry; but what’s holding back progress? Is it time for a rethink for those managing these facilities?
Part of the problem is the skills gap. The manufacturing sector often has difficulty attracting skilled new employees who are willing to work on the plant floor and who bring both hard and soft IT skills. Connecting different generations of assets on the plant floor is also a challenge. In many cases, legacy infrastructure simply won’t be able to keep up with the number of connected assets entering the plant. That leads plants to implement ad hoc processes to connect and manage assets, without being able to rely on the underlying infrastructure for comprehensive management.
This piecemeal approach to data management undermines the overarching goal of data-driven strategies: to deliver insights that drive business value. For example, metrology measurements on material are necessary to know if your tools and processes are in control. These measurements can be time consuming and require expensive instruments to complete.
This process requires relating the data you are collecting from your tools and sensors to the performance of the material that is being produced. However, this data is often stored in multiple locations and, if it can’t be pulled together easily, it is difficult to assess the health of your tools to predict when they should be undergoing maintenance.
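That join-then-assess workflow can be sketched in a few lines. This is a minimal illustration, not Seagate's actual schema: the tool names, lot IDs, sensor fields and the `thickness_err` metric are all hypothetical, as is the idea of flagging a tool for maintenance when the average measurement error of the material it processed exceeds a limit.

```python
from statistics import mean

# Hypothetical sample data: per-run sensor readouts keyed by tool and lot,
# and metrology results keyed by the lot (material) ID.
sensor_log = [
    {"tool": "etch-01", "lot": "L001", "vibration": 0.21},
    {"tool": "etch-01", "lot": "L002", "vibration": 0.35},
    {"tool": "etch-02", "lot": "L003", "vibration": 0.08},
]
metrology = {
    "L001": {"thickness_err": 0.9},
    "L002": {"thickness_err": 2.4},
    "L003": {"thickness_err": 0.3},
}

def tool_health(sensor_log, metrology, err_limit=1.5):
    """Join sensor and metrology data on the lot ID, then flag tools whose
    processed material shows an above-limit average measurement error."""
    by_tool = {}
    for row in sensor_log:
        err = metrology[row["lot"]]["thickness_err"]
        by_tool.setdefault(row["tool"], []).append(err)
    return {tool: mean(errs) > err_limit for tool, errs in by_tool.items()}

print(tool_health(sensor_log, metrology))
# etch-01 averages (0.9 + 2.4) / 2 = 1.65 > 1.5, so it is flagged for maintenance
```

The hard part in practice is not the join itself but the fact that, as noted above, these records often live in separate systems that were never designed to share a common key.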
According to IDC’s research, many data management headaches can be solved by implementing DataOps, an emerging discipline that connects data consumers with data creators. Yet the same survey reveals that in Australia, just six percent of respondents have fully implemented DataOps in their organisations, while 88 percent are still considering it.
DataOps can be used to correlate data from disparate sources, a capability not easily achieved through other means. Precisely because it is difficult, organisations able to master it can expect an edge over the competition.
Returning to the earlier metrology example, it is possible to use the tool sensor data collected during a process run to predict a downstream metrology measurement. To do this, you need the ability to relate the data from tool runs, material IDs and past metrology results on that material in order to build a predictive model for the measurement.
This means you can make fewer metrology measurements and will need fewer metrology tools, decreasing overall cycle time and the cost of non-value-added operations.
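In its simplest form, such a "virtual metrology" model is just a regression from sensor readings on past runs to the measurements recorded for the same material IDs. The sketch below fits a one-variable least-squares line; the sensor variable, the numbers and the idea that an in-spec prediction lets a physical measurement be skipped are all illustrative assumptions, and a production model would use many sensor channels and a proper ML library.

```python
# Minimal "virtual metrology" sketch: fit a least-squares line mapping one
# tool sensor reading (say, chamber temperature) to the downstream metrology
# measurement, using past runs already joined on material ID.
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# (sensor reading, measured value) pairs from past runs; values are made up.
past_runs = [(200.0, 10.0), (210.0, 12.0), (220.0, 14.0)]
slope, intercept = fit_line([r[0] for r in past_runs],
                            [r[1] for r in past_runs])

def predict(sensor_value):
    """Predict the metrology measurement for a new run from its sensor data;
    if the prediction is comfortably in spec, skip the physical measurement."""
    return slope * sensor_value + intercept

print(predict(215.0))
```

Each time a physical measurement is actually taken, it becomes a new training point, so the model and the sampling plan can improve together.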
Traditional analytics takes a problem and searches for an answer; DataOps makes data associations and searches for insights. This approach can lead to better predictive analytics and event correlations.
For example, the ability to relate data produced during the manufacturing process to data collected from products that have been sold and are in the field can tell us if any conditions during the manufacturing process led to quality issues. This requires storing data from your manufacturing edge location for at least as long as the products that were produced are active in the field – typically many years.
The data produced by products in the field might also be in edge locations, so it makes sense to move the data from the manufacturing edge and the field edge into central locations together. If field incidents occur, you can then figure out what caused them during the manufacturing process and make changes to prevent them in the future. You can also use the resulting model to predict what other products might be at risk of similar issues and take proactive action to replace them.
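Because every unit is traceable by its component part ID, the root-cause-then-act loop described above reduces to a join between field incident records and manufacturing records, followed by a search for other units built under the same conditions. The sketch below is a deliberately simplified, hypothetical version: the serial numbers, the `solder_temp` attribute and the single-condition matching rule are all illustrative, standing in for a real model trained on many manufacturing variables.

```python
# Hypothetical manufacturing records keyed by unit serial number, plus a list
# of serial numbers that have had field incidents.
mfg_records = {
    "SN100": {"line": "A", "solder_temp": 245},
    "SN101": {"line": "A", "solder_temp": 245},
    "SN102": {"line": "B", "solder_temp": 235},
}
field_incidents = ["SN100"]

def at_risk(mfg_records, field_incidents, key="solder_temp"):
    """Join field incidents to manufacturing data by part ID, collect the
    manufacturing condition(s) seen on failed units, and return the other
    in-field units built under the same condition."""
    bad_values = {mfg_records[sn][key] for sn in field_incidents}
    return sorted(sn for sn, rec in mfg_records.items()
                  if rec[key] in bad_values and sn not in field_incidents)

print(at_risk(mfg_records, field_incidents))  # → ['SN101']
```

The join only works if the manufacturing-edge data is still available when a field incident occurs, which is exactly why the long retention window matters.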
DataOps does require an investment in technology: long-term storage for historical data to mine for insights and to train models, fast storage for buffering data into your compute infrastructure for near-real-time insights, and information buses and messaging systems to relay information between systems and tools. However, it also requires an investment in people, such as tool engineers and data scientists, and in processes around data governance, model life cycles and monitoring.
Facility managers need to take this into careful consideration because, without a data management strategy, businesses will continue to struggle to make sense of the data they collect. The challenge for companies now is to put more of their data to work to generate significant competitive advantage in the marketplace.
Jeff Nygaard is executive vice president and head of operations, products and technology at Seagate Technology.