Solve historical challenges and improve plant profitability through analytics-based insights.
TrendMiner

As disruption has escalated in 2017, industry faces increasing pressure to transform in order to remain competitive. That means finding ways to leverage new technologies and identify opportunities for optimization. One of the best ways to gain insight is to apply advanced analytics to the data generated by revenue-generating assets and processes, but until now the challenges associated with achieving this goal have prevented many companies from reaping the potential benefits.

What Are Industrial Analytics?

Industrial analytics refers to the collection, analysis and use of data generated in industrial operations. This covers a wide range of data captured from all kinds of sources and devices, whether an asset or a production process. Anything with a sensor creates data, and industrial analytics examines all this data. Naturally this means “big data” analytics, but industrial analytics differs from generic big data analytics in that the systems are designed to meet the exacting standards of the industries they serve. This includes the ability to process vast quantities of time series data from various sources and turn it into actionable insights. Industrial analytics can be relevant to any company that manufactures and/or sells physical products.
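To make the time series aspect concrete, here is a minimal, illustrative sketch of one typical first step: averaging irregularly timestamped sensor readings into fixed one-minute buckets so data from different sources can be compared on a common time grid. The function name and sample data are invented for illustration and do not come from any particular product.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def resample_mean(readings, bucket_seconds=60):
    """Average irregular (timestamp, value) readings into fixed-size
    time buckets -- a common first step before analyzing plant data."""
    buckets = defaultdict(list)
    for ts, value in readings:
        # Align each reading to the start of its bucket.
        epoch = int(ts.timestamp()) // bucket_seconds * bucket_seconds
        buckets[epoch].append(value)
    return {datetime.fromtimestamp(e): sum(v) / len(v)
            for e, v in sorted(buckets.items())}

# Four readings at irregular offsets within two minutes:
t0 = datetime(2017, 6, 1, 8, 0, 0)
raw = [(t0 + timedelta(seconds=s), v)
       for s, v in [(5, 10.0), (40, 12.0), (65, 11.0), (110, 13.0)]]
print(resample_mean(raw))  # two buckets: means 11.0 and 12.0
```

Real historians and analytics platforms offer far richer resampling (interpolation, hold-last-value, and so on); the point is only that raw sensor streams rarely line up and must be put on a shared clock before analysis.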

The Problem with Industrial Analytics

Many organizations can see the potential benefits of an analytics solution, but the time and capital needed to achieve it have historically put it out of reach. That’s because in the past, industrial analytics has been run like a generic big data analytics project. The traditional and most common approach to industrial analytics involves data scientists building an analytics model. Data scientists must understand the use case and then gather, transform, optimize and load the data into the developed data model, which then needs to be validated, optimized and trained. The completed data model provides answers to the initial questions. Aside from the considerable time and expense needed to realize results, this way of working has another disadvantage. It leaves companies dependent on their data scientists and results in a solution that subject matter experts (engineers and operators) may not fully understand.

Solving the Industrial Analytics Stalemate

Luckily, as disruptive technology brings new challenges, it also creates new solutions. In the last few years, there has been a growing trend toward self-service applications. This next generation of software uses advanced search algorithms, machine learning and pattern recognition technologies to make querying industrial data as easy as using Google.

No Data Scientist Required

With self-service industrial analytics, there is no need to model data. Companies do not need a data scientist to use the software, and there is no long project timeline or high cost. Instead, subject matter experts directly query their process data at any time in a self-service application. Pattern recognition and machine learning algorithms let users search process trends for specific events or detect process anomalies. By combining search capabilities across both the structured time series process data and the data captured by operators and other subject matter experts, users can predict more precisely what is occurring, or what is likely to occur, within their continuous and batch industrial processes. For example, an operator can compare multiple data layers or time periods to discover which sensors deviate most from the baseline, and then make adjustments to improve production efficiency.
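To give a feel for what searching a trend for a specific event involves, the sketch below ranks every sliding window of a series by its z-normalized Euclidean distance to a query shape, so the same event is found regardless of its absolute level or amplitude. This is a toy illustration under invented names and data; production tools use far more sophisticated (and faster) similarity search than this brute-force scan.

```python
import math

def znorm(xs):
    """Normalize a window to zero mean and unit variance so that
    similarity depends on shape, not absolute sensor level."""
    mu = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - mu) ** 2 for x in xs) / len(xs))
    return [(x - mu) / sd if sd > 0 else x - mu for x in xs]

def find_similar_windows(series, pattern, top_k=3):
    """Rank every window of the series by z-normalized Euclidean
    distance to the query pattern (smaller = more similar)."""
    q = znorm(pattern)
    m = len(pattern)
    scored = []
    for i in range(len(series) - m + 1):
        w = znorm(series[i:i + m])
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(w, q)))
        scored.append((dist, i))
    scored.sort()
    return [start for _, start in scored[:top_k]]

# The "event" shape appears twice in this toy trend:
trend = [0, 0, 1, 3, 5, 2, 0, 0, 0, 1, 3, 5, 2, 0, 0]
event = [1, 3, 5, 2]
print(find_similar_windows(trend, event, top_k=2))  # -> [2, 9]
```

The same distance scan inverted (largest distances first) is one naive way to surface anomalies: windows least like anything seen during normal operation.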

Designed to be Used

Self-service analytics tools are also designed with end users in mind. They incorporate robust algorithms and familiar interfaces to maximize ease of use without requiring in-depth knowledge of data science. No model selection, training or validation is required; instead, users can directly query information from their own process historians and get one-click results. Immediate access to answers encourages adoption of the analytics tool because the value is proven instantly. Precious time is saved and previously hidden opportunities for improvement are unlocked. This self-service analytics approach puts the power into the hands of the process experts, engineers and operators who can best identify and annotate areas for improvement.

Why Give this Power to End Users?

Working with time-series data is often best done by the subject matter experts (such as process engineers and control room staff) who know what to look for in case of anomalies in process behavior and how to find root causes. They can also identify best-performance regimes that can be used to define ideal production and the conditions for live process monitoring and performance prediction. These subject matter experts are, in fact, the key to improving the company’s profitability. All they need is the tool. By democratizing access to analytics insights, actionable information becomes available at all levels of the plant. This translates into the ability to achieve incremental improvements at all stages of the production process.
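One simple way to picture a "best performance regime" used for live monitoring: capture a per-sensor band (mean ± k standard deviations) from a known good operating period, then flag any live reading that falls outside its band. The sketch below is a deliberately minimal, hypothetical example; the sensor names and thresholds are invented, and real monitoring logic is considerably richer.

```python
import statistics

def build_fingerprint(good_period, k=3.0):
    """From a known good operating period, capture a per-sensor band:
    mean +/- k population standard deviations."""
    fp = {}
    for sensor, values in good_period.items():
        mu = statistics.fmean(values)
        sigma = statistics.pstdev(values)
        fp[sensor] = (mu - k * sigma, mu + k * sigma)
    return fp

def check_reading(fingerprint, reading):
    """Return the sensors whose current value falls outside the band."""
    return [s for s, v in reading.items()
            if not (fingerprint[s][0] <= v <= fingerprint[s][1])]

# A short "ideal production" window for two hypothetical sensors:
good = {"temp_C": [80.1, 80.3, 79.9, 80.0],
        "flow_m3h": [12.0, 12.1, 11.9, 12.0]}
fp = build_fingerprint(good)
print(check_reading(fp, {"temp_C": 80.2, "flow_m3h": 15.5}))
```

Here the temperature reading sits inside its band while the flow reading does not, so only the flow sensor is flagged; in practice the subject matter expert chooses which period counts as "good" and how tight the band should be.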

The Real Benefits

With an industrial analytics solution, users gain insight into all their assets and processes. Previously hidden trends and patterns become clear and can inform decision-making. From high-level performance monitoring to the most granular investigations, industrial analytics delivers insight where it is most needed: in safety, efficiency and performance. An industrial analytics solution that focuses on self-service brings benefits to day-to-day plant operation, including improved root cause analysis, objective performance prediction, automated monitoring and knowledge retention. When analytics insights are shared across the organization, users can take immediate action as soon as a trend appears and directly contribute to improving overall plant performance at all levels of production.

Enabling a Modern Engineering Analytics Organization

Just as technology has evolved to create connected plants, so engineers must be empowered to manage these facilities. This is a critical shift in business culture, since the entire organization must understand the potential of analytics as it applies to each role. Instead of relying solely on a central analytics team that owns all the analytics expertise, subject matter experts, such as process engineers, are empowered to answer their own day-to-day questions. Not only will this spread the benefits to all engineers involved in process management, it will also free the data scientists to focus on other critical business issues.

Enabling engineers does not mean asking them to become data scientists. It means providing them access to the benefits of process data analytics. Process engineers will not become data scientists because their educational backgrounds differ (chemical engineering versus computer science). However, they can become analytics aware and enabled. This process is sometimes referred to as “the rise of the citizen data scientist,” a growing trend in which experts in their own disciplines (such as engineering) add analytics capabilities to their core competencies rather than splitting the analysis from the data. Involving engineers in analytics allows them to solve more day-to-day questions independently and increase their own effectiveness. They in turn provide their organizations with new insights based on their specific expertise in engineering. This delivers value to the owner-operator at all levels of the organization and leverages (human) resources more efficiently.

Conclusion

Many people wonder whether analytics is worth the time needed to get started, and the answer is usually yes. With a self-service industrial analytics tool, the benefits can be great while the time investment is small, especially for a company that’s not ready to invest in on-site data scientists. With a self-service industrial analytics solution, users don’t have to wait while a data model is selected and built. Immediately after deployment, they can begin analyzing the historic and live performance data from assets and processes. Additionally, the software is plug-and-play, often implemented by in-house Information Technology (IT) teams within an hour. Likewise, involved training is not required because the simple interface is easy to operate and quick to learn.