The Reliability Process: The Role of Statistics in Understanding the Future.

The second field of activity for statistics (apart from the present and the past) is predicting the future.

Prophecies used to be a matter for blind visionaries. Statistics, by contrast, uses the available data and extrapolates it into the future. This is particularly relevant for validating reliability across the product life cycle and for all types of preventive maintenance.

For these methods to work, the future must in principle be extrapolatable from the present, i.e., the future must be qualitatively similar to the present. This holds for the reliability of the technology as well as for its use; naturally, it cannot be expected for innovations.

But there is hope: as long as one of these two aspects remains stable, the effects of change in the other can often be estimated accurately. For example, the load collective for a new wind turbine is simulated from turbines in its near vicinity, or the performance requirements for future electric vehicles are derived from the use space of current combustion-engine vehicles. This delivers good approximations that provide a solid basis for product development. For charging operation, this is naturally not possible, since there both aspects, use and technology, are qualitatively new.

Generally speaking, if you want to make data-driven statements about a process, the sought-after information must actually be present in the data. As trivial as this sounds, in practice huge and often hidden obstacles lie in the path of applying statistical procedures.

First of all, the data has to be available. Today's big data/deep learning approaches simply assume that this is true, and it is actually (partially) the case. In the context of reliability, however, the situation is different: we are looking for failure indicators in fleet data. Normally, the failure rate in real fleets or plants is low, and of the many possible failure risks only a few ever materialise.

Hence, to train a data-based model, numerous cases would be required for every realistically possible failure mode, which means a fleet that functions as poorly as possible. This is not normally the case. The training of models for maintenance optimisation therefore remains largely illusory, and purely data-driven models hardly ever work in practice.
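The imbalance described above can be made concrete with a small sketch. All fleet numbers and failure-mode names below are invented for illustration:

```python
from collections import Counter

# Hypothetical fleet log: one status record per unit-year.
# In a healthy fleet almost every record is "ok"; each concrete
# failure mode appears only a handful of times.
records = ["ok"] * 9950 + ["bearing_wear"] * 30 + ["seal_leak"] * 15 + ["gearbox_crack"] * 5

counts = Counter(records)
for mode, n in counts.most_common():
    print(f"{mode:15s} {n:5d} ({100 * n / len(records):.2f}%)")

# A supervised model would need many examples of every failure mode;
# here the rarest mode has only 5 cases in 10,000 unit-years.
```

Five examples of a failure mode are far too few to train and validate a classifier against, which is exactly why purely data-driven maintenance models struggle on healthy fleets.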

What can be done well and quickly, however, is to determine the load space for different modes of operation and boundary conditions. Seen from the healthy system, this narrows down the region of damaging operation. A solid data set exists for this, and system control data provides a starting point for developing algorithms that indicate damaged parts.
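A minimal sketch of this idea, assuming invented mode names and load values: the healthy load space per operating mode is described by an envelope, and observations outside it are flagged as candidates for damaging operation.

```python
# Healthy fleet observations per operating mode (values are invented).
healthy = {
    "idle":      [2.1, 2.4, 1.9, 2.2],
    "full_load": [9.5, 10.1, 9.8, 10.4],
}

# Envelope of the healthy load space: (min, max) per mode.
envelope = {mode: (min(v), max(v)) for mode, v in healthy.items()}

def outside_envelope(mode, load, margin=0.1):
    """Flag loads that fall outside the healthy envelope plus a margin."""
    lo, hi = envelope[mode]
    span = hi - lo
    return load < lo - margin * span or load > hi + margin * span

print(outside_envelope("full_load", 9.9))   # within the healthy load space
print(outside_envelope("full_load", 12.0))  # candidate for damaging operation
```

In practice the envelope would be multidimensional and conditioned on boundary conditions, but the principle stays the same: the healthy system defines the reference, not the (rare) failures.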

Data availability across stakeholders (manufacturer, operator, service personnel) is anything but self-evident. Although there is consensus on the advantages of shared data, sharing fails because operational data lies in the competitive zone of the value chain. Ultimately it is all about expanding business, and for that, as is well known, other rules apply.

Data must be relevant for future situations.

For reliability, availability, and maintenance, this means that the causes of failure must be contained in the data. A large proportion of long-term downtime is driven by episodes of limit load, transient load, sharp gradients, starts, shifting events, and similar transient phenomena. Although this information is contained in the measured time series, common averaging algorithms are very efficient at annihilating transient events. What normally remains is all manner of boring and unproductive averages and scatter bands.
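How averaging annihilates transients is easy to demonstrate. In this sketch (invented sample values) a brief overload episode vanishes in the mean but survives in peak and gradient statistics:

```python
# 60 samples of a load signal with a short overload at the end.
signal = [1.0] * 58 + [8.0, 8.5]

mean = sum(signal) / len(signal)
peak = max(signal)
max_gradient = max(abs(b - a) for a, b in zip(signal, signal[1:]))

print(f"mean = {mean:.2f}")                  # looks harmless
print(f"peak = {peak:.2f}")                  # reveals the overload episode
print(f"max gradient = {max_gradient:.2f}")  # reveals the sharp transient
```

The mean of roughly 1.24 gives no hint that the system briefly saw more than six times its nominal load, which is precisely the information a damage model would need.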

Once one has understood how technical systems can fail, this knowledge can be used to determine what information is required, i.e., the necessary measurement data and their processing can be designed so that information is compressed instead of annihilated.
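One possible form of such failure-aware compression, sketched with an invented window of samples: instead of storing only the window mean, each window is reduced to a small feature vector that also keeps the damage-relevant transients.

```python
def compress(window):
    """Reduce a window of samples to features that preserve transients."""
    grads = [abs(b - a) for a, b in zip(window, window[1:])]
    return {
        "mean": sum(window) / len(window),   # classical average
        "peak": max(window),                 # limit-load episodes
        "max_grad": max(grads),              # sharp gradients
    }

window = [1.0, 1.1, 0.9, 7.5, 1.0, 1.0]  # invented window with one spike
features = compress(window)
print(features)
```

The storage cost grows only by a constant factor per window, while the transient events that drive damage remain visible in the compressed record.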

Reliability data needs to be contextualised. 

We do not profit from data but from information. For maintenance purposes, for example, it must be documented which component a load collective was measured for and where in the fleet that component was installed over its lifetime. Databases containing operational loads, system responses, and load collectives must therefore be connected with the hardware documentation. Not all asset management systems or CMMS manage this in sufficient detail. Data lakes are often not connected to any hardware mainland; they lie as forlorn as salt lakes in an information desert.
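A minimal sketch of such contextualisation, with invented serial numbers and fields: load collectives are joined to the hardware documentation by serial number, and records without a hardware counterpart are exposed as orphans.

```python
# Hardware documentation: one entry per installed component (invented data).
hardware_doc = {
    "GB-0042": {"component": "gearbox", "plant": "WT-07", "installed": "2019-04-12"},
}

# Measured load collectives, keyed by component serial number.
load_collectives = [
    {"serial": "GB-0042", "load_class": "high", "cycles": 120_000},
    {"serial": "GB-9999", "load_class": "low", "cycles": 5_000},  # orphaned record
]

contextualised, orphans = [], []
for rec in load_collectives:
    doc = hardware_doc.get(rec["serial"])
    if doc:
        contextualised.append({**rec, **doc})   # data becomes information
    else:
        orphans.append(rec)  # a data-lake record without a hardware mainland

print(len(contextualised), len(orphans))  # 1 1
```

The orphan list is the salt lake of the metaphor: measurements that exist but can never be traced back to a component, a position, or a lifetime.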

When dealing with reliability, we are normally working in an environment steeped in history. We can therefore answer many questions regarding reliability using statistical procedures and create a solid basis for many decisions in the product life cycle.

The ideal path is a combination of data sources. Even the simple comparison of a system with its equivalent neighbouring instance is impressively powerful. It is also very rewarding to combine the load history with the maintenance history and the downtime events. However, the diverse sources must all first be converted into a mutual format. We have developed a solution for that, too.
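The conversion to a mutual format can be sketched as follows, with invented events and field names: each source is mapped to a common timestamped record, after which the sources merge into one timeline.

```python
from datetime import date

# Three sources, each already converted to the mutual record format:
# (timestamp, source, event). All events are invented for illustration.
load_history = [(date(2023, 3, 1), "load", "limit-load episode")]
downtime     = [(date(2023, 3, 4), "downtime", "vibration trip")]
maintenance  = [(date(2023, 3, 5), "maintenance", "bearing replaced")]

# Once the formats agree, combining the sources is a plain sorted merge.
timeline = sorted(load_history + downtime + maintenance)
for ts, source, event in timeline:
    print(ts.isoformat(), source, event)
```

On the merged timeline, the causal chain from a limit-load episode through a downtime event to the repair becomes visible, which no single source shows on its own.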



Statistical forecasts are extrapolations from the past. They are therefore blind to anything that is new in your products.


If you aggregate data, only very little information about the state of the monitored system survives.


If, however, you connect data with expert knowledge, you have found the silver bullet, in both economic and technical terms, for maximising information.
