Interview with Ali Mousavi, Reader at the Department of Electronic and Computer Engineering, College of Engineering and Physical Sciences, Brunel University. He is an expert in Mathematical Modelling and Simulation, Applied Control and Computing.

Within Z-BRE4K, Brunel University will employ event modelling and event clustering algorithms to predict failures, fatigue and wear in the machines.

  1. Ali, you are leading WP3 within Z-BRE4K and, in particular, you are responsible for coordinating the machine learning activities within the project. Could you summarize for us the scope of the Predictive Modelling tasks you are taking care of?

We are trying to deploy advanced machine learning and AI technology to capture the causal relationships between machine states and potential failures of the electrical and mechanical components of the manufacturing machinery involved in the project. In the context of machine manufacturing, this concept is wider than just machine breakdowns.

For example, for SACMI, which is an OEM, we are dealing specifically with machine breakdowns, while at Philips we look at secondary signs of how the machine is working, but more importantly at the mould and at the physically observable parts, like the materials. Gestamp has a similar use case. The good thing about the variation in our end users is that we have the experience and knowledge that we can accumulate from an OEM like SACMI, but also applications in places where the OEM will not provide us with detailed information about the condition of the machines; there we try to use other capabilities to extract the necessary data on potential causes of failures and breakdowns of equipment, machines and tools.

  2. In this regard, could you detail the activities carried out so far with the end users?

One of the major challenges of data-driven system modelling, and of understanding the performance of a system, is that it requires a lot of continuous, good-quality data. This is very challenging in manufacturing lines where you have a combination of old and new machines. We have production lines where there is a lot of knowledge and know-how that has been deployed by the operators but never really registered or organized in a systematic manner. Even where information has been collected, it is not stored in an accessible and understandable way; there is no uniformity in the acquisition or distribution of data, and every user uses and interprets it as they wish.

Our first challenge is to understand the strengths and weaknesses of data-driven systems analysis and modelling compared to classical mathematical modelling. What we have done with our end users, for example with SACMI, is to collect information on the old and new machines: the data coming from the automated data acquisition system established in the new machines, but also the historical data collected over the years by experts and engineers through their experience in maintenance operations. Now we are looking at how to combine the information extracted from the machines with the FMEA knowledge of both OEM and customer experts on breakdowns.

We are hoping to synchronize this information to find causal relationships, using machine learning and event tracking techniques to build the story of breakdowns through the Z-BRE4K strategies. We are at the beginning of that pathway and hope to get there soon.

  3. Apart from Brunel University, other machine learning experts are involved in the different use cases of the project. Could you please tell us how you are coordinating with each other?

Within WP3 we are creating a layered and modular structure, with clusters of partners working on specific use cases. This approach helps to concentrate the effort on the specific needs of each industrial demonstration. In the end, we will integrate the information coming from the single use cases to obtain the whole scenario, and we will transfer best practices from one use case to another to maximize the impact of the results. All the results will be merged into a common system, which will be modular and will allow users to choose the strategy and the predictive models to deploy based on the type of data available.

  4. Brunel University’s approach is focused on tracking events, while some other partners working on machine learning take different approaches, based more on deep neural networks or related algorithms. What is your approach? And what is the advantage of putting all these solutions together?

Deep neural networks rely on historical data and previous knowledge. This is why deep learning is successful when there is good knowledge, or a good estimate, of the output with which to train your system. Once you have defined the output you expect, you collect a set of points that, matched together, will give you back that expected output. Consider the identification of people at an airport: they have your photo stored in their systems, they put you in front of a camera and, by matching some points on your face, they retrieve your stored picture and know who you are. Many classical deep learning or neural network solutions do not necessarily work for predicting manufacturing failures, or, if they do, they take a long time and are probably not efficient to use. Our event-based analysis is much more primitive in its basis: if you have a needle in a haystack, the classical machine learning approaches search the whole haystack to find the needle. The event-based method tells the deep neural network where to look for the needle, since one part of the haystack has the highest probability of containing it. The event clustering method is based on real-time conditions and on a wider range of parameters in the system, and it allows us to concentrate the effort in a selected area. Event modelling should complement the other methods and integrate with them, as none of these approaches alone can give the best result.
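The haystack analogy above can be made concrete with a small sketch. This is not the project's actual algorithm, only an illustration of the idea of event-based pre-filtering: timestamped events are grouped into clusters, and the densest cluster marks the time window a downstream model should inspect first. The function names, the gap threshold and the sample data are all hypothetical.

```python
# Illustrative sketch of event-based pre-filtering (hypothetical names/values).

def cluster_events(events, gap=5.0):
    """Group timestamped events into clusters separated by quiet gaps."""
    clusters = []
    current = []
    for t in sorted(events):
        if current and t - current[-1] > gap:
            clusters.append(current)
            current = []
        current.append(t)
    if current:
        clusters.append(current)
    return clusters

def densest_window(events, gap=5.0):
    """Return the (start, end) of the densest event cluster --
    the part of the 'haystack' a deep model should search first."""
    clusters = cluster_events(events, gap)
    best = max(clusters, key=len)
    return best[0], best[-1]

# Sensor events (seconds): a burst around t=100 suggests a developing fault.
events = [3.0, 40.0, 98.0, 99.5, 100.2, 101.0, 102.5, 180.0]
print(densest_window(events))  # → (98.0, 102.5)
```

A deep neural network would then be applied only to the sensor data inside that window, rather than to the whole history, which is the complementarity the answer describes.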

  5. Each of the three use cases of the Z-BRE4K project comprises several so-called “super components”, or subsystems made of several mechanical components. How will these machine learning solutions contribute to coordination at a higher level (i.e. DSS, MES)?

Once you can provide better insight into the condition of your machines and their breakdowns, you can plan and strategize their maintenance. That information can help in planning maintenance flows.

  6. So how will it work in practice? What is the starting point in applying all these approaches together?

There are four stages:

  1. Data acquisition.
  2. Evaluation of the quality and quantity of the data for data analytics.
  3. Data analytics, which will help in finding correlations between the parameters of the system.
  4. The data analytics approach will make dynamic the parameters that are usually fixed in classical mathematical models. Based on the conditions the machine is working in, the depreciation and failure rates become customized to that machine.

Therefore, the event-based machine learning approach will integrate the available historical knowledge and data with real-time knowledge of the machine's condition. It will adjust the failure rate according to the real-time conditions.
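The fourth stage, making a historically fixed parameter dynamic, can be sketched as follows. This is a minimal illustration, not the project's model: the baseline rate, the condition weights and the function name are all assumptions made for the example.

```python
# Minimal sketch: a static failure rate from historical records is scaled
# by real-time operating conditions (all numbers here are hypothetical).

BASELINE_FAILURE_RATE = 0.002  # failures/hour from historical data (assumed)

# Hypothetical stress weights relative to nominal operating conditions.
CONDITION_WEIGHTS = {"temperature": 0.5, "vibration": 1.0, "load": 0.8}

def dynamic_failure_rate(baseline, conditions):
    """Scale the historical failure rate by observed real-time stress.

    `conditions` maps each parameter to its ratio against nominal
    (1.0 = nominal); values above 1.0 raise the predicted rate.
    """
    factor = 1.0
    for name, ratio in conditions.items():
        weight = CONDITION_WEIGHTS.get(name, 0.0)
        factor += weight * (ratio - 1.0)
    return baseline * max(factor, 0.0)

# A machine running hotter and vibrating more than nominal:
rate = dynamic_failure_rate(BASELINE_FAILURE_RATE,
                            {"temperature": 1.2, "vibration": 1.3, "load": 0.9})
print(round(rate, 6))  # → 0.00264
```

The same machine model thus gets a different, customized failure rate depending on the conditions it is actually working in, which is what lets the maintenance plan adapt per machine.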

We want simultaneously to minimize sudden breakdowns and to assess the most economical and tactical moment to conduct the maintenance procedure, synchronizing the wear and failure of materials and machines with the maintenance schedules, which will themselves become synchronized.

 
