Overview

Skill Level: Any Skill Level

Developing a test design, selecting, building and assessing models.

Ingredients

The modelling phase of the predictive maintenance project is my favourite part! Modelling is a highly iterative process: predictive maintenance should be as empirical as possible, so you need to structure the modelling phase to robustly test any predictions your model will make. Usually, we try several models and, once we have found one that seems to handle our data well, we fine-tune its parameters until we achieve the best results. Quite often we also need to revisit the data preparation to get more out of the model. There are no one-size-fits-all solutions, which is why predictive maintenance modelling is always interesting. This recipe is a simplified approach to modelling, just to give you a flavour of how you might approach predictive maintenance model building.

 

Import the libraries that you will need for these exercises:

 

from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import cross_validate
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import RidgeClassifier
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import ExtraTreeClassifier
from sklearn.metrics import f1_score
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score

 

Step-by-step

  1. Developing a test design

    Before you build a model, you should think about how you will assess its suitability. Typically, you need to decide how you will determine how good the model is, and also think about the data you will be testing it on.

     

    As this is a supervised learning problem, we will use accuracy to guide our training, with cross-validation during the training phase of our modelling exercise. On imbalanced data you would normally use the F1 score to assess goodness of fit, but as we rebalanced the data set during data preparation, this is unnecessary here. We should, however, take a look at F1 when we test on our validation data set.
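    To make that concrete, here is a minimal sketch of the test design, using the modules imported above and the resampled training arrays (X_resampled, Y_resampled) produced in the data preparation recipe. It tracks accuracy and a weighted F1 at the same time; the choice of KNeighborsClassifier here is just a placeholder model:

    # Sketch: cross-validation reporting accuracy and weighted F1 together.
    # X_resampled and Y_resampled come from the data preparation recipe.
    cv = KFold(n_splits=10, shuffle=True, random_state=7)
    scores = cross_validate(KNeighborsClassifier(), X_resampled, Y_resampled,
                            cv=cv, scoring=['accuracy', 'f1_weighted'])
    print(scores['test_accuracy'].mean(), scores['test_f1_weighted'].mean())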

  2. Selecting the model

    Determining the most appropriate model will typically be based on the data you have available, the modelling goals, and the requirements of the model itself and of the output.

     

    The approach that I like to use is to create a test harness to cycle the data through different models to see which one fits best, at least at a high level. In this scenario, we have a classification problem, so I have put together some models that typically work well.

     

    These are only a small selection of the available models; there are dozens to try, and as you gain more experience you can often intuit which model or model type might work best for the data you have. Remember, however, that any model selection should be rigorously tested.

     

    models = []
    models.append(('ET', ExtraTreeClassifier()))
    models.append(('NC', NearestCentroid()))
    models.append(('KNN', KNeighborsClassifier()))
    models.append(('NBG', GaussianNB()))
    models.append(('RCL', RidgeClassifier()))

    scoring = 'accuracy'
    results = []
    names = []

    for name, model in models:
        # shuffle=True is required when a random_state is set for KFold
        kfold = KFold(n_splits=10, shuffle=True, random_state=7)
        cv_results = cross_val_score(model, X_resampled, Y_resampled, cv=kfold, scoring=scoring)
        results.append(cv_results)
        names.append(name)
        msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std())
        print(msg)

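    If you want to compare the spread of the cross-validation scores visually, a box plot of the results list is a common choice. This is an optional sketch and assumes matplotlib is available in your notebook environment:

    # Optional: visualise the spread of cross-validation accuracy per model
    import matplotlib.pyplot as plt

    fig, ax = plt.subplots()
    ax.boxplot(results)
    ax.set_xticks(range(1, len(names) + 1))
    ax.set_xticklabels(names)
    ax.set_title('Cross-validation accuracy by model')
    ax.set_ylabel('Accuracy')
    plt.show()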

  3. Building the models

    During this part of the process, it is important to understand the models you have built, the parameter settings for those models, and any performance or data issues that you encountered.

    In order to track your progress with a variety of models, be sure to keep notes on the settings and data used for each model. This will help you to share the results with others and retrace your steps.

    As we can see, KNN performed best, so let’s dive into that model in more detail. There are several parameters that can be adjusted for KNN, but for illustrative purposes we will focus on the value of k. We will perform a grid search, which helps us tune that parameter by building a model for each parameter permutation and finding the best-performing one. Note that this might take some time to run, so I would recommend splitting the code into three different cells in your notebook.

     

    # Cell 1: fit a grid search over k
    knn = KNeighborsClassifier()
    k_range = list(range(1, 31))
    param_grid = dict(n_neighbors=k_range)
    grid = GridSearchCV(knn, param_grid, cv=10, scoring='accuracy')
    grid.fit(X_resampled, Y_resampled)

    # Cell 2: inspect the full cross-validation results
    grid.cv_results_

    # Cell 3: report the best score, parameters and estimator
    print(grid.best_score_)
    print(grid.best_params_)
    print(grid.best_estimator_)

     


     

    Once you have determined the parameters that produce the most accurate results, be sure to take note of them. This can help you when you decide to automate or rebuild the model with new data. In this case, we can see that the optimal value for k is 1, which generated an accuracy of 0.9999484722007523!
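    As a small, optional sketch of how you might record that winning configuration for later automation, you could persist the best parameters and the fitted best estimator. The file names here are purely illustrative, and joblib is assumed to be available (it is installed alongside scikit-learn):

    # Record the winning configuration for later reuse (illustrative file names)
    import json
    import joblib

    with open('knn_best_params.json', 'w') as f:
        json.dump(grid.best_params_, f)          # e.g. {"n_neighbors": 1}

    joblib.dump(grid.best_estimator_, 'knn_best_estimator.joblib')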

     

    It’s also important that when you assess the model, you take note of key information such as:

    • Meaningful conclusions
    • Any new insights
    • Model execution issues and processing time
    • Any problems with data quality
    • Any calculation inconsistencies
  4. Assessing the model

    Now that you have a model that achieves a good fit, let’s take a closer look at it to determine whether it is accurate and effective enough to be deployed.

     

    It’s a good idea to be methodical here and base your assessment on the test design you developed in step 1.

     

    For our purposes, we will test our model on the unseen data that we created in our validation data set during data preparation.

     

    Remember how we also rescaled the data during data preparation? Well, we need to apply the same scaling to the validation set:

     

    # use transform (not fit_transform) so the scaler fitted on the training data is reused
    rescaledX_validation = scaler.transform(X_validation)
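    If you rebuild this workflow later, one way to avoid accidentally refitting the scaler on the wrong data is to wrap the scaler and classifier in a scikit-learn Pipeline, so the scaling fitted on the training data is reapplied automatically at predict time. This is only a sketch, and it assumes a MinMaxScaler was used during data preparation; swap in whichever scaler you actually used:

    # Sketch: bundle scaling and classification so the same scaling is reused
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import MinMaxScaler

    pipe = Pipeline([
        ('scaler', MinMaxScaler()),                  # assumption: the scaler from data prep
        ('knn', KNeighborsClassifier(n_neighbors=1)),
    ])
    pipe.fit(X_resampled, Y_resampled)               # scaler is fitted on training data only
    pipeline_predictions = pipe.predict(X_validation)  # same scaling applied automatically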

     

    Next, we will run the model on the unseen data. As this data set is very imbalanced, we will focus on the F1 score, which is a better guide than accuracy for imbalanced data. We will also generate a confusion matrix and a report on the classification outcomes.

     

    # fit the tuned model on the resampled training data
    knn = KNeighborsClassifier(n_neighbors=1)
    knn.fit(X_resampled, Y_resampled)

    # evaluate against the unseen, rescaled validation set
    predictions = knn.predict(rescaledX_validation)
    print(accuracy_score(Y_validation, predictions))
    print(confusion_matrix(Y_validation, predictions))
    print(classification_report(Y_validation, predictions))
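    The classification report already includes per-class F1 values, but since we said we would focus on F1, you can also pull out a single summary figure directly. The average='weighted' argument is an assumption; use average='binary' if your failure label is encoded as 0/1:

    # Single-number F1 summary on the validation set (averaging choice is an assumption)
    print(f1_score(Y_validation, predictions, average='weighted'))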

     


    The results are pretty impressive, and the model achieved an F1 score of 0.98.

     

    At this stage in the process, if you think that the model meets your predictive maintenance objectives, you can move on to a deeper evaluation of the model and start looking at deployment. Be sure that you can answer the following questions before you decide to move to the next stage:

     

    • Can you understand the results of the model?
    • Do the model results make logical sense, and are they free from glaring inconsistencies (e.g. terrific results in training but awful results on unseen data)?
    • Do the results meet your business objectives?
    • Have you thoroughly evaluated the model accuracy?
    • Have you looked at multiple models and compared the results?
    • Are the results of your model deployable?
