The Three Pillars of AI

Recent incidents involving AI algorithms have hit the headlines, leading many to question their worth.

In this article, CTO Alex Brown outlines the three pillars of AI and looks at how they each play a part in implementing AI in production.

As many who work in computer science will know, a large proportion of Artificial Intelligence (AI) projects fail to make the crucial transition from experiment to production, for a wide range of reasons. In many cases, the triple investment of money, training, and time is deemed too big a risk to take; in others, there is a fear that initial AI and machine learning models will not scale, or that they will be seen as too experimental to be used by internal or external customers.

Failure can also be due to a lack of data, or to data that is unsuitable or of poor quality. But even if your data is of the right quality and your experimental model is good, your digital transformation journey is far from over – you still have a long way to go before you can use that AI in production!

From all the work we at Datactics have been undertaking in AI development, it’s clear to us that there are three critical features your AI system will need:

Explainability

Two or three years ago, when more AI technologies and intelligent systems were emerging, no one talked about explainability – the ability to explain why an algorithm or model made a decision or set of decisions.

Today it’s a hot topic in data science and discussions around deep learning. The use of opaque ‘black box’ solutions has been widely criticised, both for a lack of transparency and also for the possible biases inherited by the algorithms that are subject to human prejudices in the training data. 

Many recent cases have shown how this can lead to flawed and unfair decisions being made.

Explainable AI, or “XAI”, is fast becoming a prerequisite for many AI projects, especially in government, policing, and data-heavy regulated industries such as healthcare and banking.

In these business areas, the demand for explainability is understandably high. It is vital for decision-making, predictions, risk management, and policymaking.

Predictions are a particularly delicate topic, as any mistakes made can have major consequences.

In healthcare, for example, if an AI algorithm isn’t trained adequately on the correct data, we can’t be sure that it will be able to diagnose a patient reliably.

Therefore, curating the training data set and ensuring that the data entering it is free of bias has never been more important.

Furthermore, XAI is not just for data scientists, but also for non-technical business specialists.

It stands to reason that a business user should be able to easily obtain and understand, in business terms, why a predictive model made a particular prediction, and that a data scientist should be able to understand the behaviour of the model in as much detail as possible.
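As a minimal sketch of what this can look like in practice – assuming a scikit-learn style model, with entirely hypothetical feature names and synthetic data – permutation importance gives a simple, model-agnostic view of which inputs drive a model’s predictions (dedicated tools such as SHAP or LIME go further):

```python
# A minimal, hypothetical sketch of model-agnostic explainability using
# permutation importance from scikit-learn. Feature names and data are
# illustrative only, not drawn from any real project.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real training set.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
feature_names = ["age", "income", "tenure", "balance", "transactions"]  # hypothetical

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.3f}")
```

A ranking like this is the kind of output that can be shared with both audiences: a data scientist can dig into the detail, while a business user gets a plain statement of which factors mattered most.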

Monitoring  

Closely related to XAI is the need to closely monitor AI model performance. Just as children may be periodically tested at school to ensure their learning is progressing, so too do AI models need to be monitored to detect “model drift” – the tendency for predictions to become less accurate over time in unforeseen ways, typically because the data the model sees in production shifts away from the data it was trained on. Various concept drift and data drift detection and handling schemes may be helpful, depending on the situation.

Often, if longer-term patterns are understood as being systemic, they can be identified and managed.

Concept drift is often prominent in supervised learning problems, where the relationship between the input data and the target being predicted can change over the period in which predictions are developed and collated. Like many things, drift isn’t something to be feared but rather measured and monitored, to ensure firstly that we have confidence in the model and the predictions it is making, and secondly that we can report to senior executives on the level of risk associated with using the model.
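As a rough sketch of the kind of data drift check described above – assuming a single numeric feature is being monitored and that a two-sample Kolmogorov–Smirnov test is an acceptable drift signal (real monitoring setups typically track many features and metrics) – one simple approach is to compare the distribution the model was trained on with what it sees in production:

```python
# A minimal, hypothetical sketch of data drift monitoring: compare the
# distribution of a feature at training time with its live distribution
# using a two-sample Kolmogorov-Smirnov test from scipy.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)

# Reference data captured when the model was trained (illustrative only).
training_feature = rng.normal(loc=50.0, scale=10.0, size=5000)

# Recent production data: here the mean has shifted, simulating drift.
production_feature = rng.normal(loc=57.0, scale=10.0, size=1000)

statistic, p_value = ks_2samp(training_feature, production_feature)

# A small p-value suggests the production distribution differs from training,
# which is a cue to investigate, retrain, or roll back the model.
ALERT_THRESHOLD = 0.01  # hypothetical threshold; tune to your own risk appetite
if p_value < ALERT_THRESHOLD:
    print(f"Possible data drift detected (KS statistic={statistic:.3f}, p={p_value:.2e})")
else:
    print("No significant drift detected")
```

Checks like this are cheap to run on a schedule, and the resulting statistics are exactly the sort of evidence that can be reported to senior executives as a measure of model risk.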

Retraining  

Many AI solutions come with ‘out of the box’ pre-trained models, which can theoretically make it quicker to deploy into production.

However, it is important to understand that there is no “one size fits all” when it comes to AI, and that some customisation is going to be necessary to ensure that the predictions being made fit your business purposes.

In most cases, these models may not be well suited to your data: the vendor will have trained them on data sets that may look quite different from yours, and so the models may behave differently in your environment.

Again, this highlights the importance of monitoring and explainability, but also the importance of being able to adapt a pre-trained model to your specific data in order to achieve a robust, reliable AI system.

To this end, vendors supplying pre-trained models should provide facilities for the customer to collect new training data and retrain an off-the-shelf model.

An important consequence of this is that such AI frameworks need to be able to roll back to previous versions of a model if problems arise, and to version-control both models and training data, so that a weakened model is never left in production.
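As a small sketch of what model versioning and rollback might look like – the directory layout, file names and use of joblib here are illustrative assumptions, not a prescribed approach – each retrained model can be saved alongside the data snapshot used to train it, so that any earlier version can be restored:

```python
# A minimal, hypothetical sketch of versioning retrained models so they can
# be rolled back. Paths, naming scheme and use of joblib are illustrative.
from datetime import datetime, timezone
from pathlib import Path
import joblib

MODEL_DIR = Path("model_registry")  # hypothetical local registry
MODEL_DIR.mkdir(exist_ok=True)

def save_model_version(model, training_data):
    """Persist a retrained model together with the data snapshot used to train it."""
    version = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    version_dir = MODEL_DIR / version
    version_dir.mkdir()
    joblib.dump(model, version_dir / "model.joblib")
    joblib.dump(training_data, version_dir / "training_data.joblib")
    return version

def rollback(version=None):
    """Load a previous model version; defaults to the most recently saved one."""
    versions = sorted(p.name for p in MODEL_DIR.iterdir() if p.is_dir())
    if not versions:
        raise RuntimeError("No saved model versions to roll back to")
    chosen = version or versions[-1]
    return joblib.load(MODEL_DIR / chosen / "model.joblib")
```

Pairing each model with the exact data it was trained on is what makes a rollback meaningful: if a retrained model drifts or misbehaves, you can both restore the previous version and inspect what changed in the training data.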

To conclude our three pillars of AI, the route to getting AI into production is built on being able to explain it, including: 

  • The decisions baked into the model, including why certain data was selected or omitted
  • How much the model is deviating from expectations, and why
  • How often, how and why the model has been retrained, and whether or not it should be rolled back to a previous version

For more on this subject, read up on my colleague Fiona Browne’s work, including a recent piece on Explainable AI, which can be found here.
