SafeML

Model-agnostic safety monitoring of machine learning algorithms


Keywords
aisafety, safety, trustworthyai
License
MIT
Install
pip install SafeML==0.3


SafeML

Exploring techniques for safety monitoring of machine learning classifiers.

Abstract

Ensuring the safety and explainability of machine learning (ML) is a topic of increasing relevance as data-driven applications venture into safety-critical domains, which are traditionally committed to high safety standards that cannot be satisfied by testing alone when the system under test is an otherwise inaccessible black box. The interaction between safety and security is a particular challenge, since security violations can lead to compromised safety. This project addresses both safety and security within a single protection concept applicable during the operation of ML systems: active monitoring of the behaviour and the operational context of the data-driven system based on distance measures of the Empirical Cumulative Distribution Function (ECDF). We investigate abstract datasets such as XOR, Spiral, and Circle, as well as well-known security datasets for intrusion detection in simulated network traffic, using distributional-shift detection measures including the Kolmogorov-Smirnov, Kuiper, Anderson-Darling, Wasserstein and mixed Wasserstein-Anderson-Darling measures. Our preliminary findings indicate that the approach can provide a basis for detecting whether the application context of an ML component remains valid from a combined safety and security perspective.
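The distance measures named above are standard two-sample statistics on the ECDF. The snippet below is a minimal sketch of how they can be computed with SciPy; it is illustrative only and is not the SafeML package API. The Kuiper and mixed Wasserstein-Anderson-Darling measures are omitted because SciPy has no off-the-shelf implementation for them.

```python
# Illustrative only: two-sample ECDF distance measures named in the abstract,
# computed with SciPy (not the SafeML package API).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=1000)    # trusted (training) data
runtime_feature = rng.normal(loc=0.5, scale=1.0, size=1000)  # incoming (application) data

ks_stat, ks_pvalue = stats.ks_2samp(train_feature, runtime_feature)  # Kolmogorov-Smirnov
ad_result = stats.anderson_ksamp([train_feature, runtime_feature])   # Anderson-Darling (k-sample)
wd = stats.wasserstein_distance(train_feature, runtime_feature)      # Wasserstein (earth mover's)

print(f"KS: {ks_stat:.3f}, Anderson-Darling: {ad_result.statistic:.3f}, Wasserstein: {wd:.3f}")
```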

Description

The following figure illustrates the flowchart of the proposed approach, which comprises two main phases. A) The training phase is an offline procedure in which a trusted dataset is used to train the intelligent algorithm, which can be a machine learning or deep learning model. This project focuses on the classification ability of machine learning. The classifier is therefore trained on a trusted dataset and its performance is measured with existing KPIs. Meanwhile, the probability density function and statistical parameters of each class are estimated and stored for later comparison.

B) The second phase, the application phase, is an online procedure in which real-time, unlabelled data is fed to the system. Consider, for example, an autonomous car that has been trained to detect obstacles and must prevent collisions: in the application phase, the trained classifier has to distinguish between the road and other objects. A critical issue in this phase is that the data carries no labels, so it cannot be assured that the classifier operates as accurately as it did during training. Instead, the untrusted labels produced by the classifier are used, and the cumulative distribution function (CDF) and statistical parameters of each class are extracted in the same way as before. The CDF-based statistical difference between each class in the training phase and in the application phase is then used to estimate the accuracy. If the difference between the estimated accuracy and the expected confidence is very low, the classifier's results and accuracy can be trusted (in this example, the autonomous car continues its operation). If the difference is moderate, the system can request more data and re-evaluate in order to confirm the distance. If the difference is large, the classifier's results and accuracy are no longer valid, and the system should switch to an alternative approach or notify a human agent (in this example, the system asks the driver to take control of the car).


Figure 1. Flowchart of the proposed approach
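The decision logic of the application phase can be summarised in code. The sketch below is illustrative only and does not reproduce the SafeML package API; the class name SafeMLMonitor, the parameters buffer_size, accept_threshold and reject_threshold, and the use of the Kolmogorov-Smirnov statistic as the per-class distance are assumptions made for this example.

```python
# Illustrative sketch of the application-phase monitoring loop (not the SafeML package API).
# Assumed names: SafeMLMonitor, buffer_size, accept_threshold, reject_threshold.
import numpy as np
from scipy import stats


class SafeMLMonitor:
    def __init__(self, trusted_data_per_class, buffer_size=500,
                 accept_threshold=0.05, reject_threshold=0.15):
        # trusted_data_per_class: {class_label: 2-D array of shape (n_samples, n_features)}
        self.trusted = trusted_data_per_class
        self.buffer_size = buffer_size
        self.accept_threshold = accept_threshold
        self.reject_threshold = reject_threshold
        self.buffer = []   # incoming unlabelled samples
        self.labels = []   # untrusted labels predicted by the classifier

    def observe(self, sample, predicted_label):
        """Buffer one runtime sample together with its (untrusted) predicted label."""
        self.buffer.append(np.asarray(sample))
        self.labels.append(predicted_label)
        if len(self.buffer) < self.buffer_size:
            return "collecting"
        return self._decide()

    def _decide(self):
        runtime = np.stack(self.buffer)
        labels = np.asarray(self.labels)
        self.buffer.clear()
        self.labels.clear()
        per_class = []
        for cls, trusted_samples in self.trusted.items():
            runtime_cls = runtime[labels == cls]
            if len(runtime_cls) == 0:
                continue
            # Feature-wise Kolmogorov-Smirnov distance, averaged over features.
            ks = [stats.ks_2samp(trusted_samples[:, j], runtime_cls[:, j]).statistic
                  for j in range(trusted_samples.shape[1])]
            per_class.append(np.mean(ks))
        distance = float(np.mean(per_class)) if per_class else 1.0
        if distance <= self.accept_threshold:
            return "trusted"            # continue autonomous operation
        if distance <= self.reject_threshold:
            return "collect more data"  # request more data and re-evaluate
        return "hand over to human"     # alternative approach / notify a human agent
```

The three return values correspond to the three outcomes in the flowchart: continue operation, request more data and re-evaluate, or hand control to an alternative approach or human agent.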


SafeML Applications

The SafeML idea can be used in different applications. Four applications of the SafeML project are illustrated below:

SafeML in Security


Figure 2. Application of SafeML for Security Intrusion Detection

SafeML for Medical Applications


Figure 3. Application of SafeML in ML/DL-based Disease Detection or Diagnosis

SafeML for Autonomous Vehicles and Self-Driving Cars


Figure 4. Application of SafeML for Traffic Sign Detection in Autonomous Vehicles/Self-driving Cars

SafeML for Autonomous Railway Systems


Figure 5. Application of SafeML for Obstacle Detection in Autonomous Railway Systems


From SafeML Toward Explainable AI (XAI)

The proposed method is not only suitable for safety evaluation of machine learning classifiers; it can also be used at run time as an eXplainable AI (XAI) technique. In one of our examples on a security dataset, we show how SafeML can be used for XAI.
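One way to read the statistical distances as an explanation is to compute them per feature and rank the features by how far they have drifted from the training distribution. The sketch below follows that idea under our own assumptions; it is not the SafeML package API, and the function name explain_drift is hypothetical.

```python
# Hypothetical per-feature explanation sketch (not the SafeML package API).
import numpy as np
from scipy import stats


def explain_drift(trusted_samples, runtime_samples, feature_names):
    """Rank features by their KS distance between training and runtime data."""
    distances = {
        name: stats.ks_2samp(trusted_samples[:, j], runtime_samples[:, j]).statistic
        for j, name in enumerate(feature_names)
    }
    # Features with the largest distance contribute most to the detected shift.
    return sorted(distances.items(), key=lambda item: item[1], reverse=True)
```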

Case Studies

Contributors

Publications

Aslansefat, K., Sorokos, I., Whiting, D., Tavakoli Kolagari, R. and Papadopoulos, Y. (2020) SafeML: Safety Monitoring of Machine Learning Classifiers through Statistical Difference Measure. [PDF][arXiv][ResearchGate][DeepAI][Springer].

[Presentation at the 7th International Symposium on Model-Based Safety and Assessment (IMBSA2020)].

Aslansefat, K., Kabir, S., Abdullatif, A., Vasudevan, V. and Papadopoulos, Y. (2021) Toward Improving Confidence in Autonomous Vehicle Software: A Study on Traffic Sign Recognition Systems.[PDF][IEEE]

Medium Posts

How to Make Your Classifier Safe (2020, June), published in Medium (Towards Data Science). [Kaggle Version]

Talks

SafeML - A Human-in-the-loop Approach for Safety Monitoring of Machine Learning Classifiers (2020, November), in Open Ethics Series S01E06: Human-in-the-loop AI Agency & Oversight. [YouTube]

Trustworthy and Explainable Machine Learning with SafeML (2021, February), in the DeepLearning.AI Pie & AI series. [Presentation File]

Cite as

@article{Aslansefat2020SafeML,
   author  = {{Aslansefat}, Koorosh and {Sorokos}, Ioannis and {Whiting}, Declan and
              {Tavakoli Kolagari}, Ramin and {Papadopoulos}, Yiannis},
   title   = "{SafeML: Safety Monitoring of Machine Learning Classifiers through Statistical Difference Measure}",
   journal = {arXiv e-prints},
   year    = {2020},
   url     = {https://arxiv.org/abs/2005.13166},
   eprint  = {2005.13166},
}

Related Works

Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete Problems in AI Safety. [arXiv]

Irving, G., Christiano, P., & Amodei, D. (2018). AI Safety via Debate. [arXiv]

Schulam, P., & Saria, S. (2019). Can You Trust This Prediction? Auditing Pointwise Reliability After Learning. [arXiv]

Kläs, M., & Sembach, L. (2019). Uncertainty Wrappers for Data-driven Models. [Springer]

Gehr, T., Mirman, M., Drachsler-Cohen, D., Tsankov, P., Chaudhuri, S., & Vechev, M. (2018). AI2: Safety and robustness certification of neural networks with abstract interpretation. In IEEE Symposium on Security and Privacy (SP). [IEEE]

Related Projects

SafeNN Project: This project builds on the idea of SafeML and aims to evaluate the safety of Deep Neural Networks using statistical distance measures (will be public soon).

NN-Dependability-KIT Project: Toolbox for software dependability engineering of artificial neural networks.

Confident-NN Project: Toolbox for empirical confidence estimation in neural networks-based classification.

SafeAI Project: Different toolboxes like DiffAI, DL2 and ERAN from SRILab ETH Zürich focusing on robust, safe and interpretable AI.

AI Safety via Debate: This project aims to evaluate the AI Safety through Debate Games.

SafeDNN: A research project from NASA that focuses on property inference from Deep Neural Networks (DNNs).

FAQs

Q1: How can we define the right buffer size?

A1: The "buffer-size" in SafeML algorithm should be defined by an expert in the design time. It should be long enough to hold statistical characteristics of the incoming data.

Q2: How to define the right expected confidence threshold?

A2: The expected confidence threshold is another hyper-parameter; it should be defined in the offline phase of SafeML, after the classifier has been trained.
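One possible way to ground both hyper-parameters, offered here only as an assumption and not as guidance from the SafeML papers, is to measure the statistical distance between disjoint subsamples of the trusted training data: a buffer size at which that baseline distance becomes small and stable, and a threshold slightly above the largest baseline distance observed, give a starting point that an expert can then refine.

```python
# Assumed heuristic (not from the SafeML papers): estimate a baseline distance
# from disjoint subsamples of the trusted training data for a candidate buffer size.
import numpy as np
from scipy import stats


def baseline_distance(trusted_feature, buffer_size, trials=100, seed=0):
    """Typical KS distance between two disjoint trusted subsamples of size buffer_size."""
    rng = np.random.default_rng(seed)
    distances = []
    for _ in range(trials):
        idx = rng.permutation(len(trusted_feature))  # requires len >= 2 * buffer_size
        a = trusted_feature[idx[:buffer_size]]
        b = trusted_feature[idx[buffer_size:2 * buffer_size]]
        distances.append(stats.ks_2samp(a, b).statistic)
    # A threshold could then be set just above the maximum baseline distance.
    return float(np.mean(distances)), float(np.max(distances))
```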

Future Extensions

SafeML is currently designed for Deep Learning (DL) and Machine Learning (ML) classifiers. We are working to extend the approach to regression and clustering tasks, and to improve SafeML for classification tasks on time series.

License

This framework is available under an MIT License.

Acknowledgments

We would like to thank EDF Energy R&D UK Centre, AURA Innovation Centre and University of Hull for their support.