AI Safety Validation

Solutions for Cyber-Physical Systems

Our Approach

Our Solution

Deploying AI in safety-critical systems

For complex systems, such as industrial-scale models of aerospace systems, the input–output relationship is known only implicitly, as a ‘black box’.

We work with such black boxes to increase the adoption of AI and deep learning in fields such as defence, cybersecurity, aerospace, autonomous vehicles, and healthcare.

WHY

  • Deploying AI and machine-learning algorithms in safety-critical systems raises new challenges for system designers: these algorithms reduce the transparency, replicability, robustness, and interpretability of the systems they are embedded in, while also raising ethical concerns and complicating the reporting and assessment of ML/AI-based prediction models.
  • Interpretability alone is insufficient. For humans to trust black-box methods, we need explainability: models that can summarize the reasons for neural-network behaviour, gain the trust of users, and produce insights about the causes of their decisions.

Application domains

with possible uses for AI

  • Telecommunication infrastructure: public telephone network, local branch exchange
  • Water supply systems: water treatment plant, dam control
  • Electrical power systems: nuclear power plant, regional electrical grid
  • Oil and gas generation and distribution: gas pipeline, gas-powered power plant
  • Roadway transportation systems: smart interstate highway, traffic monitoring and control
  • Railway transportation systems: high-speed train line, metropolitan train network control
  • Air transportation systems: air traffic control system network, passenger aircraft autopilot
  • Banking and financial services: pension fund management, stock market management
  • Public safety services: air passenger screening, police dispatch
  • Healthcare systems: robotic surgery, healthcare record management
  • Administration and public services: employee personnel database, retirement management

In many cases, AI software is required to provide explainable results:

  • The recommendations made need to be predictable and repeatable across a wide variety of inputs, in terms of timing, bias, and results.
  • Clinicians must be able to understand the underlying reasoning of AI models, so that they can trust the predictions and identify the individual cases in which a model may give incorrect predictions.
  • The risk assumed by the decision maker is often wrongly estimated due to inadequate assessment of uncertainty. Decision makers need to trust the methodology adopted to propagate uncertainties through multi-disciplinary analysis, so that risk is quantified with the current level of information and wrong decisions caused by artificial restrictions in the modelling are avoided; a minimal sketch of this follows below.
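
To make this concrete, the sketch below propagates input uncertainty through two coupled disciplinary models and quantifies risk as the probability of exceeding a limit. It is a minimal illustration in Python: the models aero_load and struct_stress, the input distributions, and the limit are hypothetical stand-ins, not a real analysis chain.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 100_000

    # Uncertain inputs: airspeed (m/s) and panel thickness (mm),
    # modelled here as Gaussians purely for illustration.
    speed = rng.normal(250.0, 10.0, N)
    thickness = rng.normal(4.0, 0.2, N)

    # Hypothetical disciplinary models standing in for black-box analyses.
    def aero_load(v):
        # aerodynamic load grows with dynamic pressure
        return 0.5 * 1.2 * v**2 * 1e-3

    def struct_stress(load, t):
        # stress falls with panel thickness
        return load / t

    stress = struct_stress(aero_load(speed), thickness)

    LIMIT = 12.0  # allowable stress (illustrative units)
    p_fail = np.mean(stress > LIMIT)
    print(f"P(stress > limit) ~ {p_fail:.4f}")

The same pattern scales to real multi-disciplinary analyses: replacing the toy functions with calls into the actual analysis codes turns the final number into an assumption-explicit risk estimate.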

Safety and security aspects

Our solution mitigates

Cybersecurity

  • Successful attacks exploiting vulnerabilities

Safety of Intended Functionality

  • Impact from surroundings
  • Reasonably foreseeable misuse, incorrect HMI
  • Performance limitations

Functional Safety

  • Random hardware (HW) failures
  • Systematic failures

What we are doing

  • Investigate deploying machine learning (ML) and deep learning (DL) algorithms in safety-critical systems
  • Manage uncertainty and pursue resilient design of safety-critical systems
  • Develop a regulatory framework for health-related algorithms involving ML/DL and artificial intelligence (AI)
  • Incorporate elements that allow the AI learning process to be interpreted
  • Design techniques and tools for explainable AI (XAI)
  • Check the robustness of AI and DL models
  • Perform tolerance analysis: verify functional requirements after a tolerance has been specified on each AI component (see the sketch after this list)
  • Force a black box to learn from the right features (e.g., the skin of an orange rather than the stem; see Model checking below)
  • Verify ML systems
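
The tolerance-analysis item above can be illustrated with a small sketch: specify a tolerance band on the output of each AI component in a pipeline, then verify an end-to-end functional requirement at the worst-case corners of those bands. The component names, tolerances, and requirement below are hypothetical.

    import itertools

    # Nominal outputs of two hypothetical AI components in a pipeline,
    # each with a specified +/- tolerance on its output.
    components = {
        "perception_confidence": (0.90, 0.05),  # (nominal, tolerance)
        "brake_distance_m":      (35.0, 3.0),
    }

    def requirement_met(confidence, distance):
        # Illustrative end-to-end functional requirement.
        return confidence >= 0.80 and distance <= 40.0

    # Enumerate the worst-case corners of the tolerance box.
    corners = itertools.product(
        *[(nom - tol, nom + tol) for nom, tol in components.values()]
    )
    ok = all(requirement_met(c, d) for c, d in corners)
    print("requirement holds across all tolerance corners:", ok)

Corner enumeration assumes the requirement is monotone in each component output; when that cannot be assumed, sampling inside the tolerance box (as in the Monte Carlo sketch earlier) is the safer check.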

Designing Safety-Critical Software Systems to Manage Inherent Uncertainty

The design of safety-critical systems requires the explicit inclusion of varying levels of uncertainty and variability from different sources, to guarantee that components or systems will continue to perform satisfactorily despite fluctuations (i.e., a resilient design).
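
One way to make "perform satisfactorily despite fluctuations" testable is to sample the fluctuations a component is exposed to and track its worst remaining safety margin. The sketch below does this for a hypothetical controller response-time requirement; all names and numbers are illustrative assumptions.

    import random

    random.seed(1)
    DEADLINE_MS = 50.0  # required response deadline (illustrative)

    def response_time_ms(load):
        # Hypothetical stand-in for a measured component response:
        # nominal 30 ms, degrading with load, plus random jitter.
        return 30.0 + 12.0 * load + random.gauss(0.0, 2.0)

    # Fluctuations: system load varying uniformly between 0 and 1.
    margins = [DEADLINE_MS - response_time_ms(random.random())
               for _ in range(10_000)]

    worst = min(margins)
    print(f"worst-case margin: {worst:.1f} ms "
          f"({'resilient' if worst > 0 else 'NOT resilient'})")

A positive worst-case margin over a representative fluctuation model is the kind of evidence a resilient-design argument can build on.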

Model training

We incorporate elements into the AI learning process that allow it to be interpreted.
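
As one example of such an element (a sketch, not our full pipeline), model-agnostic permutation importance reports how strongly a trained model relies on each input feature, making the outcome of the learning process inspectable. The scikit-learn calls below are standard; the data is synthetic.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic data standing in for a real training set.
    X, y = make_classification(n_samples=1000, n_features=8,
                               n_informative=3, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

    # Shuffle each feature in turn and measure the drop in accuracy:
    # large drops mark the features the model actually relies on.
    result = permutation_importance(model, X_te, y_te,
                                    n_repeats=10, random_state=0)
    for i, imp in enumerate(result.importances_mean):
        print(f"feature {i}: importance {imp:+.3f}")

Features whose shuffling barely changes accuracy are ones the model does not actually use, which is exactly the kind of statement a reviewer of a safety-critical model needs.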

Model checking

We use a set of techniques to check the robustness of a model and whether an AI or DL model generalizes well to new cases: for example, forcing a black box to learn to distinguish oranges by the skin and not by the stem. One such technique is occlusion, a model-agnostic XAI method.
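
A minimal occlusion sketch, assuming a toy stand-in for the black-box model (any function mapping an image to a class score can be substituted): slide a patch across the image and record how much the score drops, producing a map of the regions the model actually relies on.

    import numpy as np

    def model_score(img):
        # Toy black-box stand-in: 'class score' driven by the centre region.
        return img[24:40, 24:40].mean()

    img = np.random.default_rng(0).random((64, 64))
    base = model_score(img)

    PATCH, STRIDE, FILL = 8, 8, 0.0  # patch size, step, baseline fill (assumed)
    heat = np.zeros((64 // STRIDE, 64 // STRIDE))

    # Occlude one patch at a time and record the score drop.
    for i in range(0, 64, STRIDE):
        for j in range(0, 64, STRIDE):
            occluded = img.copy()
            occluded[i:i+PATCH, j:j+PATCH] = FILL
            heat[i // STRIDE, j // STRIDE] = base - model_score(occluded)

    # Large values mark regions the model relies on (here, the centre).
    print(np.round(heat, 3))

With a real image and the deployed network plugged in, the map shows directly whether the model keys on the skin of the orange or on the stem.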

Contact Us





