Modelwise achieves TÜV concept report

Mar 2, 2022 | Blog

Reading Time: 6 minutes
The TL;DR: Too busy to read? Here's a quick summary of the article.

This article shows how the tool concept of paitron was evaluated by TÜV SÜD with regard to its ability to be certified as qualified software for use in safety-critical applications.

Concept Report

In November 2021, paitron, modelwise’s AI-powered tool for the development of safety-related products, was evaluated by TÜV SÜD. To our knowledge, this is the first tool of its kind whose concept has been examined according to international standards and guidelines (see section “Evaluation Process”). The approval of the safety concept is based on the documents listed in the Evaluation Process section of this article.

As the concept report marks the first step toward full tool qualification, Dr. Detlev Richter, VP of Industrial and Energy Products of the TÜV SÜD Product Service Division, commented on the milestone:

“This means that the integration and automation of Functional Safety validation have taken a step forward in the technical lifecycle of digital twin based systems with regards to product changes” [1].

This article covers the aspects that we at modelwise have to address and comply with in order to make our AI suitable for all Safety Integrity Levels (SIL). Currently, we are working on fully certifying paitron according to IEC 61508:2010 and ISO 26262:2018; this will be demonstrated during the so-called detailed phase for the final certification of the tool.

Tool Qualification in Safety-Critical Industries 

To reach a defined safety goal, it is important to choose the right software tools from the very beginning of development. Depending on the project’s needs, the required tools are planned based on the requirements from the safety standards and the related safety levels (ASIL, SIL, etc.) [2]. As “a tool error may lead to the injection or non-detection of a fault in the safety-product” [3], standards demand that tools are evaluated and, if necessary, also qualified in the context of the development process and toolchain.

To provide a generic framework and to simplify the process for customers and users of such tools, the standards allow software providers to qualify tools independently of a specific safety-related project. This qualification depends on the desired safety level. The suggested methods for achieving the different ASILs (Automotive Safety Integrity Levels) are shown in the following table:

Legend – Software Tool Qualification
++   The method is highly recommended for this ASIL.
+    The method is recommended for this ASIL.
o    The method has no recommendation for or against its usage for this ASIL.

In our example, this requires a detailed analysis of the following documents to determine the level of confidence in the use of the tool:

  • Use Cases to visualize the intended process, highlighting all user inputs and information exchange and identifying all associated risks.
  • Requirements for safe execution, including mitigation measures, needed for each safety-critical element of the software architecture.
  • An Example Project to show compliance with IEC 61508 and to ensure the correctness of our results, focusing on the automation capabilities and the distinction from user inputs.
  • The tool provider’s development processes are not in the scope of a concept report.
  • Test Strategy and Test Cases are not in the scope of a concept report.

Qualification consists of balancing the potential impact of tool errors on the safety function against appropriate mitigation measures. It also allows the tool provider to state an envisioned process, which customers and users then only have to implement and follow.
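To make this notion of a level of confidence more concrete, the following is a minimal sketch of how ISO 26262-8 derives a tool confidence level (TCL) from the tool impact (TI) and tool error detection (TD) classes. The function name and the integer encoding are ours for illustration; this is a simplified reading of the standard, not part of paitron.

```python
# Simplified reading of the TCL determination in ISO 26262-8.
# The integer encoding of TI1/TI2 and TD1..TD3 is illustrative only.

def tool_confidence_level(tool_impact: int, error_detection: int) -> int:
    """tool_impact: 1 (TI1) or 2 (TI2); error_detection: 1..3 (TD1..TD3)."""
    if tool_impact == 1:
        # TI1: the tool cannot inject or mask a fault, so the lowest
        # confidence level is sufficient.
        return 1
    # TI2: the required confidence grows as error detection gets weaker
    # (TD1 -> TCL1, TD2 -> TCL2, TD3 -> TCL3).
    return {1: 1, 2: 2, 3: 3}[error_detection]

# Example: a tool whose malfunction could inject a fault (TI2) and whose
# errors are only detected with medium confidence (TD2) requires TCL2,
# for which qualification methods such as those in the table above apply.
print(tool_confidence_level(2, 2))  # -> 2
```

The higher the resulting TCL and the targeted ASIL, the more rigorous the qualification methods from the table above become.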

Currently, AI approaches, including Neural Networks and Deep Learning, are not recommended by IEC 61508 [5], and ISO 26262 does not even mention AI. As of today, the discussion on how to qualify AI is still ongoing. [6] offers a good summary of its current state, as it summarizes the Technical Report ISO/IEC TR 5469 Functional Safety and AI systems, which is still under development. The core idea currently being discussed is that as long as the AI is comprehensible and valuable, it can be used in Functional Safety (ref. AI Class I in Table 2). One of the reasons for this restriction is that Functional Safety cannot accept unquantifiable risks, which makes “non-determinism […] hard to accept for safety” [7].

Evaluation Process 

To meet the standards, multiple documents were reviewed by TÜV SÜD. A review is a verification activity in which the documents are evaluated for deficiencies, faults, inconsistencies, and deviations from standards. Within this process, we were able to show and qualify paitron’s concept, which is fully based on formal verification and theorem proving and is therefore deterministic, comprehensible, and valuable.
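To illustrate what formal verification and theorem proving mean in practice, here is a minimal, generic sketch using the Z3 SMT solver to prove a simple safety property of a two-channel shutdown logic. The model and property are invented for illustration and are not related to paitron’s internals; the point is that the result is an exhaustive, deterministic proof rather than a statistical estimate.

```python
# Generic theorem-proving sketch with the Z3 SMT solver (pip install z3-solver).
# The system model and the safety property are illustrative only.
from z3 import Bools, Solver, And, Or, Not, Implies, unsat

ch1_ok, ch2_ok, output_enabled = Bools("ch1_ok ch2_ok output_enabled")

s = Solver()
# System model: the output is enabled only if both channels report OK.
s.add(output_enabled == And(ch1_ok, ch2_ok))

# Safety property: a fault on either channel forces the output off.
safety_property = Implies(Or(Not(ch1_ok), Not(ch2_ok)), Not(output_enabled))

# Ask the solver for a counterexample; "unsat" means none exists,
# i.e. the property holds in every possible state of the model.
s.add(Not(safety_property))
if s.check() == unsat:
    print("Safety property proven for all states.")
else:
    print("Counterexample:", s.model())
```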

After more than 10 iterations between TÜV SÜD and us, final concept approval was reached for all relevant documents, consisting of:

  • Use Case Analysis, which describes, for each use case, the preconditions, necessary steps, tools, and actors. Starting from the sea-level use cases that capture the user goals, the document focuses on identifying all risks.
  • Software Architecture Description, which contains distinct architectural views to depict various aspects of the system. It is intended to outline and explain the significant architectural decisions relevant to the system, i.e. the elements and behaviors that are most fundamental for guiding the development of the software and for understanding paitron as a whole.
  • Software Requirements Specification, where the requirements are specified, prioritized, and linked to respective work items. This document shows a structured separation of the different functionalities into dedicated software components.
  • Review of an output FMEDA of an exemplary system model. The general structure of the output FMEDA was evaluated with a focus on the automated tool features and their incorporation into the user’s workflow; a simplified numeric sketch of typical FMEDA metrics follows this list.
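For readers less familiar with FMEDA outputs, the following is a minimal numeric sketch of the kind of metrics such a table typically rolls up: diagnostic coverage (DC) and safe failure fraction (SFF) as defined in IEC 61508. The failure rates below are invented for illustration and are not taken from the reviewed example project.

```python
# Minimal FMEDA-style roll-up for a single element. The failure rates
# (in FIT, failures per 10^9 hours) are hypothetical and for illustration only.
lambda_safe = 120.0                 # safe failures
lambda_dangerous_detected = 70.0    # dangerous failures caught by diagnostics
lambda_dangerous_undetected = 10.0  # dangerous failures that slip through

lambda_dangerous = lambda_dangerous_detected + lambda_dangerous_undetected
lambda_total = lambda_safe + lambda_dangerous

# Diagnostic coverage: share of dangerous failures detected by diagnostics.
dc = lambda_dangerous_detected / lambda_dangerous

# Safe failure fraction per IEC 61508: everything except dangerous undetected failures.
sff = (lambda_safe + lambda_dangerous_detected) / lambda_total

print(f"DC  = {dc:.1%}")   # 87.5%
print(f"SFF = {sff:.1%}")  # 95.0%
```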

The regulations and guidelines that form the basis of the concept evaluation are listed in the References section below.

Conclusion & Outlook 

The concept report from TÜV SÜD is the first step toward proving paitron’s reliability and safety. We are currently following up with the verification of the underlying algorithms and an evaluation of the processes and test strategy. Furthermore, our mission continues: we aim to achieve the same trust in further industries, from automotive (ISO 26262) to machinery (ISO 13849) and aerospace (DO-254).

The key benefits of receiving the concept report are the following:

  • Reduced customer liability by following industry standards.
  • An independent third-party review.
  • Transparency on the use cases, restrictions, and the technology.

We believe that paitron is a game-changer for the development of safety-related products, as it enables faster, easier, and more accurate verification and validation of system models. We are excited to share our progress and achievements, and we invite you to join us in our journey to make AI a trusted and valuable partner for Functional Safety.




Additional Resources

References 

[1]    Dr. Detlev Richter, VP Industrial and Energy Products of the TÜV SÜD Product Service Division, 26.01.2022

[2]    TÜV SÜD Ltd. Functional Safety – Software tool certification for Functional Safety projects [online] Available at: https://www.tuvsud.com/en-gb/services/functional-safety/software-tool-certification-for-functional-safety-projects [Accessed 16.12.2021]

[3]    Oscar Slotosch et al. (2012) – ISO 26262 – Tool Chain Analysis Reduces Tool Qualification Costs [online] Available at: https://dl.gi.de/bitstream/handle/20.500.12116/17563/27.pdf?sequence=1&isAllowed=y [Accessed 16.12.2021]

[4]    ISO 26262-8:2010, Road vehicles – Functional safety – Part 8: Supporting processes.

[5]    IEC 61508-3:2010, Functional safety of electrical/electronic/programmable electronic safety-related systems – Part 3: Software requirements.

[6]    Holger Laible. Computer & Automation. KI Klassifikation für Safety (AI classification for safety) [online] Available at: https://www.computer-automation.de/steuerungsebene/safety-security/ki-klassifikation-fuer-safety.187488/seite-2.html [Accessed 25.06.2021]

[7]    Tom Meany. ez.analog.com. Functional Safety and Artificial Intelligence [online] Available at: https://ez.analog.com/ez-blogs/b/engineerzone-spotlight/posts/functional-safety-and-artificial-intelligence-268912509 [Accessed 16.12.2021]


Jan Neumann-Mahlkau

modelwise Team