Evaluating Software Tools in ISO 26262: A Practical EPS Example
Learn to systematically evaluate software tools for ISO 26262 compliance through a real Electronic Power Steering static analyzer example, covering tool impact assessment and qualification requirements.

Imagine deploying an ASIL D Electronic Power Steering (EPS) system, only to discover later that a critical buffer overflow caused a sudden loss of steering assist. Your engineering team ran rigorous static code analysis, but the analyzer silently missed the violation due to a misconfiguration. How much confidence do you actually have in the software tools you use to develop safety-critical code?
As automotive systems grow in complexity, engineering teams rely heavily on automated software tools to design, simulate, generate, and verify code. However, if these tools are flawed, they can introduce catastrophic errors or fail to detect existing ones. This is why establishing confidence in the use of software tools is a cornerstone of functional safety.
In this article, we will scratch the surface of tool evaluation by walking through a practical, real-world example. We will look at what specific data is needed to evaluate a tool and how to determine the required level of qualification to ensure your tools support, rather than compromise, your safety goals.
The Role of Software Tools in Functional Safety
When developing an automotive system under functional safety guidelines, you must ensure that every part of your development lifecycle is robust. Software tools act as force multipliers, but they also introduce a layer of indirect risk. If a compiler generates incorrect machine code from valid source code, or if a testing tool reports a "pass" for a failed test case, the integrity of the entire system is at risk.
To mitigate this risk, you must systematically evaluate and classify your tools. The goal is not to prove that a tool is perfect. Instead, the objective is to build sufficient confidence that the tool will not introduce an undetected error into your safety-critical system. This process involves determining the Tool Impact (TI), assessing the Tool Error Detection (TD) capabilities, and ultimately calculating the Tool Confidence Level (TCL). Once the TCL is established, you can select the appropriate Qualification method.
Evaluating Tools: The EPS Static Code Analyzer Example
The table below summarizes the data typically gathered before classifying such a tool:

| Data Category | Specific Example Data Required |
|---|---|
| Tool Identification | Vendor Name, Tool Name (e.g., AnalyzerPro), Exact Version Number |
| Execution Environment | OS Version, Hardware Architecture, Build Server Specs |
| Configuration Data | Active MISRA rule sets, suppressed warnings, command-line arguments |
| Use Case Description | Pre-compilation code scan in the CI/CD pipeline for ASIL D C-code |
| Known Anomalies | Vendor errata sheets, documented false negatives, bug reports |
To understand how this works in practice, let us look at a specific example: a static code analysis tool used during the development of an Electronic Power Steering (EPS) system. The EPS system is classified as ASIL D because a failure could lead to unintended steering interventions.
The engineering team uses a commercial static code analyzer to verify that the motor control software complies with MISRA C guidelines. The tool scans the source code automatically during the continuous integration build process. If it finds a violation, it flags the build as failed. If it finds no violations, the code moves on to the compiler.
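In a CI pipeline, this gate is often a small wrapper script around the analyzer. The sketch below shows one possible shape in Python; the `analyzerpro` command, its flags, and its count-on-stdout output format are all hypothetical placeholders, not a real vendor interface.

```python
import subprocess


def gate_decision(violation_count: int) -> str:
    """Pass/fail policy: any MISRA violation fails the build."""
    return "FAIL" if violation_count > 0 else "PASS"


def run_scan(source_dir: str) -> str:
    """Run the (hypothetical) analyzer and apply the gate policy."""
    # 'analyzerpro' and its flags are placeholder names for illustration.
    result = subprocess.run(
        ["analyzerpro", "--ruleset", "misra-c-2012-mandatory",
         "--format", "count", source_dir],
        capture_output=True, text=True,
    )
    # Assumption: the analyzer prints its finding count on stdout.
    return gate_decision(int(result.stdout.strip() or 0))
```

Separating the pass/fail policy (`gate_decision`) from the tool invocation keeps the safety-relevant decision logic small and independently reviewable.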
Scratching the Surface: What Data is Needed?
Before you can determine the Tool Confidence Level, you must gather specific data about the tool and its exact use case. You do not need the tool's proprietary source code, but you do need a clear, documented understanding of how it operates within your specific environment. For our static code analyzer example, the required data includes several key elements.
- Tool Identification: The exact name, vendor, and version number of the static analyzer (e.g., AnalyzerPro v4.2.1).
- Execution Environment: The operating system and hardware where the tool runs (e.g., Ubuntu 22.04 LTS on a cloud-based build server).
- Configuration Data: The specific rule sets enabled during the scan. For example, are you checking all mandatory MISRA C rules, or have some been disabled? What command-line flags are used?
- Use Case Description: A precise definition of what the tool is expected to do. In this case, it is analyzing C source code for rule violations prior to compilation.
- Known Anomalies: Access to the vendor's errata or known-issue list to see whether there are documented scenarios in which the tool produces false negatives.
To build confidence in software tools, you must evaluate them exactly as they are configured and used in your specific project environment. A tool qualified for one configuration is not automatically qualified for another.
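One lightweight way to keep this evaluation data together is a structured record checked into the project repository alongside the tool configuration. A minimal Python sketch, with illustrative field values drawn from the example above:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ToolEvaluationRecord:
    """Data gathered before classifying a software tool.

    All field values in the instance below are illustrative,
    mirroring the article's static analyzer example.
    """
    vendor: str
    tool_name: str
    version: str
    execution_env: str
    ruleset: list          # active rule sets and any disabled rules
    use_case: str
    known_anomalies: list  # pointers to vendor errata / bug reports


record = ToolEvaluationRecord(
    vendor="ExampleVendor",  # assumed name for illustration
    tool_name="AnalyzerPro",
    version="4.2.1",
    execution_env="Ubuntu 22.04 LTS, cloud-based build server",
    ruleset=["MISRA C:2012 mandatory rules"],
    use_case="Pre-compilation scan of ASIL D C code in CI",
    known_anomalies=["See vendor errata sheet"],
)
```

Because the record is frozen, any change to the tool version or configuration forces a new record, which matches the principle that a tool qualified for one configuration is not automatically qualified for another.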
Determining Tool Impact and Error Detection
With the data gathered, the next step is to classify the tool. This begins with analyzing two distinct parameters: Tool Impact (TI) and Tool Error Detection (TD).
Assessing Tool Impact (TI)
Tool Impact asks a simple question: Can a malfunction in this tool introduce or fail to detect an error in the safety-related item? There are two possible classifications.
- TI1: The tool cannot introduce or fail to detect errors.
- TI2: All other cases where the tool can introduce or fail to detect errors.
In our EPS example, the static code analyzer does not generate code, so it cannot directly introduce an error into the EPS software. However, its entire purpose is to detect coding violations. If the tool malfunctions, it might fail to detect a critical buffer overflow. Therefore, the Tool Impact for our static analyzer is clearly TI2.
Assessing Tool Error Detection (TD)
Tool Error Detection evaluates the probability that an error caused by the tool will be detected or prevented by subsequent measures in your development process. There are three levels.
- TD1: There is a high degree of confidence that a tool error will be detected.
- TD2: There is a medium degree of confidence that a tool error will be detected.
- TD3: All other cases (low or unknown confidence).
For our static analyzer, we must ask: If the tool misses a MISRA violation, will a downstream process catch it? The team might conduct manual code reviews, but human review is prone to error. They also run dynamic testing, but dynamic testing might not trigger the specific edge case caused by the coding violation. Because the team cannot guarantee a high or medium probability of detecting the tool's failure downstream, they must conservatively assign a classification of TD3.
Calculating the TCL for Software Tools
The combination of Tool Impact (TI) and Tool Error Detection (TD) determines the Tool Confidence Level (TCL). The TCL dictates how much rigor is required for tool Qualification. The levels range from TCL1, which requires no qualification measures, to TCL3, which requires the most rigorous qualification.
Based on our EPS static analyzer example, we combine a Tool Impact of TI2 with a Tool Error Detection of TD3. This combination results in a classification of TCL3. A TCL3 classification means the tool requires rigorous qualification, especially since the target system is ASIL D.
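This lookup can be expressed directly in code. The sketch below follows the TI/TD combination table in ISO 26262-8 (Table 4): TI1 always yields TCL1, while for TI2 the result depends on the TD level.

```python
def determine_tcl(ti: str, td: str) -> str:
    """Map a (TI, TD) pair to a Tool Confidence Level.

    Follows the combination table in ISO 26262-8 (Table 4).
    """
    if ti == "TI1":
        # Tool cannot introduce or fail to detect errors:
        # no qualification required regardless of TD.
        return "TCL1"
    # TI2: confidence level depends on error detection capability.
    return {"TD1": "TCL1", "TD2": "TCL2", "TD3": "TCL3"}[td]


# The EPS static analyzer example: TI2 combined with TD3.
eps_tcl = determine_tcl("TI2", "TD3")  # → "TCL3"
```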
Selecting a Qualification Method
Because our tool is classified as TCL3 for an ASIL D project, the team must select an appropriate Qualification method to prove the tool is fit for purpose. The standard provides four primary methods for tool Qualification.
- Increased confidence from use: Demonstrating that the same tool version has a long, documented history of use in comparable use cases without safety-relevant malfunctions.
- Evaluation of the tool development process: Auditing the tool vendor to ensure they developed the tool using a rigorous quality management system.
- Validation of the software tool: Creating a specialized test suite to validate the tool against its requirements.
- Development in accordance with a safety standard: Treating the tool itself as a safety-critical software project.
For a commercial static analyzer, the most practical approach is often Validation of the software tool. The EPS engineering team would create a suite of dummy C code files containing known MISRA violations. They would then run the static analyzer against this test suite to prove that the tool successfully detects every known error under their exact configuration and execution environment.
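At its core, such a validation run reduces to comparing the set of seeded violations against the set the tool actually reported: any seeded violation the tool misses is a qualification failure for that configuration. A minimal sketch of that comparison (the file names and MISRA rule IDs are illustrative, not taken from a real test suite):

```python
def missed_findings(expected: set, reported: set) -> set:
    """Return seeded violations the analyzer failed to report.

    'expected' holds (file, rule) pairs hand-seeded into the dummy
    test suite; 'reported' is parsed from the analyzer's output.
    A non-empty result means the tool fails validation for this
    configuration.
    """
    return expected - reported


# Illustrative (file, rule) pairs -- not real test suite contents.
expected = {("overflow.c", "Rule 18.1"), ("cast.c", "Rule 10.3")}
reported = {("overflow.c", "Rule 18.1"), ("cast.c", "Rule 10.3")}

assert not missed_findings(expected, reported)  # tool passes validation
```

Note that the comparison is deliberately one-sided: extra findings (false positives) are an annoyance, but missed findings (false negatives) are what undermine confidence in a TI2 tool.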
Conclusion
Building confidence in software tools is a mandatory and highly structured process in modern automotive engineering. By systematically identifying the required tool data, assessing the Tool Impact, and determining the Tool Error Detection capabilities, you can accurately classify your tools. As demonstrated with the EPS static analyzer example, even tools that do not generate code can require rigorous Qualification if they fail to detect safety-critical errors.
Mastering tool qualification ensures that your toolchain acts as a reliable foundation for your safety goals, rather than a hidden source of risk. If you are ready to master the intricacies of TCL calculations, tool validation methods, and compliance documentation, we invite you to explore the resources available on our platform. Dive deeper with our comprehensive Confidence in the Use of Software Tools course at the ISO 26262 Academy, and ensure your engineering toolchain is truly ready for ASIL D development.
Abbreviations & Key Definitions
- ASIL - Automotive Safety Integrity Level, a risk classification scheme defined by ISO 26262.
- CI/CD - Continuous Integration / Continuous Deployment, a practice of automating the build, test, and delivery stages of software development.
- EPS - Electronic Power Steering, an automotive system that reduces the steering effort by using an electric motor.
- MISRA C - A set of software development guidelines for the C programming language developed by the Motor Industry Software Reliability Association.
- TCL - Tool Confidence Level, a classification from TCL1 to TCL3 indicating the required rigor for tool qualification.
- TD - Tool Error Detection, a measure of the probability that an error introduced or not detected by a tool will be caught by downstream measures.
- TI - Tool Impact, a classification indicating whether a tool can introduce or fail to detect an error in a safety-related item.
- Qualification - The process of providing evidence that a software tool is suitable for its intended use in a safety-critical development process.
Last updated: 30 March 2026


