## Context

Two standards specifically address AI safety in vehicles, complementing ISO 26262:

### ISO/PAS 8800:2024 — Safety and AI in road vehicles

Published December 2024; 15 clauses, 8 annexes. Extends ISO 26262 and ISO 21448 to cover AI/ML elements. Key requirements:
- AI element definition within the system context
- AI safety lifecycle — requirements → design → training → verification → validation → deployment → monitoring
- Assurance arguments for AI safety (connects to the safety case schema in #103)
- V-model tailored for AI — deriving AI safety requirements, selecting measures, V&V
- Data management — training data provenance, quality, governance
- AI tool qualification — confidence in AI development frameworks
- Post-deployment monitoring — ongoing safety in the field
### ISO 21448:2022 — SOTIF (Safety of the Intended Functionality)

Addresses risks from functional insufficiencies rather than hardware/software faults (those belong to ISO 26262). Critical for AI/ML because:

- ML models have inherent performance limitations (out-of-distribution inputs, edge cases)
- Perception system insufficiencies cause unsafe behavior
- It requires systematic scenario-based evaluation
- It covers the gap between "it works as designed" and "it is safe as intended"
Together, 8800 + 21448 + 26262 form the complete AI safety triangle for automotive.
## Design

### Schema: `schemas/iso-pas-8800.yaml`
| Artifact Type | ISO/PAS 8800 Clause | Description |
| --- | --- | --- |
| `ai-element` | Cl. 4-5 | Definition of an AI/ML element in the vehicle system |
| `ai-safety-req` | Cl. 6 | AI-specific safety requirement |
| `ai-arch-measure` | Cl. 7 | Architectural measure for AI safety (redundancy, monitoring, fallback) |
| `ai-dev-measure` | Cl. 8 | Development measure (training strategy, regularization, robustness) |
| `ai-data-req` | Cl. 9 | Data management requirement (provenance, quality, distribution) |
| `ai-training-record` | Cl. 9 | Training data record with bias assessment |
| `ai-verification` | Cl. 10 | AI verification measure (testing, formal methods, simulation) |
| `ai-validation` | Cl. 11 | AI validation measure (scenario-based, field testing) |
| `ai-deployment` | Cl. 12 | Deployment specification and monitoring plan |
| `ai-monitoring` | Cl. 13 | Post-deployment monitoring measure (drift detection, performance tracking) |
| `ai-tool-qual` | Cl. 14 | AI development tool qualification (frameworks, compilers, training pipelines) |
| `ai-assurance-argument` | Cl. 15 | Structured assurance argument for AI safety (links to the GSN schema in #103) |
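As a sketch of what instances of these artifact types might look like, here is a hypothetical fragment. The ids and field names (`element`, `verifies`, `method`, the `>= 99.9%` figure) are illustrative assumptions, not taken from the actual schema file:

```yaml
# Hypothetical artifact instances for schemas/iso-pas-8800.yaml.
# Ids, field names, and numbers are illustrative, not the real schema.
- type: ai-element
  id: AIE-001
  name: camera-pedestrian-detector
  description: CNN-based pedestrian detection in the forward camera path
- type: ai-safety-req
  id: ASR-001
  element: AIE-001            # links the requirement to its AI element
  text: Detect pedestrians within 50 m at >= 99.9% recall across the defined ODD
- type: ai-verification
  id: AIV-001
  verifies: ASR-001           # the backlink the traceability rules check for
  method: scenario-based simulation
```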
### Schema: `schemas/sotif.yaml`
| Artifact Type | ISO 21448 Area | Description |
| --- | --- | --- |
| `sotif-hazard` | Cl. 6 | Hazard from a functional insufficiency (not a component failure) |
| `sotif-triggering-condition` | Cl. 7 | Condition that triggers a functional insufficiency |
| `sotif-scenario` | Cl. 8 | Scenario combining triggering conditions → hazardous behavior |
| `sotif-acceptance-criterion` | Cl. 9 | Quantitative criterion for residual risk acceptance |
| `sotif-verification` | Cl. 10 | Verification of SOTIF measures |
| `sotif-validation` | Cl. 11 | Field validation and scenario coverage |
| `sotif-known-unsafe` | Cl. 7 | Known unsafe scenario (Area 2 in the SOTIF quadrant) |
| `sotif-unknown-unsafe` | Cl. 7 | Unknown unsafe scenario (Area 3 — the hardest to address) |
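A hypothetical fragment showing how these SOTIF artifacts could chain from hazard to acceptance criterion. Again, all ids and field names are illustrative assumptions, not the published schema:

```yaml
# Hypothetical SOTIF artifacts for schemas/sotif.yaml (illustrative only).
- type: sotif-hazard
  id: SH-001
  description: Missed pedestrian detection leads to late braking
- type: sotif-triggering-condition
  id: STC-001
  hazard: SH-001              # which hazard this condition can trigger
  description: Low sun angle causes lens flare in the forward camera
- type: sotif-scenario
  id: SSC-001
  triggering-conditions: [STC-001]
  area: known-unsafe          # Area 2 of the SOTIF quadrant
  description: Westbound driving at dusk, pedestrian crossing from the sun side
- type: sotif-acceptance-criterion
  id: SAC-001
  scenario: SSC-001
  criterion: Missed-detection rate under lens flare below 1e-7 per hour
```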
## Traceability rules

ISO/PAS 8800:

- Every `ai-element` must have `ai-safety-req` requirements (error)
- Every `ai-safety-req` must be verified (`ai-verification` backlink, error)
- Every `ai-element` must have an `ai-data-req` (data governance, error)
- Every `ai-element` must have `ai-monitoring` (post-deployment, warning)
- Every `ai-element` should have an `ai-assurance-argument` (warning)
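A sketch of how the ISO/PAS 8800 rules could be encoded declaratively. The rule syntax (`requires-link`, `requires-backlink`, `severity`) is an assumption for illustration, not an existing rule format:

```yaml
# Hypothetical declarative encoding of the ISO/PAS 8800 traceability rules.
# The syntax is a sketch, not an existing format.
rules:
  - id: 8800-req-coverage
    source: ai-element
    requires-link: ai-safety-req      # every AI element needs safety requirements
    severity: error
  - id: 8800-verification
    source: ai-safety-req
    requires-backlink: ai-verification  # every requirement must be verified
    severity: error
  - id: 8800-data-governance
    source: ai-element
    requires-link: ai-data-req
    severity: error
  - id: 8800-monitoring
    source: ai-element
    requires-link: ai-monitoring
    severity: warning
  - id: 8800-assurance
    source: ai-element
    requires-link: ai-assurance-argument
    severity: warning
```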
ISO 21448 SOTIF:

- Every `sotif-hazard` must have triggering conditions identified (error)
- Every `sotif-triggering-condition` must be covered by a scenario (error)
- Every `sotif-scenario` must have an acceptance criterion (warning)
- `sotif-unknown-unsafe` scenarios should drive systematic exploration (warning)
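The SOTIF rules admit the same kind of declarative sketch; the `requires-link`/`severity` syntax here is an illustrative assumption, not an existing rule format:

```yaml
# Hypothetical declarative encoding of the SOTIF traceability rules
# (illustrative syntax only).
rules:
  - id: sotif-triggering-coverage
    source: sotif-hazard
    requires-link: sotif-triggering-condition  # hazards need identified triggers
    severity: error
  - id: sotif-scenario-coverage
    source: sotif-triggering-condition
    requires-link: sotif-scenario              # every trigger covered by a scenario
    severity: error
  - id: sotif-acceptance
    source: sotif-scenario
    requires-link: sotif-acceptance-criterion
    severity: warning
```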
## Bridge schemas

### Why both together

ISO/PAS 8800 covers how to make the AI element itself safe (lifecycle, measures, qualification). ISO 21448 covers how to prove the intended functionality is safe (scenarios, triggering conditions, acceptance). They overlap on validation but address different aspects of AI risk.

A system using both would have:

`ai-element` → `ai-safety-req` (8800) + `sotif-hazard` → `sotif-scenario` (21448) → `ai-validation` (8800, using 21448 scenarios) → `ai-assurance-argument` (8800, GSN from #103)

That's the complete chain no other tool provides.
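This cross-schema chain could be instantiated roughly as follows; the ids and linking fields (`relates-to`, `validates`, `evidence`) are illustrative assumptions:

```yaml
# Hypothetical end-to-end trace spanning both schemas (illustrative only).
- type: ai-element              # iso-pas-8800
  id: AIE-001
- type: ai-safety-req           # iso-pas-8800
  id: ASR-001
  element: AIE-001
- type: sotif-hazard            # sotif
  id: SH-001
  relates-to: AIE-001           # bridge link between the two schemas
- type: sotif-scenario          # sotif
  id: SSC-001
  hazard: SH-001
- type: ai-validation           # iso-pas-8800, reusing the 21448 scenario
  id: AVL-001
  validates: ASR-001
  scenarios: [SSC-001]
- type: ai-assurance-argument   # iso-pas-8800, GSN per #103
  id: AAA-001
  element: AIE-001
  evidence: [AVL-001]
```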
## References

- ISO/PAS 8800:2024
- SGS: Introducing ISO/PAS 8800
- Adapting ISO/PAS 8800 to AI Safety in Other Industries (Critical Systems Labs)
- ISO 21448:2022
- Interplay between ISO 21448 and ISO 8800
- Jama: Navigating AI Safety with ISO 8800
- Related Rivet issues: #103 (safety case/GSN schema), #105 (STPA-for-AI), #99 (EU AI Act schema, `schemas/eu-ai-act.yaml`), #102 (domain schema packages: IEC 62304, DO-178C, IEC 61508, EN 50128/50716)