AI Standardization Roadmap 2.0: Path towards Future Standards in Trustworthy AI

With lighthouse regulations such as the EU AI Act and the AI Liability Directive on the horizon, it is hardly news that the AI community is working on a plethora of additional, more specific norms and standards that will guide the development, deployment, and monitoring of future AI systems. Because this work is still in progress, the concrete measures that organizations will have to take, and the specific tools they will need in order to provide trustworthy AI systems, are yet to be settled.

For this purpose, it is helpful to take a closer look at Germany's AI Standardization Roadmap 2.0, to which QuantPi had the honor of contributing. This document outlines a pathway toward the future standardization of AI systems and of the methods that ensure their trustworthiness and transparency.

Filiz Elmas, Head of Strategic Development Artificial Intelligence at DIN, Christoph Winterhalter, Chairman of the Board of DIN, Robert Habeck, Vice Chancellor and Federal Minister for Economic Affairs and Climate Action, Prof. Dr. Wolfgang Wahlster, CEA of the German Research Center for Artificial Intelligence (DFKI) and Michael Teigeler, Managing Director of the German Commission for Electrical, Electronic & Information Technologies in DIN and VDE (DKE) (left to right). © Stefan Zeitz

In this post, we will be discussing what the latest version of this roadmap aims to accomplish, what specific methodologies have been proposed to achieve these goals, and what this means for the future of AI standardization and development.

What is the German AI Standardization Roadmap?

The German Standardization Roadmap on Artificial Intelligence is a community-driven framework designed to identify and outline the requirements for the future standardization of safe and trustworthy AI technologies. It was developed and co-authored with input from more than 570 experts from business, academia, and the public sector over the course of 2022.

Although this framework introduces no enforceable regulation, it provides clear outlines for future standardization that will shape requirements for AI systems to come. For organizations preparing for compliance with future regulation, it’s a good indicator of requirements that experts in standardization find relevant for creating an environment that minimizes the potential risks for providers and users of AI systems.

Goals of standardization roadmaps as published by the German Commission for Electrical, Electronic & Information Technologies of DIN and VDE (DKE)

Roadmap Goals: Facilitate the Adoption of Safe AI Systems

The aim of the standardization roadmap is to facilitate the adoption of safe AI systems and minimize the risks posed by the current lack of mechanisms ensuring trustworthiness. This means developing standards that ensure system trustworthiness in terms of explainability, integrity, privacy, transparency, and human-centricity. The authors also recognize that trustworthiness cannot be achieved without addressing legal aspects such as liability and responsibility when errors occur and risks materialize.

According to the authors, “the task of this framework is to formulate a strategic roadmap for AI standardization”. For this purpose, the document considers an array of norms and standards that, once developed and applied, will “enable the reliable and safe application of AI technologies and contribute to explainability and traceability.”

More specifically, it aims to establish assessment and certification standards because "the lack of such conformity assessments and certification programs threatens the economic growth and competitiveness of AI as a technology of the future." The authors further add that “statements about the trustworthiness of AI systems are not robust without high-quality testing methods”.

Proposed Conformity Assessment Standards

Based on an initial analysis of the broad landscape of ML applications across industries, the authors outline a total of 116 focal requirements for future standardization across the various use cases. From these, they derive the following six key recommendations for action:

  1. Development, validation, and standardization of a horizontal conformity assessment and certification program for trustworthy AI systems.
  2. Development of data infrastructures and elaboration of data quality standards for the development and validation of AI systems.
  3. Consideration of humans as part of the system at all phases of the AI lifecycle.
  4. Development of specifications for conformity assessments of evolving learning systems in the field of medicine.
  5. Development and deployment of secure and trustworthy AI applications in mobility through best practices and assurance.
  6. Development of overarching data standards and dynamic modeling methods for the efficient and sustainable design of AI systems.

Naturally, the scope of the action recommendations is extremely broad, as they need to cover as many current and future use cases and industry-specific challenges as possible. This version of the roadmap proposes to address that challenge through horizontal conformity assessments that are standardized across use cases and are to be developed, validated, and implemented within operational risk management practices. This approach is well aligned with the European AI Act and highly relevant from a societal perspective.

As AI-enabled products and services become relevant in ever more spheres of life, users and affected individuals will need to be able to rely on conformity assessment outcomes that are comparable across domains and across the characteristics of specific systems.

For this reason, the document proposes specific follow-up actions to develop standards for horizontal conformity assessments of AI systems (action recommendation 1). These include, amongst others:

The authors argue that failing to implement horizontal conformity assessments might jeopardize the economic impact of artificial intelligence as users would be unable to verify and compare system reliability across different AI applications.
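
To make the idea of comparability more concrete, the following minimal sketch shows one way a standardized, domain-agnostic assessment report could be represented in code. It is an illustration under our own assumptions; the class and field names are hypothetical and are not taken from the roadmap.

```python
from dataclasses import dataclass, field


@dataclass
class ConformityResult:
    """One criterion of a conformity assessment, expressed in a uniform way."""
    risk_dimension: str   # e.g. "reliability", "transparency"
    criterion: str        # e.g. "robustness against input perturbations"
    passed: bool
    evidence: str         # pointer to the test artifacts backing the verdict


@dataclass
class ConformityReport:
    """A domain-agnostic report: the same schema is filled in whether the system
    does credit scoring or medical imaging, which is what keeps the outcomes
    comparable for users and affected individuals."""
    system_id: str
    domain: str           # e.g. "medical imaging", "credit scoring"
    results: list = field(default_factory=list)

    def summary(self):
        """Aggregate pass/fail per risk dimension over all assessed criteria."""
        out = {}
        for r in self.results:
            out[r.risk_dimension] = out.get(r.risk_dimension, True) and r.passed
        return out


report = ConformityReport(system_id="demo-model", domain="credit scoring")
report.results.append(
    ConformityResult("reliability", "stability under noisy inputs", True, "run-42/report.pdf")
)
print(report.summary())  # {'reliability': True}
```

Because every assessed system fills in the same schema, the summary of one system can be compared with that of a system from an entirely different domain without knowing the internals of either assessment.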

Holistic Risk Management for Artificial Intelligence

Of similar importance to the operationalization of risk management are holistic assessment frameworks. Risk management practices need to be applied across the entire lifecycle of an AI system and need to encompass multiple risk dimensions. The standardization roadmap outlines nine stages of the AI lifecycle and maps the relevant risk dimensions to each of them:

Mapping of risk dimensions to stages of the AI lifecycle, as described in the second version of the German AI Standardization Roadmap and following ISO/IEC DIS 22989.

Six AI Risk Dimensions Highlighted in the Standardization Roadmap

Obviously, addressing all of the lifecycle stages and risk dimensions outlined above is a huge undertaking. While there are significant efficiency gains to be made through the introduction of (automated) testing tools, there are currently no holistic, pre-configured, off-the-shelf solutions available. For this reason, it is important for organizations to carefully consider the limitations and calibration requirements of existing fragmented auditing tools, as they work to implement holistic risk management strategies.
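
As a rough illustration of what "holistic" means in practice, the sketch below treats the combination of lifecycle stages and risk dimensions as a coverage matrix and flags the cells for which no control has been documented yet. The stage and dimension labels are placeholders of our own, not the roadmap's exact terminology.

```python
from itertools import product

# Placeholder labels; the roadmap's nine lifecycle stages (following
# ISO/IEC DIS 22989) and its six risk dimensions may be named differently.
LIFECYCLE_STAGES = ["inception", "design", "data preparation", "development",
                    "verification", "deployment", "operation", "monitoring",
                    "retirement"]
RISK_DIMENSIONS = ["reliability", "fairness", "transparency",
                   "privacy", "safety & security", "human oversight"]

# A minimal risk register: each entry ties a documented control to one
# (lifecycle stage, risk dimension) cell of the assessment matrix.
risk_register = {
    ("data preparation", "fairness"): "check sampling bias; document data provenance",
    ("operation", "reliability"): "monitor model drift; define rollback thresholds",
}


def uncovered_cells(register):
    """Return every (stage, dimension) combination without a documented control."""
    return [cell for cell in product(LIFECYCLE_STAGES, RISK_DIMENSIONS)
            if cell not in register]


missing = uncovered_cells(risk_register)
print(f"{len(missing)} of {len(LIFECYCLE_STAGES) * len(RISK_DIMENSIONS)} "
      "cells still lack a documented control")
```

Even this toy example makes the scale of the undertaking visible: a handful of controls leaves most of the matrix uncovered, which is exactly why automated testing support and careful calibration of existing tools matter.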

Overall, the German AI Standardization Roadmap is set to have a strong impact on the testing and certification of AI systems used by enterprises.

The Importance of Transparent Auditing Tools  

QuantPi Co-founder and Chief Scientist Dr. Antoine Gautier participated in its creation as part of the testing and certification workgroup, chaired by Dr. Maximilian Poretschkin of the Fraunhofer Institute for Intelligent Analysis and Information Systems, and Daniel Loevenich of the German Federal Office for Information Security.

With regard to the concrete requirements for future auditing tools, the workgroup recommended that these “can be derived from the properties of the effectiveness criteria”. Testing tools, it notes, should provide all the information necessary to interpret their results appropriately; such information should cover at least the following dimensions:

Applying audit tools in line with the outlined transparency requirements could constitute an important safeguard against the misapplication of solutions intended to bolster the trustworthiness of AI systems.
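
As a hypothetical illustration of such transparency requirements, a testing tool could attach a machine-readable context record to every result it produces. The field names below are our own suggestion of the kind of information meant; the roadmap's exact list of dimensions is not reproduced here.

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class TestResultContext:
    """Metadata a testing tool could ship with every result so that auditors can
    interpret it; field names are illustrative, not taken from the roadmap."""
    tool_name: str
    tool_version: str
    metric: str                  # what was measured and how it is defined
    test_data: str               # provenance of the data the test ran on
    assumptions: list            # preconditions under which the result is valid
    scope_of_validity: str       # inputs / operating conditions the result covers
    known_limitations: list


context = TestResultContext(
    tool_name="example-robustness-checker",   # hypothetical tool
    tool_version="0.3.1",
    metric="accuracy drop under L2-bounded input perturbations",
    test_data="held-out validation split, v2024-01",
    assumptions=["i.i.d. test data", "frozen model weights during testing"],
    scope_of_validity="tabular inputs within the training feature ranges",
    known_limitations=["does not cover distribution shift over time"],
)

# Ship the context alongside the numeric result so the verdict can be audited later.
print(json.dumps(asdict(context), indent=2))
```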

Notably, these recommendations for the transparency of testing tools are not made in a vacuum. Existing and future standards on trustworthiness in artificial intelligence (e.g., ISO/IEC TR 24028) or AI risk management (e.g., ISO/IEC 23894) serve as a basis for this work.

Nonetheless, methods to concretely assess the objective quality of auditing tools still need to be developed. Providing transparent tools for conformity assessments of AI systems is a major objective of QuantPi’s R&D team.

Enabling Coexistence With Intelligent Machines

Closing the gap between (future) regulatory requirements and the capabilities of the tools actually available to meet them will require enterprises to implement innovative, purpose-built tools designed to handle both current challenges and the shifts in the regulatory and technological landscape still to come.

Still, regulatory compliance is not an end in itself. AI systems developed and deployed according to the requirements outlined in the Standardization Roadmap have a higher chance of achieving their full transformative potential. In this way, standardization enables a responsible and economically successful AI transformation.

QuantPi is grateful for the numerous substantive and purpose-driven discussions in the course of working on this new version of the roadmap. We will continue to engage in the standardization community to advance our vision of a safe and desirable coexistence of humans and intelligent machines.
