AI ethics and compliance. Three words that circulate in boardrooms either as a reassuring mantra or as irritating compliance jargon. Some executives see them as a noble attempt to keep the digital leviathan on a leash; others merely hear the faint reproach that their last algorithm may have discriminated just a little.
But whether you prefer to print out ethics guidelines or sales curves: at the latest since the EU AI Act, it has been clear that the topic no longer belongs in the "optional extras" drawer. Without an ethics and compliance structure, a company's AI strategy threatens to be about as stable as a house of cards in front of a wind machine.
Image source: K11 Consulting GmbH | Description: Speaker explains key aspects of AI management in the company in the workshop and leads the discussion.
Regulations are arriving faster than new buzzwords. The EU has decided that artificial intelligence must not remain an untamed field of experimentation.
Problem:
Companies use AI without establishing ethical guidelines or risk assessments. The result: discrimination, data protection breaches, reputational damage.
Solution:
The EU AI Act distinguishes four risk levels for AI applications: minimal risk, limited risk, high risk and unacceptable risk. High-risk AI applications are those that directly affect people's health, safety or fundamental rights. These include, for example, systems that decide on access to jobs, loans or state benefits, but also AI in medical diagnostics or biometric identification in public spaces.
Problem:
Many companies believe their AI is harmless - until the responsible supervisory authority (for data protection issues: the data protection authority; for other AI aspects: the designated AI supervisory authority) kindly asks whether the algorithm has ever been checked for risks such as bias.
Solution:
Image source: K11 Consulting GmbH | Description: Workshop on AI management in companies - participants discuss and develop practical approaches together.
From bad credit decisions to unintentional discrimination in job applications - AI can ruin in seconds what took the marketing team ten years to build.
Step 1: Establish responsibility - e.g. in the form of an internal AI officer with a direct reporting line to the management.
Step 2: Define guidelines - preferably in writing, binding and without the words "may", "should" or "possibly".
Step 3: Train all relevant teams - from IT to marketing. AI training is not a luxury, it is a must.
Internal reading tip: AI Officer as a Service
Image source: K11 Consulting GmbH | Description: Participants in a workshop on AI management in companies follow the discussion closely
Some prefer to speak of Responsible AI, others of Trustworthy AI - terms that sound like noble brands in the AI bubble and have long since found their way into official strategy papers in Brussels. The idea behind both is the same: artificial intelligence should not only be legally compliant, but also fair, transparent and comprehensible. Anyone who takes AI ethics and compliance seriously automatically moves within this set of values - and conversely, Responsible AI and Trustworthy AI can hardly be achieved without a solid compliance structure.
Ethically impeccable AI is like an impeccably pressed suit: no one asks whether it is necessary - but everyone notices when it is missing.
Image source: K11 Consulting GmbH | Description: Team members after a workshop on AI management in the company - exchange, collaboration and the joy of learning together.
Anyone who treats AI ethics and compliance as a freestyle exercise will soon realize they are standing on a stage where it has long been the compulsory program. Ethics is not decoration; it is the frame without which the picture of "digital transformation" remains incomplete.
And yes - you can talk about it with a wink. But when it comes to implementation, it's better to frown.