The format combines online material on the topic with a live webinar (two dates to choose from) in which we answer your questions about the content. The aim of the course is to give you an initial orientation: first comes a blog post with a compact summary of the EU AI Act. You then listen to a short podcast segment on the risk levels. After that, the risks of AI systems are presented, followed by recommendations for action to mitigate these risks. Finally, you attend a live webinar where you can ask questions about the content and exchange ideas with other participants.
EU AI Act
Overview and classification for practice
CONTENT OF THE ONLINE TRAINING
Which obligations are relevant for you?
This depends on whether you are a provider or a deployer of AI, and on the context in which you use AI. For more information, see Section 1.
What are the risk levels?
Four risk levels are distinguished: minimal, limited, high and unacceptable risk. For more information, see Section 2.
What are the risks associated with the use of AI?
The risks are closely linked to the risk levels: the higher the risk level, the greater the potential impact. For more information, see Section 3.
2. EU AI ACT PODCAST
In this podcast, the four risk levels of the EU AI Act are briefly presented.
3. RISKS AND RECOMMENDATIONS FOR ACTION
Hallucinations
What does this mean?
AI systems can make mistakes and provide false information. Especially with translation or text-generation systems, the AI may mistranslate, misrepresent or invent content.
Recommendations
Critically check AI-generated content, especially for technical terms or sensitive information.
Black Box
What does this mean?
Users often do not know exactly how an AI arrives at its output. Even with simple AI systems, it is sometimes unclear why the AI suggests something, because how it works is not transparent.
Recommendations
Ask for comprehensible criteria for decisions by AI systems. Question automated proposals, especially for important decisions.
Bias
What does this mean?
Even simple AI systems can deliver biased or imprecise results. If an AI has been trained on incomplete or biased data, it can make wrong decisions.
Recommendations
Watch out for possible discrimination or unfair disadvantages in the results. Report conspicuous patterns and advocate for more diverse training data and perspectives.
Dependence
What does this mean?
If we rely too much on AI tools, we sometimes no longer ask ourselves whether the results are correct.
Recommendations
Use AI as a support, not as a substitute for critical thinking. Check results and always make important decisions yourself.
Disinformation
What does this mean?
AI can create deceptively realistic content that spreads false information. The AI Act requires AI-generated or manipulated content to be clearly identified.
Recommendations
Clearly identify AI-generated content, e.g. in presentations or customer communication. Do not disseminate unverified AI-generated content; always check it independently for accuracy first.
Manipulation
What does this mean?
AI-generated content can be deliberately designed to influence people’s opinions, decisions or behaviour without their being aware of it. This applies in particular to text, images, videos or audio aimed at emotional or psychological effects. The AI Act requires such manipulations to be clearly identified in order to increase transparency and minimise the risk of undue influence.
Recommendations
Insist on transparency and critically question emotionally charged or intrusive AI-generated messages.
Vulnerabilities
What does this mean?
Even simple AI systems can be attacked in order to steal data.
Recommendations
Use only AI systems from trustworthy sources. Find out about their security features and watch for signs of misuse or external tampering.
4. WEBINAR: 30-MINUTE Q&A
Two separate dates are offered; the sessions do not build on each other.