AI Gets Reined In by Swiss Law, Models Flunk Moral Tests, Global Safety Frameworks Solidify, and EU-Focused AI Alternative Emerges
- Aegis Blue
- May 14
- 3 min read
AI Business Risk Weekly
This week: Swiss regulators confirm that existing data protection law applies to AI, a new study reveals striking failures of moral judgment in large language models, international efforts to forge AI safety standards gain traction, and a new AI platform caters to European privacy demands.
Swiss Data Protection Law Confirmed Applicable to AI
Switzerland's Federal Data Protection and Information Commissioner (FDPIC) has issued guidance confirming that the nation's current Data Protection Act (DPA), in force since September 1, 2023, applies directly to data processing involving artificial intelligence. The guidance clarifies that the technology-neutral DPA mandates transparency in AI operations, including disclosure of purpose and data sources; upholds data subjects' rights to object to automated processing and to request human review; and requires data protection impact assessments for high-risk AI applications. Certain applications, such as large-scale real-time facial recognition, are deemed prohibited.
Business Risk Perspective: The Swiss FDPIC's guidance acts as a crucial bellwether for global AI regulation, heightening the legal and financial risks for businesses whose AI systems fail to meet established data protection principles.
Study Highlights Critical Gaps in AI Moral and Legal Judgment Alignment
A recent discussion paper from the Max Planck Institute, titled "Human Realignment: An Empirical Study of LLMs as Legal Decision-Aids in Moral Dilemmas," found a significant mismatch between the decisions of large language models (LLMs), including several GPT versions, and human judgments in moral-legal dilemmas. The study explored whether explicit normative guidance could close this gap, but results were mixed; the researchers concluded that current methods of explicit instruction are insufficient to fully align AI advice with human normative convictions in these scenarios.
Business Risk Perspective: This stark misalignment of LLMs with human moral-legal reasoning poses substantial reliability and liability risks when these models inform sensitive decisions, potentially leading to legally or ethically indefensible outcomes.
Singapore Conference Advances Global AI Safety Consensus
Over 100 participants from 11 countries, including representatives from every government-backed AI safety institute, among them those of the US and China, convened at the 2025 Singapore Conference on AI. Sponsored by the Singaporean government, the conference aimed to identify AI safety research priorities and produced a 40-page consensus document proposing a "defence-in-depth" approach to technical AI safety, structured around rigorous risk assessment, integration of safety into development, and continuous monitoring and control of deployed systems.
Business Risk Perspective: This international consensus on a "defence-in-depth" AI safety approach indicates rising global expectations for corporate AI governance, potentially shaping future regulatory landscapes and stakeholder scrutiny. Businesses may find their current AI safety practices benchmarked against these emerging international norms.
Mistral AI Launches EU-Focused, GDPR-Aligned AI Platform
French AI startup Mistral AI has released its "Medium 3" model and "Le Chat Enterprise" platform, positioning them as a cost-effective, high-performance alternative for organizations prioritizing EU data governance. As detailed by the company, the enterprise platform's features, including on-premises deployment options, are designed to align with stringent European privacy requirements.
Business Risk Perspective: The emergence of competitive AI models from regions with distinct regulatory philosophies, like Mistral from France, presents both opportunities and challenges for businesses navigating global compliance. Organizations risk operational inefficiencies or regulatory misalignment if they fail to evaluate how these evolving model options may better address specific regional data governance needs.
AI Business Risk Weekly is an Aegis Blue publication.
Aegis Blue ensures your AI deployments remain safe, trustworthy, and aligned with your organizational values.