FOCUS TOPIC
Seize opportunities, manage risks
Generative AI is changing our working world at a speed we have hardly seen before. Companies benefit from more productive processes, more efficient workflows and entirely new possibilities in text, image, video, audio and code generation. But as the opportunities grow, so does the responsibility: generative AI brings new types of security risks that can quickly become a real threat without clear governance.
Why generative AI is so valuable
Whether text automation, code analysis, design prototypes or data visualization – generative models have long been more than just gimmicks.
They are already providing support today:
- Security management (e.g. analysis of vulnerabilities, incident response plans)
- Detection of spam, phishing & fake news
- Code hardening & log analysis
- Increased efficiency in day-to-day business
- Visualization of complex relationships
Used correctly, AI can make a massive contribution to security, compliance, quality and productivity.
Do you want to find out what your AI potential is?
Then get started today and contact us for a free joint use case sparring session.
The dark side: New security risks
However, the introduction of generative AI comes with risks that many companies still underestimate:
1. Risks despite "proper" use
- Confidentiality of inputs (data leakage to cloud AI services)
- Incorrect or misleading output
- Insecure generated code (see the sketch after this list)
- Automation bias – blind trust in AI results
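To make the "insecure generated code" risk tangible, here is a minimal, hypothetical Python illustration (our own sketch, not output from any specific model): the concatenation pattern that generated database code often falls into, next to the parameterized form a code review should insist on.

```python
import sqlite3

def find_customer_unsafe(conn: sqlite3.Connection, name: str):
    # A typical insecure pattern in generated code: user input is
    # concatenated directly into the SQL string, enabling SQL injection
    # (e.g. name = "x' OR '1'='1").
    query = f"SELECT id, name FROM customers WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_customer_safe(conn: sqlite3.Connection, name: str):
    # The fix a reviewer should insist on: a parameterized query, so the
    # driver treats the input strictly as data, never as SQL.
    query = "SELECT id, name FROM customers WHERE name = ?"
    return conn.execute(query, (name,)).fetchall()
```

Generated code that "works" in a demo can still fail exactly this kind of review, which is why it should never bypass the usual security gates.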
2. Misuse
Attackers can use AI to:
- Create fake content, identities or deepfakes
- Program malware or scale social engineering attacks
- Carry out processes such as CEO fraud in a much more sophisticated manner
Risks also arise within companies – for example through the sharing of confidential data with external AI services, copyright infringements or discriminatory AI decisions.
3. attacks on the AI itself
Generative models are themselves vulnerable. Three types of attack are particularly common so far:
Privacy Attacks
Attackers try to reconstruct sensitive information from the model, e.g. training data or private user input. This exposes confidential data, even though the model was only supposed to deliver results.
Note:
Attackers can also be internal employees with good IT knowledge who, for example, want to gain unauthorized access to confidential information or systems.
Example:
An insurance company uses an internal generative AI system (LLM) to support employees in analyzing claims, summarizing customer dossiers and answering internal technical questions.
The AI system is based on an LLM with RAG architecture (Retrieval-Augmented Generation) and accesses internal knowledge sources such as customer databases, claims files, internal guidelines and previous chat histories (for quality improvement).
An internal attacker (e.g. an employee with advanced IT knowledge) wants to check whether confidential information of other customers or employees can be reconstructed from the AI system and formulates “harmless” questions such as “Create an example summary of a complex car damage case of a customer from Zurich with several parties involved.”
The AI system responds with realistic-sounding names, real zip codes, detailed damage descriptions, medical information and exact damage totals.
The problem: the information does not come from fictitious examples, but is reconstructed content from real training or context data.
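One possible technical mitigation for this scenario, sketched below under clear assumptions: an output filter that redacts obvious personal data from the RAG answer before it reaches the employee. The patterns and the llm_generate placeholder are purely illustrative; the primary control remains restricting retrieval to documents the requesting user is authorized to see.

```python
import re

# Illustrative patterns only: a real deployment would use a dedicated
# PII-detection service and, above all, restrict retrieval to documents
# the requesting user is actually authorized to see.
PII_PATTERNS = {
    "ZIP": re.compile(r"\b[1-9]\d{3}\b"),          # Swiss postal codes
    "AMOUNT": re.compile(r"\bCHF\s?\d[\d'.,]*"),   # exact damage totals
    "DOSSIER": re.compile(r"\bD-\d{6}\b"),         # hypothetical dossier IDs
}

def redact(text: str) -> str:
    # Replace matched personal data with type tags before the answer
    # leaves the system.
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def answer(llm_generate, context: str, question: str) -> str:
    # llm_generate stands in for the internal model call; the redaction
    # layer runs on the output as a last line of defense.
    raw = llm_generate(f"Context:\n{context}\n\nQuestion: {question}")
    return redact(raw)
```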
A striking real-world example:
In the “EchoLeak” exploit, attackers were able to access data from the context window of an AI copilot via hidden prompt injections – without classic phishing and with minimal user interaction.
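A common defensive layer against such hidden prompt injections, again only a hedged sketch with illustrative phrase patterns, is to scan retrieved documents for instruction-like content before they ever enter the context window:

```python
import re

# Illustrative heuristics only: real injections (as in EchoLeak) can be
# far subtler, so this filter is one layer among several, not a guarantee.
INJECTION_HINTS = [
    r"ignore (all|any|previous) instructions",
    r"you are now",
    r"send .{0,40} to http",
    r"do not (tell|mention|reveal)",
]

def looks_injected(document: str) -> bool:
    return any(re.search(p, document, re.IGNORECASE) for p in INJECTION_HINTS)

def build_context(documents: list[str]) -> str:
    # Quarantine suspicious documents instead of silently feeding them
    # into the copilot's context window.
    clean = [doc for doc in documents if not looks_injected(doc)]
    return "\n---\n".join(clean)
```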
Evasion Attacks
Third parties manipulate inputs to circumvent protection mechanisms.
Example:
A bank uses an AI/LLM-supported assistance system that automatically classifies incoming emails and flags potential fraud, phishing or social engineering.
An external attacker wants to slip a fraudulent email past this system and deliberately crafts it with unusual sentence structures, innocuous technical terms, slightly altered payment requests and contextual references to real projects or departments.
The AI classifies the email as harmless, "inconspicuous internal specialist communication"; the message passes all automated filters and lands directly in the inbox of an employee on the finance team.
-> The transfer process is initiated.
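The toy example below, entirely hypothetical, shows why a single-layer filter is so easy to evade: a naive keyword check catches the classic wording but waves through the same request once it is rephrased.

```python
# A deliberately naive filter of the kind attackers evade: it flags known
# phishing vocabulary, so rephrasing alone defeats it.
SUSPICIOUS_TERMS = {"urgent transfer", "wire immediately", "confidential payment"}

def naive_filter(email_text: str) -> bool:
    text = email_text.lower()
    return any(term in text for term in SUSPICIOUS_TERMS)

classic = "URGENT TRANSFER required today. Wire immediately to the account below."
evasive = ("Regarding the Q3 vendor settlement we discussed with the finance team: "
           "please prioritise the remittance to the updated account before Friday.")

assert naive_filter(classic) is True    # caught
assert naive_filter(evasive) is False   # slips through: same intent, new wording
```

This is why AI verdicts on payment-related messages should never replace out-of-band verification, e.g. a phone call to a known number before any account details are changed.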
Poisoning Attacks
Training data is deliberately "poisoned" by third parties so that the model becomes faulty or manipulable.
Example:
An insurance company operates an internal generative AI system (LLM) to support claims processing, fraud detection and answering internal technical questions, among other things.
The model is regularly retrained (“continuous learning”) with new claims, internal documents, feedback from the specialist departments and external specialist sources (e.g. market reports, partner portals).
To keep the pipeline efficient and up to date, data ingestion is automated. An external attacker (or a compromised partner) wants the AI system to systematically make wrong decisions in certain cases without this being immediately apparent.
Specifically, certain fraud indicators should be ignored and certain damage patterns rated as "inconspicuous". The attacker compromises an external data source, e.g. a partner portal or a shared knowledge pool, and deliberately places manipulated or falsified content there that skews decisions on fraud cases and claims classifications.
-> The model gradually learns a distorted decision behavior.
-> The effects only become apparent weeks or months later.
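A retraining pipeline can be hardened against exactly this pattern. The sketch below, with illustrative field names and thresholds, gates each training batch on two checks: records must come from allowlisted sources, and the fraud-label distribution must not drift sharply from its historical baseline.

```python
from collections import Counter

TRUSTED_SOURCES = {"claims-db", "internal-guidelines"}  # illustrative allowlist

def filter_by_source(records: list[dict]) -> list[dict]:
    # Drop records from feeds that are not explicitly allowlisted,
    # e.g. a compromised partner portal.
    return [r for r in records if r.get("source") in TRUSTED_SOURCES]

def label_drift_suspicious(records: list[dict], baseline_fraud_rate: float,
                           tolerance: float = 0.05) -> bool:
    # Flag the batch if the share of "fraud" labels deviates sharply
    # from the historical baseline: a crude but useful poisoning tripwire.
    counts = Counter(r["label"] for r in records)
    total = sum(counts.values())
    if total == 0:
        return True  # an empty batch is suspicious in itself
    fraud_rate = counts.get("fraud", 0) / total
    return abs(fraud_rate - baseline_fraud_rate) > tolerance

def gate_retraining(records: list[dict], baseline_fraud_rate: float) -> list[dict]:
    batch = filter_by_source(records)
    if label_drift_suspicious(batch, baseline_fraud_rate):
        raise RuntimeError("Batch quarantined: label distribution drifted, review manually")
    return batch
```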
Clear proof that AI security is now a discipline in its own right.
How companies can protect themselves:
At Pragmatica, our experts and consultants offer the following advisory and implementation services so that AI can be used responsibly and securely in your company.
BUILDING COMPETENCES & TRAINING EMPLOYEES
Employees need to know how AI works. They need to know how to use it, where the risks lie and how to minimize them.
Our AI enablement service makes generative AI safe for banks & insurance companies.
Pragmatica AG enables employees to use AI safely and in compliance with regulations through targeted training – in line with GDPR, FINMA requirements, the EU AI Act and established standards such as ISO/IEC 42001 and NIST AI RMF.
In regulated environments in particular, AI risks often arise due to a lack of expertise, incorrect use and automation bias.
With practical training, case studies and clear dos and don'ts, we create security, acceptance and audit readiness.
Less risk. More efficiency. Audit and revision-proof.
GUIDELINES & GOVERNANCE
A clear internal AI policy is mandatory – especially with regard to data protection, compliance, transparency and the handling of confidential data.
Technical & organizational measures
In-house AI services (e.g. internal LLMs) with controlled rules and security mechanisms significantly reduce risks.
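What "controlled rules and security mechanisms" can look like in practice is sketched below, purely as an assumed design with placeholder functions: a thin wrapper around the internal LLM call that redacts sensitive input, scopes retrieval to the requester's role and writes an audit log.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai-audit")

@dataclass
class Request:
    user_id: str
    role: str    # used to scope which documents may be retrieved
    prompt: str

def scrub(prompt: str) -> str:
    # Placeholder for the company's real PII/secret redaction step.
    return prompt.replace("password", "[REDACTED]")

def guarded_call(llm_generate, retrieve_for_role, req: Request) -> str:
    prompt = scrub(req.prompt)
    context = retrieve_for_role(req.role)  # least-privilege retrieval
    audit_log.info("user=%s role=%s prompt_len=%d", req.user_id, req.role, len(prompt))
    answer = llm_generate(f"{context}\n\n{prompt}")
    audit_log.info("user=%s answer_len=%d", req.user_id, len(answer))
    return answer
```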
Using established standards
- ISO/IEC 42001 – the world’s first standard for AI management systems
- ISO/IEC 23894 – standardized AI risk management
- NIST AI RMF – Best practices for managing AI risks
- CoE AI Treaty – the Council of Europe's international treaty on AI, human rights & the rule of law
These frameworks provide guidance on how AI can be used securely, lawfully and ethically.
Conclusion
Generative AI is a historic opportunity – but only if we use it safely, responsibly and strategically.
Companies that establish clear rules, training and technical protective measures today will be among the winners tomorrow.
Simply register for a free use case sparring session to explore your system landscape or your AI governance in more depth.
Introductory formats – free of charge & individually tailored
Use Case Exploration (Workshop)
2-4 hours | Remote or on-site
Introduction to LLMs, individual use cases, feasibility assessment

Architecture & Technology (Workshop)
2-4 hours | Remote or on-site
Technology options in the market & fit to existing architecture

AI Governance & Compliance (Awareness session)
1-2 hours | Remote or on-site
Regulatory requirements for AI use (FINMA, DPA, EU AI Regulation)