Confidentiality
Training data memorization — Models can regurgitate PII, API keys, or proprietary data from training sets. Membership inference attacks can determine if specific data was used in training (AttenMIA achieves 0.996 AUC). System prompt leakage exposes internal policies and tool schemas.
Integrity
Prompt injection corrupts outputs — Attackers can make models produce false information, bypass safety filters, or execute unauthorized actions. Data poisoning corrupts the model itself. RAG poisoning inserts false context that the model treats as ground truth.
Availability
Unbounded consumption (OWASP LLM10) — Adversaries can trigger runaway costs through expensive queries, denial-of-service via jamming attacks on RAG systems, or resource exhaustion through recursive agent loops. A single malicious prompt can cost thousands in API fees.
AI adds a fourth dimension: Alignment. Even when C, I, and A are intact, the model may behave in ways that violate organizational policies, produce biased outputs, or take actions that are technically correct but ethically wrong. This is why governance (Ch 12) matters.