For the Love of Acronyms

We absolutely love our acronyms. In information security, compliance, and governance circles, we wear them like badges of honor. ISO, NIST, CIA, PPTDF, SOC, GDPR; the list goes on and on. And heaven help the person who coins a new one that catches on; they'll dine out on that achievement for years. There's a curious pride in acronym authorship, a sense that if you can distill complex concepts into a memorable three, four, or five-letter combination, you've contributed something meaningful to the field.

But here's the uncomfortable truth: our love affair with these acronyms has calcified our thinking. We've become so attached to certain frameworks, so invested in their elegance and familiarity, that we've failed to notice how profoundly the landscape beneath our feet has shifted. The emergence of AI as an actor, not just a tool, has fundamentally altered the nature of enterprise risk. Yet we continue reaching for conceptual models designed for an era when systems were passive, humans were the only decision-makers, and "the cloud" meant weather.

It's time for some tough love. And it's time to acknowledge that some of our most cherished acronyms, PPTDF and CIA among them, are no longer fit for purpose.

The PPTDF Problem: A Framework Out of Time

For decades, information security professionals have organized their thinking around PPTDF: People, Process, Technology, Data, and Facility. It's clean. It's comprehensive. It's been cited in countless standards, frameworks, and certification requirements. And it's fundamentally inadequate for the world we now inhabit.

PPTDF was born in an era of relatively clear boundaries. People were humans who worked in offices. Processes were documented workflows executed by those humans. Technology was the equipment and software they used. Data was the information they created and stored. Facilities were the physical buildings where all of this happened. Simple. Orderly. Comforting.

But what happens when an AI system makes credit decisions without human intervention? Is it People? Technology? Something else entirely? What about a synthetic agent that operates across jurisdictional boundaries, processing data in three regulatory regimes simultaneously? Where does that fit in your PPTDF matrix?

The model breaks down because it assumes a world that no longer exists: one in which humans are always in the loop, processes are linear and documented, technology serves rather than acts, and physical facilities define operational boundaries. In 2026, as organizations deploy autonomous AI systems, hybrid human-synthetic teams, and infrastructure that exists simultaneously across jurisdictions and environments, PPTDF simply cannot capture the complexity.

The Hidden Costs of Conceptual Inertia

The real damage isn't just that PPTDF is outdated; it's that our attachment to it actively impedes progress. When auditors assess organizations using PPTDF-based frameworks, they miss entire categories of risk because those risks don't fit neatly into the model. When security teams design controls using PPTDF thinking, they create gaps at the intersections of human and synthetic actors, physical and logical environments, and jurisdictional contexts.

Consider a common scenario: An AI system trained in one jurisdiction, deployed in another, processing data subject to a third jurisdiction's laws, with human oversight provided remotely from a fourth location. PPTDF offers no coherent way to think about this system. Is the jurisdictional question a Facility issue? A Technology issue? A Process concern? It's unclear, so it falls through the cracks.

This scenario isn't hypothetical. Organizations are failing audits, missing compliance obligations, and creating security exposures precisely because their frameworks cannot adequately represent the systems they're actually operating.

Enter ADEPT: A Framework for the Present and Future

What we need is a model that reflects reality as it exists today and as it will exist tomorrow. We need a framework that recognizes AI as an actor, not just an artifact. That acknowledges the multiplicity of environments in which modern systems operate. That treats jurisdictional context as a first-class concern, not an afterthought.

ADEPT provides exactly that:

  • Actor: who or what performs actions (Human, Synthetic, Hybrid, or Natural systems)

  • Data: what information is involved (classification, sensitivity, lineage, jurisdiction)

  • Environment: where systems operate (physical, logical, transitory, regulatory contexts)

  • Process: how work is executed (workstreams, decision points, authorities, governance)

  • Technology: what platforms and tools enable operations (dependencies, configurations, capabilities, constraints)

Notice what ADEPT does differently:

Actor replaces People because not all actors are people anymore. AI systems make decisions. Automated processes execute without human involvement. Hybrid teams combine human judgment and synthetic capability. The term "People" cannot contain this reality. "Actor" can.

Data becomes central with explicit recognition of jurisdiction because, in the age of GDPR, CCPA, China's PIPL, and dozens of other data protection regimes, you cannot understand data risk without understanding jurisdictional exposure. PPTDF treats "Data" as a monolith. ADEPT requires you to consider classification, sensitivity, lineage, and the laws governing its processing.

Environment replaces Facility because modern systems don't operate in a single physical location. They operate across cloud regions, edge devices, on-premises data centers, and jurisdictions with different regulatory requirements. "Facility" implies a building you can walk into. "Environment" encompasses all the physical, logical, and regulatory contexts that shape how a system operates.

Process and Technology remain, but with richer definitions that acknowledge the complexity of modern workflows and the interdependencies of modern technology stacks.
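
To make the shift concrete, here is a minimal sketch of how a single system might be mapped across the five ADEPT domains, expressed as Python data structures. Every field name and enumeration below is an illustrative assumption, not a published ADEPT schema:

    from dataclasses import dataclass, field
    from enum import Enum

    class ActorType(Enum):
        HUMAN = "human"
        SYNTHETIC = "synthetic"   # e.g., an autonomous AI system
        HYBRID = "hybrid"         # human judgment plus synthetic capability
        NATURAL = "natural"       # non-human, non-synthetic forces

    @dataclass
    class Actor:
        name: str
        actor_type: ActorType
        autonomous: bool = False  # acts without a human in the loop

    @dataclass
    class Data:
        classification: str       # e.g., "confidential"
        sensitivity: str          # e.g., "PII"
        lineage: list[str] = field(default_factory=list)        # upstream sources
        jurisdictions: list[str] = field(default_factory=list)  # governing regimes

    @dataclass
    class Environment:
        physical: list[str] = field(default_factory=list)    # sites, edge devices
        logical: list[str] = field(default_factory=list)     # cloud regions, VPCs
        regulatory: list[str] = field(default_factory=list)  # applicable regimes

    @dataclass
    class Process:
        name: str
        human_oversight: bool     # is a human actually in the loop?
        decision_points: list[str] = field(default_factory=list)

    @dataclass
    class Technology:
        platforms: list[str] = field(default_factory=list)
        third_party_dependencies: list[str] = field(default_factory=list)

    @dataclass
    class AdeptSystem:
        """One system mapped across all five ADEPT domains."""
        actor: Actor
        data: Data
        environment: Environment
        process: Process
        technology: Technology

The point of a structure like this is not the code itself; it's that once every system is mapped the same way, cross-domain analysis becomes mechanical rather than heroic.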

Why ADEPT Matters: Cascading Risk Discovery

ADEPT's real power lies not in its elegance, though it is elegant, but in its analytical capabilities. When you map a system using ADEPT, risks cascade across domains in ways that siloed assessments miss.

Consider: You identify that an AI actor (A) is processing sensitive customer data (D) in a cloud environment (E) spanning three jurisdictions with conflicting data residency requirements. The process (P) assumes human oversight that has, in practice, been automated away. The technology stack (T) includes dependencies on third-party APIs with unknown security postures.

With PPTDF, you'd likely assess "Technology" separately from "People" separately from "Data," missing the compound vulnerability that exists at their intersection. With ADEPT, the interconnections are visible from the start. You can see that the Actor's autonomy, combined with the jurisdictional complexity of the Environment and the sensitivity of the Data, creates a cascading compliance and security exposure that no single-domain assessment would catch.
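
Building on the illustrative AdeptSystem sketch above, cascading risk discovery might look like the following. The rules and thresholds are invented for demonstration, not a prescribed ADEPT rule set:

    def cascading_risks(system: AdeptSystem) -> list[str]:
        """Flag compound risks visible only at the intersection of domains."""
        findings = []

        # A + D + E: an autonomous actor handling sensitive data across
        # multiple regulatory regimes is a compound exposure, even when
        # each domain looks acceptable in isolation.
        if (system.actor.autonomous
                and system.data.sensitivity == "PII"
                and len(system.environment.regulatory) > 1):
            findings.append(
                "Autonomous actor processes PII across "
                f"{len(system.environment.regulatory)} regulatory regimes; "
                "conflicting residency obligations likely."
            )

        # A + P: autonomy without oversight breaks the accountability chain.
        if system.actor.autonomous and not system.process.human_oversight:
            findings.append(
                "Autonomous decisions with no human oversight in the process."
            )

        # T: unvetted third-party dependencies widen the attack surface of
        # everything the other four domains describe.
        if system.technology.third_party_dependencies:
            findings.append(
                f"{len(system.technology.third_party_dependencies)} third-party "
                "dependencies with unverified security postures."
            )

        return findings

Run against the scenario above, a single pass returns all three findings together, which is precisely the compound view a siloed PPTDF assessment never assembles.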

This is not theoretical. This is the difference between checkbox compliance and genuine risk management. This is the difference between passing an audit and actually being secure.

The CIA-to-CIAAN Evolution: Security Properties for the AI Era

While we're updating acronyms, let's talk about CIA: Confidentiality, Integrity, and Availability. For generations, these three properties have defined information security objectives. They're the foundation of ISO 27001, NIST frameworks, and countless security programs worldwide.

And they're incomplete.

CIA made sense when information security was primarily about protecting data at rest and in transit from unauthorized access, modification, or denial-of-service attacks. But in a world where AI systems generate synthetic content, where deepfakes are trivial to create, where automated decision-making affects billions of people, and where digital interactions carry legal and financial consequences, three properties aren't enough.

We need CIAAN:

  • Confidentiality / Privacy

  • Integrity

  • Availability

  • Authenticity

  • Non-repudiation

Why the Additions Matter

Authenticity addresses a threat landscape that barely existed when CIA was formulated. In 2026, you must be able to verify that content, transactions, and communications are genuinely what they purport to be. Is that call actually from your CEO, or is it a deepfake voice clone? Is this contract signature authentic, or was it generated by AI? Was this decision actually made by the authorized system, or has an adversary injected malicious instructions?

Traditional integrity controls tell you whether data has been modified. Authenticity tells you whether it's real in the first place. These are distinct properties requiring distinct controls.
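
The distinction shows up directly in code. In the sketch below, built only on Python's standard library, a plain hash detects modification (integrity), while a keyed HMAC also ties the message to a holder of the shared key (a basic form of authenticity). Production systems would use digital signatures and PKI rather than a hard-coded shared secret; this is illustrative only:

    import hashlib
    import hmac

    SECRET_KEY = b"shared-secret"  # illustrative; real systems use managed keys

    def integrity_digest(message: bytes) -> str:
        # Detects modification, but anyone can compute it: it proves
        # nothing about who produced the message.
        return hashlib.sha256(message).hexdigest()

    def authenticity_tag(message: bytes) -> str:
        # Keyed digest: only a holder of SECRET_KEY can produce a valid
        # tag, so a matching tag speaks to origin, not just content.
        return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

    def verify(message: bytes, tag: str) -> bool:
        # Constant-time comparison avoids timing side channels.
        return hmac.compare_digest(authenticity_tag(message), tag)

    original = b"Wire the funds to account 12345. -- CEO"
    forged = b"Wire the funds to account 99999. -- CEO"

    tag = authenticity_tag(original)
    print(verify(original, tag))     # True: intact AND origin-verified
    print(verify(forged, tag))       # False: fails the authenticity check
    print(integrity_digest(forged))  # a forger can still compute a valid
                                     # integrity hash for their own message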

Non-repudiation becomes essential when automated systems execute high-stakes decisions. When an AI system denies someone credit, terminates their employment, or flags them for law enforcement attention, there must be an irrefutable record of what happened, when, and under what authority. The system's operator cannot be allowed to claim "the AI did it" without being held accountable. The individual affected cannot be left without recourse.

Non-repudiation ensures that actions can be attributed, that decisions can be audited, and that accountability remains intact even when humans aren't directly in the loop.
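
One common building block for this is a tamper-evident, append-only log in which every record is chained to the hash of the record before it. The sketch below shows only the chaining idea; genuine non-repudiation additionally requires digital signatures binding each record to a specific key holder, plus trusted timestamping. The actor and policy names are invented:

    import hashlib
    import json
    import time

    def append_record(log: list, actor: str, action: str, authority: str) -> dict:
        # Chain each record to its predecessor: altering any earlier record
        # later invalidates every hash that follows it.
        prev_hash = log[-1]["hash"] if log else "0" * 64
        record = {
            "timestamp": time.time(),
            "actor": actor,          # who or what acted, human or synthetic
            "action": action,
            "authority": authority,  # under what delegated authority
            "prev_hash": prev_hash,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        log.append(record)
        return record

    def chain_intact(log: list) -> bool:
        # Recompute every hash; False means history was rewritten.
        prev_hash = "0" * 64
        for record in log:
            body = {k: v for k, v in record.items() if k != "hash"}
            if body["prev_hash"] != prev_hash:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != record["hash"]:
                return False
            prev_hash = record["hash"]
        return True

    log = []
    append_record(log, "credit-model-v7", "deny_application:4411", "policy CR-19")
    append_record(log, "review-bot", "flag_for_review:4411", "policy CR-22")
    print(chain_intact(log))   # True
    log[0]["action"] = "approve_application:4411"  # try to rewrite history
    print(chain_intact(log))   # False: the chain exposes the tampering

"The AI did it" stops being a defense when every action is chained, attributable, and verifiable.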

Confidentiality is now explicitly expanded to include Privacy because, in the regulatory environment we now inhabit, protecting data from unauthorized access (confidentiality) is necessary but insufficient. Privacy requires considering purpose limitation, data minimization, consent, individual rights, and jurisdictional obligations. These are related to but distinct from traditional confidentiality controls.

CIA treats these as afterthoughts. CIAAN makes them core security properties.
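
To see the difference in practice, consider purpose limitation: a fully authorized, fully encrypted access can still violate privacy if the data is used for a purpose the subject never consented to. A minimal illustrative check, with invented subject and purpose names:

    # Access control alone (confidentiality) would pass both requests
    # below; the purpose-limitation check does not.
    CONSENTED_PURPOSES = {
        "customer-4411": {"fraud_detection", "service_delivery"},
    }

    def may_process(subject_id: str, purpose: str) -> bool:
        # Allow processing only for purposes the data subject consented to.
        return purpose in CONSENTED_PURPOSES.get(subject_id, set())

    print(may_process("customer-4411", "fraud_detection"))      # True
    print(may_process("customer-4411", "marketing_analytics"))  # False: same data,
                                                                # same authorized
                                                                # caller, different
                                                                # purpose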

The Standards Problem: Perpetually Behind the Curve

Here's where we need to be honest about an uncomfortable reality: most documented standards are out of date the moment they're published.

ISO 27001:2022 is a magnificent achievement representing years of expert consensus. It's also already behind the curve on AI governance. ISO/IEC 42001, published in 2023 specifically to address AI management systems, is playing catch-up with technology that's evolving monthly; worse, it's not expressly integrated with 27001. NIST's AI Risk Management Framework offers valuable guidance but struggles to keep pace with the rapid emergence of new AI capabilities and risks.

This isn't a criticism of the standards bodies or the brilliant people who contribute to these frameworks. It's a structural problem. The peer-review and consensus-building processes that give standards their legitimacy also make them slow to adapt. By the time a standard navigates draft reviews, public comment periods, committee deliberations, and final publication, the technology landscape has shifted.

The result? We end up with frameworks that reflect yesterday's understanding of yesterday's challenges, applied to tomorrow's systems.

The Perpetuation Engine: Self-Interest and Entrenchment

But it's worse than mere slowness. The "behind the curve" problem is actively perpetuated by forces that resist change:

Economic self-interest: Consulting firms have built practices around existing frameworks. Certification bodies have accredited auditors against specific standards. Training companies have invested in courseware that uses current acronyms. Change threatens revenue streams. It's easier and more profitable to keep teaching PPTDF and CIA than to acknowledge their limitations and develop new approaches.

Intellectual entrenchment: Individuals and organizations build reputations around specific frameworks. If you've written the definitive guide to ISO 27001 or taught PPTDF to thousands of professionals, admitting that the model is fundamentally inadequate feels like admitting professional irrelevance. Cognitive dissonance is powerful; it's easier to defend what you know than to learn what you don't.

Institutional inertia: Large organizations have codified existing frameworks into policies, procedures, and management systems. Changing requires more than updating a few documents; it requires retraining staff, revising controls, and potentially failing audits during the transition. The switching costs are real, and risk-averse organizations delay change until absolutely forced.

The peer-review paradox: The very processes meant to ensure quality can also enforce conformity. When standards are developed through consensus among established experts, radical departures from existing models face skepticism. Novel approaches must prove themselves against frameworks that are "proven" (even if inadequate). The burden of proof falls on the new, not the old.

The result is a system that rewards incremental updates to familiar frameworks rather than fundamental reimagining. We get ISO 27001:2022 instead of ISO 27001:2026-AI-Edition. We get addenda and supplementary guidance rather than wholesale replacement. We get frameworks designed for yesterday's world applied to today's systems, leaving us inadequately prepared for tomorrow's challenges.

A Note on AI-Assisted Authorship

In the spirit of transparency, and because this article is about honesty in our field, let's acknowledge something: AI systems were used in the creation of this piece.

Specifically, AI contributed to:

  • Ideation: Exploring different angles on the PPTDF vs. ADEPT comparison

  • Terminology research: Verifying definitions and historical context for various frameworks

  • Framework critique: Analyzing weaknesses in established models

  • Comparative analysis: Identifying specific gaps and advantages between approaches

  • Document drafting: Generating initial prose that was then heavily edited

What AI did not do:

  • Originate the core argument: The thesis that PPTDF and CIA are inadequate came from human analysis

  • Validate the concepts: Human experts assessed whether claims were supportable

  • Make final decisions: Every assertion, example, and recommendation was reviewed and approved by human authors

  • Take responsibility: Humans take full accountability for any errors, omissions, or controversial positions

This hybrid approach, human direction with AI augmentation, is itself an example of the Actor model we're advocating. The writing process involved human actors (providing expertise, judgment, and accountability) and synthetic actors (providing research assistance, draft generation, and stylistic suggestions) collaborating within defined processes and governance frameworks.

We mention this not to diminish the work but to demonstrate it: AI is already changing how professional content is created. Pretending otherwise doesn't make us more authentic; it makes us dishonest. The question isn't whether AI is involved; it's whether humans maintain meaningful control, oversight, and accountability.

In this case, they do. We do. I do.

The Path Forward: Adoption Over Entrenchment

So what do we do with this knowledge?

First, we acknowledge reality. PPTDF and CIA served their time. They were valuable frameworks for the challenges they were designed to address. But the challenges have changed. Clinging to inadequate models out of comfort or familiarity is professional malpractice.

Second, we adopt better frameworks. ADEPT provides a coherent way to think about systems in which AI acts, environments span jurisdictions, and complexity is the norm. CIAAN acknowledges that security properties must evolve alongside threat landscapes and technology capabilities. And they are meant to be replaced as the world progresses.

Third, we accept that standards will always lag. This doesn't mean we abandon standardization; standards provide essential baselines, common language, and audit frameworks. But we must supplement formal standards with adaptive thinking, continuous learning, and a willingness to go beyond minimum compliance when the situation demands it.

Fourth, we resist the forces of entrenchment. When someone profits from teaching outdated models, when institutional inertia resists change, when peer review enforces conformity, we push back, not with hostility but with evidence. We demonstrate that ADEPT reveals risks that PPTDF misses. We show that CIAAN-based controls address threats that CIA cannot handle. We prove that change isn't theoretical; it's necessary.

Fifth, we take responsibility. Whether we're using AI to draft articles, deploying AI systems to make business decisions, or simply navigating a world increasingly shaped by synthetic actors, we remain accountable. The tools change; the acronyms change. The accountability doesn't.

Conclusion: Letting Go and Moving Forward

There's a certain comfort in familiar acronyms. They remind us of the training we took, the certifications we earned, and the frameworks we mastered. They signal membership in professional communities. They represent shared language and common understanding.

But comfort isn't the same as correctness. And shared language built around inadequate concepts creates shared blindness to emerging risks.

The world has changed. AI systems are actors with agency, not just tools in human hands. Data flows across boundaries in ways that defy simple classification. Environments are multi-jurisdictional, dynamic, and complex. Threats include not only unauthorized access and service denial but also authenticity challenges and accountability gaps.

We need frameworks that match this reality. ADEPT and CIAAN aren't perfect, no framework is, but they represent thinking adapted to the present and extensible to the future. They acknowledge complexity rather than papering over it. They create analytical capability rather than false comfort.

So here's the challenge: the next time you're assessing a system, designing controls, or preparing for an audit, try setting PPTDF aside. Map the system using ADEPT. Consider the actors involved: human, synthetic, hybrid, and natural. Examine what environments it operates across. Think about CIAAN properties, not just CIA. See what you discover.

Chances are, you'll find risks you were missing. Connections you weren't seeing. Vulnerabilities that live at intersections your old framework couldn't represent.

And then you'll understand why, for all our love of acronyms, some deserve to be retired with gratitude and replaced with frameworks better suited to the world we actually inhabit.

Because in security, governance, and compliance, being right matters more than being comfortable.

And right now, comfortable is dangerous.

About the Authors (Actors)

This article was collaboratively developed using human expertise in information security, compliance, and AI governance, augmented by AI systems for research, analysis, and drafting. All concepts were directed, validated, and refined by human authors, who take full responsibility for the content and conclusions presented. The article itself is released as open source.

Human Actors - Peter vR Sternkopf & Robert N Wickstrom

For more information about ADEPT methodology and AI-era assurance frameworks, visit imamirai.ai.

Footnote on Acronym Pride

If ADEPT and CIAAN catch on, if they become the frameworks organizations use to map complexity and govern AI systems, will someone claim authorship and dine out on it for years? Probably. And that's fine. Because, unlike PPTDF and CIA, which calcified into dogma, ADEPT and CIAAN are designed to evolve. They're meant to be starting points, not ending points.

The goal isn't to create new sacred cows. It's to create frameworks flexible enough to adapt as technology, threats, and our understanding continue to advance.

That's the difference between the acronyms we love for nostalgia and the frameworks we adopt out of necessity.

Choose necessity.