Command Zero · Research Paper · April 2026

The Recomposition of Security Work: Roles, Expertise, and the Agentic SOC

How the distributed, agent-powered Security Operations Center transforms roles in the security profession, and what happens when that transformation is navigated well or badly.

Authors Dean de Beer, CTO & Cofounder
Organization Command Zero · commandzero.ai
Version 1.0 | Final
Date April 2026
Contents
  • 01 Executive Summary
  • 02 The Pipeline Was Already Broken
  • The Three Waves
  • What Tier 1 Was Actually Building
  • The MSSP Counter-Argument and Its Limits
  • 03 Two Futures: Worst Case and Productive Transformation
  • The Worst Case: Expertise Extinction
  • The Productive Transformation
  • 04 The Four Phases of SOC Evolution
  • Timeline 1: Dual-Scenario — end of Section 3
  • Timeline 2: Role Evolution View — Section 6/7 bridge
  • Timeline 3: Phase-by-Phase Detail — Section 4
  • 05 Existing Roles: What Changes and What Disappears
  • Tier 1 Analyst
  • Tier 2 Analyst
  • Tier 3 / Senior Analyst
  • Detection Engineer
  • SOC Manager
  • SOC Director / CISO
  • 06 Evolved Roles: The Same Job, Fundamentally Changed
  • 07 New Roles: No Predecessor, Net-New Demand
  • Agent Operations Specialist
  • Security Ontology Engineer
  • Adversarial Scenario Designer
  • Cross-Domain Interpreter
  • Agent Trust & Boundary Engineer
  • Ethical Oversight Specialist
  • 08 The Deskilling Problem: Compliance and Audit
  • 09 The Adversarial Asymmetry Risk
  • 10 Design Principles for the Transition
  • 11 Cross-Profession Implications
  • Software and Detection Engineering
  • Legal, Compliance, and HR
  • IT Operations and Product Management
  • Executive Leadership
  • 12 Universal Skills and the Skills in Decline
  • Skills Rising Across All Roles
  • Skills Declining in Value
  • The 30% Diagnostic
  • 13 Formation Pathways: A Practical Map
  • Existing Practitioners: Transition Paths
  • New Entrants: Agent-Era Formation
  • Organizational Investment Requirements
  • 14 Conclusion
01

Executive Summary

Security Operations is undergoing a transformation unlike any it has faced before. The emergence of AI agents, systems that can reason autonomously, conduct investigations, correlate evidence, and coordinate responses across organizational boundaries, is dissolving the traditional, centralized SOC into a distributed security fabric embedded throughout the enterprise and beyond. This is not an incremental upgrade. It is a paradigm shift that fundamentally reshapes every security role, from entry-level analyst to CISO.

This paper argues two positions simultaneously. The transformation carries genuine promise: security operations that scale beyond human throughput constraints, analyst work concentrated on genuinely strategic challenges, and, for the first time, expert-equivalent security coverage accessible to organizations that previously could not afford it. But the same transformation carries a structural risk that the industry is moving toward without adequate awareness: the systematic erosion of the expertise pipeline that produces senior security practitioners, compounded by a compliance and governance framework entirely unprepared for the autonomous systems it will be asked to certify.

The outcome is not determined by the technology. It is determined by the organizational and policy choices made in the next one to three years, while the pipeline is still intact and the transformation is still in its early phases. Organizations that treat AI deployment as a design problem, intentionally preserving the conditions that develop expertise while eliminating the work that never should have been a priority in the first place, will emerge with security programs more capable than anything the centralized SOC could achieve. Organizations that treat AI deployment purely as a headcount reduction exercise will discover, five years from now, that they have extensive automation and nobody who knows what to do when it fails.

2 · Waves of analyst pipeline erosion before AI arrived
4 · Phases of SOC evolution from augmented to autonomous mesh
6+ · Net-new security roles with no direct predecessor

The paper proceeds from the historical context of talent pipeline damage predating AI, through an analysis of the worst-case and positive-case scenarios, and into detailed treatment of every significant role transformation, both evolutionary and entirely new. It closes with concrete design principles for security leaders navigating the transition now.

These two positions represent my view of what could happen. The outcome need not play out as purely one or the other; it may instead be a combination of both, depending on the organization, how AI evolves, the pace of adoption, and a myriad of other factors we have yet to account for.

02

The Pipeline Was Already Broken

Most security leaders approaching AI deployment frame it as a novel disruption: a new technology arriving to reshape a stable profession. The frame is incorrect. The analyst pipeline was already damaged before a single prompt was typed into a language model. Understanding why requires acknowledging two decades of procurement decisions whose downstream consequences the industry has consistently failed to account for. Perhaps a slightly extreme statement, but it will serve for now.

The Three Waves

The first wave was MSSP. From approximately 2010 onward, Managed Security Service Providers offered organizations 24/7 coverage at a fraction of the cost of in-house staffing. What organizations received was pattern-matching against known signatures, ticket closure at SLA pace, and the transfer of their Tier 1 analyst function to a vendor whose economic model has always been optimized for throughput, not development.

The second wave was MDR. Managed Detection and Response matured the conversation, with better tooling and legitimate threat hunting capability. But the economics were structurally identical to MSSP. Organizations traded internal headcount for vendor coverage. The Tier 1 seat, the one where analysts learned to see, to intuit threats, and to gain experience, disappeared from organizational charts at an accelerating pace from 2018 to 2022.

The third wave is AI. Unlike the first two, AI has the technical capability to actually perform everything Tier 1 does: faster, more accurately, and more continuously than the overworked junior analyst. That technical superiority is precisely what makes the design decision more consequential, not less.

What Tier 1 Was Actually Building

The Tier 1 analyst role was never primarily valuable for the work it produced. The alerts it triaged, the tickets it closed, the enrichment it performed: all of it had value, but none of it was irreplaceable. What was irreplaceable was the developmental function the role served.

The formation mechanism is not complicated to describe, even if it is slow to produce. Security analysts learn by doing, by accumulating exposure to real threat data at volume, making judgment calls, seeing which ones were right, and adjusting. The intuition that eventually operates at a subconscious level is built from the experiences of thousands of routine cases, a fraction of them formative. There is no shortcut and no substitute.

Senior security analysts make cognitive leaps during an investigation: pattern recognition and earned knowledge allow them to make connections between events and activities that are not immediately obvious. That knowledge develops only through experience. It cannot be compressed into documentation, transferred through training programs, or inherited from a vendor's playbook.

When you remove the Tier 1 function, whether through MSSP, MDR, or AI, you do not simply move the work. You move the developmental stage. You eliminate the conditions under which expertise forms, and you eventually discover there is no internal pipeline producing the next generation of senior practitioners.

The MSSP Counter-Argument and Its Limits

A reasonable objection deserves direct engagement: if Tier 1 work moved to MSSPs, did the analysts doing that work not develop expertise there? Is the pipeline truly broken, or merely relocated?

The counter-argument is partially correct. MSSP analysts gain genuine exposure, in some respects more than their in-house counterparts, because they see broader attack surfaces across dozens of customer environments. Some MSSP and MDR alumni do develop real expertise and transition into enterprise senior roles. The talent did not vanish from the industry entirely.

But the counter-argument breaks down across four structural dimensions. First, the feedback loop is severed. In-house Tier 1 analysts escalate to Tier 2 colleagues working the same environment and often close the loop. At an MSSP, escalation crosses team and geographic boundaries, and the Tier 1 analyst rarely learns what happened to the case they handed off. The volume of exposure exists; the corrective and educational feedback that converts exposure into expertise seldom does.

Second, breadth substitutes poorly for depth. What builds senior security judgment is understanding one environment deeply enough that anomaly detection becomes intuitive. MSSP analysts see the surface of many environments and the interior of none. Third, much MSSP Tier 1 work was delivered from lower-cost geographic markets, meaning the analysts who did develop skills were feeding a different talent market, not the enterprise senior analyst pipeline of the organizations that outsourced the function. And fourth, the MSSP and MDR economic model actively selected against the developmental case types. The cases that teach most are exactly the cases SLA pressure drives analysts to close or escalate fastest.

The pipeline fragmented rather than vanished. But from the perspective of the enterprise that outsourced, the developmental benefit moved to the vendor and the expertise gap remained with the organization.

03

Two Futures: Worst Case and Productive Transformation

These are not merely different framings of the same outcome. They are genuinely incompatible trajectories, driven by different organizational choices, producing structurally different security postures a decade from now. The following presents both with full internal logic, making the strongest possible case for each position before examining where they genuinely diverge.

Position 1: The Worst Case, Expertise Extinction

Worst Case Thesis

The security industry is executing a structural bet it has not explicitly made: that the judgment required to govern autonomous security systems can exist without the experiential pipeline that produces it. That bet is wrong, and the consequences compound over a 10–20 year horizon in ways that will not be visible until they are irreversible.

The Pipeline Collapses First

Tier 1 automation is already underway. Within three to five years, the entry-level analyst role is functionally absorbed by agents across the industry. Organizations reduce headcount at the base of the pyramid because the economics demand it, rational at the individual organization level and catastrophic at the industry level. The T1 role, already thinned by MSSP and MDR, disappears. The cohort that would have become T2 analysts in 2028–2030 does not exist. T3 specialists a decade from now are people who were never meaningfully T2. The governance tier, including the agent architects, oversight specialists, and strategic threat analysts the positive scenario requires, will be staffed by people who know how agents work but not what agents are supposed to find.

The Illusion of Competence

Autonomous agents produce outputs that look like expert analysis: confidence scores, reasoning chains, evidence summaries, and recommended actions, all presented in language indistinguishable from what a skilled analyst produces. The humans in the loop will not be equipped to recognize failure when it occurs. An agent misclassifying a new, unique intrusion technique as benign network noise, with 78% confidence and a coherent-looking evidence chain, will be reviewed by an analyst whose primary experience is accepting or rejecting agent summaries, not reconstructing investigations from raw log data. The failure passes through. This is not the elimination of a task. It is the elimination of the knowledge and systems that detect agent error.

Compliance Becomes Theater

Regulatory frameworks including SOC 2, ISO 27001, PCI-DSS, and HIPAA were written for human-operated environments. They assume human attestation of human-verifiable controls. As autonomous agents generate the evidence that autonomous agents are audited against, compliance certification detaches from security reality. Auditors who cannot analyze the underlying security data, because they never developed that capacity, sign off on agent-generated audit trails they cannot independently verify. Organizations achieve full compliance certification while operating security programs whose autonomous components have systematic blind spots no human in the organization can identify.

Vendor Concentration Risk

Organizations that automate their expertise become dependent on the vendors whose agents replaced it. When those agents fail at scale, and they will, against new and unique attack patterns, there is no internal capacity to recognize the failure, diagnose it, or remediate it independently. The vendor controls the security posture. This is not a speculative risk. It is the logical destination of current procurement trajectories, visible in the next five to seven years.

Worst case indicators
  • T1 eliminated as a headcount line item, not redesigned
  • AI deployed for cost reduction with no formation replacement built
  • Agent outputs accepted without structured analyst challenge
  • Compliance achieved via agent-generated evidence, unverified by humans
  • Vendor dependency becomes structural for core security functions
  • Senior analyst bench thins as pipeline cohorts fail to materialize
  • Novel threat detection degrades while dashboard metrics improve
Positive indicators
  • T1 redesigned around agent collaboration, not eliminated
  • Efficiency savings fund formation investment in parallel
  • Agent reasoning paths exposed; analysts trained to challenge them
  • Continuous compliance evidence requires independent verification sampling
  • Internal capability maintained to evaluate vendor agent systems
  • New formation pathways (QA, ontology, scenario design) replace old ones
  • Human expertise concentrated at genuinely irreducible work

Position 2: The Productive Transformation

Positive Thesis

The transformation is not a threat to security expertise. It is the first technology in the industry's history with the potential to make security expertise the primary activity of security professionals, rather than a fraction of their time squeezed around volume work they were never able to handle at that scale.

The Volume Problem Gets Solved

The fundamental failure of the current SOC is not human inadequacy. It is that humans were asked to operate at machine scale without machine assistance. Tens of thousands of alerts per day processed by individuals whose cognitive bandwidth peaks at a few hundred meaningful analyses per shift. Analyst burnout, alert fatigue, and the skills shortage are symptoms of the same structural mismatch. Agents solve this, not by replacing human judgment, but by eliminating the cognitive tax of volume processing. An analyst who previously spent 70% of their shift on routine enrichment inverts that ratio in Phase 2. The same expertise, producing dramatically more investigative value.

Education Gets Embedded in the Work

Agents are not just doing the work; in well-designed systems they make the work visible in ways that teach. When an agent surfaces an enriched alert summary with its reasoning chain, evidence links, and confidence justification, the reviewing analyst is seeing a structured model of how that investigation should be conducted. The agent becomes a trainer, not a replacement. Investigation interfaces that require analysts to confirm or challenge agent findings with explanation, and that track where analysts override agents and learn from those overrides, are pedagogical tools embedded in operational workflows. The formation that previously required two years of high-volume manual triage can be restructured into supervised agent collaboration that produces equivalent intuition through a different mechanism.

New Entry Points Replace Old Ones

The pessimistic position assumes the only path into security expertise is through T1 alert triage. The agent era creates its own formation pathways. Agent QA and validation roles require exposure to security data, investigation logic, and adversarial thinking. Security ontology development requires deep engagement with attack taxonomy and detection logic. Adversarial scenario design requires the red team thinking that develops adversarial imagination. These are different formation paths producing different but complementary competencies. The security professional who has spent two years designing agent stress-test scenarios has developed adversarial intuition through a different mechanism than the analyst who triaged 50,000 alerts. Neither path is inherently superior.

The Bet Worth Making

The difference between the two outcomes is not the technology. The technology is identical in both scenarios. The difference is organizational and policy choices made in the next one to three years, while the transformation is still in Phase 1–2 and the pipeline is still intact. Organizations that answer the design question deliberately, preserving what makes human expertise irreplaceable while eliminating what was always a poor use of it, will emerge with security programs that are more capable, more scalable, and more sustainable than anything the centralized human-only SOC could achieve.

The following timeline visualizes both trajectories simultaneously, showing the divergence that compounds from Phase 1 onward.

Timeline 1: Dual Scenario, Where the Paths Diverge

[Interactive dual-trajectory timeline: the worst-case and productive-transformation paths plotted across Phase 1 — Augmented (2024–2026), Phase 2 — Collaborative (2026–2027), Phase 3 — Distributed mesh (2027–2030), and Phase 4 — Autonomous (2030+), with the outcome divergence compounding from Phase 1 onward.]
04

The Four Phases of SOC Evolution

The transformation from traditional centralized SOC to distributed autonomous security mesh occurs through four overlapping phases. These phases are not discrete; organizations will operate across multiple phases simultaneously, with different functions and business units at different stages of maturity. The trajectory shifts analysts from operators to orchestrators, introduces roles such as Agent Operations Specialists and Security Ontology Engineers, and ultimately dissolves the SOC boundary into a federated security fabric embedded throughout the enterprise.

Phase Reference Summary

Phase 1 — Augmented (2024–2026)
  Operating model: Analysts assisted by task agents
  Analyst role: Direct investigations with agent support
  Agent role: Automate enrichment and basic analysis

Phase 2 — Collaborative (2026–2027)
  Operating model: Analysts supervise agent teams
  Analyst role: Orchestrators and adjudicators
  Agent role: Autonomous evidence collection, analysis, reporting

Phase 3 — Distributed (2027–2030)
  Operating model: Security mesh across the enterprise
  Analyst role: Strategy, governance, edge-case resolution
  Agent role: Front-line detection, self-service security

Phase 4 — Autonomous (2030+)
  Operating model: Autonomous security mesh
  Analyst role: Strategic leadership and ethical governance
  Agent role: Predictive defense, collective cross-org response

The following timeline presents the positive trajectory in phase-by-phase detail, organized by outcome category.

Timeline 3: Positive Transformation, Phase-by-Phase Events

Phase 1 — Augmented analysis (2024–26)
  • Pipeline & formation: Reasoning chains as formation tools. Training embedded inside every investigation.
  • Operational outcome: T1 augmented, not eliminated. Entry roles redesigned around agent collaboration.
  • Skills & roles: New entry pathway investment begins. Agent QA, scenario design, and ontology work as formation paths.

Phase 2 — Collaborative operations (2026–27)
  • Skills & roles: New specialist roles reach maturity. AgentOps, Ontology Engineer, and Investigation Coordinator established.
  • Operational outcome: Senior capacity freed for high-value work. 30–40% of senior analyst time recovered from overhead.
  • Defense posture: Continuous compliance monitoring takes hold. Real-time posture replaces the periodic certification cycle.

Phase 3 — Distributed security mesh (2027–30)
  • Skills & roles: Expertise at the irreducible tier. Analysts do only the work agents genuinely cannot.
  • Defense posture: SME organizations gain expert-equivalent coverage. The security capability gap between large and small narrows.
  • Pipeline & formation: New pathway cohort reaches senior level. The first agent-era analysts prove the formation model works.

Phase 4 — Autonomous security mesh (2030+)
  • Operational outcome: Security expertise becomes the primary activity. Strategic work replaces throughput as the job definition.
  • Defense posture: Collective defense networks become operational. Industry-wide detection faster than any single org.
  • Pipeline & formation: Autonomous systems governed by domain experts. Formation investment from Phase 1–2 pays its return.
05

Existing Roles: What Changes and What Disappears

Every role in the security organization is affected by the agentic transition. The nature and degree of change varies significantly by tier and function. Some roles are substantially preserved with expanded scope. Others are so fundamentally altered that the original title becomes misleading. A few functions disappear entirely as autonomous agents absorb them, though the organizational need they served, including investigation, analysis, and judgment, does not disappear; it relocates upward in complexity.

Tier 1 Analyst

Current function: Alert triage, initial enrichment, queue processing, escalation, basic correlation, documentation.

What disappears: Manual alert correlation, copy-paste enrichment, tier-based queue routing, template-based reporting, rule-based escalation decisions. These represent the majority of current T1 day-to-day work and are absorbed by agents in Phase 1–2.

What survives and transforms: The T1 role does not simply vanish; it redesigns. Instead of processing individual alerts, T1-equivalent analysts supervise and validate agent-generated investigation summaries. They evaluate confidence scores, challenge reasoning steps, flag anomalies the agent deprioritized, and develop the habit of critical agent evaluation. Override rate tracking feeds agent retraining. The volume is lower; the required analytical depth per case is higher.
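The override-tracking loop described above can be sketched as a minimal data model. This is an illustrative sketch only: the record fields, the `OverrideTracker` class, and the 15% review threshold are assumptions for exposition, not a prescribed schema or product behavior.

```python
from dataclasses import dataclass, field

@dataclass
class ValidationRecord:
    """One analyst review of an agent-generated investigation summary."""
    case_id: str
    agent_confidence: float    # agent's self-reported confidence, 0.0 to 1.0
    analyst_overrode: bool     # True if the analyst rejected or revised the finding
    override_reason: str = ""  # structured justification, fed back into retraining

@dataclass
class OverrideTracker:
    """Aggregates review outcomes so drift in agent quality becomes measurable."""
    records: list = field(default_factory=list)

    def log(self, record: ValidationRecord) -> None:
        self.records.append(record)

    def override_rate(self) -> float:
        if not self.records:
            return 0.0
        return sum(r.analyst_overrode for r in self.records) / len(self.records)

    def needs_review(self, threshold: float = 0.15) -> bool:
        # Hypothetical policy: a sustained override rate above 15% flags the
        # agent's outputs for retraining review.
        return self.override_rate() > threshold
```

The point of the sketch is the feedback loop itself: overrides are logged with reasons, and the aggregate rate, not any single disagreement, decides when the agent's behavior is revisited.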

Critical risk: In the worst-case trajectory, T1 is eliminated rather than redesigned. When that happens, the entry-level formation stage disappears. The T2 cohort that emerges five years later will have supervised agent outputs but never conducted a manual investigation. Their ability to recognize agent failure is structurally limited from the start.

Tier 2 Analyst

Current function: Detailed investigation, cross-source correlation, escalation decisions, incident documentation, analyst mentoring.

What changes: T2 transitions from conducting investigations to orchestrating them. In Phase 2, analysts direct agent teams rather than perform evidence collection themselves. Multi-agent investigation management becomes central: understanding which agents are working a case, what they have found, and where to redirect focus. Documentation shifts from manual creation to review and attestation of agent-generated reports.

New skills required: Multi-agent workflow management; evidence chain evaluation for legal and compliance sufficiency; hypothesis testing methodology, which involves formulating competing theories of attack then tasking agents to validate or invalidate each; confidence score interpretation; adversarial scenario intuition, specifically knowing when an agent is likely to miss something because it falls outside trained patterns.

What remains irreplaceable: The judgment about when to override an agent becomes as important as the underlying security knowledge. This requires the same foundational formation that manual investigation produced, which is why the T1 pipeline design problem directly determines T2 capability a generation later.
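The hypothesis-testing methodology noted above, formulating competing theories of attack and tasking agents to validate or invalidate each, can be sketched as a small adjudication routine. The `Hypothesis` structure, the status vocabulary, and the adjudication rule are hypothetical illustrations, not part of any real investigation platform.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One competing theory of an attack, with the agent tasks that test it."""
    theory: str
    evidence_queries: list  # tasks dispatched to investigation agents
    status: str = "open"    # open | supported | refuted | inconclusive

def adjudicate(h: Hypothesis, findings: dict) -> str:
    """Analyst-style adjudication: a theory is refuted by any disconfirming
    result, supported only when every query confirms, else inconclusive."""
    results = [findings.get(q) for q in h.evidence_queries]
    if any(r == "refuted" for r in results):
        h.status = "refuted"
    elif all(r == "confirmed" for r in results):
        h.status = "supported"
    else:
        h.status = "inconclusive"
    return h.status
```

The asymmetry in the rule is deliberate: one disconfirming finding kills a theory, while missing evidence leaves it open rather than quietly supporting it, which is where analyst judgment re-enters the loop.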

Tier 3 / Senior Analyst

Current function: Complex incident management, threat hunting, tool development, detection engineering, mentoring T1/T2.

What changes: This is the tier with the highest survival probability and the most ambitious transformation. Threat hunting becomes partially agent design, translating hunt hypotheses into persistent autonomous discovery workflows. Detection engineering expands to include agent-compatible logic, designing detection scenarios agents can execute autonomously while preserving evidence chain integrity.

Highest survival probability because: The skills are substantive and non-routine. Adversarial creativity, threat hypothesis generation, and the ability to externalize tactical investigation knowledge into explicit agent architecture are genuinely hard to replace. These capabilities develop only through the formation pipeline, which is why their preservation is load-bearing for the entire positive trajectory.

Detection Engineer

Current function: Writing detection rules, tuning SIEM queries, managing alert logic, reducing false positives.

What changes: Detection logic must now be authored for autonomous execution. Agent-compatible detection scenarios carry additional requirements beyond SIEM queries: evidence preservation specifications, confidence thresholds, escalation trigger logic, and human-override conditions. The Detection Engineer becomes the bridge between threat knowledge and autonomous execution, translating security understanding into the structured reasoning frameworks that agents operate within.

New skills required: Agent workflow design; evidence chain specification; understanding of how agents fail and what detection logic fails gracefully versus catastrophically; MCP tool-chain awareness; knowledge of what data sources agents can and cannot reach.
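A minimal sketch of what such an agent-compatible detection scenario might look like. Every field name and the routing policy below are assumptions chosen for illustration; there is no established schema being quoted here.

```python
from dataclasses import dataclass

@dataclass
class DetectionScenario:
    """Illustrative shape of a detection scenario authored for autonomous execution."""
    name: str
    query: str                    # underlying detection logic (e.g. a SIEM query)
    evidence_to_preserve: list    # artifacts the agent must capture before acting
    confidence_threshold: float   # findings below this go to human review
    escalation_triggers: list     # conditions forcing an immediate human handoff
    allow_autonomous_close: bool  # whether the agent may close without a human

def route(s: DetectionScenario, confidence: float, conditions: list) -> str:
    """Graceful-failure routing: a trigger match or uncertainty always escalates."""
    if any(c in s.escalation_triggers for c in conditions):
        return "escalate_to_human"
    if confidence < s.confidence_threshold or not s.allow_autonomous_close:
        return "human_review"
    return "autonomous_close"
```

Note what is new relative to a traditional rule: the scenario carries its own evidence-preservation contract and its own failure behavior, so an uncertain or anomalous case degrades to human review rather than to a silent miss.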

SOC Manager

Current function: Queue management, team oversight, SLA accountability, escalation handling, resource allocation.

What disappears: Queue-based workload management, ticket routing, daily triage reviews. Agents handle routine case flow. The command-and-control instincts of traditional SOC management become actively counterproductive in an environment where the primary challenge is governance and orchestration, not throughput.

What transforms: Management shifts from managing analyst productivity to governing agent ecosystems and human-agent team performance. SLA accountability persists but metrics shift from mean-time-to-respond to detection precision, investigation throughput quality, and agent decision accuracy. Cross-functional liaison work expands significantly as security embeds into business units.

New skills required: Agent governance framework design; performance measurement for AI systems including hallucination rate evaluation and confidence calibration assessment; workflow architecture for human-agent collaboration; risk tolerance calibration, specifically defining the boundaries of autonomous action versus required human authorization.
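Risk tolerance calibration ultimately reduces to an explicit policy over action types. The sketch below assumes three hypothetical autonomy levels and an invented action catalogue; the specific assignments are examples of the kind of boundary an Agent Operations Lead would define, not recommendations.

```python
# Default-deny autonomy policy: any action not explicitly granted autonomy
# requires prior human approval. All entries are illustrative.
AUTONOMY_POLICY = {
    "enrich_alert":        "autonomous",       # read-only, fully reversible
    "quarantine_file":     "autonomous",       # reversible, low blast radius
    "isolate_endpoint":    "notify_then_act",  # act immediately, page the on-call
    "disable_user":        "approve_first",    # visible business impact
    "block_egress_subnet": "approve_first",    # broad blast radius
}

def authorization_for(action: str) -> str:
    # Unknown actions fall through to the most restrictive level.
    return AUTONOMY_POLICY.get(action, "approve_first")
```

The design choice that matters is the default: an action the policy has never seen should require human authorization, so the autonomy boundary fails closed rather than open.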

SOC Director / CISO

What changes: Security strategy expands from organizational to cross-organizational, governing participation in collective defense networks, inter-organizational agent collaboration, and industry-level security mesh governance. Vendor management shifts from tool procurement to agent infrastructure and protocol standards evaluation. Executive reporting migrates from activity metrics to security posture and business enablement.

New responsibilities: Designing the organizational architecture for distributed security capabilities; establishing ethical and legal frameworks for autonomous security decision-making; governing the accountability mechanisms that ensure autonomous systems remain auditable and compliant; managing the organizational transition in ways that preserve expertise while capturing efficiency gains.

06

Evolved Roles: The Same Job, Fundamentally Changed

The roles below represent significant transformations of existing security functions. The job title may persist, but the primary activity, required skills, and organizational context change substantially enough that a practitioner who does not adapt will find themselves structurally misaligned with what the role actually demands. A five-year role evolution timeline follows this section, showing where these evolved roles transition into the net-new roles described in Section 7.

T1 Analyst → Agent Validator (Evolved Role)

The T1 role shifts from manual alert triage to structured validation of agent-generated investigation summaries. The analyst evaluates confidence scores, challenges reasoning steps, flags anomalies the agent deprioritized, and surfaces edge cases that fall outside the agent's training distribution. Override rate is tracked systematically and feeds agent retraining cycles.

The formation value of this role depends entirely on interface design. If investigation platforms expose agent reasoning paths, not just outputs, analysts develop intuition through structured reasoning review. If platforms present only conclusions, the developmental value disappears and the role becomes a rubber stamp. This design choice is the single most important Phase 1 decision for pipeline preservation.

Core skills
Agent output critical evaluation · Confidence score interpretation · Reasoning chain review · Anomaly pattern recognition · MCP data source awareness · Override documentation

T1 analysts who develop strong agent evaluation instincts in Phase 1 are well positioned for Agent Operations Specialist roles in Phase 2. The key development investment is prompt literacy, confidence calibration, and the habit of structured override documentation.

T2 Analyst → Investigation Coordinator (Evolved Role)

Senior T2 analysts transition from conducting investigations to orchestrating them. This role directs agent teams, extends hypotheses, determines which findings warrant human deep-dive versus autonomous closure, and synthesizes multi-agent outputs into coherent incident pictures. The judgment about when to override an agent, and on what evidentiary basis, becomes as analytically demanding as the underlying investigation.

Evidence chain evaluation takes on new importance. Agent-collected evidence must be assessed not just for security relevance but for legal defensibility, compliance sufficiency, and chain-of-custody integrity. This requires analysts to understand the provenance of agent findings, not just their content.

Core skills
Multi-agent workflow management · Hypothesis testing methodology · Evidence chain evaluation · Incident synthesis · Agent team redirection · Legal evidence standards

Senior T2 analysts with strong investigation instincts and growing interest in agent system behavior. The transition is largely natural through Phase 2 deployment experience if the environment is well designed. T2 analysts who develop interest in the architectural layer have a longer but high-value path toward Agent Architect through deliberate cross-training in agent engineering fundamentals.

T3 / Senior Analyst → Agent Architect (Evolved Role)

Senior analysts with deep investigation expertise transition to designing the agent systems that perform those investigations. The role demands a rare combination: genuine security expertise plus the ability to externalize that expertise into autonomous system design. Investigation knowledge must be translated into explicit agent architectures, detection logic, memory system design, and reasoning frameworks that other analysts then supervise.

This is the pivotal role in the positive transformation scenario. The quality of agent systems across the industry through 2030 will be a direct function of how many T3-equivalent analysts made this transition successfully in Phase 1 to Phase 2. The role is the mechanism by which domain expertise is preserved in institutional form even as the operational workforce transforms.

Core skills
Agent engineering and design Prompt architecture Memory system design Tool orchestration (MCP) Trust validation logic Escalation framework design Detection scenario authoring

T3 analysts with 5+ years of investigation experience who develop systematic interest in how agent reasoning systems work. This transition should be actively managed by organizations in Phase 1–2, not left to individual initiative.

Detection Engineer
Agent Detection Designer
Evolved Role

Detection logic must be authored for autonomous execution by agents, not merely for human review in a SIEM interface. Agent-compatible detection scenarios carry requirements beyond traditional rules: evidence preservation specifications, confidence calibration thresholds, escalation trigger conditions, and graceful failure modes. The Detection Designer becomes the bridge between threat knowledge and autonomous capability, translating security understanding into the structured frameworks agents operate within.

Core skills
Agent workflow design Evidence chain specification Confidence calibration Failure mode analysis MCP tool-chain mapping Detection coverage auditing

SOC Manager
Agent Operations Lead
Evolved Role

The transition from managing analyst queues to governing agent ecosystems requires a fundamentally different operational model. The Agent Operations Lead oversees agent fleet health, deployment pipelines, drift detection, human-agent handoff integrity, and performance measurement across the autonomous layer. Success metrics shift from SLA closure rates to detection precision, investigation quality, and agent decision accuracy.

Cross-functional liaison work expands substantially as security embeds into business units. The role increasingly operates at the intersection of security operations, technology governance, and business relationship management, a combination the traditional SOC manager role rarely demanded.

Core skills
Agent governance frameworks AI performance measurement Workflow architecture Risk tolerance calibration Cross-functional liaison Escalation policy design

The following timeline maps specific role evolutions alongside operational capability shifts across the five-year transition window, showing where evolved roles appear and when net-new roles first emerge.

Timeline 2: Positive Transformation, Five-Year Role View (2025–2030)

Operational shifts
Evolved roles
New roles (no predecessor)
Operations & capability
Role evolution
P1
2025
Operations
Agent-assisted enrichment
Agents pre-populate every investigation
Operations
Reasoning-visible interfaces
Training embedded in the workflow
Evolved role
T1 Analyst
Agent Validator
Alert review becomes reasoning audit
Evolved role
Detection Engineer
Agent Detection Designer
Rules become agent-executable scenarios
→2
2026
Operations
MCP-enabled cross-tool correlation
Silos dissolve at the agent layer
Operations
Autonomous preliminary investigations
Known threat categories handled end-to-end
New role
Agent Operations Specialist
No predecessor — first appearance at scale
Evolved role
T2 Analyst
Investigation Coordinator
Analyst becomes orchestrator of agent teams
P2
2027
Operations
Domain-specialist agent teams
Malware, phishing, identity, cloud agents
Operations
Continuous compliance monitoring
Real-time posture replaces periodic audit
New role
Security Ontology Engineer
Knowledge frameworks that agents reason with
Evolved role
T3 / Senior Analyst
Agent Architect
Investigation expertise becomes system design
→3
2028
Operations
Security mesh begins forming
Agents embedded in every business unit
Operations
Continuous autonomous threat hunting
Hunt hypotheses execute without analyst time
New role
Adversarial Scenario Designer
Red team thinking applied to agent systems
New role
Trust Engineer
Governance and auditability for autonomous systems
P3
2027–30
Operations
Cross-org agent collaboration
Collective defense networks emerge
Operations
Expertise at the irreducible tier
Human work is genuinely strategic
New role
Cross-Domain Interpreter
Security findings translated across the business
Evolved role
SOC Manager / Director
Ethical Oversight Specialist
Governance of autonomous operations
07

New Roles: No Predecessor, Net-New Demand

The roles in this section have no meaningful predecessor in the current security organization. They emerge from the structural requirements of operating autonomous agent systems at scale, requirements that simply did not exist when security operations were entirely human-operated. The timing of their appearance in organizational charts tracks the phase evolution: Agent Operations Specialist is needed the moment Phase 1 agents go into production; Security Ontology Engineer becomes critical at Phase 2, when agent reasoning quality directly determines detection effectiveness; and Trust and Boundary Engineering becomes a regulatory necessity by Phase 3.

Agent Operations Specialist
New Role · Phase 1+

The operational reliability function for deployed agent systems. Monitors agent performance in production, manages deployment pipelines, detects agent drift and reasoning degradation, ensures human-agent handoff integrity, and maintains observability across the agent fleet. Sits structurally between security operations and engineering, demanding both operational instinct and systems-level technical thinking.

This is the first new role to appear at scale, needed as soon as Phase 1 agents operate in production, before the deeper knowledge engineering roles are required. Its absence in early deployments is one of the most common failure modes: organizations deploy agents and then lack systematic visibility into whether those agents are performing as expected.

Core skills
Agent monitoring and observability Deployment pipeline management Drift detection Handoff protocol design Performance metrics for AI systems Incident response for agent failures

Former T2 analysts with strong tool proficiency and systems thinking, or DevOps/SRE engineers who develop security domain knowledge. The role rewards practitioners who are comfortable operating at the boundary between technical systems and operational process.
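Drift detection, listed above as a core skill, can be made concrete with a deliberately minimal sketch: compare an agent's recent confidence distribution on routine cases against an established baseline. The function name, values, and tolerance are illustrative assumptions, not part of any framework described in this paper.

```python
# Minimal illustrative drift signal (names and threshold are assumptions):
# flag when mean agent confidence on routine cases shifts beyond a tolerance
# relative to a baseline window.
from statistics import mean

def confidence_drift(baseline: list[float], recent: list[float],
                     tolerance: float = 0.10) -> bool:
    """True when mean confidence has shifted more than `tolerance`.
    A production system would use a proper statistical test, not a point delta."""
    return abs(mean(recent) - mean(baseline)) > tolerance

# A sustained drop from ~0.90 to ~0.70 mean confidence is exactly the kind of
# reasoning degradation this role exists to catch before it reaches analysts.
assert confidence_drift([0.90, 0.88, 0.92], [0.70, 0.72, 0.69])
assert not confidence_drift([0.90, 0.88, 0.92], [0.89, 0.91, 0.90])
```

The point is not the arithmetic but the discipline: without someone owning signals like this, agent degradation is discovered through missed detections rather than monitoring.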

Security Ontology Engineer
New Role · Phase 2+

Develops and maintains the knowledge frameworks that agents use to understand security concepts, attack taxonomies, organizational context, and business semantics. The quality of agent reasoning is bounded by the quality of the knowledge representation it reasons over. Poor ontologies produce agents that misclassify threats, miss contextual signals, and fail to translate findings meaningfully across business functions. This role is responsible for the semantic infrastructure of the entire agent ecosystem.

The role bridges two fields that rarely intersect in current security organizations: deep security domain expertise and knowledge engineering (graph databases, semantic representation, NLP, and ontological modeling). Genuine depth in both is required, which makes the formation challenge significant and the resulting practitioner genuinely rare.

Core skills
Knowledge graph design Semantic representation Attack taxonomy development Ontological modeling NLP fundamentals Security domain depth Organizational context mapping

Former T3 analysts or threat intelligence specialists who develop deep interest in knowledge engineering and structured representation. Alternatively, knowledge engineers or ontologists who invest substantially in security domain expertise. No shortcut: the role genuinely requires both halves.
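To make the semantic infrastructure idea tangible, here is a deliberately tiny sketch of the kind of structure an ontology engineer maintains: an attack-taxonomy graph that lets an agent generalize from a specific technique to its parent tactic. All labels are hypothetical, not drawn from any particular taxonomy.

```python
# Toy attack taxonomy (labels hypothetical): child concept -> parent concept.
PARENT = {
    "credential-stuffing": "credential-access",
    "kerberoasting": "credential-access",
    "credential-access": "initial-foothold",
}

def ancestors(concept: str) -> list[str]:
    """Walk the taxonomy upward so an agent can reason at any level of
    abstraction, not only at the specific technique it observed."""
    chain = []
    while concept in PARENT:
        concept = PARENT[concept]
        chain.append(concept)
    return chain

# An agent that sees kerberoasting can reason about credential access generally.
assert ancestors("kerberoasting") == ["credential-access", "initial-foothold"]
```

The quality claim in the text follows directly: if this graph is wrong or incomplete, every agent that reasons over it inherits the defect.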

Adversarial Scenario Designer
New Role · Phase 2+

Applies red team thinking specifically to agent stress-testing. Designs attack simulations and edge cases that challenge agent reasoning, surface detection blind spots, probe confidence calibration failures, and test the boundaries of autonomous decision-making under adversarial pressure. The role ensures that autonomous detection systems are tested adversarially, not just functionally, before they are trusted with production security decisions.

This role is conceptually adjacent to traditional red teaming but requires an additional technical dimension: understanding how agent reasoning systems fail. An attack that would challenge a human analyst may not challenge an agent in the same way, and vice versa. The Adversarial Scenario Designer must model both failure modes simultaneously.

Core skills
Red team methodology Agent reasoning failure analysis Attack simulation design Confidence calibration testing Edge case generation Agentic QA methodology Detection coverage gap analysis

Red teamers and penetration testers who develop systematic interest in AI system evaluation, or Agentic QA specialists who develop offensive security expertise. Both require deliberate cross-domain investment; neither background alone is sufficient.

Cross-Domain Interpreter
New Role · Phase 3+

As security agents embed into HR, Finance, Legal, R&D, and Operations workflows, the gap between security reasoning and business reasoning becomes a critical failure point. The Cross-Domain Interpreter ensures that agent findings are understood and acted upon by non-security stakeholders. This is not a communications role; it requires genuine security expertise combined with the ability to translate threat context into business-relevant language without losing analytical precision.

This role becomes structurally essential at Phase 3, when security is no longer centralized in a SOC but distributed throughout every business function. The people receiving security findings are HR managers, finance controllers, and legal counsel, not security analysts. The quality of their response to those findings depends on the quality of the translation.

Core skills
Security domain expertise Business function literacy Risk communication Regulatory translation Stakeholder management Cross-functional workflow design

T2/T3 analysts who develop strong business communication skills and cross-functional exposure over their careers. Security awareness professionals who develop deep technical grounding. Business relationship managers who invest seriously in security domain expertise.

Agent Trust & Boundary Engineer
New Discipline · Phase 2+

This role manages the trust fabric of the agent ecosystem. Often referred to as Permission Engineering, it is more precisely scoped as Agent Trust and Boundary Engineering, reflecting that the discipline covers the full communication layer, trust model, and access boundary architecture, not just permission assignment.

The IAM analogy is the right starting point but undersells the complexity. Where IAM manages relatively static human-to-system permissions, Agent Trust and Boundary Engineering must address permissions that are contextual and dynamic: an agent's access profile should shift based on the task it is executing, not just its identity. It must address the communication layer: which agents can talk to which other agents, under what conditions, with what data in the payload. It must solve the composition problem: an orchestrator agent directing sub-agents can aggregate the permissions of multiple downstream agents into an effective capability set that no single agent was explicitly granted. And it must operate at operational tempo, requiring revocation in seconds, not quarterly access reviews.
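The composition problem lends itself to a short sketch: compute the effective capability set an orchestrator can reach through its delegation graph and compare it against what any single agent was explicitly granted. Agent names and capability labels here are hypothetical.

```python
# Illustrative sketch of the composition problem: an orchestrator directing
# sub-agents can wield the union of their grants, a capability set no single
# agent was explicitly given. All identifiers are hypothetical.

def effective_capabilities(orchestrator: str,
                           delegations: dict[str, set[str]],
                           grants: dict[str, set[str]]) -> set[str]:
    """Union of the orchestrator's own grants and all reachable sub-agent grants."""
    seen: set[str] = set()
    stack = [orchestrator]
    caps: set[str] = set()
    while stack:
        agent = stack.pop()
        if agent in seen:
            continue
        seen.add(agent)
        caps |= grants.get(agent, set())
        stack.extend(delegations.get(agent, set()))
    return caps

grants = {
    "triage-orchestrator": {"read:alerts"},
    "email-agent": {"read:mailboxes"},
    "endpoint-agent": {"read:edr", "isolate:host"},
}
delegations = {"triage-orchestrator": {"email-agent", "endpoint-agent"}}

caps = effective_capabilities("triage-orchestrator", delegations, grants)
# The orchestrator can effectively read mailboxes AND isolate hosts, a
# combination no individual grant authorized. Composition analysis exists
# to surface exactly this kind of emergent capability.
assert "isolate:host" in caps and "read:mailboxes" in caps
```

A real implementation would also condition each edge on task context and payload constraints; the sketch shows only why reasoning about individual grants in isolation is insufficient.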

Why this role is distinct from the Trust Engineer: The Trust Engineer governs the quality and accountability of agent reasoning, covering auditability, confidence calibration, and compliance. The Agent Trust and Boundary Engineer governs the access surface and communication boundaries within which that reasoning operates. Both are necessary. Neither is the other.

Core skills
Zero-trust policy architecture Agent identity management Contextual permission design A2A communication security MCP boundary enforcement Permission composition analysis Real-time revocation systems Trust fabric design Graduated trust modeling

IAM engineers who develop deep agent architecture knowledge; security engineers specializing in API security and data boundary enforcement who extend to agent communication layers; or compliance-focused security engineers who develop AI governance expertise. The role has no clean predecessor because the problem space itself is new; it requires assembly from multiple adjacent disciplines.

Ethical Oversight Specialist
New Role · Phase 3+

Governs the ethical, legal, and accountability dimensions of autonomous security operations. Ensures that agent systems respect privacy, maintain regulatory compliance, operate within defined ethical boundaries, and remain accountable when their decisions produce adverse outcomes. This role emerges first in heavily regulated sectors, including financial services, healthcare, and critical infrastructure, where the legal implications of autonomous security decision-making are most immediately material.

The role is not primarily a security role in the traditional sense. It requires deep familiarity with regulatory frameworks, privacy law, and the organizational accountability structures needed for compliant autonomous decision-making. It is the direct organizational response to the compliance theater risk: the person responsible for ensuring that agent-generated audit trails actually reflect security reality, and that human verification sampling is sufficient to detect systemic agent failure before it becomes a regulatory event.

Core skills
Regulatory framework expertise Privacy law AI governance frameworks Audit design for autonomous systems Accountability structure design Ethical decision framework Board-level risk communication

Senior security managers or directors with broad operational exposure who develop regulatory and governance depth. GRC professionals who develop technical understanding of autonomous systems. Legal and compliance counsel who develop sufficient security domain knowledge to evaluate agent system design. The role requires both technical credibility and institutional authority.

08

The Deskilling Problem: Compliance and Audit

Nowhere are the consequences of the worst-case trajectory more immediately concrete and more systematically underestimated than in compliance and audit. The compliance industry has a persistent blind spot for what happens when the controls it certifies are operated by systems rather than humans, and the industry is moving toward that condition faster than its frameworks can adapt.

What Current Frameworks Assume

SOC 2, ISO 27001, PCI-DSS, HIPAA, and their equivalents share a foundational assumption: somewhere in the chain, there is a human who understands what the control is designed to prevent, can verify that the implementation actually prevents it, and can exercise judgment about whether a deviation is material. This assumption is load-bearing. The entire framework of periodic certification rests on human attestation of human-verifiable controls.

What Continuous Agent Monitoring Does

Agents can check controls at scale, continuously, and generate audit evidence automatically. This is operationally valuable, but it also creates a governance gap that the current compliance model is not designed to handle. An agent can verify that a control is configured as specified. It cannot verify that the specified configuration actually achieves the security intent of the control in the organization's specific environment. That gap, between configuration compliance and security effectiveness, has always existed; human auditors with domain expertise bridge it through judgment.

When auditors lose the technical depth to evaluate agent-generated evidence independently, whether because they never developed it or because they let it atrophy, audit becomes a process of attesting to agent outputs rather than evaluating security. Compliance certification detaches from security reality. Organizations can be fully compliant, per agent-generated audit trails, while being substantively insecure.

The Structural Conflict

Autonomous agents generate the evidence that autonomous agents are audited against. If an agent misclassifies a finding, suppresses an alert below a confidence threshold, or applies flawed reasoning to a compliance check, the audit trail reflects the agent's conclusion rather than the underlying reality. An auditor who cannot independently analyze the underlying data cannot detect the discrepancy. This is not a hypothetical failure mode; it is the logical endpoint of removing independent human verification capacity while maintaining a certification process that requires it.

Worst Case: Phase 4

A major breach occurs in a fully certified organization. Post-incident review reveals all compliance checkpoints were satisfied, agent-generated audit trails showed no anomalies, and human analysts had no independent verification capacity. The gap between certification and security posture becomes publicly undeniable. Regulatory frameworks written for human-operated controls are exposed as inadequate for autonomous operations.

Positive Case: Phase 2+

Continuous autonomous monitoring supplements rather than replaces periodic human verification. Regulators develop frameworks requiring human-verifiable sampling of agent-generated evidence, mandatory disclosure of agent confidence calibration methods, and independent audit of agent decision architecture. Compliance becomes more rigorous than the current model, not less, because the evidence surface expands dramatically while human verification requirements are explicitly designed into the governance framework.

09

The Adversarial Asymmetry Risk

The deskilling problem has a systemic dimension that extends beyond any single organization. Defenders across the industry are moving toward automation of expertise. Attackers are not constrained to follow the same path.

Sophisticated threat actors, including nation-state groups, advanced criminal organizations, and state-sponsored mercenaries, are using AI to augment human expertise, not replace it. Their operators use LLMs to accelerate reconnaissance, generate new attack content, assist with code development, and identify attack paths. But the strategic creativity, including choosing targets, understanding organizational context, and identifying the specific chain of weaknesses that defeats a particular organization's controls, remains human-directed and human-developed.

If the defender side of the industry systematically deskills through automation while the attacker side uses automation to amplify existing expert capacity, the net effect is a widening of the expertise gap in favor of attackers. Autonomous detection systems optimized for known-pattern detection face increasingly unique attacks from adversaries who still have deep human expertise driving the targeting and strategy. Detection coverage calcifies around known TTPs while sophisticated actors operate in the gaps that trained pattern-matchers cannot see.

This asymmetry is not resolved by better agents. It is resolved by preserving the human adversarial imagination: the ability to hypothesize attack chains that have not yet been enumerated, to recognize threat actor creativity, and to probe defenses from an attacker's perspective. These capabilities develop through the same formation pipeline the deskilling argument identifies as at risk. The Adversarial Scenario Designer role exists precisely to preserve this capacity in institutional form. But that role can only be populated by practitioners who developed adversarial intuition through sufficient prior exposure, which loops back, once more, to pipeline design.

The symmetry argument

The positive case argues that defender organizations operating in the agentic model are also augmenting experts with AI rather than replacing them, and that the organizations that navigate this well will have more T3-equivalent analysts, not fewer, because agent automation of the operational tier frees the economics for senior specialist capacity they previously could not afford. The adversarial asymmetry only materializes if defender organizations actually eliminate expertise rather than redirect it. That is the choice.

10

Design Principles for the Transition

The following principles are not prescriptive implementation steps. They are the minimum design considerations that separate organizations executing the positive transformation from those inadvertently executing the worst case. Each addresses a specific mechanism through which well-intentioned AI deployment can produce the outcome it was designed to avoid.

Principle 1: Measure Developmental Case Rate

Before deploying AI that automates any analyst-facing workflow, audit the developmental content of that workflow. What percentage of the cases currently handled by the role being automated require hypothesis formation, pattern recognition, or investigative judgment rather than lookup and disposition? If the answer is below 30%, a development problem already exists and automation will compound it. If above 30%, a deliberate replacement formation mechanism must be designed before automation deploys.
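The audit this principle describes reduces to simple arithmetic once a sample of closed cases has been labeled. The sketch below assumes a hypothetical labeling exercise; the 30% threshold is the paper's own diagnostic, everything else is illustrative.

```python
# Hypothetical sketch of the Principle 1 audit: label a sample of closed cases
# by whether handling them required investigative judgment, then compute the
# developmental case rate against the 30% threshold from the text.

def developmental_case_rate(cases: list[dict]) -> float:
    """Fraction of cases that required hypothesis formation, pattern
    recognition, or investigative judgment rather than lookup-and-dispose."""
    if not cases:
        return 0.0
    return sum(1 for c in cases if c["required_judgment"]) / len(cases)

sample = [
    {"id": 1, "required_judgment": False},  # lookup and disposition
    {"id": 2, "required_judgment": True},   # hypothesis formation
    {"id": 3, "required_judgment": False},
    {"id": 4, "required_judgment": False},
    {"id": 5, "required_judgment": True},
]

rate = developmental_case_rate(sample)
# rate = 0.4: above the 30% line, so a deliberate replacement formation
# mechanism must be designed before automation deploys. Below the line, a
# development problem already exists and automation will compound it.
assert rate > 0.30
```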

Principle 2: Design for Reasoning Visibility, Not Just Output Efficiency

Investigation interfaces must expose agent reasoning paths as a first-class product requirement, not a nice-to-have. The platform question is not "does the agent return the right answer?" but "does reviewing the agent's reasoning teach the analyst something?" These are different optimization targets and produce different interface designs. Platforms that present conclusions without reasoning chains are headcount reduction tools. Platforms that expose reasoning chains are formation infrastructure.

Principle 3: Preserve the Override Mechanism and Take It Seriously

Analyst override of agent findings must be tracked, analyzed, and fed back into agent retraining. An override rate of zero is not evidence of excellent agent performance; it is evidence of rubber-stamp review. Override rates and patterns are the primary signal for both agent improvement and analyst development quality. Organizations should establish baseline override rate expectations and investigate both excessive overrides and insufficient ones.
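The two-sided nature of the override signal can be sketched directly: both a near-zero rate and an excessive rate warrant investigation. The band values below are placeholders for illustration, not recommended baselines; each organization must establish its own.

```python
# Hedged sketch of the Principle 3 signal (band values are illustrative):
# flag override rates outside an expected band. Near-zero suggests
# rubber-stamp review; excessive suggests agent quality or calibration issues.

def override_signal(overridden: int, reviewed: int,
                    low: float = 0.02, high: float = 0.15) -> str:
    if reviewed == 0:
        return "no data"
    rate = overridden / reviewed
    if rate < low:
        return f"suspiciously low ({rate:.1%}): investigate rubber-stamping"
    if rate > high:
        return f"excessive ({rate:.1%}): investigate agent quality or calibration"
    return f"within band ({rate:.1%})"
```

In practice the override *patterns* (which agents, which finding types, which analysts) matter more than the aggregate rate, but the aggregate is the first tripwire.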

Principle 4: Fund Formation Investment from Efficiency Savings

The economic logic of AI deployment in security produces efficiency savings immediately and pipeline costs five to seven years later. Organizations must explicitly budget formation investment, including new entry pathway development, Agent QA roles, scenario design programs, and ontology engineering, as a line item funded from the efficiency gains that automation produces. This does not happen by default. It requires explicit decision-making against the efficiency gradient.

Principle 5: Treat Agent Trust and Boundary Engineering as Day-One Infrastructure

Permission engineering for agent systems, which involves defining and governing the boundaries between agents, data sources, APIs, communication layers, and organizational units, is not a Phase 2 or Phase 3 concern. Every autonomous agent deployed in Phase 1 has an access surface that needs governance. The Agent Trust and Boundary Engineer function, even if initially staffed by an existing security engineer doing double duty, must exist before the first production agent goes live.

Principle 6: Require Human Verification Sampling in Compliance Frameworks

Agent-generated audit evidence must not be accepted as self-certifying. Internal governance frameworks should establish mandatory human verification sampling for compliance-relevant agent decisions. The sampling rate, methodology, and independence requirements should be documented, auditable, and treated as a control in their own right.
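One way to make the sampling control itself auditable is to make the selection reproducible. The sketch below is an assumption-laden illustration: the 5% rate and the seeded RNG are example choices, not requirements of any compliance framework.

```python
# Illustrative sketch of Principle 6: draw a documented, reproducible human
# verification sample from agent-generated compliance decisions. The 5% rate
# and fixed seed are assumptions for the example only.
import random

def verification_sample(decision_ids: list[str], rate: float = 0.05,
                        seed: int = 2026) -> list[str]:
    """Reproducible random sample of agent decisions for independent human
    review. Recording the seed and rate makes the sampling methodology itself
    a verifiable control, per the principle above."""
    k = max(1, round(len(decision_ids) * rate))
    return random.Random(seed).sample(decision_ids, k)

decisions = [f"decision-{i:04d}" for i in range(1_000)]
sampled = verification_sample(decisions)
assert len(sampled) == 50  # 5% of 1,000; an auditor can regenerate the sample
```

Independence matters as much as the mechanism: the sample must be reviewed by humans with the capacity to analyze the underlying data, not merely to re-read the agent's conclusion.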

Principle 7: Design the Governance Tier for the People Who Will Populate It

The Agent Architects, Ethical Oversight Specialists, and Security Ontology Engineers of 2028 onwards are today's T2 and T3 analysts. Career pathway design for the new roles must be visible and accessible now, not as aspirational job descriptions but as structured development programs with clear competency milestones. Organizations that wait until Phase 3 to think about who will fill these roles will find there is no internal pipeline to draw from.

11

Cross-Profession Implications

The agentic transformation of security operations does not occur in isolation from the rest of the enterprise. As autonomous security agents embed into every business function, and as the Agentic Web infrastructure layer enables agents to discover, communicate, and collaborate across organizational boundaries, every profession that touches security, directly or tangentially, is affected. The following analysis covers the most materially impacted adjacent roles.

Software and Detection Engineering

Software engineers are undergoing a parallel transformation to security analysts. The shift from writing deterministic code to designing cognitive loops, from debugging functions to validating trust in agent reasoning, mirrors the analyst's shift from triage to orchestration. Detection engineers face the most direct overlap: their work is now explicitly about designing logic that agents execute, which means the software engineering and security domains converge in the Detection Designer role described in Section 6.

More broadly, software engineers building security products, including platforms, investigation tools, SOAR replacements, and agent orchestration layers, must develop fluency in agentic engineering principles. The platforms they build determine whether investigation interfaces expose reasoning chains (formation infrastructure) or present only conclusions (headcount reduction tools). That design choice, made by product engineers, has downstream consequences for the analyst pipeline that most engineering teams have not accounted for.

The skills shift for this cohort is from writing deterministic code to designing cognitive systems (reasoning chains, memory architecture, tool orchestration, trust validation) and managing the economics of autonomous operation (model costs, retry loops, memory management, and latency). The engineering discipline that emerges from this transition is what the Role Evolution framework calls Agentic Engineering: treating cognition not as a feature but as infrastructure.

Legal, Compliance, and HR

Legal counsel faces the convergence of two previously separate domains: security law and AI governance law. As autonomous agents make consequential security decisions, including containment actions, evidence preservation, and breach notification triggers, legal teams must understand both the evidentiary standards those decisions produce and the liability exposure when autonomous systems act incorrectly. The legal professional who can evaluate whether an agent-generated evidence chain meets admissibility standards, or whether an autonomous containment action creates exposure, becomes valuable in ways that have no current parallel.

Compliance professionals face the most immediate and concrete transformation. The compliance frameworks they administer were written for human-operated controls. As autonomous agents take over compliance monitoring, evidence generation, and control verification, compliance professionals must develop the technical depth to evaluate agent-generated evidence independently through understanding what agent confidence scores mean, what systematic failure modes look like, and how to design human verification sampling that actually catches agent errors. Accepting agent audit trails uncritically is the compliance theater failure mode described in Section 8.

HR professionals are among the earliest non-security users of embedded security agents. The employee offboarding investigation scenario, where an HR Security Agent discovers potential data exfiltration and escalates to security analysts, is a Phase 2 reality, not a distant possibility. HR professionals must understand how to interact with security agents, what their outputs mean, what escalation thresholds apply, and what their own responsibilities are when agents surface findings. The cross-domain literacy this requires is modest in technical depth but broad in scope: enough security understanding not to misinterpret agent findings, and enough process understanding to know when to escalate and to whom.

IT Operations and Product Management

IT Operations increasingly operates at the interface between security agents and the infrastructure they monitor. Security-focused change management becomes agent-assisted. Change requests flow through agents that perform real-time security impact assessment before human approval. IT operations professionals must understand what security agents are evaluating when they assess a change, what findings are blocking versus advisory, and how to provide the contextual information agents need to make accurate assessments. The historical pattern of IT and security operating in separate silos is incompatible with a security mesh architecture.

Product Managers for security platforms face a new capability planning challenge. They must understand agent economics, including token costs, retry loops, memory architecture, and latency tradeoffs, as first-class product constraints. They must understand trust metrics and human-AI collaboration UX patterns as product requirements, not afterthoughts. And they must make the formation versus efficiency tradeoff explicit in every platform decision. Does this feature design preserve analyst development value, or eliminate it? That question is probably not on most product management radars.

Executive Leadership

The executive layer faces a different kind of transformation, not of technical skills but of decision-making infrastructure. The security briefings executives receive will increasingly be synthesized by agents rather than prepared by analysts. The risk metrics they act on will increasingly be generated by autonomous monitoring rather than human assessment. The vendor decisions they make will determine whether their organizations have independent capacity to verify that autonomous systems are functioning correctly, or whether they have ceded that capacity permanently to external providers.

Three capabilities become essential at the executive level that were rarely required before. First, the ability to ask the right questions of agent-generated security summaries, not "what does this say?" but "what is this agent capable of missing?" Second, the organizational governance to maintain human verification capacity as a deliberate investment against efficiency pressure. Third, the ethical framework to recognize when autonomous security operations are approaching boundaries, including privacy and regulatory limits, that require human authority rather than agent judgment.

Executives who lack these capabilities will make resource allocation decisions that look rational in the short term and catastrophic in hindsight, exactly as MSSP procurement decisions looked in 2012 and look very different from the vantage point of 2026.

12

Universal Skills and the Skills in Decline

The role-specific skills covered in Sections 5–7 describe what individual practitioners need for particular functions. Beneath those specifics, certain skills become table stakes across every role in the security organization, and certain skills that currently carry high market value are being systematically devalued by automation. Understanding both categories is essential for individual practitioners planning development investments and for organizations designing workforce transition programs.

Skills Rising Across All Roles

Agent output critical evaluation is the foundational universal skill in an agentic future. Every security role, from T1 equivalent to CISO, requires the ability to evaluate agent-generated findings with skepticism: neither reflexive rejection nor uncritical acceptance, but the informed judgment to distinguish high-confidence routine findings from low-confidence ones that require human attention. This skill requires both domain expertise (knowing what the agent should find) and systems understanding (knowing how agents fail and what their failure modes look like).

Prompt literacy, the ability to formulate precise investigative queries, structure agent directives, and design prompt templates that produce reliable outputs, is not a specialist skill. It is as fundamental to agentic-era security work as query syntax knowledge was to the SIEM era. Practitioners who cannot construct effective prompts cannot extract the investigative value agents offer; those who can construct high-quality prompts operate at far higher effectiveness. This is a trainable skill with high return on investment at every level of the security organization.

Human-AI collaboration patterns, specifically knowing when to delegate, when to intervene, how to structure handoffs, and how to maintain accountability when agents are acting autonomously, represent a new category of judgment that has no close equivalent in pre-agentic security work. The analyst who instinctively knows when an agent's 82% confidence finding deserves another look, and the manager who knows which decisions should never be delegated to agents regardless of confidence score, are exercising this judgment. It develops through experience with agent systems, but organizations can accelerate its development through deliberate case review programs.

Data source and tool-chain awareness, meaning an understanding of what MCP servers are connected to what data, what agents can and cannot access, and where coverage gaps exist, determines whether practitioners can evaluate the completeness of agent findings, not just their accuracy. An agent that concludes "no anomalous activity detected" means something very different depending on whether it had access to email logs, network traffic, and endpoint telemetry or only to authentication events. The practitioner who does not understand the agent's data surface cannot properly interpret its conclusions.
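The coverage caveat above can be made operational: a negative finding should be qualified by the data surface the agent could actually reach. The sketch below is illustrative only; the source names, the `REQUIRED_SOURCES` set, and the function are assumptions for this example, not any real MCP API.

```python
# Illustrative sketch (hypothetical data model): annotating an agent's
# negative finding with the coverage gaps in its connected data sources.
REQUIRED_SOURCES = {"email_logs", "network_traffic", "endpoint_telemetry", "auth_events"}

def qualify_negative_finding(finding: str, connected_sources: set[str]) -> str:
    """Append the sources the agent could NOT see to a negative finding."""
    gaps = REQUIRED_SOURCES - connected_sources
    if not gaps:
        return f"{finding} (full coverage of required sources)"
    return f"{finding} (UNVERIFIED for: {', '.join(sorted(gaps))})"

# An agent limited to authentication events produces a much weaker negative:
print(qualify_negative_finding("no anomalous activity detected", {"auth_events"}))
# -> no anomalous activity detected (UNVERIFIED for: email_logs, endpoint_telemetry, network_traffic)
```

The design point is that the qualification travels with the finding itself, so a reviewer two steps downstream cannot mistake a narrow negative for a broad one.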

Ethics and bias recognition, including identifying when autonomous operations produce systematically biased outcomes and when automated decisions are approaching ethical or legal boundaries, becomes a required competency at every level as autonomous systems take on more consequential security decisions. This is not primarily a technical skill. It is a combination of domain judgment, ethical awareness, and the professional confidence to flag concerns about systems that others may trust.

Skills Declining in Value

Identifying skills in decline is not a commentary on the practitioners who hold them. It is a practical map for where retraining investment is most urgent. The following represent functions that agents handle with increasing competency, reducing the market premium for human performance of the same tasks.

Manual alert triage and enrichment are the most immediate casualties: the correlation of known indicators, the enrichment of IP addresses and domains against threat intelligence databases, and basic pattern matching against known signatures. These are precisely the tasks that Phase 1 agents execute reliably and at scale. Practitioners whose primary value proposition is speed and accuracy on these tasks face the most urgent need for transition.

Template-based report writing, meaning the structured documentation of investigation steps, findings, and recommendations in standardized formats, is a Phase 1 automation target. Agents generate structured investigation reports as a native output. The human value-add shifts from writing the report to reviewing and attesting to its accuracy, identifying what the agent missed, and adding the contextual judgment that makes a technically accurate report operationally useful.

Rule-based escalation decisions, specifically determining whether an alert meets defined thresholds for escalation to a higher tier, are straightforward automation targets. The agent's confidence scoring and escalation logic replaces the lookup-based decision that constitutes much of current T1 escalation work. What does not automate is the judgment call on edge cases: the alert that does not meet escalation criteria by the numbers but has characteristics that an experienced analyst recognizes as worth a second look. That judgment requires the deep, tactical expertise the formation pipeline is designed to produce.
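The distinction above can be made concrete. The sketch below shows the purely mechanical part of escalation, the part agents absorb; the threshold value and field names are assumptions for illustration, not values from any real platform.

```python
# Minimal sketch (assumed threshold and severity labels) of lookup-based
# escalation logic. The point is what it cannot express: the edge case
# that fails the numeric test but still deserves a second look.
ESCALATE_THRESHOLD = 0.80  # assumed confidence cutoff, not a standard value

def rule_based_escalation(confidence: float, severity: str) -> bool:
    """The automatable part: a pure threshold-and-label lookup."""
    return confidence >= ESCALATE_THRESHOLD or severity == "critical"

# An alert at 0.62 confidence and medium severity stays below the bar...
assert rule_based_escalation(0.62, "medium") is False
# ...yet an experienced analyst might still flag it on contextual grounds
# (unusual timing, a sensitive asset) that no threshold encodes.
```

Everything in that function is a lookup; nothing in it is judgment. That is exactly why it automates cleanly, and why the residual human work concentrates on the cases the function gets wrong.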

Basic log querying without analytical depth, including running predefined searches, extracting known log fields, and populating spreadsheets with filtered data, is absorbed by agent tool-use through MCP server connections. The practitioner who knows how to write a SIEM query retains value; the practitioner whose primary contribution is executing predefined queries without analytical interpretation does not.

The 30% Diagnostic

One operational metric synthesizes much of the skills-in-decline analysis into a single, measurable organizational indicator. Before any AI deployment decision, security leaders should audit the function being automated against a single question: what percentage of the cases currently handled by this role require actual hypothesis formation, not lookup and disposition, but genuine investigative judgment? If the answer is below 30%, the organization already has a development problem and automation will compound it. If above 30%, a deliberate replacement formation mechanism must be designed before automation deploys.

The same metric applies as a continuous operational monitor post-deployment. If fewer than 30% of the cases routed to junior analysts for review, the structured sample preserved for developmental purposes, require hypothesis formation or genuine judgment, the sample is not serving its formation purpose: it generates the appearance of analyst development without the substance.
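The diagnostic reduces to a single ratio, which makes it easy to operationalize. The sketch below is illustrative; the case records and the `requires_hypothesis` flag are hypothetical, and in practice that label comes from a manual audit of the role's caseload.

```python
# Illustrative implementation of the 30% diagnostic described above.
def hypothesis_rate(cases: list[dict]) -> float:
    """Fraction of cases requiring genuine hypothesis formation."""
    if not cases:
        return 0.0
    return sum(1 for c in cases if c["requires_hypothesis"]) / len(cases)

def diagnose(cases: list[dict], threshold: float = 0.30) -> str:
    rate = hypothesis_rate(cases)
    if rate < threshold:
        return f"{rate:.0%}: development problem exists; automation will compound it"
    return f"{rate:.0%}: design a replacement formation mechanism before automating"

sample = [{"requires_hypothesis": i < 2} for i in range(10)]  # 2 of 10 cases
print(diagnose(sample))  # -> 20%: development problem exists; automation will compound it
```

Run pre-deployment against the role's caseload, and post-deployment against the sample preserved for junior analyst review; the same threshold applies to both.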

Skills declining fastest
  • Manual alert triage and enrichment
  • Template-based investigation reporting
  • Rule-based escalation decisions
  • Predefined log query execution
  • Basic data correlation without interpretation
  • Signature-based pattern matching
  • SLA-driven ticket closure
Skills rising in all roles
  • Agent output critical evaluation
  • Prompt literacy and query design
  • Human-AI collaboration judgment
  • Data source and tool-chain awareness
  • Ethics and bias recognition in autonomous systems
  • Hypothesis formation and adversarial creativity
  • Cross-functional communication of agent findings
14

Formation Pathways: A Practical Map

The roles described in Sections 5–7 do not populate themselves. For every Agent Architect, Security Ontology Engineer, and Trust and Boundary Engineer the industry needs from 2028 onwards, there must be a practitioner who made specific development investments beforehand. The following maps those investment paths, both for existing practitioners making the transition and for new entrants seeking to enter the security profession through agent-era pathways that did not previously exist.

Existing Practitioners: Transition Paths

T1 Analysts face the most urgent and most tractable transition. The target roles, Agent Validator in the near term and Agent Operations Specialist in the medium term, require building on existing alert pattern knowledge while developing the agent-specific layer: an understanding of agent reasoning systems, confidence calibration, override tracking, and MCP data surface and scope mapping. The transition requires approximately 12–18 months of deliberate development, most of which can happen in role if the organization designs the Phase 1 deployment to expose reasoning chains rather than just outputs. T1 analysts who develop strong agent evaluation instincts in Phase 1 are well positioned for Agent Operations roles in Phase 2.

T2 Analysts have a clear path to Investigation Coordinator through Phase 2 deployment experience. The transition is largely natural if the Phase 2 environment is well designed. The investment required falls in two areas: multi-agent workflow management, which requires exposure to coordinating multiple simultaneous agent investigations, and evidence-chain evaluation for legal and compliance, which requires collaboration with legal counsel on what agent-collected evidence means in a formal setting. T2 analysts who develop interest in the architectural layer have a longer but high-value path to Agent Architect through cross-training in agent engineering fundamentals.

T3 / Senior Analysts have the clearest path to the highest-value new roles, and face the most significant identity transition. The Agent Architect role requires externalizing expertise, specifically translating investigation intuition into explicit system design. This is cognitively demanding in a specific way: most investigators have never been asked to articulate their reasoning at the level of precision required for agent architectures. Development programs should include structured knowledge elicitation exercises, agent design workshops with immediate feedback loops, and mentored agent-building projects that test whether their investigative knowledge can be successfully encoded. Senior analysts who develop Security Ontology Engineering skills additionally require investment in knowledge graph foundations and semantic representation, likely a 6–12 month development period alongside existing responsibilities.

Detection Engineers have a relatively direct path to Agent Detection Designer, with the primary development requirement being agent workflow design: an understanding of how agents execute detection logic, what evidence chain specifications look like in practice, and how to design graceful failure modes. Most detection engineers already have the security domain knowledge and analytical precision the role requires. The gap is primarily technical: MCP tool design, confidence threshold calibration, and the mechanics of autonomous evidence preservation.

SOC Managers transitioning to Agent Operations Lead face a split development path depending on their existing strength. Those with strong technical backgrounds should invest in AI governance frameworks and agent performance measurement methodologies. Those with stronger operational backgrounds should invest in the technical foundations of agent deployment and observability. Both need significant development in liaison and business relationship management skills, which have traditionally been peripheral to SOC management and will become central in the distributed mesh model.

Transition Timeline Reference: Development Guide

Current Role | Phase 2 Target | Phase 3 Target | Primary Development Investment
T1 Analyst | Agent Validator | Agent Operations Specialist | Agent reasoning evaluation; confidence calibration; MCP data surface awareness
T2 Analyst | Investigation Coordinator | Agent Architect (long path) | Multi-agent workflow management; legal evidence standards; hypothesis methodology
T3 / Senior Analyst | Agent Architect | Security Ontology Engineer | Agent engineering fundamentals; knowledge elicitation; ontological modeling
Detection Engineer | Agent Detection Designer | Adversarial Scenario Designer | Agent workflow design; evidence chain specification; failure mode analysis
Red Teamer / Pen Tester | Adversarial Scenario Designer | Adversarial Scenario Designer (senior) | Agent reasoning failure analysis; agentic QA methodology; confidence boundary testing
SOC Manager | Agent Operations Lead | Ethical Oversight Specialist | AI governance frameworks; agent performance measurement; cross-functional liaison
IAM / Security Engineer | Agent Trust & Boundary Engineer | Trust fabric architecture (senior) | Contextual permission systems; A2A communication security; composition analysis

New Entrants: Agent-Era Formation Paths

The security profession is creating entry pathways that did not exist five years ago. These are not shortcuts around foundational security expertise. They are different formation routes that develop domain depth through different mechanisms. The following represent viable agent-era entry paths for practitioners who are entering security for the first time.

Agent QA and Validation is the most accessible new entry pathway for practitioners with analytical backgrounds but limited prior security exposure. The work involves testing agent reasoning quality, identifying hallucination patterns, evaluating confidence calibration, and designing stress test scenarios for detection logic. Practitioners in this role develop security domain knowledge through continuous exposure to what agents get right and wrong, because failure analysis requires understanding what a correct result would have looked like. A two-year development cycle in this role, with structured mentorship from senior analysts, can produce practitioners with genuine investigation judgment and strong agent systems knowledge simultaneously.

Security Ontology Engineering is accessible to practitioners from knowledge engineering, information architecture, and data modeling backgrounds who invest seriously in security domain development. The formal knowledge engineering skills are transferable; the security domain expertise must be built. Organizations that hire from this background should design explicit cross-training programs, rotating ontology engineering candidates through investigation workflows, threat intelligence analysis, and detection engineering.

Trust and Boundary Engineering is accessible from identity and access management, API security, and network security engineering backgrounds. The contextual permission design and agent communication security aspects require development, but the foundational governance and access control knowledge transfers well. This is likely the most near-term hiring opportunity for practitioners with strong IAM backgrounds who want to develop into a higher-complexity, higher-value specialization.

Organizational Investment Requirements

Formation pathways exist only if organizations fund them. Three specific investment categories determine whether the positive transformation trajectory is achievable or remains aspirational.

Structured development case programs must be designed and maintained as operational infrastructure, not as training add-ons. This means identifying which agent-resolved cases carry the highest developmental value, routing them to appropriate analysts, building the structured review interfaces that expose agent reasoning, and tracking development outcomes. The cost is operational: analyst time spent on development cases rather than production throughput. The payoff is a senior analyst bench that exists in 2030.

Cross-training investment for the new specialist roles requires dedicated time and budget: T3 analysts developing agent architecture skills, detection engineers developing adversarial scenario design capability, IAM engineers developing agent boundary expertise. These transitions do not happen on the margins of existing responsibilities. They require protected development time, access to agent engineering tooling, mentorship from those who have already made the transition, and explicit permission to prioritize development over short-term operational productivity.

Career pathway visibility is the most underinvested and most immediately actionable requirement. The new roles described in this paper need to be visible as defined career destinations with explicit competency requirements, progression milestones, and compensation recognition, not as job descriptions that will be written when needed. Practitioners make development investments based on the career landscape they can see. Organizations that make the new roles visible now will find candidates self-selecting into development paths that serve the organization. Organizations that wait will find themselves unable to populate the roles they will urgently need when Phase 2 and Phase 3 arrive.

15

Conclusion

The traditional SOC, centralized, human-bound, and reactive, was never designed to withstand the scale, speed, and sophistication of today's cybersecurity landscape. Its replacement by a distributed, agent-powered security fabric is not a question of if but of how. The technology is more than capable, the economic incentives are aligned, and the organizational transformation is underway. The Natural Language or Agentic Web may provide a blueprint for the infrastructure layer that makes cross-organizational agent collaboration technically achievable rather than theoretical.

What is not determined is whether that transformation preserves or destroys the human expertise layer that makes autonomous systems trustworthy. The argument of this paper is not that the transformation should be resisted but that it must be intentional and designed. Every section points toward the same conclusion from a different angle. The positive trajectory requires active organizational choices against the efficiency gradient. Those choices are not technologically complex. They are economically inconvenient. And the industry has failed to make them both times it was given the opportunity: in 2012 with MSSP and in 2019 with MDR.

AI is the third wave. Unlike its predecessors, it has the capability to complete the pipeline damage they began. That capability is also what makes the design decision more consequential, not less. The same technology that can hollow out the analyst pipeline can, if deployed with formation intent, produce a security profession that is more capable, more sustainable, and more strategically valuable than anything the centralized SOC ever achieved.

The roles described in Sections 5 through 7, from the Agent Validator to the Ethical Oversight Specialist and from the Security Ontology Engineer to the Agent Trust and Boundary Engineer, represent the human architecture of a more capable profession. They are not speculative. They are the logical endpoints of decisions that security leaders are making today about how to deploy AI, how to preserve developmental value, how to govern autonomous systems, and how to invest in the practitioners who will populate these roles when the roles become critical.

The practitioners who will fill the governance tier of the 2028 to 2030 autonomous SOC are working somewhere in security today. Whether they develop the expertise required depends on whether the organizations they work for invest in their formation or automate it away. Whether compliance frameworks meaningfully constrain autonomous security operations, or certify fictional security postures, depends on whether regulators and organizations develop the human verification capacity now. Whether the adversarial asymmetry narrows or widens depends on whether the industry preserves the adversarial creativity and threat intuition that no agent can currently replicate.

These are tractable problems. They are not primarily technology problems. They are organizational design problems, workforce investment problems, and governance design problems of the kind that security leadership is well equipped to solve, if they recognize that the threat is structural and self-inflicted rather than external and beyond their control.

The security profession is not facing a technology problem. It is facing a formation problem that a technology transition is about to make permanent. The practitioners who will govern the autonomous security systems of 2030 are somewhere in the industry today, working their way through the volume and repetition that builds the judgment those systems will require to remain trustworthy. Whether they complete that formation, or whether that formation gets automated away before it finishes, is a choice that security leaders are making right now, in purchasing decisions and platform design choices and headcount models, without necessarily recognizing that they are making it.

The recomposition of security work is inevitable. Its outcome is not.

References and Context

This paper synthesizes original research from Command Zero, including "The Evolution of the SOC" and "The Natural Language Web: Designing for Agent-Human Coexistence," developed by Dean de Beer, CTO & Cofounder, Command Zero.

The four-phase SOC evolution model draws on the Command Zero distributed agentic SOC architecture, incorporating the Agent Communication and Discovery Protocol (ACDP), Model Context Protocol (MCP), and A2A agent communication standards.

The pipeline analysis in Section 2 incorporates arguments from "The Hollow Middle: How We Gutted the SOC Analyst Pipeline Before AI Ever Showed Up," referenced with attribution as an independent corroborating source for the MSSP/MDR pipeline damage argument. The 30% developmental case rate metric in Section 13 derives from the same source.

The Agent Trust and Boundary Engineering role definition in Section 7, and its treatment as a discipline distinct from the Trust Engineer role, represents original analytical work developed in the context of this research. It extends beyond traditional IAM frameworks to address the contextual permission, communication layer governance, and permission composition challenges specific to autonomous multi-agent security systems.

The cross-profession implications analysis in Section 11 extends the role framework beyond the security organization to address the full scope of enterprise transformation implied by distributed security mesh architecture.
