
Implementation Guide: Hunt for Threat Actor TTPs Across Endpoint Telemetry & Manage Vulnerability Lifecycle End-to-End
Step-by-step implementation guide for deploying AI to hunt for threat actor TTPs across endpoint telemetry and manage the vulnerability lifecycle end-to-end for Government & Defense clients.
Software Procurement
Microsoft Sentinel (Azure Government — FedRAMP High)
| Field | Value |
|---|---|
| Vendor | Microsoft Azure Government |
| License type | Consumption-based |
| Cost estimate | See UC-10. Hunting queries add minimal additional ingestion cost. |
Primary hunting platform. Sentinel's hunting dashboard provides a structured environment for hypothesis-driven hunts using KQL queries. Hunting bookmarks save interesting findings for analyst review. Live Stream enables real-time hunting query monitoring.
Microsoft Defender for Endpoint (GCC High)
| Field | Value |
|---|---|
| Vendor | Microsoft |
| License type | Included in M365 E5 GCC High |
| Cost estimate | ~$5.20/device/month standalone |
Primary endpoint telemetry source for threat hunting. Defender's Advanced Hunting (via Microsoft 365 Defender GCC High portal) provides 30-day rolling endpoint telemetry across all managed endpoints. Native connector to Sentinel streams all Defender telemetry for cross-source hunting.
Tenable.sc (Vulnerability Management)
| Field | Value |
|---|---|
| Vendor | Tenable |
| License type | Perpetual + annual maintenance |
| Cost estimate | See UC-13 |
Vulnerability scan engine and management platform. The vulnerability lifecycle agent uses Tenable.sc as the authoritative source for vulnerability data (CVE, CVSS, asset, plugin output). Tenable.sc API enables programmatic retrieval of scan results and verification of remediation.
Azure OpenAI (Azure Government)
| Field | Value |
|---|---|
| Vendor | Microsoft Azure Government |
| License type | Consumption-based |
| Cost estimate | Monthly hunt report: ~$5–$15. Vulnerability triage narrative: ~$1–$3 per CVE cluster. |
Generates hunt report narratives, vulnerability triage prioritization explanations, and remediation assignment communications. All processing within FedRAMP High boundary.
Microsoft Azure Logic Apps (Azure Government)
| Field | Value |
|---|---|
| Vendor | Microsoft Azure Government |
| License type | Consumption-based |
| Cost estimate | ~$0.000025/action |
Orchestrates the vulnerability lifecycle workflow: new CVE detected → triage → assign → track → verify → close. Runs on schedule (weekly full triage) and event-driven (new critical CVE detected → immediate triage).
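At the listed per-action rate, orchestration cost can be sanity-checked with a quick estimate (the action counts below are illustrative, not measured):

```python
# Rough Logic Apps consumption estimate at ~$0.000025/action.
# Action counts per run are illustrative placeholders, not measured values.
ACTION_RATE_USD = 0.000025

def monthly_logic_app_cost(actions_per_run: int, runs_per_month: int) -> float:
    """Estimated consumption cost in USD for one workflow."""
    return actions_per_run * runs_per_month * ACTION_RATE_USD

# e.g., weekly triage (~200 actions/run, 4 runs) + daily SLA monitor (~50 actions, 30 runs)
total = monthly_logic_app_cost(200, 4) + monthly_logic_app_cost(50, 30)
print(f"${total:.2f}/month")  # well under a dollar; orchestration cost is negligible
```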
Prerequisites
- Hunting hypothesis library: Effective threat hunting is hypothesis-driven, not query-driven. Before configuring the autonomous hunting agent, work with the ISSO to develop a hunting hypothesis library — a list of specific threats or behaviors to hunt for, based on the organization's threat model (who would attack them, with what TTPs). Common starting hypotheses for defense contractors: credential harvesting, lateral movement from compromised workstations, data staging before exfiltration, persistence mechanisms in critical servers.
- MITRE ATT&CK baseline: The hunting queries must be mapped to MITRE ATT&CK techniques relevant to the organization's threat profile. Review current NSA/CISA advisories targeting the Defense Industrial Base (DIB) sector for the most relevant TTPs to hunt for. CISA's Known Exploited Vulnerabilities catalog identifies which CVEs are actively exploited — hunting for exploitation attempts of those specific CVEs is high-value.
- Vulnerability management SLAs: Before configuring the lifecycle automation, obtain written approval of the remediation SLAs from the ISSM: Critical CVEs (CVSS 9.0+): 15 days; High CVEs (CVSS 7.0–8.9): 30 days; Medium CVEs (CVSS 4.0–6.9): 90 days. These SLAs drive the escalation thresholds in the lifecycle workflow.
- System owner/admin contact list: The vulnerability lifecycle agent assigns remediation tasks to system owners and administrators. Maintain a current contact list (name, email, Teams alias) for each system category in the CMMC boundary. This list must be kept current — outdated contacts cause assignment failures.
- POA&M system integration: Identify the client's POA&M system of record (eMASS, JIRA, SharePoint list). The lifecycle agent's close-out step must update the authoritative POA&M, not just mark items complete internally.
- IT admin access: Azure Government subscription, Sentinel workspace, Defender for Endpoint admin, Tenable.sc admin, POA&M system admin access.
Installation Steps
Step 1: Deploy the Autonomous Threat Hunting Agent
Build the autonomous hunting agent that runs scheduled KQL hunts, evaluates findings, and escalates confirmed threats.
```python
# threat_hunt_agent.py
# Autonomous threat hunting agent — runs on an Azure Functions timer trigger (weekly)
import datetime
import json
import os

from azure.identity import ClientSecretCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus
from openai import AzureOpenAI

credential = ClientSecretCredential(
    os.environ["AZURE_TENANT_ID"],
    os.environ["AZURE_CLIENT_ID"],
    os.environ["AZURE_CLIENT_SECRET"]
)
# Azure Government Log Analytics endpoint
logs_client = LogsQueryClient(credential, endpoint="https://api.loganalytics.azure.us/v1")
aoai_client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_KEY"],
    api_version="2024-08-01-preview"
)
WORKSPACE_ID = os.environ["SENTINEL_WORKSPACE_ID"]

# Threat hunting hypothesis library.
# Each hypothesis has a KQL query, a description, and a MITRE ATT&CK mapping.
HUNT_HYPOTHESES = [
    {
        "id": "H-001",
        "name": "Credential Access via LSASS Memory Read",
        "mitre": "T1003.001",
        "description": "Hunt for processes accessing LSASS memory — common credential dumping technique",
        "query": """
DeviceEvents
| where TimeGenerated > ago(7d)
| where ActionType == "OpenProcessApiCall"
| where FileName =~ "lsass.exe"
| where InitiatingProcessFileName !in~ ("MsMpEng.exe", "csrss.exe", "wininit.exe",
    "services.exe", "lsass.exe", "svchost.exe")
| summarize
    event_count = count(),
    devices = make_set(DeviceName),
    first_seen = min(TimeGenerated),
    last_seen = max(TimeGenerated)
    by InitiatingProcessFileName, InitiatingProcessVersionInfoCompanyName
| where event_count > 1
| order by event_count desc""",
        "severity": "Critical",
        "true_positive_threshold": 1  # Any result is suspicious
    },
    {
        "id": "H-002",
        "name": "Lateral Movement via Pass-the-Hash",
        "mitre": "T1550.002",
        "description": "Hunt for NTLM authentication anomalies indicating pass-the-hash",
        "query": """
IdentityLogonEvents
| where TimeGenerated > ago(7d)
| where LogonType == "Network"
| where Protocol == "NTLM"
| where FailureReason == "" // Successful logons only
| summarize
    distinct_sources = dcount(IPAddress),
    distinct_destinations = dcount(DestinationDeviceName),
    logon_count = count()
    by AccountUpn, bin(TimeGenerated, 1h)
| where distinct_destinations > 5 and logon_count > 20
| order by logon_count desc""",
        "severity": "High",
        "true_positive_threshold": 3
    },
    {
        "id": "H-003",
        "name": "Data Staging in Unusual Directories",
        "mitre": "T1074.001",
        "description": "Hunt for large file copies to temp or user-writable directories (pre-exfiltration staging)",
        "query": """
DeviceFileEvents
| where TimeGenerated > ago(7d)
| where ActionType == "FileCreated"
| where FolderPath has_any (@"\\Temp\\", @"\\AppData\\Local\\Temp\\",
    @"\\Public\\", @"\\ProgramData\\Temp\\")
| where FileSize > 10485760 // Files > 10 MB
| summarize
    total_size_mb = sum(FileSize) / 1048576,
    file_count = count(),
    file_types = make_set(tostring(split(FileName, ".")[-1]))
    by DeviceName, InitiatingProcessAccountName, bin(TimeGenerated, 1h)
| where total_size_mb > 100 // More than 100 MB staged in 1 hour
| order by total_size_mb desc""",
        "severity": "High",
        "true_positive_threshold": 2
    },
    {
        "id": "H-004",
        "name": "Scheduled Task Persistence",
        "mitre": "T1053.005",
        "description": "Hunt for scheduled task creation pointing to unusual executables",
        "query": """
DeviceProcessEvents
| where TimeGenerated > ago(7d)
| where (FileName =~ "schtasks.exe" and ProcessCommandLine has "/create")
    or (FileName =~ "at.exe" and ProcessCommandLine has "/add")
| where ProcessCommandLine has_any (
    @"\\AppData\\", @"\\Temp\\", @"\\Public\\",
    "powershell", "cmd.exe /c", "wscript", "cscript", "mshta",
    "http://", "https://", @"\\\\")
| project TimeGenerated, DeviceName, AccountName,
    ProcessCommandLine, InitiatingProcessFileName
| order by TimeGenerated desc""",
        "severity": "Medium",
        "true_positive_threshold": 3
    },
    {
        "id": "H-005",
        "name": "C2 Beacon Pattern — Periodic Small Connections",
        "mitre": "T1071.001",
        "description": "Hunt for periodic, small outbound connections typical of C2 beaconing",
        "query": """
DeviceNetworkEvents
| where TimeGenerated > ago(7d)
| where ActionType == "ConnectionSuccess"
| where RemotePort in (80, 443, 8080, 8443, 1080, 4444, 4445)
| where RemoteIPType == "Public"
| summarize
    connection_count = count(),
    avg_interval_min = (max(TimeGenerated) - min(TimeGenerated)) / count() / 1m,
    unique_bytes_sent = dcount(SentBytes) // confirm byte columns exist in your MDE schema
    by DeviceName, InitiatingProcessFileName, RemoteIP
| where connection_count > 20
    and avg_interval_min between (1 .. 60) // 1–60 minute intervals (beaconing range)
    and unique_bytes_sent < 5 // Low variance in bytes sent (automated beacon)
| order by connection_count desc""",
        "severity": "Medium",
        "true_positive_threshold": 5
    }
]


def run_hunt(hypothesis: dict) -> dict:
    """Execute a single hunt hypothesis and return results."""
    result = logs_client.query_workspace(
        workspace_id=WORKSPACE_ID,
        query=hypothesis["query"],
        timespan=datetime.timedelta(days=7)
    )
    if result.status == LogsQueryStatus.SUCCESS:
        rows = [dict(zip(result.tables[0].columns, row))
                for row in result.tables[0].rows]
        return {
            "hypothesis_id": hypothesis["id"],
            "name": hypothesis["name"],
            "mitre": hypothesis["mitre"],
            "severity": hypothesis["severity"],
            "result_count": len(rows),
            "exceeds_threshold": len(rows) >= hypothesis["true_positive_threshold"],
            "results": rows[:10],  # Top 10 results for AI analysis
            "hunt_date": datetime.datetime.now(datetime.timezone.utc).isoformat()
        }
    return {"hypothesis_id": hypothesis["id"], "error": "Query failed", "result_count": 0}


def assess_hunt_findings(hunt_result: dict) -> str:
    """Use AI to assess whether hunt findings represent true threats."""
    if hunt_result.get("result_count", 0) == 0:
        return "No findings — hunt returned no results for this hypothesis."
    assess_prompt = f"""You are a senior threat analyst assessing threat hunt findings
for a defense contractor environment under CMMC Level 2.
HUNT HYPOTHESIS: {hunt_result['name']}
MITRE ATT&CK: {hunt_result['mitre']}
FINDING COUNT: {hunt_result['result_count']}
SEVERITY: {hunt_result['severity']}
SAMPLE FINDINGS:
{json.dumps(hunt_result.get('results', [])[:5], indent=2, default=str)}
ASSESS:
1. TRUE POSITIVE LIKELIHOOD: [High/Medium/Low] — specific reasoning based on the data
2. ALTERNATIVE BENIGN EXPLANATIONS: What legitimate activity could produce these results?
3. ADDITIONAL CONTEXT NEEDED: What additional data would confirm or deny a true threat?
4. RECOMMENDED ANALYST ACTIONS:
   If High/Medium likelihood: specific investigation steps (in order)
   If Low likelihood: what monitoring to maintain
5. ESCALATION: Escalate to ISSO immediately? [Yes/No — reason]
6. CMMC INCIDENT ASSESSMENT: Does this constitute a potential CMMC reportable incident?
[ANALYST REVIEW REQUIRED — AI assessment is a starting point, not a final determination]"""
    response = aoai_client.chat.completions.create(
        model=os.environ["AZURE_OPENAI_DEPLOYMENT"],
        messages=[
            {"role": "system", "content": "You are an experienced threat hunter for defense contractor environments. Be specific, be direct, and err on the side of caution for CMMC-scoped systems."},
            {"role": "user", "content": assess_prompt}
        ],
        temperature=0.1, max_tokens=1500
    )
    return response.choices[0].message.content


def run_weekly_hunt_cycle() -> list:
    """Run all hunt hypotheses and return assessed findings."""
    hunt_results = []
    for hypothesis in HUNT_HYPOTHESES:
        result = run_hunt(hypothesis)
        if result.get("result_count", 0) > 0:
            result["ai_assessment"] = assess_hunt_findings(result)
        hunt_results.append(result)
    return hunt_results
```
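The weekly cycle above assesses findings but does not decide which ones page the ISSO immediately. A minimal escalation gate, sketched against the result dictionaries `run_hunt` returns (delivery to Teams or email is deliberately left out):

```python
# Escalation gate: decide which hunt results need immediate ISSO notification
# versus inclusion in the weekly report. Field names match run_hunt()'s output;
# the function names here are illustrative, not part of the agent above.

def needs_immediate_escalation(hunt_result: dict) -> bool:
    """Critical-severity hypotheses that exceed their true-positive threshold
    are escalated immediately; everything else waits for analyst review."""
    return (
        hunt_result.get("severity") == "Critical"
        and hunt_result.get("exceeds_threshold", False)
    )

def partition_findings(hunt_results: list) -> tuple:
    """Split a hunt cycle's results into (escalate_now, weekly_report)."""
    escalate = [r for r in hunt_results if needs_immediate_escalation(r)]
    report = [r for r in hunt_results if not needs_immediate_escalation(r)]
    return escalate, report
```

Keeping the gate a pure function makes the escalation-path test in the Testing & Validation section straightforward to automate.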
Step 2: Build the Vulnerability Lifecycle Management Agent
Build the autonomous agent that manages vulnerability remediation from scan to verified closure.
```python
# vuln_lifecycle_agent.py
# Autonomous vulnerability lifecycle management
import datetime
import os

import requests
from openai import AzureOpenAI

TENABLE_API_KEY = os.environ["TENABLE_API_KEY"]
TENABLE_SECRET = os.environ["TENABLE_SECRET"]
# NOTE: the endpoints below follow the Tenable Vulnerability Management (cloud)
# workbench API for illustration. A Tenable.sc deployment exposes a different
# REST API; point TENABLE_BASE_URL at the Tenable.sc host and adapt the
# requests accordingly.
TENABLE_BASE_URL = os.environ.get("TENABLE_BASE_URL", "https://cloud.tenable.com")
aoai_client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_KEY"],
    api_version="2024-08-01-preview"
)

# Remediation SLAs (days to remediate by CVSS tier)
REMEDIATION_SLAS = {
    "critical": 15,   # CVSS 9.0-10.0
    "high": 30,       # CVSS 7.0-8.9
    "medium": 90,     # CVSS 4.0-6.9
    "low": 180        # CVSS 0.1-3.9
}


def get_severity_tier(cvss_score: float) -> str:
    if cvss_score >= 9.0:
        return "critical"
    if cvss_score >= 7.0:
        return "high"
    if cvss_score >= 4.0:
        return "medium"
    return "low"


def get_new_vulnerabilities(days_back: int = 7) -> list:
    """Retrieve newly discovered vulnerabilities."""
    headers = {
        "X-ApiKeys": f"accessKey={TENABLE_API_KEY};secretKey={TENABLE_SECRET}"
    }
    # Get vulnerabilities first found in the last N days
    cutoff = int((datetime.datetime.now() - datetime.timedelta(days=days_back)).timestamp())
    resp = requests.get(
        f"{TENABLE_BASE_URL}/workbenches/vulnerabilities",
        headers=headers,
        params={
            "filter.search_type": "and",
            "filter.0.filter": "first_found",
            "filter.0.quality": "gt",
            "filter.0.value": str(cutoff),
            "date_range": days_back
        }
    )
    resp.raise_for_status()
    return resp.json().get("vulnerabilities", [])


def triage_and_prioritize(vulnerabilities: list, kev_catalog: dict) -> list:
    """AI-assisted triage and prioritization of new vulnerabilities."""
    # Group by CVE for efficiency
    cve_groups = {}
    for vuln in vulnerabilities:
        cve = vuln.get("cve_id", f"NO-CVE-{vuln.get('plugin_id')}")
        if cve not in cve_groups:
            cve_groups[cve] = {
                "cve": cve,
                "plugin_name": vuln.get("plugin_name", ""),
                "cvss3_base_score": vuln.get("cvss3_base_score", 0),
                "affected_assets": [],
                "in_kev": cve in kev_catalog,
                "kev_due_date": kev_catalog.get(cve, {}).get("dueDate", None),
                "plugin_family": vuln.get("plugin_family", "")
            }
        cve_groups[cve]["affected_assets"].append(vuln.get("asset_hostname", "Unknown"))
    triaged = []
    for cve, data in cve_groups.items():
        cvss = data.get("cvss3_base_score", 0) or 0
        severity_tier = get_severity_tier(cvss)
        sla_days = REMEDIATION_SLAS[severity_tier]
        # KEV override — CISA BOD 22-01 requires federal agencies to remediate
        # KEV items within specified timeframes (typically 2 weeks for critical)
        if data["in_kev"]:
            sla_days = min(sla_days, 14)   # KEV gets max 14 days
            severity_tier = "critical"     # Treat all KEV as critical
        due_date = (datetime.date.today() + datetime.timedelta(days=sla_days)).isoformat()
        triaged.append({
            **data,
            "severity_tier": severity_tier,
            "sla_days": sla_days,
            "remediation_due_date": due_date,
            "affected_asset_count": len(set(data["affected_assets"])),
            "priority_score": (
                (10 if data["in_kev"] else 0) +
                cvss +
                len(set(data["affected_assets"])) * 0.5
            ),
            "assignment_group": determine_assignment_group(data)
        })
    return sorted(triaged, key=lambda x: -x["priority_score"])


def determine_assignment_group(vuln_data: dict) -> str:
    """Determine which team should remediate based on vulnerability type."""
    plugin_family = vuln_data.get("plugin_family", "").lower()
    plugin_name = vuln_data.get("plugin_name", "").lower()
    if "windows" in plugin_family or "microsoft" in plugin_name:
        return "Windows Sysadmin Team"
    elif "web" in plugin_family or "apache" in plugin_name or "nginx" in plugin_name:
        return "Web Services Team"
    elif "database" in plugin_family or "sql" in plugin_name:
        return "DBA Team"
    elif "network" in plugin_family or "cisco" in plugin_name or "palo" in plugin_name:
        return "Network Team"
    else:
        return "Security Team (Triage Required)"


def generate_remediation_assignment(vuln: dict) -> str:
    """Generate a remediation assignment notification."""
    assign_prompt = f"""Generate a remediation assignment notification for the following vulnerability.
The recipient is a system administrator who must remediate this within the SLA.
VULNERABILITY: {vuln['plugin_name']}
CVE: {vuln['cve']}
CVSS Score: {vuln['cvss3_base_score']}
Severity: {vuln['severity_tier'].upper()}
In CISA KEV Catalog: {'YES — BOD 22-01 compliance required' if vuln['in_kev'] else 'No'}
Affected Assets ({vuln['affected_asset_count']} systems): {', '.join(list(set(vuln['affected_assets']))[:10])}
Remediation Due Date: {vuln['remediation_due_date']} ({vuln['sla_days']} day SLA)
Generate a clear, specific assignment message that includes:
1. What the vulnerability is (plain language)
2. Why it matters (risk if not remediated)
3. Specific remediation steps (patch version, configuration change, or workaround)
4. Verification step (how to confirm remediation is successful)
5. SLA deadline and escalation contact if the SLA cannot be met
Tone: Professional, direct. The recipient needs to act — do not bury the action in caveats.
Length: Under 300 words.
[SECURITY OPERATIONS — {datetime.date.today().isoformat()}]"""
    response = aoai_client.chat.completions.create(
        model=os.environ["AZURE_OPENAI_DEPLOYMENT"],
        messages=[{"role": "user", "content": assign_prompt}],
        temperature=0.1, max_tokens=600
    )
    return response.choices[0].message.content


def verify_remediation(asset_hostname: str, cve_id: str) -> bool:
    """Verify that a vulnerability has been remediated.
    A full implementation would launch a targeted rescan via the scan-creation
    API; this simplified version checks whether the CVE still appears in the
    workbench for the asset."""
    headers = {
        "X-ApiKeys": f"accessKey={TENABLE_API_KEY};secretKey={TENABLE_SECRET}"
    }
    resp = requests.get(
        f"{TENABLE_BASE_URL}/workbenches/assets",
        headers=headers,
        params={"filter.0.filter": "hostname", "filter.0.quality": "eq",
                "filter.0.value": asset_hostname}
    )
    if resp.status_code != 200:
        return False
    assets = resp.json().get("assets", [])
    if not assets:
        return False
    asset_id = assets[0].get("id")
    vuln_resp = requests.get(
        f"{TENABLE_BASE_URL}/workbenches/assets/{asset_id}/vulnerabilities",
        headers=headers
    )
    existing_cves = [v.get("cve_id") for v in vuln_resp.json().get("vulnerabilities", [])]
    return cve_id not in existing_cves  # True = remediated
```
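`triage_and_prioritize` expects a `kev_catalog` dict keyed by CVE ID, but the guide does not show where that comes from. A sketch that downloads CISA's published KEV JSON feed and indexes it (the helper names are illustrative; egress to the feed URL must be permitted from the boundary):

```python
# Fetch and index the CISA Known Exploited Vulnerabilities catalog.
# The feed URL is CISA's public JSON location; function names are illustrative.
import requests

KEV_FEED_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
                "known_exploited_vulnerabilities.json")

def index_kev(feed: dict) -> dict:
    """Index the KEV feed by CVE ID so 'cve in kev_catalog' lookups are O(1)."""
    return {v["cveID"]: v for v in feed.get("vulnerabilities", []) if "cveID" in v}

def get_kev_catalog() -> dict:
    """Download and index the current KEV catalog."""
    resp = requests.get(KEV_FEED_URL, timeout=30)
    resp.raise_for_status()
    return index_kev(resp.json())
```

The indexed dict supplies both the `in_kev` membership test and the `dueDate` lookup used in triage.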
Step 3: Configure the Vulnerability Lifecycle Logic App
Build the Logic App that orchestrates the complete vulnerability lifecycle autonomously. The full trigger-by-trigger flow is specified under "Azure Logic App: Vulnerability Lifecycle Manager" at the end of this guide.
## Custom AI Components
### Hunt Report Generator
**Type:** Prompt
Compiles individual hunt hypothesis results into a structured monthly threat hunt report for ISSO and ISSM review.
**Implementation:**
SYSTEM PROMPT: Compile the following individual threat hunt results into a monthly Threat Hunt Report for ISSO and leadership review. The report should demonstrate CMMC continuous monitoring capability (CA.L2-3.12.3) and support DCMA/C3PAO assessment evidence.
HUNT RESULTS THIS PERIOD: {hunt_results_json}
ORGANIZATION: {org_name} HUNT PERIOD: {period} ENVIRONMENT: CMMC Level {cmmc_level} boundary
Generate the report following the THREAT HUNT REPORT template in the appendix: executive summary, MITRE ATT&CK coverage, significant findings, metrics, and next-period hunting priorities.
Testing & Validation
- Hunt query accuracy test: Run each of the 5 hunt queries against a 90-day historical dataset where actual incidents are known. Verify each query detects the known incidents and the true positive likelihood assessment correctly differentiates real threats from benign activity.
- Hunt escalation test: Inject a synthetic event matching the LSASS access hunt pattern on a test endpoint. Verify: (a) the hunt detects it within 24 hours, (b) the AI assessment flags it as high-likelihood true positive, (c) the ISSO receives a Teams notification within 1 hour of assessment.
- Vulnerability triage accuracy test: Run the triage pipeline against a known set of 20 CVEs (10 KEV, 10 non-KEV at various CVSS scores). Verify all 10 KEV items receive the 14-day SLA and critical priority. Verify CVSS scores are correctly mapped to severity tiers.
- SLA timer accuracy test: Create test vulnerability tickets with due dates 5, 3, 1, and 0 days in the future. Verify the daily SLA monitor correctly identifies each tier and sends alerts to the appropriate audience.
- Remediation verification test: Apply a patch for a known CVE on a test system. Trigger the verification workflow and confirm: Tenable rescan launches, CVE is no longer detected, ticket closes with verified status, POA&M updates.
- False remediation test: Mark a ticket as "complete" without actually patching the vulnerability. Verify the verification rescan detects the CVE is still present, reopens the ticket, and notifies the assignee.
- POA&M sync accuracy test: After 10 closed vulnerability tickets, run the monthly POA&M sync and verify all 10 appear in the POA&M update with correct closure dates and verification evidence.
- CMMC evidence completeness test: Have the ISSO review the audit trail in Azure SQL for a completed vulnerability lifecycle (creation → assignment → remediation → verification → closure). Verify it contains sufficient detail to demonstrate CA.L2-3.12.3 and RA.L2-3.11.3 for a C3PAO assessment.
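The triage accuracy test can be partially automated. A self-contained check of the two invariants that matter most (CVSS tier boundaries and the BOD 22-01 KEV override), restating the agent's mapping inline so the check stands alone:

```python
# Self-contained triage invariant checks (mirrors vuln_lifecycle_agent.py logic;
# the tier map and helper are restated here so this file runs on its own).

REMEDIATION_SLAS = {"critical": 15, "high": 30, "medium": 90, "low": 180}

def get_severity_tier(cvss_score: float) -> str:
    if cvss_score >= 9.0:
        return "critical"
    if cvss_score >= 7.0:
        return "high"
    if cvss_score >= 4.0:
        return "medium"
    return "low"

def effective_sla(cvss_score: float, in_kev: bool) -> tuple:
    """Return (tier, sla_days) after applying the KEV override."""
    tier = get_severity_tier(cvss_score)
    sla = REMEDIATION_SLAS[tier]
    if in_kev:
        return "critical", min(sla, 14)
    return tier, sla

# Tier boundary checks
assert effective_sla(9.0, False) == ("critical", 15)
assert effective_sla(8.9, False) == ("high", 30)
assert effective_sla(4.0, False) == ("medium", 90)
assert effective_sla(3.9, False) == ("low", 180)
# Every KEV item gets critical tier and a 14-day (or tighter) SLA
assert all(effective_sla(s, True) == ("critical", 14) for s in (2.0, 6.5, 9.8))
```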
Client Handoff
Handoff Meeting Agenda (90 minutes — ISSO + ISSM + System Administrators + SOC Lead + IT Lead)
1. Threat hunting framework review (25 min)
- Walk through each hunting hypothesis and its MITRE ATT&CK mapping
- Demonstrate a live hunt execution on recent data
- Review the AI assessment quality for 2 real findings
- Confirm escalation path and ISSO notification workflow
2. Vulnerability lifecycle demonstration (25 min)
- Walk through the complete lifecycle: new CVE detection → triage → assignment → verification → closure
- Demonstrate the KEV handling and 14-day SLA enforcement
- Show the SLA escalation workflow
- Walk through POA&M update process
3. CMMC evidence review (20 min)
- Show the Azure SQL audit trail and how it supports CMMC evidence requirements
- Walk through the monthly hunt report format and CMMC mapping
- Confirm the evidence retention schedule (3 years minimum for CMMC)
4. Roles and responsibilities (10 min)
- Confirm ISSO owns the hunt hypothesis library and reviews all findings
- Confirm system administrators are the remediation assignees for their platforms
- Confirm ISSM escalation path for missed SLAs and CMMC incidents
5. Documentation handoff
- Hunt hypothesis library (with ISSO review and approval signature)
- Remediation SLA matrix (ISSM-signed)
- Logic App vulnerability lifecycle flow documentation
- CMMC evidence archive structure
- Monthly hunt report template
- Vulnerability lifecycle audit trail guide for C3PAO assessment
Maintenance
Daily Tasks (Automated)
- Vulnerability SLA monitor runs at 07:00 ET — RED alerts for expiring SLAs
- Hunt queries run continuously via Sentinel scheduled analytics
- Remediation verification runs within 4 hours of ticket "Complete" status
Weekly Tasks
- Monday: Full vulnerability triage cycle for new CVEs
- ISSO reviews all hunt findings from the week and dispositions each
- Review any failed remediation verifications
Monthly Tasks
- Generate monthly threat hunt report for ISSM review
- Run monthly POA&M sync and update eMASS
- Review hunt hypothesis library — add new hypotheses based on current CISA/NSA DIB advisories
- Azure OpenAI and Sentinel consumption review
Quarterly Tasks
- Full hunt hypothesis review with ISSO — retire hypotheses that consistently produce zero results or high false positive rates; add new ones for emerging TTPs
- Remediation SLA review with ISSM — adjust if contract or regulatory requirements have changed
- CMMC evidence package review — confirm all required evidence is current
Annual Tasks
- Full CMMC evidence archive compilation for assessment
- Hunt hypothesis library major refresh — align with current MITRE ATT&CK version update
- Tenable scan policy review — ensure scan credentials and policies cover all in-scope assets
Alternatives
Palo Alto Cortex XSIAM (AI-Native SOC Platform)
Cortex XSIAM provides an AI-native SOC platform with built-in threat hunting, automated triage, and vulnerability management. FedRAMP High authorized. Best for: Large defense contractors or agencies wanting a single-vendor AI SOC platform. Tradeoffs: Enterprise pricing ($500K+/year); less customizable hunting hypotheses than the custom Sentinel approach.
CrowdStrike Falcon (Endpoint + Hunting + Vulnerability)
CrowdStrike Falcon provides EDR, managed threat hunting (Falcon OverWatch), and Spotlight (vulnerability management) in a single platform. FedRAMP High authorized. Best for: Organizations preferring a single EDR/hunting/vuln platform from one vendor. Tradeoffs: Higher per-endpoint cost than Microsoft Defender for Endpoint when already on M365 E5; hunting is managed by CrowdStrike analysts rather than autonomous.
Manual Hunting + Automated Vuln Lifecycle Only (Conservative)
For organizations not yet ready for autonomous threat hunting, deploy only the vulnerability lifecycle automation (scan → triage → assign → verify → close) and conduct manual monthly threat hunting sessions led by the ISSO using the KQL query library. Provides most of the CMMC compliance value at lower operational complexity. Best for: Smaller contractors (under 200 endpoints) where a full autonomous hunting agent is more infrastructure than the threat volume warrants.
Azure Logic App: Vulnerability Lifecycle Manager (Azure Government)
TRIGGER 1: WEEKLY TRIAGE (Scheduled — Monday 07:00 ET)
- Get new vulnerabilities from Tenable.sc (last 7 days)
- Run AI triage and prioritization
- For each triaged vulnerability:
  - IF severity = "critical" OR in_kev = true:
    - Create URGENT ticket in POA&M system (SharePoint or JIRA)
    - Send Teams Adaptive Card to assignment_group lead + ISSO:
      - Title: "🔴 CRITICAL VULNERABILITY — {sla_days}-day SLA"
      - Body: AI-generated assignment message
      - Action buttons: [Accept Assignment] [Request Extension] [Escalate]
    - Set SLA timer (15 days for critical; 14 days for KEV)
  - IF severity = "high":
    - Create HIGH ticket in POA&M system
    - Send email to assignment_group + ISSO
    - Set SLA timer (30 days)
  - IF severity = "medium":
    - Create MEDIUM ticket in POA&M system
    - Send weekly digest to assignment_group (batch, not individual alerts)
    - Set SLA timer (90 days)

TRIGGER 2: DAILY SLA MONITOR (Scheduled — 07:00 ET daily)
- Query POA&M system for all open tickets and calculate days remaining to SLA
- For tickets expiring in ≤ 5 days: send RED alert to assignee + team lead + ISSO
- For tickets past SLA:
  - Escalate to ISSM
  - Flag for POA&M update (a missed SLA is a CMMC finding)
  - Generate POA&M extension justification template for ISSM approval

TRIGGER 3: REMEDIATION VERIFICATION (Fires when the assignee marks a ticket "Complete")
- Run Tenable targeted rescan against the affected asset(s)
- Wait for scan completion (max 4 hours)
- Check whether the CVE still appears in Tenable results:
  - IF not found (remediated):
    - Update POA&M ticket: "CLOSED — Verified Remediated"
    - Log closure date and verification method; update CMMC evidence archive
  - IF still found (not remediated):
    - Reopen ticket with comment: "Remediation verification FAILED — CVE still detected"
    - Notify assignee + team lead
    - Keep the original SLA due date (no extension for failed remediation)

TRIGGER 4: MONTHLY POA&M SYNC (Scheduled — last business day of month)
- Export all closed vulnerabilities from the month
- Generate POA&M update summary for ISSO/ISSM review
- Update eMASS or SharePoint POA&M with closure dates and verification evidence
- Generate monthly vulnerability metrics report for leadership

AUDIT TRAIL: All lifecycle events (creation, assignment, SLA milestones, verification, closure) are logged to Azure SQL with timestamps for CMMC assessment evidence.
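The daily SLA monitor's alert tiers (TRIGGER 2) reduce to a small pure function. A sketch using the thresholds above (the function name is illustrative):

```python
# Classify an open ticket for the daily SLA monitor, per TRIGGER 2 thresholds.
import datetime

def sla_status(due_date: datetime.date, today: datetime.date) -> str:
    """'breach' -> past SLA (escalate to ISSM, flag for POA&M)
    'red'    -> expiring in 5 days or fewer (alert assignee + lead + ISSO)
    'ok'     -> inside SLA, no alert"""
    days_remaining = (due_date - today).days
    if days_remaining < 0:
        return "breach"
    if days_remaining <= 5:
        return "red"
    return "ok"
```

Implementing the classification as a pure function lets the SLA timer accuracy test (5/3/1/0 days) run without touching the POA&M system.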
---
## THREAT HUNT REPORT — {period}
### EXECUTIVE SUMMARY
- Hypotheses executed: [N]
- Findings requiring investigation: [N]
- True positives confirmed: [N]
- Overall threat posture: [Normal / Elevated / High]
### MITRE ATT&CK COVERAGE
Table showing which tactics and techniques were hunted this period.
Coverage gaps to address next period.
### SIGNIFICANT FINDINGS
For each hypothesis with findings:
- Hypothesis name and MITRE mapping
- Finding description
- True positive likelihood
- Analyst actions taken or recommended
- Disposition (Closed-FP / Active-Investigating / Escalated)
### METRICS
| Metric | Value |
|--------|-------|
| Hypotheses run | |
| Total findings | |
| True positives | |
| False positive rate | |
| Mean time to assess | |
### NEXT PERIOD HUNTING PRIORITIES
Based on current NSA/CISA DIB advisories and this period's findings.
[DRAFT — REQUIRES ISSO REVIEW AND ISSM APPROVAL]
[Classification: CUI]