What You'll Learn
- Explain how threat intelligence transforms alert triage from pattern matching into informed decision-making
- Apply the intel-driven triage matrix to determine correct priority action based on severity and intel match status
- Distinguish between automated enrichment, manual enrichment, and hybrid approaches and select the right one for a given SOC scale
- Design intel-driven playbooks that formalize response procedures when IOC matches occur
- Evaluate intel reliability by assessing source confidence, IOC freshness, and potential for false intelligence
- Execute a complete intel-driven triage workflow from initial alert through enrichment, matrix application, and escalation
How Intel Changes the Triage Game
In Module 4, you learned to triage alerts using context — source IP reputation, time of day, correlation with adjacent events, and baseline deviation. You learned to distinguish true positives from false positives, to prioritize based on severity and business impact, and to document your reasoning for the next analyst. That was foundational. But it was incomplete.
Now we add the most powerful context source available to a SOC analyst: threat intelligence.
Consider two alerts, identical in every technical respect:
Alert A: Wazuh rule 5551 — SSH brute-force attempt (level 10). Source IP: 203.0.113.47. A random internet IP with no reputation data. Your firewall logs show it scanning your entire /24 subnet. This is spray-and-pray — an automated scanner hitting millions of IPs hoping for weak credentials.
Alert B: Wazuh rule 5551 — SSH brute-force attempt (level 10). Source IP: 198.51.100.88. Your MISP instance flags this IP: it appeared in a LockBit ransomware C2 feed published 36 hours ago with high confidence. The IP is tagged ransomware:lockbit, tlp:amber, and linked to an event describing an active campaign targeting your sector.
Same rule. Same severity level. Same alert text. Completely different urgency.
Alert A is routine — document it, ensure the account is not compromised, verify fail2ban is active, move on. Alert B is a potential precursor to a ransomware deployment against your organization by a known threat group. It demands immediate escalation, host isolation review, and coordination with your IR team.
The difference is not in the alert. The difference is in the intelligence context surrounding the alert.
This is what intel-driven triage means: using external and internal threat intelligence to adjust your triage priority, your investigation depth, and your response actions in real time. Without intel, you are triaging alerts in a vacuum — reacting to what your tools tell you without understanding what the adversary is doing. With intel, you are making informed decisions based on the current threat landscape.
A low-severity DNS query alert becomes critical when the domain matches a known APT campaign. A medium-severity file creation event becomes urgent when the hash appears in a ransomware feed published that morning. A high-severity brute-force alert becomes routine when you confirm the source is a well-known research scanner with no malicious history.
Intel does not replace your triage skills — it amplifies them. Everything you learned in Module 4 still applies. Intel adds a new dimension to every decision.
The Intel-Driven Triage Matrix
To make intel-driven triage systematic rather than ad hoc, we use a simple 2×2 framework that maps every alert into one of four quadrants based on two variables: the alert's severity level and whether there is a confirmed intelligence match for the IOCs in the alert.
Quadrant 1: LOW Severity + NO Intel Match → ROUTINE
Standard checklist triage. The alert is low severity and no threat intelligence source — internal MISP, commercial feeds, OSINT — recognizes the IOCs. This is your most common quadrant. The vast majority of alerts in any SOC fall here: informational events, known-benign activity, scanner noise, and low-confidence detections that match no known campaign.
Action: Follow your standard triage checklist. Validate the alert, check for obvious anomalies, and close if benign. Typical time: 1-3 minutes.
Example: Wazuh level 3 alert — DNS query to a newly registered domain. No MISP match. No VirusTotal detections. The domain resolves to a Cloudflare IP shared by thousands of sites. Likely a legitimate new website. Close with a note.
Quadrant 2: HIGH Severity + NO Intel Match → INVESTIGATE
The alert is serious — your SIEM rules triggered at a high level — but no intelligence source recognizes the IOCs. This does not mean the alert is a false positive. It means you might be facing a novel threat that has not yet been reported by the community, or it could be a false positive triggered by unusual but legitimate activity.
Action: Full investigation. Correlate with adjacent events, check endpoint telemetry, perform manual OSINT lookups. Do not dismiss because intel is silent — the absence of intelligence is not evidence of safety.
Example: Wazuh level 12 alert — PowerShell execution with a Base64-encoded command on a server that never runs PowerShell. No hash match in MISP. No VirusTotal hits. This could be a zero-day exploit using fresh infrastructure, or it could be a sysadmin running an encoded script for the first time. You cannot tell without investigating.
Quadrant 3: LOW Severity + INTEL Match → ELEVATE
This is the quadrant that separates intel-driven analysts from severity-driven analysts. The alert itself is minor — a low-level informational event that most analysts would dismiss in seconds. But an IOC in the alert matches a known threat campaign.
Action: Immediately elevate priority. Treat as HIGH severity regardless of the original alert level. The intel context overrides the severity score. Investigate fully, check for related activity, and prepare for potential escalation.
Example: Wazuh level 5 alert — DNS query to update-service.example.com from WIN-WORK-05. By itself, unremarkable. But automated enrichment tags the domain: MISP match → Emotet campaign → confidence: high → first seen 48 hours ago. That low-severity DNS query is now evidence that a workstation in your network is communicating with active Emotet infrastructure. This alert just jumped from "close in 30 seconds" to "investigate immediately."
Key principle: A low-severity alert with a confirmed intel match is MORE urgent than a high-severity alert with no intel context. The severity score reflects the rule author's generic assessment. The intel match reflects what is actually happening in the threat landscape right now. Current, confirmed intelligence always carries more weight than a static severity number.
Quadrant 4: HIGH Severity + INTEL Match → ESCALATE NOW
Both signals agree: the alert is serious and the IOCs are confirmed malicious by threat intelligence. This is the highest-priority quadrant. You are likely looking at an active campaign targeting your environment using known adversary infrastructure.
Action: Immediate escalation to L2 or incident response. Do not spend additional time on solo investigation — the combination of high severity and confirmed intel match has already provided sufficient evidence to justify escalation. Begin containment actions in parallel.
Example: Wazuh level 14 alert — outbound C2 beaconing to 185.220.101.42 from WIN-SERVER-01 every 4 hours. MISP tags the IP: Cobalt Strike infrastructure, linked to APT29, high confidence, seen in the wild this week. This is not a question — it is an incident. Escalate, isolate, and begin response.
The Practical Power of the Matrix
The matrix works because it forces you to consider two independent signals rather than relying on just one. Many SOCs triage exclusively on severity — high-severity alerts get attention, low-severity alerts get ignored. This approach systematically misses the ELEVATE quadrant: low-severity alerts that are actually critical because they match active campaigns. These are the alerts that adversaries rely on you to ignore.
Conversely, the matrix prevents overreaction to high-severity alerts with no intel backing. A rule firing at level 12 is concerning, but if extensive investigation and intel lookups all return clean, the INVESTIGATE quadrant tells you to keep digging rather than immediately escalating — it might be a false positive from an overly aggressive rule.
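Because the matrix is just two independent signals mapped to four actions, it is simple enough to encode directly in an enrichment pipeline. The sketch below assumes Wazuh's 0-15 severity scale with level 10+ treated as "high"; the threshold is an illustrative choice, not a standard.

```python
def triage_action(severity: int, intel_match: bool, high_threshold: int = 10) -> str:
    """Map an alert into one of the four triage-matrix quadrants.

    severity       -- the SIEM rule level (Wazuh: 0-15)
    intel_match    -- True if any intel source confirmed an IOC in the alert
    high_threshold -- assumed cutoff for "high" severity (illustrative)
    """
    high = severity >= high_threshold
    if high and intel_match:
        return "ESCALATE NOW"   # Quadrant 4: both signals agree
    if intel_match:
        return "ELEVATE"        # Quadrant 3: intel overrides low severity
    if high:
        return "INVESTIGATE"    # Quadrant 2: serious but unrecognized
    return "ROUTINE"            # Quadrant 1: standard checklist triage
```

Note that the function only suggests a priority; per the hybrid-enrichment principle later in this lesson, the final triage verdict stays with the analyst.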
The "Is This IOC Known-Bad?" Workflow
When you encounter an IOC in an alert — an IP address, a domain name, a file hash — you need a systematic process for checking whether threat intelligence has anything to say about it. Here is the decision workflow:
- Step 1: Extract the IOC from the alert (IP, domain, hash, URL, email).
- Step 2: Search MISP (internal TI), via manual search or the auto-enrichment result. MATCH → Step 3. NO MATCH → Step 4.
- Step 3: Review the matching event. What event? What campaign? Confidence? TLP? Freshness? Then → Step 6.
- Step 4: Check external sources (VirusTotal, AbuseIPDB, Shodan, URLhaus, MalwareBazaar). MATCH → Step 6. NO MATCH → Step 5.
- Step 5: No match is NOT "clean" — just "not yet known." Don't dismiss. Note the gap and continue triage normally, then → Step 6.
- Step 6: Update triage priority using the intel-driven triage matrix.
Step 1 is mechanical — copy the indicator from the alert. But be precise. Extract the full domain (not just the subdomain), the complete hash (SHA256, not MD5), and the exact IP (check if it is the source or destination — they have very different implications).
Step 2 is where your internal threat intelligence platform does the heavy lifting. If your SOC has integrated MISP with your SIEM (or you run the lookup manually), this is the first place to check. Internal intel is more relevant than external because it includes feeds curated for your sector, events shared by your ISAC partners, and IOCs from your own previous investigations.
Step 3 applies when MISP returns a hit. Do not stop at "match found." Ask the critical follow-up questions: What campaign or threat actor is this linked to? What is the confidence level of the match? What is the TLP (Traffic Light Protocol) marking, and does it restrict how you can share the finding? How old is the event — was this IOC reported yesterday or six months ago?
Step 4 covers the case when MISP has nothing. This does not mean the IOC is clean — it means your internal platform does not have it. Check external sources: VirusTotal for file reputation and passive DNS, AbuseIPDB for IP abuse reports, Shodan for exposed services, URLhaus for malicious URLs, and MalwareBazaar for malware samples.
Step 5 is the most important mental discipline in intel-driven triage: no match does not mean clean. An IOC that appears in zero threat feeds could be brand-new infrastructure spun up an hour ago. It could be a targeted attack using custom tools that no vendor has seen yet. It could be perfectly legitimate. The absence of intelligence tells you nothing definitive. Treat "no match" as "insufficient data" — not as "confirmed safe."
Step 6 takes whatever you learned and maps it to the triage matrix. Match + high severity = escalate now. Match + low severity = elevate. No match + high severity = investigate. No match + low severity = routine.
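The six steps reduce to a single lookup routine. In the sketch below, `search_misp` and `search_external` are placeholder callables standing in for whatever integrations your SOC actually has; each returns a context dict on a match or `None` otherwise.

```python
def known_bad_check(ioc, search_misp, search_external):
    """Run the 'Is this IOC known-bad?' workflow for one indicator.

    Returns a (verdict, context) pair. Crucially, the fall-through
    verdict is "insufficient_data", never "clean" (Step 5).
    """
    misp_hit = search_misp(ioc)              # Step 2: internal TI first
    if misp_hit:
        # Step 3: carry campaign, confidence, TLP, freshness into triage
        return ("intel_match", misp_hit)
    external_hit = search_external(ioc)      # Step 4: external sources
    if external_hit:
        return ("intel_match", external_hit)
    # Step 5: absence of intelligence is not evidence of safety
    return ("insufficient_data", None)
```

The (verdict, context) result then feeds Step 6: combine it with the alert's severity and apply the triage matrix.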
Automated enrichment is the single highest-ROI improvement most SOCs can make. Manually checking MISP, VirusTotal, and AbuseIPDB for every alert is feasible at 10-15 alerts per hour. At 50+ alerts per hour — typical for a mid-size SOC — manual lookups become the bottleneck. Automating Step 2 (and optionally Step 4) so that every alert arrives pre-enriched with intel tags transforms analyst productivity overnight. The analyst opens the alert and immediately sees "MISP: Emotet campaign, confidence: high" instead of spending three minutes running the lookup themselves.
Automated vs Manual Enrichment
The workflow above works whether you perform each step manually or whether your infrastructure automates them. But the difference in scale and speed between the two approaches is dramatic.
Manual Enrichment
The analyst receives an alert, opens MISP in a browser tab, searches for the IOC, reads the event details, copies relevant context (campaign name, confidence, related IOCs), and adds that context to their triage notes. For each additional source (VirusTotal, AbuseIPDB), they open another tab, search, evaluate, and copy.
Advantages: The analyst reads the full event context, notices nuances that automated tagging might miss, and can make judgment calls about relevance and confidence.
Limitations: Slow. At 3-5 minutes per IOC lookup, enrichment alone caps an analyst at roughly 10-15 alerts per hour, and far fewer when an alert carries multiple IOCs. This does not scale.
Automated Enrichment
The SIEM automatically queries the MISP API (and optionally external APIs) every time an alert fires. The response is parsed and relevant tags are added to the alert before any analyst sees it. The analyst opens the alert and immediately sees: misp:match, campaign:emotet, confidence:high, tlp:amber.
Advantages: Instant. Every alert is enriched within seconds of creation. Analysts spend zero time on lookups and 100% of their time on analysis and decisions. Scales to hundreds of alerts per hour.
Limitations: Automated tagging lacks nuance. The API returns "match" or "no match" — it does not tell the analyst that the MISP event was contributed by a less reliable source, or that the IOC was marked as "suspicious" rather than "confirmed malicious." Automated enrichment creates a risk of over-reliance if analysts treat tags as verdicts rather than context.
The Hybrid Approach
The best SOCs combine both: automate the lookup, let the human make the decision. Every alert gets automatic MISP enrichment tags. The analyst sees the tags, evaluates the context, and makes the triage call. Never automate the triage verdict — only the enrichment that informs it.
MISP REST API for Automated Lookup:
```bash
# Search MISP for an IP address
curl -s \
  -H "Authorization: YOUR_MISP_API_KEY" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -d '{"value": "198.51.100.88"}' \
  https://misp.example.com/attributes/restSearch
```
The response includes matching attributes, parent event details, tags, confidence levels, and related IOCs — everything an analyst needs pre-packaged for triage.
Wazuh + MISP Integration Concept:
Wazuh supports CDB (Constant Database) lists that can be populated from MISP exports. When a log field matches a CDB list entry, Wazuh can fire a high-priority rule or add a tag. For real-time enrichment, Wazuh's active response framework can call a script that queries the MISP API and annotates the alert before it reaches the dashboard.
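As a sketch of what such an annotation script does with MISP's reply: the function below parses a `/attributes/restSearch` response into alert tags. The layout assumed here (`response.Attribute`, each attribute carrying optional `Tag` entries with a `name`) matches MISP's attribute search output, but verify the exact keys against your MISP version before relying on it.

```python
def tags_from_misp_response(body: dict) -> list:
    """Turn a /attributes/restSearch reply into alert enrichment tags."""
    attributes = body.get("response", {}).get("Attribute", [])
    if not attributes:
        return []  # no match: "not yet known", not "clean"
    tags = {"misp:match"}
    for attr in attributes:
        for tag in attr.get("Tag", []):   # e.g. ransomware:lockbit, tlp:amber
            tags.add(tag["name"])
    return sorted(tags)
```

The returned tags are exactly what the analyst would see on the enriched alert; the empty-list case deliberately carries no "clean" tag, so downstream logic cannot mistake silence for safety.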
| Aspect | Manual Enrichment | Automated Enrichment | Hybrid Approach |
|---|---|---|---|
| Speed | 3-5 min per IOC | < 1 second per IOC | < 1 second lookup + analyst review |
| Accuracy | High (human reads full context) | Medium (tag-based, no nuance) | High (automated data, human judgment) |
| Scalability | 10-15 alerts/hour | 100+ alerts/hour | 100+ alerts/hour |
| Cost | High analyst time | API license/integration effort | Integration effort + moderate analyst time |
| Risk | Analyst fatigue at volume | Over-reliance on automated verdicts | Minimal (best of both) |
Never automate the triage verdict — only the enrichment. An automated tag that says "MISP match: Emotet" is valuable context. An automated rule that says "MISP match found → auto-escalate to IR" is dangerous. Intel can be stale, feeds can have false positives, and context that only a human can assess (is this host supposed to connect to that IP? Is this user's behavior normal?) gets bypassed entirely. Automate the data gathering. Keep the human in the decision loop.
Intel-Driven Playbooks
Once you have established your triage matrix and enrichment pipeline, the next step is formalizing your response into playbooks — documented, repeatable procedures that specify exactly what an analyst should do when a particular type of intel match occurs.
Playbooks eliminate guesswork. Instead of each analyst improvising a response when they see a ransomware IOC match, every analyst follows the same tested procedure. This ensures consistency, reduces response time, and creates accountability.
Playbook Template
Every intel-driven playbook follows a common structure:
PLAYBOOK: [Name]
TRIGGER: [What conditions activate this playbook]
ENRICHMENT: [What additional lookups to perform]
TRIAGE: [How to classify using the matrix]
RESPONSE: [Specific containment/investigation steps]
ESCALATION: [When and to whom to escalate]
DOCUMENTATION: [What to record and where]
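If playbooks are stored as structured data rather than free text, the template above doubles as a machine-checkable schema: a CI job or review script can flag any playbook missing a section before it reaches the SOC floor. A minimal sketch (the field names simply mirror the template sections; how you store playbooks is up to you):

```python
REQUIRED_SECTIONS = ("trigger", "enrichment", "triage",
                     "response", "escalation", "documentation")

def validate_playbook(playbook: dict) -> list:
    """Return the template sections a playbook is missing or left empty."""
    return [s for s in REQUIRED_SECTIONS if not playbook.get(s)]
```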
Example: Known Ransomware C2 Playbook
Trigger: MISP auto-enrichment returns a match with any of the following tags: ransomware:*, cobalt-strike, C2:confirmed, OR the IOC is found in a ransomware-specific feed (e.g., abuse.ch Feodo Tracker) with confidence ≥ 70.
Enrichment:
- Confirm the MISP event is current (last updated within 30 days)
- Check MISP event confidence level and source reliability
- Identify all related IOCs in the event (other domains, IPs, hashes)
- Cross-reference the affected host in Wazuh for additional suspicious activity in the past 48 hours
Triage: ESCALATE NOW (any severity + ransomware intel = maximum urgency)
Response:
- Isolate the affected host immediately (network quarantine via EDR or switch port disable)
- Preserve volatile forensic data — collect running processes, network connections, and memory dump via Velociraptor before full isolation
- Hunt for all related IOCs from the MISP event across the full endpoint fleet (Velociraptor hunt) and SIEM logs (Wazuh search for past 30 days)
- Block all IOCs from the MISP event at the perimeter (firewall, DNS sinkhole, web proxy)
Escalation: Immediate L2 / IR team notification. If more than one host is affected, invoke the full incident response plan. Notify management within 1 hour per organizational policy.
Documentation: Create incident ticket with: enriched alert screenshot, MISP event ID and link, affected hosts, timeline of analyst actions, IOCs blocked, hunt results.
Example: Commodity Malware Playbook
Trigger: MISP match returns a tag associated with commodity malware (e.g., adware, miner, infostealer:generic) with confidence ≥ 50, OR VirusTotal shows > 10 vendor detections with a common malware family name.
Enrichment:
- Verify the MISP event is not stale (check last-updated date)
- Check if the malware family has known persistence mechanisms
- Assess host criticality — is this a workstation or a domain controller?
Triage: ELEVATE (treat as medium priority even if alert severity is low)
Response:
- Verify on the endpoint — use Velociraptor to check for the file hash, persistence mechanisms, and active processes
- Clean if confirmed — remove malware artifacts, persistence entries, and associated scheduled tasks
- Monitor the host for 24 hours — watch for re-infection, lateral movement, or additional C2 activity
- Update local MISP instance with any new IOCs discovered during cleanup
Escalation: Escalate to L2 only if the host is a critical server, if more than 3 hosts are affected, or if the malware shows lateral movement capability.
Documentation: Update ticket with: verification results, cleanup steps performed, monitoring period results.
Playbook Comparison
| Attribute | Ransomware C2 Playbook | Commodity Malware Playbook |
|---|---|---|
| Trigger | Ransomware / C2 tag, confidence ≥ 70 | Commodity malware tag, confidence ≥ 50 |
| Matrix Quadrant | ESCALATE NOW | ELEVATE |
| Urgency | Critical — minutes matter | Medium — hours acceptable |
| First Action | Isolate host immediately | Verify on endpoint |
| Escalation | Always (immediate IR team) | Conditional (only if critical asset or spread) |
| Containment | Network isolation before investigation | Investigation before removal |
| Timeline Target | < 15 minutes to containment | < 4 hours to remediation |
The difference in response speed and aggression reflects the difference in threat impact. Ransomware can encrypt an entire network in minutes — containment speed is everything. Commodity malware is disruptive but rarely catastrophic — you can afford to investigate before acting.
When Intel Lies: False Intelligence
Threat intelligence is powerful, but it is not infallible. Trusting intel blindly is nearly as dangerous as ignoring it entirely. An effective analyst knows when to trust intelligence and when to question it.
Stale IOCs
IP addresses get reassigned. Domains expire and get re-registered by unrelated parties. A C2 server from six months ago might now be hosting a children's charity website. If your MISP event is tagged "last updated: August 2025" and you are triaging an alert in February 2026, that IOC has had six months to change hands.
Mitigation: Always check the last_seen and event update timestamps. IOCs less than 30 days old deserve high confidence. IOCs 30-90 days old deserve medium confidence with additional verification. IOCs older than 90 days should be treated as historical context, not current intelligence.
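Those freshness bands are easy to encode as a guardrail in an enrichment pipeline. The cutoffs below mirror the 30/90-day rule of thumb above; the band labels are illustrative, not a standard.

```python
from datetime import date

def freshness_weight(last_updated: date, today: date) -> str:
    """Downgrade intel confidence as an IOC ages (30/90-day bands)."""
    age_days = (today - last_updated).days
    if age_days < 30:
        return "high"              # recent: deserves high confidence
    if age_days <= 90:
        return "medium (verify)"   # aging: verify before acting
    return "historical only"       # stale: context, not current intel
```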
Poisoned Feeds
Sophisticated adversaries know that defenders use threat feeds. They can submit false IOCs to open-source feeds — legitimate infrastructure flagged as malicious — to cause false positives in your SOC, waste analyst time, and potentially trick you into blocking your own business partners' IP addresses.
Mitigation: Vet your feed sources. Prefer feeds from trusted ISACs, vetted vendors, and platforms with editorial oversight. Cross-reference matches across multiple independent sources — a single feed match is context; three independent feed matches are high-confidence intelligence.
Low Confidence Matches
Not all MISP events are created equal. An event contributed by a well-known threat research team at a major vendor, tagged with high confidence and corroborated by three independent sources, is very different from an event auto-imported from a community feed with no analysis, no confidence tag, and a single uncorroborated IOC.
Mitigation: Always check the event's confidence level, the contributing organization, and the number of corroborating sources. A single low-confidence match should inform your investigation, not dictate your triage verdict.
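Combining the two mitigations above (cross-referencing independent sources, and never letting a single low-confidence match dictate the verdict) gives a rough weighting rule. The thresholds in this sketch are illustrative, not an industry standard:

```python
def intel_weight(confidence: str, corroborating_sources: int) -> str:
    """Weigh an intel match by source confidence and corroboration.

    A single match is context; multiple independent, high-confidence
    matches approach actionable intelligence.
    """
    if confidence == "high" and corroborating_sources >= 3:
        return "high-confidence intelligence"
    if corroborating_sources >= 1:
        return "context: verify before acting"
    return "insufficient data"
```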
Context Always Wins
If your intel says "known-bad" but all other evidence says benign — the endpoint shows no suspicious behavior, the network traffic patterns are normal, the user's activity is consistent with their role, and the IOC has only a single low-confidence source — do not blindly trust the intel tag. Investigate further. The intel might be wrong, stale, or misattributed.
The correct mental model: intel is evidence, not verdict. It is one input to your triage decision, alongside alert severity, endpoint telemetry, network context, and analyst judgment. Weigh all evidence together. Never let a single source — whether intel or otherwise — override everything else without question.
Stale intel creates false confidence. An analyst who sees "MISP match — known C2" and immediately escalates without checking the event date may be escalating based on a six-month-old IOC that is now a legitimate Cloudflare IP serving millions of users. The escalation wastes IR team time, damages the analyst's credibility, and — worst of all — trains the team to distrust future intel-based escalations. Always verify freshness before acting on intel.
Putting It All Together: A Complete Example
Let us trace a complete intel-driven triage workflow from alert to escalation, using every concept from this lesson.
14:32 — Alert fires.
Wazuh alert: DNS query to update-service.example.com from host WIN-WORK-05. Rule level 5 (low severity). The alert fired because the domain appeared in a newly-registered domain watchlist. On its own, this is unremarkable — hundreds of new domains are queried daily.
14:32 — Auto-enrichment executes.
The SIEM's automated enrichment pipeline queries MISP within 500ms of alert creation. Result: MATCH. The domain update-service.example.com appears in MISP Event #4821 — "Emotet Wave — February 2026." Tags: emotet, confidence:high, tlp:amber. The event contains 3 domains, 2 file hashes, and was last updated 18 hours ago by a sector ISAC. The alert is tagged automatically: misp:match, campaign:emotet, confidence:high.
14:33 — Analyst applies the matrix. LOW severity (level 5) + INTEL MATCH (Emotet, high confidence) = ELEVATE. The analyst immediately reclassifies this alert from routine to high priority. Without the matrix, this alert would have been closed in 30 seconds.
14:34 — Analyst reviews the MISP event. The analyst opens MISP Event #4821 and reviews the full context:
- 3 C2 domains (including update-service.example.com)
- 2 SHA256 hashes (Emotet loader variants)
- Campaign first observed 48 hours ago — this is fresh, active infrastructure
- Contributing organization is a trusted ISAC with a strong track record
- 4 other organizations have corroborated the IOCs
- The event includes a note: "Emotet is being used as initial access for ransomware deployment"
That last note changes everything. This is not just commodity malware — Emotet is being used as a delivery mechanism for ransomware. The analyst mentally shifts from the Commodity Malware Playbook to the Known Ransomware C2 Playbook.
14:35 — Analyst pivots to Wazuh.
The analyst searches Wazuh for all activity from WIN-WORK-05 in the last 24 hours. Results:
- 08:14 — User logged in (normal business hours)
- 09:22 — Outlook process spawned — normal
- 09:47 — PowerShell execution with encoded command (Wazuh rule 91816, level 6)
- 09:47 — File creation: C:\Users\jsmith\AppData\Local\Temp\update.exe (level 4)
- 14:31 — DNS query to update-service.example.com (the alert that started this)
The timeline is damning. A PowerShell execution and file drop at 09:47, followed by a DNS query to a known Emotet C2 domain at 14:31. The ~5-hour gap is consistent with Emotet's behavior: initial loader execution, sleep period, then C2 callback.
14:36 — Analyst pivots to MISP for the hash.
The analyst takes the SHA256 of update.exe (extracted from the Wazuh file integrity event or via Velociraptor) and searches MISP. MATCH — the hash appears in the same Event #4821 as one of the two Emotet loader variants. The file on the workstation is confirmed Emotet.
14:37 — Analyst escalates. The analyst triggers the Known Ransomware C2 playbook:
- Escalation ticket created with: original alert, MISP Event #4821 reference, host WIN-WORK-05, user jsmith, complete timeline, file hash match confirmation
- IOC hunt initiated — all three domains and both hashes from the MISP event searched across Wazuh logs fleet-wide
- Perimeter blocks submitted — three domains added to DNS sinkhole, C2 IPs added to firewall block list
Total time from alert to escalation: 5 minutes.
Without intel, here is what happens: analyst sees level 5 DNS alert, checks the domain on VirusTotal (maybe 2 detections from new vendors), concludes it is probably benign or low-risk, closes the ticket. The Emotet loader sits on WIN-WORK-05 undiscovered. The next C2 callback succeeds. 48 hours later, the adversary deploys ransomware across the network.
Five minutes of intel-driven triage prevented a potential ransomware incident. That is the power of this approach.
Build a personal "sector threat tracker." Keep a running list of campaigns actively targeting your industry sector — the threat actor names, their primary TTPs, and the IOC types they use. When you triage alerts, you will recognize patterns faster. "That domain naming convention looks like the campaign I read about on Tuesday" is the kind of pattern recognition that only comes from staying current with your sector's threat landscape. Subscribe to your ISAC feed, read weekly threat reports, and add relevant MISP events to your watchlist.
Key Takeaways
- Threat intelligence transforms triage from guesswork into informed decision-making. The same alert has completely different urgency depending on whether its IOCs match known campaigns.
- The intel-driven triage matrix has four quadrants — ROUTINE (low + no match), INVESTIGATE (high + no match), ELEVATE (low + match), ESCALATE NOW (high + match). The ELEVATE quadrant is the critical insight: low-severity alerts with intel matches should jump priority.
- "No match" does not mean "clean." The absence of intelligence is not evidence of safety. An IOC with no feed hits could be brand-new infrastructure or a targeted attack using custom tools.
- Automate enrichment, not verdicts. Automated MISP lookups on every alert dramatically increase analyst throughput. But the triage decision — escalate, investigate, or close — must remain with the human.
- Intel-driven playbooks formalize response procedures so every analyst responds consistently to the same threat type, reducing response time and eliminating improvisation during high-stress events.
- Intel can be wrong. Stale IOCs, poisoned feeds, and low-confidence sources can mislead. Always verify freshness, check source reliability, and weigh intel alongside all other evidence.
- Context always wins. Intel is evidence, not verdict. It is one input alongside severity, endpoint telemetry, network behavior, and analyst judgment. Weigh everything together.
What's Next
You have now completed Module 5. You can extract IOCs from reports, search them across MISP, correlate intelligence with SIEM alerts, pivot from one indicator to discover entire campaign infrastructure, and apply threat intelligence to transform your triage decisions. You have built the second pillar of the SOC analyst toolkit: intelligence.
In Module 6 — Endpoint Visibility with Velociraptor, you will build the third pillar: endpoint detection and response. When the SIEM fires an alert and the intel confirms the threat, the next question is always: "What is happening on the endpoint right now?" Velociraptor gives you that answer — live process inspection, file system forensics, artifact collection, and fleet-wide hunting. You will learn to deploy Velociraptor, investigate live endpoints, write custom artifacts, and use endpoint data to answer the questions that SIEM and intel alone cannot: Is the malware still running? Has it spread? What did it change? What do we need to clean up?
SIEM sees the network. Intel provides context. The endpoint tells the truth. Module 6 completes the triangle.
Knowledge Check: Intel-Driven Triage
10 questions · 70% to pass
1. In the intel-driven triage matrix, what is the correct action when an alert has LOW severity and a CONFIRMED intel match?
2. What is the primary advantage of a hybrid enrichment approach (automated lookup + human decision) over fully automated triage?
3. Which component is NOT part of an intel-driven playbook?
4. An IOC matches a MISP event that was last updated 8 months ago. The IOC is an IP address. What is the primary risk of acting on this intelligence without further verification?
5. In the "Is This IOC Known-Bad?" workflow, an analyst searches MISP and all external sources for a suspicious domain and finds zero matches. What is the correct conclusion?
6. An analyst receives a HIGH severity alert (level 12) for outbound C2 beaconing. MISP enrichment returns a match tagged "cobalt-strike" with high confidence, last updated 2 days ago. Using the triage matrix, what is the correct action?
7. Why should analysts weigh intel confidence level and number of corroborating sources rather than treating every MISP match equally?
8. In Lab 5.3, you take 3 IOCs from MISP and search Wazuh. One IOC matches a Wazuh alert with level 5 (low severity). Using the intel-driven triage matrix, what is the correct action?
9. In Lab 5.4, you extract IOCs from a ransomware report and find them in MISP. The MISP event is tagged TLP:AMBER. Who can you share the findings with?
10. After completing Labs 5.1-5.4, which enrichment approach would scale best for a SOC processing 500 alerts per day?