What You'll Learn
- Prioritize a queue of 50 alerts spanning multiple attack types and severity levels
- Apply time-based triage to handle the most urgent alerts first while managing lower-priority noise
- Triage the top 10 most critical alerts with full context analysis
- Close false positives efficiently with documented justification
- Write a professional shift handoff note summarizing your shift's findings and open items
Lab Overview
| Detail | Value |
|---|---|
| Lab Profile | lab-wazuh |
| Containers | Wazuh Manager, Wazuh Indexer, Wazuh Dashboard |
| Estimated Time | 75–90 minutes |
| Difficulty | Advanced |
| Browser Access | Wazuh Dashboard (Web UI) |
| Pre-Loaded Data | 50 alerts spanning 8 simulated hours across 4 agents |
| Deliverable | Shift handoff note with triage summary, escalated items, and recommendations |
Your first real shift. This lab simulates what you'll face on Day 1 of a real SOC analyst job. Fifty alerts from the last 8 hours. Some are urgent, most are noise, and a few are genuinely dangerous. You need to work the queue intelligently, not just top-to-bottom. Your deliverable is the shift handoff note — the document that tells the next analyst exactly where things stand.
The Scenario
You are taking over the 06:00–14:00 shift. The overnight analyst left no handoff note (this happens). Your Wazuh Dashboard shows 50 unreviewed alerts from the past 8 hours across these agents:
| Agent | Role | What It Monitors |
|---|---|---|
| WIN-SERVER-01 | Domain Controller | Authentication, AD changes, service installations |
| linux-web-01 | Public-facing web server | HTTP access, SSH, file integrity, web application events |
| dns-server-01 | Internal DNS server | DNS queries, zone transfers, recursive queries |
| fw-edge-01 | Perimeter firewall | Inbound/outbound connections, blocked traffic, policy violations |
Phase 1: Rapid Severity Sort (10 minutes)
Step 1: Get the Big Picture
Before touching any individual alert, understand what you're dealing with:
- Navigate to Security Events → set time range to Last 24 hours
- Sort by rule.level (descending) to see critical alerts first
- Count the breakdown:
| Severity | Rule Level | Expected Count |
|---|---|---|
| Critical | 14-15 | ~3-4 alerts |
| High | 12-13 | ~6-8 alerts |
| Medium | 7-11 | ~20-25 alerts |
| Low | 3-6 | ~15-18 alerts |
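The severity sort above can be sketched in a few lines of Python. Everything here is illustrative: the alert dicts stand in for whatever export you pull from the dashboard or API, and `bucket` simply encodes the rule-level bands from the table.

```python
from collections import Counter

def bucket(level: int) -> str:
    """Map a Wazuh rule.level to the severity bands in the table above."""
    if level >= 14:
        return "Critical"
    if level >= 12:
        return "High"
    if level >= 7:
        return "Medium"
    return "Low"

# Illustrative stand-ins for alerts exported from the dashboard/API.
alerts = [
    {"rule": {"level": 15}}, {"rule": {"level": 12}},
    {"rule": {"level": 9}},  {"rule": {"level": 5}},
]

breakdown = Counter(bucket(a["rule"]["level"]) for a in alerts)
print(dict(breakdown))  # counts per severity band
```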
Step 2: Identify the "Must-Touch" Alerts
Critical and high-severity alerts (rule.level >= 12) are your first priority. These represent potential active threats. Scan their rule descriptions and group them:
- Are multiple critical alerts related to the same host or IP? (likely a single incident)
- Are any critical alerts about data exfiltration, lateral movement, or active exploitation? (handle first)
- Are any critical alerts known false positives you can close quickly? (clear the noise fast)
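A quick way to spot single-incident clusters is to group the must-touch alerts by agent before reading any of them in depth. The alert list below is hypothetical and the rule IDs are placeholders, not specific Wazuh rules.

```python
from collections import defaultdict

# Hypothetical must-touch queue; rule IDs are placeholders.
alerts = [
    {"agent": "WIN-SERVER-01", "rule_id": 60122, "level": 15},
    {"agent": "WIN-SERVER-01", "rule_id": 60104, "level": 14},
    {"agent": "linux-web-01",  "rule_id": 31151, "level": 12},
    {"agent": "fw-edge-01",    "rule_id": 4151,  "level": 6},
]

by_agent = defaultdict(list)
for a in alerts:
    if a["level"] >= 12:                 # must-touch threshold
        by_agent[a["agent"]].append(a["rule_id"])

# Several criticals on one host in a short window usually mean one incident.
for agent, rules in sorted(by_agent.items()):
    print(agent, rules)
```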
Time management is a real SOC skill. You cannot spend 15 minutes on every alert when you have 50 in the queue. Develop a rhythm: 2 minutes for obvious FPs, 5-10 minutes for investigations, and flag anything needing deeper analysis for the next shift or L2.
Phase 2: Triage the Top 10 (40 minutes)
Work through the 10 most critical alerts. For each:
Full Triage Checklist
| Step | Question | Where to Check |
|---|---|---|
| 1 | What rule fired and why? | Alert details: rule.id, rule.description |
| 2 | Which asset is affected? | agent.name — is this a critical server? |
| 3 | Who or what triggered it? | data.srcip, data.dstuser, data.win.eventdata |
| 4 | Is the source known/expected? | IP reputation, internal asset list |
| 5 | When exactly did it happen? | timestamp — business hours? maintenance window? |
| 6 | Are there related alerts? | Search same host/IP/user in surrounding timeframe |
| 7 | What is the verdict? | TP / FP / Needs Investigation |
| 8 | What action is required? | Close / Escalate / Monitor |
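One lightweight way to keep the checklist honest is to record each step's answer in a structured object per alert. This dataclass is a sketch, not part of Wazuh; the field names and example values are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class TriageRecord:
    rule_id: int          # step 1: what rule fired
    agent: str            # step 2: which asset is affected
    actor: str            # step 3: source IP or user
    known_source: bool    # step 4: is the source known/expected?
    timestamp: str        # step 5: when it happened
    related_alerts: int   # step 6: related alerts found nearby
    verdict: str          # step 7: "TP", "FP", or "NEEDS_INVESTIGATION"
    action: str           # step 8: "CLOSE", "ESCALATE", or "MONITOR"

# Invented example record for a web-server FIM alert.
rec = TriageRecord(
    rule_id=550, agent="linux-web-01", actor="203.0.113.50",
    known_source=False, timestamp="03:15", related_alerts=4,
    verdict="TP", action="ESCALATE",
)
print(rec.verdict, rec.action)
```

Filling one of these per alert also gives you the raw counts you need for the handoff note at the end of the shift.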
Common Alert Types You'll Encounter
| Alert Pattern | Likely Classification | Key Decision Factor |
|---|---|---|
| Multiple failed logins from external IP | TP if > 10 attempts from known-bad IP | Check IP reputation + success after failures |
| New service installed on server | TP if at unusual time with suspicious name | Check time of day + service name legitimacy |
| File integrity change on web server | TP if in web root + not during deployment | Check if change matches known deployment schedule |
| Outbound connection to rare country | Needs investigation — depends on destination | Check if destination is CDN/business partner vs unknown |
| DNS query to long subdomain | TP if high entropy + hex patterns | Calculate entropy, check parent domain reputation |
| Vulnerability scan detected | FP if from known scanner IP | Verify against scanner IP allowlist |
| Admin tool usage (PSExec, etc.) | FP if during change window by admin | Check change management schedule |
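For the "long subdomain" row, entropy is something you can actually compute rather than eyeball. A minimal Shannon-entropy helper (pure stdlib; the sample labels are illustrative):

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits per character; random hex/base64 labels score noticeably
    higher than human-chosen words."""
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

# Illustrative labels: a hex-looking DGA-style subdomain vs. a normal one.
print(round(shannon_entropy("a3f8c2d1e5b7"), 2))  # -> 3.58
print(round(shannon_entropy("mail"), 2))          # -> 2.0
```

There is no universal cutoff; compare suspicious labels against typical hostnames in your own environment before calling something high-entropy.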
Phase 3: Batch-Close False Positives (15 minutes)
After triaging the top 10, sweep the remaining alerts for obvious false-positive patterns you can close quickly:
Quick-Close Criteria
| Pattern | How to Confirm FP | Close With |
|---|---|---|
| Scheduled vulnerability scan | Source IP matches Nessus/Qualys scanner | "Known scanner — scheduled scan" |
| Windows Update activity | Source is Windows Update Service | "Legitimate Windows Update" |
| Backup job file changes | FIM alerts during backup window on backup paths | "Scheduled backup — expected FIM changes" |
| Admin RDP during business hours | Internal IP, known admin, working hours | "Authorized admin access" |
| DNS queries to Microsoft/Google CDNs | Domains resolve to known CDN infrastructure | "Legitimate CDN traffic" |
Build your FP library. Every SOC maintains a list of known false-positive patterns. As you encounter and verify FPs, document them. This makes future triage faster for you and your team. Some SOCs even create automated suppression rules for the most common FPs.
Phase 4: Write the Shift Handoff Note (15 minutes)
This is your most important deliverable. The next analyst relies on it to pick up exactly where you left off.
Handoff Note Template
# Shift Handoff: [Date] — 06:00-14:00
## Shift Summary
- Total alerts reviewed: X/50
- True Positives escalated: X
- False Positives closed: X
- Requires further investigation: X
## Active Incidents
### Incident 1: [Title]
- **First alert:** [timestamp] — [rule description]
- **Affected assets:** [hosts/users]
- **Current status:** [escalated / under investigation / contained]
- **Action needed:** [what the next analyst should do]
### Incident 2: [Title]
(repeat for each active incident)
## Closed False Positives (Notable)
- [Brief description of any unusual FPs worth noting]
## Open Items for Next Shift
- [ ] [Specific follow-up task 1]
- [ ] [Specific follow-up task 2]
- [ ] [Specific follow-up task 3]
## Recommendations
- [Any tuning suggestions to reduce FP volume]
- [Any new IOCs to add to blocklists]
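If you track your counts as you go, the Shift Summary section writes itself. A small formatter matching the template above (the function and the example counts are illustrative):

```python
def shift_summary(reviewed: int, total: int, tps: int,
                  fps: int, investigating: int) -> str:
    """Render the Shift Summary section of the handoff template."""
    return "\n".join([
        "## Shift Summary",
        f"- Total alerts reviewed: {reviewed}/{total}",
        f"- True Positives escalated: {tps}",
        f"- False Positives closed: {fps}",
        f"- Requires further investigation: {investigating}",
    ])

print(shift_summary(42, 50, 5, 28, 3))
```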
A missing handoff note is a SOC failure. If an analyst ends their shift without handing off, the next analyst starts blind. Active investigations go cold. Escalated items get lost. In a real SOC, this can mean the difference between containing an incident in hours vs. days.
Deliverable
Your final Shift Handoff Note must include:
- Shift Summary — counts of alerts reviewed, TPs, FPs, and open items
- Active Incidents — at least 2 incidents, each with status and next steps
- Open Items — specific tasks the next analyst should prioritize
- Recommendations — at least 1 tuning suggestion based on FP patterns observed
Key Takeaways
- A real SOC shift starts with severity-based prioritization, not random alert selection
- The first 10 minutes should be spent understanding the queue, not diving into individual alerts
- Batch-closing obvious FPs clears noise and lets you focus on what matters
- Time management is critical: 2 minutes for clear FPs, 5-10 for investigations, flag the rest
- The shift handoff note is the single most important document a SOC analyst produces
- Never end a shift without documenting active incidents and open items
- Building a personal FP pattern library accelerates your triage speed over time
What's Next
Congratulations — you've completed the Alert Triage module. You can now triage individual alerts, investigate suspicious events in depth, decode obfuscated payloads, and manage a full alert queue. Next, Module 7 takes you into Threat Intelligence, where you'll learn to use MISP and external intel to investigate faster and smarter.
Lab Challenge: Alert Queue Challenge
10 questions · 70% to pass
When you first open the Wazuh Dashboard with 50 unreviewed alerts, what should you do FIRST?
Among the 50 pre-loaded alerts, you find 3 critical alerts (rule.level 14-15) all involving WIN-SERVER-01 within a 20-minute window. What is the correct interpretation?
You find a medium-severity FIM (File Integrity Monitoring) alert showing changes to /var/www/html/index.php on linux-web-01 at 03:15 AM. The change management system shows no scheduled deployments. What is your triage decision?
During Phase 3 (batch-close FPs), you identify 8 alerts triggered by the IP 10.0.2.200, which your team confirmed is the Nessus vulnerability scanner running its weekly schedule. What is the most efficient way to handle these?
Your shift handoff note mentions an 'Active Incident' on WIN-SERVER-01 that you partially investigated but couldn't complete. What MUST your handoff note include for the next analyst?
You spend 25 minutes investigating a single medium-severity alert and still can't determine if it's TP or FP. There are still 35 unreviewed alerts in the queue. What is the best triage action?
In your handoff note's 'Recommendations' section, you notice that 12 of the 50 alerts were FPs from Windows Update activity during the overnight maintenance window. What should you recommend?
At the end of your shift, you reviewed 42 of 50 alerts. Your summary: 5 TPs escalated, 28 FPs closed, 3 under investigation, 6 low-priority not yet reviewed, 8 remaining unreviewed. Is this an acceptable shift performance?
You find a high-severity alert for outbound traffic from dns-server-01 to an IP in a country your organization has no business with, on port 53 (DNS). The query is for 'a3f8c2d1e5b7.data-sync-service.net'. What is your assessment?
What is the single most critical section of a shift handoff note?