What You'll Learn
- Identify every field in a Wazuh alert and understand what it tells you
- Use the severity scale (0-15) to prioritize your alert queue instantly
- Read rule descriptions, groups, and MITRE mappings to understand what was detected
- Extract the critical investigation data from source fields and raw logs
- Apply the 60-second triage workflow to classify alerts at L1 speed
- Distinguish between alerts that need escalation and alerts you can close
From Raw Log to Alert
In Lesson 2.1, you learned about the 8 categories of log sources that feed a SIEM. But raw logs are just data — thousands of lines per minute, impossible for a human to read in real time. The SIEM's job is to transform that flood of data into alerts: structured, prioritized notifications that tell you something security-relevant happened.
Here's the transformation pipeline:
- Raw log arrives from a source (e.g., sshd writes a line to auth.log)
- Decoder parses the log format and extracts fields (source IP, username, program)
- Rule engine evaluates the decoded fields against hundreds of detection rules
- Match — if a rule's conditions are met, the SIEM generates an alert
- Alert appears in the dashboard with severity, description, and all extracted fields
In Wazuh, this pipeline runs in real time. The moment a log event matches a rule, you see an alert. Understanding every part of that alert is the skill that separates fast analysts from slow ones.
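The five pipeline steps above can be sketched end to end. This is a toy model: the decoder regex and the single hard-coded rule are illustrative stand-ins, not Wazuh's actual decoder or ruleset.

```python
import re

# Step 2: a (toy) decoder for one sshd message format.
DECODER = re.compile(
    r"Failed password for (?P<dstuser>\S+) from (?P<srcip>[\d.]+) port (?P<srcport>\d+)"
)

def decode(raw_log):
    """Parse the raw line into structured fields, or None if no decoder matches."""
    m = DECODER.search(raw_log)
    return m.groupdict() if m else None

def evaluate_rules(fields):
    """Steps 3-4: evaluate decoded fields against a single illustrative rule."""
    if fields and fields.get("dstuser"):
        return {"rule_id": 5503, "level": 5,
                "description": "sshd: authentication failed", "data": fields}
    return None  # no rule matched: no alert

raw = ("Feb 15 06:25:01 linux-web-01 sshd[5150]: "
       "Failed password for root from 185.220.101.42 port 45030 ssh2")
alert = evaluate_rules(decode(raw))
# Step 5: `alert` now carries severity, description, and the extracted fields
```

The real engine runs thousands of decoders and rules, but the shape is the same: parse, match, emit.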
Speed matters. A typical L1 analyst is expected to triage 50-100 alerts per shift. At 60 seconds per alert, that's about 1-2 hours of pure triage work per 8-hour shift. The rest is investigation, documentation, and escalation. If you can't read an alert in under 60 seconds, you'll fall behind.
The Anatomy of a Wazuh Alert
Every Wazuh alert contains the same core fields, regardless of whether it came from a Windows event, an SSH log, or a firewall drop. Learning these fields once means you can read any alert in any module of this course.
Field-by-Field Breakdown
| Field | Path in Alert | What It Tells You | Example |
|---|---|---|---|
| Rule ID | rule.id | Which detection rule fired — your starting point for understanding the alert | 5551 |
| Severity Level | rule.level | How critical on a 0-15 scale — drives your priority | 10 |
| Description | rule.description | Human-readable summary of what was detected | "sshd: brute force trying to get access" |
| Rule Groups | rule.groups | Category tags for filtering and searching | ["sshd", "authentication_failed"] |
| MITRE ATT&CK | rule.mitre | Mapped ATT&CK technique ID and tactic | T1110 — Brute Force |
| Agent | agent.name, agent.id, agent.ip | Which endpoint generated the event | linux-web-01 (001) — 10.0.2.15 |
| Manager | manager.name | Which Wazuh manager processed the event | wazuh-manager |
| Timestamp | timestamp | When the event occurred (UTC) | 2026-02-15T06:25:01.990+0000 |
| Decoder | decoder.name | Which parser extracted the fields | sshd |
| Source Data | data.* | Extracted fields specific to this log type (IP, user, port, etc.) | data.srcip: 185.220.101.42 |
| Full Log | full_log | The complete original log line as received | Raw syslog or Windows event text |
| Location | location | The log file or channel the event came from | /var/log/auth.log or Security |
Rule ID Is Your Lookup Key. When you see a Wazuh alert, the first thing you should note is the rule ID. Wazuh has thousands of built-in rules — each with a unique ID. Searching for the rule ID in the Wazuh documentation or rule files tells you exactly what conditions triggered the alert, what it looks for, and how often it's a false positive in real environments.
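Wazuh stores alerts as JSON documents, so the field paths in the table double as lookup keys. A minimal sketch, using a hand-built sample alert (values are illustrative) and a hypothetical get_path helper:

```python
# Sample alert shaped like the field paths in the table above (values illustrative).
sample_alert = {
    "rule": {"id": "5551", "level": 10,
             "description": "sshd: brute force trying to get access to the system",
             "groups": ["sshd", "authentication_failed"],
             "mitre": {"id": ["T1110"], "tactic": ["Credential Access"]}},
    "agent": {"name": "linux-web-01", "id": "001", "ip": "10.0.2.15"},
    "data": {"srcip": "185.220.101.42", "dstuser": "root"},
    "full_log": "Failed password for root from 185.220.101.42 port 45030 ssh2",
    "location": "/var/log/auth.log",
}

def get_path(alert, dotted):
    """Resolve a dotted path like 'rule.mitre.id' against the alert dict."""
    node = alert
    for key in dotted.split("."):
        node = node[key]
    return node

print(get_path(sample_alert, "rule.level"))  # 10
print(get_path(sample_alert, "data.srcip"))  # 185.220.101.42
```

The dotted paths you see in the dashboard (rule.level, data.srcip) are exactly this kind of traversal into the alert document.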
Severity Levels: The 0-15 Scale
Wazuh uses a severity scale from 0 to 15. Unlike some SIEMs that use "Low / Medium / High / Critical" labels, Wazuh's numeric scale gives you finer granularity — and it drives how you prioritize your queue.
Level Ranges and What They Mean
| Range | Classification | SOC Action | Examples |
|---|---|---|---|
| 0-3 | Informational | No action needed — background noise | Agent started (530), log rotation (591), successful SSH login from known IP (5501) |
| 4-6 | Low | Review during quiet periods or in batch | Single failed SSH login (5503), invalid user attempt (5710), first-time sudo (5401) |
| 7-9 | Medium | Investigate within the hour | File integrity change (550), new service installed (5902), crontab modified (2832), new file in web directory (554) |
| 10-12 | High | Investigate immediately | SSH brute force confirmed (5551), Windows brute force (60204), SQL injection attempt (31103), reverse shell detected (100002), suspicious process execution |
| 13-15 | Critical | Drop everything — potential active breach | Rootkit detected (510), Windows event log cleared (80790), hidden process found — these fire rarely and almost always indicate a real problem |
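The ranges above translate directly into a small classifier, a sketch you might reuse when scripting queue reports:

```python
def classify(level):
    """Map a Wazuh rule.level (0-15) to the classification from the table above."""
    if level <= 3:
        return "Informational"
    if level <= 6:
        return "Low"
    if level <= 9:
        return "Medium"
    if level <= 12:
        return "High"
    return "Critical"

# classify(10) -> "High": investigate immediately
```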
How Severity Affects Your Workflow
In a real SOC, your alert queue is sorted by severity. Here's how an L1 analyst works through it:
- Start with 13-15 — These should be rare (1-2 per week in most environments). If one fires, it's your top priority.
- Then 10-12 — The bulk of your "real" investigation work. Every high-severity alert gets a decision: True Positive, False Positive, or Escalate to L2.
- Batch-process 7-9 — Medium alerts often need context: is the file integrity change expected (deployment) or unexpected (compromise)? Check with the asset owner.
- Scan 4-6 for patterns — A single level-5 alert is noise. Fifty level-5 alerts from the same IP in 10 minutes is a brute force campaign.
- Ignore 0-3 — Unless you're specifically hunting or troubleshooting.
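Step 4 in the list above is the one that benefits most from tooling: a burst of low-severity alerts from one IP should be surfaced automatically rather than eyeballed. A sketch, assuming alerts are dicts shaped like Wazuh's JSON (rule.level, data.srcip):

```python
from collections import Counter

def scan_low_severity(alerts, threshold=50):
    """One level-5 alert is noise; `threshold` alerts from one IP is a campaign.
    Counts source IPs among level 4-6 alerts and returns the noisy ones."""
    ips = Counter(
        a["data"]["srcip"]
        for a in alerts
        if 4 <= a["rule"]["level"] <= 6 and "srcip" in a.get("data", {})
    )
    return [ip for ip, count in ips.items() if count >= threshold]
```

In practice you would also bound this by a time window (e.g., the last 10 minutes); that part is omitted here for brevity.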
Severity Is Not Always Accurate. Rule severity is set by the rule author's judgment, not by your organization's context. A "level 7 — new service installed" might be routine during a deployment window but critical on a production database server at midnight. Context always overrides severity. You'll practice this judgment extensively in Module 4 (Alert Triage).
Understanding Rule Groups
Every Wazuh rule belongs to one or more groups — category labels that help you understand what type of event you're looking at without reading the full description. Groups are your fast-filter mechanism.
Common Rule Groups and What They Indicate
| Group | What It Covers | Example Rules |
|---|---|---|
| sshd | SSH daemon events | Logins, failures, brute force |
| authentication_success | Any successful auth event | SSH login, Windows 4624, sudo success |
| authentication_failed | Any failed auth event | SSH failure, Windows 4625, invalid user |
| windows | Windows-specific events | Event IDs 4624, 4625, 4688, 7045, etc. |
| syscheck | File integrity monitoring | File added, modified, deleted |
| rootcheck | Rootkit detection | Hidden processes, suspicious files |
| firewall | Firewall events | Drop events, rule changes |
| web, accesslog | Web server events | HTTP requests, error codes, attacks |
| ossec | Wazuh agent system events | Agent started, log rotation |
| attack | Active attack indicators | SQL injection, reverse shell, exploitation |
Using Groups for Fast Filtering
In the Wazuh Dashboard, you can filter by rule groups to focus on specific event categories:
- Starting your shift? Filter authentication_failed + level 10+ to see any brute force activity that happened overnight.
- Investigating a host? Filter by agent + syscheck to see all file changes on that system.
- Looking for lateral movement? Filter windows + authentication_success + logon type 3.
- Checking for active attacks? Filter attack to see all rule-matched attack indicators.
Build Group-Based Filters Into Muscle Memory. In Lab 2.2, you'll practice extracting fields from 10 different alerts. By Module 4, you should be able to filter to the right group and severity range in under 5 seconds. Fast filtering is what gives you time to think about the alert itself.
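Under the hood, these dashboard filters are queries against the alert index. A hypothetical query body for the first filter (overnight brute force), assuming an OpenSearch-style index of Wazuh alerts; the field names match the alert paths used in this lesson:

```python
# Hypothetical query body: authentication_failed group, level 10+, last 12 hours.
overnight_brute_force = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"rule.groups": "authentication_failed"}},
                {"range": {"rule.level": {"gte": 10}}},
                {"range": {"timestamp": {"gte": "now-12h"}}},
            ]
        }
    }
}
```

You rarely need to write these by hand (the dashboard builds them for you), but seeing the structure makes it obvious why group and level filters compose so cleanly.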
MITRE ATT&CK Mapping in Alerts
Many Wazuh rules include MITRE ATT&CK mappings — the rule.mitre field contains the technique ID, tactic, and technique name. This connects every alert directly to the ATT&CK framework you studied in Lesson 1.3 and Lab 1.2.
What the MITRE Fields Tell You
| Field | What It Means | Example |
|---|---|---|
| rule.mitre.id | ATT&CK technique ID | ["T1110"] |
| rule.mitre.tactic | Which phase of the attack | ["Credential Access"] |
| rule.mitre.technique | Human-readable technique name | ["Brute Force"] |
Why This Matters for Investigation
When you see a MITRE mapping on an alert, it immediately answers: "What is the attacker trying to achieve?"
- T1110 — Credential Access — Brute Force → The attacker is trying to guess passwords. Check: how many attempts? Did any succeed? What account was targeted?
- T1543.003 — Persistence — Windows Service → The attacker may have installed a backdoor service. Check: what's the service name? What executable does it run? Is it in a suspicious path?
- T1070.001 — Defense Evasion — Clear Windows Event Logs → The attacker is destroying evidence. This is almost always malicious. Check: who cleared the logs? What was the account?
Not all rules have MITRE mappings — informational events (agent started, log rotation) typically don't. But most security-relevant rules at level 5+ do.
Callback to Lab 1.2. Remember the 15 APT29 techniques you mapped? If that APT29 attack happened in your Wazuh environment, some of those techniques would appear as alerts with MITRE mappings. T1110 (Brute Force) at level 10, T1053.005 (Scheduled Task) at level 7, T1070.001 (Clear Event Logs) at level 15. The ATT&CK mapping in alerts is how you connect real-time detections to the threat intelligence you studied.
Source Data: Where the Investigation Starts
The data.* fields contain the extracted, structured data from the original log event. These fields vary by log source, but they're where you find the answers to your investigation questions.
Source Data by Log Type
SSH / Linux Auth Events:
| Field | What It Contains |
|---|---|
| data.srcip | Source IP address of the connection |
| data.srcport | Source port |
| data.dstuser | Target username |
Windows Security Events:
| Field | What It Contains |
|---|---|
| data.win.system.eventID | Windows Event ID (4624, 4625, 4688, etc.) |
| data.win.eventdata.targetUserName | Account that was targeted |
| data.win.eventdata.ipAddress | Source IP (for logon events) |
| data.win.eventdata.logonType | How the logon happened (2, 3, 5, 10) |
| data.win.eventdata.newProcessName | Process path (for 4688 events) |
| data.win.eventdata.serviceName | Service name (for 7045 events) |
File Integrity Monitoring:
| Field | What It Contains |
|---|---|
| syscheck.path | File path that changed |
| syscheck.event | What happened: modified, added, or deleted |
| syscheck.md5_after | New file hash (for IOC checking) |
| syscheck.size_before / syscheck.size_after | File size change |
Web / Access Logs:
| Field | What It Contains |
|---|---|
| data.srcip | Client IP address |
| data.url | Requested URL path |
| data.protocol | HTTP method (GET, POST) |
| data.id | HTTP status code |
Source Data Fields Are Your Evidence. When you document an alert in a ticket or escalation note, the source data is what you reference. "Failed logon from 185.220.101.42 targeting the Administrator account via Type 3 (network) on WIN-SERVER-01" — every piece of that sentence comes from source data fields. Never escalate an alert without including these specifics.
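The escalation sentence in the tip above can be assembled mechanically from those fields. A sketch for Windows logon alerts; the logon-type labels and the exact alert shape are assumptions based on the tables in this section:

```python
def escalation_summary(alert):
    """Build an escalation sentence from Windows source-data fields.
    Field paths follow the Windows table above; labels are illustrative."""
    d = alert["data"]["win"]["eventdata"]
    logon_types = {"2": "interactive", "3": "network", "10": "remote desktop"}
    return ("Failed logon from {ip} targeting the {user} account via Type {lt} "
            "({label}) on {host}").format(
        ip=d["ipAddress"],
        user=d["targetUserName"],
        lt=d["logonType"],
        label=logon_types.get(d["logonType"], "unknown"),
        host=alert["agent"]["name"])
```

Whether you script it or write it by hand, the discipline is the same: every claim in the note maps to a specific source-data field.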
The Raw Log: Your Ground Truth
The full_log field contains the complete, unmodified original log line exactly as it was written by the source. This is your ground truth — the one field that can't be wrong because it's what actually happened, before any parsing or interpretation.
When to Read the Raw Log
- Verifying the alert — Does the raw log actually say what the rule description claims? Occasionally, a decoder might misparse a field and the rule fires on incorrect data.
- Finding details not in source data — The decoder might not extract every field. Command-line arguments, user-agent strings, or specific error messages might only be visible in the raw log.
- Investigating anomalies — If an alert looks unusual, the raw log often contains context that the parsed fields don't capture.
Raw Log Examples by Source
SSH Brute Force (Linux):
Feb 15 06:25:01 linux-web-01 sshd[5150]: Failed password for root from 185.220.101.42 port 45030 ssh2
Windows Failed Logon (4625):
An account failed to log on.
Logon Type: 3
Account For Which Logon Failed:
Account Name: Administrator
Account Domain: CYBERBLUE
Failure Reason: Unknown user name or bad password.
Source Network Address: 91.234.99.87
SQL Injection Attempt (Web):
203.0.113.50 - - [15/Feb/2026:07:26:33 +0000] "GET /api/users?id=1'+OR+'1'='1 HTTP/1.1" 400 0 "-" "sqlmap/1.7.2"
File Integrity Change:
File '/etc/passwd' checksum changed. Size changed from '2847' to '2903'. Old md5sum: 'd41d8cd98f00b204e9800998ecf8427e'. New md5sum: 'a7f3e5b2c1d98764f0e2b3a4c5d6e7f8'.
Trust the Raw Log Over Everything. If the parsed fields say one thing and the raw log says another, the raw log wins. Decoders can have bugs, rule conditions can be overly broad, and extracted fields can be incomplete. The raw log is the authoritative record of what happened.
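That cross-check can be automated for common cases. A sketch for SSH-style logs, assuming the source IP always appears after "from" in full_log (which holds for sshd messages, not for every log type):

```python
import re

def verify_srcip(alert):
    """Cross-check decoded data.srcip against full_log. If they ever disagree,
    the tip above applies: trust full_log."""
    m = re.search(r"from ([\d.]+)", alert["full_log"])
    raw_ip = m.group(1) if m else None
    return raw_ip == alert["data"].get("srcip"), raw_ip

alert = {"full_log": "Failed password for root from 185.220.101.42 port 45030 ssh2",
         "data": {"srcip": "185.220.101.42"}}
matches, raw_ip = verify_srcip(alert)
# matches is True when decoder output agrees with the raw log
```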
Compliance Metadata
Many Wazuh rules include compliance framework mappings — references to specific controls in PCI DSS, GDPR, HIPAA, and NIST 800-53. These appear in the alert as additional fields:
| Field | Framework | Example |
|---|---|---|
| rule.pci_dss | Payment Card Industry | ["10.2.4", "10.2.5"] — Log access monitoring |
| rule.gdpr | EU General Data Protection | ["IV_35.7.d"] — Security of processing |
| rule.hipaa | Health Insurance Portability | ["164.312.b"] — Audit controls |
| rule.nist_800_53 | NIST SP 800-53 security controls | ["AU.14", "AC.7"] — Audit records, failed logon attempts |
These fields don't change your investigation, but they matter for two reasons:
- Compliance reporting — Your SOC may need to demonstrate to auditors that specific events are detected and investigated. Compliance mappings prove it.
- Alert prioritization in regulated industries — A healthcare SOC might escalate any alert with HIPAA mappings faster than others, regardless of severity.
The 60-Second Triage Workflow
Now that you understand every field, let's put it together into a repeatable workflow. This is the process you should follow for every alert, and it should take no more than 60 seconds for the initial classification.
Step 1: Severity (2 seconds)
Glance at rule.level. This instantly tells you how urgently you need to act:
- 0-6: Low priority — batch-process or skip
- 7-9: Medium — investigate within the hour
- 10-12: High — investigate now
- 13-15: Critical — drop everything
Step 2: Description + MITRE (5 seconds)
Read rule.description and check rule.mitre. Now you know what was detected and what phase of an attack it represents. This frames your entire investigation.
Step 3: Agent / Host (3 seconds)
Check agent.name and agent.ip. Which system is affected? A brute force on a development workstation is less urgent than the same attack on a domain controller.
Step 4: Source Data (15 seconds)
Extract the key fields from data.*:
- Who? Source IP, username
- What? Process name, service name, file path
- How? Logon type, HTTP method, protocol
Step 5: Raw Log (15 seconds)
Scan full_log for anything the parsed fields missed. Look for:
- Command-line arguments
- User-agent strings
- Specific error messages or status codes
- Anything that seems unusual
The Decision (20 seconds)
Based on what you've read, make one of three calls:
- Close as False Positive — Expected behavior, known maintenance, test activity
- Close as True Positive (handled) — Real but low-impact, no further action needed
- Escalate to L2 — Suspicious, needs deeper investigation, potential incident
60 Seconds Is a Target, Not a Rule. Simple informational alerts take 5 seconds. Complex multi-stage attacks take much longer. The 60-second target is for your initial classification pass. If an alert needs deeper investigation after the initial triage, that's a separate, longer process — and it means the triage worked correctly by flagging it for more attention.
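The whole pass can be caricatured in a few lines of code. The decision heuristics here are deliberately naive placeholders (real triage needs the context gathered in Steps 2-5); the alert shape follows this lesson's field paths:

```python
def triage(alert, known_maintenance_ips=frozenset()):
    """Toy pass over the five steps above, returning one of the three calls."""
    level = alert["rule"]["level"]
    srcip = alert.get("data", {}).get("srcip")
    if srcip in known_maintenance_ips:
        return "Close as False Positive"          # expected, known activity
    if level >= 10:
        return "Escalate to L2"                   # high/critical: deeper look
    if level >= 7:
        return "Close as True Positive (handled)" # placeholder: real triage checks context
    return "Close as False Positive"              # low-level noise
```

The point of the sketch is not automation; it is that your initial classification should be this decisive. If you cannot state which branch an alert falls into, you have not finished triage.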
Putting It All Together: Reading a Real Alert
Let's walk through a complete alert from your Lab 1.1 environment:
Alert: SSH Brute Force Confirmed
| Field | Value | What You Learn |
|---|---|---|
| rule.id | 5551 | Wazuh's SSH brute force rule — fires after multiple failed attempts |
| rule.level | 10 | High severity — investigate immediately |
| rule.description | "sshd: brute force trying to get access to the system" | Active brute force attack in progress |
| rule.mitre.id | T1110 | ATT&CK: Credential Access — Brute Force |
| rule.groups | sshd, authentication_failed | SSH-related authentication failure |
| agent.name | linux-web-01 | Your Linux web server is being targeted |
| data.srcip | 185.220.101.42 | External IP — known Tor exit node |
| timestamp | 2026-02-15T06:25:01 | Early morning — outside business hours |
| full_log | "Failed password for root from 185.220.101.42 port 45030 ssh2" | Targeting the root account — highest privilege |
60-second triage decision: Level 10 + external IP + targeting root + outside business hours = True Positive. Check if the attacker eventually succeeded (look for a subsequent 4624/5501 from the same IP). If no successful login, close with note. If successful login found, escalate immediately — the system may be compromised.
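The follow-up check (did the brute-force IP later log in successfully?) can be phrased as a search over subsequent alerts. A sketch assuming the lesson's alert shape and ISO-format timestamps, which compare correctly as plain strings:

```python
def followed_by_success(alerts, srcip, after_ts):
    """Return True if any later alert shows a successful authentication from
    the same source IP. Success is flagged here via the authentication_success
    group; ISO-8601 timestamps sort lexicographically, so > works on strings."""
    return any(
        a.get("data", {}).get("srcip") == srcip
        and "authentication_success" in a["rule"]["groups"]
        and a["timestamp"] > after_ts
        for a in alerts
    )
```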
Common Alert Patterns You'll See Repeatedly
After a few shifts in any SOC, you'll notice the same alert types appearing again and again. Recognizing these patterns speeds up your triage dramatically:
| Pattern | Typical Rule IDs | Usual Verdict | Key Question |
|---|---|---|---|
| SSH brute force from external IP | 5551, 5710, 5503 | True Positive (blocked) | Did any attempt succeed? |
| Windows failed logon from external | 18152, 60204 | True Positive (blocked) | Is the account locked out? |
| Successful logon after failures | 60106, 5715 | Escalate — possible compromise | Was the success from the same IP as the failures? |
| New service installed | 5902 | Investigate — context needed | Is it a known deployment or suspicious executable? |
| File integrity change on /etc/passwd | 550 | Investigate | Was a new user added? By whom? |
| Web server 400 errors from scanner | 31103 | True Positive (blocked) | Is the scanner internal (pentest) or external (attacker)? |
| Agent started | 530 | Informational — close | Normal unless the agent was supposed to be offline |
| Event log cleared | 80790 | Always escalate | Who cleared it and why? Almost never legitimate |
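A table like this is exactly the kind of thing worth encoding, whether in your head or in a helper script. A sketch keyed by the rule IDs above; the verdict strings follow the table, and any rule ID not listed falls through to manual triage:

```python
# Usual verdicts for recurring alert patterns, keyed by Wazuh rule ID
# (rule IDs and verdicts taken from the pattern table above).
USUAL_VERDICT = {
    5551:  "True Positive (blocked)",        # SSH brute force
    60204: "True Positive (blocked)",        # Windows brute force
    5902:  "Investigate (context needed)",   # new service installed
    550:   "Investigate",                    # file integrity change
    530:   "Informational: close",           # agent started
    80790: "Always escalate",                # event log cleared
}

def usual_verdict(rule_id):
    return USUAL_VERDICT.get(rule_id, "No pattern: triage manually")
```

A lookup like this is a starting hypothesis, never a final answer; the "Key Question" column in the table is what actually settles each case.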
Key Takeaways
- Every Wazuh alert contains the same core fields: rule ID, severity, description, MITRE mapping, groups, agent, source data, timestamp, and raw log
- Severity levels 0-15 drive your prioritization: 0-6 is noise, 7-9 needs investigation, 10-12 is urgent, 13-15 is critical
- Rule groups let you filter alerts by category (sshd, windows, syscheck, attack) — build group-based filters into muscle memory
- MITRE ATT&CK mappings connect real-time alerts to the framework you studied in Module 1, answering "what is the attacker trying to achieve?"
- Source data fields (data.*) contain the evidence you need for investigation: IPs, usernames, process names, file paths
- The raw log (full_log) is your ground truth — always trust it over parsed fields if they disagree
- The 60-second triage workflow: Severity → Description → Agent → Source Data → Raw Log → Decision (close, handle, or escalate)
- Pattern recognition accelerates triage — most alert types repeat, and you'll learn the common verdicts through experience
What's Next
You can now read any alert in your SIEM. In Lesson 2.3 — Dashboards & Visualizations, you'll learn how to build views that organize hundreds of alerts into visual patterns — spotting brute force campaigns, tracking alert trends over time, and building the SOC dashboard you'll use every shift.
Knowledge Check: Anatomy of a SIEM Alert
10 questions · 70% to pass
In a Wazuh alert, what is the very first field you should check during the 60-second triage workflow?
A Wazuh alert shows rule.level 14. According to the severity scale, what should you do?
What does the rule.mitre field in a Wazuh alert tell the analyst?
Why should you 'trust the raw log over everything' when parsed fields and the raw log disagree?
An analyst sees 50 level-5 alerts from the same IP address within 10 minutes. According to the lesson, how should this be handled?
What is the primary purpose of rule groups (rule.groups) in Wazuh alerts?
In the labs, you encountered Wazuh rule 5551 (SSH brute force) at severity level 10. The lesson states 'severity is not always accurate.' What should always override the assigned severity level?
In Lab 1.1, you saw an SSH brute force alert (rule 5551) from IP 185.220.101.42 targeting root on linux-web-01. Using the source data fields covered in this lesson, which field would tell you the exact username being targeted?
In Lab 1.3, you explored alerts from WIN-SERVER-01 that included Windows Event ID 4624 with Logon Type 3. Based on this lesson's source data field breakdown, which specific field in the Wazuh alert contains the logon type?
In Lab 1.3, you saw rule 80790 (Windows audit log cleared) which would be severity level 13-15. According to the common alert patterns table in this lesson, what is the correct verdict for this type of alert?