Lesson 1 of 6 · 10 min read · Includes quiz

Why Detection Engineering Matters

From reacting to creating alerts

What You'll Learn

  • Explain what detection engineering is and why it has become a critical SOC function beyond traditional alert monitoring
  • Describe the detection lifecycle from hypothesis through deployment, tuning, and retirement
  • Differentiate between reactive detection (vendor rules) and proactive detection (custom rules written by your team)
  • Identify the three layers of detection coverage: network, endpoint, and log-based
  • Explain what Sigma is and why it solves the vendor lock-in problem for detection rules
  • Connect detection engineering concepts to the hands-on Sigma exercises in Labs 8.1–8.6

From Alert Consumer to Alert Creator

In every module so far, you have been consuming alerts. Wazuh fired an alert — you triaged it. Suricata flagged traffic — you investigated it. Velociraptor showed suspicious artifacts — you analyzed them. YARA matched a pattern — you reviewed the results.

But here is a question that separates junior analysts from senior ones: who wrote those alerts in the first place?

Every detection rule in your SIEM was written by someone. Some were written by Wazuh's development team (vendor rules). Some were written by open-source contributors. Some were written by a detection engineer on a SOC team who saw an attacker technique and thought: "I need my SIEM to catch this next time."

Detection engineering is the discipline of designing, building, testing, deploying, and maintaining detection rules. It is the shift from "I respond to alerts" to "I create the alerts that my team responds to." In a mature SOC, detection engineering is not a side project — it is a core function that directly determines what your team can and cannot see.

| Reactive SOC | Detection-Driven SOC |
| --- | --- |
| Relies entirely on vendor-provided rules | Writes custom rules for threats specific to their environment |
| Discovers blind spots only during incidents | Proactively maps coverage against ATT&CK and hunts for gaps |
| Accepts noisy rules as "normal" | Tunes rules continuously, targeting < 5 false positives per day per rule |
| Detection quality is unknown | Measures detection coverage, mean time to detect, and false positive rate |
| New threats stay undetected until vendor updates | Custom rules deployed within hours of a new threat report or hunt finding |

Detection engineering is not just for large SOCs. Even a one-person security team benefits from writing custom detection rules. If you read a threat report about attackers using schtasks.exe for persistence and your SIEM does not alert on suspicious scheduled task creation, you have a blind spot. Writing one Sigma rule to cover that technique takes 15 minutes. Waiting for a vendor update might take months — or never happen at all.

The Detection Lifecycle

Detection rules are not "write once, deploy forever." They follow a lifecycle that mirrors software development: plan, build, test, deploy, monitor, tune, and eventually retire.

The detection lifecycle — from threat hypothesis through rule creation, testing, deployment, tuning, and retirement

Phase 1: Hypothesis

Every good detection starts with a question:

  • Threat-driven: "The latest APT29 campaign uses DLL sideloading via a renamed rundll32.exe. Can our SIEM detect this?"
  • Hunt-driven: "During a threat hunt, I found PowerShell executing Base64-encoded commands. We have no rule for this."
  • Incident-driven: "Post-incident review: the attacker persisted via a Windows service. We need to detect new service installations."
  • Coverage-driven: "Our ATT&CK coverage map shows zero detections for T1053 (Scheduled Task/Job). We need to fill this gap."

The hypothesis defines what you are trying to detect and why it matters.

Phase 2: Research

Before writing a rule, you need to understand the technique:

| Question | Why It Matters |
| --- | --- |
| What log source captures this activity? | If the log source is not enabled, no rule will work |
| What fields are populated? | Your detection logic depends on specific field names and values |
| What does the attack look like in logs? | You need real or simulated examples to write accurate patterns |
| What does legitimate activity look like? | You need to understand normal behavior to avoid false positives |
| Are there existing rules you can adapt? | SigmaHQ has 3,000+ rules — check before writing from scratch |

Phase 3: Write

This is where you create the detection logic. In Modules 2–4, you saw Wazuh rules in XML format. In Module 7, you wrote YARA rules in YARA syntax. Now, in Module 8, you will write Sigma rules in YAML format — a vendor-neutral language that can be converted to any SIEM.

Phase 4: Test

A rule that has not been tested is a rule you cannot trust:

  • True positive test: Does the rule fire when the attack happens?
  • True negative test: Does the rule stay silent during normal operations?
  • Edge case test: What about variations — different usernames, different paths, different process names?
  • Performance test: Does the rule cause excessive load on the SIEM?
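
The first three test types can be sketched as unit tests. This is a minimal illustration, assuming a simplified Python predicate stands in for the SIEM rule engine; the field names (Image, CommandLine) mirror the Sigma example later in this lesson but are illustrative, not a real Wazuh schema.

```python
# Minimal sketch of true-positive / true-negative / edge-case testing,
# using a simplified Python predicate in place of a real SIEM rule engine.

def detects_schtasks_create(event):
    """Simplified stand-in for a 'suspicious scheduled task creation' rule."""
    image = event.get("Image", "").lower()
    cmdline = event.get("CommandLine", "").lower()
    return image.endswith("\\schtasks.exe") and (
        "/create" in cmdline or "-create" in cmdline
    )

# True positive test: the rule must fire when the attack happens.
attack = {
    "Image": r"C:\Windows\System32\schtasks.exe",
    "CommandLine": r'schtasks.exe /create /tn "Updater" /tr C:\Temp\evil.exe',
}
assert detects_schtasks_create(attack)

# True negative test: the rule must stay silent during normal operations.
benign = {
    "Image": r"C:\Windows\System32\schtasks.exe",
    "CommandLine": "schtasks.exe /query /fo LIST",
}
assert not detects_schtasks_create(benign)

# Edge case test: a renamed binary evades this logic -- a limitation worth
# recording so analysts know the rule's blind spots.
renamed = {"Image": r"C:\Temp\svc.exe", "CommandLine": "svc.exe /create /tn x"}
assert not detects_schtasks_create(renamed)

print("all detection tests passed")
```

The edge case here fails on purpose: documenting what a rule does not catch is as valuable as documenting what it does.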

Phase 5: Deploy

Move the rule from your testing environment into production. In Lab 8.3, you will convert a Sigma rule to Wazuh format and deploy it to a live SIEM.

Phase 6: Tune

The real work begins after deployment. Every rule generates some noise. Tuning means:

  • Adding exclusions for known-good activity (Lab 8.5 focuses entirely on this)
  • Adjusting field matching to be more specific
  • Changing severity levels based on observed alert volume
  • Adding context fields to help analysts triage faster
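
As a sketch of the first bullet, an exclusion added during tuning is just another named filter in the Sigma detection block. The patch-management agent below is a hypothetical example of known-good activity, not a real product name:

```yaml
detection:
    selection:
        Image|endswith: '\schtasks.exe'
        CommandLine|contains: '/create'
    # Hypothetical exclusion added during tuning: a patch-management agent
    # that legitimately creates scheduled tasks every night.
    filter_patch_agent:
        ParentImage|endswith: '\patchagent.exe'
    condition: selection and not filter_patch_agent
```

Naming the filter after the excluded activity keeps the tuning history readable when the rule is revisited months later.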

Phase 7: Retire

Rules become obsolete. The software gets patched. The attack technique changes. A better rule replaces the old one. Retirement is a deliberate decision, not neglect.

The Three Layers of Detection

A mature SOC does not rely on a single detection layer. Attackers who evade network detection may be visible on the endpoint. Attackers who evade endpoint detection may leave traces in authentication logs. True defense in depth means detection at every layer.

| Layer | What It Sees | Tools You Know | Example Detection |
| --- | --- | --- | --- |
| Network | Traffic between hosts, DNS queries, HTTP requests, TLS metadata | Suricata, EveBox | "Alert on DNS queries to known C2 domains" |
| Endpoint | Process creation, file system changes, registry modifications, memory | Velociraptor, YARA | "Alert when powershell.exe spawns from winword.exe" |
| Log-based | Authentication events, application logs, cloud audit trails | Wazuh, OpenSearch | "Alert on 5+ failed logins from one source IP in 5 minutes" |
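
The log-based example ("5+ failed logins from one source IP in 5 minutes") is a threshold aggregation. A minimal sketch of that logic, with illustrative field names; a real SIEM applies this at ingestion time rather than in application code:

```python
from collections import defaultdict

def failed_login_alerts(events, threshold=5, window=300):
    """Return source IPs with >= threshold failures inside `window` seconds."""
    by_ip = defaultdict(list)
    for ev in events:
        by_ip[ev["src_ip"]].append(ev["timestamp"])
    alerts = set()
    for ip, times in by_ip.items():
        times.sort()
        # Sliding window: compare each event with the (threshold-1)-th after it.
        for i in range(len(times) - threshold + 1):
            if times[i + threshold - 1] - times[i] <= window:
                alerts.add(ip)
                break
    return alerts

# Five failures in 200 seconds -> alert; three slow failures -> no alert.
events = [{"src_ip": "10.0.0.9", "timestamp": t} for t in range(0, 250, 50)]
events += [{"src_ip": "10.0.0.7", "timestamp": t} for t in (0, 400, 800)]
print(failed_login_alerts(events))  # → {'10.0.0.9'}
```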
💡 ATT&CK as your detection roadmap. MITRE ATT&CK maps every known attack technique. Each technique lists the data sources required to detect it. If your ATT&CK coverage map (Module 1, Lab 1.2) shows gaps, those gaps tell you exactly which Sigma rules to write next.

Why Sigma Changes Everything

Before Sigma, detection rules were written in vendor-specific formats. A Wazuh rule only works in Wazuh. A Splunk search only works in Splunk. An Elastic query only works in Elasticsearch. If your organization switches SIEMs, every custom rule must be rewritten from scratch.

Sigma solves this problem. Created by Florian Roth and Thomas Patzke, Sigma is a generic signature format for SIEM systems — the YARA of log-based detection. You write a rule once in Sigma's YAML format, then convert it to whatever SIEM you use.

Sigma as the universal detection language — write once, convert to any SIEM

title: Suspicious Scheduled Task Creation
id: 1b39d014-e3f2-4e4e-987c-0a0a9a0e4c2d
status: test
description: Detects creation of scheduled tasks from unusual locations
logsource:
    category: process_creation
    product: windows
detection:
    selection:
        Image|endswith: '\\schtasks.exe'
        CommandLine|contains:
            - '/create'
            - '-create'
    filter:
        ParentImage|endswith:
            - '\\cmd.exe'
            - '\\powershell.exe'
        User|contains: 'SYSTEM'
    condition: selection and not filter
level: medium
tags:
    - attack.persistence
    - attack.t1053.005

This single YAML file can be converted to:

| Target SIEM | Conversion Command |
| --- | --- |
| Wazuh / OpenSearch | sigma convert -t opensearch rule.yml |
| Splunk | sigma convert -t splunk rule.yml |
| Elastic / Kibana | sigma convert -t elasticsearch rule.yml |
| Microsoft Sentinel | sigma convert -t microsoft365defender rule.yml |
| QRadar | sigma convert -t qradar rule.yml |

Sigma conversion is not magic. Not every rule converts perfectly to every backend. Field name mappings differ across SIEMs, and some detection features (such as aggregation or near-real-time correlation) have no equivalent in every target. In Lab 8.4, you will see conversion challenges firsthand and learn how to troubleshoot them.

The SigmaHQ Repository

The power of Sigma extends beyond writing your own rules. The SigmaHQ repository on GitHub contains over 3,000 community-contributed rules covering:

  • Windows process creation and command-line detections
  • PowerShell execution patterns (encoded commands, download cradles, AMSI bypass)
  • Persistence techniques (registry run keys, scheduled tasks, services, WMI)
  • Lateral movement indicators (PsExec, WMI, DCOM, WinRM)
  • Privilege escalation patterns
  • Defense evasion techniques
  • Credential access (Mimikatz, LSASS access, credential dumping)
  • Web server attacks (SQL injection, webshell activity, path traversal)
  • Linux and macOS detections
  • Cloud and container security detections

In your lab environment, all 3,047+ SigmaHQ rules are pre-installed at /opt/sigma-rules/. In Lab 8.6, you will browse this repository, select high-value rules, batch-convert them, and deploy them to Wazuh.
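
The "select high-value rules" step amounts to filtering rule metadata. A sketch of that idea, using inline dicts with illustrative titles and priority tags in place of parsing the actual YAML files under /opt/sigma-rules/:

```python
# Sketch: select rules worth deploying first, by severity level or by
# membership in a priority ATT&CK technique list. Rule data is illustrative.

PRIORITY_TAGS = {"attack.t1053.005", "attack.t1059.001"}  # illustrative choices

rules = [
    {"title": "Suspicious Scheduled Task Creation", "level": "medium",
     "tags": ["attack.persistence", "attack.t1053.005"]},
    {"title": "Encoded PowerShell Command", "level": "high",
     "tags": ["attack.execution", "attack.t1059.001"]},
    {"title": "Rare Legitimate Tool Usage", "level": "low",
     "tags": ["attack.discovery"]},
]

# Keep a rule if it is high/critical severity OR covers a priority technique.
selected = [
    r for r in rules
    if r["level"] in {"high", "critical"} or PRIORITY_TAGS & set(r["tags"])
]
print([r["title"] for r in selected])
# → ['Suspicious Scheduled Task Creation', 'Encoded PowerShell Command']
```

Batch conversion then runs sigma convert over only the selected files, which keeps initial alert volume manageable.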

Measuring Detection Quality

Writing rules is only half the job. You need to measure whether your detections actually work. Mature SOC teams track these metrics:

| Metric | What It Measures | Target |
| --- | --- | --- |
| Detection coverage | Percentage of ATT&CK techniques with at least one rule | > 60% for top-priority techniques |
| Mean time to detect (MTTD) | Time from attack execution to first alert | < 5 minutes for critical techniques |
| False positive rate | Alerts that turn out to be benign, per rule | < 5 per day per rule |
| Detection latency | Time from log ingestion to rule evaluation | < 60 seconds |
| Rule health | Rules that fire at least once per month (not broken/stale) | > 90% of deployed rules |
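
Two of these metrics fall straight out of triaged alert data. A sketch with illustrative, made-up numbers (a month of alerts, three deployed rules):

```python
from collections import Counter

# Sketch: compute false positive rate and rule health from triaged alerts.
# Alert records, rule names, and "benign" verdicts are illustrative.

alerts = [
    {"rule": "schtasks_create", "benign": True},
    {"rule": "schtasks_create", "benign": False},
    {"rule": "encoded_ps", "benign": True},
    {"rule": "encoded_ps", "benign": True},
]
deployed_rules = {"schtasks_create", "encoded_ps", "stale_rule"}
days = 30

# False positive rate: benign alerts per rule per day.
fp_counts = Counter(a["rule"] for a in alerts if a["benign"])
fp_per_day = {rule: n / days for rule, n in fp_counts.items()}

# Rule health: fraction of deployed rules that fired at least once.
fired = {a["rule"] for a in alerts}
rule_health = len(fired & deployed_rules) / len(deployed_rules)

print(fp_per_day)
print(round(rule_health, 2))  # → 0.67 (stale_rule never fired)
```

A rule that never fires drags health down; it is either broken, mis-deployed, or a retirement candidate.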
🚨 A rule that fires 200 times a day is worse than no rule at all. Noisy rules cause alert fatigue. Analysts start ignoring them. The one real attack buried in 200 false positives gets missed. In Lab 8.5, you will take a rule generating 200 alerts per day and tune it down to fewer than 5 — without losing real detections.

Key Takeaways

  • Detection engineering is the shift from consuming alerts to creating them — the discipline of building, testing, deploying, and tuning detection rules
  • The detection lifecycle has seven phases: hypothesis, research, write, test, deploy, tune, and retire
  • Mature SOCs detect at three layers: network (Suricata), endpoint (Velociraptor/YARA), and log-based (Wazuh/Sigma)
  • Sigma is a vendor-neutral rule format: write once, convert to any SIEM using sigma convert
  • SigmaHQ provides 3,000+ community rules pre-installed in your lab at /opt/sigma-rules/
  • Detection quality is measured by coverage, MTTD, false positive rate, and rule health — not just rule count
  • A noisy rule is worse than no rule: every detection must be tuned for the environment it runs in

What's Next

Now that you understand why detection engineering matters and where Sigma fits in the SOC, Lesson 8.2 breaks down the Sigma rule structure in detail — every field, every keyword, and what each one controls. You will learn to read any Sigma rule fluently before writing your own in Lab 8.1.

Knowledge Check: Why Detection Engineering Matters

10 questions · 70% to pass

1. What is the primary shift that detection engineering represents for a SOC analyst?

2. Which phase of the detection lifecycle addresses the problem of rules generating too many false positives?

3. What core problem does Sigma solve for detection teams?

4. In the lab environment, Sigma rules from the SigmaHQ community repository are pre-installed. Approximately how many rules are included?

5. Which three detection layers provide defense in depth in a mature SOC?

6. A detection engineer reads a threat report about attackers using schtasks.exe for persistence. Which detection lifecycle phase does this represent?

7. In Lab 8.5, you will tune a noisy rule that fires 200 times per day. What is the target false positive rate per rule per day that a mature SOC aims for?

8. Why is a rule that fires 200 times a day described as 'worse than no rule at all'?

9. In Lab 8.1, you will analyze 5 different Sigma rules. Which ATT&CK technique does the scheduled task creation rule in this lesson map to?

10. What command converts a Sigma rule to Wazuh/OpenSearch format?
