What You'll Learn
- Navigate the SigmaHQ repository structure and understand how rules are organized by category, platform, and ATT&CK technique
- Evaluate Sigma rule quality using status, date, references, false positive documentation, and detection logic depth
- Select high-value rules for deployment based on your organization's threat model, log sources, and coverage gaps
- Batch-convert and deploy multiple SigmaHQ rules to Wazuh using sigma-cli
- Use SigmaHQ to rapidly close ATT&CK coverage gaps identified in threat hunts or incident reviews
- Apply SigmaHQ knowledge in Lab 8.6 to deploy 10 community rules to a live SIEM
The Power of Community Detection
Writing every detection rule from scratch is like writing every software library from scratch — technically possible, but wildly impractical. Just as developers use open-source libraries, detection engineers use community-contributed rules.
The SigmaHQ repository is the largest open-source collection of Sigma detection rules in the world. Maintained by the Sigma project team and hundreds of contributors, it contains over 3,000 rules covering every major operating system, attack technique, and log source. In your lab environment, the full SigmaHQ repository is pre-installed at /opt/sigma-rules/.
Repository Structure
The SigmaHQ repository follows a consistent directory structure organized by platform and log type:
/opt/sigma-rules/
├── rules/
│ ├── windows/
│ │ ├── builtin/
│ │ │ ├── security/ # Windows Security Event Log
│ │ │ ├── system/ # Windows System Event Log
│ │ │ ├── application/ # Windows Application Event Log
│ │ │ └── ...
│ │ ├── process_creation/ # Sysmon/4688 process events
│ │ ├── file_event/ # File creation/modification
│ │ ├── registry_event/ # Registry changes
│ │ ├── network_connection/ # Outbound connections
│ │ ├── image_load/ # DLL loading
│ │ ├── pipe_created/ # Named pipe events
│ │ └── powershell/ # PowerShell script block logging
│ ├── linux/
│ │ ├── auditd/ # Linux audit daemon
│ │ ├── process_creation/ # Process exec events
│ │ └── ...
│ ├── macos/
│ ├── cloud/
│ │ ├── aws/ # AWS CloudTrail
│ │ ├── azure/ # Azure Activity Log
│ │ └── gcp/ # GCP Audit Log
│ ├── web/ # Web server logs (Apache, Nginx, IIS)
│ └── network/ # Network device logs
├── rules-emerging-threats/ # Rules for active campaigns
├── rules-threat-hunting/ # Broader hunting queries
└── rules-compliance/ # Compliance-specific detections
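Rather than memorizing this tree, you can enumerate it on any checkout. The sketch below builds a miniature stand-in for the repository layout so it runs anywhere; in the lab, set SIGMA_DIR=/opt/sigma-rules and skip the setup lines. The directory names created here are illustrative.

```shell
# Miniature stand-in for the SigmaHQ layout (illustrative names);
# in the lab, use SIGMA_DIR=/opt/sigma-rules instead of this setup.
SIGMA_DIR=$(mktemp -d)
mkdir -p "$SIGMA_DIR/rules/windows/process_creation" \
         "$SIGMA_DIR/rules/windows/registry_event" \
         "$SIGMA_DIR/rules/linux/auditd" \
         "$SIGMA_DIR/rules/cloud/aws"

# Enumerate every platform/category pair two levels below rules/
find "$SIGMA_DIR/rules" -mindepth 2 -maxdepth 2 -type d | sort
```

On the real repository, the same find prints every log-type directory, which is a quick way to see which categories exist for a given platform.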
Navigating the Repository
Finding Rules by ATT&CK Technique
Every well-written SigmaHQ rule includes ATT&CK tags. To find all rules for a specific technique:
# Find all rules for T1053 (Scheduled Task/Job)
grep -rl "t1053" /opt/sigma-rules/rules/
# Find all rules for credential access tactic
grep -rl "attack.credential_access" /opt/sigma-rules/rules/
# Find all rules for a specific tool (Mimikatz)
grep -rl -i "mimikatz" /opt/sigma-rules/rules/
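The same grep pattern extends to a quick per-technique tally. A minimal sketch, using two throwaway sample rules in place of the real repository (the file names and tag values are invented for illustration); in the lab, point RULE_DIR at /opt/sigma-rules/rules.

```shell
# Sample rules stand in for the repo; in the lab set RULE_DIR=/opt/sigma-rules/rules
RULE_DIR=$(mktemp -d)
printf 'tags:\n    - attack.persistence\n    - attack.t1053.005\n' > "$RULE_DIR/schtasks.yml"
printf 'tags:\n    - attack.credential_access\n    - attack.t1003.001\n' > "$RULE_DIR/lsass.yml"

# Tally matching rules for each technique of interest
for tech in t1053 t1003 t1566; do
    count=$(grep -rl "attack.${tech}" "$RULE_DIR" | wc -l)
    echo "${tech}: ${count} rule(s)"
done
```

A zero count is as useful as a hit: it tells you the community has no coverage for that technique either, and you will need to write the rule yourself.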
Finding Rules by Tool or Malware
# Cobalt Strike detections
grep -rl -i "cobalt.strike\|cobaltstrike" /opt/sigma-rules/rules/ | head -20
# PsExec lateral movement
grep -rl -i "psexec" /opt/sigma-rules/rules/windows/
# Ransomware indicators
grep -rl -i "ransomware\|ransom" /opt/sigma-rules/rules/
Counting Rules by Category
# Total Windows process creation rules
ls /opt/sigma-rules/rules/windows/process_creation/ | wc -l
# Total Linux rules
find /opt/sigma-rules/rules/linux/ -name "*.yml" | wc -l
# Total cloud rules
find /opt/sigma-rules/rules/cloud/ -name "*.yml" | wc -l
Evaluating Rule Quality
Not all 3,000+ rules are equal. Before deploying a community rule, evaluate its quality:
The Quality Checklist
| Criterion | What to Check | Red Flag |
|---|---|---|
| Status | Is it stable, test, or experimental? | experimental rules need validation before production |
| Date and modified | When was it last updated? | Rules older than 2 years without updates may be stale |
| References | Does it link to threat intel, blog posts, or ATT&CK? | No references means the detection rationale is unclear |
| Detection depth | How specific is the logic? Does it use multiple fields? | Single-field rules (e.g., matching only `Image`) fire on routine activity and are trivially bypassed |
| False positives | Are known FPs documented? | Empty falsepositives section suggests the author did not test in a real environment |
| Level | Is the severity appropriate for the detection? | critical on a low-fidelity rule indicates poor calibration |
| Logsource | Does it match logs you actually collect? | A Sysmon rule is useless if you do not deploy Sysmon |
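Parts of this checklist can be roughed out mechanically before reading rules line by line. The sketch below is a crude grep-based triage, not a YAML parser, so treat its flags as prompts for manual review; the two sample files are invented stand-ins for your candidate rules.

```shell
# Crude grep triage of the checklist; sample files stand in for real candidates
CHECK_DIR=$(mktemp -d)
cat > "$CHECK_DIR/good.yml" <<'EOF'
id: a642964e-bead-4bed-8910-1bb4d63e3b4d
status: stable
references:
    - https://attack.mitre.org/software/S0002/
falsepositives:
    - Rare admin tooling
EOF
cat > "$CHECK_DIR/bad.yml" <<'EOF'
status: experimental
EOF

for rule in "$CHECK_DIR"/*.yml; do
    flags=""
    grep -q '^id:' "$rule"             || flags="$flags no-id"
    grep -q '^references:' "$rule"     || flags="$flags no-references"
    grep -q '^falsepositives:' "$rule" || flags="$flags no-falsepositives"
    if grep -q '^status: experimental' "$rule"; then flags="$flags experimental"; fi
    echo "$(basename "$rule"):${flags:- ok}"
done
```

Anything flagged here still needs a human read of the detection logic itself, which no script can judge for you.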
Example: High-Quality Rule
title: Mimikatz Command Line Execution
id: a642964e-bead-4bed-8910-1bb4d63e3b4d
status: stable
description: Detects well-known Mimikatz command line arguments
references:
- https://www.slideshare.net/haborlabs/advanced-incident-detection-and-threat-hunting
- https://attack.mitre.org/software/S0002/
author: Teymur Kheirkhabarov, oscd.community
date: 2019/10/22
modified: 2023/02/04
logsource:
category: process_creation
product: windows
detection:
selection_1:
CommandLine|contains:
- 'sekurlsa::'
- 'kerberos::'
- 'crypto::'
- 'lsadump::'
- 'privilege::'
- 'token::'
- 'vault::'
selection_2:
CommandLine|contains|all:
- 'DumpCreds'
- 'DumpCerts'
condition: 1 of selection_*
fields:
- CommandLine
- ParentImage
- User
falsepositives:
- Legitimate administration tools that include these strings (unlikely)
level: critical
tags:
- attack.credential_access
- attack.t1003.001
- attack.t1003.002
- attack.t1003.004
- attack.t1003.005
- attack.t1003.006
Why this is high quality:
- Status: stable — production-tested
- Multiple references — linked to threat intel and ATT&CK
- Multiple technique tags — covers several LSASS/credential dumping sub-techniques
- Detection depth — matches specific Mimikatz module names, not just the binary name
- False positives documented — explicitly acknowledges the (unlikely) legitimate scenario
- Level: critical — appropriate because Mimikatz execution indicates active credential theft
Example: Low-Quality Rule (Proceed with Caution)
title: PowerShell Execution
status: experimental
description: Detects PowerShell execution
logsource:
category: process_creation
product: windows
detection:
selection:
Image|endswith: '\\powershell.exe'
condition: selection
level: medium
Why this is low quality:
- No ID — cannot be tracked or referenced
- Experimental — untested
- No references — no context for why this matters
- Detection logic is too broad — fires on every PowerShell execution, which happens hundreds of times per day on any Windows system
- No false positives section — author did not consider noise
- No ATT&CK tags — cannot be mapped to the framework
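For contrast, here is one way the rule above could be tightened. This is a sketch, not an official SigmaHQ rule: the id is a placeholder to be replaced with a fresh UUID, and the command-line strings are illustrative examples of common obfuscation markers rather than a vetted indicator list.

```yaml
title: Suspicious PowerShell Encoded Command
id: 00000000-0000-0000-0000-000000000000   # placeholder - generate a fresh UUID
status: experimental
description: Detects PowerShell started with an encoded command or download cradle, a common obfuscation pattern
references:
    - https://attack.mitre.org/techniques/T1059/001/
logsource:
    category: process_creation
    product: windows
detection:
    selection_img:
        Image|endswith: '\powershell.exe'
    selection_cli:
        CommandLine|contains:
            - ' -enc '
            - ' -EncodedCommand '
            - 'DownloadString'
            - 'IEX ('
    condition: selection_img and selection_cli
falsepositives:
    - Administrative scripts that legitimately use encoded commands
level: medium
tags:
    - attack.execution
    - attack.t1059.001
```

Pairing the broad image match with command-line markers keeps the rule simple but only fires when an obfuscation indicator is also present, which removes the hundreds of daily benign PowerShell launches.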
Selecting Rules for Your Environment
Not every SigmaHQ rule belongs in your SIEM. Select rules based on:
1. Your Threat Model
What threats are most relevant to your organization?
| Org Type | Priority Rules |
|---|---|
| Financial services | Credential theft, lateral movement, data exfiltration |
| Healthcare | Ransomware, PHI access, privilege escalation |
| Technology | Supply chain attacks, code signing abuse, CI/CD pipeline compromise |
| Government | APT techniques, persistence, C2 communication |
2. Your Log Sources
A rule is useless if you do not collect the logs it requires:
# Check which logsources your Wazuh instance collects
# In Lab 8.6, verify log sources before deploying rules:
# - Do you have Sysmon? → process_creation rules will work
# - Do you have PowerShell logging? → PowerShell rules will work
# - Do you have DNS logging? → DNS exfiltration rules will work
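One way to operationalize this check is to inventory which logsource categories your selected rules actually require, then compare that list against what your collectors produce. A minimal grep sketch; the sample files are invented stand-ins for your /tmp/selected-rules directory.

```shell
# Sample rules stand in for your selected-rules directory
SEL_DIR=$(mktemp -d)
printf 'logsource:\n    category: process_creation\n    product: windows\n' > "$SEL_DIR/r1.yml"
printf 'logsource:\n    category: process_creation\n    product: windows\n' > "$SEL_DIR/r2.yml"
printf 'logsource:\n    category: dns_query\n    product: windows\n' > "$SEL_DIR/r3.yml"

# Count how many selected rules need each logsource category
grep -h 'category:' "$SEL_DIR"/*.yml | sort | uniq -c | sort -rn
```

If a category in the output has no matching collector in your environment, drop or defer those rules: they can never fire.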
3. Your ATT&CK Coverage Gaps
Use your ATT&CK Navigator coverage map (from Lab 1.2) to identify techniques with zero detections, then find SigmaHQ rules that cover those gaps:
# Example: You have no detection for T1547.001 (Registry Run Keys)
grep -rl "t1547.001" /opt/sigma-rules/rules/windows/registry_event/
# Result: 3 rules covering registry persistence
# Example: You have no detection for T1071.001 (Web Protocols for C2)
grep -rl "t1071.001" /opt/sigma-rules/rules/
# Result: 5 rules across network and proxy categories
Batch Conversion and Deployment
Once you have selected your rules, convert and deploy them in bulk:
Step 1: Select Rules
# Create a directory for your selected rules
mkdir -p /tmp/selected-rules
# Copy your chosen rules
cp /opt/sigma-rules/rules/windows/process_creation/proc_creation_win_mimikatz_*.yml /tmp/selected-rules/
cp /opt/sigma-rules/rules/windows/process_creation/proc_creation_win_psexec_*.yml /tmp/selected-rules/
cp /opt/sigma-rules/rules/windows/builtin/security/win_security_susp_failed_logons_*.yml /tmp/selected-rules/
Step 2: Batch Convert
# Convert all selected rules to OpenSearch format
sigma convert -t opensearch -p windows /tmp/selected-rules/ > converted_rules.ndjson
# Preview what each rule converts to
for rule in /tmp/selected-rules/*.yml; do
  echo "=== $(basename "$rule") ==="
  sigma convert -t opensearch -p windows "$rule"
  echo ""
done
Step 3: Verify Conversion
Check each converted query for completeness. Some rules may produce warnings:
# Convert with verbose output to see warnings
sigma convert -t opensearch -p windows -v /tmp/selected-rules/ 2>&1
Step 4: Deploy to Wazuh
Import each converted query as an OpenSearch alert monitor in the Wazuh dashboard (Alerting → Monitors → Create Monitor).
Start small, expand strategically. Do not deploy all 3,000 rules at once. Start with 10-20 high-confidence rules for your top threats. Monitor for a week. Tune as needed. Then add the next batch. In Lab 8.6, you deploy exactly 10 rules — a realistic first deployment.
Emerging Threats and Hunting Rules
Beyond the core rules, SigmaHQ includes two special categories:
rules-emerging-threats/
Rules for active threat campaigns — published quickly when new CVEs are exploited or new malware families are discovered. These rules often carry experimental or test status and may need tuning, but they provide rapid coverage for breaking threats.
# Browse emerging threat rules
ls /opt/sigma-rules/rules-emerging-threats/
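Because these rules skew toward experimental or test status, it is worth tallying status values before deploying a batch. A sketch with invented sample files; in the lab, point ET_DIR at /opt/sigma-rules/rules-emerging-threats instead.

```shell
# Sample files stand in for rules-emerging-threats/
ET_DIR=$(mktemp -d)
printf 'title: Campaign A Exploit Attempt\nstatus: experimental\n' > "$ET_DIR/campaign_a.yml"
printf 'title: Campaign B Loader\nstatus: test\n' > "$ET_DIR/campaign_b.yml"

# Tally rules by status before deciding what to deploy
grep -h '^status:' "$ET_DIR"/*.yml | sort | uniq -c
```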
rules-threat-hunting/
Broader, higher-noise rules designed for proactive threat hunting rather than continuous alerting. These rules cast a wider net and are meant to be run manually during hunting sessions, not deployed as 24/7 monitors.
# Browse hunting rules
ls /opt/sigma-rules/rules-threat-hunting/
Keeping Your Rules Current
The SigmaHQ repository is updated frequently. In a production environment:
# Update to latest rules
cd /opt/sigma-rules && git pull
# Check for new rules since your last update
git log --oneline --since="2 weeks ago" --name-only -- rules/
Schedule weekly or bi-weekly updates. Review new and modified rules before deploying them.
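The review step can lean on git itself: after a pull, diff the rule files that changed since the commit you last reviewed. The sketch builds a throwaway two-commit repository so it runs anywhere (file contents and names are illustrative); in the lab, run the final diff inside /opt/sigma-rules against your last known-good commit.

```shell
# Throwaway repo stands in for /opt/sigma-rules (illustrative content)
REPO=$(mktemp -d)
git -C "$REPO" init -q
git -C "$REPO" config user.email detections@example.com   # placeholder identity
git -C "$REPO" config user.name "Detection Team"
mkdir -p "$REPO/rules/windows"
echo 'title: Old Rule' > "$REPO/rules/windows/old.yml"
git -C "$REPO" add . && git -C "$REPO" commit -qm 'baseline'
echo 'title: New Rule' > "$REPO/rules/windows/new.yml"
git -C "$REPO" add . && git -C "$REPO" commit -qm 'weekly update'

# Which rule files changed in the latest update?
git -C "$REPO" diff --name-only HEAD~1 HEAD -- rules/
```

Reviewing only the changed files keeps the weekly update cycle manageable even as the repository grows.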
Key Takeaways
- SigmaHQ contains 3,000+ community rules organized by platform (Windows, Linux, cloud), log type (process creation, registry, network), and purpose (detection, hunting, compliance)
- Evaluate quality before deploying: check status, references, detection depth, false positive documentation, and ATT&CK tags
- Select rules based on your environment: threat model, available log sources, and ATT&CK coverage gaps
- Batch convert with sigma convert -t opensearch -p windows /path/to/rules/ to process multiple rules at once
- Start with 10-20 high-confidence rules, monitor, tune, then expand — do not deploy everything at once
- Emerging threat rules provide rapid coverage for active campaigns; hunting rules are meant for manual investigation, not continuous monitoring
- Update regularly — the repository evolves with the threat landscape
What's Next
You have completed Module 8: Sigma — Detection Engineering for the SOC. You can now write Sigma rules from scratch, convert them to any SIEM, deploy them, tune them for your environment, and leverage the 3,000+ community rules in SigmaHQ. In Module 9, you will shift from detection to response — learning the incident response lifecycle and case management with TheHive and Cortex.
Knowledge Check: SigmaHQ — 3,000+ Rules
10 questions · 70% to pass
In the lab environment, where is the SigmaHQ repository pre-installed?
What command would you use to find all SigmaHQ rules that detect Mimikatz?
Which of the following is a red flag when evaluating a SigmaHQ rule's quality?
In Lab 8.6, you deploy 10 rules from SigmaHQ. Why should you start with a small batch rather than deploying all 3,000+ rules?
How are SigmaHQ rules organized in the repository directory structure?
What is the difference between rules in 'rules-emerging-threats/' and 'rules-threat-hunting/'?
In Lab 8.6, you verify that deployed rules are active in Wazuh. What makes the Mimikatz command line detection rule 'high quality'?
Before deploying a SigmaHQ rule for Sysmon process creation events, what must you verify about your environment?
How would you use SigmaHQ to close an ATT&CK coverage gap for T1547.001 (Registry Run Keys)?
What is the recommended approach for keeping SigmaHQ rules current in a production environment?