What You'll Learn
- Extract files from packet captures using Wireshark Export Objects, NetworkMiner, and command-line tools (foremost, binwalk)
- Identify Command & Control (C2) beacon patterns by analyzing connection intervals, jitter, and sleep timers
- Detect DNS-based attacks including tunneling, Domain Generation Algorithms (DGAs), and abuse of TXT records
- Apply JA3/JA3S TLS fingerprinting and certificate analysis to identify malicious encrypted traffic
- Build network forensic timelines that reconstruct attacker activity from packet-level evidence
- Correlate network findings with endpoint telemetry to produce complete incident narratives
- Document network-based findings in formats suitable for escalation and legal proceedings
In the previous lessons you learned to read packets, write filters, and follow streams. Now you put those skills to work on the two questions that define network forensics: what did the attacker transfer, and how did they maintain control?
File extraction pulls malware samples, exfiltrated documents, and staging payloads directly out of PCAPs. C2 detection reveals the command channel the attacker uses to persist in the environment. Together, they transform raw packet captures into actionable evidence that drives containment decisions.
File Extraction from Packet Captures
Every file that crosses the network — malware droppers, exfiltrated databases, staging scripts — exists inside the PCAP as a sequence of reassembled TCP segments. Your job is to carve those files back out.
Wireshark Export Objects
Wireshark's built-in extraction handles the most common protocols with zero configuration.
Navigate to File → Export Objects and select the protocol:
| Protocol | What It Extracts | Typical Findings |
|---|---|---|
| HTTP | Files transferred over unencrypted HTTP | Malware downloads, webshell uploads, exfiltrated archives |
| SMB | Files shared over Windows file sharing | Lateral movement payloads, ransomware binaries |
| TFTP | Files transferred via Trivial FTP | Bootkit payloads, router config exfiltration |
| IMF | Email messages (Internet Message Format) | Phishing attachments, data exfiltration via email |
| DICOM | Medical imaging files | Relevant in healthcare-targeted attacks |
For HTTP extraction, Wireshark reassembles the TCP stream, strips HTTP headers, and writes the raw file content to disk. You get the original filename, content type, and the hostname it was downloaded from.
File → Export Objects → HTTP
Select the suspicious file → Save
Click Save All (button at the bottom of the dialog) to export every HTTP object in the PCAP
Treat every extracted file as potentially malicious. Never open extracted executables on your analysis workstation. Transfer them to an isolated sandbox or submit hashes to VirusTotal before any interaction. Use sha256sum (Linux) or Get-FileHash (PowerShell) immediately after extraction.
NetworkMiner — Automated Extraction
NetworkMiner takes a different approach: load a PCAP and it automatically extracts every file, image, credential, and session it can find. It excels at bulk extraction where Wireshark requires manual protocol selection.
# Linux
mono /opt/NetworkMiner/NetworkMiner.exe capture.pcap
# Windows
NetworkMiner.exe → File → Open → capture.pcap
NetworkMiner organizes results into tabs: Files, Images, Messages, Credentials, Sessions, DNS. The Files tab shows every extracted object with metadata — filename, source/destination IP, protocol, size, and MD5 hash. This hash-on-extract capability is critical for chain-of-custody documentation.
| Feature | Wireshark Export | NetworkMiner |
|---|---|---|
| Protocol support | HTTP, SMB, TFTP, IMF, DICOM | HTTP, FTP, SMB, SMTP, POP3, IMAP, TFTP |
| Automatic extraction | No (manual per protocol) | Yes (all protocols at once) |
| Hash calculation | No (manual after export) | Yes (MD5 on extraction) |
| Credential extraction | No | Yes (cleartext passwords, NTLM hashes) |
| Reassembly quality | Excellent | Good (occasional misses on fragmented streams) |
Command-Line Carving: foremost and binwalk
When protocol-aware extraction fails — corrupted streams, custom protocols, or partially captured transfers — file carving recovers files based on their binary signatures (magic bytes) rather than protocol structure.
foremost scans raw data for known file headers and footers:
foremost -t exe,pdf,zip,png,doc -i capture.pcap -o carved_files/
ls -la carved_files/
foremost creates subdirectories by file type. It works on raw PCAPs, disk images, or any binary blob. The -t flag specifies which file types to carve.
binwalk identifies embedded files and compressed archives:
binwalk -e suspicious_payload.bin
binwalk --dd='.*' captured_stream.raw
binwalk is particularly effective against staged payloads where attackers embed executables inside images, append archives to legitimate files, or use custom packing.
Extraction workflow for investigations: Start with Wireshark Export Objects for clean HTTP/SMB extractions. Run NetworkMiner for bulk extraction with automatic hashing. Fall back to foremost/binwalk for corrupted or protocol-agnostic carving. Hash everything with SHA-256 and log each extraction in your forensic notes.
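The hash-and-log step of this workflow is easy to script. The sketch below walks an extraction directory (the directory name is whatever your carving tools produced), computes a SHA-256 per file, and returns one log entry per file suitable for pasting into forensic notes:

```python
import hashlib
import os
import datetime

def hash_extracted_files(directory):
    """Walk an extraction directory, SHA-256 each file, return log entries."""
    entries = []
    for root, _dirs, files in os.walk(directory):
        for name in sorted(files):
            path = os.path.join(root, name)
            sha256 = hashlib.sha256()
            with open(path, "rb") as f:
                # Read in chunks so large carved files don't exhaust memory
                for chunk in iter(lambda: f.read(65536), b""):
                    sha256.update(chunk)
            entries.append({
                "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "file": path,
                "size": os.path.getsize(path),
                "sha256": sha256.hexdigest(),
            })
    return entries

for entry in hash_extracted_files("carved_files/"):
    print(entry["sha256"], entry["size"], entry["file"])
```

Run it immediately after each extraction pass so the hashes are recorded before any file is moved or opened.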
Command & Control Beacon Detection
C2 frameworks — Cobalt Strike, Sliver, Metasploit, Havoc — all share a fundamental network behavior: the implant periodically calls home to receive commands. This beaconing pattern is the single most reliable network indicator of compromise.
Beacon Anatomy
A C2 beacon follows a predictable cycle:
- Sleep — implant waits a configured interval (e.g., 60 seconds)
- Check-in — implant sends an HTTP/HTTPS/DNS request to the C2 server
- Task retrieval — server responds with commands (or an empty response if no tasks)
- Task execution — implant runs the command and sends results on the next check-in
Detecting Regular Intervals
The most basic beacon detection is statistical: measure the time delta between connections from an internal host to the same external destination.
# tshark: extract connection timestamps to a specific destination
tshark -r capture.pcap -Y "ip.dst == 203.0.113.50 && tcp.flags.syn == 1" \
-T fields -e frame.time_epoch | awk '{
if (NR > 1) { delta = $1 - prev; print delta }
prev = $1
}'
A legitimate web browsing session produces irregular deltas: 0.2s, 45s, 3s, 120s, 0.8s. A C2 beacon produces deltas clustered around the sleep interval: 60.1s, 59.8s, 60.3s, 59.9s.
Key metrics for beacon analysis:
| Metric | Calculation | What It Reveals |
|---|---|---|
| Mean interval | Average of all deltas | The configured sleep time |
| Standard deviation | Spread around the mean | Low = rigid beacon; high = heavy jitter or human activity |
| Jitter percentage | (max − min) / mean × 100 | Operators commonly configure 0-50% jitter |
| Session duration | Last timestamp − first timestamp | Long-lived sessions (hours/days) suggest implant persistence |
| Bytes per request | Average payload size | Small, uniform requests suggest check-ins; large bursts suggest data exfiltration |
Jitter Analysis
Sophisticated C2 operators configure jitter — random variation added to the sleep timer — to evade simple interval-based detection. Cobalt Strike, for example, defaults to 0% jitter but operators commonly set 10-50%.
A 60-second beacon with ±20% jitter produces intervals between 48 and 72 seconds (Cobalt Strike applies jitter as a reduction only, so its intervals fall between 48 and 60 seconds). The distribution remains tight enough to detect statistically but irregular enough to defeat simple threshold rules.
import statistics
intervals = [61.2, 58.4, 63.1, 57.9, 62.5, 59.1, 60.8, 58.2, 63.4, 61.0]
mean = statistics.mean(intervals)
stdev = statistics.stdev(intervals)
jitter_pct = ((max(intervals) - min(intervals)) / mean) * 100
print(f"Mean: {mean:.1f}s, StdDev: {stdev:.1f}s, Jitter: {jitter_pct:.1f}%")
# Mean: 60.6s, StdDev: 2.1s, Jitter: 9.1%
Detection threshold guidance: A standard deviation below 5% of the mean interval across 20+ connections is a strong beacon indicator. Human-driven browsing typically shows standard deviation exceeding 50% of the mean. The gap between these values is where your detection rules operate.
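That guidance can be expressed directly as code. The sketch below implements the lesson's heuristic thresholds (5% and 50% of the mean, 20+ samples); they are starting points to tune against your own traffic, not universal constants:

```python
import statistics

def classify_intervals(intervals, min_samples=20):
    """Heuristic beacon classifier: stdev relative to mean interval."""
    if len(intervals) < min_samples:
        return "insufficient-data"
    mean = statistics.mean(intervals)
    stdev = statistics.stdev(intervals)
    ratio = stdev / mean
    if ratio < 0.05:
        return "likely-beacon"   # rigid timing, strong C2 indicator
    if ratio > 0.50:
        return "likely-human"    # irregular, browsing-like
    return "inconclusive"        # heavy jitter or mixed traffic

# Deltas clustered near 60s, as a beacon would produce
beacon = [60 + (i % 5 - 2) * 0.4 for i in range(30)]
print(classify_intervals(beacon))   # → likely-beacon
```

Feed it the deltas produced by the tshark/awk pipeline shown earlier in this section.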
Sleep Timer Fingerprinting
Common C2 frameworks use default sleep intervals that analysts can fingerprint:
| Framework | Default Sleep | Default Jitter | Common Custom Values |
|---|---|---|---|
| Cobalt Strike | 60s | 0% | 30s, 300s, 900s (with 10-50% jitter) |
| Sliver | 60s | 0% | 5s-3600s configurable |
| Metasploit (Meterpreter) | 5s | 0% | Often left at default |
| Havoc | 2s | 0% | Configurable per listener |
| Mythic | 10s | 0% | Agent-specific |
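The defaults table lends itself to a simple lookup. The sketch below matches an observed mean interval against those defaults; because operators routinely change sleep values, treat any match as a weak hint toward a framework, never as attribution:

```python
# Default sleep intervals from the table above (seconds)
FRAMEWORK_DEFAULT_SLEEP = {
    "Cobalt Strike": 60.0,
    "Sliver": 60.0,
    "Metasploit (Meterpreter)": 5.0,
    "Havoc": 2.0,
    "Mythic": 10.0,
}

def candidate_frameworks(mean_interval, tolerance=0.10):
    """Return frameworks whose default sleep is within tolerance of the mean."""
    return sorted(
        name for name, sleep in FRAMEWORK_DEFAULT_SLEEP.items()
        if abs(mean_interval - sleep) <= sleep * tolerance
    )

print(candidate_frameworks(60.6))   # → ['Cobalt Strike', 'Sliver']
```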
DNS Forensics
DNS is the attacker's favorite covert channel. Every network allows DNS traffic. Most organizations do not inspect it beyond basic reputation checks. Attackers exploit this blind spot for tunneling, exfiltration, and command delivery.
DNS Tunneling Detection
DNS tunneling encodes data in DNS queries and responses — typically as Base64 or hex-encoded subdomains. Tools like iodine, dnscat2, and dns2tcp use this technique for full bidirectional communication over DNS.
Indicators of DNS tunneling:
# Normal DNS query
www.example.com → 15 characters
# DNS tunnel query (dnscat2)
aXkd9f3kLmNp2xRt4vWy.tunnel.evil.com → 36 characters
# DNS tunnel query (iodine)
t5sAAAAAHBrZy1maWxlLnR4dA.t.evil.com → 36 characters
| Indicator | Normal DNS | Tunneling DNS |
|---|---|---|
| Subdomain length | 5-25 characters | 30-63 characters (max label length) |
| Query volume | 10-50 queries/min per host | 200-1000+ queries/min |
| Character entropy | Low (readable words) | High (encoded data) |
| Record types | A, AAAA, MX, CNAME | TXT, NULL, CNAME (large response capacity) |
| Unique subdomains | Low (cached, repeated) | Extremely high (every query is unique) |
| Response size | Small (4-16 bytes for A records) | Large (multi-KB TXT responses; DNS over TCP allows messages up to ~64 KB) |
# tshark: find DNS queries with unusually long subdomains
tshark -r capture.pcap -Y "dns.qry.name.len > 40 && dns.flags.response == 0" \
-T fields -e frame.time -e ip.src -e dns.qry.name -e dns.qry.type
# tshark: count DNS queries per source IP (find high-volume talkers)
tshark -r capture.pcap -Y "dns.flags.response == 0" \
-T fields -e ip.src | sort | uniq -c | sort -rn | head -20
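The "character entropy" indicator from the table can be measured with Shannon entropy over the leftmost label. The sketch below combines it with the length check; the 30-character and 3.5-bit thresholds are illustrative starting points, not calibrated detection values:

```python
import math
from collections import Counter

def shannon_entropy(label):
    """Bits of entropy per character in a DNS label."""
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_tunneled(qname, length_threshold=30, entropy_threshold=3.5):
    """Flag a query name whose first label is long and high-entropy."""
    first_label = qname.split(".")[0]
    return (len(first_label) >= length_threshold
            and shannon_entropy(first_label) >= entropy_threshold)

print(looks_tunneled("www.example.com"))                                # → False
print(looks_tunneled("t5sAAAAAHBrZy1maWxlLnR4dA3kLmNp2xRt4vWy.t.evil.com"))  # → True
```

Pipe the `dns.qry.name` values extracted by tshark through a scorer like this to rank an entire capture by tunneling likelihood.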
Domain Generation Algorithms (DGAs)
DGAs generate hundreds or thousands of pseudo-random domain names daily. The malware queries these domains until it finds one the attacker has registered, establishing the C2 channel. This makes domain-based blocklisting nearly impossible.
DGA domain characteristics:
# Human-registered domains
google.com, microsoft.com, github.io
# DGA-generated domains (Conficker-style)
mphtfrynqxwz.net, kldsvboqjztm.com, xnrpfhwaecby.org
# DGA-generated domains (dictionary-based)
bright-ocean-castle.com, red-forest-path.net
Detection focuses on entropy analysis and NXDomain response rates. DGA domains produce a high ratio of NXDOMAIN (non-existent domain) responses because the attacker only registers a small fraction of the generated domains.
# Count NXDomain responses per querying host
tshark -r capture.pcap -Y "dns.flags.rcode == 3" \
-T fields -e ip.dst | sort | uniq -c | sort -rn | head -10
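The NXDomain-ratio logic can be sketched as a per-host check. The thresholds below (80% NXDOMAIN across 50+ unique domains) are illustrative values for this lesson, not calibrated detection rules:

```python
def nxdomain_ratio(rcodes):
    """Fraction of DNS responses that were NXDOMAIN (rcode 3)."""
    if not rcodes:
        return 0.0
    return sum(1 for rcode in rcodes if rcode == 3) / len(rcodes)

def dga_suspect(rcodes, unique_domains, threshold=0.8, min_unique=50):
    """Flag a host whose DNS traffic is mostly NXDOMAIN across many
    never-repeated domains — the characteristic DGA footprint."""
    return nxdomain_ratio(rcodes) >= threshold and unique_domains >= min_unique

# A host resolving mostly nonexistent, unique domains is a DGA suspect
rcodes = [3] * 90 + [0] * 10   # 90% NXDOMAIN
print(dga_suspect(rcodes, unique_domains=95))   # → True
```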
DGA traffic often correlates with active malware infections. If you observe a host generating 50+ NXDomain responses per minute to seemingly random domains, treat it as a confirmed indicator of compromise. Isolate the endpoint immediately and begin forensic acquisition before the malware receives a self-destruct command.
TXT Record and Unusual Record Type Abuse
Attackers prefer TXT records for DNS-based C2 because a TXT record carries free-form text — 255 bytes per string, with multiple strings per record and a 16-bit RDATA length field allowing up to 65,535 bytes in theory — far more than the 4 bytes of an A record. Other abused types include NULL records and CNAME chains.
# Find all TXT record queries (potential data channel)
tshark -r capture.pcap -Y "dns.qry.type == 16" \
-T fields -e frame.time -e ip.src -e dns.qry.name
# Find NULL record queries (rare in legitimate traffic)
tshark -r capture.pcap -Y "dns.qry.type == 10" \
-T fields -e frame.time -e ip.src -e dns.qry.name
HTTPS/TLS Analysis
Encryption blinds traditional packet inspection, but TLS metadata — the handshake, certificates, and fingerprints — reveals more than attackers expect.
JA3/JA3S Fingerprinting
JA3 generates an MD5 hash from the TLS Client Hello parameters (TLS version, cipher suites, extensions, elliptic curves, elliptic curve point formats). Every TLS client — browser, malware implant, curl — produces a characteristic JA3 hash.
JA3 hash components:
TLSVersion,Ciphers,Extensions,EllipticCurves,EllipticCurvePointFormats
Example:
769,47-53-5-10-49161-49162-49171-49172-50-56-19-4,0-10-11,23-24-25,0
→ MD5 → ada70206e40642a3e4461f35503241d5
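The final step is nothing more than an MD5 digest over that comma-separated field string. A minimal sketch:

```python
import hashlib

def ja3_digest(ja3_string):
    """MD5 over the comma-separated Client Hello field string (per the JA3 spec)."""
    return hashlib.md5(ja3_string.encode("ascii")).hexdigest()

ja3 = "769,47-53-5-10-49161-49162-49171-49172-50-56-19-4,0-10-11,23-24-25,0"
print(ja3_digest(ja3))   # 32-character hex digest
```

In practice you rarely compute this by hand — Zeek, Suricata, and tshark plugins emit JA3 hashes directly — but seeing the calculation makes clear why any change to the cipher or extension list produces a different fingerprint.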
JA3S does the same for the Server Hello. The JA3 + JA3S pair uniquely identifies a client-server TLS negotiation. Cobalt Strike, Metasploit, and other frameworks produce known JA3 hashes that differ from legitimate browsers.
# Wireshark display filter for TLS Client Hello
tls.handshake.type == 1
# Zeek: extract JA3 hashes (requires the ja3 package, e.g. zkg install ja3)
zeek -r capture.pcap ja3
cat ssl.log | zeek-cut ja3 server_name
| Fingerprint | Description | Detection Value |
|---|---|---|
| JA3 | Client TLS fingerprint | Identifies C2 implant type regardless of destination |
| JA3S | Server TLS fingerprint | Identifies C2 server configuration |
| JA3 + JA3S pair | Full handshake fingerprint | Highest specificity — links client type to server type |
| JARM | Active server fingerprint (10 crafted probes) | Fingerprints C2 servers by their TLS stack behavior |
Practical JA3 workflow: Export JA3 hashes from your PCAP using Zeek or tshark. Cross-reference against the JA3 fingerprint database and known C2 JA3 values published by threat intelligence providers. Flag any internal host producing a JA3 hash that does not match known browsers (Chrome, Firefox, Edge, Safari).
Certificate Analysis
Self-signed certificates, short validity periods, and generic subject fields are hallmarks of hastily configured C2 infrastructure.
# Extract certificate details from a PCAP (TLS 1.2 and earlier; TLS 1.3 encrypts certificates)
tshark -r capture.pcap -Y "tls.handshake.type == 11" \
-T fields -e ip.dst -e tls.handshake.certificate \
-e x509sat.utf8String -e x509sat.printableString
Red flags in TLS certificates:
| Field | Legitimate | Suspicious |
|---|---|---|
| Issuer | Well-known CA (DigiCert, Let's Encrypt) | Self-signed or unknown CA |
| Validity period | 90 days – 2 years | < 30 days or > 10 years |
| Subject CN | Matches the domain | Generic (localhost, server, test) or random string |
| Subject Alternative Names | Present and matching | Missing or mismatched |
| Serial number | High entropy, unique | Sequential or low-entropy |
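The red-flag table translates into a simple scoring function. The dictionary keys below are illustrative — map them from whatever your certificate parser (tshark fields, Zeek's x509.log, or the `cryptography` library) actually emits:

```python
from datetime import datetime, timedelta

def certificate_red_flags(cert):
    """Score parsed certificate metadata against the red-flag table above.
    The dict keys are illustrative; adapt them to your parser's output."""
    flags = []
    if cert.get("self_signed"):
        flags.append("self-signed")
    validity = cert["not_after"] - cert["not_before"]
    if validity < timedelta(days=30) or validity > timedelta(days=3650):
        flags.append("unusual-validity-period")
    if cert.get("subject_cn", "").lower() in {"localhost", "server", "test", ""}:
        flags.append("generic-subject-cn")
    if not cert.get("subject_alt_names"):
        flags.append("missing-san")
    return flags

suspicious = {
    "self_signed": True,
    "not_before": datetime(2024, 1, 1),
    "not_after": datetime(2024, 1, 8),   # 7-day validity
    "subject_cn": "localhost",
    "subject_alt_names": [],
}
print(certificate_red_flags(suspicious))
```

Multiple flags on one certificate raise confidence; a single flag (e.g. a short-lived Let's Encrypt-style cert) is common in legitimate traffic.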
SNI Inspection
Server Name Indication (SNI) is sent in plaintext during the TLS Client Hello, revealing the intended destination hostname even when the payload is encrypted.
# Extract SNI values
tshark -r capture.pcap -Y "tls.handshake.type == 1" \
-T fields -e ip.src -e ip.dst -e tls.handshake.extensions_server_name
Cross-reference SNI values against threat intelligence feeds. Look for SNI values that do not match the destination IP's reverse DNS, which may indicate domain fronting — a technique where the attacker routes C2 traffic through legitimate CDN infrastructure.
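That cross-reference can be scripted once you have exported SNI values and reverse-DNS results. This is a crude sketch — it compares only the last two labels, where a real implementation should use the Public Suffix List, and CDNs make mismatches noisy, so treat a hit as a lead rather than a verdict:

```python
def registrable_suffix(hostname, depth=2):
    """Crude effective-domain approximation: keep the last `depth` labels.
    A production check should consult the Public Suffix List instead."""
    return ".".join(hostname.lower().rstrip(".").split(".")[-depth:])

def sni_rdns_mismatch(sni, reverse_dns):
    """Flag when the SNI's domain differs from the server's reverse DNS domain
    — a possible (but not conclusive) sign of domain fronting."""
    return registrable_suffix(sni) != registrable_suffix(reverse_dns)

print(sni_rdns_mismatch("cdn.example.com", "edge1.example.com"))    # → False
print(sni_rdns_mismatch("allowed-host.com", "srv7.evil-cdn.net"))   # → True
```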
Building Network Forensic Timelines
Individual findings — an extracted file, a beacon pattern, a tunneling session — become evidence only when assembled into a chronological narrative. The network forensic timeline is your primary deliverable.
Timeline Construction Process
Step 1: Anchor events. Identify the earliest and latest indicators in the PCAP. The first C2 beacon marks initial access time. The last data transfer marks the end of active operations.
Step 2: Map connections to phases. Categorize each significant network event using the kill chain or ATT&CK framework:
| Phase | Network Evidence |
|---|---|
| Initial Access | Exploit delivery (malicious document download, drive-by download) |
| Execution | Payload retrieval (HTTP GET for staged malware) |
| C2 Establishment | First beacon check-in to C2 server |
| Discovery | Internal scanning (port sweeps, DNS queries for internal resources) |
| Lateral Movement | SMB file transfers, WMI/PSExec connections, RDP sessions |
| Exfiltration | Large outbound transfers, DNS tunneling data, cloud storage uploads |
Step 3: Correlate with endpoint data. Every network event has a corresponding endpoint event. The HTTP download of malware corresponds to a process creation on the endpoint. The C2 beacon corresponds to a persistent scheduled task or service. Cross-reference:
Network: 10:14:32 — HTTP GET /update.exe from 203.0.113.50 (4.2 MB)
Endpoint: 10:14:33 — svchost.exe spawns update.exe (PID 4872)
Network: 10:14:45 — First TLS connection to 198.51.100.25:443
Endpoint: 10:14:44 — update.exe creates scheduled task "WindowsUpdate"
Network: 10:15:00 — DNS TXT query to data.evil.com (1.2 KB response)
Endpoint: 10:14:58 — update.exe reads C:\Users\admin\Documents\*.xlsx
Step 4: Document gaps. Note where the PCAP does not cover — periods before capture started, encrypted sessions you cannot decrypt, or traffic that traversed a different network path. Gaps are as important as evidence because they define the limits of your conclusions.
Timeline tools: For manual analysis, a spreadsheet with columns for timestamp, source, destination, protocol, description, and kill chain phase works well. For complex investigations, tools like Plaso (log2timeline) can ingest PCAPs alongside endpoint logs to build unified timelines automatically.
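For the spreadsheet approach, the interleaving itself is trivial to script. The sketch below merges network and endpoint events (reusing two entries from the Step 3 example) into one chronological list; timestamps are plain HH:MM:SS strings, which sort correctly lexically within a single day:

```python
def merge_timeline(network_events, endpoint_events):
    """Interleave network and endpoint events into one chronological list.
    Events are (timestamp, description) tuples; HH:MM:SS sorts lexically."""
    merged = [("NETWORK", ts, desc) for ts, desc in network_events]
    merged += [("ENDPOINT", ts, desc) for ts, desc in endpoint_events]
    return sorted(merged, key=lambda event: event[1])

network = [
    ("10:14:32", "HTTP GET /update.exe from 203.0.113.50"),
    ("10:14:45", "First TLS connection to 198.51.100.25:443"),
]
endpoint = [
    ("10:14:33", "svchost.exe spawns update.exe (PID 4872)"),
    ("10:14:44", "update.exe creates scheduled task 'WindowsUpdate'"),
]
for source, ts, desc in merge_timeline(network, endpoint):
    print(f"{ts}  {source:8}  {desc}")
```

For multi-day investigations, switch the timestamps to full ISO 8601 UTC strings so the lexical sort remains correct across date boundaries.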
Correlating Network and Endpoint Evidence
Network forensics alone tells half the story. A beacon to a C2 server is suspicious; a beacon from a process running as SYSTEM from a known-compromised host confirmed via Velociraptor is proof of compromise.
Correlation Points
| Network Finding | Endpoint Correlation | Combined Conclusion |
|---|---|---|
| HTTP download of .exe | Process creation event, file hash match | Confirmed malware delivery |
| C2 beacon to IP X | Process with active connection to IP X | Identified implant binary |
| DNS tunnel to evil.com | Process making DNS API calls | Identified tunneling tool |
| SMB transfer to internal host | New file on target, logon event 4624 type 3 | Confirmed lateral movement |
| Large outbound HTTPS transfer | File access events for matching data volume | Confirmed exfiltration |
PowerShell Network Correlation (Windows)
# Find processes with active connections to a suspected C2 IP
Get-NetTCPConnection -RemoteAddress "203.0.113.50" |
Select-Object LocalPort, RemotePort, State, OwningProcess,
@{N='Process';E={(Get-Process -Id $_.OwningProcess).ProcessName}}
# Get file hash of the suspicious process
Get-Process -Id 4872 | Select-Object -ExpandProperty Path | Get-FileHash -Algorithm SHA256
Linux Network Correlation
# Find processes connected to suspected C2 IP
ss -tnp | grep "203.0.113.50"
# Get process details and file hash
ls -la /proc/<PID>/exe
sha256sum /proc/<PID>/exe
cat /proc/<PID>/cmdline | tr '\0' ' '
# Check process start time
ps -o lstart= -p <PID>
Documenting Network-Based Findings
Network evidence is volatile and contextual — without proper documentation at the time of analysis, critical details are lost. Every finding needs enough detail for another analyst (or a legal team) to independently verify your conclusions.
Evidence Documentation Template
For each significant network finding, record:
FINDING: [Brief description]
PCAP: [Filename, SHA-256 hash, time range]
FILTER: [Exact Wireshark display filter to reproduce]
EVIDENCE:
- Timestamp: [UTC]
- Source: [IP:port]
- Destination: [IP:port]
- Protocol: [HTTP/DNS/TLS/etc.]
- Details: [Specific observation]
EXTRACTED FILES:
- Filename: [name]
- SHA-256: [hash]
- Size: [bytes]
- VirusTotal: [detection ratio or "not submitted"]
CONCLUSION: [What this evidence proves]
CONFIDENCE: [High/Medium/Low]
LIMITATIONS: [What this evidence does NOT prove]
Chain of custody matters. Hash every PCAP file before and after analysis. Hash every extracted file immediately upon extraction. Record the exact tool versions used. These details transform your analysis from internal notes into defensible forensic evidence.
Key Takeaways
- File extraction from PCAPs uses three approaches: Wireshark Export Objects (protocol-aware), NetworkMiner (automated bulk), and foremost/binwalk (binary carving) — use all three for complete coverage
- C2 beacon detection relies on statistical analysis of connection intervals — low standard deviation relative to the mean is the primary indicator, with jitter analysis distinguishing framework configurations
- DNS tunneling is identified by long subdomain queries, high query volume, high entropy labels, TXT/NULL record abuse, and extreme unique-subdomain counts
- DGA detection combines entropy analysis with NXDomain response rate monitoring — a host generating dozens of NXDomain responses per minute to random domains is almost certainly infected
- JA3/JA3S fingerprinting identifies C2 implants through their TLS handshake characteristics, even when payload content is encrypted
- Certificate analysis catches hastily configured C2 infrastructure through self-signed certs, short validity periods, and generic subject fields
- Network forensic timelines are the primary deliverable — anchor events, map to kill chain phases, correlate with endpoint data, and document gaps
- Every extracted file must be immediately hashed (SHA-256) and every PCAP must be hashed before and after analysis for chain-of-custody integrity
- Network evidence alone proves network activity occurred — correlation with endpoint telemetry is required to prove what happened on the host
What's Next
You have built a complete network forensics toolkit — from packet-level analysis through file extraction, C2 detection, and evidence correlation. In Module 6: Endpoint Visibility & Investigation, you shift your focus from the wire to the endpoint itself. You will learn how to use Velociraptor for live endpoint interrogation, investigate running processes, hunt for persistence mechanisms, and combine the network evidence you have gathered here with host-based forensic artifacts to build complete incident narratives.
Knowledge Check: Network Forensics & C2 Detection
10 questions · 70% to pass
Which Wireshark feature allows you to extract files transferred over HTTP, SMB, and TFTP directly from a PCAP?
An analyst observes outbound connections from a host to the same external IP occurring every 58-62 seconds for 14 hours. The standard deviation of the intervals is 1.4 seconds. What does this most likely indicate?
Which command-line tool recovers files from a PCAP based on binary file signatures (magic bytes) rather than protocol structure?
During Lab 5.6, you observe a host generating 400+ DNS queries per minute to subdomains averaging 45 characters in length, all under the same parent domain. The subdomains contain high-entropy character sequences. What attack technique is this?
What distinguishes DGA traffic from DNS tunneling in a packet capture?
A JA3 hash is generated from which component of a TLS connection?
In Lab 5.6, you extract a file from a PCAP and need to verify its integrity for chain-of-custody documentation. On a Linux system, which command produces a SHA-256 hash of the file?
Which TLS certificate characteristic is MOST indicative of hastily configured C2 infrastructure?
During your Lab 5.6 investigation, you build a network forensic timeline and need to correlate a C2 beacon with endpoint evidence. Which Windows event confirms the process responsible for the network connection?
Why should an analyst document gaps in PCAP coverage as part of the network forensic timeline?