T1053 Scheduled Task Detection Lab
Detect and score risky Windows Scheduled Task abuse (MITRE ATT&CK T1053.005) with a ready-to-run CLI, example data, and copy-pasteable detections.
This repo equips SOC analysts and defenders to identify suspicious Scheduled Task usage for persistence, on-demand execution, and privilege escalation. It ships with:
- A Python CLI (`t1053lab`) that reads device process logs (JSON/JSONL) and outputs a risk score + feature breakdown per event.
- KQL and Sigma rules aligned to MITRE ATT&CK T1053.005.
- A minimal docs site (MkDocs Material), an architecture diagram, and CI that lint/tests on push.
- Docker and Dev Container options for zero-friction setup.
⚠️ Safety first. Perform any attack simulations only in an isolated lab that you own and control. Do not run in production.
✨ Highlights
- Green on first run: `t1053lab score examples/DeviceProcessEvents.json` immediately produces results.
- Transparent heuristics: The risk model is intentionally simple and explainable (documented below).
- XDR/SIEM ready: KQL + Sigma detections included; exit codes let you wire the CLI into pipelines/CI.
- Hardened engineering: Ruff, mypy, pytest, pre-commit, and GitHub Actions are configured.
📦 What’s in the box
```
.
├─ src/t1053lab/                       # Python package (CLI + heuristics + I/O)
│  ├─ cli.py                           # Typer-based CLI
│  ├─ risk.py                          # Explainable scoring for scheduled task abuse
│  └─ io.py                            # JSON/JSONL loader
├─ detections/
│  ├─ kql/t1053_schtasks.kql
│  └─ sigma/t1053_schtasks.yml
├─ examples/DeviceProcessEvents.json
├─ docs/ (MkDocs) + architecture.mmd
└─ .github/workflows/ci.yml
```
🚀 Quick Start
Option A — Python (recommended)
```
# Linux/macOS
python -m venv .venv && . .venv/bin/activate
python -m pip install --upgrade pip
python -m pip install -e ".[dev]"

# Windows (PowerShell)
python -m venv .venv; .\.venv\Scripts\Activate.ps1
python -m pip install --upgrade pip
python -m pip install -e ".[dev]"

# Run on the example file
t1053lab score examples/DeviceProcessEvents.json
# or
python -m t1053lab score examples/DeviceProcessEvents.json
```
Option B — Docker
```
docker build -t t1053lab ./docker
docker run --rm -v "$PWD/examples:/data" t1053lab score /data/DeviceProcessEvents.json
```
Option C — Dev Container (VS Code)
- Open the folder in VS Code.
- When prompted, Reopen in Container.
- Run: `t1053lab score examples/DeviceProcessEvents.json`
🧭 CLI Usage
```
t1053lab score PATH [--threshold N]
```
Arguments
- `PATH`: Path to a JSON array file or JSONL (one JSON object per line). Each object should contain at least `ProcessCommandLine` (string) and, optionally, `DeviceName`/`ComputerName`.
Options
- `--threshold`, `-t`: Integer threshold for alerting via exit code. Default: `3`.
Exit Codes
- `0` — No events at/above threshold (or no events).
- `1` — One or more events at/above threshold (CI/pipeline-friendly).
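The exit code makes the scorer easy to wire into scripts and pipelines. A minimal sketch (the input path and threshold values are illustrative):
```python
import subprocess
import sys

# Run the scorer; per the exit codes above, 1 means at least one event scored
# at or above the threshold.
result = subprocess.run(
    ["t1053lab", "score", "examples/DeviceProcessEvents.json", "--threshold", "3"]
)
if result.returncode == 1:
    print("ALERT: risky scheduled-task activity at or above the threshold")
    sys.exit(1)  # propagate the failure so a CI job goes red
print("No events at or above the threshold")
```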
Example Output (abridged)
```
┏━━━━━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Device    ┃ Score ┃ Features                                                                   ┃
┡━━━━━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ host01    │ 6     │ has_schtasks=True runs_as_system=True elev_highest=True uses_lolbin=True   │
│           │       │ bad_path=True has_remote=False remote_creds=False uses_encoded=False       │
│           │       │ interactive=False                                                          │
│ host02    │ 2     │ has_schtasks=True runs_as_system=False elev_highest=False uses_lolbin=True │
│           │       │ bad_path=False has_remote=False remote_creds=False uses_encoded=False      │
│           │       │ interactive=False                                                          │
└───────────┴───────┴────────────────────────────────────────────────────────────────────────────┘
ALERT: 1 event(s) scored >= 3
```
🧪 Sample Data
Try the bundled sample:
```
t1053lab score examples/DeviceProcessEvents.json
```
You can also point to your own DeviceProcessEvents exports (array or JSONL). Only `ProcessCommandLine` is required for scoring; `DeviceName`/`ComputerName` are used for display.
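If you want to hand-craft a quick test file, something like this is enough (both command lines are invented for illustration):
```python
import json

# Minimal records: only ProcessCommandLine is required for scoring;
# DeviceName is optional display metadata.
events = [
    {
        "DeviceName": "host01",
        "ProcessCommandLine": 'schtasks /create /tn "Updater" /tr "C:\\Users\\Public\\u.exe" /ru SYSTEM /rl HIGHEST',
    },
    {
        "DeviceName": "host02",
        "ProcessCommandLine": "schtasks /query /fo LIST",
    },
]

with open("my_events.json", "w", encoding="utf-8") as fh:
    json.dump(events, fh, indent=2)

# Then score it: t1053lab score my_events.json
```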
🧠 How the Risk Model Works
The risk model is intentionally simple and transparent. It looks for Scheduled Task semantics and related suspicious indicators, then sums feature flags.
Feature extraction (binary, 0/1):
| Feature Key | What it looks for (case-insensitive) | Rationale |
|---|---|---|
| `has_schtasks` | Presence of `schtasks` or PowerShell `Register-ScheduledTask` variants | Scope gate for task operations |
| `uses_create` | Any of `/create`, `/change`, `/delete`, `/run` | Scope gate (task manipulation) |
| `uses_encoded` | `-enc`/`-encodedCommand` + long base64-looking string | Obfuscation |
| `has_remote` | `/s` with a remote target | Remote task manipulation |
| `remote_creds` | `/u` or `/p` | Remote credentials |
| `runs_as_system` | `/ru system` or variants | Privilege |
| `elev_highest` | `/rl highest` | Elevation |
| `interactive` | `/it` | User deception/abuse |
| `bad_path` | Paths like `\Users\Public`, `%APPDATA%`, `%TEMP%`, `\ProgramData` | Living-off-dirs |
| `uses_lolbin` | `powershell.exe`, `pwsh.exe`, `cmd.exe`, `wscript.exe`, `rundll32.exe`, `mshta.exe` | LOLBIN execution |
Scoring rule:
- If neither `has_schtasks` nor `uses_create` is present → score = 0 (not a task op).
- Otherwise, score = sum of all other feature flags (each contributes +1).
This provides a strong baseline with zero magic. Extend it by adding weights, allow/deny lists, and baselining (see Roadmap).
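For orientation, here is a minimal sketch of that logic in Python. The regexes and the exact treatment of the two gate flags are assumptions for illustration; `src/t1053lab/risk.py` is the authoritative implementation.
```python
import re

# A minimal sketch of the heuristic described above; the shipped risk.py may
# differ in regexes and in how the gate flags count toward the total.
LOLBINS = ("powershell.exe", "pwsh.exe", "cmd.exe", "wscript.exe", "rundll32.exe", "mshta.exe")
BAD_PATHS = ("\\users\\public", "%appdata%", "%temp%", "\\programdata")

def extract_features(command_line: str) -> dict[str, bool]:
    c = command_line.lower()
    return {
        "has_schtasks": "schtasks" in c or "register-scheduledtask" in c,
        "uses_create": any(f in c for f in ("/create", "/change", "/delete", "/run")),
        "uses_encoded": bool(re.search(r"-enc\w*\s+[a-z0-9+/=]{40,}", c)),
        "has_remote": bool(re.search(r"/s\s+\S+", c)),
        "remote_creds": "/u " in c or "/p " in c,
        "runs_as_system": bool(re.search(r"/ru\s+(nt authority\\)?system", c)),
        "elev_highest": "/rl highest" in c,
        "interactive": "/it" in c,
        "bad_path": any(p in c for p in BAD_PATHS),
        "uses_lolbin": any(b in c for b in LOLBINS),
    }

def score(command_line: str) -> int:
    flags = extract_features(command_line)
    if not (flags["has_schtasks"] or flags["uses_create"]):
        return 0  # not a scheduled-task operation
    # In this sketch the gate flags only scope the check; every other true flag adds +1.
    return sum(flags[k] for k in flags if k not in ("has_schtasks", "uses_create"))

print(score("schtasks /create /tn Up /tr C:\\Users\\Public\\u.exe /ru SYSTEM /rl HIGHEST"))
# 3 in this sketch (runs_as_system + elev_highest + bad_path)
```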
🔍 Detections (SIEM/XDR)
- KQL (Microsoft Defender XDR / Microsoft Sentinel advanced hunting): `detections/kql/t1053_schtasks.kql`
- Sigma (process_creation): `detections/sigma/t1053_schtasks.yml`
Both include indicators for remote manipulation, SYSTEM run, highest run level, LOLBIN usage, and suspicious paths.
ATT&CK Mapping
- Technique: T1053.005 — Scheduled Task/Job: Scheduled Task (Windows)
- Tactics: Persistence, Privilege Escalation, Execution
🛠️ Development
Install (local)
```
python -m venv .venv && . .venv/bin/activate
python -m pip install -e ".[dev]"
pre-commit install
```
Common tasks
```
make lint   # ruff check
make fmt    # ruff format
make type   # mypy type-checks
make test   # pytest
make run    # run CLI on bundled example
make docs   # mkdocs dev server
```
CI
- GitHub Actions runs lint, type-checks, tests, and a dependency audit (`pip-audit`) on push/PR.
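As an illustration of the kind of test CI runs, a smoke test against the documented CLI behaviour and the bundled example could look like this (the real test suite may be structured differently):
```python
# tests/test_cli_smoke.py (illustrative)
import subprocess

def test_bundled_example_triggers_alert_exit_code():
    # The bundled example contains at least one event scoring >= 3,
    # so the documented alert exit code (1) is expected.
    result = subprocess.run(
        ["t1053lab", "score", "examples/DeviceProcessEvents.json"],
        capture_output=True,
        text=True,
    )
    assert result.returncode == 1
```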
📚 Documentation Site
We use MkDocs Material.
```
mkdocs serve
# open http://127.0.0.1:8000
```
🧩 Architecture
```mermaid
flowchart TD
    A[DeviceProcessEvents / Sysmon] --> B[Parser + Heuristics]
    B --> C[Risk Score]
    C --> D[CLI Output]
    B --> E["Detections (KQL/Sigma)"]
    E --> F[Alerts in SIEM/XDR]
```
Source diagram: `architecture.mmd`
---
⚙️ Configuration
- **Threshold** default: `3` (see `config/defaults.yaml`).
Override on the CLI with `--threshold` per run.
- **Environment variables** (optional): `.env.example` shows pattern.
The current CLI does not require env vars.
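A sketch of how that precedence could be wired if you extend the tool (the key name inside `config/defaults.yaml` is an assumption; only `--threshold` and the default of `3` are documented):
```python
from pathlib import Path

import yaml  # PyYAML

def resolve_threshold(cli_value: int | None, config_path: str = "config/defaults.yaml") -> int:
    """CLI flag wins; otherwise fall back to the repo default (key name 'threshold' is assumed)."""
    if cli_value is not None:
        return cli_value
    config = yaml.safe_load(Path(config_path).read_text(encoding="utf-8")) or {}
    return int(config.get("threshold", 3))  # documented default
```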
---
🧩 Integrations & Ideas
- **Pipelines/CI:** Use the exit code to gate builds or raise issues when risky events appear in fixture logs.
- **Detections Tuning:** Convert known-good admin patterns into allow-lists; raise the threshold for heavily automated hosts.
- **Export Formats:** Extend the CLI to write CSV/JSON outputs for ingestion (see **Roadmap**).
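If you want to prototype the CSV export before it lands in the CLI, a minimal sketch using the rows from the example output above (column names invented for illustration):
```python
import csv

# Row shape is invented for illustration: device, total score, triggered feature names.
rows = [
    ("host01", 6, "has_schtasks runs_as_system elev_highest uses_lolbin bad_path"),
    ("host02", 2, "has_schtasks uses_lolbin"),
]

with open("t1053_scores.csv", "w", newline="", encoding="utf-8") as fh:
    writer = csv.writer(fh)
    writer.writerow(["device", "score", "features"])
    writer.writerows(rows)
```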
---
🔐 Security & Compliance
- **Security posture:** linters, type checks, tests, pre-commit, and `pip-audit` in CI.
- **Reporting:** See `SECURITY.md`.
- **MITRE ATT&CK:** Mapped to **T1053.005** in docs and detections.
---
🧭 Troubleshooting
- **No output / empty table?** Ensure your input is a JSON **array** or **JSONL** and contains `ProcessCommandLine`.
- **Windows shell quoting:** When running the example command on Windows, prefer PowerShell’s quoting rules (already shown).
- **Docker volume path:** On Windows, mount paths like `-v "${PWD}\examples:/data"` (PowerShell) or adjust for WSL.
---
🤝 Contributing
PRs are welcome! Please:
1. Open an issue describing the feature or fix.
2. Include tests where possible.
3. Run `make lint type test` locally and ensure CI is green.
See `CONTRIBUTING.md` and `CODE_OF_CONDUCT.md`.
---
🗺️ Roadmap
- Feature weights + host baselines
- Windows Event Log (XML) parsing
- Structured outputs (CSV/JSON)
- Small web UI for visualization
(Details in `ROADMAP.md`.)
---
📜 License
**MIT** — See `LICENSE`.
---
🙋 FAQ
**Q: Can I point this at Sysmon logs?**
A: Yes, if they’re normalized to include a `ProcessCommandLine` field. The loader accepts JSON arrays or JSONL.
**Q: Will this detect *all* Scheduled Task abuse?**
A: No single rule will. This lab focuses on **explainable heuristics** and pragmatic coverage. Use it as a starting point and extend for your environment.
**Q: How do I tune false positives?**
A: Create allow-lists for known-good automations (paths, task names, users), increase the threshold, and supplement with parent/ancestor process context in your SIEM queries.
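For example, a simple allow-list pass before scoring could look like this (the patterns and command lines are invented for illustration):
```python
# Invented allow-list patterns: drop events whose command line matches a
# known-good automation before scoring or alerting on them.
ALLOW_SUBSTRINGS = [
    "\\program files\\vendoragent\\maintenance.exe",  # sanctioned vendor task (hypothetical)
    "/tn \\microsoft\\windows\\",                      # built-in Windows task library paths
]

def is_allow_listed(command_line: str) -> bool:
    lowered = command_line.lower()
    return any(pattern in lowered for pattern in ALLOW_SUBSTRINGS)

events = [
    {"ProcessCommandLine": "schtasks /create /tn \\Microsoft\\Windows\\Backup\\NightlyBackup /tr backup.exe"},
    {"ProcessCommandLine": "schtasks /create /tn Updater /tr C:\\Users\\Public\\u.exe /ru SYSTEM"},
]
suspicious = [e for e in events if not is_allow_listed(e["ProcessCommandLine"])]
print(f"{len(suspicious)} event(s) left to score after allow-listing")
```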