Implement Agent Safe Shell — POSIX commands as safe builtins#46945
Conversation
Port the embedded POSIX shell interpreter from the POC branch and implement 6 new commands (head, cat, grep, wc, sort, uniq) as safe builtins with no external process spawning. Safety-harden by blocking eval/exec/source/trap, capping ping count at 100, adding tail -f 60s timeout, rejecting sort -o and grep -r depth > 10, and enforcing a shell-wide 1MB output cap. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Go Package Import Differences
Baseline: 81f476f
matt-dz
left a comment
Overall: Architecture is sound — forking mvdan.cc/sh with a deny-by-default exec policy is excellent. The eval/exec/source/trap blocking is correctly implemented. New builtins generally follow POSIX semantics well. 3 critical issues and several important items need addressing before merge.
General items not tied to specific lines:
- Missing safety test for `grep -r` depth cap (claimed in PR description but no test verifies it)
- Missing test for `tail -f` 60-second timeout (key safety feature with no coverage)
- New builtin files use upstream mvdan.cc/sh copyright headers but are Datadog-authored code — use Datadog copyright
- `cat` large file warning at 1MB matches the truncation threshold — consider warning earlier (e.g., 512KB) for advance notice
Files inventory check summary
File checks results against ancestor 81f476fe: Results for datadog-agent_7.78.0~devel.git.128.12a80b0.pipeline.99159549-1_amd64.deb: No change detected
- Add interpreter denylist to AllowedCommands (sh, bash, python, etc.)
- Make limitedWriter concurrency-safe with sync.Mutex
- Replace deprecated io/ioutil with os.ReadDir in handler.go
- Fix grep -r depth off-by-one (> to >=)
- Replace custom parseInt64 with strconv.ParseInt
- Remove dead code (strings.Repeat no-op) in grep
- Add default 300s timeout to PAR shell handler
- Fix import ordering in subcommands.go
- Refactor tail -f to use ring buffer instead of io.ReadAll
- Lower find max recursion from 256 to 20
- Lower cat warning threshold from 1MB to 512KB
- Move wc buffer allocation outside loop
- Document sort single-key limitation
- Fix copyright headers on new Datadog-authored files
- Add tests: AllowedCommands denylist, grep depth cap, tail timeout

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
matt-dz
left a comment
Re-review after fix commit
The fix commit addressed all 12 prior findings. This review covers the current state of the full diff.
General findings:
- `bufio.Scanner` (64KB default max line size) is used in `cat`, `grep`, `sort`, `uniq`, `head`, and the `tail` ring buffer — none checks `scanner.Err()`. Files with lines >64KB (e.g. minified JSON) will silently truncate. Consider adding `scanner.Err()` checks or increasing the buffer via `scanner.Buffer()`.
- `wc` word counting checks whitespace byte-by-byte (' ', '\t', '\r', '\v', '\f'), missing multi-byte Unicode whitespace (e.g. U+00A0). Acceptable for a diagnostics shell but deviates from strict POSIX.
- PR description says "Warning on files > 1MB" for `cat` but code warns at 512KB — align the documentation.
matt-dz
left a comment
Security Audit: Agent Safe Shell
1 critical, 3 high, 3 medium findings. See inline comments for details.
Security fixes:
- Add SafeOpenHandler that blocks all file writes via redirects (>, >>)
- Wire SafeOpenHandler into both PAR handler and CLI command
- Remove AllowedCommands passthrough from PAR handler (builtins only)
- Expand interpreter denylist: add env, xargs, awk, gawk, nawk, mawk, expect, script

Correctness fixes:
- Cap tail ring buffer at 100K lines to prevent memory exhaustion
- Add scanner.Err() checks to cat, head, grep, sort, uniq, tail
- Fix PAR timeout error message to use actual timeout value
- Reject multiple -k flags in sort (only single key supported)
- Add tests: redirect write/append blocked, SafeOpenHandler coverage

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Final Code Review (Round 3)
Prior fixes are all correctly implemented. Remaining findings:
IMPORTANT: Denylist still has gaps relevant to
matt-dz
left a comment
Security Audit (Round 2) + Final Code Review (Round 3)
1 critical sandbox escape, 2 medium, and 3 suggestions. See inline comments.
The safe shell should only run ported builtin commands. This commit:
- Removes AllowedCommands, deniedCommands, and --allowed-commands CLI flag
- External commands are unconditionally blocked in call() — no denylist needed
- Fixes command builtin sandbox escape (was calling r.exec() directly)
- Tightens FIFO exemption: tracks exact created paths instead of prefix match
- Restricts FIFO permissions from 0o666 to 0o600
- Caps PAR timeout at 3600s maximum
- Adds tests: external command blocked, command builtin no bypass

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Code Review Round 4
Verdict: No critical or blocking issues. The security architecture is solid.
Remaining cleanup items:
Important:
Suggestion:
Suggestion:
🤖 Code Review Round 4 by Claude Code
Security Audit — Final (Round 5)
Overall Assessment: PASS — Security posture is acceptable. After 4 rounds of fixes, the safe shell implementation demonstrates a sound security architecture. All critical sandbox invariants hold:
Verified Secure
Informational
I am satisfied that this implementation is ready for merge from a security perspective.
🔒 Security Audit by Claude Code
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Moves mvdan.cc/sh/v3 and golang.org/x/term from indirect to direct deps. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Static quality checks
❌ Please find below the results from static quality gates
Error
Gate failure full details
Static quality gates prevent the PR from merging!
Successful checks
Info
13 successful checks with minimal change (< 2 KiB)
On-wire sizes (compressed)
Regression Detector Results
Metrics dashboard
Baseline: 81f476f
Optimization Goals: ✅ No significant changes detected
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | docker_containers_cpu | % cpu utilization | -1.65 | [-4.69, +1.38] | 1 | Logs |
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | quality_gate_metrics_logs | memory utilization | +0.78 | [+0.56, +1.01] | 1 | Logs bounds checks dashboard |
| ➖ | otlp_ingest_logs | memory utilization | +0.75 | [+0.64, +0.86] | 1 | Logs |
| ➖ | ddot_metrics_sum_delta | memory utilization | +0.42 | [+0.23, +0.61] | 1 | Logs |
| ➖ | otlp_ingest_metrics | memory utilization | +0.31 | [+0.15, +0.46] | 1 | Logs |
| ➖ | uds_dogstatsd_20mb_12k_contexts_20_senders | memory utilization | +0.30 | [+0.24, +0.35] | 1 | Logs |
| ➖ | ddot_metrics_sum_cumulativetodelta_exporter | memory utilization | +0.15 | [-0.08, +0.38] | 1 | Logs |
| ➖ | ddot_metrics_sum_cumulative | memory utilization | +0.14 | [-0.02, +0.29] | 1 | Logs |
| ➖ | ddot_logs | memory utilization | +0.13 | [+0.07, +0.20] | 1 | Logs |
| ➖ | docker_containers_memory | memory utilization | +0.09 | [+0.01, +0.17] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api | ingress throughput | +0.01 | [-0.12, +0.14] | 1 | Logs |
| ➖ | ddot_metrics | memory utilization | +0.00 | [-0.24, +0.24] | 1 | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | -0.00 | [-0.09, +0.09] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api_v3 | ingress throughput | -0.00 | [-0.13, +0.12] | 1 | Logs |
| ➖ | quality_gate_idle_all_features | memory utilization | -0.02 | [-0.06, +0.02] | 1 | Logs bounds checks dashboard |
| ➖ | file_to_blackhole_500ms_latency | egress throughput | -0.03 | [-0.41, +0.36] | 1 | Logs |
| ➖ | file_to_blackhole_100ms_latency | egress throughput | -0.03 | [-0.08, +0.01] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency | egress throughput | -0.10 | [-0.55, +0.36] | 1 | Logs |
| ➖ | file_to_blackhole_1000ms_latency | egress throughput | -0.10 | [-0.53, +0.32] | 1 | Logs |
| ➖ | quality_gate_idle | memory utilization | -0.11 | [-0.16, -0.06] | 1 | Logs bounds checks dashboard |
| ➖ | file_tree | memory utilization | -0.73 | [-0.78, -0.67] | 1 | Logs |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | -1.23 | [-1.32, -1.15] | 1 | Logs |
| ➖ | quality_gate_logs | % cpu utilization | -1.41 | [-2.90, +0.07] | 1 | Logs bounds checks dashboard |
| ➖ | docker_containers_cpu | % cpu utilization | -1.65 | [-4.69, +1.38] | 1 | Logs |
Bounds Checks: ✅ Passed
| perf | experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|---|
| ✅ | docker_containers_cpu | simple_check_run | 10/10 | |
| ✅ | docker_containers_memory | memory_usage | 10/10 | |
| ✅ | docker_containers_memory | simple_check_run | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
| ✅ | quality_gate_idle | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | lost_bytes | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | cpu_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | lost_bytes | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | memory_usage | 10/10 | bounds checks dashboard |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check cpu_usage: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
Implements a sandboxed sed stream processor for use in pipelines (e.g. stripping timestamps from log output). All write/exec operations are blocked: -i, w/W commands, e command, s///w and s///e flags. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
🔒 Security Audit:
| # | Finding | Severity | Resolution |
|---|---|---|---|
| 1 | Infinite loop via branching | HIGH | ✅ Iteration limit (10k) + ctx.Done() in sedExecCommands |
| 2 | Unbounded `r` file read | MEDIUM | ✅ io.LimitReader capped at 1 MiB |
| 3 | All input lines buffered | MEDIUM | ✅ Capped at 100k lines |
| 4 | Pattern/hold space growth | MEDIUM | ✅ Capped at 10 MiB |
| 5 | `r`/`R` reads arbitrary files | MEDIUM | Accepted risk (same as cat builtin) |
| 6 | `--in-place=.bak` error message | LOW | ✅ Prefix matching added |
| 7 | ReDoS | LOW | N/A (Go RE2 engine) |
| 8 | `D` recursion | LOW | ✅ Fixed by flattening + iteration limit |
| 9 | `-f` script file unbounded | INFO | ✅ io.LimitReader capped at 1 MiB |
| 10 | Labels inside groups wrong index | INFO | ✅ Fixed by flattening command tree |
matt-dz
left a comment
🔒 Security audit and 📝 code review of the sed builtin implementation. Inline comments on specific findings below.
```go
if !sedMatchAddress(cmd, state) {
	continue
}
```
🔴 CRITICAL: Infinite loop via branching — no cycle limit
[Security: HIGH | Correctness: CRITICAL]
sedExecCommands has no ctx.Done() check or iteration limit in its inner loop. A script like :loop; b loop causes infinite CPU spin within a single line's processing — the per-line context check in sedProcessInput (L276) is never reached.
The b, t, and T commands can all set *cmdIdx backwards, creating a tight infinite loop here.
Suggested fix:
```go
func (r *Runner) sedExecCommands(ctx context.Context, commands []*sedCommand, labels map[string]int, state *sedState) sedAction {
	const maxIterations = 10000
	iterations := 0
	for i := 0; i < len(commands); i++ {
		iterations++
		if iterations > maxIterations {
			r.errf("sed: execution limit exceeded (possible infinite loop)\n")
			return sedActionContinue
		}
		select {
		case <-ctx.Done():
			return sedActionContinue
		default:
		}
		// ... rest unchanged
	}
	return sedActionContinue
}
```

References: CWE-835 (Loop with Unreachable Exit Condition), CWE-400 (Uncontrolled Resource Consumption)
Fixed in 12a80b0: Added iteration limit (10,000) and ctx.Done() check inside sedExecCommands. Test: TestSed_InfiniteLoopProtection.
```go
match1 := sedAddrMatches(cmd.addr1, state)

if cmd.addr2 == nil {
	return match1 // single address
```
🔴 CRITICAL: Address range implementation broken for regex ranges
[Correctness: CRITICAL]
The comment on L514 acknowledges: "A proper implementation would track inRange state per command." Without per-command inRange state, regex-to-regex ranges (/start/,/end/d) do not work. The range needs to "latch on" when addr1 matches and "unlatch" when addr2 matches. Lines between start and end that don't individually match either pattern are missed.
The existing test only covers 2,3d (line-number-to-line-number) which happens to work with simple comparison.
Suggested fix: Add inRange bool field to sedCommand and implement proper toggle logic:
```go
func sedMatchAddressRaw(cmd *sedCommand, state *sedState) bool {
	if cmd.addr1 == nil {
		return true
	}
	if cmd.addr2 == nil {
		return sedAddrMatches(cmd.addr1, state)
	}
	if cmd.inRange {
		if sedAddrMatches(cmd.addr2, state) {
			cmd.inRange = false
		}
		return true
	}
	if sedAddrMatches(cmd.addr1, state) {
		cmd.inRange = true
		return true
	}
	return false
}
```
Fixed in 12a80b0: Added inRange bool field to sedCommand with proper toggle logic in sedMatchAddressRaw. Tests: TestSed_RegexRange, TestSed_RegexRangeSubstitute.
```go
	line = line[:nl]
}
r.outf("%s\n", line)
case 'n':
```
🟡 IMPORTANT: n and N commands are broken
[Correctness]
n outputs the pattern space but does not advance to the next input line — remaining commands still operate on the current line.
N is a complete no-op — it silently does nothing. Any script using N for multi-line processing (e.g., sed 'N;s/\n/ /' to join pairs of lines) will silently produce wrong results.
Since sedProcessInput already reads all lines into a slice, implementing these properly is feasible by passing the lines slice and current index into the execution functions, or by returning a special action that the processing loop handles.
Fix: Either implement properly or return an explicit "unsupported" error so users aren't silently given wrong results.
Fixed in 12a80b0: n/N now properly advance the line pointer via shared lines slice and lineIdx. n at EOF returns sedActionEndCycle per POSIX. Tests: TestSed_NextLine, TestSed_AppendNextLine.
```go
if err != nil {
	return "", err
}
defer f.Close()
```
🟡 IMPORTANT: Unbounded r file read — DoS via OOM
[Security: MEDIUM]
io.ReadAll(f) has no size limit. A script containing r /dev/zero would read from an infinite stream, and r /path/to/huge/file would read the entire contents into memory, causing OOM.
Suggested fix:
```go
const maxReadFileBytes = 1 << 20 // 1 MiB
limited := io.LimitReader(f, maxReadFileBytes+1)
data, err := io.ReadAll(limited)
if err != nil {
	return "", err
}
if len(data) > maxReadFileBytes {
	return "", fmt.Errorf("file too large (>1MB)")
}
```

References: CWE-770 (Allocation of Resources Without Limits or Throttling)
Fixed in 12a80b0: sedReadFile now uses io.LimitReader capped at maxOutputBytes (1 MiB).
```go
var exit exitStatus
scanner := bufio.NewScanner(reader)

// Collect all lines to know the last line
```
🟡 IMPORTANT: All input lines buffered in memory — DoS risk
[Security: MEDIUM]
All lines are read into []string before processing to determine lastLine for $ address matching. For large inputs piped through sed (e.g., yes | sed 's/y/n/'), this causes unbounded memory growth.
Suggested fix: Add a line count cap (e.g., 100,000 lines) or switch to a streaming approach with one-line lookahead:
```go
const maxSedLines = 100000
var lines []string
for scanner.Scan() {
	lines = append(lines, scanner.Text())
	if len(lines) > maxSedLines {
		break
	}
}
```

References: CWE-770 (Allocation of Resources Without Limits or Throttling)
Fixed in 12a80b0: sedProcessInput now caps input at sedMaxInputLines (100,000) with a warning to stderr.
```go
sedActionRestart         // restart cycle (branch to beginning)
sedActionDelete          // d
sedActionDeleteFirstLine // D
sedActionBranch          // b/t/T (label handled separately)
```
🟡 Cleanup: Unused sedActionBranch constant
This constant is declared but never referenced. Branches are handled by modifying *cmdIdx and returning sedActionContinue or sedActionRestart. Should be removed.
Fixed in 12a80b0: Removed sedActionBranch, renamed sedActionRestart to sedActionEndCycle.
```go
	return exit
}
scriptFiles = append(scriptFiles, args[i])
case "-i", "--in-place":
```
💡 SUGGESTION: --in-place=SUFFIX gets generic error instead of security message
[Security: LOW — not exploitable]
GNU sed accepts --in-place=SUFFIX (e.g., --in-place=.bak). This falls through to the default case which returns a generic "invalid option" error rather than the specific "not available in safe shell" message. Not a bypass (the option is still rejected), but the error message is misleading.
Fix: Use prefix matching:
```go
default:
	if strings.HasPrefix(arg, "--in-place") {
		r.errf("sed: -i (in-place edit) is not available in safe shell\n")
		exit.code = 2
		return exit
	}
```
Fixed in 12a80b0: Long option matching uses strings.HasPrefix(arg, "--in-place"). Test: TestSafety_SedInPlaceLongOption.
```go
)

// Parse options
i := 0
```
💡 SUGGESTION: Option parser inconsistency
The other builtins (grep, sort, uniq, etc.) all use the flagParser struct from builtin.go. This implementation uses a hand-rolled option parser. While the sed options are more complex (combined flags like -nE, -e taking rest-of-string), the inconsistency adds maintenance burden. Consider a comment explaining why the custom parser is necessary.
Acknowledged. Added comment explaining why the custom parser is necessary (combined flags like -nE, -e taking rest-of-arg).
```go
numStr := ""
for p.pos < len(p.script) && p.peek() >= '0' && p.peek() <= '9' {
	numStr += string(p.next())
}
```
💡 SUGGESTION: q/Q exit code not range-validated
The quit code is parsed as int but stored as uint8 (via exit.code = uint8(state.quitCode) at L254). Values outside 0-255 will silently truncate. Consider validating the range.
Fixed in 12a80b0: q/Q exit codes now clamped to 0-255 range.
```go
	exit.code = 2
	return exit
}
data, err := io.ReadAll(f)
```
💡 SUGGESTION: Script file (-f) read has no size limit
io.ReadAll(f) has no size cap. A very large script file could consume excessive memory during parsing. Consider adding a size limit (e.g., 1MB — far more than any reasonable sed script).
References: CWE-770 (Allocation of Resources Without Limits or Throttling)
Fixed in 12a80b0: -f script file reads now use io.LimitReader capped at sedMaxScriptFileBytes (1 MiB).
Fixes all 13 review findings:
- Add iteration limit (10k) and ctx.Done() check in sedExecCommands
to prevent infinite loops from branch commands (CRITICAL)
- Fix address ranges with proper inRange toggle for regex ranges (CRITICAL)
- Implement n/N commands properly with line advancement (IMPORTANT)
- Flatten command tree so labels inside {} groups resolve correctly
- Cap r file reads at 1MB via io.LimitReader
- Cap input buffering at 100k lines
- Cap pattern/hold space at 10MB
- Cap -f script file reads at 1MB
- Block --in-place=SUFFIX with prefix matching
- Validate q/Q exit codes to 0-255 range
- Remove unused sedActionBranch constant
- Rename sedActionRestart to sedActionEndCycle
- Fix parseText line-continuation efficiency
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
This pull request has been automatically marked as stale because it has not had activity in the past 15 days. It will be closed in 30 days if no further activity occurs. If this pull request is still relevant, adding a comment or pushing new commits will keep it open. Also, you can always reopen the pull request if you missed the window. Thank you for your contributions!
This pull request was automatically closed because it has been stale for 15 days with no activity. If this pull request is still relevant, please reopen it or create a new pull request with updated information. Thanks!
Summary
- Ports the embedded POSIX shell interpreter from the POC branch (`origin/alex/poc_shell`) onto a fresh branch from `main`
- Implements 6 new safe builtins: `head`, `cat`, `grep`, `wc`, `sort`, `uniq`
- Safety hardening: block `eval`/`exec`/`source`/`trap`, cap `ping -c` at 100, reject `ping -f`, add `tail -f` 60s timeout, reject `sort -o`, cap `grep -r` depth at 10, enforce a shell-wide 1MB output cap
- `agent shell` CLI subcommand and PAR bundle registration
Safety Decisions
- `eval`, `exec`, `source`/`.`, `trap`
- `find -exec`, `-delete`
- `ping -f`
- `ping -c`
- `tail -f`
- `sort -o`
- `grep -r` depth
- `cat` large files

Test plan
- `go test ./pkg/shell/interp/...` — All 27 tests pass
- `go test ./pkg/privateactionrunner/bundles/ddagent/shell/...` — PAR bundle tests pass
- `datadog-agent shell -c 'echo hello | grep hel | wc -l'`
- `datadog-agent shell -c 'eval "echo pwned"'`

🤖 Generated with Claude Code