[integrity] DIFC Integrity-Filtered Events Report — 2026-03-23 #22397
Closed · 1 comment

Note: This discussion has been marked as outdated by the Daily DIFC Integrity-Filtered Events Analyzer. A newer discussion is available at Discussion #22751.
Executive Summary
In the last 7 days (data observed: 2026-03-21 through 2026-03-23), 1,403 DIFC integrity-filtered events were detected across 102 workflow runs and 19 distinct workflows. The most frequently filtered tool was `issue_read` (556 events, 40%), followed by `list_issues` (475 events, 34%). The dominant filter reason was integrity (1,402 of 1,403 events, 99.9%), with only 1 secrecy-triggered filter.

Filtering volume is heavily concentrated on 2026-03-22 (943 events — 67% of all events), largely driven by the Issue Monster and Auto-Triage Issues workflows. Issue Monster alone accounts for 533 events (38% of the total), reflecting its high-frequency schedule (~30-minute interval) repeatedly attempting to read the same set of integrity-blocked issues. The integrity system is functioning correctly: it is protecting agentic workflows from reading untrusted contributor content (tagged `none:all`) and content from unapproved contributors (`unapproved:all`). However, the repeated-retry pattern in Issue Monster represents an optimization opportunity.

Key Metrics
- Total filtered events: 1,403 (102 workflow runs, 19 distinct workflows)
- Top filtered tool: `issue_read` (556 events, 40%)
- MCP server involved: `github` (all events)
- Integrity tags observed: `none:all` (all events), `unapproved:all` (130 events)

📈 Events Over Time
The analysis window contains 3 days of data. Activity was minimal on 2026-03-21 (171 events), spiked sharply on 2026-03-22 (943 events — a 5.5× increase), then fell back to 289 events on 2026-03-23. The 2026-03-22 spike coincides with multiple large runs of Auto-Triage Issues (56–62 events each), the Org Health Report (175 events in a single run), and a high density of Issue Monster runs. The pattern suggests a normal operational burst rather than a systemic regression.
🔧 Top Filtered Tools
The five most-filtered tools are all GitHub read operations targeting issues and PRs:
- `issue_read`: 556 events (40%)
- `list_issues`: 475 events (34%)
- `search_issues`
- `search_pull_requests`
- `pull_request_read`
- all other tools (`get_discussion`, `list_discussions`, `list_commits`, `get_job_logs`)

This distribution is expected: agentic workflows routinely scan issue/PR feeds that contain untrusted external contributions. All filtered calls hit the `github` MCP server — no other MCP server was affected.

🏷️ Filter Reasons and Tags
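A per-tool breakdown like the one above can be derived from raw event records with a simple tally. The sketch below assumes a minimal event schema (`tool`, `server`, `reason`); the analyzer's actual schema is not shown in this report.

```python
from collections import Counter

# Hypothetical filtered-event records; field names are illustrative only.
events = [
    {"tool": "issue_read", "server": "github", "reason": "integrity"},
    {"tool": "issue_read", "server": "github", "reason": "integrity"},
    {"tool": "list_issues", "server": "github", "reason": "integrity"},
    {"tool": "pull_request_read", "server": "github", "reason": "secrecy"},
]

# Tally events per tool, most frequent first, with percentage share.
tool_counts = Counter(e["tool"] for e in events)
for tool, n in tool_counts.most_common():
    share = 100 * n / len(events)
    print(f"{tool}: {n} ({share:.0f}%)")
```

The same `Counter` pattern extends directly to the per-server, per-reason, and per-user breakdowns later in this report.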
Filter reasons split 99.9% integrity, 0.1% secrecy. Every event carries the `none:all` integrity tag, meaning the content being read came from contributors with no established trust level (external contributors without explicit trust grants). 130 events (9.3%) additionally carry `unapproved:all`, indicating they originated from contributors classified as `CONTRIBUTOR` but not yet approved; `dsyme` is the primary example.

The single secrecy-filtered event involved a resource tagged `private/secret`, correctly blocked to prevent leaking sensitive content to agent context.

📋 Per-Workflow Breakdown
📋 Per-Server Breakdown
All filtered events came from the `github` MCP server. No other MCP servers (playwright, filesystem, etc.) triggered integrity filtering during this window.
👤 Per-User Breakdown
- `github-actions[bot]`
- `dsyme` (`unapproved:all` tag — not yet approved)
- samuelkahessay
- mnkiefer
- Carlson-JLQ
- veverkap
- mhavelock
- Fmarzochi
- TiggySravanE
- somoire-consultancy-Company

🔍 Per-User Analysis
The majority of filtered events (949 / 67.6%) are attributable to automated actors: 601 events from scheduler-triggered runs with no human author context and 348 events from `github-actions[bot]`. This is the expected pattern for a system that runs scheduled issue-scanning workflows: the automation reads many issues from untrusted external contributors, all of which are correctly blocked.

Among human contributors, `dsyme` (81 events, 5.8%) is the largest individual driver. This user is classified as a `CONTRIBUTOR` (has prior merged PRs) but carries `unapproved:all` tags, indicating their content is not yet approved for agent consumption. Granting `dsyme` explicit trust or moving them to an approved tier would eliminate these 81 recurring filters.

The remaining ~100 individual contributors each trigger 1–8 events, consistent with normal external issue/PR activity.
💡 Tuning Recommendations
Cache integrity-blocked issue IDs in Issue Monster (High impact)
Issue Monster accounts for 38% of all filtered events by repeatedly attempting to read the same ~10 integrity-blocked issues (e.g., 🔍 Multi-Device Docs Testing Report - 2026-03-22 #22226, [refactor] Semantic Function Clustering Analysis: Refactoring Opportunities in pkg/ #22167, [deps] Update safe patch dependencies (1 update) #21935) on every ~30-minute run. The workflow should maintain a cache of known-blocked issue IDs and skip them on subsequent runs. This alone could reduce total filtered events by ~38%.
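A minimal sketch of such a cache, assuming a local JSON file and a TTL after which blocked issues are re-checked (file name, TTL, and function names are hypothetical, not part of the Issue Monster workflow):

```python
import json
import time
from pathlib import Path

CACHE_FILE = Path("blocked_issue_cache.json")  # hypothetical location
TTL_SECONDS = 24 * 3600  # re-attempt blocked issues once per day

def load_cache() -> dict:
    """Return {issue_id: first_blocked_timestamp}, dropping expired entries."""
    if not CACHE_FILE.exists():
        return {}
    cache = json.loads(CACHE_FILE.read_text())
    now = time.time()
    return {k: v for k, v in cache.items() if now - v < TTL_SECONDS}

def save_cache(cache: dict) -> None:
    CACHE_FILE.write_text(json.dumps(cache))

def record_blocked(issue_id: int, cache: dict) -> None:
    """Remember that this issue was integrity-blocked on this run."""
    cache[str(issue_id)] = time.time()

def issues_to_scan(candidate_ids: list, cache: dict) -> list:
    """Skip issues that were integrity-blocked within the TTL window."""
    return [i for i in candidate_ids if str(i) not in cache]
```

The TTL matters: blocks are not permanent (a contributor may later be approved), so the cache should expire rather than suppress an issue forever.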
Review and approve `dsyme` as a trusted contributor (Medium impact)

`dsyme` is a CONTRIBUTOR with 81 filtered events (5.8% of total) due to the `unapproved:all` classification. If this contributor's content is safe to process, granting explicit approval or adding them to the trusted tier would eliminate these blocks. Review their recent contributions and proceed accordingly.

Add per-run filtering caps to Auto-Triage Issues (Medium impact)
Auto-Triage Issues generates 56–62 filtered events per large run (346 total across the window). Consider limiting the number of issues scanned per run to those created/updated since the last run, rather than scanning the full open issue list. This would reduce event volume without sacrificing coverage.
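One way to express "only items changed since the last run" is an incremental search query using GitHub's `updated:` qualifier. This is a sketch under assumed names (the actual Auto-Triage Issues configuration is not shown in this report):

```python
def incremental_issue_query(repo: str, last_run_iso: str) -> str:
    """Build a GitHub issue-search query scoped to issues updated since the
    previous run, instead of re-scanning the full open-issue list.

    `last_run_iso` is an ISO-8601 UTC timestamp persisted by the workflow
    between runs (persistence mechanism assumed, not shown here).
    """
    return f"repo:{repo} is:issue is:open updated:>={last_run_iso}"
```

For example, `incremental_issue_query("example-org/example-repo", "2026-03-23T06:00:00Z")` restricts the scan to issues touched in the last scheduling interval, which bounds the number of untrusted items the agent attempts to read per run.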
Investigate the Org Health Report single-run spike (175 events) (Low-Medium impact)
One Org Health Report run generated 175 events — the most of any single run. This suggests it scanned a broad cross-section of issues and PRs in a single pass. Consider scoping its queries to trusted-content-only sources or filtering to recently-active items.
No action needed on secrecy filtering (Informational)
The single secrecy-filtered event is an isolated occurrence and does not warrant a tuning change at this time. Monitor for recurring patterns.
Consider a DIFC "trust warming" mechanism for high-volume external contributors (Long-term)
The ~100 individual external contributors each appearing in the filtered-user list suggests a long tail of new participants. A lightweight approval flow (e.g., auto-approve contributors after N merged PRs or after manual review) could proactively reduce future filtering volume.
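The approval rule itself can be very small. A sketch of the decision function, with a hypothetical threshold of 3 merged PRs (the threshold and the `Contributor` shape are assumptions, not existing DIFC policy):

```python
from dataclasses import dataclass

@dataclass
class Contributor:
    login: str
    merged_prs: int
    manually_reviewed: bool = False

def should_auto_approve(c: Contributor, pr_threshold: int = 3) -> bool:
    """Promote a contributor out of `unapproved:all` once they have either
    passed a manual review or accumulated enough merged PRs."""
    return c.manually_reviewed or c.merged_prs >= pr_threshold
```

Run periodically over the filtered-user list, a rule like this would have flagged `dsyme` (a CONTRIBUTOR with prior merged PRs) for review rather than leaving them to generate recurring filter events.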