## Summary
Found 4 discrepancies between documentation and implementation during the nightly reconciliation check.
## Critical Issues 🔴
### 1. `payloadSizeThreshold` Incorrectly Documented as TOML-Only / Not Available in JSON Stdin
**Location:** `docs/CONFIGURATION.md`, line 418 — "TOML-only / CLI-only options" table
**Problem:** The table lists "Payload size threshold" as not available in JSON stdin format. This is incorrect.
**Actual Behavior:** `payloadSizeThreshold` is defined in `StdinGatewayConfig` with the struct tag `json:"payloadSizeThreshold,omitempty"` and is actively consumed when parsing JSON stdin configs.
**Impact:** Users writing JSON stdin configs who want to configure the payload size threshold will be misled into thinking they must use CLI flags or env vars, when in fact they can set it directly in their JSON config as `"gateway": { "payloadSizeThreshold": 1048576 }`.
**Suggested Fix:**
- Add `payloadSizeThreshold` to the Gateway Configuration Fields table (it belongs alongside `payloadDir`).
- Remove "Payload size threshold" from the "TOML-only / CLI-only options" table.
**Code References:**
- `internal/config/config_stdin.go:42` — `PayloadSizeThreshold *int` with tag `json:"payloadSizeThreshold,omitempty"`
- `internal/config/config_stdin.go:318-319` — `if stdinCfg.Gateway.PayloadSizeThreshold != nil { cfg.Gateway.PayloadSizeThreshold = *stdinCfg.Gateway.PayloadSizeThreshold }`
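For quick reference, a minimal JSON stdin fragment exercising this field might look as follows. Only the `gateway.payloadSizeThreshold` key is taken from the findings above; the surrounding shape (top-level `gateway` object, `mcpServers` map) is assumed, not verified against the schema:

```json
{
  "gateway": {
    "payloadSizeThreshold": 1048576
  },
  "mcpServers": {}
}
```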
## Important Issues 🟡
### 2. `connect_timeout` Server Field Is Undocumented
**Location:** `docs/CONFIGURATION.md` — Server Configuration Fields section
**Problem:** The `connect_timeout` per-server field is not mentioned anywhere in the documentation.
**Actual Behavior:** `connect_timeout` is defined in `ServerConfig` (used for both TOML and JSON file configs) and also in `StdinServerConfig` (`json:"connect_timeout,omitempty"`). It controls the per-transport connection timeout for HTTP backend connect attempts (streamable HTTP → SSE sequence). Default: 30 seconds.
**Impact:** Users with slow HTTP backends who want to increase the connection timeout have no documentation to guide them.
**Suggested Fix:** Add a `connect_timeout` entry to the Server Configuration Fields section:
`connect_timeout` (optional, HTTP servers only): Per-transport connection timeout in seconds for connecting to HTTP backends. The gateway tries streamable HTTP then SSE transports in sequence; this timeout applies to each attempt. Default: 30.
**Code Reference:** `internal/config/config_core.go:210-215`
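The suggested entry could be paired with a config sketch like the one below. Only `connect_timeout` and its semantics come from the findings above; the `[servers.<name>]` table layout and the `url` key are assumptions about the TOML schema and should be checked against `config.example.toml` before publishing:

```toml
# Hypothetical server entry — table layout assumed, not verified
[servers.slow-backend]
url = "https://mcp.example.com"
connect_timeout = 90  # seconds per transport attempt (streamable HTTP, then SSE); default: 30
```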
### 3. `rate_limit_threshold` and `rate_limit_cooldown` Fields Are Undocumented
**Location:** `docs/CONFIGURATION.md` — Server Configuration Fields section
**Problem:** The circuit breaker fields `rate_limit_threshold` and `rate_limit_cooldown` are not documented anywhere.
**Actual Behavior:** Both fields are defined in `ServerConfig`. They configure a per-backend rate-limit circuit breaker: after `rate_limit_threshold` (default: 3) consecutive rate-limit errors, the circuit opens and requests are rejected until `rate_limit_cooldown` seconds (default: 60) elapse. Note: these fields are available in TOML and JSON file-based configs but NOT in JSON stdin format.
**Impact:** Users experiencing rate limiting from backend MCP servers have no documentation on how to tune the circuit breaker behavior.
**Suggested Fix:** Add entries to the Server Configuration Fields section for these two fields.
**Code Reference:** `internal/config/config_core.go:219-229`
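A documentation example for these fields might look like the following. The field names and defaults come from the findings above; the table layout and `url` key are assumptions about the TOML schema:

```toml
# Hypothetical server entry — table layout assumed, not verified
[servers.rate-limited-backend]
url = "https://mcp.example.com"
rate_limit_threshold = 5   # open the circuit after 5 consecutive rate-limit errors (default: 3)
rate_limit_cooldown = 120  # reject requests for 120 s before retrying (default: 60)
```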
## Minor Issues 🔵
### 4. `make test-race` Not Documented in CONTRIBUTING.md Testing Section
**Location:** `CONTRIBUTING.md` — Testing section
**Problem:** The `test-race` make target is present in the Makefile and listed in `make help`, but is not mentioned in the CONTRIBUTING.md Testing section alongside `test-unit`, `test-all`, etc.
**Actual Behavior:** `make test-race` runs unit tests with Go's race detector (`-race` flag) to catch concurrent data races. This is particularly important for a concurrent HTTP server.
**Impact:** Contributors working on concurrent code have no documentation pointing them to the race detection test target.
**Suggested Fix:** Add a subsection under "Testing" in CONTRIBUTING.md:
#### Race Detection Tests
Run unit tests with Go's race detector to catch concurrent data races:
```bash
make test-race
```
The MCP Gateway is a concurrent server; use this to validate thread safety when modifying concurrent code.
**Code Reference:** `Makefile:63-69`
---
## Documentation Completeness
### Missing Documentation
- `connect_timeout` field (HTTP per-server timeout) exists in code but not in `docs/CONFIGURATION.md`
- `rate_limit_threshold` and `rate_limit_cooldown` fields (circuit breaker) exist in code but not in `docs/CONFIGURATION.md`
- `make test-race` make target exists in Makefile but not documented in `CONTRIBUTING.md`
### Inaccurate Documentation
- `payloadSizeThreshold` incorrectly classified as TOML-only / not available in JSON stdin (it IS available)
### Accurate Sections ✅
- `README.md` Quick Start — Docker run command with `MCP_GATEWAY_PORT`, `MCP_GATEWAY_DOMAIN`, `MCP_GATEWAY_API_KEY` is accurate; these are validated by `run_containerized.sh` entrypoint
- `README.md` Quick Start — JSON config field names (`type`, `container`, `env`, `apiKey`) are correct for JSON stdin format
- `README.md` Auth docs — API key format `Authorization: <api-key>` (not Bearer) matches code
- `CONTRIBUTING.md` — Go 1.25.0 requirement matches `go.mod`
- `CONTRIBUTING.md` — All documented make targets (`build`, `test`, `test-unit`, `test-integration`, `test-all`, `lint`, `coverage`, `test-ci`, `test-serena`, `test-serena-gateway`, `test-container-proxy`, `format`, `clean`, `install`) verified to exist in Makefile
- `CONTRIBUTING.md` — Binary name `awmg` matches Makefile `BINARY_NAME`
- `CONTRIBUTING.md` — `make install` steps (Go version check, golangci-lint, `go mod download`) match Makefile
- `CONTRIBUTING.md` — Project structure section accurately reflects `internal/` directory layout
- `config.example.toml` — TOML format fields and comments are accurate
- `docs/ENVIRONMENT_VARIABLES.md` — All documented env vars verified against source code
## Tested Commands
All make targets from `CONTRIBUTING.md` were verified via `make --dry-run`:
- ✅ `make build` — works as documented
- ✅ `make test` / `make test-unit` — works as documented
- ✅ `make test-integration` — works as documented (auto-builds binary if needed)
- ✅ `make test-all` — works as documented
- ✅ `make coverage` — works as documented
- ✅ `make test-ci` — works as documented
- ✅ `make lint` — works as documented
- ✅ `make format` — works as documented
- ✅ `make clean` — works as documented
- ✅ `make install` — works as documented
- ⚠️ `make test-race` — works but **not documented** in CONTRIBUTING.md
- ℹ️ Actual compilation was not executed (network-restricted sandbox; Go toolchain download blocked) — all targets above were verified via `make --dry-run` only
## Recommendations
### Immediate Actions Required:
1. Fix `payloadSizeThreshold` classification in `docs/CONFIGURATION.md` — move to JSON-available gateway config table and remove from TOML-only section
### Nice to Have:
1. Document `connect_timeout` in `docs/CONFIGURATION.md` Server Configuration Fields
2. Document `rate_limit_threshold` / `rate_limit_cooldown` in `docs/CONFIGURATION.md` Server Configuration Fields
3. Add `make test-race` to CONTRIBUTING.md Testing section
## Code References
- StdinGatewayConfig: `internal/config/config_stdin.go:32-44`
- ServerConfig fields: `internal/config/config_core.go:165-231`
- Variable expansion: `internal/config/validation.go:48-155`
- Default listen port: `internal/cmd/root.go:34`
> [!WARNING]
> <details>
> <summary><strong>⚠️ Firewall blocked 1 domain</strong></summary>
>
> The following domain was blocked by the firewall during workflow execution:
>
> - `proxy.golang.org`
>
> To allow these domains, add them to the `network.allowed` list in your workflow frontmatter:
>
> ```yaml
> network:
>   allowed:
>     - defaults
>     - "proxy.golang.org"
> ```
>
> See [Network Configuration](https://github.github.com/gh-aw/reference/network/) for more information.
>
> </details>
> Generated by [Nightly Documentation Reconciler](https://github.com/github/gh-aw-mcpg/actions/runs/24435376116/agentic_workflow) · ● 3.3M · [◷](https://github.com/search?q=repo%3Agithub%2Fgh-aw-mcpg+is%3Aissue+%22gh-aw-workflow-call-id%3A+github%2Fgh-aw-mcpg%2Fnightly-docs-reconciler%22&type=issues)
> - [x] expires <!-- gh-aw-expires: 2026-04-18T03:57:00.569Z --> on Apr 18, 2026, 3:57 AM UTC
<!-- gh-aw-agentic-workflow: Nightly Documentation Reconciler, engine: copilot, model: auto, id: 24435376116, workflow_id: nightly-docs-reconciler, run: https://github.com/github/gh-aw-mcpg/actions/runs/24435376116 -->
<!-- gh-aw-workflow-id: nightly-docs-reconciler -->
<!-- gh-aw-workflow-call-id: github/gh-aw-mcpg/nightly-docs-reconciler -->