Summary
Any client that can reach the server can call the ListInferenceRoutes RPC. The handler does not check identity, API keys, or any other credential. It loads all inference route records from the persistent store, decodes each one as a full InferenceRoute protobuf (which includes the spec field), and returns them in the response. The InferenceRouteSpec type explicitly includes an api_key field, so every returned route exposes the stored upstream API key (e.g. for LLM providers). Unauthenticated callers can therefore harvest all configured inference API keys and abuse paid services or pivot to other systems.
Source Code
- The RPC and response types are defined in proto/inference.proto: the ListInferenceRoutes RPC (line 23), ListInferenceRoutesResponse with repeated InferenceRoute routes (lines 69-71), InferenceRouteSpec.api_key (line 30), and InferenceRoute.spec (line 38).
- The handler lives in crates/navigator-server/src/inference.rs in list_inference_routes (lines 239-265). It reads only limit and offset from the request body; it never inspects gRPC metadata (no x-sandbox-id, no auth header). It calls store.list(InferenceRoute::object_type(), limit, request.offset), decodes each record.payload as a full InferenceRoute, and pushes it into the response vector. No redaction is applied.
- Request routing is in crates/navigator-server/src/multiplex.rs (lines 98-108). The multiplexer chooses between the Navigator and Inference services solely by the request path prefix (/navigator.inference.v1.Inference/); there is no authentication middleware or interceptor.
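For reference, the message shapes described above can be sketched roughly as follows. This is a reconstruction from the report, not the actual proto/inference.proto: field numbers and the endpoint field are assumptions.

```proto
// Hypothetical sketch of the relevant messages; field numbers and
// non-secret fields are assumed, not copied from the real file.
message InferenceRouteSpec {
  string endpoint = 1;  // assumed field
  string api_key  = 2;  // upstream secret, returned verbatim to callers
}

message InferenceRoute {
  InferenceRouteSpec spec = 1;  // spec (and thus api_key) included in responses
}

message ListInferenceRoutesResponse {
  repeated InferenceRoute routes = 1;  // every stored route, unredacted
}
```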
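The listing flow and one possible mitigation can be sketched with plain stand-in types (the real handler uses prost-generated types and the persistent store; the structs and function names below are hypothetical):

```rust
// Stand-in types for the prost-generated InferenceRoute/InferenceRouteSpec.
#[derive(Clone, Debug)]
struct InferenceRouteSpec {
    endpoint: String,
    api_key: String, // the secret that leaks today
}

#[derive(Clone, Debug)]
struct InferenceRoute {
    name: String,
    spec: Option<InferenceRouteSpec>,
}

// Vulnerable shape: decoded records are returned verbatim, api_key included.
fn list_routes_unredacted(records: &[InferenceRoute]) -> Vec<InferenceRoute> {
    records.to_vec()
}

// One possible fix: clear the secret before the route leaves the server.
fn list_routes_redacted(records: &[InferenceRoute]) -> Vec<InferenceRoute> {
    records
        .iter()
        .cloned()
        .map(|mut route| {
            if let Some(spec) = route.spec.as_mut() {
                spec.api_key.clear();
            }
            route
        })
        .collect()
}

fn main() {
    let records = vec![InferenceRoute {
        name: "openai-default".into(),
        spec: Some(InferenceRouteSpec {
            endpoint: "https://api.example.com/v1".into(),
            api_key: "sk-live-secret".into(),
        }),
    }];
    // Today's behavior: the secret comes back to any caller.
    assert_eq!(
        list_routes_unredacted(&records)[0].spec.as_ref().unwrap().api_key,
        "sk-live-secret"
    );
    // With redaction, the listing still works but the secret is gone.
    assert!(list_routes_redacted(&records)[0].spec.as_ref().unwrap().api_key.is_empty());
    println!("redaction sketch ok");
}
```

Redaction alone is defense in depth; the primary fix is still authenticating the RPC.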
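The prefix-only dispatch, and the auth gate it is missing, can be illustrated with a minimal sketch (function names and the Bearer-token check are hypothetical, not the multiplexer's actual API):

```rust
use std::collections::HashMap;

// Mirrors the multiplexer's behavior: service selection by path prefix only.
fn dispatch(path: &str) -> &'static str {
    if path.starts_with("/navigator.inference.v1.Inference/") {
        "inference"
    } else {
        "navigator"
    }
}

// The missing piece: a check on request metadata before dispatching.
// The metadata key and token scheme here are assumptions for illustration.
fn authorize(metadata: &HashMap<String, String>) -> bool {
    metadata
        .get("authorization")
        .map_or(false, |v| v.starts_with("Bearer "))
}

fn main() {
    assert_eq!(
        dispatch("/navigator.inference.v1.Inference/ListInferenceRoutes"),
        "inference"
    );
    // An unauthenticated request carries no credential and would be rejected
    // if a gate like `authorize` ran before `dispatch`.
    let mut metadata = HashMap::new();
    assert!(!authorize(&metadata));
    metadata.insert("authorization".into(), "Bearer some-token".into());
    assert!(authorize(&metadata));
    println!("dispatch+auth sketch ok");
}
```

In a tonic-based server this gate would typically live in an interceptor or middleware layer applied before routing, so every Inference method (not just ListInferenceRoutes) is covered.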
Originally by @drew on 2026-02-19T08:58:48.065-08:00