Summary
`lark-cli` currently has `doctor`, but from an integration perspective it is still much more oriented toward config / login / user-auth flows than toward bot credential health checks.
For systems that integrate `lark-cli` as the official CLI execution surface, there is a strong need for a first-class bot probe / bot doctor capability.
What we need
We need a way to:
- check availability using bot credentials
- directly retrieve bot info
- clearly determine whether the configured `app_id` / `app_secret` are valid
In other words, we need a bot-oriented health check, not just a user-auth/config-oriented doctor flow.
Current workaround
Today this can be approximated by calling:
```
lark-cli api GET /open-apis/bot/v3/info --as bot
```
This works as a low-level probe, but it is not ideal because:
- the intent is not explicit
- output is not normalized as a health check / diagnosis result
- callers need to interpret API-level errors themselves
- it does not provide a stable first-class UX for bot health verification
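To illustrate the interpretation burden, here is a minimal sketch (in Python) of what each integrator currently has to re-implement around the raw command. It assumes the usual open-apis response envelope of `{"code": ..., "msg": ..., "bot": {...}}`; the helper names and the exact field names are illustrative, not part of `lark-cli`:

```python
import json
import subprocess


def probe_bot_via_raw_api() -> dict:
    """Hypothetical wrapper: run the low-level command and classify the result."""
    proc = subprocess.run(
        ["lark-cli", "api", "GET", "/open-apis/bot/v3/info", "--as", "bot"],
        capture_output=True,
        text=True,
    )
    return classify_bot_info_response(proc.returncode, proc.stdout)


def classify_bot_info_response(exit_code: int, stdout: str) -> dict:
    """Turn a raw API response into a normalized health-check result.

    Assumes the {"code": 0, "msg": "ok", "bot": {...}} envelope on success
    and a non-zero "code" on failure; field names are illustrative.
    """
    if exit_code != 0:
        return {"ok": False, "error": "cli_failed"}
    try:
        body = json.loads(stdout)
    except json.JSONDecodeError:
        return {"ok": False, "error": "unparseable_output"}
    if body.get("code", -1) != 0:
        return {"ok": False, "error": body.get("msg", "unknown_api_error")}
    bot = body.get("bot", {})
    return {
        "ok": True,
        "bot_name": bot.get("app_name"),
        "bot_open_id": bot.get("open_id"),
    }
```

Every caller routing across multiple bots ends up carrying some variant of this classification logic, which is exactly what a first-class probe should absorb.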
Proposed UX
Either of these would solve the problem well:
Option A: a dedicated subcommand, for example something like `lark-cli bot probe`
Option B: a bot mode on the existing flow, for example something like `lark-cli doctor --bot`
A good result should ideally include fields such as:
- `ok`
- `app_id`
- resolved `bot_name` / `bot_open_id`
- endpoint / brand info if relevant
- a normalized error message if credentials are invalid or the bot cannot be resolved
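As a concrete (hypothetical) shape, a successful probe might emit something like the following; the field names follow the list above, while the exact values and the `endpoint` key are assumptions:

```json
{
  "ok": true,
  "app_id": "cli_xxx",
  "bot_name": "My Bot",
  "bot_open_id": "ou_xxx",
  "endpoint": "https://open.larksuite.com"
}
```

On failure, the same shape would carry `"ok": false` plus a normalized error field instead of the bot identity fields.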
Why this matters
This is especially important for:
- AI agents
- orchestration systems
- wrapper CLIs
- server-side integrations that route across multiple bots
In these environments, bot credentials are often managed externally, and the first thing the integrator needs is a reliable answer to:
- does this bot credential pair actually work?
- which bot identity does it resolve to?
- is the failure caused by invalid credentials, endpoint mismatch, or something else?
Without a first-class bot doctor/probe, integrators usually end up re-implementing this logic outside the CLI.
Suggested behavior
- Use bot credentials directly
- Return machine-readable structured output
- Normalize common failures, for example:
  - invalid `app_id` / `app_secret`
  - bot info endpoint unavailable
  - brand / endpoint mismatch
  - bot identity cannot be resolved
- Keep this stable enough for automation and CI-style health checks
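The normalization step above could be sketched as follows. The category names and messages are hypothetical (they are not actual Lark error codes); the point is a small, stable vocabulary that automation and CI checks can match on:

```python
# Hypothetical mapping from raw failure signals to stable categories.
# Category names mirror the bullet list above; they are illustrative only.
FAILURE_CATEGORIES = {
    "invalid_credentials": "invalid app_id / app_secret",
    "endpoint_unavailable": "bot info endpoint unavailable",
    "endpoint_mismatch": "brand / endpoint mismatch",
    "unresolved_identity": "bot identity cannot be resolved",
}


def normalize_failure(category: str) -> dict:
    """Return a stable, machine-readable diagnosis for automation/CI.

    Unknown categories collapse to "unknown" rather than leaking raw
    API-level errors to the caller.
    """
    if category not in FAILURE_CATEGORIES:
        return {"ok": False, "error": "unknown", "message": "unclassified failure"}
    return {"ok": False, "error": category, "message": FAILURE_CATEGORIES[category]}
```

Keeping the category set small and closed is what makes the output safe to assert on in CI-style health checks.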
Related context
This request is related to broader identity / integration needs such as #18, but this issue is specifically about first-class bot credential diagnosis and probing.
Expected impact
This would make lark-cli significantly easier to use as a reliable integration surface, not only as an interactive terminal tool.