[RLlib] Cleanup examples folder #15: Add example script for policy (RLModule) inference w/ ConnectorV2.#45845
Conversation
simonsays1980
left a comment
LGTM. Next step: Let’s think about how to use a naked module without RLlib connectors and episodes, e.g. on the edge.
Each connector piece receives as input the output of the previous connector
piece in the pipeline.
"""
shared_data = shared_data if shared_data is not None else {}
Nice! This should spare us some naughty definitions in learner code
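The quoted lines describe two things: each connector piece receives the previous piece's output, and `shared_data` defaults to a fresh dict per call (instead of a mutable default argument). A minimal pure-Python sketch of that pattern, NOT RLlib's actual ConnectorV2 API — the piece functions and `run_pipeline` below are hypothetical stand-ins:

```python
# Hypothetical connector "pieces": each one takes the previous piece's
# output plus a shared_data dict it may read from or write to.
def add_one(batch, shared_data):
    shared_data.setdefault("calls", []).append("add_one")
    return batch + 1

def double(batch, shared_data):
    shared_data.setdefault("calls", []).append("double")
    return batch * 2

def run_pipeline(pieces, batch, shared_data=None):
    # Same guard as in the quoted code: avoid a shared mutable default.
    shared_data = shared_data if shared_data is not None else {}
    for piece in pieces:
        # Output of one piece becomes the input of the next.
        batch = piece(batch, shared_data)
    return batch, shared_data

out, shared = run_pipeline([add_one, double], 3)
```

Here `out` is `8` ((3 + 1) * 2) and `shared["calls"]` records the order the pieces ran in.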
input_observation_space=env.observation_space,
input_action_space=env.action_space,
connectors=[
    AddObservationsFromEpisodesToBatch(),
Here we need more documentation on which connectors are needed. For the module, we get its inputs from the input specs.
IMO this example is still an easy one b/c the connectors themselves are stateless. This could easily be done w/o connectors, but with a mean-std filter we would need the actual state of the connector.
Agreed, this example wouldn't work with stateful connectors.
I'll add more documentation to the examples. I still need to write all the ConnectorV2 docs anyway :)
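To make the stateless-vs-stateful distinction above concrete, here is a toy mean-std filter in plain Python (a stand-in, NOT RLlib's `MeanStdFilter` connector). Its running statistics are state that must be checkpointed and restored, which is exactly what a stateless connector does not need:

```python
import math

class ToyMeanStdFilter:
    """Running mean/std normalizer using Welford's online algorithm."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0
        self.m2 = 0.0  # Sum of squared deviations from the running mean.

    def __call__(self, x):
        # Update running statistics, then normalize the value.
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (x - self.mean)
        std = math.sqrt(self.m2 / self.count) if self.count > 1 else 1.0
        return (x - self.mean) / (std or 1.0)

    # The filter's state is what a checkpoint would have to carry.
    def get_state(self):
        return {"count": self.count, "mean": self.mean, "m2": self.m2}

    def set_state(self, state):
        self.count = state["count"]
        self.mean = state["mean"]
        self.m2 = state["m2"]

f = ToyMeanStdFilter()
for x in [1.0, 2.0, 3.0]:
    f(x)

# A fresh filter restored from the saved state behaves like the original.
restored = ToyMeanStdFilter()
restored.set_state(f.get_state())
```

Without restoring that state at inference time, the restored filter would normalize observations differently than during training.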
)
# Create new Algorithm and restore its state from the last checkpoint.
rl_module = RLModule.from_checkpoint(
    os.path.join(
Does this also work with pathlib?
I think so. For joining, I like to use os.path.join. Seems simpler to use.
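Both joining styles produce the same path string. A quick sketch; the path segments below are placeholders, not the actual layout of the checkpoint in the example script:

```python
import os
from pathlib import Path

# Hypothetical relative checkpoint layout, for illustration only.
joined_os = os.path.join("checkpoint", "learner", "rl_module")
joined_pathlib = str(Path("checkpoint") / "learner" / "rl_module")

# Both use the platform's separator, so the results are identical.
```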
# For the module-to-env pipeline, we will use the convenient config utility.
print("Module-to-env ConnectorV2 ...")
module_to_env = base_config.build_module_to_env_connector(env=env)
Why can we build this one easily from the algo config, but not the env-to-module one?
We can do both with config.build_[env_to_module|module_to_env]_connector; I just wanted to show both ways.
reward,
terminated=terminated,
truncated=truncated,
# Same here: [0] b/c RLModule output is batched (w/ B=1).
We should also add a hint here that we use the episode to store the module's states, so that they can be used in the connectors and the module without manual intervention.
The user might otherwise question why we store all this and harm performance with it.
True! I agree, the next thing should be an "on-the-edge" example:
- RLModule (as ONNX re-import?)
- stateful ConnectorV2 (loaded from checkpoint?)
- Episode (how do we make this efficient and not carry around a maybe-10k-timestep episode?) <- Though I have to say, extending an episode is just appending items to a list, which does NOT pose a large performance penalty
- what if all the above must run on non-python, e.g. C++ ??
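A quick sanity check of the claim that extending an episode is cheap: each env step boils down to list appends, which are amortized O(1) in CPython. The `ToyEpisode` below is a hypothetical stand-in, not RLlib's `SingleAgentEpisode`:

```python
class ToyEpisode:
    """Minimal episode stand-in: per-step data lives in plain lists."""

    def __init__(self):
        self.observations = []
        self.rewards = []

    def add_env_step(self, obs, reward):
        # Two list appends per step; the cost does not grow with the
        # episode's length, so even a 10k-timestep episode stays cheap
        # to extend (memory, not append time, is the real concern).
        self.observations.append(obs)
        self.rewards.append(reward)

ep = ToyEpisode()
for t in range(10_000):
    ep.add_env_step(obs=t, reward=1.0)

total_return = sum(ep.rewards)
```

So the open question for the on-the-edge setup is less about append cost and more about how much of the episode history the connectors actually need to keep around.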
Cleanup examples folder #15: Add example script for policy (RLModule) inference w/ ConnectorV2.
Why are these changes needed?
Related issue number
Checks
- I've signed off every commit (git commit -s) in this PR.
- I've run scripts/format.sh to lint the changes in this PR.
- If I've added a new method in Tune, I've added it in doc/source/tune/api/ under the corresponding .rst file.