
Conversation

@doringeman
Collaborator

On top of #64.

You can launch docker/model-runner#46 using make docker-run.

$ MODEL_RUNNER_HOST=http://localhost:8080 docker model run ai/smollm2 hi
$ MODEL_RUNNER_HOST=http://localhost:8080 docker model unload --all
Unloaded 1 model(s).
$ # run again
$ MODEL_RUNNER_HOST=http://localhost:8080 docker model unload ai/smollm2
Unloaded 1 model(s).
$ # run again
$ MODEL_RUNNER_HOST=http://localhost:8080 docker model unload --backend llama.cpp ai/smollm2
Unloaded 1 model(s).
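
For context, here is a minimal sketch of what such an unload subcommand could look like. It assumes the CLI is built on spf13/cobra (as Docker CLI plugins typically are) and uses a hypothetical client interface for talking to the model runner; it is illustrative only, not the actual docker/model-cli implementation.

```go
// Hypothetical sketch of an "unload" subcommand; the modelClient interface
// and newUnloadCmd are illustrative stand-ins, not the real docker/model-cli code.
package commands

import (
	"fmt"

	"github.com/spf13/cobra"
)

// modelClient stands in for whatever client the CLI uses to reach the model
// runner (the endpoint addressed via MODEL_RUNNER_HOST in the session above).
type modelClient interface {
	// Unload returns how many models were unloaded.
	Unload(model, backend string, all bool) (int, error)
}

func newUnloadCmd(client modelClient) *cobra.Command {
	var all bool
	var backend string

	cmd := &cobra.Command{
		Use:   "unload [MODEL]",
		Short: "Unload one or all running models",
		Args:  cobra.MaximumNArgs(1),
		RunE: func(cmd *cobra.Command, args []string) error {
			model := ""
			if len(args) == 1 {
				model = args[0]
			}
			if !all && model == "" {
				return fmt.Errorf("specify a model or use --all")
			}
			n, err := client.Unload(model, backend, all)
			if err != nil {
				return err
			}
			// Matches the output shown in the session above.
			cmd.Printf("Unloaded %d model(s).\n", n)
			return nil
		},
	}

	cmd.Flags().BoolVar(&all, "all", false, "Unload all models")
	cmd.Flags().StringVar(&backend, "backend", "", "Only unload models from this backend (e.g. llama.cpp)")
	return cmd
}
```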

@doringeman changed the title from "Unload" to "Add docker model unload" on May 21, 2025
@doringeman mentioned this pull request on May 21, 2025
@crazy-max
Member

Same as #63 (comment)

Signed-off-by: Dorin Geman <dorin.geman@docker.com>
@doringeman merged commit f7fd3d9 into docker:main on May 21, 2025
1 check passed
@doringeman
Collaborator Author

Fixed docs in #67.
