
Conversation


@felixhjh felixhjh commented Jan 5, 2023

PR types

Bug Fix

Description

  • When loading a model from memory, CheckModelFormat is not needed

Usage of FastDeploy loading a model from memory (currently only the Paddle Inference backend is supported):

import fastdeploy as fd
runtime_option = fd.RuntimeOption()
# Configure runtime_option to read the model from memory
runtime_option.set_model_buffer(model_buffer.read(), model_size, params_buffer.read(), params_size)
runtime_option.use_paddle_backend()
# Initialize the model without a model path or params path
model = fd.vision.classification.PaddleClasModel("", "", config_file, runtime_option=runtime_option)
model.predict(im)
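In the example above, model_buffer and params_buffer are assumed to be already-open binary file objects, and model_size and params_size their sizes in bytes. A minimal sketch of a helper that produces those values from serialized model files on disk (the .pdmodel/.pdiparams file names below are hypothetical, for illustration only):

```python
def load_buffer(path):
    """Read a serialized model or params file into a bytes buffer.

    Returns the raw bytes and their size in bytes, matching the
    (buffer, size) pairs passed to RuntimeOption.set_model_buffer.
    """
    with open(path, "rb") as f:
        data = f.read()
    return data, len(data)

# Hypothetical file names for illustration:
# model_buffer, model_size = load_buffer("model.pdmodel")
# params_buffer, params_size = load_buffer("model.pdiparams")
# runtime_option.set_model_buffer(model_buffer, model_size,
#                                 params_buffer, params_size)
```

Keeping the model as an in-memory buffer like this is what makes scenarios such as decrypting an encrypted model before loading possible, since the plaintext never has to be written back to disk.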



Development

Successfully merging this pull request may close these issues.

Can an interface for model encryption be provided? (能否提供模型加密的接口?)
