
Supporting YOLO models: aten::_reinterpret_tensor not implemented #203

@oluwatimilehin

Description

Hi team, thanks for the fantastic work on PyTorchSIM.

Just flagging that I attempted to run inference on a YOLOv5 model on PyTorchSIM, and it failed with the following error:

NotImplementedError: Could not run 'aten::_reinterpret_tensor' with arguments from the 'npu' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). 

This operator is registered here:

TORCH_LIBRARY_FRAGMENT(aten, m) {

but it's not picked up during inference.
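For context, a `TORCH_LIBRARY_FRAGMENT` block only declares the operator's schema; each backend still needs a kernel registered under its dispatch key via `TORCH_LIBRARY_IMPL`. A minimal sketch of what that registration might look like, assuming PyTorchSim's `npu` device is mapped to the `PrivateUse1` dispatch key (the kernel name `npu__reinterpret_tensor` and its body are hypothetical placeholders, not PyTorchSim's actual implementation):

```cpp
#include <torch/library.h>
#include <ATen/ATen.h>

// Hypothetical kernel matching aten::_reinterpret_tensor's schema.
// The body is a placeholder: a strided reinterpretation could plausibly
// be lowered to as_strided on the custom backend.
at::Tensor npu__reinterpret_tensor(
    const at::Tensor& self,
    c10::SymIntArrayRef size,
    c10::SymIntArrayRef stride,
    c10::SymInt offset_increment) {
  return at::as_strided_symint(
      self, size, stride, self.sym_storage_offset() + offset_increment);
}

// Register the kernel for the PrivateUse1 (custom backend) dispatch key,
// assuming that is the key the "npu" device resolves to.
TORCH_LIBRARY_IMPL(aten, PrivateUse1, m) {
  m.impl("_reinterpret_tensor", TORCH_FN(npu__reinterpret_tensor));
}
```

This is only a sketch of the registration mechanism; without an `impl` for the backend's dispatch key, the dispatcher raises exactly the `NotImplementedError` shown above.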

Here's my repro script:

import requests
import torch
from io import BytesIO
from PIL import Image
from torchvision import transforms

def run_yolo(batch, config):
    # PyTorchSimRunner is provided by the PyTorchSim package
    device = PyTorchSimRunner.setup_device().custom_device()

    # Load YOLOv5s from the Ultralytics hub and fetch a sample image
    model = torch.hub.load("ultralytics/yolov5", "yolov5s")
    url = "https://ultralytics.com/images/zidane.jpg"

    response = requests.get(url)
    img = Image.open(BytesIO(response.content)).convert("RGB")

    transform = transforms.Compose([
        transforms.Resize((640, 640)),
        transforms.ToTensor(),
    ])

    x = transform(img).unsqueeze(0)  # add a batch dimension

    # Move model and input tensor to the custom device
    x = x.to(device)
    model.to(device)

    # Compile and run the model with PyTorchSim
    compiled_model = torch.compile(dynamic=False)(model)
    y = compiled_model(x)
