Slice should clip to end of tensor #439

@jacobhinkle

Description

Slice behavior currently does not match numpy/PyTorch when an end index extends past the tensor extent:

// Slice with end beyond size of input. This should clip to input, not pad.
TEST_F(NVFuserTest, FusionResizeSlice6_CUDA) {
  Fusion fusion;
  FusionGuard fg(&fusion);

  std::vector<int64_t> shape({9});

  // concrete shapes to avoid dynamic Fusion
  auto tv0 = makeConcreteTensor(shape);
  fusion.addInput(tv0);

  auto tv1 = slice(tv0, {{fusion.zeroVal(), IrBuilder::create<Int>(11)}});
  fusion.addOutput(tv1);

  auto options = at::TensorOptions().dtype(at::kFloat).device(at::kCUDA, 0);

  auto t0 = at::randn(shape, options);
  std::vector<c10::IValue> aten_inputs({t0});

  FusionExecutor fe;
  fe.compileFusion(&fusion, aten_inputs);
  auto cg_outputs = fe.runFusion(aten_inputs);

  auto ref = t0.index({at::indexing::Slice(0, 11)});

  testValidate(&fusion, cg_outputs, aten_inputs, {ref}, __LINE__, __FILE__);
  // C++ exception with description "The size of tensor a (9) must match the
  // size of tensor b (11) at non-singleton dimension 0 Exception raised from
  // infer_size_impl at /opt/pytorch/pytorch/aten/src/ATen/ExpandUtils.cpp:31
}

In these cases, we should clip to the end of the tensor. That is, we should implicitly use min(extent, end_index) instead of the end index itself. As discussed in #397, this means we should not concretize slice ops prematurely unless we know their input extents.
