[BUG] Integer overflow during interval analysis and simplification #2455

@sgrechanik-h

Description

Consider the following code:

```python
import tvm

dtype = 'int32'
cnst = lambda c: tvm.const(c, dtype)
vs = [tvm.var("x" + str(i), dtype) for i in range(4)]
[x0, x1, x2, x3] = vs
vranges = {v: tvm.Range(tvm.const(-256*256*2560, dtype), tvm.const(256*256*256*20, dtype))
           for v in vs}

expr = x0*cnst(2) + x1*cnst(2) + x2*cnst(2) + cnst(-1) < x3*cnst(-1)

res = tvm.ir_pass.CanonicalSimplify(expr, vranges)

print(vranges)
print(expr)
print(res)
```

(Note that all the specified ranges are within the limits of int32.)
CanonicalSimplify simplifies the expression to 1 (i.e. always true), which is wrong. The reason is that an integer overflow happens during interval analysis (EvalSet). Although values are represented internally as int64, which should be wide enough, the function IntImm::make from HalideIR intentionally truncates the higher bits to the expression's dtype.
I'm not sure how to fix this: the truncation seems important for preserving the exact behavior of integer operations, but it should probably be disabled somehow during interval analysis.
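To illustrate the failure mode, here is a small Python sketch (independent of TVM) of how truncating an int64 intermediate to 32 bits can flip the sign of a bound. The specific bound computation below is an assumption about what EvalSet effectively does when deciding the comparison; only the truncation step mimics IntImm::make.

```python
def truncate_to_int32(v):
    """Mimic IntImm::make's truncation: keep the low 32 bits,
    then reinterpret them as a signed int32."""
    v &= 0xFFFFFFFF
    return v - (1 << 32) if v >= (1 << 31) else v

# Hypothetical bounds derived from the ranges in the example above
# (each x_i bounded above by roughly 256*256*256*20):
lhs_max = 3 * 2 * (256 * 256 * 256 * 20) - 1   # upper bound of x0*2 + x1*2 + x2*2 - 1
rhs_min = -(256 * 256 * 256 * 20)              # lower bound of x3*(-1)

# Deciding `lhs < rhs` from the sign of (rhs_min - lhs_max) is safe in
# int64, but the difference does not fit in int32:
diff = rhs_min - lhs_max   # -2348810239, below INT32_MIN

print(diff)                    # -2348810239 (correct in int64)
print(truncate_to_int32(diff)) # 1946157057 (wraps to a positive value)
```

After truncation the negative difference wraps around to a positive int32, which would make the analysis wrongly conclude that the left-hand side is always below the right-hand side, matching the observed simplification of the whole expression to 1.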
