Codecov Report
@@            Coverage Diff            @@
##           master    #5231     +/-  ##
=======================================
  Coverage   99.50%   99.50%
=======================================
  Files          77       77
  Lines       14590    14603     +13
=======================================
+ Hits        14518    14531     +13
  Misses         72       72
Continue to review full report at Codecov.
Great spot and fix. It's certainly debatable, but keeping the spirit of
Not advocating one way or the other, but just noting that one difference of int vs int64 is that there are values of int64 that can't be represented exactly in double, right? IIRC somewhere around 2^53 doubles no longer have integer-level precision, whereas that's not true for int: all valid int values can be represented exactly as double. So it seems like returning int64 could be more correct in some cases, IIUC.
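The 2^53 boundary mentioned above can be checked directly. A minimal sketch in Python, whose floats are the same IEEE-754 doubles that R's numeric type uses:

```python
# IEEE-754 doubles have a 53-bit significand, so above 2^53 not every
# integer is representable; int64 -> double can therefore lose exactness.
big = 2**53
assert float(big) == float(big + 1)   # 2^53 + 1 rounds back to 2^53
assert float(big - 1) != float(big)   # at or below 2^53, integers are exact

# Every 32-bit int is far below 2^53, so int -> double is always lossless.
INT32_MAX = 2**31 - 1
assert INT32_MAX < 2**53
assert int(float(INT32_MAX)) == INT32_MAX
```

This is why coercing an int result to double is safe, while coercing an int64 result to double is not in general.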
Perfectly fine for me. The only problem I see arising then is that different optimization levels will produce different outputs; see also my comment here.

library(bit64)
DT = data.table(x=c(lim.integer64(), 1, 1), g=1:2)
options(datatable.optimize=0L)
DT[, prod(x), g]
#>    g                   V1
#> 1: 1 -9223372036854775807
#> 2: 2  9223372036854775807
options(datatable.optimize=2L)
DT[, prod(x), g]
#>    g            V1
#> 1: 1 -9.223372e+18
#> 2: 2  9.223372e+18

My worries are not about losing precision but about the different types at the different optimization levels. This can become an even bigger problem for the user when certain functions turn off GForce optimization, as e.g.
Great points. OK, returning integer64 it is then.
Closes #5225.