var timeTraversed uint64 = 0
var weightedPriceSum sdk.Dec = sdk.ZeroDec()
for _, price := range prices {
	newTimeTraversed := uint64(ctx.BlockTime().Unix()) - price.SnapshotTimestampInSeconds
Just double checking: ctx.BlockTime() is static (i.e. block start time), right?
yeah it's set during BeginBlock based on block proposal (so consistent across all nodes)
	return res, true
}

func (k Keeper) GetAllPrices(ctx sdk.Context, contractAddr string, pair types.Pair) (list []*types.Price) {
when do we purge prices we no longer need?
func calculateTwap(ctx sdk.Context, prices []*types.Price, lookback uint64) sdk.Dec {
	// sort prices in descending order to start iteration from the latest
	sort.Slice(prices, func(p1, p2 int) bool {
Since the ordering won't change, we could consider using a priority queue to reduce the number of sorts. If the number of prices is pretty small, it probably doesn't matter.
} else {
	newPrices = append(twap.Prices, twap.Prices[len(twap.Prices)-1])
// condition to prevent unsigned integer overflow
if currentEpoch >= priceRetention {
Is it always guaranteed that an epoch length is 60s? Wouldn't it be good to add a check here that we only delete the epoch data if the current BlockTime - epoch BlockTime > 60 min? One issue with that, though, is that there is no mechanism to go back to an epoch if you skip over it on that condition :/ WDYT @codchen
Actually, for dex I'll extend the retention to be way longer than 60 min, since its price snapshots are needed for historical price queries as well. In that case we probably don't have this issue with twap, I think?
var weightedPriceSum sdk.Dec = sdk.ZeroDec()
for _, price := range prices {
	newTimeTraversed := uint64(ctx.BlockTime().Unix()) - price.SnapshotTimestampInSeconds
	if newTimeTraversed > lookback {
In this case, wouldn't we exclude any data from the first time interval?
Let's say we had data at T-61 and then at T-59 ... T-0. When calculating the TWAP, it would technically only represent the weighted price sum for T-59 to T-0, right? Because we would break on T-61 and never include data for the first interval.
That's true. We should align on the behavior in such a case between the dex twap and the oracle twap. Still factoring in T-61 but only assigning it a time weight of 1 (the portion inside the window) sounds reasonable to me, if that's also how the oracle twap is calculated.
Yeah, that makes sense to me. Currently my implementation also has the missing-first-interval issue, but I can modify it to keep the first out-of-bounds data point and count it toward the start of the TWAP window.
will approve once this change is in
pushed the fix as well as a unit test for this case
The previous twap endpoint was stateful and hardcoded to Vortex's use case.
The new twap endpoint itself is stateless, and performs calculation on the fly based on price snapshots. An exchange price snapshot is generated roughly every 60s and has a retention of one hour. Even though the snapshotting cadence is "roughly" 60s, the price snapshot itself will store the exact timestamp, which would make the twap calculation accurate.
Tested through both unit tests and local chain manual tests.
The PR is large mainly due to proto-generated files.
The following steps will be performed in subsequent PRs: