Introduce extra flags for Instance Limits #11949
Description of bug
With the new pooling strategies for wasm instance creation, Parity deprecated the `InstanceReuse` strategy (which was the default until 0.9.24).
The issue comes from the hardcoded limits that apply when a pooling-based strategy is selected, i.e. the limits set here:
`substrate/client/executor/wasmtime/src/runtime.rs`, lines 376 to 401 at `7d8e5a1`:
```rust
config.allocation_strategy(wasmtime::InstanceAllocationStrategy::Pooling {
    strategy: wasmtime::PoolingAllocationStrategy::ReuseAffinity,
    // Pooling needs a bunch of hard limits to be set; if we go over
    // any of these then the instantiation will fail.
    instance_limits: wasmtime::InstanceLimits {
        // Current minimum values for kusama (as of 2022-04-14):
        //   size: 32384
        //   table_elements: 1249
        //   memory_pages: 2070
        size: 64 * 1024,
        table_elements: 3072,
        memory_pages,
        // We can only have a single of those.
        tables: 1,
        memories: 1,
        // This determines how many instances of the module can be
        // instantiated in parallel from the same `Module`.
        //
        // This includes nested instances spawned with `sp_tasks::spawn`
        // from *within* the runtime.
        count: 32,
    },
});
```
These limits are based on the Kusama runtime, which is probably small compared to some parachain runtimes. Consequently, a parachain with a runtime larger than Kusama's will very likely hit these limits and be unable to leverage the new strategies, which is the issue we are facing at Composable.
A simple solution would be to introduce extra flags for the limits, so that every team can fine-tune them for their own runtime; this would allow us to use the new strategies.
We also noticed that the limit was increased here. That might not be a sustainable solution; it would be better to introduce a flag with a default value instead.
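As a rough illustration of the proposal, the hardcoded values could become fallbacks for optional CLI overrides. This is only a hypothetical sketch: the flag names (`--wasm-instance-size`, `--wasm-table-elements`), struct names, and resolution logic below are illustrative, not actual Substrate parameters.

```rust
// Hypothetical sketch: optional CLI overrides for the pooling instance
// limits, falling back to the currently hardcoded defaults when unset.
// None of these names exist in Substrate today; they only illustrate
// the "flag with a default value" idea.

#[derive(Debug, Clone, Copy, Default)]
struct InstanceLimitOverrides {
    /// Would come from a flag like `--wasm-instance-size` (bytes).
    size: Option<u32>,
    /// Would come from a flag like `--wasm-table-elements`.
    table_elements: Option<u32>,
}

#[derive(Debug, PartialEq)]
struct ResolvedLimits {
    size: u32,
    table_elements: u32,
}

impl InstanceLimitOverrides {
    /// Fall back to the values currently hardcoded in `runtime.rs`.
    fn resolve(self) -> ResolvedLimits {
        ResolvedLimits {
            size: self.size.unwrap_or(64 * 1024),
            table_elements: self.table_elements.unwrap_or(3072),
        }
    }
}

fn main() {
    // No flags passed: identical to today's hardcoded behavior.
    let defaults = InstanceLimitOverrides::default().resolve();
    assert_eq!(defaults, ResolvedLimits { size: 65536, table_elements: 3072 });

    // A parachain with a larger runtime raises only what it needs.
    let bigger = InstanceLimitOverrides { size: Some(128 * 1024), ..Default::default() }.resolve();
    assert_eq!(bigger.size, 131072);
    assert_eq!(bigger.table_elements, 3072);
    println!("resolved limits: {:?}", bigger);
}
```

The resolved values would then be passed into `wasmtime::InstanceLimits` in place of the literals, leaving defaults unchanged for chains that don't need them.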
[image: A screenshot of the error we are facing]

Steps to reproduce
No response