Note: the following are some perf observations/ideas as a result of looking at the code - no profiling was done to ensure that this is a perf-critical area in the grand scheme of things. We should profile carefully before doing anything.
Following is a list of the full tree traversals we do for each query we execute (before compilation):
Parameter extraction:
- One pass with EvaluatableExpressionFindingExpressionVisitor to find evaluatable expressions
  - NOTE: This was optimized with the funcletizer rewrite in Rewrite the funcletizer to support precompiled queries #33106
- One pass with ParameterExtractingExpressionVisitor to actually extract them into parameters
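To make the parameter-extraction cost concrete, here is a minimal sketch (illustrative node types only, not EF Core's actual visitors): one full traversal replaces each evaluatable constant with a parameter placeholder, so queries that differ only in constant values end up with the same tree shape and can share a cache entry.

```python
from dataclasses import dataclass

# Toy expression nodes standing in for LINQ expression tree nodes.
@dataclass(frozen=True)
class Const:
    value: object

@dataclass(frozen=True)
class Param:
    name: str

@dataclass(frozen=True)
class Call:
    op: str
    args: tuple

def extract_parameters(node, params):
    """One full traversal: replace Const leaves with Param nodes,
    recording the extracted values in `params`."""
    if isinstance(node, Const):
        name = f"p{len(params)}"
        params[name] = node.value
        return Param(name)
    if isinstance(node, Call):
        return Call(node.op, tuple(extract_parameters(a, params) for a in node.args))
    return node

params = {}
query = Call("Where", (Call("gt", (Call("col_Age", ()), Const(18))),))
shape = extract_parameters(query, params)
# Queries differing only in the constant now produce an identical `shape`.
```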
Query caching:
- One pass to calculate the hash code of the query
- One pass per query in the cache with the same hash code, for Equals
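The two caching traversals above can be sketched as follows (assumed names and toy nodes, not EF Core's internals): one whole-tree walk to compute the structural hash, then one whole-tree walk per same-hash candidate for the equality check.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Const:
    value: object

@dataclass(frozen=True)
class Call:
    op: str
    args: tuple

# Count whole-tree walks; bumped once per traversal, at the root.
traversals = {"count": 0}

def tree_hash(node):
    traversals["count"] += 1
    def h(n):
        if isinstance(n, Const):
            return hash(("const", n.value))
        return hash(("call", n.op, tuple(h(a) for a in n.args)))
    return h(node)

def tree_equals(a, b):
    traversals["count"] += 1
    def eq(x, y):
        if type(x) is not type(y):
            return False
        if isinstance(x, Const):
            return x.value == y.value
        return (x.op == y.op and len(x.args) == len(y.args)
                and all(eq(p, q) for p, q in zip(x.args, y.args)))
    return eq(a, b)

cache = {}  # hash -> list of (tree, compiled plan)

def lookup(tree):
    bucket = cache.setdefault(tree_hash(tree), [])
    for cached, plan in bucket:
        if tree_equals(tree, cached):
            return plan
    return None

# Seed the cache, then measure the cost of a hit.
q = Call("Where", (Call("gt", (Call("col", ()), Const(18))),))
cache[tree_hash(q)] = [(q, "PLAN")]
traversals["count"] = 0
plan = lookup(Call("Where", (Call("gt", (Call("col", ()), Const(18))),)))
```

Even the best case here (a single candidate in the bucket) costs two full traversals.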
So we're traversing the query tree at least four times for a cached query. We may be able to improve this by:
- Using a trie structure instead of the current dictionary (IMemoryCache), similar to trie-based string lookups.
- Merging parameter extraction into the trie traversal: as we traverse the tree to find a cached entry, we perform parameter extraction in the same pass.
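A hypothetical sketch of the merged idea (this is a design exploration, not anything implemented): a trie keyed on the query's structural shape, where every constant position maps to a shared wildcard edge. A single pre-order walk then both locates (or creates) the cache entry and collects the parameter values.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Const:
    value: object

@dataclass(frozen=True)
class Call:
    op: str
    args: tuple

PARAM = object()  # wildcard edge marker for constant positions

class TrieNode:
    def __init__(self):
        self.children = {}
        self.compiled = None  # compiled plan cached at this path

def tokens(node):
    """Pre-order structural tokens; constants become (PARAM, value)."""
    if isinstance(node, Const):
        yield (PARAM, node.value)
    else:
        yield ("call", node.op, len(node.args))
        for a in node.args:
            yield from tokens(a)

def lookup_or_insert(root, tree, compile_fn):
    """Single traversal: walk the trie path AND extract parameters."""
    node, params = root, []
    for tok in tokens(tree):
        if tok[0] is PARAM:
            params.append(tok[1])  # parameter extraction, same pass
            key = PARAM            # all constants share one wildcard edge
        else:
            key = tok
        node = node.children.setdefault(key, TrieNode())
    if node.compiled is None:
        node.compiled = compile_fn(tree)  # cache miss: compile once
    return node.compiled, params

root = TrieNode()
q1 = Call("Where", (Call("gt", (Call("col", ()), Const(18))),))
q2 = Call("Where", (Call("gt", (Call("col", ()), Const(21))),))
plan1, p1 = lookup_or_insert(root, q1, lambda t: ["compiled"])
plan2, p2 = lookup_or_insert(root, q2, lambda t: ["compiled"])
```

Both queries follow the same trie path (the constant differs only along the wildcard edge), so the second lookup reuses the first compiled plan while still yielding its own parameter values — one traversal total instead of hash + Equals + separate extraction.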
We should keep possible optimizations like this in mind when determining what is public and what isn't, to avoid breaking changes.