zephyr/wrapper.c: 5 additions, 6 deletions

@@ -153,12 +153,6 @@ static void *heap_alloc_aligned_cached(struct k_heap *h, size_t min_align, size_
         ptr = heap_alloc_aligned(h, align, aligned_size);
         if (ptr) {
                 ptr = uncache_to_cache(ptr);
-
-                /*
-                 * Heap can be used by different cores, so cache
-                 * needs to be invalidated before next user
-                 */
-                z_xtensa_cache_inv(ptr, aligned_size);
         }
 
         return ptr;
@@ -171,6 +165,11 @@ static void heap_free(struct k_heap *h, void *mem)
 {
         k_spinlock_key_t key = k_spin_lock(&h->lock);
 
+#ifdef ENABLE_CACHED_HEAP
+        z_xtensa_cache_flush_inv(z_soc_cached_ptr(mem),
+                                 sys_heap_usable_size(h, mem));
Review thread on the z_xtensa_cache_flush_inv() call above:

lyakh (Collaborator):
Do I understand correctly that sys_heap_usable_size() returns the size of the allocated memory area available to the user, excluding the preceding header, i.e. the same size that the user requested when allocating the memory? That would mean the cache for the header area isn't invalidated here. So if that area is then reused, it can still be overwritten by a later cache eviction?
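For readers following the thread, here is a toy sketch of the geometry the question is about. The struct and helper names are illustrative stand-ins, not Zephyr's actual sys_heap internals:

    #include <stddef.h>

    /* Toy stand-in for the allocator's per-chunk bookkeeping. */
    struct toy_hdr {
            size_t user_size;
    };

    /* The pointer handed to the user sits immediately after the header. */
    static void *toy_mem(struct toy_hdr *hdr)
    {
            return hdr + 1;
    }

    /*
     * A usable-size query reports only the user-visible area, so a cache
     * operation over [mem, mem + usable_size) stops short of the header
     * bytes that precede mem -- exactly the gap being asked about.
     */
    static size_t toy_usable_size(void *mem)
    {
            struct toy_hdr *hdr = (struct toy_hdr *)mem - 1;

            return hdr->user_size;
    }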

Member:
@lyakh my understanding is that the Zephyr allocator would do the header invalidation on incoherent systems? @andyross?

Contributor (author):
The heap backend only ever uses the uncached mapping, and by construction its metadata won't live in the same cache lines as a cached heap block (allocation sizes are fixed here in this file to an integer number of 64-byte cache lines).

It's true that there's still an opportunity for code somewhere to (1) allocate an uncached heap block (which has a header in the same cache line) and (2) convert it manually to a cached region, polluting cache lines on top of the heap metadata. But we really can't protect against that; it's simply an API violation. Code that uses cached data always needs to keep it in dedicated cache lines, by definition.
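A minimal sketch of the rounding the author is pointing at, assuming the 64-byte line size mentioned above (the macro and function names here are illustrative, not the exact symbols in wrapper.c):

    #include <stddef.h>

    #define TOY_CACHE_LINE 64u /* line size assumed from the comment above */

    /* Round an allocation size up to a whole number of cache lines. */
    static inline size_t toy_cache_aligned_size(size_t bytes)
    {
            return (bytes + TOY_CACHE_LINE - 1) & ~(size_t)(TOY_CACHE_LINE - 1);
    }

With the start address also forced to a line boundary, a cached block covers whole 64-byte lines only, so no line is ever shared between cached user data and the heap metadata, which is touched only through the uncached mapping.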

+#endif
+
 
         sys_heap_free(&h->heap, mem);
 
         k_spin_unlock(&h->lock, key);
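Putting the thread together, a sketch of the resulting free path with the reasoning inlined as comments. This is reconstructed from the diff above and assumes the includes and declarations already present in wrapper.c:

    static void heap_free(struct k_heap *h, void *mem)
    {
            k_spinlock_key_t key = k_spin_lock(&h->lock);

    #ifdef ENABLE_CACHED_HEAP
            /*
             * The block may have been written through its cached alias.
             * Write any dirty lines back and drop them now, sized by the
             * user-visible area only; per the discussion above, the heap's
             * own headers live in separate, uncached-only lines.  Without
             * this, a stale dirty line evicted later could overwrite the
             * memory's next owner.
             */
            z_xtensa_cache_flush_inv(z_soc_cached_ptr(mem),
                                     sys_heap_usable_size(h, mem));
    #endif

            sys_heap_free(&h->heap, mem);

            k_spin_unlock(&h->lock, key);
    }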