
VirtualFunction cache#1090

Closed
oscar-wos wants to merge 2 commits into roflmuffin:main from oscar-wos:vfunc-leak-fix

Conversation

@oscar-wos

Refactor CreateVirtualFunction to include caching and hashing.

AddCommand("css_teleport", "", (player, info) =>
{
    if (player is null)
    {
        return;
    }

    Console.WriteLine($"teleporting {player.PlayerPawn}");
    player.PlayerPawn.Value?.Teleport(new Vector3(0, 0, 0));
});
[screenshot: console output of the css_teleport command]

Refactor CreateVirtualFunction to include caching and hashing.
@oscar-wos oscar-wos requested a review from roflmuffin as a code owner October 25, 2025 22:12
@2vg

2vg commented Oct 27, 2025

This can also be done on the C++ side, so the vf objects no longer leak.
I have this modification in my private fork, and it has been running for over a week on a 64-player ZE server without a single reboot or crash.
That said, it's possible the destructor isn't even necessary.
Cache code example:

    // destructor
    - ValveFunction::~ValveFunction() {}
    + ValveFunction::~ValveFunction() {
    +     if (m_precallback) {
    +         globals::callbackManager.ReleaseCallback(m_precallback);
    +         m_precallback = nullptr;
    +     }
    +     if (m_postcallback) {
    +         globals::callbackManager.ReleaseCallback(m_postcallback);
    +         m_postcallback = nullptr;
    +     }
    + }

    // cache
    auto it = std::find_if(m_managed_ptrs.begin(), m_managed_ptrs.end(), [&](ValveFunction* vf) {
        return vf && vf->m_ulAddr == function_addr &&
               vf->m_eCallingConvention == CONV_THISCALL &&
               vf->m_Args == args &&
               vf->m_eReturnType == return_type &&
               vf->m_offset == vtable_offset;
    });

    if (it != m_managed_ptrs.end())
    {
        return *it;
    }

@roflmuffin
Owner

I think this is a useful feature, but I believe it would be better implemented on the native side rather than on the C# side

@ELDment
Contributor

ELDment commented Nov 16, 2025

> This can also be done on the C++ side, so the vf objects no longer leak. I have this modification in my private fork, and it has been running for over a week on a 64-player ZE server without a single reboot or crash. That said, it's possible the destructor isn't even necessary. Cache code example:

    // destructor
    - ValveFunction::~ValveFunction() {}
    + ValveFunction::~ValveFunction() {
    +     if (m_precallback) {
    +         globals::callbackManager.ReleaseCallback(m_precallback);
    +         m_precallback = nullptr;
    +     }
    +     if (m_postcallback) {
    +         globals::callbackManager.ReleaseCallback(m_postcallback);
    +         m_postcallback = nullptr;
    +     }
    + }

    // cache
    auto it = std::find_if(m_managed_ptrs.begin(), m_managed_ptrs.end(), [&](ValveFunction* vf) {
        return vf && vf->m_ulAddr == function_addr &&
               vf->m_eCallingConvention == CONV_THISCALL &&
               vf->m_Args == args &&
               vf->m_eReturnType == return_type &&
               vf->m_offset == vtable_offset;
    });

    if (it != m_managed_ptrs.end())
    {
        return *it;
    }

Great suggestion!
However, IMO, for performance-critical communities like ZE servers, I believe a custom hash function is more appropriate, since it gives average O(1) lookup time instead of the O(n) linear scan.

@2vg

2vg commented Nov 16, 2025

> however, imo, for performance critical communities like ZE servers, I believe using a custom hash function is more appropriate as it guarantees O(1) lookup time instead of O(n)

Yes, this was implemented as a trial, and there may be better implementations.
The server I run is on a 9950X, and since there are almost no frame spikes for now, I'm still using it as is. :D

@roflmuffin
Owner

This should be fixed by PR #1121 as it's done on the native layer now, but thank you for prompting the discussion

@roflmuffin roflmuffin closed this Nov 17, 2025