Conversation

@cshung cshung (Contributor) commented Aug 26, 2022

For regions, it is possible for special_sweep_p to override the result of is_compaction_mandatory when all heaps join to decide whether to sweep or to compact.

This causes a problem for LOH compaction: at that point we expect the LOH either to have been swept already or to be compacted in compact_phase, but neither happens if loh_compacted_p is true and is then overridden by special_sweep_p.

This fix makes sure the LOH is swept when that happens, which is important for resetting the LOH mark bits.

As a side effect, this walks the LOH survivors after the POH survivors - not sure if the profilers are happy with that.
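A minimal sketch of the decision flow being fixed (the struct and helper names here are hypothetical stand-ins, not the real gc_heap members): plan_loh() may have set loh_compacted_p, but if special_sweep_p later forces a sweep, the LOH must still actually be swept so its mark bits get reset.

```cpp
#include <cassert>

// Simplified model of the sweep/compact decision for the LOH.
// Not the real gc_heap code; field names mirror the PR discussion only.
struct HeapState {
    bool loh_compacted_p = false;  // set when plan_loh() decided to compact LOH
    bool special_sweep_p = false;  // forces a sweep even if compaction was planned
    bool loh_swept       = false;
    bool loh_compacted   = false;
};

// Hypothetical helper: with the fix, an overridden LOH compaction still
// results in a sweep, so neither path is skipped.
void finish_loh(HeapState& h) {
    bool should_compact = h.loh_compacted_p && !h.special_sweep_p;
    if (should_compact) {
        h.loh_compacted = true;   // compact_phase handles the LOH
    } else {
        h.loh_swept = true;       // sweep resets the LOH mark bits
    }
}
```

Before the fix, the overridden case (loh_compacted_p true, special_sweep_p true) would take neither branch, leaving stale mark bits.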

@cshung cshung self-assigned this Aug 26, 2022
@ghost ghost commented Aug 26, 2022

Tagging subscribers to this area: @dotnet/gc
See info in area-owners.md if you want to be subscribed.

Issue Details

Author: cshung
Assignees: cshung
Labels: area-GC-coreclr
Milestone: -

@Maoni0 Maoni0 (Member) commented Sep 7, 2022

we shouldn't be reverting loh_compacted_p here because we would have already done the work of planning the LOH earlier, which would then be pointless.

I think we should move the code that does plan_loh to after we've completely made the decision whether we will sweep or compact, and only check settings.loh_compaction if special_sweep_p is not on.

    if (condemned_gen_number == max_generation)
    {
#ifdef FEATURE_LOH_COMPACTION
        if (settings.loh_compaction)
        {
            if (plan_loh())
            {
                should_compact = TRUE;
                get_gc_data_per_heap()->set_mechanism (gc_heap_compact, compact_loh_forced);
                loh_compacted_p = TRUE;
            }
        }
        else
        {
            if ((heap_number == 0) && (loh_pinned_queue))
            {
                loh_pinned_queue_decay--;

                if (!loh_pinned_queue_decay)
                {
                    delete loh_pinned_queue;
                    loh_pinned_queue = 0;
                }
            }
        }

        if (!loh_compacted_p)
#endif //FEATURE_LOH_COMPACTION
        {
            GCToEEInterface::DiagWalkUOHSurvivors(__this, loh_generation);
            sweep_uoh_objects (loh_generation);
        }

        GCToEEInterface::DiagWalkUOHSurvivors(__this, poh_generation);
        sweep_uoh_objects (poh_generation);
    }
    else
    {
        settings.loh_compaction = FALSE;
    }

I noticed this suspicious line -

if ((heap_number == 0) && (loh_pinned_queue))

this should be done for all heaps, not just heap 0. I think when I wrote this code I meant for just the decay to be reduced on heap 0 but used by all heaps, but that's just not worth it. we might as well make loh_pinned_queue_decay a per-heap member and do this for all heaps (they should all have the same value anyway).
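The suggested change above can be sketched as follows (names are hypothetical simplifications, not the real gc_heap fields): loh_pinned_queue_decay becomes a per-heap member, and the decay/delete logic runs on every heap instead of only when heap_number == 0.

```cpp
#include <cassert>
#include <vector>

// Simplified per-heap model; loh_pinned_queue is a bool stand-in for the
// real queue pointer, and deleting it is modeled as setting it to false.
struct Heap {
    int  loh_pinned_queue_decay = 10;   // same starting value on every heap
    bool loh_pinned_queue       = true;
};

void decay_pinned_queues(std::vector<Heap>& heaps) {
    for (Heap& h : heaps) {             // all heaps, not just heap 0
        if (h.loh_pinned_queue) {
            if (--h.loh_pinned_queue_decay == 0) {
                h.loh_pinned_queue = false;  // delete loh_pinned_queue
            }
        }
    }
}
```

Since every heap starts from the same decay value and decrements in lockstep, the queues on all heaps are deleted in the same GC, matching the note that they should all have the same value anyway.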

Maoni0 approved these changes Sep 7, 2022

accidentally approved it... was meant for another PR of yours

@ghost ghost added the needs-author-action An issue or pull request that requires more info or actions from the author. label Sep 7, 2022
@cshung cshung force-pushed the public/special-sweep-loh branch from 7639dbd to 137c82c on September 21, 2022 23:33
@ghost ghost removed the needs-author-action An issue or pull request that requires more info or actions from the author. label Sep 21, 2022
@cshung cshung force-pushed the public/special-sweep-loh branch from 137c82c to 173a63d on September 22, 2022 02:42
@cshung cshung added the NO-MERGE The PR is not ready for merge yet (see discussion for detailed reasons) label Sep 22, 2022
@cshung cshung force-pushed the public/special-sweep-loh branch from 173a63d to 5427f49 on September 27, 2022 18:13
@cshung cshung (Contributor, Author) commented Sep 27, 2022

Moved the deletion of the pinned queue later as well

@cshung cshung force-pushed the public/special-sweep-loh branch from 5427f49 to 21c9835 on September 28, 2022 22:30
@cshung cshung mentioned this pull request Sep 30, 2022
@cshung cshung closed this Oct 7, 2022
@cshung cshung deleted the public/special-sweep-loh branch October 7, 2022 00:51
@ghost ghost locked as resolved and limited conversation to collaborators Nov 6, 2022