This repository was archived by the owner on Nov 1, 2020. It is now read-only.

Port fixes to ConditionalWeakTable #2322

Merged: jkotas merged 1 commit into dotnet:master from stephentoub:port8507 on Dec 10, 2016

Conversation

@stephentoub (Member)

CoreRT's ConditionalWeakTable implementation was ported to CoreCLR in dotnet/coreclr#8507. In the process, several fixes were made; this ports those fixes back.

Two issues addressed:

  • Every time the table resizes, it ends up allocating a new dependency handle for every handle in the table. That's not cheap. Instead, we now transfer still-valid handles from the old container to the new container, giving the new container the responsibility of freeing them when it's finalized; the old container keeps the new container alive via a reference for as long as the old container itself is alive.
  • There's a race condition present whereby if a ConditionalWeakTable is resurrected, it's possible for a thread to try accessing a dependency handle while or after it's been freed. This is addressed by not freeing during the first finalization, instead using that pass to clear out the table's reference to the container and then re-registering for finalization: the next time it's finalized, we can be sure it won't get resurrected and there won't be a race condition.
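The resize optimization in the first bullet can be sketched as a simplified Python model. Everything here (`Container`, `DependencyHandle`, the field names) is an illustrative stand-in, not the real CoreCLR types:

```python
class DependencyHandle:
    """Stand-in for a GC dependency handle; records whether it was freed."""
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.freed = False

    def free(self):
        self.freed = True


class Container:
    """Simplified model of the table's internal container."""
    def __init__(self, handles=()):
        self.handles = list(handles)
        self._keep_alive = None  # reference that pins the new container

    def resize(self):
        # Transfer still-valid handles instead of allocating new ones.
        live = [h for h in self.handles if not h.freed]
        new = Container(live)  # the new container now owns the handles...
        self.handles = []      # ...so this container must not free them
        # Pin the new owner for as long as this (old) container is alive:
        # a lock-free reader still holding the old container can then
        # never observe a freed handle.
        self._keep_alive = new
        return new

    def finalize(self):
        # Frees only the handles this container still owns.
        for h in self.handles:
            h.free()
```

In this model, finalizing the old container after a resize frees nothing, because ownership of the live handles moved to the new container.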

cc: @jkotas, @mjp41
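The second fix (the resurrection race) can likewise be modeled in Python. Again the names are hypothetical; the real code re-registers with the GC via GC.ReRegisterForFinalize, modeled here by appending to an explicit finalization queue:

```python
class Parent:
    """Stand-in for the ConditionalWeakTable that owns a container."""
    def __init__(self):
        self._container = None


class Container:
    """Models the two-pass finalization that avoids the resurrection race."""
    def __init__(self, parent):
        self._parent = parent
        self._finalized_once = False
        self.handles_freed = False

    def finalize(self, finalization_queue):
        if not self._finalized_once:
            # First pass: if the table was resurrected, a reader may still
            # be using a handle, so free nothing yet.  Detach the parent's
            # reference (only if it still points at us) and re-register.
            self._finalized_once = True
            if self._parent._container is self:
                self._parent._container = None
            finalization_queue.append(self)  # models GC.ReRegisterForFinalize
        else:
            # Second pass: no one can reach this container anymore, so
            # freeing the handles cannot race with a reader.
            self.handles_freed = True
```

Only after the second pass, when the table can no longer hand the container out to readers, are the handles actually freed.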

```csharp
internal void Clear()
{
    // Remove all handles (as in Remove, just by setting the hashcode and leaving
    // actual removal up to the finalizer).
    for (int entriesIndex = 0; entriesIndex < _firstFreeEntry; entriesIndex++)
    {
        _entries[entriesIndex].HashCode = -1;  // loop reconstructed for context; member names approximate
    }
}
```
Member

The handles will still be active, keeping the dependent objects alive, until the next rehash happens. That may take a long time, or potentially forever. We should also set the primary to null or something. Was the same problem present in the original CoreCLR implementation?

Member Author

> Was the same problem in the original CoreCLR implementation?

The original CoreCLR implementation doesn't have to worry about lock-free reads, so it frees all of the handles synchronously in Clear.

> We should also set the primary to null or something

Can we do that while a reader may be using the dependency handle? i.e. does the handle itself have appropriate synchronization to support reading-while-mutating?

I'd based this Clear implementation on the existing Remove implementation, which also just sets the hashcode to -1. Presumably whatever we do in Clear should also be done in Remove.
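The tombstoning pattern referred to here can be sketched as a conceptual Python model (hypothetical names, not the actual CoreCLR source): removal writes a single sentinel hashcode, so lock-free readers see either a live entry or a tombstone, and the handle itself is reclaimed later by the finalizer.

```python
FREE = -1  # tombstone hashcode: readers skip entries marked this way


class Handle:
    """Stand-in for a dependency handle holding the key/value pair."""
    def __init__(self, key, value):
        self.key, self.value = key, value


class Entry:
    def __init__(self, hash_code, handle):
        self.hash_code = hash_code
        self.handle = handle


def remove(entries, hash_code, key):
    """Tombstone the matching entry; defer freeing the handle itself.

    Only one integer field is written, so a concurrent lock-free reader
    observes either a live entry or a tombstone, never a half-freed handle.
    """
    for entry in entries:
        if entry.hash_code == hash_code and entry.handle.key == key:
            entry.hash_code = FREE
            return True
    return False
```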

Member Author

(As an aside, I'm not really sure how to test Clear, other than via reflection; it's not exposed externally and it appears to only be used internally via some WinRT functionality in CoreLib.)

@stephentoub (Member Author), Dec 9, 2016

Actually, I can undo this Clear change, here and in coreclr. I changed it when the finalizer was nulling out the parent's container, since this dropped container could then mess with its replacement; but I've since changed the finalizer to only null it out when it matches `this`, so it should be fine again.

Member

> does the handle itself have appropriate synchronization to support reading-while-mutating?

I think so.

> Presumably whatever we do in Clear should also be done in Remove.

Yes.

> it appears to only be used internally via some WinRT functionality in CoreLib

For Clear, it may actually be best to keep ditching the whole container as before. It should be better to get the handles freed sooner rather than later.

@stephentoub (Member Author), Dec 10, 2016

Looks like our comments crossed but we landed in the same place.

I submitted a PR to CoreCLR to just replace the Container, I updated this PR to do the same, and I opened an issue in CoreCLR to look at improving Remove as well. I also added a reflection-based test for Clear to CoreFx; it passes before and after the change.
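The container-replacement approach the PR settled on can be sketched as a conceptual Python model (hypothetical names, not the actual implementation): Clear swaps in a fresh container, and the dropped container's finalizer frees all of its handles once it becomes unreachable, instead of leaving tombstoned entries alive until a rehash.

```python
class Container:
    """Minimal stand-in for the table's internal handle container."""
    def __init__(self):
        self.handles_freed = False

    def finalize(self):
        # Runs once the old container is unreachable: frees every handle.
        self.handles_freed = True


class TableModel:
    """Models Clear-by-replacement on a ConditionalWeakTable."""
    def __init__(self):
        self._container = Container()

    def clear(self):
        # Swap in a fresh, empty container rather than tombstoning every
        # entry; the dropped container frees its handles promptly when
        # finalized instead of keeping dependents alive indefinitely.
        old = self._container
        self._container = Container()
        return old  # returned only so the model is observable in tests
```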

@jkotas merged commit 5b3d372 into dotnet:master on Dec 10, 2016
@jkotas (Member) commented Dec 10, 2016

Thanks!

@stephentoub stephentoub deleted the port8507 branch December 10, 2016 04:12