
LibraryImport generated stubs should be implicitly RequiresUnsafe #125802

Draft
Copilot wants to merge 4 commits into main from copilot/add-requires-unsafe-attribute

Conversation

Contributor

Copilot AI commented Mar 19, 2026

[LibraryImport] stubs call native code whose signature the compiler cannot validate, making them inherently unsafe. Generated stubs were not annotated with [RequiresUnsafe], causing inconsistency with forwarder stubs (raw [DllImport] extern methods) and making the unsafe nature invisible to tools like ILLink's RequiresUnsafeAnalyzer.

Changes

RequiresUnsafeAttribute made public

  • src/libraries/System.Private.CoreLib/src/System/Diagnostics/CodeAnalysis/RequiresUnsafeAttribute.cs: changed from internal to public so generated user code can reference it
  • src/libraries/System.Runtime/ref/System.Runtime.cs: Added to reference assembly for API compat

Generator infrastructure

  • StubEnvironment: Added RequiresUnsafeAttrType lazy lookup property
  • EnvironmentFlags: Added RequiresUnsafeAvailable = 0x4 flag
  • TypeNames / NameSyntaxes: Added System_Diagnostics_CodeAnalysis_RequiresUnsafeAttribute constant and syntax helper

LibraryImportGenerator stub emission

  • CalculateStubInformation: Sets RequiresUnsafeAvailable flag when attribute type is available in the compilation. Skips if the user already applied [RequiresUnsafe] on their declaration (prevents CS0579 duplicate attribute error).
  • GenerateSource: Injects [RequiresUnsafe] into SignatureContext.AdditionalAttributes for regular (marshalling) stubs.
  • PrintForwarderStub: Explicitly adds [RequiresUnsafe] to forwarder (pure DllImport) stubs.

The flag is the single source of truth for both stub paths. Availability is checked at compile time via Compilation.GetTypeByMetadataName, so older TFMs without the attribute are handled gracefully.
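For illustration, a sketch of the intended output under this change (the native library, entry point, and user declaration are hypothetical; the generated marshalling body is elided):

```csharp
// User-authored declaration.
public static partial class NativeMethods
{
    [LibraryImport("nativelib", StringMarshalling = StringMarshalling.Utf8)]
    public static partial int GetVersion(string component);
}

// Generator output (sketch): the marshalling stub now carries [RequiresUnsafe],
// consistent with what a raw [DllImport] forwarder stub would get, using the
// global::-qualified syntax the tests check for.
public static partial class NativeMethods
{
    [global::System.Diagnostics.CodeAnalysis.RequiresUnsafe]
    public static partial int GetVersion(string component)
    {
        // ... UTF-8 marshalling of 'component', call to the inner
        // [DllImport] extern stub, cleanup ... (elided)
    }
}
```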

Tests

Added RequiresUnsafeAdded and RequiresUnsafeAddedOnForwardingStub to AdditionalAttributesOnStub.cs, verifying the attribute is emitted on both stub kinds with the expected global:: qualified syntax.

Original prompt

This section details the original issue to resolve

<issue_title>LibraryImport methods should be implicitly RequiresUnsafe</issue_title>
<issue_description>Methods attributed with LibraryImportAttribute are implicitly unsafe. Although the marshalling logic could be analyzed to be considered memory-safe, the compiler cannot validate that the underlying extern DllImport is targeting a method with a valid signature.

Today, the stubs generated by LibraryImportGenerator will not generate the RequiresUnsafeAttribute on the produced stubs. This causes inconsistency with stubs that have no marshalling (they are just a raw DllImport, which is extern).

The LibraryImportGenerator should add the RequiresUnsafeAttribute to all generated stubs to provide a consistent experience.</issue_description>

Comments on the Issue (you are @copilot in this section)



@dotnet-policy-service
Contributor

Tagging subscribers to this area: @dotnet/area-meta
See info in area-owners.md if you want to be subscribed.

Copilot AI and others added 2 commits March 19, 2026 22:06
Co-authored-by: jkoritzinsky <1571408+jkoritzinsky@users.noreply.github.com>
…urce

Co-authored-by: jkoritzinsky <1571408+jkoritzinsky@users.noreply.github.com>
@stephentoub
Member

What's the rationale behind every [LibraryImport] requiring unsafe?

…for regular stubs

Co-authored-by: jkoritzinsky <1571408+jkoritzinsky@users.noreply.github.com>
Copilot AI changed the title from "[WIP] Add RequiresUnsafeAttribute to generated LibraryImport methods" to "LibraryImport generated stubs should be implicitly RequiresUnsafe" on Mar 19, 2026
Copilot AI requested a review from jkoritzinsky March 19, 2026 22:10
@jkoritzinsky
Member

LibraryImport methods don't resolve all of the reasons that DllImports are unsafe. Primarily, LibraryImport (like DllImport) cannot guarantee that the provided signature is an accurate match to the target native method.

We could introduce a mechanism to request that a LibraryImport is safe (basically having the user promise that they got it right) but until then LibraryImport doesn't solve all of the reasons that DllImport requires unsafe.

@stephentoub
Member

stephentoub commented Mar 20, 2026

LibraryImport methods don't resolve all of the reasons that DllImports are unsafe.

Of course. But this is swinging the pendulum to the other extreme and saying that because we don't know what's on the other end of the FFI, every single consumer must assume every single LibraryImport is unsafe.

Is this what was agreed on with @agocke et al?

My expectation is this is going to lead to developers wrapping [LibraryImport]s with yet another stub that serves purely to contain the virality of the [RequiresUnsafe], and such wrapping stubs are what the LibraryImport generator exists to simplify. If we're going to say that by default every [LibraryImport] requires unsafe, then we should add a mechanism that's part of [LibraryImport] to override that default, and we should do that as part of this.

@tannergooding
Member

LibraryImport methods don't resolve all of the reasons that DllImports are unsafe

I actually don't think it fundamentally resolves any of the reasons that DllImports are unsafe. At best the generator is introducing some marshalling support (replacing the older built-in marshalling support on DllImport), and so it is not actually introducing any kind of bounds checking, lifetime management, or other guarantees that would be needed to make them "safe".

That is, they remain equivalent to a DllImport/extern call in essentially every consideration. The one notable exception is that if you create a very custom marshaller, you might be able to do API specific extended validation, but that's a lot of additional work for something where the wrapper makes it much more clear.

My expectation is this is going to lead to developers wrapping [LibraryImport]s with yet another stub that serves purely to contain the virality of the [RequiresUnsafe], and such wrapping stubs are what the LibraryImport generator exists to simplify. If we're going to say that by default every [LibraryImport] requires unsafe, then we should add a mechanism that's part of [LibraryImport] to override that default, and we should do that as part of this.

I think this is just the expectation of all unsafe code and is why all extern methods, including those that simply call into the runtime like Math.Sin, are unsafe (and so we must introduce wrappers around these APIs -or- change them to recursive calls as well).

We expect the wrapper to exist to make the assertion that there is no memory unsafe operations occurring. That all the potential unsafety of crossing the managed -> unmanaged boundary is handled. Since a LibraryImport can make no more guarantees of this than any other extern method, we should likely treat them the same as well.

@stephentoub
Member

stephentoub commented Mar 20, 2026

If I were writing the wrapper that LibraryImport is writing, then I could choose to do so in a way that stops the virality, by using unsafe { ... } inside my method.

But I'm not writing the wrapper, the library import generator is. And there's nothing being exposed that lets me do the equivalent of moving the unsafety into the body. Which means I'm then stuck having to write yet another wrapper around its wrapper.

I'm not sold on the notion that because we don't know what the thing being called is doing it's inherently unsafe. But assuming I buy that premise, we should at least make it trivial to annotate the [LibraryImport] in such a way that the virality is contained so I don't have to write yet another wrapper.

@jkotas
Member

jkotas commented Mar 20, 2026

all extern methods, including those that simply call into the runtime like Math.Sin, are unsafe

I do not think that's the plan. FCalls and other similar methods in CoreLib implemented using runtime magic are not going to be implicitly unsafe. We can have analyzers that tell you to mark it as unsafe that you can suppress as needed (or have RequiresUnsafe(false) to explicitly mark the methods as safe).

I think similar strategy can work for DllImport/LibraryImport too.

@tannergooding
Member

I do not think that's the plan. FCalls and other similar methods in CoreLib implemented using runtime magic are not going to be implicitly unsafe

The language has implemented it such that all extern methods are unsafe. You can see some of their tests validating this here: https://github.com/dotnet/roslyn/blob/main/src/Compilers/CSharp/Test/CSharp15/UnsafeEvolutionTests.cs#L8004-L8098

That is, at the IL level functionally any method which is not abstract and has a null method body (i.e. is extern). So FCalls and similar will be unsafe and in many cases fundamentally are.

If we wanted a way to annotate that LibraryImport was safe, you would likely have the same expectation for extern calls and for the same reasons.

If I were writing the wrapper that LibraryImport is writing, then I could choose to do so in a way that stops the virality, by using unsafe { ... } inside my method.

The exact same logic applies to DllImport using built-in marshalling, there is no real difference between the two. One is just taking the modern approach of a source generator so we can more easily version it and fix bugs over time.

@stephentoub
Member

The exact same logic applies to DllImport using built-in marshalling, there is no real difference between the two.

Then we can have the same switch for DllImport as well. I don't know why the same logic applying to DllImport means we can't have nice things.

One is just taking the modern approach

Which is why I'm highlighting it. It's the thing we tell developers to use now, so it's the thing we should focus on ensuring has a good experience.

@jkotas
Member

jkotas commented Mar 20, 2026

The language has implemented it such that all extern methods are unsafe. You can see some of their tests validating this here: https://github.com/dotnet/roslyn/blob/main/src/Compilers/CSharp/Test/CSharp15/UnsafeEvolutionTests.cs#L8004-L8098

@jjonescz I thought that we agreed on that the compiler is just going to look at RequiresUnsafeAttribute only, and it won't try to be smart about extern. Was there a misunderstanding about the plan?

I do not want the runtime to be dealing with the problems created by safe wrappers over low-level extern methods that are required just to make the C# compiler happy.

@tannergooding
Member

I'm not against being able to mark extern or LibraryImport methods as being "safe" in some way; I just don't think it actually buys much of anything, and I have a leaning towards it ultimately being worse.

I think that escaping managed control (which from a user visible perspective is what LibraryImport does) is fundamentally unsafe and that a wrapper (even if it is a wrapper over a wrapper) at least helps give a place for users to document why it is safe without explicit extra validation.

so it's the thing we should focus on ensuring has a good experience.

I'm not convinced there is a "good experience", or at least not an experience that helps lead users to a pit of success here.

In my experience people do not write correct interop bindings regardless of what approach they use. Even tools like cswin32 have regularly gotten things wrong and have to have the various edges called out and fixed. This also applies to the runtime (one of the reasons we eventually created the LibraryImport generator), and other well known tools that are believed to be "robust".

Rather, I find that devs are likely to assume that using string or Span<T> or ref T rather than T* makes their code "safe", when all the wrapper is doing is fixed (T* ptr = &data) { PInvoke(ptr); } or doing other pinning and fixups for cases like struct S { T[] field; }. So I think that allowing them to annotate it as safe is actually a bit of a potential pit and more of one than the annoyance of needing to create a wrapper. I think that annoyance at least encourages minimal thought on why that might be required and reading of docs if they're annoyed enough to log an issue.
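A concrete sketch of the pattern described above (the native function and library name are hypothetical): a Span<byte>-taking wrapper that reads as "safe" C#, yet all it does is pin and forward a raw pointer:

```csharp
internal static unsafe partial class Native
{
    // Callers see a Span-based, seemingly safe API...
    public static void Fill(Span<byte> buffer)
    {
        fixed (byte* p = buffer)
        {
            // ...but the native side can still write past 'buffer.Length'
            // if the binding signature or the native code is wrong.
            NativeFill(p, buffer.Length);
        }
    }

    [DllImport("nativelib")] // hypothetical native library
    private static extern void NativeFill(byte* p, int length);
}
```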

Additionally, due to the way the LibraryImport generator is set up today, there is no actual guarantee a wrapper is created. For example, if the signature was already "blittable" (say [LibraryImport] static partial void Sleep(uint milliseconds)) then it is directly generated as [DllImport] static partial extern void Sleep(uint milliseconds), which, as currently spec'd/designed, is implicitly unsafe. So we'd definitely have to have it working for both cases.

@agocke
Member

agocke commented Mar 20, 2026

Tanner's right on the current status of the language feature: extern methods are effectively [RequiresUnsafe].

I also agree that it seems like LibraryImport doesn't meet the currently described rules for when it's OK to suppress the unsafety -- it hasn't discharged the validation obligation. That's still on the user.

However, I also agree w/ Stephen that it seems like it would be nice to have a simple gesture to say "this is fine". My suggestion in my team meeting today was, "LibraryImport can have a Safe = true property that people can set which will automatically suppress the warnings". This wasn't a very popular position 😆 (too ugly). But it seems like a cheap way to mark things safe would be nice. Open to suggestions.
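The suggestion above, sketched (the `Safe` property is hypothetical and does not exist on `LibraryImportAttribute`):

```csharp
// The user promises the signature is correct; the generator would then
// omit [RequiresUnsafe] and suppress the warnings for callers.
[LibraryImport("nativelib", Safe = true)] // hypothetical property
internal static partial void Sleep(uint milliseconds);
```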

@stephentoub
Member

I think that annoyance at least encourages minimal thought on why that might be required and reading of docs if they're annoyed enough to log an issue.

Developers (or AI agents) will look for the path-of-least-resistance to suppressing the failures. I expect all we'll be doing by forcing them to write such a wrapper is increasing their annoyance and the amount of code they need to maintain.

Additionally, due to the way LibraryImport generator is setup today, there is no actual guarantee a wrapper is created. [...] So we'd definitely have to have it working for both cases

Not necessarily. LibraryImport could start emitting wrappers in such cases; it doesn't mean DllImport would need the same mechanism. I don't really care whether DllImport has the same mechanism or not. What I care about is use of LibraryImport being made even harder than it already is, and forcing developers to write wrapper methods is making it harder than it already is.

@stephentoub stephentoub reopened this Mar 20, 2026
@stephentoub
Member

(The Comment button is way too close to the Close with comment button.)

@stephentoub
Member

stephentoub commented Mar 20, 2026

Open to suggestions.

Since it'll already be everywhere in this system, I like Jan's suggestion of just using what we'll already have: [RequiresUnsafe(false)]. That would not be specific to LibraryImport, but LibraryImport would be a primary beneficiary. Presumably this would require language/compiler updates (in addition to the attribute changing).
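The shape being suggested, sketched (neither `RequiresUnsafe` nor a `bool` constructor for it exists in shipped .NET; as noted, this would need language/compiler work):

```csharp
// Opt-out at the declaration: the author asserts the import is safe for
// callers, containing the virality without an extra wrapper method.
[LibraryImport("nativelib")] // hypothetical native library
[RequiresUnsafe(false)]      // hypothetical opt-out overload
internal static partial void Sleep(uint milliseconds);
```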

(I still don't agree though that [LibraryImport]s should all be implicitly unsafe.)

@jkotas
Member

jkotas commented Mar 20, 2026

it hasn't discharged the validation obligation

We have split the discharging of the obligations into two independent decisions:

  • (1) Whether the method is unsafe for the caller
  • (2) Whether the method implementation in C# calls unsafe code

There is nothing in the C# language that enforces that these two decisions are coherent. It is going to be on the human review aided by analyzers or AI to make sure that it is right.

extern methods do not have a C# implementation, so (2) does not apply to them. Given that (1) and (2) are independent, (1) should not depend on how the method is implemented, and thus it should not be different between methods implemented in C# vs. methods implemented via runtime magic.
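A sketch of the two independent decisions (the attribute is hypothetical, and nothing in the language would force the two to be coherent):

```csharp
[RequiresUnsafe]                 // decision (1): unsafe for the caller
static extern int NativeThing(); // extern: no C# body, so (2) does not apply

static int Wrapper()             // (1): presented as safe to callers
{
    unsafe                       // decision (2): the body calls unsafe code
    {
        return NativeThing();
    }
}
```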

ugly

IMHO, the [RequiresUnsafe] attribute is the main ugly part - it requires a separate line, and it is a lot of letters. Implicit [RequiresUnsafe] just complicates the rules; it is not fixing the ugliness.

@jjonescz
Member

jjonescz commented Mar 20, 2026

I thought that we agreed on that the compiler is just going to look at RequiresUnsafeAttribute only, and it won't try to be smart about extern. Was there a misunderstanding about the plan?

Yes, we discussed this and the conclusion was that we are not going to be smart about extern coming from metadata (because it's impossible to distinguish ref assemblies from extern methods, both of which can have null method bodies). When you enable updated memory safety rules in your compilation, the compiler will still implicitly synthesize a RequiresUnsafe attribute on all your extern methods though. (And then we only look at RequiresUnsafe attributes when checking members from metadata.) See dotnet/csharplang#9883 (comment).

@jkotas
Member

jkotas commented Mar 20, 2026

Seeing the fallout of the design, I think we want to revisit it: make the compiler even less smart and be more like the latest Rust:

  • RequiresUnsafe is always spelled out in the sources and never synthesized by the compiler
  • There is some way to tell analyzers that RequiresUnsafe is missing intentionally and the method is intentionally safe. #pragma would be the default out of the box option, but we may want to go for something less ugly like [RequiresUnsafe(false)] or [DoesNotRequireUnsafe].

@jjonescz
Member

  • RequiresUnsafe is always spelled out in the sources and never synthesized by the compiler

Opened dotnet/csharplang#10051 for discussion.

  • There is some way to tell analyzers that RequiresUnsafe is missing intentionally and the method is intentionally safe.

That sounds like a libraries-only change, right? The compiler wouldn't need that for anything.

@hamarb123
Contributor

hamarb123 commented Mar 20, 2026

Rust and other modern languages choose to define 'unsafe'

FWIW, the latest Rust https://doc.rust-lang.org/reference/items/external-blocks.html :

  • Requires you to write unsafe explicitly on extern blocks, extern is not implicitly unsafe.
  • Has safe qualifier to make the extern safe without wrappers

Seeing the fallout of the design, I think we want to revisit it: make the compiler even less smart and be more like the latest Rust:

From the doc linked (my emphasis):

Prior to the 2024 edition, the unsafe keyword is optional. The safe and unsafe item qualifiers are only allowed if the external block itself is marked as unsafe.
A function declared in an extern block is implicitly unsafe unless the safe function qualifier is present.
Unless a static item declared in an extern block is qualified as safe, it is unsafe to access that item

So if the idea is to make it like rust, the changes recommended in dotnet/csharplang#10051 should have a way to mark as "safe" somehow, and error (or warn) if not marked [RequiresUnsafe], no? And potentially user-defined attributes should be able to opt-in to this also?

TL;DR, shouldn't the PR have some way to specify "safe" is what I'm saying.

@jkotas
Member

jkotas commented Mar 20, 2026

should have a way to mark as "safe" somehow, and error (or warn) if not marked [RequiresUnsafe], no?

This does not need to be part of the language spec. It can be left to the analyzers.

@hamarb123
Contributor

hamarb123 commented Mar 20, 2026

This does not need to be part of the language spec. It can be left to the analyzers.

Wouldn't it be better to be in the language? Analysers are optional to run and have to be opted-in (in some shape or form) last time I checked (maybe I am wrong here though?). Or would this be a special one that runs always regardless of settings when using new unsafe?

Also, imo, having [RequiresUnsafe(false)] is the better solution than #pragma.

@EgorBo
Member

EgorBo commented Mar 20, 2026

Seeing the fallout of the design, I think we want to revisit it: make the compiler even less smart and be more like the latest Rust:

  • RequiresUnsafe is always spelled out in the sources and never synthesized by the compiler
  • There is some way to tell analyzers that RequiresUnsafe is missing intentionally and the method is intentionally safe. #pragma would be the default out of the box option, but we may want to go for something less ugly like [RequiresUnsafe(false)] or [DoesNotRequireUnsafe].

What about p/invoke/LibraryImport public APIs from assemblies compiled before MemorySafetyRulesAttribute? Do we treat them as unsafe when we call them from new code?

@jkotas
Member

jkotas commented Mar 20, 2026

Wouldn't it be better to be in the language?

Maybe? If we were to make the analyzer-like rules part of the language, we would want to make all of them part of the language. For example, methods with unmanaged pointers in the signature are safe by default as far as C# is concerned with the current proposal. We are going to have an analyzer that warns you about marking the method as RequiresUnsafe.

@hamarb123
Contributor

hamarb123 commented Mar 20, 2026

Wouldn't it be better to be in the language?

Maybe? If we were to make the analyzer-like rules part of the language, we would want to make all of them part of the language. For example, method with unmanaged pointers in the signature are safe by default as far as C# is concerned with the current proposal. We are going to have an analyzer that warns you about marking the method as RequiresUnsafe.

I personally would have liked to see it be an error to be unspecified (whether safe or unsafe) if one of the known attributes is applied (which should be done in a user-joinable system imo, e.g., via some other attribute on the attribute type, or similar). But I suppose if just a warning is preferred, then I guess having it as an analyser works as well as anything else.

Also, I personally would think that adding something like [unsafe: DllImport(...)] to require unsafe there would be a good other way to do it, as that way you require unsafe (including in the project settings) to use the unsafe attribute, but then don't need to add something like [RequiresUnsafe(false)] to the new setup. This would obviously require meaningful language changes though (which would be an understandable reason to not want to do it this way). But the benefit would be that I can't work around having to enable AllowUnsafeBlocks by just using p/invokes or unsafe accessors, and just suppressing the warning via a pragma or in NoWarn.
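The idea above, sketched (this attribute-target syntax is not valid C# today and is purely illustrative):

```csharp
// Applying the attribute would itself require an unsafe context, so
// p/invokes couldn't bypass the project-level AllowUnsafeBlocks gate
// via a pragma or NoWarn.
[unsafe: DllImport("kernel32")] // hypothetical syntax
static extern void Sleep(uint milliseconds);
```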

@tannergooding
Member

We are going to have an analyzer that warns you about marking the method as RequiresUnsafe.

I thought this analyzer was about providing a fixer to preserve existing behavior when enabling new rules and not some blanket warning on all code about "you have a pointer in the signature, you should mark it RequiresUnsafe"? It feels counter to have the language say "pointers aren't unsafe anymore" and then for us to immediately turn around and have an analyzer that says "you have a pointer, you should be making this unsafe".

I'm still personally not in favor of requiring users to annotate extern methods themselves to indicate they're unsafe either. It being implicit with an explicit opt-out feels much better (and more closely mirrors the rust changes called out). I strongly believe it effectively being "presumed safe by default" will be a big pit of failure for typical users, knowing how they've handled unsafe code, interop, and other concepts for the last 25 years.

@jkotas
Member

jkotas commented Mar 20, 2026

It feels counter to have the language say "pointers aren't unsafe anymore" and then for us to immediately turn around and have an analyzer that says "you have a pointer, you should be making this unsafe".

It makes the language spec simple. It is the only reason why we have done it this way.

I care about the default end-to-end experience that the analyzers are an integral part of. The default end-to-end experience needs to warn about safe methods with pointers in the signatures (unless they are explicitly marked as safe in some way). Ditto for extern methods.

@hamarb123
Contributor

hamarb123 commented Mar 20, 2026

I care about default end-to-end experience that the analyzers are an integral part of. The default end-to-end experience needs to warn about safe methods with pointers in the signatures (unless they are explicitly marked as safe in some way).

Personally, I would think it makes more sense to do it based on whether the method has an unsafe block in it, as that's the real indicator of whether something's potentially unsafe, not whether it has a pointer in the signature. That to me sounds quite error-prone & will give people a false sense of security to not bother marking things as [RequiresUnsafe] as "the analyser didn't seem to think it might be necessary" (to directly quote what someone will probably say).

Ditto for extern methods.

The difference for extern methods and pointer parameters, is that to do anything unsafe with a pointer parameter, you need to have unsafe block somewhere, whereas you don't with extern if you just ignore/suppress the analyser suggesting to add [RequiresUnsafe] (and thus do not even need to enable AllowUnsafeBlocks).

@tannergooding
Member

tannergooding commented Mar 20, 2026

It feels like we should really have some [DoesNotRequireUnsafe] or [RequiresUnsafe(false)] then.

Arguably the biggest part of the end-to-end experience is how users writing unsafe code today will see the feature and how they have to think about things when enabling it. If we are forcing them to #pragma warning disable ... followed by a later #pragma warning restore ... on everything, this hurts the UX. Likewise, having analyzers that counter the new language semantics is likely to just get them blanket disabled or ignored. -- That is, it feels like we're creating an experience that is more likely to cause existing unsafe users to not enable the feature for their libraries.

It feels like the design with the lesser pit of failure is to say we believe the majority of pointers are still unsafe, and so they should be unsafe by default, with users annotating the few exceptions as safe when they are. The same would then be my expectation for extern methods (the majority are unsafe, a few exceptions may exist, so have users annotate them).

@EgorBo
Member

EgorBo commented Mar 20, 2026

on everything, this hurts the UX.

There is a warning. You either do what it says, or use pragma/msbuild property to silence it -- isn't this the normal UX?

@hamarb123
Contributor

hamarb123 commented Mar 20, 2026

on everything, this hurts the UX.

There is a warning. You either do what it says, or use pragma/msbuild property to silence it -- isn't this the normal UX?

No - normal UX for getting "is it unsafe or not" today is adding unsafe keyword somewhere with an error to force you, not adding a pragma to work around a warning. I can tell you now that if it overwarns a lot and the only solution is a pragma, I will probably disable it project-wide making it pointless (as I do with most things that overwarn a lot), and if it underwarns a lot, people will use/interpret it as proof that "it's not unsafe, see it warns I should mark it as [RequiresUnsafe] on this obviously unsafe thing but not on this other code" and people who are none-the-wiser will probably believe it. Imo anyway.

@EgorBo
Member

EgorBo commented Mar 20, 2026

No - normal UX for getting "is it unsafe or not" today is adding unsafe keyword somewhere, not adding a pragma.

Presumably, you will not be able to disable it if at some point (when the feature is mature enough) the warning is promoted to an error.

@hamarb123
Contributor

hamarb123 commented Mar 20, 2026

No - normal UX for getting "is it unsafe or not" today is adding unsafe keyword somewhere, not adding a pragma.

Presumably, you will not be able to disable it if at some point (when the feature is mature enough) the warning is promoted to an error.

I mean, if I can't disable it via NoWarn, then I also can't disable it via pragma, and it's probably not an analyser. Meaning a different way will have to be added, something like what @tannergooding has suggested.

@tannergooding
Member

tannergooding commented Mar 20, 2026

There is a warning. You either do what it says, or use pragma/msbuild property to silence it -- isn't this the normal UX?

That is the normal UX for warnings yes. I'm talking about there being an issue in creating an experience where the default when enabling the feature is the dev having to suppress tons of warnings, which is counterintuitive.

That is, this feels like something where we're changing semantics of the language and then immediately going "no the new language semantics are probably incorrect". This creates confusion and expects users to annotate with [RequiresUnsafe] anyways -or- make the decision that the new language semantics are actually correct so they need to suppress it. However, suppressing it is not easy, it is one of the worst possible experiences for suppressing it.

Given the most popular binding libraries are fairly large, I expect this is going to be a large number of suppressions. Enough so I imagine many users will consider simply not enabling the feature or will just disable the analyzer instead to help keep their code clean and maintainable.

This simply feels like a pit of failure in the design.

I would think a significantly better, but not too different setup would be:

  1. You enable the feature
  2. You see a bunch of errors (not warnings) stating "these methods are extern and these have pointers in the signature but are not annotated as being safe or unsafe"
  3. You apply the fixer which automatically annotates them preserving the existing UX, making it easy to enable the feature without significant effort or breaking changes
  4. You can then consider the ones that are actually safe and any other APIs that are actually unsafe and update those to be correctly annotated at your leisure

This makes it so that the language and tooling are not in disagreement. It helps push users into a pit of success by matching the likely case as the default. It provides an easy to read, write, search, and understand experience for (ideally minority of) cases that do deviate from the default. It does not provide an environment that encourages mass suppression or avoidance of enabling the feature.
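Step 3 above might produce annotations like the following sketch (the attribute is hypothetical; the declaration is illustrative), preserving today's behavior while leaving a searchable marker for later review:

```csharp
// Auto-added by the fixer when enabling the updated memory safety rules.
[RequiresUnsafe] // TODO: review whether this is actually unsafe for callers
internal static extern unsafe int memcmp(void* a, void* b, nuint n);
```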

@tannergooding
Member

Put another and maybe simpler way. We reject many analyzers due to the risk of their "false positive" ratio being too high.

This feels like we're trying to push a design that is maximizing false-positive warnings for users who have been writing unsafe code for the last 25 years.

@jkotas
Member

jkotas commented Mar 20, 2026

You enable the feature
You see a bunch of errors (not warnings)

I expect that the default flow to enable new unsafe is going to be: enable the feature and run the auto-fixer. The auto-fixer will add annotations as necessary to make your project compile, and adds a bunch of TODOs for you to go through and review manually.

users who have been writing unsafe code for the last 25 years.

The unsafe evolution is breaking by design. It is different from nearly all other features where we try to minimize the breaks. Our instincts are honed around building non-breaking features. We should watch for our instincts failing us here and leading to poor design choices.

@tannergooding
Member

The design we had landed on seemed like a good middle ground, but some of the points around the analyzer behavior and the suggestions for changing what extern means shift that negatively, leaving ambiguities, questions, and new pits of failure, IMO.

and adds a bunch of TODOs for you to go through and review manually.

I expect this will have a lot of negative feedback from the devs we want to start using the feature to annotate their own unsafe code.

I believe it will be completely unactionable in a repo the size of TerraFX.Interop.Windows or ClangSharp, for example. I maintain these libraries by myself and they have a significant number of bindings. TerraFX.Interop.Windows is millions of lines of code with hundreds of thousands of declared symbols, so that many new TODOs will be completely unmanageable and mean that I cannot use the fixer. I will have to enable the feature by hand or via my own tooling instead.

The rules also make it difficult for alternative tooling, like ClangSharpPInvokeGenerator, to do the "right" thing. The guidance isn't clear, and there isn't a trivial way to opt out for the cases that are special -- particularly when we're introducing effectively conflicting opinions between the language and tooling.

The unsafe evolution is breaking by design. It is different from nearly all other features where we try to minimize the breaks.

It's much less breaking than it was originally and is much closer in design to NRT at this point. It is still opt-in initially, and while we may enable it by default in the future, users will still need to be able to opt out.

We are not changing the semantics of the unsafe keyword anymore and are instead requiring an attribute. This leaves the depth at which unsafe is applied (fine- or coarse-grained) as a stylistic choice. Many users will not want the churn and additional nesting (which may also hurt legibility) that comes from being as fine-grained as possible with unsafe { }. Many existing unsafe users have large binding libraries where the only avenue towards enablement is turning the feature on while preserving the existing status quo for their API surface, etc.


I think the feature really comes down to two aspects:

  1. The users (who are ideally the majority) just writing regular C# code can now be aware when they are using something that is going to expose memory unsafety
  2. The users (who are ideally the minority) writing unsafe code can now surface to consumers that the API does not handle the memory unsafety and so should itself be considered unsafe

I then predict that those in camp 1 are going to need to enable the feature and will likely be unaware of what makes things unsafe and possibly even of how to handle that unsafety. In many cases they were likely using those APIs to avoid unsafe { } due to policy or because it was unfamiliar to them. I expect, correspondingly, that they will likely simply wrap the code in unsafe { } and leave it as is. I expect that the majority of fixups or handling, if any, will occur as part of the initial PR, and so they will add // TODO themselves if there is something to handle. So any TODO implicitly inserted by the fixer is just going to be noise that they have to undo.

And then those in camp 2 are going to want to enable the feature and generally have an idea of what needs to be done. They are likely to want to start by keeping things exactly as they are today for their own exposed surface area and to simply add unsafe (likely at the coarsest level because that's how most already use the feature) around the few callsites that weren't already unsafe. They'll then separately go through and annotate the APIs that really should be unsafe and do the ideally minor cleanup.

I expect:

  • we will not be able to have the analyzer offer only the option of applying unsafe at the finest-grained level
    • it likely needs to be an .editorconfig option, one where we recommend fine-grained as a best practice
  • we will not be able to blanket insert // TODO comments alongside every new unsafe { } added
    • it likely needs to be an option if we provide it
  • we will need something better than #pragma warning disable ... for deviating from our defaults
    • [RequiresUnsafe(false)] seems like a good option here
  • we will not be able to have the language semantics and tooling disagree on what is unsafe
    • this just creates confusion for users and how downstream tools should react
  • we will need to err on the default of "this is likely memory unsafe, so it should default to memory unsafe"
    • forcing it to be explicitly annotated seems like the next best option
    • if forced, rather than implicit, a fixer allowing users to blanket apply either the recommended or historical default seems then trivial
  • if we don't actively consider how both camps of user will perceive and interact with the feature it will likely fail
    • we need both camps to participate, it doesn't work if only one of them does, especially when large or popular binding libraries will choose to opt out

@jkotas
Copy link
Member

jkotas commented Mar 20, 2026

I expect this will have a lot of negative feedback from the devs we want to start using the feature to annotate their own unsafe code.

Sure, but I do not see how we get the current unsafe mess sorted out. If folks do not want to do the work to opt into unsafe v2, they can do so - but it is likely going to mean that their package is going to be viewed as less trustworthy over time.

@agocke
Copy link
Member

agocke commented Mar 20, 2026

I think there's some confusion on Rust's model as of 2024, so let me put in some more detail:

This is the current state:

unsafe extern "C" {
    // sqrt (from libm) may be called with any `f64`
    pub safe fn sqrt(x: f64) -> f64;

    // strlen (from libc) requires a valid pointer,
    // so we mark it as being an unsafe fn
    pub unsafe fn strlen(p: *const std::ffi::c_char) -> usize;

    // this function doesn't say safe or unsafe, so it defaults to unsafe
    pub fn free(p: *mut core::ffi::c_void);

    pub safe static IMPORTANT_BYTES: [u8; 256];
}

Every "extern" block, which is where you declare extern stubs, now requires the unsafe keyword. This is "inner" unsafe -- it's a specification that the writer of an extern stub must verify that the stub matches the external implementation.

Then, each of the stubs may have safe, unsafe, or nothing.

safe means that the caller is effectively allowed to use the stub with no preconditions. For example, even if the stub takes a pointer, the pointer wouldn't need to be valid (because the implementation doesn't read it).

The stub may also be unsafe, meaning that the caller has special requirements they need to fulfill, otherwise the implementation may violate memory safety.

If the stub is unspecified, it's assumed that the implementation may have important preconditions, i.e. it's unsafe.

If we were to try to map this to our current model I think it would look something like this:

All extern functions must have unsafe, meaning the method-level attribute that specifies that the implementation uses unsafe code. The unsafe code here is basically the same as in Rust -- it's the language-boundary transition where the calling convention is translated. That's unchecked on both sides of the boundary, so the human is the only verification mechanism.

Extern methods could also be marked as RequiresUnsafe. If we use Jan's suggestion to allow RequiresUnsafe to take a boolean, we can represent 'safe' as well (RequiresUnsafe(false)). So Rust's unsafe stubs would be [RequiresUnsafe], their safe stubs would be [RequiresUnsafe(false)]. And if left unspecified, extern methods in C# would default to [RequiresUnsafe].

We could decide differently on any of these points, of course, but I believe this is the closest mapping between the C# model and the Rust model.
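A sketch of that mapping in C#. The boolean-taking [RequiresUnsafe] is hypothetical (the attribute in this PR takes no arguments), so the attribute definition below illustrates the proposed shape, not an existing API:

```csharp
using System;
using System.Runtime.InteropServices;

// Hypothetical shape of the proposal: the bool defaults to true, so
// [RequiresUnsafe] marks a member unsafe and [RequiresUnsafe(false)] marks it safe.
[AttributeUsage(AttributeTargets.Method)]
sealed class RequiresUnsafeAttribute : Attribute
{
    public RequiresUnsafeAttribute(bool requiresUnsafe = true) => RequiresUnsafe = requiresUnsafe;
    public bool RequiresUnsafe { get; }
}

static class NativeMethods
{
    // Rust's `pub safe fn sqrt`: callable with any double, no preconditions.
    [RequiresUnsafe(false)]
    [DllImport("m")]
    internal static extern double sqrt(double x);

    // Rust's `pub unsafe fn strlen`: the caller must supply a valid pointer.
    [RequiresUnsafe]
    [DllImport("c")]
    internal static unsafe extern nuint strlen(byte* p);

    // Unspecified: under the proposal this would default to [RequiresUnsafe].
    [DllImport("c")]
    internal static unsafe extern void free(void* p);
}
```

Under this mapping, callers of strlen and free would need an unsafe context, while sqrt would be callable from safe code.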

@jkoritzinsky
Copy link
Member

Extern methods could also be marked as RequiresUnsafe. If we use Jan's suggestion to allow RequiresUnsafe to take a boolean, we can represent 'safe' as well (RequiresUnsafe(false)). So Rust's unsafe stubs would be [RequiresUnsafe], their safe stubs would be [RequiresUnsafe(false)]. And if left unspecified, extern methods in C# would default to [RequiresUnsafe].

We could decide differently on any of these points, of course, but I believe this is the closest mapping between the C# model and the Rust model.

I think that this model of RequiresUnsafe/RequiresUnsafe(false) would work great, is clean, and is sufficiently general.

Once we solve how to represent safe/unsafe for extern methods, I think we should have LibraryImport follow the same rules as DllImport (i.e., require the user to specify RequiresUnsafe(false) to make the LibraryImport method safe, otherwise assume that it is unsafe like DllImport) with analyzers/code-fixers to make the experience clean.
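A sketch of what that could look like for LibraryImport, assuming a hypothetical boolean-taking [RequiresUnsafe] attribute and a generator that honors it; none of this is shipped behavior:

```csharp
using System.Runtime.InteropServices;

static partial class Libm
{
    // No annotation: under the proposed rules this would be treated as
    // [RequiresUnsafe] by default, the same as a raw DllImport extern.
    [LibraryImport("m", EntryPoint = "cbrt")]
    internal static partial double Cbrt(double x);

    // Explicit opt-out (hypothetical attribute): the author asserts the
    // marshalled signature has no memory-safety preconditions, so callers
    // would not need an unsafe context.
    [RequiresUnsafe(false)]
    [LibraryImport("m", EntryPoint = "sqrt")]
    internal static partial double Sqrt(double x);
}
```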

@jkotas
Copy link
Member

jkotas commented Mar 20, 2026

Wrt the Rust 2024 model: I believe that the motivation for the change was that the unsafe annotation is in your face for externs. The mechanics are less interesting; they leverage the dual purpose of the unsafe keyword in Rust that we gave up on.

If we make externs RequiresUnsafe by default, we may be copying some of the Rust mechanics, but we are missing the actual point since RequiresUnsafe won't be in your face.

@agocke
Copy link
Member

agocke commented Mar 20, 2026

@jkotas it's not clear to me which part you're commenting on. In the Rust model the new requirement about unsafe on the extern declaration makes it clear that the author of the extern has to get the stubs to match. The [RequiresUnsafe] behavior of extern methods is pre-existing and still hidden: that is, in Rust all externs implicitly have [RequiresUnsafe].

How do you envision the C# model differing? Is it simply the lack of an extern block and therefore the removal of "inner unsafe" behavior?

@tannergooding
Copy link
Member

Sure, but I do not see how we get the current unsafe mess sorted out.

The main issue I see is not accounting enough for the practical experience users in both camps will encounter when turning on this feature.

I think most of this is resolved by addressing 2 main points.

  1. We should have the tooling and language agree about what should be annotated as [RequiresUnsafe]. That is, if we are going to say that pointers existing in fields/signatures is safe at the language level, then we should not have an in-box, on-by-default analyzer coming along behind and saying "this has a pointer; you should annotate it as [RequiresUnsafe]."

  2. We should be in agreement about what is unsafe or is likely to be unsafe and try to lead users into a pit of success around this. I believe this means that either extern must be unsafe by default with some kind of explicit opt-out, or we must require all extern methods to be explicitly annotated (either as safe or unsafe) and error if they are not.


The thing that seems simplest to me for addressing these issues, without overly complicating the analyzer/fixer and user experience, is that we provide both [RequiresUnsafe] and [RequiresUnsafe(false)]. When the feature is turned on, anything that is extern, or that can carry this attribute and involves a pointer, must be explicitly annotated with it.

We then provide three fixers.

The first fixer is about annotating members with the attribute. It would provide the three options below. This then doesn't result in unnecessary warnings in user code, ensures everything that is likely unsafe is definitively attributed, and lets users pick what is most appropriate for their codebase. We could optionally separate this between extern and non-extern methods if desired to give users more control, but I expect most users will simply choose to annotate with the historical behavior by default and then modify the few deviations piecemeal -- likely via a quick follow-up PR handling the minor callouts from the PR enabling the feature.

  • Add attribute indicating unsafe
    • marks the member with [RequiresUnsafe]
  • Add attribute indicating safe
    • marks the member with [RequiresUnsafe(false)]
  • Add attribute preserving historical behavior
    • marks members involving pointers as [RequiresUnsafe] and others as [RequiresUnsafe(false)]
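As an illustration, the "historical behavior" option might produce output like the sketch below. The attribute and the fixer's decision rule (pointers anywhere in the signature) are hypothetical, not shipped behavior:

```csharp
using System;

// After running the "preserve historical behavior" fixer: members whose
// signatures involve pointers get [RequiresUnsafe]; the rest get
// [RequiresUnsafe(false)]. (Hypothetical attribute and fixer output.)
static unsafe class Bindings
{
    [RequiresUnsafe]        // added: pointer in the signature
    public static int Length(byte* p) => (int)Strlen(p);

    [RequiresUnsafe(false)] // added: no pointers involved
    public static double Root(double x) => Math.Sqrt(x);

    [RequiresUnsafe]        // added: pointer in the signature
    static nuint Strlen(byte* p)
    {
        nuint n = 0;
        while (p[n] != 0) n++;
        return n;
    }
}
```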

The second fixer is then about adding unsafe { } contexts to all the existing code that now requires it due to the new attributes. It's fairly straightforward but I think users will have opinions on whether they want // TODO: comments.

The final fixer is about helping users move towards the recommended/best practices around unsafe and likely needs to be controlled via some csharp_style_unsafe_scope option in the .editorconfig. Some users will want it at the type level, some at the method level, and some at the smallest possible scope. This allows users who want coarser annotations to move unsafe up, for users who want finer grained annotations to move unsafe down, and for all users to remove unnecessary unsafe blocks.


If folks do not want to do the work to opt into unsafe v2, they can do so - but it is likely going to mean that their package is going to be viewed as less trustworthy over time.

I'm actually doubtful of this, for the same reason that not opting into features like NRT hasn't caused packages to be viewed this way. That is to say, there are several large/popular packages that still haven't enabled NRT, and most users simply do not care or are not significantly impacted.

Libraries that haven't opted in will continue having their historical experience, which means that consumers of these libraries won't see any new negatives, and so they'll have no real reason to push for these libraries to enable the feature. They may not even realize that a package hasn't opted in. So while we'll opt the core libraries in and many packages will also opt in, I fully expect that several large libraries/packages simply won't if the UX and cost-to-benefit ratio isn't there to justify it.

The historical experience is also largely correct or "good enough" for most code. Unsafe code is safe when used safely, after all, and many bugs are quickly found. The point of this feature is to help highlight unsafety users might not be aware of, which can help expose latent bugs or other issues -- much as NRT is about helping surface latent issues where users are incorrectly passing in null or not handling a potential null return.



Development

Successfully merging this pull request may close these issues.

LibraryImport methods should be implicitly RequiresUnsafe

9 participants