Add handling for more naga capabilities #9000
Closed
trolleyman wants to merge 4 commits into bevyengine:main from
Conversation
trolleyman force-pushed from 5edca59 to 84a2584
trolleyman (Contributor, Author): Resolved changes with #5703
IceSentry approved these changes on Sep 13, 2023
IceSentry (Contributor) left a comment:

Just saw this PR because I hit the MULTISAMPLED_SHADING issue. The prepass example wasn't really meant to show this, but at the same time I'm not sure how to write an example that shows only this feature, so I guess it's fine to have it there.

I haven't tested it yet, but the code looks good to me.
trolleyman (Contributor, Author): @IceSentry Thanks for the response - I've just fixed the merge conflict.
github-merge-queue bot pushed a commit that referenced this pull request on Apr 25, 2024:
https://github.com/bevyengine/bevy/assets/2632925/e046205e-3317-47c3-9959-fc94c529f7e0

# Objective

- Adds per-object motion blur to the core 3d pipeline. This is a common effect used in games and other simulations.
- Partially resolves #4710

## Solution

- This is a post-process effect that uses the depth and motion vector buffers to estimate per-object motion blur. The implementation combines knowledge from multiple papers and articles. The approach itself, and the shader, are quite simple. Most of the effort was in wiring up the bevy rendering plumbing and properly specializing for HDR and MSAA.
- To work with MSAA, the MULTISAMPLED_SHADING wgpu capability is required. I've extracted this code from #9000. This is because the prepass buffers are multisampled and require access with `textureLoad`, as opposed to the widely compatible `textureSample`.
- Added an example to demonstrate the effect of motion blur parameters.

## Future Improvements

- While this approach does have limitations, it's one of the most commonly used, and is much better than camera motion blur, which does not consider object velocity. For example, this implementation allows a dolly to track an object, and that object will remain unblurred while the background is blurred. The biggest issue with this implementation is that blur is constrained to the boundaries of objects, which results in hard edges. There are solutions to this, either by dilating the object or the motion vector buffer, or by taking a different approach such as https://casual-effects.com/research/McGuire2012Blur/index.html
- I'm using a noise PRNG function to jitter samples. This could be replaced with a blue noise texture lookup or similar; however, after playing with the parameters, it gives quite nice results with 4 samples, and is significantly better than the artifacts generated when not jittering.

## Changelog

- Added: per-object motion blur. This can be enabled and configured by adding the `MotionBlurBundle` to a camera entity.

Co-authored-by: Torstein Grindvik <52322338+torsteingrindvik@users.noreply.github.com>
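As the changelog above notes, the effect is enabled by adding `MotionBlurBundle` to a camera entity; a minimal usage sketch follows (the module path and the setup system are assumptions for illustration, not taken from the PR):

```rust
use bevy::core_pipeline::motion_blur::MotionBlurBundle; // assumed module path
use bevy::prelude::*;

// Enable per-object motion blur by attaching the bundle to a camera
// entity, as described in the changelog above.
fn setup(mut commands: Commands) {
    commands.spawn((Camera3dBundle::default(), MotionBlurBundle::default()));
}
```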
Contributor: This will be superseded by gfx-rs/wgpu#5606 once we upgrade to wgpu 0.20 (#13186).
Member: @JMS55 should we pursue this or simply close it out in favor of the linked PR?
Contributor: Close it in favor of the wgpu update PR, but we have to remember to make the change in that PR.
Objective
Following on from #4824, this PR adds handling for the following capabilities:

- `MULTISAMPLED_SHADING`
- `TEXTURE_FORMAT_16BIT_NORM`
- `MULTIVIEW`
- `EARLY_DEPTH_TEST`

Solution
- `RenderAdapter` is passed down, and then downlevel flags are translated into capabilities that are passed into `wgpu`.
- The `shader_prepass` example has been changed to show that multisampled shading now works, by adding a controller for MSAA. The text color changes have also been removed, as otherwise the text couldn't be seen on the motion vectors screen.
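A rough sketch of that translation (a hypothetical helper, not the exact Bevy code; the `wgpu`/`naga` flag names exist as written, but the exact pairing is an assumption based on this PR's description):

```rust
use naga::valid::Capabilities;
use wgpu::{DownlevelFlags, Features};

// Sketch: derive naga validation capabilities from the adapter's features
// and downlevel flags. The mapping shown is an assumption; the real PR
// wires this through the RenderAdapter into the PipelineCache.
fn get_capabilities(features: Features, downlevel: DownlevelFlags) -> Capabilities {
    let mut caps = Capabilities::empty();
    caps.set(
        Capabilities::MULTISAMPLED_SHADING,
        downlevel.contains(DownlevelFlags::MULTISAMPLED_SHADING),
    );
    caps.set(
        Capabilities::STORAGE_TEXTURE_16BIT_NORM_FORMATS,
        features.contains(Features::TEXTURE_FORMAT_16BIT_NORM),
    );
    caps.set(
        Capabilities::MULTIVIEW,
        features.contains(Features::MULTIVIEW),
    );
    caps.set(
        Capabilities::EARLY_DEPTH_TEST,
        features.contains(Features::SHADER_EARLY_DEPTH_TEST),
    );
    caps
}
```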
Changelog

Added support for more `naga` capabilities.

Migration Guide
`PipelineCache::new` now takes a `RenderAdapter`.
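A hypothetical before/after for call sites (variable names are illustrative; the sketch assumes the adapter is available wherever the cache is constructed):

```rust
// Before this PR: the cache was constructed from the render device alone.
let pipeline_cache = PipelineCache::new(render_device);

// After this PR: a RenderAdapter is also passed in, so that naga
// capabilities can be derived from the adapter's flags.
let pipeline_cache = PipelineCache::new(render_device, render_adapter);
```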