
Add an optional pass that generates a hierarchical Z-buffer, in preparation for GPU occlusion culling. #12899

Closed
pcwalton wants to merge 3 commits into bevyengine:main from pcwalton:hi-z

Conversation

@pcwalton
Contributor

@pcwalton pcwalton commented Apr 7, 2024

Add an optional component that generates hierarchical Z-buffers, in preparation for GPU occlusion culling.

Two-phase occlusion culling [1] is generally considered the state-of-the-art occlusion culling technique. We already use it for meshlets, but not for other 3D objects. Two-phase occlusion culling requires the construction of a *hierarchical Z-buffer*. This patch implements an opt-in set of passes to generate that buffer, and so is a step along the way to implementing two-phase occlusion culling, alongside GPU frustum culling (#12889).

This commit copies the hierarchical Z-buffer building code from meshlets into bevy_core_pipeline. Adding the new HierarchicalDepthBuffer component to a camera enables the feature. This code should be usable as-is for third-party plugins that might want to implement two-phase occlusion culling, but of course we would like to have two-phase occlusion culling implemented directly in Bevy in the near future. Two-phase occlusion culling will be implemented using the following procedure:

  1. Render all meshes that would have been visible in the previous frame to the depth buffer (with no fragment shader), using the previous frame's hierarchical Z-buffer, the previous frame's view matrix (cf. `previous_view_uniforms.inverse_view`, #12902), and each model's previous view input uniform.

  2. Downsample the Z-buffer to produce a hierarchical Z-buffer ("early", in the language of this patch).

  3. Perform occlusion culling of all meshes against the Hi-Z buffer, using a screen space AABB test.

  4. If a prepass is in use, render it now, using the occlusion culling results from (3). Note that if only a depth prepass is in use, then we can avoid rendering meshes that we rendered in phase (1), since they're already in the depth buffer.

  5. Render main passes, using the occlusion culling results from (3).

  6. Downsample the Z-buffer to produce a hierarchical Z-buffer again ("late", in the language of this patch). This readies the Z-buffer for step (1) of the next frame. It differs from the hierarchical Z-buffer produced in (2) because it includes meshes that weren't visible last frame, but became visible this frame.
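The downsampling in steps (2) and (6), and the screen-space AABB test in step (3), can be sketched on the CPU. This is an illustrative sketch only: it assumes a reversed-Z convention (larger depth values are nearer) and square power-of-two buffers, and every name in it is invented rather than taken from the patch:

```python
def build_hi_z(depth):
    """Build a hierarchical Z-buffer: mip 0 is the full-resolution depth
    buffer; each following mip halves the resolution, keeping the most
    distant (smallest, under reversed-Z) depth of every 2x2 block."""
    mips = [depth]
    while len(mips[-1]) > 1:
        prev = mips[-1]
        n = len(prev) // 2
        mips.append([
            [min(prev[2 * y][2 * x], prev[2 * y][2 * x + 1],
                 prev[2 * y + 1][2 * x], prev[2 * y + 1][2 * x + 1])
             for x in range(n)]
            for y in range(n)])
    return mips

def is_occluded(mips, x0, y0, x1, y1, nearest_depth):
    """Screen-space AABB test: pick the mip level where the AABB covers
    only a few texels, then compare the object's nearest depth against
    the most distant depth stored in that footprint. The object is
    occluded if even its nearest point lies behind everything drawn."""
    size = max(x1 - x0, y1 - y0, 1)
    level = min(size.bit_length() - 1, len(mips) - 1)
    lx0, ly0 = x0 >> level, y0 >> level
    lx1, ly1 = x1 >> level, y1 >> level
    farthest = min(mips[level][y][x]
                   for y in range(ly0, ly1 + 1)
                   for x in range(lx0, lx1 + 1))
    return nearest_depth < farthest  # reversed-Z: smaller = farther away
```

The key design point is that each hi-Z texel stores the *farthest* depth of the pixels it covers, so a positive occlusion answer is always safe: a false "visible" only costs performance, never correctness.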

This commit adds steps (1), (2), and (6) to the pipeline, when the HierarchicalDepthBuffer component is present. It doesn't add step (3), because step (3) depends on #12889 which in turn depends on #12773, and both of those patches are still in review.

Unlike meshlets, we have to handle the case in which the depth buffer is multisampled. This is the source of most of the extra complexity, since we can't use the Vulkan extension [2] that would allow us to easily resolve multisampled depth buffers using the min operation.
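Without that extension, the conservative resolve amounts to a per-pixel min over the samples. A minimal CPU sketch, again assuming reversed-Z and with an invented function name:

```python
def resolve_depth_min(samples_per_pixel):
    """Conservatively resolve a multisampled depth buffer by taking the
    minimum sample per pixel. Under reversed-Z (larger = nearer), min()
    keeps the farthest sample, so a hi-Z buffer built from the resolved
    buffer never reports a surface as nearer than it really is, and
    occlusion culling stays conservative."""
    return [[min(samples) for samples in row] for row in samples_per_pixel]
```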

I'm not filling in the changelog just yet because the forthcoming patch will probably change the HierarchicalDepthBuffer component to OcclusionCulling.

@NthTensor NthTensor added the C-Feature (A new feature, making something new possible) and A-Rendering (Drawing game state to the screen) labels Apr 7, 2024
@pcwalton pcwalton marked this pull request as draft April 7, 2024 07:37
@pcwalton pcwalton force-pushed the hi-z branch 2 times, most recently from fb0c8bd to 91f9397 on April 7, 2024 21:16
At Jasmine's request, I haven't touched the meshlet code except to do
some very minor refactoring; the code is generally copied in.

[1]: https://medium.com/@mil_kru/two-pass-occlusion-culling-4100edcad501

[2]: https://registry.khronos.org/vulkan/specs/1.3-extensions/man/html/VkSubpassDescriptionDepthStencilResolveKHR.html
@pcwalton pcwalton marked this pull request as ready for review April 7, 2024 21:24
@pcwalton pcwalton requested a review from JMS55 April 7, 2024 21:24
@pcwalton
Contributor Author

This is broken post-merge, so I'm marking it as a draft.

@pcwalton pcwalton marked this pull request as draft April 10, 2024 07:45
@pcwalton
Contributor Author

@JMS55 tells me that the Hi-Z code that this is largely copied from has bugs, so this is staying a draft pending fixing those.

pcwalton added a commit to pcwalton/bevy that referenced this pull request Apr 14, 2024
renderer.

This commit implements *screen-space reflections* (SSR), which
approximate real-time reflections based on raymarching through the depth
buffer and copying samples from the final rendered frame. Numerous
variations and refinements to screen-space reflections exist in the
literature. This patch foregoes all of them in favor of implementing the
bare minimum, so as to provide a flexible base on which to customize and
build in the future.

For a basic overview of screen-space reflections, see [1]. The
raymarching shader uses the basic algorithm of tracing forward in large
steps (what I call a *major* trace), and then refining that trace in
smaller increments via binary search (what I call a *minor* trace). No
filtering, whether temporal or spatial, is performed at all; for this
reason, SSR currently only operates on very shiny surfaces. No
acceleration via the hierarchical Z-buffer is implemented (though note
that bevyengine#12899 will add the infrastructure for this). Reflections are
traced at full resolution, which is often considered slow. All of these
improvements and more can be follow-ups.
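The major/minor trace described above can be sketched in isolation against a 1D heightfield. This is an illustrative CPU analogue with invented names, not the actual shader; in the real shader the "heightfield" is the depth buffer sampled along the reflected ray's screen-space projection:

```python
def raymarch(height_at, origin, direction, max_t,
             major_step=8.0, minor_steps=8):
    """Trace a ray from origin=(x, y) along direction=(dx, dy): advance in
    large 'major' steps until the ray first dips below the heightfield,
    then binary-search ('minor' trace) inside the last major interval to
    pin down the intersection parameter t. Returns None on a miss."""
    ox, oy = origin
    dx, dy = direction
    below = lambda t: oy + dy * t <= height_at(ox + dx * t)
    if below(0.0):
        return 0.0
    t = 0.0
    while t < max_t:
        t_next = min(t + major_step, max_t)
        if below(t_next):
            lo, hi = t, t_next        # surface crossing lies in [lo, hi]
            for _ in range(minor_steps):
                mid = (lo + hi) / 2.0
                if below(mid):
                    hi = mid
                else:
                    lo = mid
            return hi
        t = t_next
    return None  # ray left the traced range without hitting anything
```

The bisection count trades accuracy for cost: each extra minor step halves the remaining uncertainty inside one major interval.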

SSR is built on top of the deferred renderer and is currently only
supported in that mode. Forward screen-space reflections are possible
albeit uncommon (though e.g. *Doom Eternal* uses them); however, they
require tracing from the previous frame, which would add complexity.
This patch leaves the door open to implementing SSR in the forward
rendering path but doesn't itself have such an implementation.
Screen-space reflections *are* supported in WebGL 2.

To add screen-space reflections to a camera, use the
`ScreenSpaceReflections` component. `DepthPrepass` and `DeferredPrepass`
must also be present for the reflections to show up. The
`ScreenSpaceReflections` component contains several settings that
artists can tweak, and also comes with sensible defaults.

A new example, `ssr`, has been added. It's loosely based on the
[three.js ocean sample], but all the assets are original. Note that the
three.js demo has no screen-space reflections and instead renders a
mirror world.

Additionally, this patch fixes a random bug I ran across: that the
`"TONEMAP_METHOD_ACES_FITTED"` `#define` is incorrectly supplied to the
shader as `"TONEMAP_METHOD_ACES_FITTED "` (with an extra space) in some
paths.

[1]: https://lettier.github.io/3d-game-shaders-for-beginners/screen-space-reflection.html

[three.js ocean sample]: https://threejs.org/examples/webgl_shaders_ocean.html
@pcwalton pcwalton closed this May 3, 2024
github-merge-queue bot pushed a commit that referenced this pull request May 27, 2024
…erer, with improved raymarching code. (#13418)

This commit, a revamp of #12959, implements screen-space reflections
(SSR), which approximate real-time reflections based on raymarching
through the depth buffer and copying samples from the final rendered
frame. This patch is a relatively minimal implementation of SSR, so as
to provide a flexible base on which to customize and build in the
future. However, it's based on the production-quality [raymarching code
by Tomasz
Stachowiak](https://gist.github.com/h3r2tic/9c8356bdaefbe80b1a22ae0aaee192db).

For a basic overview of screen-space reflections, see
[1](https://lettier.github.io/3d-game-shaders-for-beginners/screen-space-reflection.html).
The raymarching shader uses the basic algorithm of tracing forward in
large steps, refining that trace in smaller increments via binary
search, and then using the secant method. No temporal filtering or
roughness blurring is performed at all; for this reason, SSR currently
only operates on very shiny surfaces. No acceleration via the
hierarchical Z-buffer is implemented (though note that
#12899 will add the
infrastructure for this). Reflections are traced at full resolution,
which is often considered slow. All of these improvements and more can
be follow-ups.
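The secant step mentioned above can be sketched as follows; here `f` is assumed to be the signed ray-to-surface difference along the trace (an illustrative assumption, not the shader's code):

```python
def secant_refine(f, t0, t1, iterations=2):
    """Final refinement after the coarse trace has bracketed a sign change
    of f in [t0, t1]: the secant method linearly interpolates toward the
    root, converging faster than further bisection once the bracket is
    small and f is locally near-linear."""
    f0, f1 = f(t0), f(t1)
    for _ in range(iterations):
        if f1 == f0:
            break  # degenerate: no slope to interpolate along
        t2 = t1 - f1 * (t1 - t0) / (f1 - f0)
        t0, f0, t1, f1 = t1, f1, t2, f(t2)
    return t1
```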

SSR is built on top of the deferred renderer and is currently only
supported in that mode. Forward screen-space reflections are possible
albeit uncommon (though e.g. *Doom Eternal* uses them); however, they
require tracing from the previous frame, which would add complexity.
This patch leaves the door open to implementing SSR in the forward
rendering path but doesn't itself have such an implementation.
Screen-space reflections aren't supported in WebGL 2, because they
require sampling from the depth buffer, which Naga can't do because of a
bug (`sampler2DShadow` is incorrectly generated instead of `sampler2D`;
this is the same reason why depth of field is disabled on that
platform).

To add screen-space reflections to a camera, use the
`ScreenSpaceReflectionsBundle` bundle or the
`ScreenSpaceReflectionsSettings` component. In addition to
`ScreenSpaceReflectionsSettings`, `DepthPrepass` and `DeferredPrepass`
must also be present for the reflections to show up. The
`ScreenSpaceReflectionsSettings` component contains several settings
that artists can tweak, and also comes with sensible defaults.

A new example, `ssr`, has been added. It's loosely based on the
[three.js ocean
sample](https://threejs.org/examples/webgl_shaders_ocean.html), but all
the assets are original. Note that the three.js demo has no screen-space
reflections and instead renders a mirror world. In contrast to #12959,
this demo tests not only a cube but also a more complex model (the
flight helmet).

## Changelog

### Added

* Screen-space reflections can be enabled for very smooth surfaces by
adding the `ScreenSpaceReflections` component to a camera. Deferred
rendering must be enabled for the reflections to appear.

![Screenshot 2024-05-18
143555](https://github.com/bevyengine/bevy/assets/157897/b8675b39-8a89-433e-a34e-1b9ee1233267)

![Screenshot 2024-05-18
143606](https://github.com/bevyengine/bevy/assets/157897/cc9e1cd0-9951-464a-9a08-e589210e5606)
