Handling of depth buffers for stereoscopic systems #5

Open
bialpio opened this issue Oct 2, 2020 · 3 comments

Comments

@bialpio
Contributor

bialpio commented Oct 2, 2020

Splitting @cabanier's question into a new issue:

How is a single depth buffer going to work with stereo? Will each eye have its own depth buffer?

The way I think about it, the XRDepthInformation we return must be relevant to the XRView that was used to retrieve it. For a stereo system with only one depth buffer, there would be two options: either the implementation reprojects the buffer so that each XRView gets its appropriate XRDepthInformation, or we expose an additional XRView used only to obtain the single depth buffer. The second option is probably not ideal: the app would then have to do the reprojection itself, XRDepthInformation would be null for some XRViews, and we would be introducing a synthetic XRView. If we were to require the implementation to reproject the depth buffer, how big of a burden would that be?
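A minimal sketch of the per-view shape under discussion, assuming the `XRFrame.getDepthInformation(view)` accessor from the depth-sensing explainer, a session requested with CPU-optimized depth sensing, and a pre-created `xrRefSpace` (the reference space is an assumption of this example):

```js
// Sketch only: per-view depth lookup in the rAF loop.
function onXRFrame(time, frame) {
  frame.session.requestAnimationFrame(onXRFrame);

  const viewerPose = frame.getViewerPose(xrRefSpace);
  if (!viewerPose) return;

  for (const view of viewerPose.views) {
    // The returned XRDepthInformation is tied to this specific XRView;
    // on a one-buffer stereo device the UA would have reprojected it.
    const depthInfo = frame.getDepthInformation(view);
    if (!depthInfo) continue; // depth may be unavailable this frame

    // Sample the depth (in meters) at the center of this view's buffer.
    const d = depthInfo.getDepthInMeters(0.5, 0.5);
    console.log(`${view.eye}: center depth = ${d} m`);
  }
}
```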

@cabanier
Member

cabanier commented Oct 6, 2020

I agree that it should be per view, but I don't know how much of a burden it is to calculate that.
It seems that if a stereoscopic device provides depth, it should make it available so that it's correct for each eye.

@cabanier
Member

cabanier commented Oct 1, 2023

@bialpio @toji Quest 3 will ship with support for GPU depth sensing. This information is returned as a texture array, not as a side-by-side texture.
Maybe we can update the API to make this clear?

/agenda discuss exposing depth buffer as a texture array
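To make the shape concrete, here is a rough sketch of consuming an array-backed depth texture on the GPU path. `XRWebGLBinding.getDepthInformation(view)` is from the explainer; `textureType` and `imageIndex` are illustrative attribute names for the distinction being proposed here, and a WebGL 2 context `gl` is assumed:

```js
// Sketch only: GPU path with an array-backed depth texture.
const glBinding = new XRWebGLBinding(session, gl);

function bindDepthForView(view) {
  const depthInfo = glBinding.getDepthInformation(view);
  if (!depthInfo) return null;

  if (depthInfo.textureType === 'texture-array') {
    // One texture array shared by both eyes; the shader picks this
    // view's layer via depthInfo.imageIndex (illustrative names).
    gl.bindTexture(gl.TEXTURE_2D_ARRAY, depthInfo.texture);
  } else {
    // Side-by-side / per-view 2D texture path.
    gl.bindTexture(gl.TEXTURE_2D, depthInfo.texture);
  }
  return depthInfo;
}
```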

probot-label bot added the agenda label Oct 1, 2023
@cabanier
Member

@toji agreed that we can define that GPU depth sensing always returns a texture array. This would simplify the spec and reduce the chance of user confusion.

/agenda should we always expose the depth as a texture array?
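If depth always arrives as a texture array, application shaders can unconditionally use an array sampler. A hypothetical GLSL ES 3.00 visualization shader embedded in JS; the uniform names `uDepth` and `uLayer` are made up for the example:

```js
// Hypothetical shader: one sampler2DArray, with this view's layer
// (e.g. the imageIndex from above) passed in as a uniform.
const depthVisFS = `#version 300 es
precision mediump float;
uniform mediump sampler2DArray uDepth; // depth texture array
uniform int uLayer;                    // this view's layer index
in vec2 vUv;
out vec4 fragColor;
void main() {
  // Channel interpretation depends on the negotiated depth data format.
  float d = texture(uDepth, vec3(vUv, float(uLayer))).r;
  fragColor = vec4(vec3(d), 1.0); // grayscale depth visualization
}`;
```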

Yonet removed the agenda label Oct 25, 2023