Hi,
After resolving the recent SDK setup issues and starting our experiments with the ULTRIS X20, we would like to ask for clarification on what level of light-field / angular data is accessible through the CUVIS SDK for research purposes.
We are using the ULTRIS X20 for hyperspectral 3D reconstruction research. Since the ULTRIS X20 is described as a light-field-based snapshot hyperspectral camera, we would like to better understand which data representations are available to the user.
Specifically, we would like to ask the following:
- Does the CUVIS SDK provide access to raw sensor data prior to hyperspectral cube reconstruction, such as micro-lens-based micro-images, sub-aperture views, or angular/light-field samples?
- If raw micro-image access is not available, is it possible to export sub-aperture views, or any other form of multi-angular images corresponding to different light-field directions? (A sketch of the kind of rearrangement we have in mind follows this list.)
- Is the delivered hyperspectral cube (e.g., 410×410×N spectral bands) already fully integrated over the angular dimensions, or does it still implicitly encode angular/light-field information that could be decoded for applications such as depth estimation or digital refocusing?
- Are calibration models or documentation available that describe the mapping from sensor pixels to rays (position, angle, wavelength), which would enable light-field-based depth estimation or refocusing in research applications?
- If direct access to angular/light-field information is not supported, would the ULTRIS X20 officially be recommended for use as a snapshot hyperspectral camera only, with 3D reconstruction relying on camera motion and multi-view geometry instead?
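To make the second question concrete, below is the kind of rearrangement we would like to perform on raw data, assuming (purely for illustration) a square lenslet grid aligned with the sensor pixels; the real ULTRIS optics and spectral filter layout are surely more involved, and none of the names below come from the CUVIS SDK:

```python
import numpy as np

def extract_subaperture_views(raw, lenslet_pitch, n_lenslets_y, n_lenslets_x):
    """Rearrange a raw micro-lens mosaic into sub-aperture views.

    Assumes each lenslet projects a (pitch x pitch) micro-image onto the
    sensor, tiled on a regular grid with no rotation or offset.
    Returns an array of shape (pitch, pitch, n_lenslets_y, n_lenslets_x),
    where views[u, v] is the sub-aperture image for angular sample (u, v).
    """
    p = lenslet_pitch
    crop = raw[: n_lenslets_y * p, : n_lenslets_x * p]
    # Split into (lenslet_y, pixel_in_lenslet_y, lenslet_x, pixel_in_lenslet_x).
    blocks = crop.reshape(n_lenslets_y, p, n_lenslets_x, p)
    # Gather the same intra-lenslet pixel (u, v) across all lenslets.
    return blocks.transpose(1, 3, 0, 2)

# Toy data: 5x5 pixels per lenslet on a 20x20 lenslet grid.
raw = np.random.default_rng(0).random((100, 100))
views = extract_subaperture_views(raw, lenslet_pitch=5,
                                  n_lenslets_y=20, n_lenslets_x=20)
print(views.shape)  # (5, 5, 20, 20): 25 angular views of 20x20 pixels
```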
Our goal is to understand whether the ULTRIS X20 can support single-shot, multi-angular light-field depth cues, or whether multi-view acquisition is the intended and recommended approach for 3D reconstruction in research workflows.
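For completeness, the single-shot depth cue we hope to exploit is ordinary shift-and-add refocusing over such sub-aperture views (sweeping the refocus parameter and scoring per-pixel sharpness yields a coarse depth map). Again a hedged sketch reusing the hypothetical views array from above, not anything from the CUVIS API:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(views, alpha):
    """Shift-and-add refocusing over sub-aperture views.

    views: (U, V, H, W) stack of sub-aperture images.
    alpha: refocusing parameter; each view is translated in proportion
           to its angular offset from the central view, then averaged.
    """
    U, V, H, W = views.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Sub-pixel translation proportional to angular offset.
            out += nd_shift(views[u, v], (alpha * (u - cu), alpha * (v - cv)))
    return out / (U * V)

# E.g. a small focal sweep: refocused = [refocus(views, a) for a in (-1, 0, 1)]
```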
Thank you very much for your time and clarification.