
There is a newer version of the record available.

Published December 19, 2023 | Version v3
Dataset Open

CREATTIVE3D multimodal dataset of user behavior in virtual reality

Author affiliations:

  • 1. Centre Inria d'Université Côte d'Azur
  • 2. Université Côte d'Azur, CNRS, I3S, Institut Universitaire de France
  • 3. Université Côte d'Azur, CNRS, I3S
  • 4. Université Côte d'Azur, CoBTeK, CHU, Institut Claude Pompidou
  • 5. Université Côte d'Azur, LAMHESS

Description

In the context of the ANR CREATTIVE3D project, we combine expertise from computer science, neuroscience, and clinical practice to analyze the impact that a simulated low-vision condition has on user navigation behavior in complex road-crossing scenes: a common daily situation in which difficulty accessing and processing visual information (e.g., traffic lights, approaching cars) in a timely fashion can have serious consequences for a person's safety and well-being. As a secondary objective, we also investigate the potential role virtual reality could play in rehabilitation and training protocols for low-vision patients.

This dataset contains the data collected as part of the study described in An Integrated Framework for Understanding Multimodal Embodied Experiences in Interactive Virtual Reality.

The dataset also serves as metadata for the pre-print Exploring, walking, and interacting in virtual reality with simulated low vision: a living contextual dataset.

To use this dataset, please cite:

@unpublished{wu:hal-04429351,
  TITLE = {{Exploring, walking, and interacting in virtual reality with simulated low vision: a living contextual dataset}},
  AUTHOR = {Wu, Hui-Yin and Robert, Florent Alain Sauveur and Gallo, Franz Franco and Pirkovets, Kateryna and Quere, Cl{\'e}ment and Delachambre, Johanna and Ramano{\"e}l, Stephen and Gros, Auriane and Winckler, Marco and Sassatelli, Lucile and Hayotte, Meggy and Menin, Aline and Kornprobst, Pierre},
  URL = {https://inria.hal.science/hal-04429351},
  NOTE = {working paper or preprint},
  YEAR = {2023},
  MONTH = Dec,
  KEYWORDS = {Virtual reality ; Dataset ; Context ; Low vision ; 3D environments ; User study},
  PDF = {https://inria.hal.science/hal-04429351/file/2023_CREATTIVE3D_dataset_arxiv_.pdf},
  HAL_ID = {hal-04429351},
  HAL_VERSION = {v1},
}

@inproceedings{robert2023integrated,
  title = {An integrated framework for understanding multimodal embodied experiences in interactive virtual reality},
  author = {Robert, Florent and Wu, Hui-Yin and Sassatelli, Lucile and Ramanoel, Stephen and Gros, Auriane and Winckler, Marco},
  booktitle = {Proceedings of the 2023 ACM International Conference on Interactive Media Experiences},
  pages = {14--26},
  year = {2023}
}

Files (7.3 GB)

Name      Size      MD5 checksum
Data.zip  7.2 GB    md5:5832486e457f44149b07f39226708934
—         6.9 kB    md5:5cf36d700d5732c0abf46ba9fa64fe6c
—         22.2 MB   md5:c407ebb31c0aba704da3ef3e14fcd47e
—         2.1 kB    md5:de9e3c0f3d2335802b57369c2b0f9dd5
—         238.7 kB  md5:3f11a9cdd8b7ff78490a68af8c4f2042
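After downloading, the files can be checked against the MD5 checksums listed above. The following is a minimal Python sketch, not part of the dataset itself; the "downloads" directory and the mapping of Data.zip to its checksum are assumptions for illustration, and the other checksums can be added once their file names are known.

import hashlib
from pathlib import Path

# Checksums from this record (only Data.zip has a known name here).
EXPECTED_MD5 = {
    "Data.zip": "5832486e457f44149b07f39226708934",
}

def md5_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 digest of a file, reading it in chunks."""
    digest = hashlib.md5()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    download_dir = Path("downloads")  # hypothetical local download directory
    for name, expected in EXPECTED_MD5.items():
        actual = md5_of(download_dir / name)
        status = "OK" if actual == expected else "MISMATCH"
        print(f"{name}: {status} (md5:{actual})")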

Additional details

Related works

Cites
Conference paper: 10.1145/3573381.3596150 (DOI)
Is metadata for
Working paper: https://inria.hal.science/hal-04429351 (URL)

Funding

  • Agence Nationale de la Recherche – CREATTIVE3D: Creating Attention-Driven 3D Contexts for Low Vision (ANR-21-CE33-0001)
  • Agence Nationale de la Recherche – UCA JEDI: Idex UCA JEDI (ANR-15-IDEX-0001)
  • Grand Équipement National de Calcul Intensif (France) – GENCI (ANR-17-EQPX-0001)