Structure related work
@@ -70,8 +70,14 @@ Yet, the user experience in \AR is still highly dependent on the display used.
\paragraph{Video See-Through Headsets}
These headsets are subject to the vergence-accommodation conflict.
Using a VST-AR headset has notable consequences, as the \enquote{real} view of the environment and of the hand is actually a video stream from a camera, which has a noticeable delay and a lower quality (\eg resolution, frame rate, field of view) compared to the direct view of the real environment provided by OST-AR~\cite{macedo2023occlusion}.
\paragraph{Optical See-Through Headsets}
With these headsets, distances tend to be underestimated~\cite{adams2022depth,peillard2019studying}.
\subsection{Presence and Embodiment in AR}
\label{ar_presence}
@@ -115,7 +121,6 @@ Back to the interaction loop:
We have presented the haptic and AR interfaces (rendering from the system to the user) used to render the VE, which try to recreate perceptual experiences similar and comparable to those of everyday life, \ie to provide the best possible immersion (see \secref{ar_presence}).
However, the user must also be able to interact with the environment and the virtual objects (interaction), which requires detecting and representing the user in the VE (tracking).
\subsubsection{Interaction Techniques}
This requires interaction techniques~\cite{billinghurst2005designing}: Physical Elements as Input $\rightarrow$ Interaction Technique $\rightarrow$ Virtual Elements as Output.
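As a minimal sketch of this mapping (the event names and the \texttt{pinch\_to\_grab} function below are hypothetical placeholders, not an interface taken from \textcite{billinghurst2005designing}):

\begin{lstlisting}[language=Python]
from typing import Callable, Dict

# An interaction technique maps physical elements used as input (tracked
# gestures, buttons, ...) onto virtual elements used as output (actions on
# virtual objects). The dictionary keys are illustrative only.
InteractionTechnique = Callable[[Dict], Dict]

def pinch_to_grab(physical_input: Dict) -> Dict:
    """Hypothetical technique: a tracked pinch grabs the nearest virtual object."""
    if physical_input.get("pinch"):
        return {"action": "grab", "target": physical_input.get("nearest_object")}
    return {"action": "none"}
\end{lstlisting}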
@@ -145,8 +150,38 @@ Prototypes: HandyAR and HoloDesk
\cite{piumsomboon2014graspshell}: direct hand manipulation of virtual objects in immersive AR vs. vocal commands.
\cite{chan2010touching}: cues for touching (selecting) virtual objects.
Occlusion problems: virtual objects must always remain visible, either by using a transparent rather than an opaque virtual hand, or by displaying their outlines when the hand hides them~\cite{piumsomboon2014graspshell}.
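A minimal sketch of the outline cue, assuming binary image-space masks for the virtual object and the hand (illustrative functions, not the implementation of \textcite{piumsomboon2014graspshell}):

\begin{lstlisting}[language=Python]
from scipy import ndimage

def occluded_outline(object_mask, hand_mask):
    """Pixels where the virtual object's outline should remain visible
    even though the hand is rendered in front of it."""
    # Inner one-pixel border of the object: the mask minus its erosion.
    contour = object_mask & ~ndimage.binary_erosion(object_mask)
    # Keep only the part of the outline that the hand currently covers.
    return contour & hand_mask
\end{lstlisting}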
\subsection{Visual Rendering of Hands in AR}
Mutual visual occlusion between a virtual object and the real hand, \ie hiding the virtual object when the real hand is in front of it and hiding the real hand when it is behind the virtual object, is often presented as natural and realistic, enhancing the blending of real and virtual environments~\cite{piumsomboon2014graspshell, al-kalbani2016analysis}.
In video see-through AR (VST-AR), this could be solved as a masking problem by combining the image of the real world captured by a camera and the generated virtual image~\cite{macedo2023occlusion}.
In OST-AR, this is more difficult because the virtual environment is displayed as a transparent 2D image on top of the 3D real world, which cannot be easily masked~\cite{macedo2023occlusion}.
Moreover, in VST-AR, the grip aperture and depth positioning of virtual objects often seem to be wrongly estimated~\cite{al-kalbani2016analysis, maisto2017evaluation}.
However, this effect has yet to be verified in an OST-AR setup.
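The masking approach in VST-AR can be sketched as a per-pixel depth test between the rendered virtual scene and an estimate of the hand depth; the following illustration assumes such depth maps are available and is not a pipeline taken from \textcite{macedo2023occlusion}:

\begin{lstlisting}[language=Python]
import numpy as np

def composite_vst_frame(camera_rgb, virtual_rgb, virtual_depth, hand_depth):
    """Per-pixel occlusion masking for VST-AR (illustrative sketch).

    camera_rgb:    HxWx3 camera image of the real scene (hand included).
    virtual_rgb:   HxWx3 rendering of the virtual objects.
    virtual_depth: HxW   depth of the virtual objects (np.inf where empty).
    hand_depth:    HxW   estimated depth of the real hand (np.inf where absent).
    """
    # A virtual pixel is shown only where a virtual object exists and lies
    # closer to the camera than the real hand; elsewhere the camera image
    # (and therefore the real hand) remains visible.
    virtual_in_front = virtual_depth < hand_depth
    return np.where(virtual_in_front[..., None], virtual_rgb, camera_rgb)
\end{lstlisting}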
An alternative is to render the virtual objects and the hand semi-transparent, so that they are partially visible even when one occludes the other, \eg in \figref{hands-none} the real hand is behind the virtual cube but still visible.
Although perceived as less natural, this approach seems to be preferred over mutual visual occlusion in VST-AR~\cite{buchmann2005interaction, ha2014wearhand, piumsomboon2014graspshell} and VR~\cite{vanveldhuizen2021effect}, but has not yet been evaluated in OST-AR.
However, this effect still causes depth conflicts that make it difficult to determine if one's hand is behind or in front of a virtual object, \eg in \figref{hands-none} the thumb is in front of the virtual cube, but it appears to be behind it.
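For illustration, such a semi-transparent overlay can be sketched as depth-agnostic alpha blending, which also makes the depth conflict explicit (hypothetical inputs, not a specific system from the cited works):

\begin{lstlisting}[language=Python]
import numpy as np

def blend_semi_transparent(camera_rgb, virtual_rgb, virtual_mask, alpha=0.5):
    """Alpha-blend the virtual objects over the real view (illustrative sketch).

    virtual_mask is 1 inside virtual objects and 0 elsewhere. Because no
    depth comparison is performed, the hand always remains visible through
    the objects, but the image gives no cue about which one is in front.
    """
    a = alpha * virtual_mask[..., None]
    return a * virtual_rgb + (1.0 - a) * camera_rgb
\end{lstlisting}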
In VR, as the user is fully immersed in the virtual environment and cannot see their real hands, it is necessary to represent them virtually.
It is known that the virtual hand representation has an impact on the perception, interaction performance, and preferences of users~\cite{prachyabrued2014visual, argelaguet2016role, grubert2018effects, schwind2018touch}.
Virtual hand rendering is also known to influence how an object is grasped in VR~\cite{prachyabrued2014visual,blaga2020too} and AR, or even how real bumps and holes are perceived in VR~\cite{schwind2018touch}, but its effect on the perception of a haptic texture augmentation has not yet been investigated.
In a pick-and-place task in VR, \textcite{prachyabrued2014visual} found that the virtual hand representation whose motion was constrained to the surface of the virtual objects performed the worst, while the virtual hand representation following the tracked human hand (thus penetrating the virtual objects) performed the best, even though it was rather disliked.
The authors also observed that the best compromise was a double rendering, showing both the tracked hand and a hand rendering constrained by the virtual environment.
It has also been shown that, compared to a realistic avatar, a skeleton rendering (similar to \figref{hands-skeleton}) can provide a stronger sense of being in control~\cite{argelaguet2016role} and that a minimalistic fingertip rendering (similar to \figref{hands-tips}) can be more effective in a typing task~\cite{grubert2018effects}.
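A minimal sketch of the idea behind the double rendering of \textcite{prachyabrued2014visual} mentioned above, assuming spherical virtual objects (a simplified stand-in, not their implementation):

\begin{lstlisting}[language=Python]
from dataclasses import dataclass
import numpy as np

@dataclass
class Sphere:                     # simplified stand-in for a virtual object
    center: np.ndarray
    radius: float

def constrained_position(tracked_pos, obstacles):
    """Position of the second, environment-constrained hand rendering.

    The tracked hand is drawn exactly where the sensor reports it (possibly
    inside a virtual object), while this constrained copy is pushed back to
    the surface of any object it penetrates.
    """
    pos = np.asarray(tracked_pos, dtype=float)
    for obj in obstacles:
        offset = pos - obj.center
        dist = float(np.linalg.norm(offset))
        if 0.0 < dist < obj.radius:           # hand centre inside the object
            pos = obj.center + offset / dist * obj.radius
    return pos
\end{lstlisting}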
In AR, as the real hand of a user is visible but not physically constrained by the virtual environment, adding a visual hand rendering that can physically interact with virtual objects would achieve a similar result to the promising double-hand rendering of \textcite{prachyabrued2014visual}.
Additionally, \textcite{kahl2021investigation} showed that a virtual object overlaying a tangible object in OST-AR can vary in size without worsening the users' experience or their performance.
This suggests that a visual hand rendering superimposed on the real hand could be helpful, but should not impair users.
Few works have explored the effect of visual hand rendering in AR~\cite{blaga2017usability, maisto2017evaluation, krichenbauer2018augmented, yoon2020evaluating, saito2021contact}.
For example, \textcite{blaga2017usability} evaluated a skeleton rendering against no visual hand overlay in several virtual object manipulation tasks.
Performance did not improve, but participants felt more confident with the virtual hand.
However, the experiment was carried out on a screen, in a non-immersive AR scenario.
\textcite{saito2021contact} found that masking the real hand with a textured 3D opaque virtual hand did not improve performance in a reach-to-grasp task but displaying the points of contact on the virtual object did.
To the best of our knowledge, the role of a visual hand rendering displayed, and seen, directly on top of the user's real tracked hands has not been evaluated in immersive OST-AR, particularly in the context of virtual object manipulation.
\subsection{Conclusion}