WIP xr-perception

2024-09-27 22:11:28 +02:00
parent a9319210df
commit 344496cbef
19 changed files with 270 additions and 366 deletions

@@ -6,7 +6,7 @@ Immersive systems such as headsets leave the hands free to interact with \VOs, p
%\begin{subfigs}{sutherland1968headmounted}{Photos of the first \AR system \cite{sutherland1968headmounted}. }[
% \item The \AR headset.
-% \item Wireframe \ThreeD \VOs were displayed registered in the real environment (as if they were part of it).
+% \item Wireframe \ThreeD \VOs were displayed registered in the \RE (as if they were part of it).
% ]
% \subfigsheight{45mm}
% \subfig{sutherland1970computer3}
@@ -237,7 +237,7 @@ Our hands allow us to manipulate real everyday objects with both strength and pr
Initially tracked by active sensing devices such as gloves or controllers, it is now possible to track hands in real time using cameras and computer vision algorithms natively integrated into \AR/\VR headsets \cite{tong2023survey}.
The user's hand is therefore tracked and reconstructed as a \emph{virtual hand} model in the \VE \cite{billinghurst2015survey,laviolajr20173d}.
-The simplest models represent the hand as a rigid 3D object that follows the movements of the real hand with \qty{6}{\DoF} (position and orientation in space) \cite{talvas2012novel}.
+The simplest models represent the hand as a rigid \ThreeD object that follows the movements of the real hand with \qty{6}{\DoF} (position and orientation in space) \cite{talvas2012novel}.
An alternative is to model only the fingertips (\figref{lee2007handy}) or the whole hand (\figref{hilliges2012holodesk_1}) as points.
The most common technique is to reconstruct all the phalanges of the hand in an articulated kinematic model (\secref{hand_anatomy}) \cite{borst2006spring}.
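To make the distinction between these hand models concrete, here is a minimal sketch of an articulated kinematic model driven by tracked joint poses; a rigid-hand model would keep a single 6-DoF pose instead, and a point-based model only the fingertip positions. All names, the joint layout, and the matrix convention are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

N_PHALANGES = 15  # 3 phalanges per finger, 5 fingers (simplified layout)

# Parent of each phalanx in the kinematic tree (-1 = attached to the wrist).
PARENTS = [-1, 0, 1, -1, 3, 4, -1, 6, 7, -1, 9, 10, -1, 12, 13]

class ArticulatedHand:
    """Each phalanx is a rigid segment posed relative to its parent joint."""

    def __init__(self):
        self.wrist = np.eye(4)  # 6-DoF wrist pose as a 4x4 homogeneous matrix
        self.local = np.stack([np.eye(4)] * N_PHALANGES)  # per-phalanx pose
                                                          # relative to its parent

    def update(self, wrist_pose, local_poses):
        """Copy the hand tracker's joint estimates onto the model each frame."""
        self.wrist = wrist_pose
        self.local = local_poses

    def world_pose(self, i):
        """Compose transforms up the chain to express phalanx i in world space."""
        pose = self.local[i]
        while PARENTS[i] != -1:
            i = PARENTS[i]
            pose = self.local[i] @ pose
        return self.wrist @ pose
```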
@@ -246,7 +246,7 @@ Heuristic techniques use rules to determine the selection, manipulation and rele
However, they produce unrealistic behaviour and are limited to the cases predicted by the rules.
Physics-based techniques simulate forces at the points of contact between the virtual hand and the \VO.
In particular, \textcite{borst2006spring} have proposed an articulated kinematic model in which each phalanx is a rigid body simulated with the god-object method \cite{zilles1995constraintbased}:
-The virtual phalanx follows the movements of the real phalanx, but remains constrained to the surface of the virtual objects during contact.
+The virtual phalanx follows the movements of the real phalanx, but remains constrained to the surface of the \VOs during contact.
The forces acting on the object are calculated as a function of the distance between the real and virtual hands (\figref{borst2006spring}).
More advanced techniques simulate friction phenomena \cite{talvas2013godfinger} and finger deformations \cite{talvas2015aggregate}, allowing highly accurate and realistic interactions, though these can be difficult to compute in real time.
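As an illustration of this coupling, the sketch below computes the spring-damper force that drives a god-object phalanx toward its tracked counterpart, in the spirit of \textcite{borst2006spring}; the function name and gain values are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def coupling_force(real_pos, virtual_pos, real_vel, virtual_vel,
                   k=500.0, b=10.0):
    """Spring-damper force pulling the simulated (god-object) phalanx toward
    the tracked (real) phalanx; gains k and b are illustrative placeholders.

    In free motion the two poses coincide and the force is near zero; during
    contact the constraint keeps the virtual phalanx on the object's surface,
    so the force grows with the real finger's penetration and is what the
    physics engine applies to the grasped object.
    """
    spring = k * (real_pos - virtual_pos)  # proportional to separation
    damper = b * (real_vel - virtual_vel)  # dissipates oscillations
    return spring + damper

# Example: the real fingertip has pressed 5 mm past the surface that the
# virtual phalanx is constrained to.
f = coupling_force(np.array([0.0, 0.0, -0.005]), np.zeros(3),
                   np.zeros(3), np.zeros(3))
print(f)  # [ 0.   0.  -2.5] -> 2.5 N directed into the surface
```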
@@ -264,7 +264,7 @@ More advanced techniques simulate the friction phenomena \cite{talvas2013godfing
\end{subfigs}
However, the lack of physical constraints on the user's hand movements makes manipulation actions tiring \cite{hincapie-ramos2014consumed}.
-While the user's fingers traverse the virtual object, a physics-based virtual hand remains in contact with the object, a discrepancy that may degrade the user's performance in \VR \cite{prachyabrued2012virtual}.
+While the user's fingers traverse the \VO, a physics-based virtual hand remains in contact with the object, a discrepancy that may degrade the user's performance in \VR \cite{prachyabrued2012virtual}.
Finally, in the absence of haptic feedback on each finger, it is difficult to estimate the contact and forces exerted by the fingers on the object during grasping and manipulation \cite{maisto2017evaluation,meli2018combining}.
While a visual rendering of the virtual hand in \VR can compensate for these issues \cite{prachyabrued2014visual}, the visual and haptic rendering of the virtual hand, or their combination, in \AR is under-researched.
@@ -303,7 +303,7 @@ In a collaborative task in immersive \OST-\AR \vs \VR, \textcite{yoon2020evaluat
Finally, \textcite{maisto2017evaluation} and \textcite{meli2018combining} compared the visual and haptic rendering of the hand in \VST-\AR, as detailed in the next section (\secref{vhar_rings}).
Taken together, these results suggest that a visual rendering of the hand in \AR could improve usability and performance in direct hand manipulation tasks, but the best rendering has yet to be determined.
%\cite{chan2010touching} : cues for touching (selection) \VOs.
-%\textcite{saito2021contact} found that masking the real hand with a textured 3D opaque virtual hand did not improve performance in a reach-to-grasp task but displaying the points of contact on the \VO did.
+%\textcite{saito2021contact} found that masking the real hand with a textured \ThreeD opaque virtual hand did not improve performance in a reach-to-grasp task but displaying the points of contact on the \VO did.
%To the best of our knowledge, evaluating the role of a visual rendering of the hand displayed \enquote{and seen} directly above real tracked hands in immersive \OST-\AR has not been explored, particularly in the context of \VO manipulation.
\begin{subfigs}{visual-hands}{Visual hand renderings in \AR. }[][