Replace \autocite => \cite

2024-09-08 10:52:06 +02:00
parent 0c11bb2668
commit e96888afab
19 changed files with 197 additions and 197 deletions
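A mechanical, repository-wide swap like this is usually scripted rather than edited by hand. A minimal sketch of such a pass (the glob pattern, encoding, and in-place rewrite are illustrative assumptions, not the actual command behind this commit):

    from pathlib import Path

    # Swap every \autocite for \cite across the LaTeX sources.
    # Both commands accept the same arguments (e.g. \autocite{key1, key2}),
    # so a plain textual substitution is sufficient.
    for tex in Path(".").rglob("*.tex"):
        text = tex.read_text(encoding="utf-8")
        if r"\autocite" in text:
            tex.write_text(text.replace(r"\autocite", r"\cite"), encoding="utf-8")

Run from the repository root, this rewrites only the files that actually contain \autocite; a purely textual one-for-one swap is consistent with the symmetric insertion and deletion counts reported above.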


@@ -23,35 +23,35 @@
Augmented reality (AR) integrates virtual content into our real-world surroundings, giving the illusion of a single environment and promising natural and seamless interactions with real and virtual objects.
%
Virtual object manipulation is particularly critical for useful and effective AR usage, such as in medical applications, training, or entertainment~\autocite{laviolajr20173d, kim2018revisiting}.
Virtual object manipulation is particularly critical for useful and effective AR usage, such as in medical applications, training, or entertainment~\cite{laviolajr20173d, kim2018revisiting}.
%
Hand tracking technologies~\autocite{xiao2018mrtouch}, grasping techniques~\autocite{holl2018efficient}, and real-time physics engines permit users to directly manipulate virtual objects with their bare hands as if they were real~\autocite{piumsomboon2014graspshell}, without requiring controllers~\autocite{krichenbauer2018augmented}, gloves~\autocite{prachyabrued2014visual}, or predefined gesture techniques~\autocite{piumsomboon2013userdefined, ha2014wearhand}.
Hand tracking technologies~\cite{xiao2018mrtouch}, grasping techniques~\cite{holl2018efficient}, and real-time physics engines permit users to directly manipulate virtual objects with their bare hands as if they were real~\cite{piumsomboon2014graspshell}, without requiring controllers~\cite{krichenbauer2018augmented}, gloves~\cite{prachyabrued2014visual}, or predefined gesture techniques~\cite{piumsomboon2013userdefined, ha2014wearhand}.
%
Optical see-through AR (OST-AR) head-mounted displays (HMDs), such as the Microsoft HoloLens 2 or the Magic Leap, are particularly suited for this type of direct hand interaction~\autocite{kim2018revisiting}.
Optical see-through AR (OST-AR) head-mounted displays (HMDs), such as the Microsoft HoloLens 2 or the Magic Leap, are particularly suited for this type of direct hand interaction~\cite{kim2018revisiting}.
However, there are still several haptic and visual limitations that affect manipulation in OST-AR, degrading the user experience.
%
For example, it is difficult to estimate the position of one's hand in relation to virtual content because mutual occlusion between the hand and the virtual object is often lacking~\autocite{macedo2023occlusion}, the depth of virtual content is underestimated~\autocite{diaz2017designing, peillard2019studying}, and hand tracking still has noticeable latency~\autocite{xiao2018mrtouch}.
For example, it is difficult to estimate the position of one's hand in relation to virtual content because mutual occlusion between the hand and the virtual object is often lacking~\cite{macedo2023occlusion}, the depth of virtual content is underestimated~\cite{diaz2017designing, peillard2019studying}, and hand tracking still has noticeable latency~\cite{xiao2018mrtouch}.
%
Similarly, it is challenging to ensure confident and realistic contact with a virtual object due to the lack of haptic feedback and the intangibility of the virtual environment, which of course cannot apply physical constraints on the hand~\autocite{maisto2017evaluation, meli2018combining, lopes2018adding, teng2021touch}.
Similarly, it is challenging to ensure confident and realistic contact with a virtual object due to the lack of haptic feedback and the intangibility of the virtual environment, which of course cannot apply physical constraints on the hand~\cite{maisto2017evaluation, meli2018combining, lopes2018adding, teng2021touch}.
%
These limitations also make it difficult to confidently move a grasped object towards a target~\autocite{maisto2017evaluation, meli2018combining}.
These limitations also make it difficult to confidently move a grasped object towards a target~\cite{maisto2017evaluation, meli2018combining}.
To address these haptic and visual limitations, we investigate two types of sensory feedback that are known to improve virtual interactions with hands, but have not been studied together in an AR context: visual hand rendering and delocalized haptic rendering.
%
A few works explored the effect of a visual hand rendering on interactions in AR by simulating mutual occlusion between the real hand and virtual objects~\autocite{ha2014wearhand, piumsomboon2014graspshell, al-kalbani2016analysis}, or displaying a 3D virtual hand model, semi-transparent~\autocite{ha2014wearhand, piumsomboon2014graspshell} or opaque~\autocite{blaga2017usability, yoon2020evaluating, saito2021contact}.
A few works explored the effect of a visual hand rendering on interactions in AR by simulating mutual occlusion between the real hand and virtual objects~\cite{ha2014wearhand, piumsomboon2014graspshell, al-kalbani2016analysis}, or displaying a 3D virtual hand model, semi-transparent~\cite{ha2014wearhand, piumsomboon2014graspshell} or opaque~\cite{blaga2017usability, yoon2020evaluating, saito2021contact}.
%
Indeed, some visual hand renderings are known to improve interactions or user experience in virtual reality (VR), where the real hand is not visible~\autocite{prachyabrued2014visual, argelaguet2016role, grubert2018effects, schwind2018touch, vanveldhuizen2021effect}.
Indeed, some visual hand renderings are known to improve interactions or user experience in virtual reality (VR), where the real hand is not visible~\cite{prachyabrued2014visual, argelaguet2016role, grubert2018effects, schwind2018touch, vanveldhuizen2021effect}.
%
However, the role of a visual hand rendering superimposed on, and seen over, the real tracked hand has not yet been investigated in AR.
%
In parallel, several studies have demonstrated that wearable haptics can significantly improve interaction performance and user experience in AR~\autocite{maisto2017evaluation, meli2018combining, sarac2022perceived}.
In parallel, several studies have demonstrated that wearable haptics can significantly improve interaction performance and user experience in AR~\cite{maisto2017evaluation, meli2018combining, sarac2022perceived}.
%
However, haptic rendering for AR remains a challenge, as it is difficult to provide rich and realistic haptic sensations while limiting their negative impact on hand tracking~\autocite{pacchierotti2016hring} and keeping the fingertips and palm free to interact with the real environment~\autocite{lopes2018adding, teng2021touch, sarac2022perceived, palmer2022haptic}.
However, haptic rendering for AR remains a challenge, as it is difficult to provide rich and realistic haptic sensations while limiting their negative impact on hand tracking~\cite{pacchierotti2016hring} and keeping the fingertips and palm free to interact with the real environment~\cite{lopes2018adding, teng2021touch, sarac2022perceived, palmer2022haptic}.
%
Therefore, the haptic feedback of fingertip contact with the virtual environment needs to be rendered elsewhere on the hand, but it is unclear which positioning should be preferred or which type of haptic feedback is best suited for manipulating virtual objects in AR.
%
A final question is whether one or the other of these (haptic or visual) hand renderings should be preferred~\autocite{maisto2017evaluation, meli2018combining}, or whether a combined visuo-haptic rendering is beneficial for users.
A final question is whether one or the other of these (haptic or visual) hand renderings should be preferred~\cite{maisto2017evaluation, meli2018combining}, or whether a combined visuo-haptic rendering is beneficial for users.
%
In fact, either hand rendering alone might provide sufficient sensory cues for efficient manipulation of virtual objects in AR; conversely, the two might prove complementary.


@@ -11,7 +11,7 @@ We compared a set of the most popular visual hand renderings.%, as also presente
%
Since we address hand-centered manipulation tasks, we only considered renderings including the fingertips.
%
Moreover, to keep the focus on the hand rendering itself, we used neutral semi-transparent grey meshes, consistent with the choices made in~\autocite{yoon2020evaluating, vanveldhuizen2021effect}.
Moreover, to keep the focus on the hand rendering itself, we used neutral semi-transparent grey meshes, consistent with the choices made in~\cite{yoon2020evaluating, vanveldhuizen2021effect}.
%
All considered hand renderings are drawn following the tracked pose of the user's real hand.
%
@@ -21,7 +21,7 @@ However, while the real hand can of course penetrate virtual objects, the visual
\subsubsection{None~(\figref{method/hands-none})}
\label{hands_none}
As a reference, we considered no visual hand rendering, as is common in AR~\autocite{hettiarachchi2016annexing, blaga2017usability, xiao2018mrtouch, teng2021touch}.
As a reference, we considered no visual hand rendering, as is common in AR~\cite{hettiarachchi2016annexing, blaga2017usability, xiao2018mrtouch, teng2021touch}.
%
Users have no information about hand tracking and no feedback about contact with the virtual objects, other than the objects' movement when touched.
%
@@ -31,9 +31,9 @@ As virtual content is rendered on top of the real environment, the hand of the u
\subsubsection{Occlusion (Occl,~\figref{method/hands-occlusion})}
\label{hands_occlusion}
To avoid the abovementioned undesired occlusions due to the virtual content being rendered on top of the real environment, we can carefully crop the former whenever it hides real content that should be visible~\autocite{macedo2023occlusion}, \eg the thumb of the user in \figref{method/hands-occlusion}.
To avoid the abovementioned undesired occlusions due to the virtual content being rendered on top of the real environment, we can carefully crop the former whenever it hides real content that should be visible~\cite{macedo2023occlusion}, \eg the thumb of the user in \figref{method/hands-occlusion}.
%
This approach is common in works using VST-AR headsets~\autocite{knorlein2009influence, ha2014wearhand, piumsomboon2014graspshell, suzuki2014grasping, al-kalbani2016analysis}.
This approach is common in works using VST-AR headsets~\cite{knorlein2009influence, ha2014wearhand, piumsomboon2014graspshell, suzuki2014grasping, al-kalbani2016analysis}.
\subsubsection{Tips (\figref{method/hands-tips})}
@@ -41,7 +41,7 @@ This approach is common in works using VST-AR headsets~\autocite{knorlein2009i
This rendering shows small visual rings around the fingertips of the user, highlighting the most important parts of the hand and contact with virtual objects during fine manipulation.
%
Unlike work using small spheres~\autocite{maisto2017evaluation, meli2014wearable, grubert2018effects, normand2018enlarging, schwind2018touch}, this ring rendering also provides information about the orientation of the fingertips.
Unlike work using small spheres~\cite{maisto2017evaluation, meli2014wearable, grubert2018effects, normand2018enlarging, schwind2018touch}, this ring rendering also provides information about the orientation of the fingertips.
\subsubsection{Contour (Cont,~\figref{method/hands-contour})}
@@ -51,7 +51,7 @@ This rendering is a {1-mm-thick} outline contouring the user's hands, providing
%
Unlike the other renderings, it is not occluded by the virtual objects, as shown in \figref{method/hands-contour}.
%
This rendering is less common in the literature than the previous ones~\autocite{kang2020comparative}.
This rendering is less common in the literature than the previous ones~\cite{kang2020comparative}.
\subsubsection{Skeleton (Skel,~\figref{method/hands-skeleton})}
@@ -61,13 +61,13 @@ This rendering schematically renders the joints and phalanges of the fingers wit
%
It can be seen as an extension of the Tips rendering that includes the complete finger articulations.
%
It is widely used in VR~\autocite{argelaguet2016role, schwind2018touch, chessa2019grasping} and AR~\autocite{blaga2017usability, yoon2020evaluating}, as it is considered simple yet rich and comprehensive.
It is widely used in VR~\cite{argelaguet2016role, schwind2018touch, chessa2019grasping} and AR~\cite{blaga2017usability, yoon2020evaluating}, as it is considered simple yet rich and comprehensive.
\subsubsection{Mesh (\figref{method/hands-mesh})}
\label{hands_mesh}
This rendering is a 3D semi-transparent ($a=0.2$) hand model, which is common in VR~\autocite{prachyabrued2014visual, argelaguet2016role, schwind2018touch, chessa2019grasping, yoon2020evaluating, vanveldhuizen2021effect}.
This rendering is a 3D semi-transparent ($a=0.2$) hand model, which is common in VR~\cite{prachyabrued2014visual, argelaguet2016role, schwind2018touch, chessa2019grasping, yoon2020evaluating, vanveldhuizen2021effect}.
%
It can be seen as a filled version of the Contour hand rendering, thus partially covering the view of the real hand.
@@ -88,7 +88,7 @@ It can be seen as a filled version of the Contour hand rendering, thus partially
\subfig[0.23]{method/task-grasp}
\end{subfigs}
Following the guidelines of \textcite{bergstrom2021how} for designing object manipulation tasks, we considered two variations of a 3D pick-and-place task, commonly found in interaction and manipulation studies~\autocite{prachyabrued2014visual, maisto2017evaluation, meli2018combining, blaga2017usability, vanveldhuizen2021effect}.
Following the guidelines of \textcite{bergstrom2021how} for designing object manipulation tasks, we considered two variations of a 3D pick-and-place task, commonly found in interaction and manipulation studies~\cite{prachyabrued2014visual, maisto2017evaluation, meli2018combining, blaga2017usability, vanveldhuizen2021effect}.
\subsubsection{Push Task}
@@ -184,7 +184,7 @@ During this training, we did not use any of the six hand renderings we want to t
Participants were asked to carry out the two tasks as naturally and as fast as possible.
%
Similarly to~\autocite{prachyabrued2014visual, maisto2017evaluation, blaga2017usability, vanveldhuizen2021effect}, we only allowed the use of the dominant hand.
Similarly to~\cite{prachyabrued2014visual, maisto2017evaluation, blaga2017usability, vanveldhuizen2021effect}, we only allowed the use of the dominant hand.
%
The experiment took around 1 hour and 20 minutes to complete.
@@ -218,7 +218,7 @@ Finally, (iii) the mean \emph{Time per Contact}, defined as the total time any p
%
Solely for the grasp-and-place task, we also measured the (iv) \emph{Grip Aperture}, defined as the average distance between the thumb's fingertip and the other fingertips during the grasping of the cube;
%
lower values indicate a greater finger interpenetration with the cube, resulting in a greater discrepancy between the real hand and the visual hand rendering constrained to the cube surfaces, and reflect how confident users are in their grasp~\autocite{prachyabrued2014visual, al-kalbani2016analysis, blaga2017usability, chessa2019grasping}.
lower values indicate a greater finger interpenetration with the cube, resulting in a greater discrepancy between the real hand and the visual hand rendering constrained to the cube surfaces, and reflect how confident users are in their grasp~\cite{prachyabrued2014visual, al-kalbani2016analysis, blaga2017usability, chessa2019grasping}.
%
Taken together, these measures provide an overview of the performance and usability of each of the visual hand renderings tested, as we hypothesized that they should influence the behavior and effectiveness of the participants.
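For reference, the Grip Aperture measure admits a compact formalization. One plausible reading, assuming the average runs over the four non-thumb fingertips and over all tracked frames of the grasp (the text does not spell this out):

\[
\mathrm{GA} = \frac{1}{T} \sum_{t=1}^{T} \frac{1}{4} \sum_{i=1}^{4} \left\lVert \mathbf{p}_{\mathrm{thumb}}(t) - \mathbf{p}_{i}(t) \right\rVert,
\]

where $\mathbf{p}_{i}(t)$ is the position of fingertip $i$ at tracked frame $t$ and $T$ is the number of frames during which the cube is grasped.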


@@ -37,7 +37,7 @@ This result is consistent with \textcite{saito2021contact}, who found that disp
To summarize, when employing a visual hand rendering overlaying the real hand, participants performed better and were more confident in manipulating virtual objects with bare hands in AR.
%
These results contrast with similar manipulation studies in non-immersive, on-screen AR, where participants found that a visual hand rendering improved the usability of the interaction but not their performance~\autocite{blaga2017usability, maisto2017evaluation, meli2018combining}.
These results contrast with similar manipulation studies in non-immersive, on-screen AR, where participants found that a visual hand rendering improved the usability of the interaction but not their performance~\cite{blaga2017usability, maisto2017evaluation, meli2018combining}.
%
Our results show the most effective visual hand rendering to be the Skeleton one. Participants appreciated that it provided a detailed and precise view of the tracking of the real hand, without hiding or masking it.
%
@@ -45,7 +45,7 @@ Although the Contour and Mesh hand renderings were also highly rated, some parti
%
This result is in line with the study of virtual object manipulation in VR by \textcite{prachyabrued2014visual}, who found that the most effective visual hand rendering was a double representation of both the real tracked hand and a visual hand physically constrained by the virtual environment.
%
This type of Skeleton rendering was also the one that provided the best sense of agency (control) in VR~\autocite{argelaguet2016role, schwind2018touch}.
This type of Skeleton rendering was also the one that provided the best sense of agency (control) in VR~\cite{argelaguet2016role, schwind2018touch}.
Of course, these results have some limitations, as they only address a limited set of manipulation tasks and visual hand characteristics, evaluated in a specific OST-AR setup.
%