Replace \autocite => \cite

2024-09-08 10:52:06 +02:00
parent 0c11bb2668
commit e96888afab
19 changed files with 197 additions and 197 deletions
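The substitution itself is mechanical: `\autocite` and `\cite` take the same key list, so the commit amounts to a command-name rewrite across the repository's `.tex` sources. A minimal Python sketch of how such a bulk rewrite might be done (the function names and the `.tex` glob are illustrative assumptions, not part of the commit):

```python
import re
from pathlib import Path

def replace_autocite(text: str) -> str:
    """Rewrite \\autocite commands to \\cite, leaving the citation keys untouched."""
    # A plain command-name substitution suffices because both commands take the
    # same {key, key, ...} argument; the word boundary avoids touching variants
    # such as \autocites.
    return re.sub(r"\\autocite\b", r"\\cite", text)

def rewrite_tree(root: str) -> int:
    """Apply the substitution to every .tex file under `root`; return the number changed."""
    changed = 0
    for path in Path(root).rglob("*.tex"):
        old = path.read_text(encoding="utf-8")
        new = replace_autocite(old)
        if new != old:
            path.write_text(new, encoding="utf-8")
            changed += 1
    return changed
```

Running `rewrite_tree(".")` from the repository root would report how many files were touched, which can be checked against the "19 changed files" in the commit summary.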

@@ -11,7 +11,7 @@ We compared a set of the most popular visual hand renderings.%, as also presente
%
Since we address hand-centered manipulation tasks, we only considered renderings including the fingertips.
%
-Moreover, so as to keep the focus on the hand rendering itself, we used neutral semi-transparent grey meshes, consistent with the choices made in~\autocite{yoon2020evaluating, vanveldhuizen2021effect}.
+Moreover, so as to keep the focus on the hand rendering itself, we used neutral semi-transparent grey meshes, consistent with the choices made in~\cite{yoon2020evaluating, vanveldhuizen2021effect}.
%
All considered hand renderings are drawn following the tracked pose of the user's real hand.
%
@@ -21,7 +21,7 @@ However, while the real hand can of course penetrate virtual objects, the visual
\subsubsection{None~(\figref{method/hands-none})}
\label{hands_none}
-As a reference, we considered no visual hand rendering, as is common in AR~\autocite{hettiarachchi2016annexing, blaga2017usability, xiao2018mrtouch, teng2021touch}.
+As a reference, we considered no visual hand rendering, as is common in AR~\cite{hettiarachchi2016annexing, blaga2017usability, xiao2018mrtouch, teng2021touch}.
%
Users have no information about hand tracking and no feedback about contact with the virtual objects, other than the objects' movement when touched.
%
@@ -31,9 +31,9 @@ As virtual content is rendered on top of the real environment, the hand of the u
\subsubsection{Occlusion (Occl,~\figref{method/hands-occlusion})}
\label{hands_occlusion}
-To avoid the abovementioned undesired occlusions due to the virtual content being rendered on top of the real environment, we can carefully crop the former whenever it hides real content that should be visible~\autocite{macedo2023occlusion}, \eg the thumb of the user in \figref{method/hands-occlusion}.
+To avoid the abovementioned undesired occlusions due to the virtual content being rendered on top of the real environment, we can carefully crop the former whenever it hides real content that should be visible~\cite{macedo2023occlusion}, \eg the thumb of the user in \figref{method/hands-occlusion}.
%
-This approach is frequent in works using VST-AR headsets~\autocite{knorlein2009influence, ha2014wearhand, piumsomboon2014graspshell, suzuki2014grasping, al-kalbani2016analysis}.
+This approach is frequent in works using VST-AR headsets~\cite{knorlein2009influence, ha2014wearhand, piumsomboon2014graspshell, suzuki2014grasping, al-kalbani2016analysis}.
\subsubsection{Tips (\figref{method/hands-tips})}
@@ -41,7 +41,7 @@ This approach is frequent in works using VST-AR headsets~\autocite{knorlein2009i
This rendering shows small visual rings around the fingertips of the user, highlighting the most important parts of the hand and contact with virtual objects during fine manipulation.
%
-Unlike work using small spheres~\autocite{maisto2017evaluation, meli2014wearable, grubert2018effects, normand2018enlarging, schwind2018touch}, this ring rendering also provides information about the orientation of the fingertips.
+Unlike work using small spheres~\cite{maisto2017evaluation, meli2014wearable, grubert2018effects, normand2018enlarging, schwind2018touch}, this ring rendering also provides information about the orientation of the fingertips.
\subsubsection{Contour (Cont,~\figref{method/hands-contour})}
@@ -51,7 +51,7 @@ This rendering is a {1-mm-thick} outline contouring the user's hands, providing
%
Unlike the other renderings, it is not occluded by the virtual objects, as shown in \figref{method/hands-contour}.
%
-This rendering is less common in the literature than the previous ones~\autocite{kang2020comparative}.
+This rendering is less common in the literature than the previous ones~\cite{kang2020comparative}.
\subsubsection{Skeleton (Skel,~\figref{method/hands-skeleton})}
@@ -61,13 +61,13 @@ This rendering schematically renders the joints and phalanges of the fingers wit
%
It can be seen as an extension of the Tips rendering that includes the complete finger articulations.
%
-It is widely used in VR~\autocite{argelaguet2016role, schwind2018touch, chessa2019grasping} and AR~\autocite{blaga2017usability, yoon2020evaluating}, as it is considered simple yet rich and comprehensive.
+It is widely used in VR~\cite{argelaguet2016role, schwind2018touch, chessa2019grasping} and AR~\cite{blaga2017usability, yoon2020evaluating}, as it is considered simple yet rich and comprehensive.
\subsubsection{Mesh (\figref{method/hands-mesh})}
\label{hands_mesh}
-This rendering is a 3D semi-transparent ($\alpha=0.2$) hand model, which is common in VR~\autocite{prachyabrued2014visual, argelaguet2016role, schwind2018touch, chessa2019grasping, yoon2020evaluating, vanveldhuizen2021effect}.
+This rendering is a 3D semi-transparent ($\alpha=0.2$) hand model, which is common in VR~\cite{prachyabrued2014visual, argelaguet2016role, schwind2018touch, chessa2019grasping, yoon2020evaluating, vanveldhuizen2021effect}.
%
It can be seen as a filled version of the Contour hand rendering, thus partially covering the view of the real hand.
@@ -88,7 +88,7 @@ It can be seen as a filled version of the Contour hand rendering, thus partially
\subfig[0.23]{method/task-grasp}
\end{subfigs}
-Following the guidelines of \textcite{bergstrom2021how} for designing object manipulation tasks, we considered two variations of a 3D pick-and-place task, commonly found in interaction and manipulation studies~\autocite{prachyabrued2014visual, maisto2017evaluation, meli2018combining, blaga2017usability, vanveldhuizen2021effect}.
+Following the guidelines of \textcite{bergstrom2021how} for designing object manipulation tasks, we considered two variations of a 3D pick-and-place task, commonly found in interaction and manipulation studies~\cite{prachyabrued2014visual, maisto2017evaluation, meli2018combining, blaga2017usability, vanveldhuizen2021effect}.
\subsubsection{Push Task}
@@ -184,7 +184,7 @@ During this training, we did not use any of the six hand renderings we want to t
Participants were asked to carry out the two tasks as naturally and as fast as possible.
%
-Similarly to~\autocite{prachyabrued2014visual, maisto2017evaluation, blaga2017usability, vanveldhuizen2021effect}, we only allowed the use of the dominant hand.
+Similarly to~\cite{prachyabrued2014visual, maisto2017evaluation, blaga2017usability, vanveldhuizen2021effect}, we only allowed the use of the dominant hand.
%
The experiment took around 1 hour and 20 minutes to complete.
@@ -218,7 +218,7 @@ Finally, (iii) the mean \emph{Time per Contact}, defined as the total time any p
%
Solely for the grasp-and-place task, we also measured the (iv) \emph{Grip Aperture}, defined as the average distance between the thumb's fingertip and the other fingertips during the grasping of the cube;
%
-lower values indicate greater finger interpenetration with the cube, resulting in a greater discrepancy between the real hand and the visual hand rendering constrained to the cube surfaces, and reflecting how confident users are in their grasp~\autocite{prachyabrued2014visual, al-kalbani2016analysis, blaga2017usability, chessa2019grasping}.
+lower values indicate greater finger interpenetration with the cube, resulting in a greater discrepancy between the real hand and the visual hand rendering constrained to the cube surfaces, and reflecting how confident users are in their grasp~\cite{prachyabrued2014visual, al-kalbani2016analysis, blaga2017usability, chessa2019grasping}.
%
Taken together, these measures provide an overview of the performance and usability of each of the visual hand renderings tested, as we hypothesized that they should influence the behavior and effectiveness of the participants.