Simpler section reference labels

2024-08-13 09:57:40 +02:00
parent 611ff38503
commit 6887384e53
29 changed files with 99 additions and 99 deletions

@@ -1,11 +1,11 @@
\section{User Study}
-\label{sec:method}
+\label{method}
This first experiment aims to analyze whether the chosen visual hand rendering affects the performance and user experience of manipulating virtual objects with bare hands in AR.
\subsection{Visual Hand Renderings}
-\label{sec:hands}
+\label{hands}
We compared a set of the most popular visual hand renderings.%, as also presented in \secref{hands}.
%
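This commit drops the `sec:` prefix from the labels, which works as long as the referencing macros take the bare label name. A minimal sketch of definitions compatible with the usage above — the macro names `\secref` and `\figref` appear in the source, but the bodies below are assumptions, not taken from this repository:

```latex
% Sketch only: possible definitions behind \secref and \figref.
% The macro names come from the source; these bodies are assumed.
\usepackage[capitalise]{cleveref}
\newcommand{\secref}[1]{\Cref{#1}}  % e.g. \secref{hands} expands to a reference like "Section 3.1"
\newcommand{\figref}[1]{\Cref{#1}}  % e.g. \figref{method/hands-tips} to a reference like "Figure 2"
```

With `cleveref` resolving the label type automatically, a single label namespace suffices, so the `sec:` prefix is redundant.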
@@ -19,7 +19,7 @@ However, while the real hand can of course penetrate virtual objects, the visual
\subsubsection{None~(\figref{method/hands-none})}
-\label{sec:hands_none}
+\label{hands_none}
As a reference, we considered no visual hand rendering, as is common in AR~\autocite{hettiarachchi2016annexing, blaga2017usability, xiao2018mrtouch, teng2021touch}.
%
@@ -29,7 +29,7 @@ As virtual content is rendered on top of the real environment, the hand of the u
\subsubsection{Occlusion (Occl,~\figref{method/hands-occlusion})}
-\label{sec:hands_occlusion}
+\label{hands_occlusion}
To avoid the aforementioned undesired occlusions caused by virtual content being rendered on top of the real environment, we can carefully crop the virtual content whenever it hides real content that should remain visible~\autocite{macedo2023occlusion}, \eg the user's thumb in \figref{method/hands-occlusion}.
%
@@ -37,7 +37,7 @@ This approach is frequent in works using VST-AR headsets~\autocite{knorlein2009i
\subsubsection{Tips (\figref{method/hands-tips})}
-\label{sec:hands_tips}
+\label{hands_tips}
This rendering shows small visual rings around the user's fingertips, highlighting the most important parts of the hand and their contact with virtual objects during fine manipulation.
%
@@ -45,7 +45,7 @@ Unlike work using small spheres~\autocite{maisto2017evaluation, meli2014wearable
\subsubsection{Contour (Cont,~\figref{method/hands-contour})}
-\label{sec:hands_contour}
+\label{hands_contour}
This rendering is a {1-mm-thick} outline contouring the user's hands, providing information about the whole hand while leaving its inside visible.
%
@@ -55,7 +55,7 @@ This rendering is not as usual as the previous others in the literature~\autocit
\subsubsection{Skeleton (Skel,~\figref{method/hands-skeleton})}
-\label{sec:hands_skeleton}
+\label{hands_skeleton}
This rendering schematically depicts the joints and phalanges of the fingers with small spheres and cylinders, respectively, leaving the rest of the hand visible.
%
@@ -65,7 +65,7 @@ It is widely used in VR~\autocite{argelaguet2016role, schwind2018touch, chessa20
\subsubsection{Mesh (\figref{method/hands-mesh})}
-\label{sec:hands_mesh}
+\label{hands_mesh}
This rendering is a semi-transparent 3D hand model ($\alpha=0.2$), which is common in VR~\autocite{prachyabrued2014visual, argelaguet2016role, schwind2018touch, chessa2019grasping, yoon2020evaluating, vanveldhuizen2021effect}.
%
@@ -73,17 +73,17 @@ It can be seen as a filled version of the Contour hand rendering, thus partially
\subsection{Manipulation Tasks and Virtual Scene}
-\label{sec:tasks}
+\label{tasks}
\begin{subfigs}{tasks}{%
Experiment \#1. The two manipulation tasks:
}[
\item pushing a virtual cube along a table towards a target placed on the same surface; %
\item grasping and lifting a virtual cube towards a target placed on a 20-cm-higher plane. %
Both pictures show the cube to manipulate in the middle (5-cm-edge and opaque) and the eight possible targets to
reach (7-cm-edge volume and semi-transparent). %
Only one target at a time was shown during the experiments.%
]
\subfig[0.23]{method/task-push}
\subfig[0.23]{method/task-grasp}
\end{subfigs}
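The `subfigs` environment and `\subfig` command are project-specific macros; a minimal sketch of what they might expand to, assuming the standard `subcaption` package (only the names come from the source — every definition below is an assumption, and handling of the optional argument's `\item` entries is elided):

```latex
% Sketch only: assumed definitions for the custom subfigs/\subfig macros.
% Args of subfigs: figure label, caption text, optional per-panel items
% (the item list is ignored in this simplified version).
\usepackage{graphicx}
\usepackage{subcaption}
\NewDocumentEnvironment{subfigs}{m m o}
  {\begin{figure}[t]\centering}
  {\caption{#2}\label{#1}\end{figure}}
\newcommand{\subfig}[2][0.45]{%
  \begin{subfigure}{#1\linewidth}
    \includegraphics[width=\linewidth]{figures/#2}%
    \caption{}\label{#2}%
  \end{subfigure}\hfill}
```

Under these assumed definitions, `\subfig[0.23]{method/task-push}` would place the image at 23\% of the line width and register the path itself as the panel's label, matching how `\figref{method/task-push}` is used in the text.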
@@ -92,7 +92,7 @@ Following the guidelines of \textcite{bergstrom2021how} for designing object man
\subsubsection{Push Task}
-\label{sec:push-task}
+\label{push-task}
The first manipulation task consists in pushing a virtual object along a real flat surface towards a target placed on the same plane (see \figref{method/task-push}).
%
@@ -110,7 +110,7 @@ The task is considered completed when the cube is \emph{fully} inside the target
\subsubsection{Grasp Task}
-\label{sec:grasp-task}
+\label{grasp-task}
The second manipulation task consists in grasping, lifting, and placing a virtual object in a target placed on a different (higher) plane (see \figref{method/task-grasp}).
%
@@ -122,7 +122,7 @@ As before, the task is considered completed when the cube is \emph{fully} inside
\subsection{Experimental Design}
-\label{sec:design}
+\label{design}
We analyzed the two tasks separately. For each task, we considered two independent within-subject variables:
%
@@ -140,7 +140,7 @@ This design led to a total of 2 manipulation tasks \x 6 visual hand renderings \
\subsection{Apparatus and Implementation}
-\label{sec:apparatus}
+\label{apparatus}
We used a Microsoft HoloLens~2 OST-AR headset.
%
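The design count in the source ("2 manipulation tasks \x 6 visual hand renderings …") uses a `\x` shorthand, presumably a multiplication-sign macro; an assumed one-line definition:

```latex
% Assumed definition of the \x shorthand used in the design count.
\newcommand{\x}{$\times$}  % "2 tasks \x 6 renderings" renders as "2 tasks × 6 renderings"
```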
@@ -174,7 +174,7 @@ This setup enabled a good and consistent tracking of the user's fingers.
\subsection{Protocol}
-\label{sec:protocol}
+\label{protocol}
First, participants were given a consent form that briefed them about the tasks and the protocol of the experiment.
%
@@ -190,7 +190,7 @@ The experiment took around 1 hour and 20 minutes to complete.
\subsection{Participants}
-\label{sec:participants}
+\label{participants}
Twenty-four subjects participated in the study (eight aged between 18 and 24, fourteen aged between 25 and 34, and two aged between 35 and 44; 22~males, 1~female, 1~preferred not to say).
%
@@ -206,7 +206,7 @@ Participants signed an informed consent, including the declaration of having no
\subsection{Collected Data}
-\label{sec:metrics}
+\label{metrics}
Inspired by \textcite{laviolajr20173d}, we collected the following metrics during the experiment.
%