Add visual-hand chapter

2024-06-27 22:51:32 +02:00
parent 8b08562b90
commit 4a8ee35ede
9 changed files with 144 additions and 149 deletions


@@ -1,79 +1,79 @@
\section{User Study}
\label{sec:method}
This first experiment aims to analyze whether the chosen visual hand rendering affects the performance and user experience of manipulating virtual objects with bare hands in AR.
\subsection{Visual Hand Renderings}
\label{sec:hands}
We compared a set of the most popular visual hand renderings.%, as also presented in \secref{hands}.
%
Since we address hand-centered manipulation tasks, we only considered renderings that include the fingertips.
%
Moreover, so as to keep the focus on the hand rendering itself, we used neutral semi-transparent grey meshes, consistent with the choices made in~\autocite{yoon2020evaluating, vanveldhuizen2021effect}.
%
All considered hand renderings are drawn following the tracked pose of the user's real hand.
%
However, while the real hand can of course penetrate virtual objects, the visual hand is always constrained by the virtual environment.
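%
To give a sense of how such a constraint can be enforced, the sketch below projects any tracked joint that penetrates a virtual object back onto its surface before the visual hand is drawn (a minimal Python sketch assuming spherical obstacles; an illustration of the principle, not our actual Unity/MRTK implementation).
\begin{verbatim}
import numpy as np

# Minimal sketch: the tracked (real) hand may penetrate virtual objects,
# but the drawn (visual) hand is pushed back onto their surfaces.
# Obstacles are modelled as spheres (center, radius) purely for illustration.

def signed_distance(center, radius, p):
    return np.linalg.norm(p - center) - radius   # negative inside the sphere

def constrain_joint(p, center, radius):
    d = signed_distance(center, radius, p)
    if d < 0.0:
        n = (p - center) / np.linalg.norm(p - center)  # outward surface normal
        p = p - d * n                                  # project back onto the surface
    return p

def constrain_hand(joints, objects):
    out = []
    for p in joints:
        p = np.asarray(p, dtype=float)
        for center, radius in objects:
            p = constrain_joint(p, np.asarray(center, float), radius)
        out.append(p)
    return out
\end{verbatim}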
\subsubsection{None~(\figref{method/hands-none})}
\label{sec:hands_none}
As a reference, we considered no visual hand rendering, as is common in AR~\autocite{hettiarachchi2016annexing, blaga2017usability, xiao2018mrtouch, teng2021touch}.
%
Users have no information about hand tracking and no feedback about contact with the virtual objects, other than the objects' movement when touched.
%
As virtual content is rendered on top of the real environment, the hand of the user can be hidden by the virtual objects when manipulating them (see \secref{hands}).
\subsubsection{Occlusion (Occl,~\figref{method/hands-occlusion})}
\label{sec:hands_occlusion}
To avoid the aforementioned undesired occlusions due to the virtual content being rendered on top of the real environment, we can carefully crop the former whenever it hides real content that should be visible~\autocite{macedo2023occlusion}, \eg the thumb of the user in \figref{method/hands-occlusion}.
%
This approach is common in works using VST-AR headsets~\autocite{knorlein2009influence, ha2014wearhand, piumsomboon2014graspshell, suzuki2014grasping, al-kalbani2016analysis}.
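%
In image space, this cropping amounts to discarding virtual pixels wherever the real hand is closer to the viewer, as in the minimal sketch below (illustrative Python/NumPy; the per-pixel hand and virtual depth maps are assumed inputs, and actual systems implement this in the rendering pipeline).
\begin{verbatim}
import numpy as np

def mask_virtual(virtual_rgba, hand_depth, virtual_depth):
    # Wherever the real hand is closer to the eye than the virtual content,
    # discard the virtual pixels; on an OST display, an undrawn pixel simply
    # lets the real hand show through.
    hand_in_front = hand_depth < virtual_depth
    out = virtual_rgba.copy()
    out[hand_in_front] = 0.0
    return out
\end{verbatim}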
\subsubsection{Tips (\figref{method/hands-tips})}
\label{sec:hands_tips}
This rendering shows small visual rings around the fingertips of the user, highlighting the most important parts of the hand and their contact with virtual objects during fine manipulation.
%
Unlike work using small spheres~\autocite{maisto2017evaluation, meli2014wearable, grubert2018effects, normand2018enlarging, schwind2018touch}, this ring rendering also provides information about the orientation of the fingertips.
\subsubsection{Contour (Cont,~\figref{method/hands-contour})}
\label{sec:hands_contour}
This rendering is a {1-mm-thick} outline contouring the user's hands, providing information about the whole hand while leaving its inside visible.
%
Unlike the other renderings, it is not occluded by the virtual objects, as shown in \figref{method/hands-contour}.
%
This rendering is less common in the literature than the previous ones~\autocite{kang2020comparative}.
\subsubsection{Skeleton (Skel,~\figref{method/hands-skeleton})}
\label{sec:hands_skeleton}
This rendering schematically depicts the joints and phalanges of the fingers with small spheres and cylinders, respectively, leaving the outside of the hand visible.
%
It can be seen as an extension of the Tips rendering to include the complete articulation of the fingers.
%
It is widely used in VR~\autocite{argelaguet2016role, schwind2018touch, chessa2019grasping} and AR~\autocite{blaga2017usability, yoon2020evaluating}, as it is considered simple yet rich and comprehensive.
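%
The construction of this rendering from the tracked joints is straightforward, as in the sketch below (illustrative Python; the bone list is a small subset of the tracked hand model and the radii are placeholder values, not the ones we used).
\begin{verbatim}
# Minimal sketch of the Skeleton rendering: one sphere per tracked joint
# and one cylinder per phalanx connecting consecutive joints.
# BONES is an illustrative subset, not the full tracked hand model.

BONES = [("index_knuckle", "index_middle"),
         ("index_middle", "index_distal"),
         ("index_distal", "index_tip")]

def skeleton_primitives(joints, sphere_r=0.004, cylinder_r=0.003):
    # joints: dict mapping joint name -> 3D position (metres)
    spheres = [(p, sphere_r) for p in joints.values()]
    cylinders = [(joints[a], joints[b], cylinder_r) for a, b in BONES]
    return spheres, cylinders
\end{verbatim}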
\subsubsection{Mesh (\figref{method/hands-mesh})}
\label{sec:hands_mesh}
This rendering is a 3D semi-transparent ($\alpha=0.2$) hand model, which is common in VR~\autocite{prachyabrued2014visual, argelaguet2016role, schwind2018touch, chessa2019grasping, yoon2020evaluating, vanveldhuizen2021effect}.
%
It can be seen as a filled version of the Contour hand rendering, thus partially covering the view of the real hand.
\subsection{Manipulation Tasks and Virtual Scene}
\label{sec:tasks}
\begin{subfigs}{tasks}{%
Experiment \#1. The two manipulation tasks: %
@@ -87,11 +87,11 @@ It can be seen as a filled version of the Contour hand rendering, thus partially
\subfig[0.23]{method/task-grasp}[Grasp task]
\end{subfigs}
Following the guidelines of \textcite{bergstrom2021how} for designing object manipulation tasks, we considered two variations of a 3D pick-and-place task, commonly found in interaction and manipulation studies~\autocite{prachyabrued2014visual, maisto2017evaluation, meli2018combining, blaga2017usability, vanveldhuizen2021effect}.
\subsubsection{Push Task}
\label{sec:push-task}
The first manipulation task consists in pushing a virtual object along a real flat surface towards a target placed on the same plane (see \figref{method/task-push}).
%
@@ -109,7 +109,7 @@ The task is considered completed when the cube is \emph{fully} inside the target
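This completion criterion can be checked with a simple containment test, as in the sketch below (illustrative Python; axis-aligned boxes are assumed for brevity, and the actual task geometry may differ).
\begin{verbatim}
import numpy as np

# Illustrative completion test: the cube is "fully" inside the target when
# all eight of its corners fall within the target's bounds (axis-aligned
# boxes are assumed here for brevity).

def fully_inside(cube_corners, target_min, target_max):
    c = np.asarray(cube_corners)          # shape (8, 3)
    return bool(np.all((c >= target_min) & (c <= target_max)))
\end{verbatim}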
\subsubsection{Grasp Task}
\label{sec:grasp-task}
The second manipulation task consists in grasping, lifting, and placing a virtual object in a target located on a different (higher) plane (see \figref{method/task-grasp}).
%
@@ -121,7 +121,7 @@ As before, the task is considered completed when the cube is \emph{fully} inside
\subsection{Experimental Design}
\label{sec:design}
We analyzed the two tasks separately. For each of them, we considered two independent within-subject variables:
%
@@ -139,7 +139,7 @@ This design led to a total of 2 manipulation tasks \x 6 visual hand renderings \
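As an illustration, such a condition set can be enumerated as follows (illustrative Python; the per-participant shuffling is shown as one common ordering choice, not necessarily the one used, and repetition counts are omitted here).
\begin{verbatim}
import random

# Illustrative enumeration of the within-subject conditions
# (2 manipulation tasks x 6 visual hand renderings). The two tasks are
# treated as separate blocks, since they were analyzed separately; the
# repetition count per condition is omitted in this sketch.

TASKS = ["push", "grasp"]
RENDERINGS = ["None", "Occl", "Tips", "Cont", "Skel", "Mesh"]

def condition_order(seed):
    rng = random.Random(seed)
    order = []
    for task in TASKS:
        renderings = RENDERINGS[:]
        rng.shuffle(renderings)   # one common way to mitigate order effects
        order += [(task, r) for r in renderings]
    return order
\end{verbatim}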
\subsection{Apparatus and Implementation}
\label{sec:apparatus}
We used the OST-AR headset HoloLens~2.
%
@@ -153,7 +153,7 @@ The compiled application ran directly on the HoloLens~2 at \qty{60}{FPS}.
The default 3D hand model from MRTK was used for all visual hand renderings.
%
By changing the material properties of this hand model, we were able to achieve the six renderings shown in \figref{method/hands}.
%
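Conceptually, the six renderings differ only in the material settings applied to this single hand model, as sketched below (illustrative Python; the field names and values are assumptions grounded in the descriptions above, not the exact MRTK material parameters).
\begin{verbatim}
# Illustrative mapping from each condition to material settings on the
# single underlying hand model; field values are assumptions grounded in
# the descriptions above, not the exact MRTK parameters.

MATERIALS = {
    "None": dict(visible=False),
    "Occl": dict(visible=False, writes_depth=True),  # unseen, yet crops virtual content
    "Tips": dict(visible=True, parts="fingertip_rings"),
    "Cont": dict(visible=True, parts="outline", width_mm=1.0, depth_test=False),
    "Skel": dict(visible=True, parts="joints_and_phalanges"),
    "Mesh": dict(visible=True, parts="full_mesh", alpha=0.2),
}

def material_for(condition):
    return MATERIALS[condition]
\end{verbatim}
%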
A calibration was performed for every participant, so as to best adapt the size of the visual hand rendering to their real hand.
%
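One simple way to perform such a calibration is to scale the default model by the ratio of hand lengths, as sketched below (illustrative Python; the joint names and the choice of wrist-to-middle-fingertip length are assumptions, not the exact procedure used).
\begin{verbatim}
import numpy as np

# Illustrative per-participant calibration: uniformly scale the default hand
# model so its wrist-to-middle-fingertip length matches the tracked hand.

def calibration_scale(model_joints, tracked_joints):
    def hand_length(j):
        return np.linalg.norm(np.asarray(j["middle_tip"]) -
                              np.asarray(j["wrist"]))
    return hand_length(tracked_joints) / hand_length(model_joints)
\end{verbatim}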
@@ -173,7 +173,7 @@ This setup enabled a good and consistent tracking of the user's fingers.
\subsection{Protocol}
\label{sec:protocol}
First, participants were given a consent form that briefed them about the tasks and the protocol of the experiment.
%
@@ -183,13 +183,13 @@ During this training, we did not use any of the six hand renderings we want to t
Participants were asked to carry out the two tasks as naturally and as fast as possible.
%
Similarly to~\autocite{prachyabrued2014visual, maisto2017evaluation, blaga2017usability, vanveldhuizen2021effect}, we only allowed the use of the dominant hand.
%
The experiment took around 1 hour and 20 minutes to complete.
\subsection{Participants}
\label{sec:participants}
Twenty-four subjects participated in the study (eight aged between 18 and 24, fourteen aged between 25 and 34, and two aged between 35 and 44; 22~males, 1~female, 1~preferred not to say).
%
@@ -205,7 +205,7 @@ Participants signed an informed consent, including the declaration of having no
\subsection{Collected Data}
\label{sec:metrics}
Inspired by \textcite{laviolajr20173d}, we collected the following metrics during the experiment.
%
@@ -217,7 +217,7 @@ Finally, (iii) the mean \emph{Time per Contact}, defined as the total time any p
%
Solely for the grasp-and-place task, we also measured the (iv) \emph{Grip Aperture}, defined as the average distance between the thumb's fingertip and the other fingertips during the grasping of the cube;
%
lower values indicate greater finger interpenetration with the cube, i.e., a greater discrepancy between the real hand and the visual hand rendering constrained to the cube's surfaces, and reflect how confident users are in their grasp~\autocite{prachyabrued2014visual, al-kalbani2016analysis, blaga2017usability, chessa2019grasping}.
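%
As an illustration, this metric can be computed from the logged fingertip positions as sketched below (illustrative Python; the frame format is an assumption).
\begin{verbatim}
import numpy as np

# Illustrative Grip Aperture computation: per frame during the grasp, the
# mean distance between the thumb tip and the other fingertips, averaged
# over the grasp duration. The frame format is an assumption.

def grip_aperture(frames):
    # frames: list of dicts {"thumb": (x, y, z), "others": [(x, y, z), ...]}
    per_frame = [np.mean([np.linalg.norm(np.subtract(f["thumb"], p))
                          for p in f["others"]])
                 for f in frames]
    return float(np.mean(per_frame))
\end{verbatim}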
%
Taken together, these measures provide an overview of the performance and usability of each of the visual hand renderings tested, as we hypothesized that they should influence the behavior and effectiveness of the participants.