Better figures

2024-09-24 15:07:13 +02:00
parent 9ba5d344a5
commit b8b799df3d
18 changed files with 383 additions and 418 deletions

View File

@@ -1,26 +1,3 @@
\section{Introduction}
\label{introduction}
\begin{subfigswide}{hands}{%
Experiment \#1. The six considered visual hand renderings, as seen by the user through the AR headset
during the two-finger grasping of a virtual cube.
%
From left to right: %
no visual rendering \emph{(None)}, %
cropped virtual content to {enable} hand-cube occlusion \emph{(Occlusion, Occl)}, %
rings on the fingertips \emph{(Tips)}, %
thin outline of the hand \emph{(Contour, Cont)}, %
fingers' joints and phalanges \emph{(Skeleton, Skel)}, and %
semi-transparent 3D hand model \emph{(Mesh)}.
}
\subfig[0.15]{method/hands-none}%[None]
\subfig[0.15]{method/hands-occlusion}%[Occlusion (Occl)]
\subfig[0.15]{method/hands-tips}%[Tips]
\subfig[0.15]{method/hands-contour}%[Contour (Cont)]
\subfig[0.15]{method/hands-skeleton}%[Skeleton (Skel)]
\subfig[0.15]{method/hands-mesh}%[Mesh]
\end{subfigswide}
Augmented reality (AR) integrates virtual content into our real-world surroundings, giving the illusion of one unique environment and promising natural and seamless interactions with real and virtual objects.
%
Virtual object manipulation is particularly critical for useful and effective AR usage, such as in medical applications, training, or entertainment \cite{laviolajr20173d, kim2018revisiting}.
@@ -62,6 +39,23 @@ We consider two representative manipulation tasks: push-and-slide and grasp-and-
The main contributions of this work are:
%
\begin{itemize}
\item a first human subject experiment evaluating the performance and user experience of six visual hand renderings superimposed on the real hand;
\item a second human subject experiment evaluating the performance and user experience of visuo-haptic hand renderings by comparing two vibrotactile contact techniques provided at four delocalized positions on the hand and combined with the two most representative visual hand renderings established in the first experiment.
\end{itemize}
\begin{subfigs}{hands}{The six visual hand renderings}[
Depicted as seen by the user through the AR headset during the two-finger grasping of a virtual cube.
][
\item No visual rendering \emph{(None)}.
\item Cropped virtual content to enable hand-cube occlusion \emph{(Occlusion, Occl)}.
\item Rings on the fingertips \emph{(Tips)}.
\item Thin outline of the hand \emph{(Contour, Cont)}.
\item Fingers' joints and phalanges \emph{(Skeleton, Skel)}.
\item Semi-transparent 3D hand model \emph{(Mesh)}.
]
\subfig[0.15]{method/hands-none}
\subfig[0.15]{method/hands-occlusion}
\subfig[0.15]{method/hands-tips}
\subfig[0.15]{method/hands-contour}
\subfig[0.15]{method/hands-skeleton}
\subfig[0.15]{method/hands-mesh}
\end{subfigs}
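The refactored interface takes a label, a short caption, and two optional bracket arguments: caption details and an inline list of subcaptions. The repository's actual macro definitions are not part of this diff; the following is only a minimal sketch of how subfigs and \subfig could be written, assuming the subcaption and enumitem packages (all names and argument handling here are assumptions, not the real implementation):

% Sketch only -- assumes \usepackage{graphicx}, \usepackage{subcaption},
% and \usepackage[inline]{enumitem}; not the repository's real macros.
% Usage: \begin{subfigs}{<label>}{<caption>}[<details>][<subcaption items>]
\NewDocumentEnvironment{subfigs}{m m o o}
  {\begin{figure}\centering}
  {\caption{#2\IfValueT{#3}{ #3}%
      \IfValueT{#4}{ \begin{enumerate*}[label=\textbf{(\alph*)}]#4\end{enumerate*}}}%
   \label{#1}%
   \end{figure}}

% Usage: \subfig[<fraction of line width>]{<image path>}
% (default 0.24 is a guess matching one of the calls above)
\NewDocumentCommand{\subfig}{O{0.24} m}
  {\begin{subfigure}{#1\linewidth}%
      \includegraphics[width=\linewidth]{#2}%
      \caption{}% prints the (a), (b), ... tag under each panel
   \end{subfigure}\hfill}

With such a definition, the block above would render the six panels in one row with (a)-(f) tags and a single caption enumerating each rendering; a subfigswide variant would presumably use figure* for two-column spans.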

View File

@@ -75,17 +75,15 @@ It can be seen as a filled version of the Contour hand rendering, thus partially
\subsection{Manipulation Tasks and Virtual Scene}
\label{tasks}
\begin{subfigs}{tasks}{%
Experiment \#1. The two manipulation tasks:
}[
\item pushing a virtual cube along a table towards a target placed on the same surface; %
\item grasping and lifting a virtual cube towards a target placed on a 20-cm-higher plane. %
Both pictures show the cube to manipulate in the middle (5-cm-edge and opaque) and the eight possible targets to
reach (7-cm-edge volume and semi-transparent). %
Only one target at a time was shown during the experiments.%
]
\subfig[0.23]{method/task-push}
\subfig[0.23]{method/task-grasp}
\begin{subfigs}{tasks}{The two manipulation tasks of the user study. }[
The cube to manipulate is in the middle of the table (5-cm-edge and opaque) and the eight possible targets to reach are around it (7-cm-edge volume and semi-transparent).
Only one target at a time was shown during the experiments.
][
\item Push task: pushing a virtual cube along a table towards a target placed on the same surface.
\item Grasp task: grasping and lifting a virtual cube towards a target placed on a 20-cm-higher plane.
]
\subfig[0.23]{method/task-push}
\subfig[0.23]{method/task-grasp}
\end{subfigs}
Following the guidelines of \textcite{bergstrom2021how} for designing object manipulation tasks, we considered two variations of a 3D pick-and-place task, commonly found in interaction and manipulation studies \cite{prachyabrued2014visual, maisto2017evaluation, meli2018combining, blaga2017usability, vanveldhuizen2021effect}.

View File

@@ -1,18 +1,18 @@
\subsection{Ranking}
\label{ranks}
\begin{subfigs}{ranks}{%
Experiment \#1. Boxplots of the ranking (lower is better) of each visual hand rendering
%
and pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment:
%
** is \pinf{0.01} and * is \pinf{0.05}.
}
\subfig[0.24]{results/Ranks-Push}
\subfig[0.24]{results/Ranks-Grasp}
\begin{subfigs}{results_ranks}{Boxplots of the ranking for each visual hand rendering. }[
Lower is better.
Pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment: ** is \pinf{0.01} and * is \pinf{0.05}.
][
\item Push task ranking.
\item Grasp task ranking.
]
\subfig[0.24]{results/Ranks-Push}
\subfig[0.24]{results/Ranks-Grasp}
\end{subfigs}
\figref{ranks} shows the ranking of each visual hand rendering for the Push and Grasp tasks.
\figref{results_ranks} shows the ranking of each visual hand rendering for the Push and Grasp tasks.
%
Friedman tests indicated that both rankings had statistically significant differences (\pinf{0.001}).
%
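For reference, the Holm-Bonferroni step-down adjustment cited in these captions sorts the $m$ pairwise p-values as $p_{(1)} \le \dots \le p_{(m)}$ and replaces each with
\[
\tilde{p}_{(i)} = \max_{j \le i} \min\bigl\{1,\, (m - j + 1)\, p_{(j)}\bigr\},
\]
which controls the family-wise error rate without assuming independence between the comparisons.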

View File

@@ -1,21 +1,19 @@
\subsection{Questionnaire}
\label{questions}
\begin{subfigswide}{questions}{%
Experiment \#1. Boxplots of the questionnaire results of each visual hand rendering
%
and pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment: ** is \pinf{0.01} and * is \pinf{0.05}.
%
Lower is better for Difficulty and Fatigue. Higher is better for Precision, Efficiency, and Rating.
}
\subfig[0.19]{results/Question-Difficulty}
\subfig[0.19]{results/Question-Fatigue}
\subfig[0.19]{results/Question-Precision}
\subfig[0.19]{results/Question-Efficiency}
\subfig[0.19]{results/Question-Rating}
\end{subfigswide}
\begin{subfigs}{results_questions}{Boxplots of the questionnaire results for each visual hand rendering. }[
Pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment: ** is \pinf{0.01} and * is \pinf{0.05}.
Lower is better for \textbf{(a)} difficulty and \textbf{(b)} fatigue.
Higher is better for \textbf{(c)} precision, \textbf{(d)} efficiency, and \textbf{(e)} rating.
]
\subfig[0.19]{results/Question-Difficulty}
\subfig[0.19]{results/Question-Fatigue}
\subfig[0.19]{results/Question-Precision}
\subfig[0.19]{results/Question-Efficiency}
\subfig[0.19]{results/Question-Rating}
\end{subfigs}
\figref{questions} presents the questionnaire results for each visual hand rendering.
\figref{results_questions} presents the questionnaire results for each visual hand rendering.
%
Friedman tests indicated that all questions had statistically significant differences (\pinf{0.001}).
%
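The Friedman test used for the rankings and questionnaire items compares $k$ related samples: with $n$ participants each ranking the $k = 6$ renderings and $R_j$ the rank sum of rendering $j$, the statistic is
\[
Q = \frac{12}{n\,k\,(k+1)} \sum_{j=1}^{k} R_j^2 \;-\; 3\,n\,(k+1),
\]
approximately $\chi^2$-distributed with $k - 1$ degrees of freedom under the null hypothesis of no difference between renderings.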

View File

@@ -1,31 +1,33 @@
\section{Results}
\label{results}
\begin{subfigs}{push_results}{%
Experiment \#1: Push task.
%
Geometric means with bootstrap 95~\% confidence interval for each visual hand rendering
%
and Tukey's HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
}
\subfig[0.24]{results/Push-CompletionTime-Hand-Overall-Means}%[Time to complete a trial.]
\subfig[0.24]{results/Push-ContactsCount-Hand-Overall-Means}%[Number of contacts with the cube.]
\hspace*{10mm}
\subfig[0.24]{results/Push-MeanContactTime-Hand-Overall-Means}%[Mean time spent on each contact.]
\begin{subfigs}{push_results}{Results of the push task performance metrics for each visual hand rendering. }[
Geometric means with bootstrap 95~\% confidence interval
and Tukey's HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
][
\item Time to complete a trial.
\item Number of contacts with the cube.
\item Time spent on each contact.
]
\subfig[0.24]{results/Push-CompletionTime-Hand-Overall-Means}
\subfig[0.24]{results/Push-ContactsCount-Hand-Overall-Means}
\subfig[0.24]{results/Push-MeanContactTime-Hand-Overall-Means}
\end{subfigs}
\begin{subfigswide}{grasp_results}{%
Experiment \#1: Grasp task.
%
Geometric means with bootstrap 95~\% confidence interval for each visual hand rendering
%
and Tukey's HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
}
\subfig[0.24]{results/Grasp-CompletionTime-Hand-Overall-Means}%[Time to complete a trial.]
\subfig[0.24]{results/Grasp-ContactsCount-Hand-Overall-Means}%[Number of contacts with the cube.]
\subfig[0.24]{results/Grasp-MeanContactTime-Hand-Overall-Means}%[Mean time spent on each contact.]
\subfig[0.24]{results/Grasp-GripAperture-Hand-Overall-Means}%[\centering Distance between thumb and the other fingertips when grasping.]
\end{subfigswide}
\begin{subfigs}{grasp_results}{Results of the grasp task performance metrics for each visual hand rendering. }[
Geometric means with bootstrap 95~\% confidence interval
and Tukey's HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
][
\item Time to complete a trial.
\item Number of contacts with the cube.
\item Time spent on each contact.
\item Distance between thumb and the other fingertips when grasping.
]
\subfig[0.24]{results/Grasp-CompletionTime-Hand-Overall-Means}
\subfig[0.24]{results/Grasp-ContactsCount-Hand-Overall-Means}
\subfig[0.24]{results/Grasp-MeanContactTime-Hand-Overall-Means}
\subfig[0.24]{results/Grasp-GripAperture-Hand-Overall-Means}
\end{subfigs}
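The captions of both figures report geometric means with bootstrap 95~\% confidence intervals. For positive measures $x_1, \dots, x_n$ the geometric mean is
\[
\mathrm{GM}(x) = \exp\!\Bigl(\frac{1}{n}\sum_{i=1}^{n} \ln x_i\Bigr),
\]
and a percentile bootstrap interval takes the 2.5th and 97.5th percentiles of the geometric means of resamples drawn with replacement; the exact bootstrap variant used in the analysis is not specified in this diff.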
Results of each trial measure were analyzed with a linear mixed model (LMM), with the order of the two manipulation tasks and the six visual hand renderings (Order), the visual hand renderings (Hand), the target volume position (Target), and their interactions as fixed effects, and the Participant as a random intercept.
%
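Assuming the trial measures were log-transformed for the LMM, consistent with the geometric means reported in the figures (an assumption; the analysis code is not part of this diff), the model has the form
\[
\ln y = \beta_0 + \mathrm{Order} + \mathrm{Hand} + \mathrm{Target} + (\text{interactions}) + u_{\mathrm{participant}} + \varepsilon,
\qquad u_{\mathrm{participant}} \sim \mathcal{N}(0, \sigma_u^2),
\]
where $u_{\mathrm{participant}}$ is the per-participant random intercept and $\varepsilon$ the residual error.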