
This first experiment aims to analyze whether the chosen visual hand rendering affects the performance and user experience of manipulating virtual objects with bare hands in AR.
\subsection{Visual Hand Renderings}
\label{hands}
All considered hand renderings are drawn following the tracked pose of the user's hands.
%
However, while the real hand can of course penetrate virtual objects, the visual hand is always constrained by the virtual environment.
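%
As an illustration of this constraint, the following is a minimal sketch (not the implementation used in the study), assuming an axis-aligned virtual box and a simple projection onto its closest face:
\begin{verbatim}
# Hedged sketch: keep a rendered hand joint outside an axis-aligned
# virtual box by projecting it onto the closest face whenever the
# tracked joint penetrates the box. Names and geometry are assumptions.
import numpy as np

def constrain_joint(tracked_pos, box_min, box_max):
    """Return the position used for the *visual* hand joint."""
    p = np.asarray(tracked_pos, dtype=float)
    lo, hi = np.asarray(box_min, float), np.asarray(box_max, float)
    if not (np.all(p > lo) and np.all(p < hi)):
        return p                      # outside: visual = tracked pose
    dist_lo, dist_hi = p - lo, hi - p
    axis = int(np.argmin(np.minimum(dist_lo, dist_hi)))
    q = p.copy()
    q[axis] = lo[axis] if dist_lo[axis] < dist_hi[axis] else hi[axis]
    return q

# A fingertip 1 cm inside the top face of a 10 cm cube is pushed back
# onto that face:
print(constrain_joint([0.05, 0.09, 0.05], [0, 0, 0], [0.1, 0.1, 0.1]))
# -> [0.05 0.1  0.05]
\end{verbatim}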
\subsubsection{None~(\figref{method/hands-none})}
\label{hands_none}
Users have no information about hand tracking and no feedback about contact with the virtual objects.
%
As virtual content is rendered on top of the real environment, the hand of the user can be hidden by the virtual objects when manipulating them (\secref{hands}).
\subsubsection{Occlusion (Occl,~\figref{method/hands-occlusion})}
\label{hands_occlusion}
This rendering avoids the above-mentioned undesired occlusions caused by the virtual content being rendered on top of the real hand.
%
This approach is common in works using VST-AR headsets \cite{knorlein2009influence, ha2014wearhand, piumsomboon2014graspshell, suzuki2014grasping, al-kalbani2016analysis}.
\subsubsection{Tips (\figref{method/hands-tips})}
\label{hands_tips}
This rendering shows small visual rings around the fingertips of the user, highlighting their position.
%
Unlike work using small spheres \cite{maisto2017evaluation, meli2014wearable, grubert2018effects, normand2018enlarging, schwind2018touch}, this ring rendering also provides information about the orientation of the fingertips.
\subsubsection{Contour (Cont,~\figref{method/hands-contour})}
\label{hands_contour}
Unlike the other renderings, it is not occluded by the virtual objects.
%
This rendering is less common in the literature than the previous ones \cite{kang2020comparative}.
\subsubsection{Skeleton (Skel,~\figref{method/hands-skeleton})}
\label{hands_skeleton}
It can be seen as an extension of the Tips rendering to include the complete fingers.
%
It is widely used in VR \cite{argelaguet2016role, schwind2018touch, chessa2019grasping} and AR \cite{blaga2017usability, yoon2020evaluating}, as it is considered simple yet rich and comprehensive.
\subsubsection{Mesh (\figref{method/hands-mesh})}
\label{hands_mesh}
This rendering is a 3D semi-transparent ($a=0.2$) hand model.
%
It can be seen as a filled version of the Contour hand rendering, thus partially covering the view of the real hand.
\subsection{Manipulation Tasks and Virtual Scene}
\label{tasks}
Following the guidelines of \textcite{bergstrom2021how} for designing object manipulation tasks, we considered two variations of a 3D pick-and-place task, commonly found in interaction and manipulation studies \cite{prachyabrued2014visual, maisto2017evaluation, meli2018combining, blaga2017usability, vanveldhuizen2021effect}.
\subsubsection{Push Task}
\label{push-task}
In this task, the cube cannot be lifted.
%
The task is considered completed when the cube is \emph{fully} inside the target volume.
\subsubsection{Grasp Task}
\label{grasp-task}
Users are asked to grasp, lift, and move the cube towards the target volume.
%
As before, the task is considered completed when the cube is \emph{fully} inside the volume.
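%
The completion criterion can be made concrete with a small sketch (illustrative only; the cube and target volume are modelled as axis-aligned boxes, which is an assumption rather than the study's code):
\begin{verbatim}
# Trial completion check: the cube's bounding box must lie entirely
# inside the target volume. Names and box representation are assumed.
import numpy as np

def fully_inside(cube_center, cube_size, target_min, target_max):
    c = np.asarray(cube_center, float)
    half = np.asarray(cube_size, float) / 2.0
    return bool(np.all(c - half >= np.asarray(target_min, float)) and
                np.all(c + half <= np.asarray(target_max, float)))

# A 5 cm cube centred in a 12 cm target volume counts as completed:
print(fully_inside([0, 0, 0], [0.05] * 3, [-0.06] * 3, [0.06] * 3))  # True
\end{verbatim}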
\subsection{Experimental Design}
\label{design}
We analyzed the two tasks separately. For each of them, we considered two independent, within-subject variables:
%
\begin{itemize}
\item \emph{Visual Hand Renderings}, consisting of the six possible renderings discussed in \secref{hands}: None, Occlusion (Occl), Tips, Contour (Cont), Skeleton (Skel), and Mesh.
\item \emph{Target}, consisting of the eight possible locations of the target volume, named after the cardinal points and shown in \figref{tasks}: E, NE, N, NW, W, SW, S, and SE.
\end{itemize}
%
To control learning effects, we counter-balanced the order of the two manipulation tasks.
%
This design led to a total of 2 manipulation tasks \x 6 visual hand renderings \x 8 targets \x 3 repetitions $=$ 288 trials per participant.
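%
The trial count can be checked by enumerating the crossed conditions (labels below are illustrative, not the study's code):
\begin{verbatim}
# Enumerate every trial of the within-subject design and verify the
# total of 288 trials per participant (2 x 6 x 8 x 3).
from itertools import product

TASKS = ["Push", "Grasp"]
RENDERINGS = ["None", "Occl", "Tips", "Cont", "Skel", "Mesh"]
TARGETS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]
REPETITIONS = range(3)

trials = list(product(TASKS, RENDERINGS, TARGETS, REPETITIONS))
print(len(trials))  # 288
\end{verbatim}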
\subsection{Apparatus and Implementation}
\label{apparatus}
The room where the experiment was held had no windows, with a single light source.
%
This setup enabled good and consistent tracking of the user's fingers.
\subsection{Protocol}
\label{protocol}
The protocol was similar to that of previous studies \cite{prachyabrued2014visual, maisto2017evaluation, blaga2017usability}.
%
The experiment took around 1 hour and 20 minutes to complete.
\subsection{Participants}
\label{participants}
Two subjects had significant experience with AR (\enquote{I use it every week}).
%
Participants signed an informed consent form, including a declaration of having no conflict of interest.
\subsection{Collected Data}
\label{metrics}


Three groups of target volumes were identified:
%
and (3) back N and NW targets were the slowest (\p{0.04}).
\subsubsection{Contacts}
\label{push_contacts_count}
This indicates how effective a visual hand rendering is: a lower result indicates a more effective rendering.
%
Targets on the left (W) and the right (E, SW) were easier to reach than the back ones (N, NW, \pinf{0.001}).
\subsubsection{Time per Contact}
\label{push_time_per_contact}


Friedman tests indicated that both rankings had statistically significant differences.
Pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment were then used on both ranking results (\secref{metrics}):
\begin{itemize}
\item \textit{Push Ranking}: Occlusion was ranked lower than Contour (\p{0.005}), Skeleton (\p{0.02}), and Mesh (\p{0.03});
%
Tips was ranked lower than Skeleton (\p{0.02}).
%
The high ranking of the Skeleton rendering for the Push task is consistent with the Push trial results.
\item \textit{Grasp Ranking}: Occlusion was ranked lower than Contour (\p{0.001}), Skeleton (\p{0.001}), and Mesh (\p{0.007});
%
No Hand was ranked lower than Skeleton (\p{0.04}).
%
A complete visual hand rendering seemed to be preferred over no visual hand rendering when grasping.
\end{itemize}
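The analysis pipeline named above can be sketched as follows (synthetic data and variable names are placeholders; this is not the authors' analysis script):
\begin{verbatim}
# Friedman test across the six renderings, then pairwise Wilcoxon
# signed-rank tests with a Holm-Bonferroni step-down adjustment.
from itertools import combinations
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)
renderings = ["None", "Occl", "Tips", "Cont", "Skel", "Mesh"]
# ranks[i, j]: rank given by participant i to rendering j (fake data)
ranks = np.argsort(rng.random((20, len(renderings))), axis=1) + 1

stat, p = friedmanchisquare(*[ranks[:, j] for j in range(6)])
print(f"Friedman: chi2 = {stat:.2f}, p = {p:.3f}")

pairs = list(combinations(range(6), 2))
p_raw = [wilcoxon(ranks[:, a], ranks[:, b]).pvalue for a, b in pairs]

# Holm-Bonferroni step-down adjustment of the raw p-values
order = np.argsort(p_raw)
m = len(p_raw)
p_adj = np.empty(m)
running = 0.0
for k, idx in enumerate(order):
    running = max(running, (m - k) * p_raw[idx])
    p_adj[idx] = min(1.0, running)

for (a, b), p_val in zip(pairs, p_adj):
    print(f"{renderings[a]} vs {renderings[b]}: adj. p = {p_val:.3f}")
\end{verbatim}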


Friedman tests indicated that all questions had statistically significant differences (\pinf{0.001}).
%
Pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment were then applied to each question's results (\secref{metrics}):
\begin{itemize}
\item \textit{Difficulty}: Occlusion was considered more difficult than Contour (\p{0.02}), Skeleton (\p{0.01}), and Mesh (\p{0.03}).
\item \textit{Fatigue}: None was found more fatiguing than Mesh (\p{0.04}), and Occlusion more fatiguing than Skeleton (\p{0.02}) and Mesh (\p{0.02}).
\item \textit{Precision}: None was considered less precise than Skeleton (\p{0.02}) and Mesh (\p{0.02}), and Occlusion less precise than Contour (\p{0.02}), Skeleton (\p{0.006}), and Mesh (\p{0.02}).
\item \textit{Efficiency}: Occlusion was found less efficient than Contour (\p{0.01}), Skeleton (\p{0.02}), and Mesh (\p{0.02}).
\item \textit{Rating}: Occlusion was rated lower than Contour (\p{0.02}) and Skeleton (\p{0.03}).
\end{itemize}
In summary, Occlusion was worse than Skeleton for all questions, and worse than Contour and Mesh on 5 of the 6 questions.
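For reference, the Holm-Bonferroni adjustment used for all pairwise comparisons above follows the standard step-down rule (recalled here; it is not specific to this study): with $m$ pairwise comparisons and raw p-values ordered as $p_{(1)} \le p_{(2)} \le \dots \le p_{(m)}$, the hypotheses are rejected in that order as long as
\[
p_{(i)} \le \frac{\alpha}{m - i + 1},
\]
stopping at the first $i$ for which the inequality fails; equivalently, the adjusted p-values are $\tilde{p}_{(i)} = \max_{j \le i} \min\{1, (m - j + 1)\, p_{(j)}\}$.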


\input{3-3-ranks}
\input{3-4-questions}
\input{4-discussion}
\input{5-conclusion}