WIP visual-hand chapter

This commit is contained in:
2024-06-27 18:11:11 +02:00
parent d189781bb9
commit a9c298b434
39 changed files with 41 additions and 32 deletions

@@ -1,5 +1,5 @@
\section{Introduction}
-\label{1_introduction}
+\label{introduction}
\begin{subfigswide}{hands}{%
Experiment \#1. The six considered visual hand renderings, as seen by the user through the AR headset
@@ -13,15 +13,15 @@
fingers' joints and phalanges \emph{(Skeleton, Skel)}, and %
semi-transparent 3D hand model \emph{(Mesh)}.
}
-\subfig[0.15]{3-hands-none}[None]
-\subfig[0.15]{3-hands-occlusion}[Occlusion (Occl)]
-\subfig[0.15]{3-hands-tips}[Tips]
-\subfig[0.15]{3-hands-contour}[Contour (Cont)]
-\subfig[0.15]{3-hands-skeleton}[Skeleton (Skel)]
-\subfig[0.15]{3-hands-mesh}[Mesh]
+\subfig[0.15]{method/hands-none}[None]
+\subfig[0.15]{method/hands-occlusion}[Occlusion (Occl)]
+\subfig[0.15]{method/hands-tips}[Tips]
+\subfig[0.15]{method/hands-contour}[Contour (Cont)]
+\subfig[0.15]{method/hands-skeleton}[Skeleton (Skel)]
+\subfig[0.15]{method/hands-mesh}[Mesh]
\end{subfigswide}
-\noindent \IEEEPARstart{A}{ugmented} reality (AR) integrates virtual content into our real-world surroundings, giving the illusion of one unique environment and promising natural and seamless interactions with real and virtual objects.
+Augmented reality (AR) integrates virtual content into our real-world surroundings, giving the illusion of one unique environment and promising natural and seamless interactions with real and virtual objects.
%
Virtual object manipulation is particularly critical for useful and effective AR usage, such as in medical applications, training, or entertainment~\cite{laviolajr20173d, kim2018revisiting}.
%

@@ -1,11 +1,11 @@
\section{Experiment \#1: Visual Rendering of the Hand in AR}
-\label{3_method}
+\label{method}
\noindent This first experiment aims to analyze whether the chosen visual hand rendering affects the performance and user experience of manipulating virtual objects with bare hands in AR.
\subsection{Visual Hand Renderings}
-\label{3_hands}
+\label{hands}
We compared a set of the most popular visual hand renderings, as also presented in \secref{2_hands}.
%
@@ -19,7 +19,7 @@ However, while the real hand can of course penetrate virtual objects, the visual
\subsubsection{None~(\figref{hands-none})}
-\label{3_hands_none}
+\label{hands_none}
As a reference, we considered no visual hand rendering, as is common in AR~\cite{hettiarachchi2016annexing, blaga2017usability, xiao2018mrtouch, teng2021touch}.
%
@@ -29,7 +29,7 @@ As virtual content is rendered on top of the real environment, the hand of the u
\subsubsection{Occlusion (Occl,~\figref{hands-occlusion})}
-\label{3_hands_occlusion}
+\label{hands_occlusion}
To avoid the aforementioned undesired occlusions due to the virtual content being rendered on top of the real environment, we can carefully crop the former whenever it hides real content that should be visible~\cite{macedo2023occlusion}, \eg the thumb of the user in \figref{hands-occlusion}.
%
@@ -37,7 +37,7 @@ This approach is frequent in works using VST-AR headsets~\cite{knorlein2009influ
\subsubsection{Tips (\figref{hands-tips})}
-\label{3_hands_tips}
+\label{hands_tips}
This rendering shows small visual rings around the fingertips of the user, highlighting the most important parts of the hand and contact with virtual objects during fine manipulation.
%
@@ -45,7 +45,7 @@ Unlike work using small spheres~\cite{maisto2017evaluation, meli2014wearable, gr
\subsubsection{Contour (Cont,~\figref{hands-contour})}
-\label{3_hands_contour}
+\label{hands_contour}
This rendering is a {1-mm-thick} outline contouring the user's hands, providing information about the whole hand while leaving its inside visible.
%
@@ -55,7 +55,7 @@ This rendering is not as usual as the previous others in the literature~\cite{ka
\subsubsection{Skeleton (Skel,~\figref{hands-skeleton})}
-\label{3_hands_skeleton}
+\label{hands_skeleton}
This rendering schematically depicts the joints and phalanges of the fingers with small spheres and cylinders, respectively, leaving the outside of the hand visible.
%
@@ -65,7 +65,7 @@ It is widely used in VR~\cite{argelaguet2016role, schwind2018touch, chessa2019gr
\subsubsection{Mesh (\figref{hands-mesh})}
-\label{3_hands_mesh}
+\label{hands_mesh}
This rendering is a 3D semi-transparent ($\alpha=0.2$) hand model, which is common in VR~\cite{prachyabrued2014visual, argelaguet2016role, schwind2018touch, chessa2019grasping, yoon2020evaluating, vanveldhuizen2021effect}.
%
@@ -73,9 +73,9 @@ It can be seen as a filled version of the Contour hand rendering, thus partially
\subsection{Manipulation Tasks and Virtual Scene}
-\label{3_tasks}
+\label{tasks}
-\begin{subfigs}{3_tasks}{%
+\begin{subfigs}{tasks}{%
Experiment \#1. The two manipulation tasks: %
(a) pushing a virtual cube along a table towards a target placed on the same surface; %
(b) grasping and lifting a virtual cube towards a target placed on a 20-cm-higher plane. %
@@ -83,8 +83,8 @@ It can be seen as a filled version of the Contour hand rendering, thus partially
reach (7-cm-edge volume and semi-transparent). %
Only one target at a time was shown during the experiments.%
}
-\subfig[0.23]{3-task-push}[Push task]
-\subfig[0.23]{3-task-grasp}[Grasp task]
+\subfig[0.23]{method/task-push}[Push task]
+\subfig[0.23]{method/task-grasp}[Grasp task]
\end{subfigs}
Following the guidelines of \textcite{bergstrom2021how} for designing object manipulation tasks, we considered two variations of a 3D pick-and-place task, commonly found in interaction and manipulation studies~\cite{prachyabrued2014visual, maisto2017evaluation, meli2018combining, blaga2017usability, vanveldhuizen2021effect}.
@@ -93,13 +93,13 @@ Following the guidelines of \textcite{bergstrom2021how} for designing object man
\subsubsection{Push Task}
\label{push-task}
-The first manipulation task consists in pushing a virtual object along a real flat surface towards a target placed on the same plane (see \figref{3-task-push}).
+The first manipulation task consists in pushing a virtual object along a real flat surface towards a target placed on the same plane (see \figref{method/task-push}).
%
The virtual object to manipulate is a small \qty{50}{\mm} opaque blue cube, while the target is a (slightly) bigger \qty{70}{\mm} semi-transparent blue volume.
%
At every repetition of the task, the cube to manipulate always spawns at the same place, on top of a real table in front of the user.
%
-On the other hand, the target volume can spawn in eight different locations on the same table, located on a \qty{20}{\cm} radius circle centered on the cube, at \qty{45}{\degree} from each other (see again \figref{3-task-push}).
+On the other hand, the target volume can spawn in eight different locations on the same table, located on a \qty{20}{\cm} radius circle centered on the cube, at \qty{45}{\degree} from each other (see again \figref{method/task-push}).
%
Users are asked to push the cube towards the target volume using their fingertips in any way they prefer.
%
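The eight target locations described above follow directly from the stated geometry. As a minimal sketch (not part of the chapter), assuming a 2D table-plane frame centered on the cube, with East along +x and angles increasing counter-clockwise:

```python
import math

def target_positions(radius_cm=20.0, count=8):
    # Evenly spaced positions on a circle centered on the cube
    # (45 degrees apart for count=8), in table-plane coordinates (cm).
    positions = []
    for i in range(count):
        angle = 2 * math.pi * i / count  # 0 rad = East (+x)
        positions.append((radius_cm * math.cos(angle),
                          radius_cm * math.sin(angle)))
    return positions

# Named after the cardinal points, starting East and going counter-clockwise.
targets = dict(zip(["E", "NE", "N", "NW", "W", "SW", "S", "SE"],
                   target_positions()))
```

The function name and the frame convention are illustrative assumptions; only the radius, count, and angular spacing come from the text.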
@@ -111,7 +111,7 @@ The task is considered completed when the cube is \emph{fully} inside the target
\subsubsection{Grasp Task}
\label{grasp-task}
-The second manipulation task consists in grasping, lifting, and placing a virtual object in a target placed on a different (higher) plane (see \figref{3-task-grasp}).
+The second manipulation task consists in grasping, lifting, and placing a virtual object in a target placed on a different (higher) plane (see \figref{method/task-grasp}).
%
The cube to manipulate and the target volume are the same as in the previous task. However, this time, the target volume can spawn in eight different locations on a plane \qty{10}{\cm} \emph{above} the table, still located on a \qty{20}{\cm} radius circle at \qty{45}{\degree} from each other.
%
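The completion criterion used by both tasks, the cube being \emph{fully} inside the slightly bigger target volume, amounts to a containment test. A hypothetical helper, assuming both volumes stay axis-aligned (the actual implementation may handle rotation differently):

```python
def fully_inside(cube_center, target_center,
                 cube_edge=0.05, target_edge=0.07):
    # True if an axis-aligned cube of edge cube_edge (m) lies entirely
    # inside an axis-aligned target volume of edge target_edge (m):
    # the centers may differ by at most half the edge difference per axis.
    slack = (target_edge - cube_edge) / 2  # here 1 cm of play per axis
    return all(abs(c - t) <= slack
               for c, t in zip(cube_center, target_center))
```

Only the 50 mm and 70 mm edge lengths come from the chapter; the axis-aligned assumption and the function itself are illustrative.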
@@ -121,13 +121,13 @@ As before, the task is considered completed when the cube is \emph{fully} inside
\subsection{Experimental Design}
-\label{3_design}
+\label{design}
We analyzed the two tasks separately. For each of them, we considered two independent within-subject variables:
%
\begin{itemize}
-\item \emph{Visual Hand Renderings}, consisting of the six possible renderings discussed in \secref{3_hands}: None, Occlusion (Occl), Tips, Contour (Cont), Skeleton (Skel), and Mesh.
-\item \emph{Target}, consisting of the eight possible {location} of the target volume, named as the cardinal points and as shown in \figref{3_tasks}: {E, NE, N, NW, W, SW, S, and SE}.
+\item \emph{Visual Hand Renderings}, consisting of the six possible renderings discussed in \secref{hands}: None, Occlusion (Occl), Tips, Contour (Cont), Skeleton (Skel), and Mesh.
+\item \emph{Target}, consisting of the eight possible locations of the target volume, named after the cardinal points and as shown in \figref{tasks}: {E, NE, N, NW, W, SW, S, and SE}.
\end{itemize}
%
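Crossing these within-subject variables with the two tasks yields the full factorial condition set. A quick sketch, not from the chapter, that enumerates it (the number of repetitions per condition is omitted here, as it falls outside the shown hunk):

```python
from itertools import product

TASKS = ["Push", "Grasp"]
RENDERINGS = ["None", "Occl", "Tips", "Cont", "Skel", "Mesh"]
TARGETS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

# Full crossing: 2 tasks x 6 renderings x 8 targets = 96 unique conditions.
conditions = list(product(TASKS, RENDERINGS, TARGETS))
```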
@@ -139,7 +139,7 @@ This design led to a total of 2 manipulation tasks \x 6 visual hand renderings \
\subsection{Apparatus and Implementation}
-\label{3_apparatus}
+\label{apparatus}
We used the OST-AR headset HoloLens~2.
%
@@ -173,11 +173,11 @@ This setup enabled a good and consistent tracking of the user's fingers.
\subsection{Protocol}
-\label{3_protocol}
+\label{protocol}
First, participants were given a consent form that briefed them about the tasks and the protocol of the experiment.
%
-Then, participants were asked to comfortably sit in front of a table and wear the HoloLens~2 headset as shown in \figref{3_tasks}, perform the calibration of the visual hand size as described in \secref{3_apparatus}, and complete a 2-minutes training to familiarize with the AR rendering and the two considered tasks.
+Then, participants were asked to sit comfortably in front of a table and wear the HoloLens~2 headset as shown in~\figref{tasks}, perform the calibration of the visual hand size as described in~\secref{apparatus}, and complete a 2-minute training session to familiarize themselves with the AR rendering and the two considered tasks.
%
During this training, we did not use any of the six hand renderings we wanted to test, but rather a fully opaque white hand rendering that completely occluded the real hand of the user.
@@ -189,7 +189,7 @@ The experiment took around 1 hour and 20 minutes to complete.
\subsection{Participants}
-\label{3_participants}
+\label{participants}
Twenty-four subjects participated in the study (eight aged between 18 and 24, fourteen aged between 25 and 34, and two aged between 35 and 44; 22~males, 1~female, 1~preferred not to say).
%
@@ -205,7 +205,7 @@ Participants signed an informed consent, including the declaration of having no
\subsection{Collected Data}
-\label{3_metrics}
+\label{metrics}
Inspired by \textcite{laviolajr20173d}, we collected the following metrics during the experiment.
%

Binary files not shown (8 new images: 455 KiB, 400 KiB, 426 KiB, 406 KiB, 427 KiB, 423 KiB, 1.3 MiB, 1.2 MiB)

@@ -2,3 +2,12 @@
\mainlabel{visual-hand}
\chaptertoc
\input{1-introduction}
\input{2-method}
\input{3-results}
\input{3-1-push}
\input{3-2-grasp}
\input{3-3-ranks}
\input{3-4-questions}
\input{4-discussion}

@@ -1143,7 +1143,7 @@
doi = {10/gh4tbn}
}
-@article{lincoln2017low,
+@thesis{lincoln2017low,
title = {Low {{Latency Displays}} for {{Augmented Reality}}},
author = {Lincoln, Peter},
date = {2017}