Remove \VO and \AE acronyms
@@ -5,15 +5,15 @@ We compared a set of the most popular visual hand renderings, as found in the li
Since we address hand-centered manipulation tasks, we only considered renderings including the fingertips (\secref[related_work]{grasp_types}).
Moreover, so as to keep the focus on the hand rendering itself, we used neutral semi-transparent grey meshes, consistent with the choices made in \cite{yoon2020evaluating,vanveldhuizen2021effect}.
All considered hand renderings are drawn following the tracked pose of the user's real hand.
-However, while the real hand can of course penetrate \VOs, the visual hand is always constrained by the \VE (\secref[related_work]{ar_virtual_hands}).
+However, while the real hand can of course penetrate virtual objects, the visual hand is always constrained by the \VE (\secref[related_work]{ar_virtual_hands}).
They are shown in \figref{hands} and described below, with an abbreviation in parentheses when needed.
\paragraph{None}
As a reference, we considered no visual hand rendering (\figref{method/hands-none}), as is common in \AR \cite{hettiarachchi2016annexing,blaga2017usability,xiao2018mrtouch,teng2021touch}.
-Users have no information about hand tracking and no feedback about contact with the \VOs, other than their movement when touched.
-As virtual content is rendered on top of the \RE, the hand of the user can be hidden by the \VOs when manipulating them (\secref[related_work]{ar_displays}).
+Users have no information about hand tracking and no feedback about contact with the virtual objects, other than their movement when touched.
+As virtual content is rendered on top of the \RE, the hand of the user can be hidden by the virtual objects when manipulating them (\secref[related_work]{ar_displays}).
\paragraph{Occlusion (Occl)}
@@ -22,13 +22,13 @@ This approach is frequent in works using \VST-\AR headsets \cite{knorlein2009inf
\paragraph{Tips}
-This rendering shows small visual rings around the fingertips of the user (\figref{method/hands-tips}), highlighting the most important parts of the hand and contact with \VOs during fine manipulation (\secref[related_work]{grasp_types}).
+This rendering shows small visual rings around the fingertips of the user (\figref{method/hands-tips}), highlighting the most important parts of the hand and contact with virtual objects during fine manipulation (\secref[related_work]{grasp_types}).
Unlike work using small spheres \cite{maisto2017evaluation,meli2014wearable,grubert2018effects,normand2018enlarging,schwind2018touch}, this ring rendering also provides information about the orientation of the fingertips.
\paragraph{Contour (Cont)}
This rendering is a \qty{1}{\mm} thick outline contouring the user's hands, providing information about the whole hand while leaving its inside visible.
-Unlike the other renderings, it is not occluded by the \VOs, as shown in \figref{method/hands-contour}.
+Unlike the other renderings, it is not occluded by the virtual objects, as shown in \figref{method/hands-contour}.
This rendering is less common in the literature than the previous ones \cite{kang2020comparative}.
\paragraph{Skeleton (Skel)}
@@ -45,7 +45,7 @@ It can be seen as a filled version of the Contour hand rendering, thus partially
\section{User Study}
\label{method}
-We aim to investigate whether the chosen visual hand rendering affects the performance and user experience of manipulating \VOs with free hands in \AR.
+We aim to investigate whether the chosen visual hand rendering affects the performance and user experience of manipulating virtual objects with free hands in \AR.
\subsection{Manipulation Tasks and Virtual Scene}
\label{tasks}
@@ -55,8 +55,8 @@ Following the guidelines of \textcite{bergstrom2021how} for designing object man
\subsubsection{Push Task}
\label{push-task}
-The first manipulation task consists in pushing a \VO along a real flat surface towards a target placed on the same plane (\figref{method/task-push}).
-The \VO to manipulate is a small \qty{50}{\mm} blue and opaque cube, while the target is a (slightly) bigger \qty{70}{\mm} blue and semi-transparent volume.
+The first manipulation task consists in pushing a virtual object along a real flat surface towards a target placed on the same plane (\figref{method/task-push}).
+The virtual object to manipulate is a small \qty{50}{\mm} blue and opaque cube, while the target is a (slightly) bigger \qty{70}{\mm} blue and semi-transparent volume.
At every repetition of the task, the cube to manipulate always spawns at the same place, on top of a real table in front of the user.
On the other hand, the target volume can spawn in eight different locations on the same table, located on a \qty{20}{\cm} radius circle centred on the cube, at \qty{45}{\degree} from each other (again \figref{method/task-push}).
Users are asked to push the cube towards the target volume using their fingertips in any way they prefer.
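For illustration, the eight candidate target locations can be generated from the cube's fixed spawn point as points on a \qty{20}{\cm} radius circle at \qty{45}{\degree} increments. The minimal Python sketch below assumes a coordinate frame with the cube spawn as the origin and the y axis pointing up, which the text does not specify; the optional height offset corresponds to the Grasp task described further down.
\begin{verbatim}
import math

# Sketch: eight candidate target positions on a 20 cm radius circle
# centred on the cube's fixed spawn point, 45 degrees apart.
# The coordinate frame (x/z on the table plane, y up) and the origin
# at the cube spawn are illustrative assumptions.
def target_positions(radius=0.20, height=0.0):
    positions = []
    for i in range(8):                        # 8 locations, 45 deg apart
        angle = math.radians(45 * i)
        positions.append((radius * math.cos(angle),
                          height,             # 0.10 for the Grasp task
                          radius * math.sin(angle)))
    return positions
\end{verbatim}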
@@ -66,7 +66,7 @@ The task is considered completed when the cube is \emph{fully} inside the target
\subsubsection{Grasp Task}
\label{grasp-task}
-The second manipulation task consists in grasping, lifting, and placing a \VO in a target placed on a different (higher) plane (\figref{method/task-grasp}).
+The second manipulation task consists in grasping, lifting, and placing a virtual object in a target placed on a different (higher) plane (\figref{method/task-grasp}).
The cube to manipulate and target volume are the same as in the previous task.
However, this time, the target volume can spawn in eight different locations on a plane \qty{10}{\cm} \emph{above} the table, still located on a \qty{20}{\cm} radius circle at \qty{45}{\degree} from each other.
Users are asked to grasp, lift, and move the cube towards the target volume using their fingertips in any way they prefer.
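The completion criterion stated for the Push task (the cube \emph{fully} inside the target volume, presumably applied here as well) can be illustrated with the following sketch. It treats both the \qty{50}{\mm} cube and the \qty{70}{\mm} target as axis-aligned boxes given by their centre positions and ignores the cube's rotation; these simplifications are assumptions of the sketch, not the reported implementation.
\begin{verbatim}
# Sketch: "cube fully inside the target volume" test, assuming both the
# 50 mm cube and the 70 mm target are axis-aligned boxes described by
# their centre positions (cube rotation is ignored in this sketch).
CUBE_SIZE = 0.050    # m
TARGET_SIZE = 0.070  # m

def fully_inside(cube_centre, target_centre,
                 cube_size=CUBE_SIZE, target_size=TARGET_SIZE):
    half_gap = (target_size - cube_size) / 2.0
    # Contained iff the cube centre stays within half_gap of the
    # target centre along every axis.
    return all(abs(c - t) <= half_gap
               for c, t in zip(cube_centre, target_centre))
\end{verbatim}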
@@ -111,7 +111,7 @@ The compiled application ran directly on the HoloLens~2 at \qty{60}{FPS}.
The default \ThreeD hand model from MRTK was used for all visual hand renderings.
By changing the material properties of this hand model, we were able to achieve the six renderings shown in \figref{hands}.
A calibration was performed for every participant, to best adapt the size of the visual hand rendering to their real hand.
-A set of empirical tests enabled us to choose the best rendering characteristics in terms of transparency and brightness for the \VOs and hand renderings, which were applied throughout the experiment.
+A set of empirical tests enabled us to choose the best rendering characteristics in terms of transparency and brightness for the virtual objects and hand renderings, which were applied throughout the experiment.
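The per-participant calibration mentioned above is not detailed further; as a minimal sketch, it could amount to a single uniform scale factor relating one tracked hand measurement to the corresponding length of the default model, as below. The choice of measurement and the default model length are assumptions made for illustration only.
\begin{verbatim}
# Sketch: per-participant hand-size calibration as a uniform scale
# factor.  The measurement used (palm-to-middle-fingertip length) and
# the default model length are illustrative assumptions.
DEFAULT_MODEL_HAND_LENGTH = 0.18  # m, assumed for the default hand model

def calibrate_scale(tracked_hand_length,
                    model_hand_length=DEFAULT_MODEL_HAND_LENGTH):
    """Uniform scale applied to the visual hand rendering."""
    return tracked_hand_length / model_hand_length
\end{verbatim}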
The hand tracking information provided by MRTK was used to construct a virtual articulated physics-enabled hand (\secref[related_work]{ar_virtual_hands}) using PhysX.
It featured 25 DoFs, including those of the fingers' proximal, middle, and distal phalanges.
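To make the mechanism concrete, the sketch below illustrates the general idea behind such a physics-enabled hand: each degree of freedom is driven towards the pose reported by hand tracking, while the physics engine (PhysX in this work) resolves contacts so the visual hand cannot penetrate virtual objects even though the real hand can. The class and the simple proportional drive are illustrative stand-ins, not the MRTK or PhysX API actually used.
\begin{verbatim}
# Sketch of a tracking-driven, physics-constrained articulated hand.
# DrivenJoint and the proportional drive are illustrative stand-ins,
# not the PhysX or MRTK API.
from dataclasses import dataclass

@dataclass
class DrivenJoint:
    angle: float = 0.0        # current joint angle (rad)
    target: float = 0.0       # angle reported by hand tracking (rad)
    stiffness: float = 30.0   # drive gain towards the target

    def step(self, dt, blocked=False):
        # Move towards the tracked target unless a contact blocks the
        # joint; a real engine resolves this through its contact solver.
        if not blocked:
            self.angle += self.stiffness * (self.target - self.angle) * dt

# One entry per degree of freedom (25 in the hand described above).
hand_dofs = {f"dof_{i}": DrivenJoint() for i in range(25)}

def update_hand(tracked_angles, dt=1 / 60):
    for name, joint in hand_dofs.items():
        joint.target = tracked_angles.get(name, joint.target)
        joint.step(dt)
\end{verbatim}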