Fix in acronyms
@@ -115,8 +115,8 @@ Each finger is formed by a chain of 3 phalanges, proximal, middle and distal, ex
 The joints at the base of each phalanx allow flexion and extension, \ie folding and unfolding movements relative to the preceding bone.
 The proximal phalanges can also adduct and abduct, \ie move the fingers towards and away from each other.
 Finally, the metacarpal of the thumb is capable of flexion/extension and adduction/abduction, which allows the thumb to oppose the other fingers.
-These axes of movement are called DoFs and can be represented by a \emph{kinematic model} of the hand with 27 DoFs as shown in \figref{blausen2014medical_hand}.
-Thus, the thumb has 5 DoFs, each of the other four fingers has 4 DoFs and the wrist has 6 DoFs and can take any position (3 DoFs) or orientation (3 DoFs) in space \cite{erol2007visionbased}.
+These axes of movement are called \DoFs and can be represented by a \emph{kinematic model} of the hand with 27 \DoFs as shown in \figref{blausen2014medical_hand}.
+Thus, the thumb has 5 \DoFs, each of the other four fingers has 4 \DoFs and the wrist has 6 \DoFs and can take any position (3 \DoFs) or orientation (3 \DoFs) in space \cite{erol2007visionbased}.
 
 This complex structure enables the hand to perform a wide range of movements and gestures. However, the way we explore and grasp objects follows simpler patterns, depending on the object being touched and the aim of the interaction.
 
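As a side note on the hunk above: the 27-\DoF total follows directly from the per-segment counts the text gives (thumb 5, each of the four other fingers 4, wrist 3 position + 3 orientation). A minimal sanity check, in illustrative Python that is not part of the thesis:

```python
# Per-segment degrees of freedom of the hand kinematic model
# described above (thumb 5, four fingers 4 each, wrist 3 + 3).
THUMB_DOFS = 5
FINGER_DOFS = 4          # index, middle, ring, little
WRIST_DOFS = 3 + 3       # position + orientation in space

def hand_dofs(n_fingers: int = 4) -> int:
    """Total DoFs of the articulated hand model."""
    return THUMB_DOFS + n_fingers * FINGER_DOFS + WRIST_DOFS

print(hand_dofs())  # 5 + 4*4 + 6 = 27
```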
@@ -251,7 +251,7 @@ Initially tracked by active sensing devices such as gloves or controllers, it is
 Our hands allow us to manipulate real everyday objects (\secref{grasp_types}), hence virtual hand interaction techniques seem to be the most natural way to manipulate virtual objects \cite[p.400]{laviolajr20173d}.
 
 The user's hand being tracked is reconstructed as a \emph{virtual hand} model in the \VE \cite[p.405]{laviolajr20173d}.
-The simplest models represent the hand as a rigid \ThreeD object that follows the movements of the real hand with \qty{6}{DoF} (position and orientation in space) \cite{talvas2012novel}.
+The simplest models represent the hand as a rigid \ThreeD object that follows the movements of the real hand with 6 \DoF (position and orientation in space) \cite{talvas2012novel}.
 An alternative is to model only the fingertips (\figref{lee2007handy}) or the whole hand (\figref{hilliges2012holodesk_1}) as points.
 The most common technique is to reconstruct all the phalanges of the hand in an articulated kinematic model (\secref{hand_anatomy}) \cite{borst2006spring}.
 
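The rigid 6-\DoF hand model mentioned in this hunk amounts to applying one rotation-plus-translation to the whole hand. A hedged sketch in illustrative Python (the function names and the ZYX Euler convention are assumptions, not taken from the thesis or the cited work):

```python
import math

def euler_to_matrix(roll, pitch, yaw):
    """3x3 rotation matrix from ZYX Euler angles (yaw * pitch * roll)."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def apply_pose(position, euler, point):
    """Rigidly transform `point` by the 6-DoF pose (position, euler),
    i.e. rotate then translate, as a rigid hand model would do."""
    R = euler_to_matrix(*euler)
    return tuple(
        sum(R[i][j] * point[j] for j in range(3)) + position[i]
        for i in range(3)
    )
```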
@@ -296,7 +296,7 @@ A visual hand feedback while in \VE also seems to affect how one grasps an objec
 
 Conversely, a user sees their own hands in \AR, and the mutual occlusion between the hands and the virtual objects is a common issue (\secref{ar_displays}), \ie hiding the virtual object when the real hand is in front of it, and hiding the real hand when it is behind the virtual object (\figref{hilliges2012holodesk_2}).
 %For example, in \figref{hilliges2012holodesk_2}, the user is pinching a virtual cube in \OST-\AR with their thumb and index fingers, but while the index is behind the cube, it is seen as in front of it.
-While in \VST-\AR, this could be solved as a masking problem by combining the real and virtual images \cite{battisti2018seamless}, \eg in \figref{suzuki2014grasping}, in \OST-\AR, this is much more difficult because the \VE is displayed as a transparent \TwoD image on top of the \ThreeD \RE, which cannot be easily masked \cite{macedo2023occlusion}.
+While in \VST-\AR, this could be solved as a masking problem by combining the real and virtual images \cite{battisti2018seamless}, \eg in \figref{suzuki2014grasping}, in \OST-\AR, this is much more difficult because the \VE is displayed as a transparent 2D image on top of the \ThreeD \RE, which cannot be easily masked \cite{macedo2023occlusion}.
 %Yet, even in \VST-\AR,
 
 %An alternative is to render the virtual objects and the virtual hand semi-transparents, so that they are partially visible even when one is occluding the other (\figref{buchmann2005interaction}).
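The masking approach for \VST-\AR described in this hunk can be sketched as a per-pixel depth test between the real hand and the virtual content. A toy illustration in Python (the pixel/depth representation is invented for clarity and is not the cited method's actual pipeline):

```python
def composite(real, virtual, hand_depth, virtual_depth):
    """Per-pixel mutual occlusion for video see-through AR (toy sketch):
    show the virtual pixel only where no real hand is closer to the camera.
    `virtual[i]` / `hand_depth[i]` are None where there is no virtual
    content / no detected hand at pixel i."""
    out = []
    for r, v, hd, vd in zip(real, virtual, hand_depth, virtual_depth):
        if v is None:                      # no virtual content here
            out.append(r)
        elif hd is not None and hd < vd:   # real hand occludes the virtual object
            out.append(r)
        else:                              # virtual object occludes the real scene
            out.append(v)
    return out
```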
@@ -157,7 +157,7 @@ Yet, they differ greatly in the actuators used (\secref{wearable_haptic_devices}
 
 Other wearable haptic actuators have been proposed for \AR, but are not discussed here.
 A first reason is that they permanently cover the fingertip and affect the interaction with the \RE, such as thin-skin tactile interfaces \cite{withana2018tacttoo,teng2024haptic} or fluid-based interfaces \cite{han2018hydroring}.
-Another category of actuators relies on systems that cannot be considered as portable, such as REVEL \cite{bau2012revel}, which provide friction sensations with reverse electrovibration that must modify the real objects to augment, or Electrical Muscle Stimulation (EMS) devices \cite{lopes2018adding}, which provide kinesthetic feedback by contracting the muscles.
+Another category of actuators relies on systems that cannot be considered as portable, such as REVEL \cite{bau2012revel}, which provide friction sensations with reverse electrovibration that must modify the real objects to augment, or electrical muscle stimulation (EMS) devices \cite{lopes2018adding}, which provide kinesthetic feedback by contracting the muscles.
 
 \subsubsection{Nail-Mounted Devices}
 \label{vhar_nails}
@@ -27,7 +27,7 @@ In order not to influence the perception, as vision is an important source of in
 \label{apparatus}
 
 An experimental environment was created to ensure a similar visual rendering in \AR and \VR (\figref{renderings}).
-It consisted of a \qtyproduct{300 x 210 x 400}{\mm} medium-density fibreboard (MDF) box with a paper sheet glued inside and a \qtyproduct{50 x 15}{\mm} rectangle printed on the sheet to delimit the area where the tactile textures were rendered.
+It consisted of a \qtyproduct{300 x 210 x 400}{\mm} medium-density fibreboard box with a paper sheet glued inside and a \qtyproduct{50 x 15}{\mm} rectangle printed on the sheet to delimit the area where the tactile textures were rendered.
 A single light source of \qty{800}{\lumen} placed \qty{70}{\cm} above the table fully illuminated the inside of the box.
 Participants rated the roughness of the paper (without any texture augmentation) before the experiment on a 7-point Likert scale (1~=~Extremely smooth, 7~=~Extremely rough) as quite smooth (\mean{2.5}, \sd{1.3}).
 
@@ -51,7 +51,7 @@ They also wore headphones with a brown noise masking the sound of the voice-coil
 The user study was held in a quiet room with no windows.
 
 \begin{subfigs}{setup}{Visuo-haptic textures rendering setup. }[][
-\item HoloLens~2 \OST-\AR headset, the two cardboard masks to switch the real or virtual environments with the same field of view, and the \ThreeD-printed piece for attaching the masks to the headset.
+\item HoloLens~2 \OST-\AR headset, the two cardboard masks to switch the real or virtual environments with the same \FoV, and the \ThreeD-printed piece for attaching the masks to the headset.
 \item User exploring a virtual vibrotactile texture on a real sheet of paper.
 ]
 \subfigsheight{48.5mm}
@@ -38,7 +38,7 @@ All pairwise differences were statistically significant.
 \label{response_time}
 
 A \LMM \ANOVA with by-participant random slopes for \factor{Visual Rendering}, and a log transformation (as \response{Response Time} measures were gamma distributed) indicated a statistically significant effect on \response{Response Time} of \factor{Visual Rendering} (\anova{2}{18}{6.2}, \p{0.009}, see \figref{results/trial_response_times}).
-Reported response times are \GM.
+Reported response times are geometric means (GM).
 Participants took longer on average to respond with the \level{Virtual} rendering (\geomean{1.65}{\s} \ci{1.59}{1.72}) than with the \level{Real} rendering (\geomean{1.38}{\s} \ci{1.32}{1.43}), which is the only statistically significant difference (\ttest{19}{0.3}, \p{0.005}).
 The \level{Mixed} rendering was in between (\geomean{1.56}{\s} \ci{1.49}{1.63}).
 
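For reference, the geometric means and back-transformed confidence intervals reported in this hunk follow from the log transformation of the model: means and intervals are computed on the log scale, then exponentiated. A minimal sketch in illustrative Python (the CI half-width would come from the fitted \LMM; here it is just a parameter):

```python
import math

def geometric_mean(xs):
    """Geometric mean: back-transformed arithmetic mean of log values,
    matching the log-transformed response-time model above."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def geomean_ci(xs, half_width_log):
    """Back-transform a symmetric log-scale CI into an (asymmetric)
    interval on the original scale, e.g. seconds.
    `half_width_log` is an assumed input, normally taken from the model."""
    m = sum(math.log(x) for x in xs) / len(xs)
    return math.exp(m - half_width_log), math.exp(m + half_width_log)
```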
@@ -114,7 +114,7 @@ A calibration was performed for every participant, to best adapt the size of the
 A set of empirical tests enabled us to choose the best rendering characteristics in terms of transparency and brightness for the virtual objects and hand renderings, which were applied throughout the experiment.
 
 The hand tracking information provided by MRTK was used to construct a virtual articulated physics-enabled hand (\secref[related_work]{ar_virtual_hands}) using PhysX.
-It featured 25 DoFs, including the fingers proximal, middle, and distal phalanges.
+It featured 25 \DoFs, including the fingers proximal, middle, and distal phalanges.
 To allow effective (and stable) physical interactions between the hand and the virtual cube to manipulate, we implemented an approach similar to that of \textcite{borst2006spring}, where a series of virtual springs with high stiffness are used to couple the physics-enabled hand with the tracked hand.
 As before, a set of empirical tests have been used to select the most effective physical characteristics in terms of mass, elastic constant, friction, damping, colliders size, and shape for the (tracked) virtual hand interaction model.
 
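The spring coupling of \textcite{borst2006spring} referenced in this hunk can be sketched as a per-joint spring-damper force that drives the physics-enabled hand towards the tracked pose. Illustrative Python with made-up gains (the thesis tuned its own values empirically, and the real coupling also handles torques):

```python
def spring_damper_force(tracked_pos, physics_pos, physics_vel,
                        stiffness=3000.0, damping=50.0):
    """Linear spring-damper force (per joint) pulling the physics-enabled
    hand towards the tracked hand, in the spirit of Borst & Indugula's
    spring coupling. `stiffness` and `damping` are illustrative values."""
    return tuple(
        stiffness * (t - p) - damping * v
        for t, p, v in zip(tracked_pos, physics_pos, physics_vel)
    )
```

The force is then applied to the rigid body each physics step; a high stiffness keeps the simulated hand close to the tracked one while still letting contacts with virtual objects push it back.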
@@ -36,7 +36,6 @@
 \renewcommand*{\glstextformat}[1]{\textcolor{black}{#1}}% Hyperlink in black
 
 \acronym[TIFC]{2IFC}{two-interval forced choice}
-\acronym[TwoD]{2D}{two-dimensional}
 \acronym[ThreeD]{3D}{three-dimensional}
 \acronym{ANOVA}{analysis of variance}
 \acronym{ART}{aligned rank transform}
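The `\acronym` entries edited here rely on a project-specific macro whose definition is not part of this diff. One plausible sketch, assuming a glossaries + etoolbox setup where the optional argument supplies a LaTeX-safe command name (e.g. `\TIFC` for "2IFC"); the thesis' actual macro may differ:

```latex
% Hypothetical sketch of the \acronym macro (NOT from this diff):
% declares a glossaries acronym and a matching \<Key> shorthand.
% Assumes \usepackage{glossaries} and \usepackage{etoolbox}.
\newcommand{\acronym}[3][]{%
  \ifstrempty{#1}%
    {\newacronym{#2}{#2}{#3}% label = short form, e.g. \acronym{FoV}{...}
     \expandafter\newcommand\csname #2\endcsname{\gls{#2}}}%
    {\newacronym{#1}{#2}{#3}% explicit label, e.g. \acronym[TIFC]{2IFC}{...}
     \expandafter\newcommand\csname #1\endcsname{\gls{#1}}}%
}
```

Under such a definition, removing the `\acronym[TwoD]{2D}{...}` and `\acronym{GM}{geometric mean}` lines requires replacing every `\TwoD` and `\GM` usage with plain text, which is exactly what the earlier hunks of this commit do.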
@@ -46,7 +45,6 @@
 \acronym{ERM}{eccentric rotating mass}
 \acronym{FoV}{field of view}
 \acronym{GLMM}{generalized linear mixed model}
-\acronym{GM}{geometric mean}
 \acronym{HaTT}{Penn Haptic Texture Toolkit}
 \acronym{HSD}{honest significant difference}
 \acronym{JND}{just noticeable difference}