Remove "see" before section or figure reference

2024-09-16 12:57:05 +02:00
parent 8705affcc4
commit 3b66b69fa1
21 changed files with 145 additions and 133 deletions


@@ -108,7 +108,7 @@ The most mature devices are \HMDs, which are portable headsets worn directly on
\AR/\VR can also be extended to render for sensory modalities other than vision.
%
-\textcite{jeon2009haptic} proposed extending the \RV continuum to include haptic feedback by decoupling into two orthogonal haptic and visual axes (see \figref{visuo-haptic-rv-continuum3}).
+\textcite{jeon2009haptic} proposed extending the \RV continuum to include haptic feedback by decoupling into two orthogonal haptic and visual axes (\figref{visuo-haptic-rv-continuum3}).
%
The combination of the two axes defines 9 types of \vh environments, with 3 possible levels of \RV for each \v or \h axis: real, augmented and virtual.
%


@@ -94,7 +94,7 @@ As illustrated in the \figref{sensorimotor_continuum}, \Citeauthor{jones2006huma
]
This classification has been further refined by \textcite{bullock2013handcentric} into 15 categories of possible hand interactions with an object.
-In this thesis, we are interested in exploring \vh augmentations (see \partref{perception}) and grasping of \VOs (see \partref{manipulation}) in the context of \AR and \WHs.
+In this thesis, we are interested in exploring \vh augmentations (\partref{perception}) and grasping of \VOs (\partref{manipulation}) in the context of \AR and \WHs.
\subsubsection{Hand Anatomy and Motion}
\label{hand_anatomy}
@@ -143,8 +143,8 @@ It takes only \qtyrange{2}{3}{\s} to perform these procedures, except for contou
\subsubsection{Grasp Types}
\label{grasp_types}
-Thanks to the degrees of freedom of its skeleton, the hand can take many postures to grasp an object (see \secref{hand_anatomy}).
-By placing the thumb or palm against the other fingers (pad or palm grasps respectively), or by placing the fingers against each other as if holding a cigarette (side grasp), the hand can hold the object securely.
+Thanks to the degrees of freedom of its skeleton, the hand can take many postures to grasp an object (\secref{hand_anatomy}).
+By placing the thumb or palm against the other fingers (pad or palm opposition respectively), or by placing the fingers against each other as if holding a cigarette (side opposition), the hand can hold the object securely.
Grasping adapts to the shape of the object and the task to be performed, \eg grasping a pen with the fingertips then holding it to write, or taking a mug by the body to fill it and by the handle to drink it~\cite{cutkosky1986modeling}.
Three types of grasp are differentiated according to their degree of strength and precision.
In \emph{power grasps}, the object is held firmly and follows the movements of the hand rigidly.
@@ -154,7 +154,7 @@ In \emph{precision grasps}, the fingers can move the object within the hand but
For all possible objects and tasks, the number of grasp types can be reduced to 34 and classified according to the taxonomy in \figref{gonzalez2014analysis}~\cite{gonzalez2014analysis}.\footnote{An updated taxonomy was then proposed by \textcite{feix2016grasp}: it is more complete but harder to present.}
For everyday objects, this number is even smaller, with between 5 and 10 grasp types depending on the activity~\cite{bullock2013grasp}.
Furthermore, the fingertips are the most involved areas of the hand, both in terms of frequency of use and time spent in contact: In particular, the thumb is almost always used, as well as the index and middle fingers, but the other fingers are used less frequently~\cite{gonzalez2014analysis}.
-This can be explained by the sensitivity of the fingertips (see \secref{haptic_sense}) and the ease with which the thumb can be opposed to the index and middle fingers compared to the other fingers.
+This can be explained by the sensitivity of the fingertips (\secref{haptic_sense}) and the ease with which the thumb can be opposed to the index and middle fingers compared to the other fingers.
\fig{gonzalez2014analysis}{Taxonomy of grasp types of~\textcite{gonzalez2014analysis}}[, classified according to their type (power, precision or intermediate) and the shape of the grasped object. Each grasp shows the area of the palm and fingers in contact with the object and the grasp with an example object.]
@@ -162,7 +162,7 @@ This can be explained by the sensitivity of the fingertips (see \secref{haptic_s
\subsection{Haptic Perception of Object Properties}
\label{object_properties}
-The active exploration of an object with the hand is performed as a sensorimotor loop: The exploratory movements (see \secref{exploratory_procedures}) guide the search for and adapt to sensory information (see \secref{haptic_sense}), allowing the construction of a haptic perception of the object's properties.
+The active exploration of an object with the hand is performed as a sensorimotor loop: The exploratory movements (\secref{exploratory_procedures}) guide the search for and adapt to sensory information (\secref{haptic_sense}), allowing the construction of a haptic perception of the object's properties.
There are two main types of \emph{perceptual properties}.
The \emph{material properties} are the perception of the roughness, hardness, temperature and friction of the surface of the object~\cite{bergmanntiest2010tactual}.
The \emph{spatial properties} are the perception of the weight, shape and size of the object~\cite{lederman2009haptic}.
@@ -181,7 +181,7 @@ It is, for example, the perception of the fibers of fabric or wood and the textu
Roughness is what essentially characterises the perception of the \emph{texture} of the surface~\cite{hollins1993perceptual,baumgartner2013visual}.
When touching a surface in static touch, the asperities deform the skin and cause pressure sensations that allow a good perception of coarse roughness.
-But when running the finger over the surface with a lateral movement (see \secref{exploratory_procedures}), vibrations are also caused which give a better discrimination range and precision of roughness~\cite{bensmaia2005pacinian}.
+But when running the finger over the surface with a lateral movement (\secref{exploratory_procedures}), vibrations are also caused which give a better discrimination range and precision of roughness~\cite{bensmaia2005pacinian}.
In particular, when the asperities are smaller than \qty{0.1}{mm}, such as paper fibers, the pressure cues are no longer captured and only the movement, \ie the vibrations, can be used to detect the roughness~\cite{hollins2000evidence}.
This limit distinguishes \emph{macro-roughness} from \emph{micro-roughness}.
@@ -211,7 +211,7 @@ A larger spacing between elements increases the perceived roughness, but reaches
It is also possible to perceive the roughness of a surface by \emph{indirect touch}, with a tool held in the hand, for example by writing with a pen on paper~\cite{klatzky2003feeling}.
The skin is no longer deformed and only the vibrations of the tool are transmitted.
But this information is sufficient to feel the roughness, whose perceived intensity follows the same quadratic law.
-The intensity peak varies with the size of the contact surface of the tool, \eg a small tool allows one to perceive finer spaces between the elements than with the finger (see \figref{klatzky2003feeling_2}).
+The intensity peak varies with the size of the contact surface of the tool, \eg a small tool allows one to perceive finer spaces between the elements than with the finger (\figref{klatzky2003feeling_2}).
However, as the speed of exploration changes the transmitted vibrations, a faster speed shifts the perceived intensity peak slightly to the right, \ie decreasing perceived roughness for fine spacings and increasing it for large spacings~\cite{klatzky2003feeling}.
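To make this quadratic law concrete, an illustrative form (our sketch; the actual coefficients are fitted per study and condition) writes the perceived roughness magnitude $\hat{R}$ as an inverted-U function of the inter-element spacing $g$:
\begin{equation}
\hat{R}(g) = a g^{2} + b g + c \quad (a < 0), \qquad g^{\ast} = -\frac{b}{2a},
\end{equation}
where the peak spacing $g^{\ast}$ shifts toward finer spacings for a smaller contact surface (small tool vs finger) and toward larger spacings for faster exploration speeds, matching the trends described above.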
\begin{subfigs}{klatzky2003feeling}{Estimation of haptic roughness of a surface of conical micro-elements by active exploration~\cite{klatzky2003feeling}. }[
@@ -248,7 +248,7 @@ The perceived softness of a fruit allows us to judge its ripeness, while ceramic
By tapping on a surface, metal will be perceived as harder than wood.
If the surface returns to its original shape after being deformed, the object is elastic (like a spring), otherwise it is plastic (like clay).
-When the finger presses on an object (see \figref{exploratory_procedures}), its surface will move and deform with some resistance, and the contact area of the skin will also expand, changing the pressure distribution.
+When the finger presses on an object (\figref{exploratory_procedures}), its surface will move and deform with some resistance, and the contact area of the skin will also expand, changing the pressure distribution.
When the surface is touched or tapped, vibrations are also transmitted to the skin.
Passive touch (without voluntary hand movements) and tapping allow a perception of hardness as good as active touch~\cite{friedman2008magnitude}.
@@ -290,7 +290,7 @@ Friction (or slipperiness) is the perception of \emph{resistance to movement} on
Sandpaper is typically perceived as sticky because it has a strong resistance to sliding on its surface, while glass is perceived as more slippery.
This perceptual property is closely related to the perception of roughness~\cite{hollins1993perceptual,baumgartner2013visual}.
-When running the finger on a surface with a lateral movement (see \secref{exploratory_procedures}), the skin-surface contacts generate frictional forces in the opposite direction to the finger movement, giving kinesthetic cues, and also stretch the skin, giving cutaneous cues.
+When running the finger on a surface with a lateral movement (\secref{exploratory_procedures}), the skin-surface contacts generate frictional forces in the opposite direction to the finger movement, giving kinesthetic cues, and also stretch the skin, giving cutaneous cues.
As illustrated in \figref{smith1996subjective_1}, a stick-slip phenomenon can also occur, where the finger is intermittently slowed by friction before continuing to move, on both rough and smooth surfaces~\cite{derler2013stick}.
The amplitude of the frictional force $F_s$ is proportional to the normal force of the finger $F_n$, \ie the force perpendicular to the surface, according to a coefficient of friction $\mu$:
\begin{equation}
@@ -340,7 +340,7 @@ For example, a larger object or a smoother surface, which increases the contact
Weight, size and shape are haptic spatial properties that are independent of the material properties described above.
Weight (or heaviness/lightness) is the perceived \emph{mass} of the object~\cite{bergmanntiest2010haptic}.
-It is typically estimated by holding the object statically in the palm of the hand to feel the gravitational force (see \secref{exploratory_procedures}).
+It is typically estimated by holding the object statically in the palm of the hand to feel the gravitational force (\secref{exploratory_procedures}).
A relative weight difference of \percent{8} is then required to be perceptible~\cite{brodie1985jiggling}.
By lifting the object, it is also possible to feel the object's force of inertia, \ie its resistance to acceleration.
This provides an additional perceptual cue to its mass and slightly improves weight discrimination.
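As a worked example of this threshold (our numbers, purely illustrative): with a Weber fraction of \percent{8}, two objects held statically in the palm must differ by at least
\begin{equation}
\Delta m = 0.08 \times \qty{500}{\g} = \qty{40}{\g}
\end{equation}
for a \qty{500}{\g} reference to be reliably distinguished.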
@@ -348,15 +348,15 @@ For both gravity and inertia, kinesthetic cues to force are much more important
%The link between physical weight and perceived intensity varies between individuals~\cite{kappers2013haptic}.
Size can be perceived as the object's \emph{length} (in one dimension) or its \emph{volume} (in three dimensions)~\cite{kappers2013haptic}.
-In both cases, and if the object is small enough, a precision grip (see \figref{gonzalez2014analysis}) between the thumb and index finger can discriminate between sizes with an accuracy of \qty{1}{\mm}, but with an overestimation of length (power law with exponent \qty{1.3}).
-Alternatively, it is necessary to follow the contours of the object with the fingers to estimate its length (see \secref{exploratory_procedures}), but with ten times less accuracy and an underestimation of length (power law with an exponent of \qty{0.9})~\cite{bergmanntiest2011cutaneous}.
+In both cases, and if the object is small enough, a precision grip (\figref{gonzalez2014analysis}) between the thumb and index finger can discriminate between sizes with an accuracy of \qty{1}{\mm}, but with an overestimation of length (power law with exponent \qty{1.3}).
+Alternatively, it is necessary to follow the contours of the object with the fingers to estimate its length (\secref{exploratory_procedures}), but with ten times less accuracy and an underestimation of length (power law with an exponent of \qty{0.9})~\cite{bergmanntiest2011cutaneous}.
The perception of the volume of an object that is not small is typically done by hand enclosure, but the estimate is strongly influenced by the size, shape and mass of the object, for an identical volume~\cite{kahrimanovic2010haptic}.
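To make the two power laws concrete (a worked example with our numbers): writing the perceived length as $\hat{L} = c L^{n}$, doubling the physical length multiplies the perceived length by
\begin{equation}
\frac{\hat{L}(2L)}{\hat{L}(L)} = 2^{n}, \qquad 2^{1.3} \approx 2.5 \quad \text{vs} \quad 2^{0.9} \approx 1.9,
\end{equation}
\ie the precision grip expands perceived differences in length, while contour following compresses them.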
The shape of an object can be defined as the perception of its \emph{global geometry}, \ie its shape and contours.
This is the case, for example, when looking for a key in a pocket.
The exploration of contours and enclosure are then employed, as for the estimation of length and volume.
If the object is not known in advance, object identification is rather slow, taking several seconds~\cite{norman2004visual}.
-Therefore, the exploration of other properties is favoured to recognize the object more quickly, in particular marked edges~\cite{klatzky1987there}, \eg a screw among nails (see \figref{plaisier2009salient_2}), or certain material properties~\cite{lakatos1999haptic,plaisier2009salient}, \eg a metal object among plastic objects.
+Therefore, the exploration of other properties is favoured to recognize the object more quickly, in particular marked edges~\cite{klatzky1987there}, \eg a screw among nails (\figref{plaisier2009salient_2}), or certain material properties~\cite{lakatos1999haptic,plaisier2009salient}, \eg a metal object among plastic objects.
\begin{subfigs}{plaisier2009salient}{Identification of a sphere among cubes~\cite{plaisier2009salient}. }[
\item The shape has a significant effect on the perception of the volume of an object, \eg a sphere is perceived smaller than a cube of the same volume.


@@ -26,17 +26,17 @@ An increasing \emph{wearability} resulting in the loss of the system's kinesthet
\subfig{pacchierotti2017wearable_3}
\end{subfigs}
-Haptic research comes from robotics and teleoperation, and historically led to the design of haptic systems that are \emph{grounded} to an external support in the environment, such as a table (see \figref{pacchierotti2017wearable_1}).
-These are robotic arms whose end-effector is either held in the hand or worn on a finger and which simulate interactions with a \VE by providing kinesthetic force and torque feedback (see \figref{pacchierotti2015cutaneous}).
+Haptic research comes from robotics and teleoperation, and historically led to the design of haptic systems that are \emph{grounded} to an external support in the environment, such as a table (\figref{pacchierotti2017wearable_1}).
+These are robotic arms whose end-effector is either held in the hand or worn on a finger and which simulate interactions with a \VE by providing kinesthetic force and torque feedback (\figref{pacchierotti2015cutaneous}).
They provide high fidelity haptic feedback but are heavy, bulky and limited to small workspaces~\cite{culbertson2018haptics}.
More portable designs have been developed by moving the grounded part to the user's body.
The entire robotic system is thus mounted on the user, forming an exoskeleton capable of providing kinesthetic feedback to the finger, \eg in \figref{achibet2017flexifingers}.
-However, it cannot constrain the movements of the wrist and the reaction force is transmitted to the user where the device is grounded (see \figref{pacchierotti2017wearable_2}).
+However, it cannot constrain the movements of the wrist and the reaction force is transmitted to the user where the device is grounded (\figref{pacchierotti2017wearable_2}).
They are often heavy and bulky and cannot be considered wearable.
\textcite{pacchierotti2017wearable} defined that: \enquote{A wearable haptic interface should also be small, easy to carry, comfortable, and it should not impair the motion of the wearer}.
-An approach is then to move the grounding point very close to the end-effector (see \figref{pacchierotti2017wearable_3}): the interface is limited to cutaneous haptic feedback, but its design is more compact, lightweight and comfortable, \eg in \figref{leonardis20173rsr}, and the system is wearable.
+An approach is then to move the grounding point very close to the end-effector (\figref{pacchierotti2017wearable_3}): the interface is limited to cutaneous haptic feedback, but its design is more compact, lightweight and comfortable, \eg in \figref{leonardis20173rsr}, and the system is wearable.
Moreover, as detailed in \secref{object_properties}, cutaneous sensations are necessary and often sufficient for the perception of the haptic properties of an object explored with the hand, as also argued by \textcite{pacchierotti2017wearable}.
\begin{subfigs}{grounded_to_wearable}{
@@ -134,8 +134,8 @@ They are small, lightweight and can be placed directly on any part of the hand.
All vibrotactile actuators are based on the same principle: generating an oscillating motion from an electric current with a frequency and amplitude high enough to be perceived by cutaneous mechanoreceptors.
Several types of vibrotactile actuators are used in haptics, with different trade-offs between size, proposed \DoFs and application constraints:
\begin{itemize}
-\item An \ERM is a \DC motor that rotates an off-center mass when a voltage or current is applied (see \figref{precisionmicrodrives_erm}). \ERMs are easy to control, inexpensive and can be encapsulated in a few millimeters cylinder or coin form factor. However, they have only one \DoF because both the frequency and amplitude of the vibration are coupled to the speed of the rotation, \eg low (high) frequencies output at low (high) amplitudes, as shown on \figref{precisionmicrodrives_erm_performances}.
-\item A \LRA consists of a coil that creates a magnetic field from an \AC to oscillate a magnet attached to a spring, as in an audio loudspeaker (see \figref{precisionmicrodrives_lra}). They are more complex to control and a bit larger than \ERMs. Each \LRA is designed to vibrate with maximum amplitude at a given frequency, but won't vibrate efficiently at other frequencies, \ie their bandwidth is narrow, as shown on \figref{azadi2014vibrotactile}.
+\item An \ERM is a \DC motor that rotates an off-center mass when a voltage or current is applied (\figref{precisionmicrodrives_erm}). \ERMs are easy to control, inexpensive and can be encapsulated in a few millimeters cylinder or coin form factor. However, they have only one \DoF because both the frequency and amplitude of the vibration are coupled to the speed of the rotation, \eg low (high) frequencies output at low (high) amplitudes, as shown on \figref{precisionmicrodrives_erm_performances}.
+\item A \LRA consists of a coil that creates a magnetic field from an \AC to oscillate a magnet attached to a spring, as in an audio loudspeaker (\figref{precisionmicrodrives_lra}). They are more complex to control and a bit larger than \ERMs. Each \LRA is designed to vibrate with maximum amplitude at a given frequency, but won't vibrate efficiently at other frequencies, \ie their bandwidth is narrow, as shown on \figref{azadi2014vibrotactile}.
\item A \VCA is a \LRA but capable of generating vibration at two \DoF, with an independent control of the frequency and amplitude of the vibration on a wide bandwidth. They are larger in size than \ERMs and \LRAs, but can generate more complex renderings.
\item Piezoelectric actuators deform a solid material when a voltage is applied. They are very small and thin, and allow two \DoFs of amplitude and frequency control. However, they require high voltages to operate, thus limiting their use in wearable devices.
\end{itemize}
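As a back-of-the-envelope illustration of why an \ERM has a single \DoF (our simplification, not taken from the cited datasheets): an eccentric mass $m$ at radius $r$ spinning at angular velocity $\omega$ generates a vibration of frequency and excitation force
\begin{equation}
f = \frac{\omega}{2\pi}, \qquad F = m r \omega^{2},
\end{equation}
so both grow with the motor speed $\omega$ and cannot be set independently, whereas a \VCA driven by a signal $A \sin(2\pi f t)$ controls the amplitude $A$ and frequency $f$ separately.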
@@ -169,8 +169,8 @@ Therefore, the visual rendering of a touched object can also greatly influence t
\textcite{bhatia2024augmenting} categorize the tactile augmentations of real objects into three types: direct touch, touch-through, and tool mediated.
In direct touch, the haptic device does not cover the interior of the hand, so as not to impair the user's interaction with the \RE.
-We are interested in direct touch augmentations with wearable haptic devices (see \secref{wearable_haptic_devices}), as their integration with \AR is particularly promising for direct hand interaction with visuo-haptic augmentations.
-We also focus on tactile augmentations stimulating the mechanoreceptors of the skin (see \secref{haptic_sense}), thus excluding temperature perception, as they are the most common existing haptic interfaces.
+We are interested in direct touch augmentations with wearable haptic devices (\secref{wearable_haptic_devices}), as their integration with \AR is particularly promising for direct hand interaction with visuo-haptic augmentations.
+We also focus on tactile augmentations stimulating the mechanoreceptors of the skin (\secref{haptic_sense}), thus excluding temperature perception, as they are the most common existing haptic interfaces.
% \cite{bhatia2024augmenting}. Types of interfaces: direct touch, through touch, through tool. Focus on direct touch, but when no rendering done,
% \cite{klatzky2003feeling}: rendering roughness, friction, deformation, temperatures


@@ -1,7 +1,7 @@
\section{Principles and Capabilities of AR}
\label{augmented_reality}
-The first \AR headset was invented by \textcite{sutherland1968headmounted}: With the technology available at the time, it was already capable of displaying virtual objects at a fixed point in space in real time, giving the user the illusion that the content was present in the room (see \figref{sutherland1968headmounted}).
+The first \AR headset was invented by \textcite{sutherland1968headmounted}: With the technology available at the time, it was already capable of displaying virtual objects at a fixed point in space in real time, giving the user the illusion that the content was present in the room (\figref{sutherland1968headmounted}).
Fixed to the ceiling, the headset displayed a stereoscopic (one image per eye) perspective projection of the virtual content on a transparent screen, taking into account the user's position, and thus already following the interaction loop presented in \figref[introduction]{interaction-loop}.
\begin{subfigs}{sutherland1968headmounted}{Photos of the first \AR system~\cite{sutherland1968headmounted}. }[
@@ -90,14 +90,14 @@ Despite the clear and acknowledged definition presented in \secref{ar_definition
Presence is one of the key concepts to characterize a \VR experience.
\AR and \VR are both essentially illusions as the virtual content does not physically exist but is just digitally simulated and rendered to the user's perception through a user interface and the user's senses.
Such an experience of suspension of disbelief in \VR is what is called presence, and it can be decomposed into two dimensions: \PI and \PSI~\cite{slater2009place}.
-\PI is the sense of the user of \enquote{being there} in the \VE (see \figref{presence-vr}).
+\PI is the sense of the user of \enquote{being there} in the \VE (\figref{presence-vr}).
It emerges from the real time rendering of the \VE from the user's perspective: to be able to move around inside the \VE and look from different points of view.
\PSI is the illusion that the virtual events are really happening, even if the user knows that they are not real.
It does not mean that the virtual events are realistic, but that they are plausible and coherent with the user's expectations.
A third strong illusion in \VR is the \SoE, which is the illusion that the virtual body is one's own~\cite{slater2022separate,guy2023sense}.
The \AR presence is far less defined and studied than for \VR~\cite{tran2024survey}, but it will be useful to design, evaluate and discuss our contributions in the next chapters.
-Thereby, \textcite{slater2022separate} proposed to invert \PI to what we can call \enquote{object illusion}, \ie the sense that the virtual object \enquote{feels here} in the \RE (see \figref{presence-ar}).
+Thereby, \textcite{slater2022separate} proposed to invert \PI to what we can call \enquote{object illusion}, \ie the sense that the virtual object \enquote{feels here} in the \RE (\figref{presence-ar}).
As with VR, \VOs must be able to be seen from different angles by moving the head but also, and this is more difficult, be consistent with the \RE, \eg occlude or be occluded by real objects~\cite{macedo2023occlusion}, cast shadows or reflect lights.
The \PSI can be applied to \AR as is, but the \VOs must additionally have knowledge of the \RE and react to it accordingly.
\textcite{skarbez2021revisiting} also named \PI for \AR as \enquote{immersion} and \PSI as \enquote{coherence}, and these terms will be used in the remainder of this thesis.
@@ -120,12 +120,12 @@ As presence, \SoE in \AR is a recent topic and little is known about its percept
Both \AR/\VR and haptic systems are able to render virtual objects and environments as sensations displayed to the user's senses.
However, as presented in \figref[introduction]{interaction-loop}, the user must be able to manipulate the virtual objects and environments to complete the loop, \eg through a hand-held controller, a tangible object, or even directly with the hands.
-An interaction technique is then required to map user inputs to actions on the \VE~\cite{laviola20173d}.
+An \emph{interaction technique} is then required to map user inputs to actions on the \VE~\cite{laviola20173d}.
\subsubsection{Interaction Techniques}
For a user to interact with a computer system, they first perceive the state of the system and then act on it using an input interface.
-An input interface can be either an active sensing, physically held or worn device, such as a mouse, a touchscreen, or a hand-held controller, or a passive sensing one, not requiring any physical contact, such as eye trackers, voice recognition, or hand tracking.
+An input interface can be either an \emph{active sensing}, physically held or worn device, such as a mouse, a touchscreen, or a hand-held controller, or a \emph{passive sensing} one, not requiring any physical contact, such as eye trackers, voice recognition, or hand tracking.
The sensors' information gathered by the input interface is then translated into actions within the computer system by an interaction technique.
For example, a cursor on a screen can be moved either with a mouse or with arrow keys on a keyboard, or a two-finger swipe on a touchscreen can be used to scroll or zoom an image.
Choosing useful and efficient input interfaces and interaction techniques is crucial for the user experience and the tasks that can be performed within the system~\cite{laviola20173d}.
@@ -145,7 +145,7 @@ These three tasks are geometric (rigid) manipulations of the object: they do not
The \emph{navigation tasks} are the movements of the user within the \VE.
Travel is the control of the position and orientation of the viewpoint in the \VE, \eg physical walking, velocity control, or teleportation.
-Wayfinding is the cognitive planning of the movement such as pathfinding or route following (see \figref{grubert2017pervasive}).
+Wayfinding is the cognitive planning of the movement such as pathfinding or route following (\figref{grubert2017pervasive}).
The \emph{system control tasks} are changes in the system state through commands or menus such as creation, deletion, or modification of objects, \eg as in \figref{roo2017onea}. It is also the input of text, numbers, or symbols.
@@ -161,7 +161,7 @@ As of today, an immersive \AR system track itself with the user in \ThreeD, usin
It makes it possible to register the \VE with the \RE, and the user simply moves themselves to navigate within the virtual content.
%This tracking and mapping of the user and \RE into the \VE is named the \enquote{extent of world knowledge} by \textcite{skarbez2021revisiting}, \ie to what extent the \AR system knows about the \RE and is able to respond to changes in it.
However, direct hand manipulation of the virtual content is a challenge that requires specific interaction techniques~\cite{billinghurst2021grand}.
-This is often achieved using two interaction techniques: \emph{tangible objects} and \emph{virtual hands}~\cite{hertel2021taxonomy}.
+Such \emph{reality based interaction}~\cite{jacob2008realitybased} in immersive \AR is often achieved using two interaction techniques: \emph{tangible objects} and \emph{virtual hands}~\cite{billinghurst2015survey,hertel2021taxonomy}.
\begin{subfigs}{interaction-techniques}{Interaction techniques in \AR. }[
\item Spatial selection of virtual item of an extended display using a hand-held smartphone~\cite{grubert2015multifi}.
@@ -176,24 +176,6 @@ This is often achieved using two interaction techniques: \emph{tangible objects}
\subfig{newcombe2011kinectfusion}
\end{subfigs}
-\paragraph{Manipulating with Virtual Hands}
-In immersive \AR with \enquote{natural} interaction (cf. \cite{billinghurst2005designing}), selection consists of touching the virtual object with the hands, and manipulation consists of grasping and moving it with the hands.
-This is what is called \enquote{virtual hands}: the user's virtual hands in the \VE.
-The input device is not a controller, as is often the case in VR, but directly the hands.
-The hands are thus tracked and reproduced in the \VE.
-Still, the main problem of natural hand interaction in a \VE, besides hand tracking, is the lack of physical constraints on the movements of the hand and fingers, which makes actions tiring~\cite{hincapie-ramos2014consumed}, imprecise (without haptic feedback, one cannot tell whether the virtual object is being touched) and difficult (likewise, without haptic feedback one does not feel the object slipping and gets no confirmation that it is held in the hand). Interaction techniques on the one hand are still necessary, and haptic feedback adapted to the interaction constraints of \AR is essential for a good user experience.
-This can also be difficult to understand: \textcite{chan2010touching} propose combining continuous feedback, so that the user can locate the tracking of their body, with discrete feedback to confirm their actions. A visual rendering and display of the hands is a continuous feedback; a brief change of color or a haptic cue is a discrete feedback. But this combination has not been evaluated.
-\cite{hilliges2012holodesk}
-\cite{piumsomboon2013userdefined}: user-defined gestures for manipulation of virtual objects in AR.
-\cite{piumsomboon2014graspshell}: direct hand manipulation of virtual objects in immersive AR vs vocal commands.
-\cite{chan2010touching}: cues for touching (selection) virtual objects.
-Occlusion problems: virtual objects must always remain visible, either by using a transparent rather than opaque virtual hand, or by displaying their contours when the hand hides them~\cite{piumsomboon2014graspshell}.
\paragraph{Manipulating with Tangibles}
\cite{issartel2016tangible}
@@ -206,6 +188,33 @@ et l'objet visuellement peut ne pas correspondre aux sensations haptiques du tan
This is why using wearables to modify the cutaneous sensations of the tangible is a solution that works in VR~\cite{detinguy2018enhancing,salazar2020altering} and could be adapted to AR.
But, specific to AR vs VR, the tangible and the hand are visible, at least partially, even if hidden by a virtual object: how will the haptic augmentation work in AR vs VR? Perceptual biases? Seeing the tangible touched with one's own hand, vs VR where it is hidden, hence a potentially stronger illusion in VR?
+\paragraph{Manipulating with Virtual Hands}
+So-called \enquote{natural} interaction techniques are those that let the user directly use the movements of their body as the input interface to the \AR/\VR system~\cite{billinghurst2015survey}.
+It is the hand that allows us to manipulate everyday real objects with strength and precision (\secref{hand_anatomy}), so virtual hand interaction techniques are the most natural for manipulating virtual objects~\cite{laviola20173d}.
+Initially tracked by motion capture devices in the form of gloves or controllers, a user's hands can now be tracked in real time with cameras and computer vision algorithms natively integrated into \AR headsets~\cite{tong2023survey}.
+The user's hand is thus tracked and reconstructed in the \VE as a \emph{virtual hand}~\cite{billinghurst2015survey,laviola20173d}.
+The simplest models represent the hand as a rigid 3D object following the movements of the real hand with \qty{6}{\DoF} (position and orientation in space)~\cite{talvas2012novel}.
+An alternative is to represent only the fingertips, which makes it possible to perform oppositions between the fingers (\secref{grasp_types}).
+Finally, the most common techniques represent the entire skeleton of the hand as an articulated kinematic model:
+Each virtual phalanx is then represented with certain \DoFs relative to the previous phalanx (\secref{hand_anatomy}).
+There are several techniques to simulate the contacts and interactions of the virtual hand model with virtual objects~\cite{laviola20173d}.
+Techniques with a heuristic approach use rules to determine the selection, manipulation and release of an object~\cite{kim2015physicsbased}.
+A selection is made, for example, by performing a predefined gesture on the object with the hand, such as a grasp type (\secref{grasp_types})~\cite{piumsomboon2013userdefined}.
+Physics-based techniques simulate the forces at the contact points between the model and the object.
+Still, the main problem of natural hand interaction in a \VE, besides hand tracking, is the lack of physical constraints on the movements of the hand and fingers, which makes actions tiring~\cite{hincapie-ramos2014consumed}, imprecise (without haptic feedback, one cannot tell whether the virtual object is being touched) and difficult (likewise, without haptic feedback one does not feel the object slipping and gets no confirmation that it is held in the hand). Interaction techniques on the one hand are still necessary, and haptic feedback adapted to the interaction constraints of \AR is essential for a good user experience.
+This can also be difficult to understand: \textcite{chan2010touching} propose combining continuous feedback, so that the user can locate the tracking of their body, with discrete feedback to confirm their actions. A visual rendering and display of the hands is a continuous feedback; a brief change of color or a haptic cue is a discrete feedback. But this combination has not been evaluated.
+\cite{piumsomboon2013userdefined}: user-defined gestures for manipulation of virtual objects in AR.
+\cite{piumsomboon2014graspshell}: direct hand manipulation of virtual objects in immersive AR vs vocal commands.
+Occlusion problems: virtual objects must always remain visible, either by using a transparent rather than opaque virtual hand, or by displaying their contours when the hand hides them~\cite{piumsomboon2014graspshell}.
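To make the heuristic approach concrete, here is a minimal sketch of such a rule-based grasp detector (our illustration, not the method of the cited papers), assuming a hand tracker exposing fingertip positions and an object exposing a signed distance to its surface:

CONTACT_EPS = 0.005  # assumed contact threshold, in meters

def is_grasping(fingertips, distance_to_surface):
    """Heuristic grasp rule: the thumb and at least one other fingertip
    must both be in contact with the object, approximating a pad
    opposition; losing all contacts releases the object."""
    thumb_contact = distance_to_surface(fingertips["thumb"]) < CONTACT_EPS
    finger_contact = any(
        distance_to_surface(fingertips[name]) < CONTACT_EPS
        for name in ("index", "middle", "ring", "little")
    )
    return thumb_contact and finger_contact

# While is_grasping() holds, a simple manipulation rule rigidly attaches
# the object to the hand frame; a physics-based technique would instead
# integrate normal and friction forces at each contact point.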
\subsection{Visual Rendering of Hands in AR}
@@ -218,6 +227,9 @@ It has also been shown that over a realistic avatar, a skeleton rendering can p
\fig{prachyabrued2014visual}{Effect of different hand renderings on a pick-and-place task in VR~\cite{prachyabrued2014visual}.}
+\cite{hilliges2012holodesk}
+\cite{chan2010touching}: cues for touching (selection) virtual objects.
Mutual visual occlusion between a virtual object and the real hand, \ie hiding the virtual object when the real hand is in front of it and hiding the real hand when it is behind the virtual object, is often presented as natural and realistic, enhancing the blending of real and virtual environments~\cite{piumsomboon2014graspshell, al-kalbani2016analysis}.
In video see-through AR (VST-AR), this could be solved as a masking problem by combining the image of the real world captured by a camera and the generated virtual image~\cite{macedo2023occlusion}.
In OST-AR, this is more difficult because the virtual environment is displayed as a transparent 2D image on top of the 3D real world, which cannot be easily masked~\cite{macedo2023occlusion}.
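A minimal sketch of this per-pixel masking in VST-AR (our illustration of the principle, assuming registered real and virtual depth maps of the same resolution):

import numpy as np

def composite_vst(camera_rgb, virtual_rgb, real_depth, virtual_depth):
    """Draw a virtual pixel only where it is closer than the real scene,
    so a real hand in front of a virtual object occludes it, and is
    hidden where the virtual object is closer."""
    virtual_in_front = virtual_depth < real_depth   # boolean HxW mask
    out = camera_rgb.copy()
    out[virtual_in_front] = virtual_rgb[virtual_in_front]
    return out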


@@ -67,10 +67,10 @@ Some studies have investigated the visuo-haptic perception of virtual objects in
They have shown how the latency of the visual rendering of an object with haptic feedback or the type of environment (\VE or \RE) can affect the perception of an identical haptic rendering.
There are indeed inherent and unavoidable latencies in the visual and haptic rendering of virtual objects, and the visual-haptic feedback may not appear to be simultaneous.
-In an immersive \VST-\AR setup, \textcite{knorlein2009influence} rendered a virtual piston using force-feedback haptics that participants pressed directly with their hand (see \figref{visuo-haptic-stiffness}).
+In an immersive \VST-\AR setup, \textcite{knorlein2009influence} rendered a virtual piston using force-feedback haptics that participants pressed directly with their hand (\figref{visuo-haptic-stiffness}).
In a \TAFC task, participants pressed two pistons and indicated which was stiffer.
One had a reference stiffness but an additional visual or haptic delay, while the other varied with a comparison stiffness but had no delay.\footnote{Participants were not told about the delays and stiffness tested, nor which piston was the reference or comparison. The order of the pistons (which one was pressed first) was also randomized.}%
-Adding a visual delay increased the perceived stiffness of the reference piston, while adding a haptic delay decreased it, and adding both delays cancelled each other out (see \figref{knorlein2009influence_2}).
+Adding a visual delay increased the perceived stiffness of the reference piston, while adding a haptic delay decreased it, and adding both delays cancelled each other out (\figref{knorlein2009influence_2}).
\begin{subfigs}{visuo-haptic-stiffness}{Perception of haptic stiffness in \VST-\AR~\cite{knorlein2009influence}. }[
\item Participant pressing a virtual piston rendered by a force-feedback device with their hand.
@@ -91,7 +91,7 @@ where $t_B = t_A + \Delta t$.
Therefore, a haptic delay (positive $\Delta t$) increases the perceived stiffness $k$, while a visual delay in displacement (negative $\Delta t$) decreases perceived $k$~\cite{diluca2011effects}.
In a similar \TAFC user study, participants compared perceived stiffness of virtual pistons in \OST-\AR and \VR~\cite{gaffary2017ar}.
-However, the force-feedback device and the participant's hand were not visible (see \figref{gaffary2017ar}).
+However, the force-feedback device and the participant's hand were not visible (\figref{gaffary2017ar}).
The reference piston was judged to be stiffer when seen in \VR than in \AR, without participants noticing this difference, and more force was exerted on the piston overall in \VR.
This suggests that the haptic stiffness of virtual objects feels \enquote{softer} in an \AE than in a full \VE.
%Two differences that could be worth investigating with the two previous studies are the type of \AR (visuo or optical) and to see the hand touching the virtual object.
@@ -118,7 +118,7 @@ No participant (out of 19) was able to detect a \qty{50}{\ms} visual lag and a \
A few wearable haptic devices have been specifically designed or experimentally tested for direct hand interaction in immersive \AR.
The main challenge of wearable haptics for \AR is to provide haptic sensations of virtual or augmented objects that are touched and manipulated directly with the fingers while keeping the fingertips free to interact with the \RE.
Several approaches have been proposed to move the actuator to another location on the hand.
Yet, they differ greatly in the actuators used (\secref{wearable_haptic_devices}), thus in the haptic feedback they can provide (\secref{tactile_rendering}), and in the placement of the haptic rendering.
Other wearable haptic actuators have been proposed for \AR but are not detailed here.
A first reason is that some of them permanently cover the fingertip and affect the interaction with the \RE, such as thin-skin tactile interfaces~\cite{withana2018tacttoo,teng2024haptic} or fluid-based interfaces~\cite{han2018hydroring}.
Another category of actuators relies on systems that cannot be considered as portable.
\subsubsection{Nail-Mounted Devices}
\textcite{ando2007fingernailmounted} were the first to propose this approach, which they experimented with using a voice-coil mounted on the index nail (\figref{ando2007fingernailmounted}).
The sensation of crossing edges of a virtual patterned texture (\secref{texture_rendering}) on a real sheet of paper was rendered with \qty{20}{\ms} vibration impulses at \qty{130}{\Hz}.
Participants were able to match the virtual patterns to their real counterparts of height \qty{0.25}{\mm} and width \qtyrange{1}{10}{\mm}, but systematically overestimated the virtual width by \qty{4}{\mm}.
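For illustration, a minimal sketch of such edge-triggered rendering (in Python; the grating period, sample rate, and trigger logic are our assumptions, not the authors' implementation):
\begin{verbatim}
import numpy as np

RATE = 48_000        # audio sample rate (Hz), assumed
PERIOD = 0.004       # grating spatial period (m), within the 1-10 mm range
IMPULSE_S = 0.020    # 20 ms impulse
IMPULSE_HZ = 130.0   # impulse frequency

def impulse():
    """A 20 ms, 130 Hz burst played when the finger crosses an edge."""
    t = np.arange(int(IMPULSE_S * RATE)) / RATE
    return np.sin(2.0 * np.pi * IMPULSE_HZ * t)

class EdgeTrigger:
    """Fires once each time the finger enters a new grating cell."""
    def __init__(self):
        self.cell = None

    def step(self, x):
        # x: finger position along the texture (m)
        cell = int(np.floor(x / PERIOD))
        crossed = self.cell is not None and cell != self.cell
        self.cell = cell
        return impulse() if crossed else None
\end{verbatim}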
This approach was later extended by \textcite{teng2021touch} with Touch\&Fold, a haptic device mounted on the nail but able to unfold its end-effector on demand to make contact with the fingertip when touching virtual objects (\figref{teng2021touch}).
This moving platform also contains a \LRA (\secref{moving_platforms}) and provides contact pressure (\qty{0.34}{\N} force) and texture (\qtyrange{150}{190}{\Hz} bandwidth) sensations.
%The whole system is very compact (\qtyproduct{24 x 24 x 41}{\mm}), lightweight (\qty{9.5}{\g}), and fully portable by including a battery and Bluetooth wireless communication. \qty{20}{\ms} for the Bluetooth
When touching virtual objects in \OST-\AR with the index finger, this device was found to be more realistic overall (5/7) than vibrations with a \LRA at \qty{170}{\Hz} on the nail (3/7).
Still, there is a high (\qty{92}{\ms}) latency for the folding mechanism, and this design is not suitable for augmenting real tangible objects.
% teng2021touch: (5.27+3.03+5.23+5.5+5.47)/5 = 4.9
% ando2007fingernailmounted: (2.4+2.63+3.63+2.57+3.2)/5 = 2.9
To always keep the fingertip free, \textcite{maeda2022fingeret} proposed with Fingeret to adapt the belt actuators (\secref{belt_actuators}) into a \enquote{finger-side actuator} (\figref{maeda2022fingeret}).
Mounted on the nail, the device actuates two rollers, one on each side of the fingertip, to deform the skin: when the rollers both rotate inwards (towards the pad) they pull the skin, simulating a contact sensation, and when they both rotate outwards (towards the nail) they push the skin, simulating a release sensation.
By doing quick rotations, the rollers can also simulate a texture sensation.
%The device is also very compact (\qty{60 x 25 x 36}{\mm}), lightweight (\qty{18}{\g}), and portable with a battery and Bluetooth wireless communication with \qty{83}{\ms} latency.
In a user study not in \AR, but involving touching different images on a tablet, Fingeret was found to be more realistic (4/7) than a \LRA at \qty{100}{\Hz} on the nail (3/7) for rendering buttons and a patterned texture (\secref{texture_rendering}), but not different from vibrations for rendering high-frequency textures (3.5/7 for both).
However, as for \textcite{teng2021touch}, finger speed was not taken into account for rendering vibrations, which may have been detrimental to texture perception (\secref{texture_rendering}).
\begin{subfigs}{ar_wearable}{Nail-mounted wearable haptic devices designed for \AR. }[
\item A voice-coil rendering a virtual haptic texture on a real sheet of paper~\cite{ando2007fingernailmounted}.
The haptic ring belt devices of \textcite{minamizawa2007gravity} and \textcite{pacchierotti2016hring}, presented in \secref{belt_actuators}, have been employed to improve the manipulation of real and virtual objects in \AR.
In a \VST-\AR setup, \textcite{scheggi2010shape} explored the effect of rendering the weight (\secref{weight_rendering}) of a virtual cube placed on a real surface held with the thumb, index, and middle fingers (\figref{scheggi2010shape}).
The middle phalanx of each of these fingers was equipped with a haptic ring of \textcite{minamizawa2007gravity}.
However, no proper user study was conducted to evaluate this feedback.% on the manipulation of the cube.
%that simulated the weight of the cube.
%A virtual cube that could push on the cube was manipulated with the other hand through a force-feedback device.
%\textcite{scheggi2010shape} report that \percent{80} of the participants appreciated the weight feedback.
In pick-and-place tasks in non-immersive \VST-\AR involving both virtual and real objects (\figref{maisto2017evaluation}), \textcite{maisto2017evaluation} and \textcite{meli2018combining} compared the effects of providing haptic feedback about fingertip contacts using either the haptic ring of \textcite{pacchierotti2016hring} on the proximal phalanx, or the moving platform of \textcite{chinello2020modular} on the fingertip.
They showed that the haptic feedback improved the performance (completion time) and reduced the exerted force on the cubes over visual feedback alone.
The haptic ring was also perceived by users to be more effective than the moving platform.
However, the measured difference in performance could be attributed to either the device or the device position (proximal vs fingertip), or both.
These two studies were also conducted in non-immersive setups.
With their \enquote{Tactile And Squeeze Bracelet Interface} (Tasbi), already mentioned in \secref{belt_actuators}, \textcite{pezent2019tasbi} and \textcite{pezent2022design} explored the use of a wrist-worn bracelet actuator.
It is capable of providing a uniform pressure sensation (up to \qty{15}{\N} and \qty{10}{\Hz}) and vibration with six \LRAs (\qtyrange{150}{200}{\Hz} bandwidth).
A user study was conducted in \VR to compare the perception of visuo-haptic stiffness rendering~\cite{pezent2019tasbi}.
In a \TAFC task, participants pressed a virtual button with different levels of stiffness via a virtual hand constrained by the \VE (\figref{pezent2019tasbi_2}).
A higher visual stiffness required a larger physical displacement to press the button (C/D ratio, \secref{pseudo_haptic}), while the haptic stiffness controlled the rate of the pressure feedback when pressing.
When the visual and haptic stiffness were coherent or when only the haptic stiffness changed, participants easily discriminated two buttons with different stiffness levels (\figref{pezent2019tasbi_3}).
However, if only the visual stiffness changed, participants were not able to discriminate the different stiffness levels (\figref{pezent2019tasbi_4}).
This suggests that in \VR, the haptic pressure is a more important perceptual cue than the visual displacement for rendering stiffness.
A short vibration (\qty{25}{\ms}, \qty{175}{\Hz} square wave) was also rendered when contacting the button, but kept constant across all conditions: it may have affected the overall perception when only the visual stiffness changed.
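A minimal sketch of this visuo-haptic decomposition (hypothetical names and gains; the actual Tasbi controller is more elaborate):
\begin{verbatim}
def visual_button_depth(physical_depth, k_visual, k_ref=1.0):
    """Pseudo-haptic stiffness via the C/D ratio: a visually stiffer
    button moves less for the same physical displacement, so a larger
    physical displacement is needed to press it fully."""
    return physical_depth * (k_ref / k_visual)

def squeeze_force(button_depth, k_haptic, f_max=15.0):
    """Haptic stiffness sets the rate at which the bracelet squeeze
    pressure grows with button depression (capped at 15 N)."""
    return min(k_haptic * button_depth, f_max)
\end{verbatim}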
\label{visuo_haptic_conclusion}
% the type of rendered object (real or virtual), the rendered haptic property (contact, hardness, texture, see \secref{tactile_rendering}), and .
%In this context of integrating \WHs with \AR to create a \vh-\AE (\chapref{introduction}), the definition of \textcite{pacchierotti2017wearable} can be extended to an additional criterion: The wearable haptic interface should not impair the interaction with the \RE, \ie the user should be able to touch and manipulate objects in the real world while wearing the haptic device.
% The haptic feedback is thus rendered de-localized from the point of contact of the finger on the rendered object.
In this paper, we investigate how users perceive a tangible surface touched with the index finger when it is augmented with a visuo-haptic roughness texture using immersive optical see-through AR (OST-AR) and wearable vibrotactile stimuli provided on the index.
%
In a user study, twenty participants freely explored and evaluated the coherence, realism and roughness of the combination of nine representative pairs of visuo-haptic texture augmentations (\figref{setup}, left) from the HaTT database~\cite{culbertson2014one}.
%
These texture models were chosen as they are visuo-haptic representations of a wide range of real textures that are publicly available online.
%
Nine texture pairs were selected (\figref{setup}, left) to cover a range of perceived roughness, from rough to smooth, as listed: Metal Mesh, Sandpaper~100, Brick~2, Cork, Sandpaper~320, Velcro Hooks, Plastic Mesh~1, Terra Cotta, Coffee Filter.
%
All these visual and haptic textures are isotropic: their rendering (appearance or roughness) is the same whatever the direction of the movement on the surface, \ie there are no local deformations (holes, bumps, or breaks).
Similarly, a 2-cm-square fiducial marker was glued on top of the vibrotactile actuator.
%
Positioned \qty{20}{\cm} above the surfaces, a webcam (StreamCam, Logitech) filmed the markers to track finger movements relative to the surfaces.
%
The visual textures were displayed on the tangible surfaces using the HoloLens~2 OST-AR headset (\figref{setup}, middle and right) within a \qtyproduct{43 x 29}{\degree} field of view at \qty{60}{\Hz}; a set of empirical tests enabled us to choose the best rendering characteristics in terms of transparency and brightness for the visual textures, which were then used throughout the user study.
%
When a haptic texture was touched, a \qty{48}{kHz} audio signal was generated from the corresponding HaTT haptic texture model and the measured tangential speed of the finger, following the rendering procedure described by Culbertson \etal~\cite{culbertson2014modeling}.
%
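For illustration, a simplified sketch of such model-driven synthesis (the full procedure of \textcite{culbertson2014modeling} interpolates autoregressive coefficients and noise variance over speed and force; here a single fixed model and a speed-scaled excitation are assumed):
\begin{verbatim}
import numpy as np
from scipy.signal import lfilter

RATE = 48_000  # output sample rate (Hz)

def synthesize_block(ar_coeffs, noise_std, speed, n_samples):
    """Drive an autoregressive (AR) texture model with white noise.

    ar_coeffs : AR coefficients a_1..a_p of one recorded texture model
    noise_std : excitation standard deviation of that model
    speed     : measured tangential finger speed (m/s)
    """
    # Assumption: excitation power grows with finger speed; the HaTT
    # procedure instead interpolates the variance between recorded models.
    e = np.random.randn(n_samples) * noise_std * max(speed, 0.0)
    # y[k] = e[k] - a_1 y[k-1] - ... - a_p y[k-p]
    return lfilter([1.0], np.concatenate(([1.0], ar_coeffs)), e)
\end{verbatim}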
\label{results_similarity}
\begin{subfigs}{results_similarity}{%
(Left) Correspondence analysis of the matching task confusion matrix (\figref{results_matching_ranking}, left).
The visual textures are represented as blue squares, the haptic textures as red circles. %
The closer the textures are, the more similar they were judged. %
The first dimension (horizontal axis) explains 60~\% of the variance, the second dimension (vertical axis) explains 30~\% of the variance.
(Right) Dendrograms of the hierarchical clusterings of the haptic textures (left) and visual textures (right) of the matching task confusion matrix (\figref{results_matching_ranking}, left), using Euclidean distance and Ward's method. %
The height of the dendrograms represents the distance between the clusters. %
}
\begin{minipage}[c]{0.50\linewidth}%
\end{minipage}%
\end{subfigs}
The high level of agreement between participants on the three haptic, visual and visuo-haptic rankings (\secref{results_ranking}), as well as the similarity of the within-participant rankings, suggests that participants perceived the roughness of the textures similarly, but differed in their strategies for matching the haptic and visual textures in the matching task (\secref{results_matching}).
%
To further investigate the perceived similarity of the haptic and visual textures and to identify groups of textures that were perceived as similar on the matching task, a correspondence analysis and a hierarchical clustering were performed on the matching task confusion matrix (\figref{results_matching_ranking}, left).
The correspondence analysis captured 60~\% and 29~\% of the variance in the first and second dimensions, respectively, with the remaining dimensions accounting for less than 5~\% each.
%
\figref{results_similarity} (left) shows the first two dimensions with the 18 haptic and visual textures.
%
The first dimension was similar to the rankings (\figref{results_matching_ranking}, right), distributing the textures according to their perceived roughness.
%
It seems that the second dimension opposed textures that were perceived as hard to those perceived as softer, as also reported by participants.
%
Stiffness is indeed an important perceptual dimension of a material.
\figref{results_similarity} (right) shows the dendrograms of the two hierarchical clusterings of the haptic and visual textures, constructed using the Euclidean distance and Ward's method on squared distance.
%
The four identified haptic texture clusters were: \enquote{Roughest} \{Metal Mesh, Sandpaper~100, Brick~2, Cork\}; \enquote{Rougher} \{Sandpaper~320, Velcro Hooks\}; \enquote{Smoother} \{Plastic Mesh~1, Terra Cotta\}; \enquote{Smoothest} \{Coffee Filter\} (\figref{results_similarity}, top-right).
%
Similar to the haptic ranks (\figref{results_matching_ranking}, right), the clusters could have been named according to their perceived roughness.
%
It also shows that the participants compared and ranked the haptic textures during the matching task to select the one that best matched the given visual texture.
%
The five identified visual texture clusters were: \enquote{Roughest} \{Metal Mesh\}; \enquote{Rougher} \{Sandpaper~100, Brick~2, Velcro Hooks\}; \enquote{Medium} \{Cork, Plastic Mesh~1\}; \enquote{Smoother} \{Sandpaper~320, Terra Cotta\}; \enquote{Smoothest} \{Coffee Filter\} (\figref{results_similarity}, bottom-right).
%
They are also easily identifiable in the visual ranking results, which made it possible to name them.
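Both analyses can be reproduced with standard tools; a sketch on a placeholder confusion matrix (the correspondence analysis is implemented directly as an SVD of the standardized residuals):
\begin{verbatim}
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

N = np.random.randint(1, 20, size=(9, 9)).astype(float)  # visual x haptic counts

# Correspondence analysis via SVD of the standardized residuals.
P = N / N.sum()
r, c = P.sum(axis=1, keepdims=True), P.sum(axis=0, keepdims=True)
S = (P - r @ c) / np.sqrt(r @ c)
U, sv, Vt = np.linalg.svd(S, full_matrices=False)
explained = sv**2 / (sv**2).sum()                   # variance per dimension
visual_2d = (U[:, :2] * sv[:2]) / np.sqrt(r)        # row principal coordinates
haptic_2d = (Vt.T[:, :2] * sv[:2]) / np.sqrt(c.T)   # column principal coordinates

# Hierarchical clustering (Euclidean distance, Ward's method).
Z_haptic = linkage(N.T, method="ward", metric="euclidean")
haptic_clusters = fcluster(Z_haptic, t=4, criterion="maxclust")  # 4 clusters
Z_visual = linkage(N, method="ward", metric="euclidean")
visual_clusters = fcluster(Z_visual, t=5, criterion="maxclust")  # 5 clusters
\end{verbatim}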
%
In addition, the interaction with the textures was designed to be as natural as possible, without imposing a specific finger movement speed as was done in similar studies~\cite{asano2015vibrotactile,friesen2024perceived}.
In the matching task, participants were not able to effectively match the original visual and haptic texture pairs (\figref{results_matching_ranking}, left), except for the Coffee Filter texture, which was the smoothest both visually and haptically.
%
However, almost all visual textures, except Sandpaper~100, were matched with at least one haptic texture at a level above chance.
%
%
Several strategies were used, as some participants reported using vibration frequency and/or amplitude to match a haptic texture.
%
It should be noted that the task was rather difficult (\figref{results_questions}), as participants had no prior knowledge of the textures, there were no additional visual cues such as the shape of an object, and the term \enquote{roughness} had not been used by the experimenter prior to the ranking task.
The correspondence analysis (\figref{results_similarity}, left) highlighted that participants did indeed match visual and haptic textures primarily on the basis of their perceived roughness (60\% of variance), which is in line with previous perception studies on real~\cite{baumgartner2013visual} and virtual~\cite{culbertson2014modeling} textures.
%
The rankings (\figref{results_matching_ranking}, right) confirmed that the participants all perceived the roughness of haptic textures very similarly, but that there was less consensus for visual textures, which is also in line with roughness rankings for real haptic and visual textures~\cite{bergmanntiest2007haptic}.
%
These results made it possible to identify and name groups of textures in the form of clusters, and to construct confusion matrices between these clusters, and between visual texture ranks and haptic clusters, showing that participants consistently identified and matched haptic and visual textures (\figref{results_clusters}).
%
Interestingly, 30\% of the matching variance was captured by a second dimension, opposing the roughest textures (Metal Mesh, Sandpaper~100), and to a lesser extent the smoothest (Coffee Filter, Sandpaper~320), to all other textures.
%
%
One hypothesis is that this dimension could be the perceived stiffness of the textures.
%
Stiffness is, with roughness, one of the main characteristics perceived by vision and touch of real materials~\cite{baumgartner2013visual,vardar2019fingertip}, but also of virtual haptic textures~\cite{culbertson2014modeling,degraen2019enhancing}.
%
The last visuo-haptic roughness ranking (\figref{results_matching_ranking}, right) showed that both haptic and visual sensory information were well integrated, as the resulting roughness ranking fell between the two individual haptic and visual rankings.
%
Several strategies were reported: some participants first classified visually and then corrected with haptics, others classified haptically and then integrated visuals.
%
A few participants even reported that they clearly sensed patterns on haptic textures.
%
However, the visual and haptic textures used were isotropic and homogeneous models of real texture captures, \ie their rendered roughness was constant and did not depend on the direction of movement but only on the speed of the finger.
%
Overall, the haptic device was judged to be comfortable, and the visual and haptic textures were judged to be fairly realistic and to work well together (\figref{results_questions}).
These results have, of course, some limitations, as they addressed a small set of visuo-haptic textures augmenting the perception of smooth white tangible surfaces.
%
%
If so, the velocity of the finger marker ${}^c\dot{\mathbf{X}}_f$ is estimated using a discrete derivative of the position and adaptive low-pass filtering, then transformed into the texture frame $\mathcal{F}_t$.
%
The vibrotactile signal $s_k$ is generated by modulating the finger velocity ${}^t\hat{\dot{X}}_f$ in the texture direction with the texture period $\lambda$ (\eqref{signal}).
%
The signal is sampled at 48~kHz and sent to the voice-coil actuator via an audio amplifier.
%
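A minimal sketch of this estimation step (the fixed smoothing factor is a placeholder for the adaptive low-pass filter actually used):
\begin{verbatim}
import numpy as np

def finger_velocity(p_prev, p_curr, v_prev, dt, alpha=0.3):
    """Discrete derivative of the marker position with a first-order
    low-pass filter; alpha would be adapted online in the real system."""
    v_raw = (p_curr - p_prev) / dt
    return alpha * v_raw + (1.0 - alpha) * v_prev

def tangential_speed(v_cam, R_tex_cam):
    """Rotate the camera-frame velocity into the texture frame and keep
    the in-plane component used to drive the signal frequency."""
    v_tex = R_tex_cam @ v_cam
    return float(np.linalg.norm(v_tex[:2]))
\end{verbatim}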
\subfig[0.992]{method/apparatus}
\end{subfigs}
A fiducial marker (AprilTag) is glued to the top of the actuator (\figref{method/device}) to track the finger pose with a camera (StreamCam, Logitech), which is placed above the experimental setup and captures \qtyproduct{1280 x 720}{px} images at \qty{60}{\hertz} (\figref{method/apparatus}).
%
Other markers are placed on the tangible surfaces to be augmented, to estimate the relative position of the finger with respect to the surfaces (\figref{setup}).
%
Contrary to similar work, which either constrained the hand to a constant speed to keep the signal frequency constant~\cite{asano2015vibrotactile,friesen2024perceived} or used mechanical sensors attached to the hand~\cite{friesen2024perceived,strohmeier2017generating}, using vision-based tracking both frees the hand movements and allows any tangible surface to be augmented.
%
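Such marker tracking can be sketched with OpenCV's ArUco module, which ships the AprilTag dictionaries (the OpenCV $\geq$ 4.7 API is assumed; the intrinsics are placeholders for the calibrated ones):
\begin{verbatim}
import cv2
import numpy as np

K = np.array([[900.0, 0.0, 640.0],        # camera intrinsics (placeholder,
              [0.0, 900.0, 360.0],        # obtained from calibration)
              [0.0, 0.0, 1.0]])
DIST = np.zeros(5)
TAG = 0.02                                # 2 cm marker side length

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_APRILTAG_36h11)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

# 3D corners of the tag in its own frame, matching detectMarkers' order.
OBJ = np.array([[-TAG/2,  TAG/2, 0], [ TAG/2,  TAG/2, 0],
                [ TAG/2, -TAG/2, 0], [-TAG/2, -TAG/2, 0]], np.float32)

def marker_pose(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is None:
        return None
    ok, rvec, tvec = cv2.solvePnP(OBJ, corners[0].reshape(4, 2), K, DIST)
    return (rvec, tvec) if ok else None   # pose in the camera frame
\end{verbatim}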
To be able to compare virtual and augmented realities, we then create a virtual environment that closely replicates the real one.
%Before a user interacts with the system, it is necessary to design a virtual environment that will be registered with the real environment during the experiment.
%
Each real element tracked by a marker is modelled virtually, \ie the hand and the augmented tangible surface (\figref{renderings}).
%
In addition, the pose and size of the virtual textures are defined on the virtual replicas.
%
During the experiment, the system uses marker pose estimates to align the virtual models with their real-world counterparts. %, according to the condition being tested.
%
This makes it possible to detect whether a finger touches a virtual texture using a collision detection algorithm (Nvidia PhysX), and to show the virtual elements and textures in real-time, aligned with the real environment (\figref{renderings}), using the considered AR or VR headset.
In our implementation, the virtual hand and environment are designed with Unity and the Mixed Reality Toolkit (MRTK).
%
The visual rendering is achieved using the Microsoft HoloLens~2, an OST-AR headset.
%
It was chosen over VST-AR because OST-AR only adds virtual content to the real environment, while VST-AR streams a real-time video capture of the real environment~\cite{macedo2023occlusion}.
%
Indeed, one of our objectives (\secref{experiment}) is to directly compare a virtual environment with the real one it replicates. %, rather than a video feed that introduces many supplementary visual limitations.
%
To simulate a VR headset, a cardboard mask (with holes for sensors) is attached to the headset to block the view of the real environment (\figref{method/headset}).
\subsection{Vibrotactile Signal Generation and Rendering}
A voice-coil actuator (HapCoil-One, Actronika) is used to display the vibrotactile signal, as it allows the frequency and amplitude of the signal to be controlled independently over time, covers a wide frequency range (\qtyrange{10}{1000}{\Hz}), and outputs the signal accurately with relatively low acceleration distortion\footnote{HapCoil-One specific characteristics are described in its data sheet: \url{https://web.archive.org/web/20240228161416/https://tactilelabs.com/wp-content/uploads/2023/11/HapCoil_One_datasheet.pdf}}.
%
The voice-coil actuator is encased in a 3D-printed plastic shell and firmly attached to the middle phalanx of the user's index finger with a Velcro strap, to enable the fingertip to directly touch the environment (\figref{method/device}).
%
The actuator is driven by a Class D audio amplifier (XY-502 / TPA3116D2, Texas Instruments). %, which has proven to be an effective type of amplifier for driving moving-coil~\cite{mcmahan2014dynamic}.
%
Note that the finger position and velocity are transformed from the camera frame to the texture frame.
%
However, when a new finger position is estimated at time $t_j$, the phase $\phi_j$ needs to be adjusted along with the frequency to ensure continuity in the signal, as described in \eqref{signal}.
%
This approach avoids sudden changes in the actuator movement that would affect the texture perception in an uncontrolled way (\figref{method/phase_adjustment}) and, contrary to previous work~\cite{asano2015vibrotactile,friesen2024perceived}, it enables a free exploration of the texture by the user with no constraints on the finger speed.
%
Finally, as in \textcite{ujitoko2019modulating}, a square wave is chosen over a sine wave to get a rendering closer to a real grating texture with the sensation of crossing edges, and because the roughness perception of sine wave textures has been shown not to reproduce the roughness perception of real grating textures~\cite{unger2011roughness}.
%
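A minimal sketch of this phase-continuous synthesis (our variable names; \eqref{signal} remains the authoritative definition):
\begin{verbatim}
import numpy as np

RATE = 48_000                    # output sample rate (Hz)

class TextureOscillator:
    """Square-wave oscillator whose frequency f = v / lambda follows the
    finger speed, with the phase re-anchored at each speed update so the
    signal stays continuous."""
    def __init__(self, wavelength, amplitude=1.0):
        self.wavelength = wavelength   # texture period lambda (m)
        self.amplitude = amplitude
        self.freq = 0.0                # current frequency (Hz)
        self.phase = 0.0               # phase at the last update (rad)
        self.t_last = 0.0              # time of the last update (s)

    def update(self, t_j, speed):
        # Accumulate the phase elapsed at the old frequency before
        # switching, so phi(t) has no jump at t_j.
        self.phase += 2.0 * np.pi * self.freq * (t_j - self.t_last)
        self.freq = speed / self.wavelength
        self.t_last = t_j

    def render(self, t_start, n_samples):
        t = t_start + np.arange(n_samples) / RATE
        phi = self.phase + 2.0 * np.pi * self.freq * (t - self.t_last)
        return self.amplitude * np.sign(np.sin(phi))
\end{verbatim}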
%
The user study aimed to investigate the effect of visual hand rendering in AR or VR on the perception of roughness texture augmentation. % of a touched tangible surface.
%
In a two-alternative forced choice (2AFC) task, participants compared the roughness of different tactile texture augmentations in three visual rendering conditions: without any visual augmentation (\figref{renderings}, \level{Real}), in AR with a realistic virtual hand superimposed on the real hand (\figref{renderings}, \level{Mixed}), and in VR with the same virtual hand as an avatar (\figref{renderings}, \level{Virtual}).
%
In order not to influence the perception, as vision is an important source of information and influence for the perception of texture~\cite{bergmanntiest2007haptic,yanagisawa2015effects,normand2024augmenting,vardar2019fingertip}, the touched surface was visually a uniform white; thus only the visual aspect of the hand and the surrounding environment was changed.
They all signed an informed consent form before the user study.
\subsection{Apparatus}
\label{apparatus}
An experimental environment similar to that of \textcite{gaffary2017ar} was created to ensure a similar visual rendering in AR and VR (\figref{renderings}).
%
It consisted of a \qtyproduct{300 x 210 x 400}{\mm} medium-density fibreboard (MDF) box with a paper sheet glued inside, and a \qtyproduct{15 x 5}{\mm} rectangle printed on the sheet to delimit the area where the tactile textures were rendered.
%
Participants rated the roughness of the paper (without any texture augmentation).
%The visual rendering of the virtual hand and environment was achieved using the Microsoft HoloLens~2, an OST-AR headset with a \qtyproduct{43 x 29}{\degree} field of view (FoV) and a \qty{60}{\Hz} refresh rate, running a custom application made with Unity 2021.1.0f1 and Mixed Reality Toolkit (MRTK) 2.7.2.
%f
The virtual environment carefully reproduced the real environment, including the geometry of the box, the textures, the lighting, and the shadows (\figref{renderings}, \level{Virtual}).
%
The virtual hand model was a gender-neutral human right hand with realistic skin texture, similar to the one used by \textcite{schwind2017these}.
%
Its size was adjusted to match the real hand of the participants before the experiment.
%
The visual rendering of the virtual hand and environment is described in \secref{virtual_real_alignment}.
%
%In the \level{Virtual} rendering, a cardboard mask (with holes for sensors) was attached to the headset to block the view of the real environment and simulate a VR headset (\figref{method/headset}).
%
To ensure the same FoV in all \factor{Visual Rendering} conditions, a cardboard mask was attached to the AR headset (\figref{method/headset}).
%
In the \level{Virtual} rendering, the mask had only holes for sensors, to block the view of the real environment and simulate a VR headset.
%
In the \level{Mixed} and \level{Real} conditions, the mask had two additional holes for the eyes that matched the FoV of the HoloLens~2 (\figref{method/headset}).
%
\figref{renderings} shows the resulting views in the three considered \factor{Visual Rendering} conditions.
%A vibrotactile voice-coil device (HapCoil-One, Actronika), incased in a 3D-printed plastic shell, was firmly attached to the right index finger of the participants using a Velcro strap (\figref{method/device}), was used to render the textures
%
%This voice-coil was chosen for its wide frequency range (\qtyrange{10}{1000}{\Hz}) and its relatively low acceleration distortion, as specified by the manufacturer\footnotemark[1].
%
The user study was held in a quiet room with no windows.
Participants were first given written instructions about the experimental setup and procedure, the informed consent form to sign, and a demographic questionnaire.
%
%They were then asked to sit in front of the box and wear the HoloLens~2 and headphones while the experimenter firmly attached the vibrotactile device to the middle phalanx of their right index finger (\figref{method/apparatus}).
%
A calibration was then performed to adjust the HoloLens~2 to the participant's interpupillary distance, the virtual hand to the real hand size, and the fiducial marker to the finger position.
%
The user study was a within-subjects design with two factors:
%
\begin{itemize}
\item \factor{Visual Rendering}, consisting of the augmented or virtual view of the environment, the hand and the wearable haptic device, with 3 levels: real environment and real hand view without any visual augmentation (\figref{renderings}, \level{Real}), real environment and hand view with the virtual hand (\figref{renderings}, \level{Mixed}) and virtual environment with the virtual hand (\figref{renderings}, \level{Virtual}).
\item \factor{Amplitude Difference}, consisting of the difference in amplitude between the comparison and the reference textures, with 6 levels: \qtylist{0; +-12.5; +-25.0; +-37.5}{\%}.
\end{itemize}
Each estimate is reported with its 95\% confidence interval (CI).
\subsubsection{Discrimination Accuracy} \subsubsection{Discrimination Accuracy}
\label{discrimination_accuracy} \label{discrimination_accuracy}
A GLMM was adjusted to the \response{Texture Choice} in the 2AFC vibrotactile texture roughness discrimination task, with by-participant random intercepts but no random slopes, and a probit link function (see \figref{results/trial_predictions}). A GLMM was adjusted to the \response{Texture Choice} in the 2AFC vibrotactile texture roughness discrimination task, with by-participant random intercepts but no random slopes, and a probit link function (\figref{results/trial_predictions}).
% %
The points of subjective equality (PSEs, see \figref{results/trial_pses}) and just-noticeable differences (JNDs, see \figref{results/trial_jnds}) for each visual rendering and their respective differences were estimated from the model, along with their corresponding 95\% CI, using a non-parametric bootstrap procedure (1000 samples). The points of subjective equality (PSEs, see \figref{results/trial_pses}) and just-noticeable differences (JNDs, see \figref{results/trial_jnds}) for each visual rendering and their respective differences were estimated from the model, along with their corresponding 95\% CI, using a non-parametric bootstrap procedure (1000 samples).
% %
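For intuition, the sketch below shows how PSE and JND estimates and their bootstrapped CIs can be derived from a probit psychometric fit. It is a deliberately simplified stand-in for the GLMM (a pooled least-squares probit fit per rendering, with hypothetical variable names), not the analysis code used here.

# Simplified probit psychometric fit with bootstrapped CIs (illustrative).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def probit(x, pse, sigma):
    # Probability of judging the comparison texture as rougher.
    return norm.cdf((x - pse) / sigma)

def fit_pse_jnd(diffs, choices):
    # diffs, choices: numpy arrays; choices is 1 if comparison chosen, else 0.
    (pse, sigma), _ = curve_fit(probit, diffs, choices, p0=[0.0, 0.2])
    jnd = sigma * norm.ppf(0.75)  # conventional 75% criterion
    return pse, jnd

def bootstrap_ci(diffs, choices, n_boot=1000, seed=0):
    # Non-parametric bootstrap: resample trials with replacement.
    rng = np.random.default_rng(seed)
    n = len(diffs)
    estimates = [fit_pse_jnd(diffs[idx], choices[idx])
                 for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    lo, hi = np.percentile(estimates, [2.5, 97.5], axis=0)
    return lo, hi  # 95% CIs for (PSE, JND)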
@@ -95,7 +95,7 @@ All pairwise differences were statistically significant: \level{Real} \vs \level
%\figref{results/question_heatmaps} shows the median and interquartile range (IQR) ratings to the questions in \tabref{questions} and to the NASA-TLX questionnaire.
%
Friedman tests were employed to compare the ratings to the questions (\tabref{questions}), with post-hoc Wilcoxon signed-rank tests and Holm-Bonferroni adjustment, except for the questions regarding the virtual hand, which were directly compared with Wilcoxon signed-rank tests.
%
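A minimal sketch of this test pipeline, assuming ratings are stored per rendering (names are hypothetical, not from the thesis):

# Friedman omnibus test with Wilcoxon post-hocs and Holm adjustment.
from itertools import combinations
from scipy.stats import friedmanchisquare, wilcoxon
from statsmodels.stats.multitest import multipletests

def analyze_question(ratings):
    # ratings: dict mapping rendering name -> paired per-participant ratings.
    stat, p = friedmanchisquare(*ratings.values())
    if p >= 0.05:
        return stat, p, {}  # no omnibus effect: skip post-hoc tests
    pairs = list(combinations(ratings, 2))
    raw = [wilcoxon(ratings[a], ratings[b]).pvalue for a, b in pairs]
    _, adjusted, _, _ = multipletests(raw, method="holm")
    return stat, p, dict(zip(pairs, adjusted))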
\figref{question_plots} shows these ratings for questions where statistically significant differences were found (results are shown as mean $\pm$ standard deviation):
%


@@ -8,15 +8,15 @@
The results showed a difference in vibrotactile roughness perception between the three visual rendering conditions.
%
Given the estimated point of subjective equality (PSE), the textures in the \level{Real} rendering were on average perceived as \enquote{rougher} than in the \level{Virtual} (\percent{-2.8}) and \level{Mixed} (\percent{-6.0}) renderings (\figref{results/trial_pses}).
%
\textcite{gaffary2017ar} found a PSE difference in the same range between AR and VR for perceived stiffness, with the VR perceived as \enquote{stiffer} and the AR as \enquote{softer}.
%
%However, the difference between the \level{Virtual} and \level{Mixed} conditions was not significant.
%
Surprisingly, the PSE of the \level{Real} rendering was shifted to the right (\ie \enquote{rougher}, \percent{7.9}) compared to the reference texture, whereas the PSEs of the \level{Virtual} (\percent{5.1}) and \level{Mixed} (\percent{1.9}) renderings were closer to the reference texture, being perceived as \enquote{smoother} (\figref{results/trial_predictions}).
%
The sensitivity of participants to roughness differences (just-noticeable differences, JND) also varied between all the visual renderings, with the \level{Real} rendering having the best JND (\percent{26}), followed by the \level{Virtual} (\percent{30}) and \level{Mixed} (\percent{33}) renderings (\figref{results/trial_jnds}).
%
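For reference, under the fitted probit model both quantities have a standard interpretation (using the conventional \percent{75} criterion for the JND):
\[
P(\text{comparison judged rougher} \mid \Delta) = \Phi\!\left(\frac{\Delta - \mathrm{PSE}}{\sigma}\right),
\qquad
\mathrm{JND} = \sigma\,\Phi^{-1}(0.75) \approx 0.674\,\sigma,
\]
where $\Delta$ is the amplitude difference and $\Phi$ the standard normal CDF: the PSE is the difference at which both textures are chosen equally often, and the JND is the half-distance between the \percent{25} and \percent{75} points of the psychometric curve.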
These JND values are in line with, and at the upper end of, the range of previous studies~\cite{choi2013vibrotactile}, which may be due to the location of the actuator on the top of the middle phalanx of the finger, which is less sensitive to vibration than the fingertip.
%
@@ -24,15 +24,15 @@ Thus, compared to no visual rendering (\level{Real}), the addition of a visual r
Differences in user behaviour were also observed between the visual renderings (but not between the haptic textures).
%
On average, participants responded faster (\percent{-16}), explored textures over a greater distance (\percent{+21}) and at a higher speed (\percent{+16}) without visual augmentation (\level{Real} rendering) than in VR (\level{Virtual} rendering) (\figref{results_finger}).
%
The \level{Mixed} rendering, displaying both the real and virtual hands, was always in between, with no significant difference from the other two renderings.
%
This suggests that touching a virtual vibrotactile texture on a tangible surface with a virtual hand in VR is different from touching it with one's own hand: users were more cautious or less confident in their exploration in VR.
%
This does not seem to be due to the realism of the virtual hand or environment, nor to the control of the virtual hand, which were all rated high to very high by the participants (\secref{questions}) in both the \level{Mixed} and \level{Virtual} renderings.
%
Very interestingly, the evaluation of the vibrotactile device and textures was also the same across the visual renderings, with a very high sensation of control, good realism, and a very low perceived latency of the textures (\secref{questions}).
%
However, the perceived latency of the virtual hand (\response{Hand Latency} question) seems to be related to the perceived roughness of the textures (as reflected in the PSEs).
%
@@ -40,7 +40,7 @@ The \level{Mixed} rendering had the lowest PSE and highest perceived latency, th
Our visuo-haptic augmentation system aimed to provide a coherent multimodal virtual rendering integrated with the real environment.
%
Yet, it involves different sensory interaction loops between the user's movements and the visuo-haptic feedback (\figref{method/diagram}), which are subject to different latencies and may not be synchronised with each other, or may even be inconsistent with other sensory modalities such as proprioception.
%
When a user runs their finger over a vibrotactile virtual texture, the haptic sensations and the display of the virtual hand (if any) lag behind the visual displacement and proprioceptive sensations of the real hand.
%


@@ -25,7 +25,7 @@ As a reference, we considered no visual hand rendering, as is common in AR~\cite
%
Users have no information about hand tracking and no feedback about contact with the virtual objects, other than their movement when touched.
%
As virtual content is rendered on top of the real environment, the hand of the user can be hidden by the virtual objects when manipulating them (\secref{hands}).
\subsubsection{Occlusion (Occl,~\figref{method/hands-occlusion})}
@@ -94,13 +94,13 @@ Following the guidelines of \textcite{bergstrom2021how} for designing object man
\subsubsection{Push Task}
\label{push-task}
The first manipulation task consists of pushing a virtual object along a real flat surface towards a target placed on the same plane (\figref{method/task-push}).
%
The virtual object to manipulate is a small \qty{50}{\mm} opaque blue cube, while the target is a slightly bigger \qty{70}{\mm} semi-transparent blue volume.
%
At every repetition of the task, the cube to manipulate always spawns at the same place, on top of a real table in front of the user.
%
The target volume, on the other hand, can spawn at eight different locations on the same table, on a circle of \qty{20}{\cm} radius centered on the cube, \qty{45}{\degree} apart (\figref{method/task-push}).
%
Users are asked to push the cube towards the target volume using their fingertips in any way they prefer.
%
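The eight spawn locations follow directly from this geometry; a small sketch (the coordinate convention on the table plane is an assumption):

# Illustrative computation of the eight target spawn positions.
import math

RADIUS = 0.20  # 20 cm circle centered on the cube

def target_positions(height=0.0):
    # Eight positions, 45 degrees apart; height=0.10 would give the
    # elevated plane used in the Grasp task described below.
    return [(RADIUS * math.cos(math.radians(45 * k)),
             height,
             RADIUS * math.sin(math.radians(45 * k)))
            for k in range(8)]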
@@ -112,7 +112,7 @@ The task is considered completed when the cube is \emph{fully} inside the target
\subsubsection{Grasp Task}
\label{grasp-task}
The second manipulation task consists of grasping, lifting, and placing a virtual object in a target placed on a different (higher) plane (\figref{method/task-grasp}).
%
The cube to manipulate and the target volume are the same as in the previous task. However, this time, the target volume can spawn at eight different locations on a plane \qty{10}{\cm} \emph{above} the table, still located on a \qty{20}{\cm} radius circle at \qty{45}{\degree} from each other.
%


@@ -16,7 +16,7 @@
%
Friedman tests indicated that both rankings had statistically significant differences (\pinf{0.001}).
%
Pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment were then used on both ranking results (\secref{metrics}):
\begin{itemize}
\item \textit{Push Ranking}: Occlusion was ranked lower than Contour (\p{0.005}), Skeleton (\p{0.02}), and Mesh (\p{0.03});


@@ -19,7 +19,7 @@
%
Friedman tests indicated that all questions had statistically significant differences (\pinf{0.001}).
%
Pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment were then used on each question's results (\secref{metrics}):
\begin{itemize}
\item \textit{Difficulty}: Occlusion was considered more difficult than Contour (\p{0.02}), Skeleton (\p{0.01}), and Mesh (\p{0.03}).


@@ -3,19 +3,19 @@
We evaluated six visual hand renderings, as described in \secref{hands}, displayed on top of the real hand, in two virtual object manipulation tasks in AR.
During the Push task, the Skeleton hand rendering was the fastest (\figref{results/Push-CompletionTime-Hand-Overall-Means}), as participants employed fewer and longer contacts to adjust the cube inside the target volume (\figref{results/Push-ContactsCount-Hand-Overall-Means} and \figref{results/Push-MeanContactTime-Hand-Overall-Means}).
%
Participants consistently used few and continuous contacts for all visual hand renderings (Fig. 3b), with fewer than ten trials, all carried out by two participants, quickly completed with multiple discrete touches.
%
However, during the Grasp task, despite no difference in completion time, providing no visible hand rendering (None and Occlusion renderings) led to more failed grasps or cube drops (\figref{results/Grasp-CompletionTime-Hand-Overall-Means} and \figref{results/Grasp-MeanContactTime-Hand-Overall-Means}).
%
Indeed, participants found the None and Occlusion renderings less effective (\figref{results/Ranks-Grasp}) and less precise (\figref{questions}).
%
To understand whether the participants' previous experience might have played a role, we also carried out an additional statistical analysis considering VR experience as an additional between-subjects factor, \ie VR novices vs. VR experts (\enquote{I use it every week}, \secref{participants}).
%
We found no statistically significant differences when comparing the considered metrics between VR novices and experts.
Interestingly, all visual hand renderings showed grip apertures very close to the size of the virtual cube, except for the None rendering (\figref{results/Grasp-GripAperture-Hand-Overall-Means}), with which participants applied stronger grasps, \ie less distance between the fingertips.
%
Having no visual hand rendering, but only the reaction of the cube to the interaction as feedback, made participants less confident in their grip.
%
@@ -23,7 +23,7 @@ This result contrasts with the wrongly estimated grip apertures observed by \tex
%
Also, while some participants found the absence of visual hand rendering more natural, many of them commented on the importance of having feedback on the tracking of their hands, as observed by \textcite{xiao2018mrtouch} in a similar immersive OST-AR setup.
Yet, participants' opinions of the visual hand renderings were mixed on many questions, except for the Occlusion one, which was perceived as less effective than more \enquote{complete} visual hands such as the Contour, Skeleton, and Mesh hands (\figref{questions}).
%
However, due to the latency of the hand tracking and of the visual hand reacting to the cube, almost all participants took the Occlusion rendering to be a \enquote{shadow} of the real hand on the cube.


@@ -5,7 +5,7 @@ Providing haptic feedback during free-hand manipulation in AR is not trivial, as
%
Moreover, it is important to leave the user capable of interacting with both virtual and real objects, avoiding the use of haptic interfaces that cover the fingertips or palm.
%
For this reason, it is often considered beneficial to move the point of application of the haptic rendering elsewhere on the hand.% (\secref{haptics}).
This second experiment aims to evaluate whether a visuo-haptic hand rendering affects the performance and user experience of manipulation of virtual objects with bare hands in AR.
%
@@ -102,7 +102,7 @@ We considered the same two tasks as in Experiment \#1, described in \secref[visu
\item \emph{Vibrotactile Positioning}: the five positionings for providing vibrotactile hand rendering of the virtual contacts, as described in \secref{positioning}.
\item \emph{Contact Vibration Technique}: the two contact vibration techniques, as described in \secref{technique}.
\item \emph{Visual Hand Rendering}: two visual hand renderings from the first experiment, Skeleton (Skel) and None, as described in \secref[visual_hand]{hands}; we considered Skeleton as it performed best in terms of performance and perceived effectiveness, and None as a reference.
\item \emph{Target}: we considered target volumes located at NW and SW during the Push task, and at NE, NW, SW, and SE during the Grasp task (\figref{tasks}); we considered these targets because they presented different difficulties.
\end{itemize}
To account for learning and fatigue effects, the positioning of the vibrotactile hand rendering (Positioning) was counter-balanced using a balanced \numproduct{10 x 10} Latin square.
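The construction of the square is not detailed here, but a common one for an even number of conditions is sketched below: the first row interleaves conditions from both ends (0, n-1, 1, n-2, ...), and each subsequent row shifts it by one, so every condition appears once per column and precedes every other condition equally often.

# Balanced Latin square construction for even n (illustrative sketch).
def balanced_latin_square(n):
    assert n % 2 == 0, "this construction requires an even n"
    first, lo, hi = [], 0, n - 1
    for i in range(n):
        if i % 2 == 0:
            first.append(lo)
            lo += 1
        else:
            first.append(hi)
            hi -= 1
    return [[(c + shift) % n for c in first] for shift in range(n)]

orders = balanced_latin_square(10)  # one row per participant, as in a 10 x 10 square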


@@ -32,7 +32,7 @@ Although the Distance technique provided additional feedback on the interpenetra
\figref{questions} shows the questionnaire results for each vibrotactile positioning.
%
Questionnaire results were analyzed using Aligned Rank Transform (ART) non-parametric analysis of variance (\secref{metrics}).
%
Statistically significant effects were further analyzed with post-hoc pairwise comparisons with Holm-Bonferroni adjustment.
%
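In practice, ART analyses typically rely on the ARTool package; purely to illustrate the align-then-rank step, here is a sketch for one main effect in a two-factor design (pandas column names are hypothetical, and the repeated-measures structure is ignored for brevity):

# Aligned Rank Transform for one main effect (simplified sketch).
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def art_main_effect(df, y, effect, other):
    grand = df[y].mean()
    cell = df.groupby([effect, other])[y].transform("mean")
    marginal = df.groupby(effect)[y].transform("mean")
    # Align: strip out everything but the effect of interest, then rank.
    aligned = df[y] - cell + (marginal - grand)
    df = df.assign(art_rank=aligned.rank())
    model = smf.ols(f"art_rank ~ C({effect}) * C({other})", data=df).fit()
    return anova_lm(model)  # interpret only the row for `effect`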


@@ -14,7 +14,7 @@
\subfig[0.24]{results/Grasp-GripAperture-Location-Overall-Means}%[\centering Distance between thumb and the other fingertips when grasping.]
\end{subfigswide}
Results were analyzed in the same way as in the first experiment (\secref{results}).
%
The LMMs were fitted with the order of the five vibrotactile positionings (Order), the vibrotactile positionings (Positioning), the visual hand rendering (Hand), the contact vibration techniques (Technique), and the target volume position (Target), and their interactions as fixed effects, and Participant as a random intercept.
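As a sketch, such a model can be fitted with statsmodels (column names are hypothetical, and the full interaction structure is abbreviated for readability):

# LMM with by-participant random intercept (illustrative sketch).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trials.csv")  # assumed long-format trial data
lmm = smf.mixedlm(
    "CompletionTime ~ C(Order) + C(Positioning) * C(Hand) * C(Technique)"
    " + C(Target)",
    data=df,
    groups=df["Participant"],  # by-participant random intercept
).fit()
print(lmm.summary())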


@@ -3,7 +3,7 @@
We evaluated sixteen visuo-haptic renderings of the hand in the same two virtual object manipulation tasks in AR as in the first experiment: the combination of two vibrotactile contact techniques, provided at four delocalized positions on the hand, with the two most representative visual hand renderings established in the first experiment.
In the Push task, the vibrotactile haptic hand rendering proved beneficial with the Proximal positioning, which registered a low completion time, but detrimental with the Fingertips positioning, which performed worse (\figref{results/Push-CompletionTime-Location-Overall-Means}) than the Proximal and Opposite (on the contralateral hand) positionings.
%
The cause might be the intensity of the vibrations, which many participants found rather strong and possibly distracting when provided at the fingertips.
%
@@ -13,9 +13,9 @@ Another reason could be the visual impairment caused by the vibrotactile motors
We observed different strategies than in the first experiment for the two tasks.
%
During the Push task, participants made more and shorter contacts to adjust the cube inside the target volume (\figref{results/Push-Contacts-Location-Overall-Means} and \figref{results/Push-TimePerContact-Location-Overall-Means}).
%
During the Grasp task, participants pressed the cube \percent{25} harder on average (\figref{results/Grasp-GripAperture-Location-Overall-Means}).
%
The Fingertips and Proximal positionings led to a slightly larger grip aperture than the others.
%
@@ -23,23 +23,23 @@ We think that the proximity of the vibrotactile rendering to the point of contac
%
This could also be the cause of the higher number of failed grasps or cube drops: indeed, we observed that the larger the grip aperture, the higher the number of contacts.
%
Consequently, the Fingertips positioning was slower (\figref{results/Grasp-CompletionTime-Location-Overall-Means}) and more prone to error (\figref{results/Grasp-Contacts-Location-Overall-Means}) than the Opposite and Nowhere positionings.
In both tasks, the Opposite positioning also seemed to be faster (\figref{results/Push-CompletionTime-Location-Overall-Means}) than having no vibrotactile hand rendering (Nowhere positioning).
%
However, participants also reported a higher workload (\figref{questions}) with this positioning opposite to the site of the interaction.
%
This result might mean that participants focused more on learning to interpret these sensations, which led to better performance in the long run.
Overall, many participants appreciated the vibrotactile hand renderings, commenting that they made the tasks more realistic and easier.
%
However, the closer the vibrotactile rendering was to the contact point, the better it was perceived (\figref{questions}).
%
This seemed inversely correlated with performance, except for the Nowhere positioning: \eg both the Fingertips and Proximal positionings were perceived as more effective, useful, and realistic than the other positionings despite lower performance.
Considering the two tasks, no clear difference in performance or appreciation was found between the two contact vibration techniques.
%
While the majority of participants discriminated between the two techniques, only a minority identified them correctly (\secref{technique_results}).
%
It seemed that the Impact technique was sufficient to provide contact information, compared to the Distance technique, which provided additional feedback on interpenetration, as reported by participants.
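To make the distinction concrete, a hedged reconstruction of the two techniques is sketched below, based only on how they are characterized above (Impact: information at contact onset; Distance: additional feedback proportional to interpenetration); the function names and constants are assumptions, not the thesis' implementation.

# Hypothetical reconstruction of the two contact vibration techniques.
def impact_amplitude(contact_started):
    # A short fixed burst when contact begins, nothing afterwards (assumed).
    return 1.0 if contact_started else 0.0

def distance_amplitude(penetration, max_penetration=0.02):
    # Amplitude grows with interpenetration depth, clamped to [0, 1] (assumed).
    return max(0.0, min(1.0, penetration / max_penetration))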