Better figures
@@ -31,7 +31,7 @@ Instead, wearable interfaces are directly mounted on the body to provide cutaneo

\begin{subfigs}{haptic-categories}{
Haptic devices can be classified into three categories according to their interface with the user:
}[][
\item graspable,
\item touchable, and
\item wearable. Adapted from \textcite{culbertson2018haptics}.
@@ -48,7 +48,7 @@ But their use in combination with \AR has been little explored so far.

\begin{subfigs}{wearable-haptics}{
Wearable haptic devices can render sensations on the skin as feedback to real or virtual objects being touched.
}[][
\item Wolverine, a wearable exoskeleton that simulates contact and grasping of virtual objects with force feedback on the fingers \cite{choi2016wolverine}.
\item Touch\&Fold, a wearable haptic device mounted on the nail that folds on demand to render contact, normal force and vibrations to the fingertip \cite{teng2021touch}.
\item The hRing, a wearable haptic ring mounted on the proximal phalanx able to render normal and shear forces to the finger \cite{pacchierotti2016hring}.
@@ -72,7 +72,7 @@ Between these two extremes lies \MR, which comprises \AR and \VR as different le
\AR/\VR is most often understood as addressing only the visual sense, and, like haptics, it can take many forms as a user interface.
The most promising devices are \AR headsets, which are portable displays worn directly on the head, providing the user with an immersive \AE/\VE.

\begin{subfigs}{rv-continuums}{Reality-virtuality continuums. }[][
\item For the visual sense, as originally proposed by and adapted from \textcite{milgram1994taxonomy}.
\item Extension to include the haptic sense on a second, orthogonal axis, proposed by and adapted from \textcite{jeon2009haptic}.
]
@@ -94,9 +94,7 @@ All visual \VOs are inherently intangible and cannot physically constrain a user
It is therefore necessary to provide haptic feedback that is consistent with the visual \AE and ensures the best possible user experience.
The integration of wearable haptics with \AR seems to be one of the most promising solutions, but it remains challenging due to their many respective characteristics and the additional constraints of combining them.

\begin{subfigs}{visuo-haptic-environments}{Visuo-haptic environments with different degrees of reality-virtuality. }[][
\item Visual \AR environment with a real, tangible haptic object used as a proxy to manipulate a \VO \cite{kahl2023using}.
\item Visual \AR environment with a wearable haptic device that provides virtual, synthetic feedback from contact with a \VO \cite{meli2018combining}.
\item A tangible object seen in a visual \VR environment whose haptic perception of stiffness is augmented with the hRing haptic device \cite{salazar2020altering}.
@@ -121,9 +119,7 @@ The \RE and the user's hand are tracked in real time by sensors and reconstructe
The interactions between the virtual hand and objects are then simulated and rendered as visual and haptic feedback to the user using an \AR headset and a wearable haptic device.
Because the visuo-haptic \VE is displayed in real time, colocalized and aligned with the real one, the user is given the illusion of directly perceiving and interacting with the virtual content as if it were part of the \RE.

\fig{interaction-loop}{The interaction loop between a user and a visuo-haptic augmented environment.}[
One interacts with the visual (in blue) and haptic (in red) virtual environment through a virtual hand (in purple) interaction technique that tracks real hand movements and simulates contact with \VOs.
The virtual environment is rendered back to the user co-localized with the real one (in gray) using a visual \AR headset and a wearable haptic device.
]
@@ -186,12 +182,10 @@ We consider two main axes of research, each addressing one of the research chall
\end{enumerate*}
Our contributions in these two axes are summarized in \figref{contributions}.

\fig[0.95]{contributions}{Summary of our contributions through the simplified interaction loop.}[
The contributions are represented in dark gray boxes, and the research axes in light green circles.
The first axis is \textbf{(I)} the design and evaluation of the perception of visuo-haptic texture augmentations of tangible surfaces, directly touched by the hand.
The second axis focuses on \textbf{(II)} improving the manipulation of \VOs with the bare hand using visuo-haptic augmentations of the hand as interaction feedback.
]

\subsectionstarbookmark{Modifying the Perception of Tangible Surfaces with Visuo-Haptic Texture Augmentations}
@@ -9,7 +9,6 @@ It also allows us to act on these objects with the hand, to come into contact wi
This implies that the haptic perception is localized at the points of contact between the hand and the environment, \ie we cannot haptically perceive an object without actively touching it.
These two mechanisms, \emph{action} and \emph{perception}, are therefore closely associated and both are essential to form the haptic experience of interacting with the environment using the hand \cite{lederman2009haptic}.


\subsection{The Haptic Sense}
\label{haptic_sense}

@@ -38,18 +37,18 @@ There are also two types of thermal receptors implanted in the skin, which respo
Finally, free nerve endings (without specialized receptors) provide information about pain \cite{mcglone2007discriminative}.

\begin{tab}{cutaneous_receptors}{Characteristics of the cutaneous mechanoreceptors.}[
Adaptation rate is the speed and duration of the receptor's response to a stimulus. Receptive size is the area of skin detectable by a single receptor. Sensitivities are the stimuli detected by the receptor. Adapted from \textcite{mcglone2007discriminative} and \textcite{johansson2009coding}.
]
\begin{tabularx}{\linewidth}{p{1.7cm} p{2cm} p{2cm} X}
\toprule
\textbf{Receptor} & \textbf{Adaptation Rate} & \textbf{Receptive Size} & \textbf{Sensitivities} \\
\midrule
Meissner & Fast & Small & Discontinuities (\eg edges), medium-frequency vibration (\qtyrange{5}{50}{\Hz}) \\
Merkel & Slow & Small & Pressure, low-frequency vibration (\qtyrange{0}{5}{\Hz}) \\
Pacinian & Fast & Large & High-frequency vibration (\qtyrange{40}{400}{\Hz}) \\
Ruffini & Slow & Large & Skin stretch \\
\bottomrule
\end{tabularx}
\end{tab}

\subsubsection{Kinesthetic Sensitivity}
@@ -67,7 +66,6 @@ By providing sensory feedback in response to the position and movement of our li
This allows us to plan and execute precise movements to touch or grasp a target, even with our eyes closed.
Cutaneous mechanoreceptors are essential for this perception because any movement of the body or contact with the environment necessarily deforms the skin \cite{johansson2009coding}.


\subsection{Hand-Object Interactions}
\label{hand_object_interactions}

@@ -80,17 +78,17 @@ These receptors give the hand its great tactile sensitivity and great dexterity
\textcite{jones2006human} have proposed a sensorimotor continuum of hand functions, from mainly sensory activities to activities with a larger motor component.
As illustrated in \figref{sensorimotor_continuum}, \Citeauthor{jones2006human} propose to delineate four categories of hand function on this continuum:
\begin{itemize}
\item \emph{Passive touch}, or tactile sensing, is the ability to perceive an object through cutaneous sensations with a static hand contact. The object may be moving, but the hand remains static. It allows for relatively good surface perception, \eg in \textcite{gunther2022smooth}.
\item \emph{Exploration}, or active haptic sensing, is the manual and voluntary exploration of an object with the hand, involving all cutaneous and kinesthetic sensations. It enables a more precise perception than passive touch \cite{lederman2009haptic}.
\item \emph{Prehension} is the action of grasping and holding an object with the hand. It involves fine coordination between hand and finger movements and the haptic sensations produced.
\item \emph{Gestures}, or non-prehensile skilled movements, are motor activities without constant contact with an object. Examples include pointing at a target, typing on a keyboard, accompanying speech with gestures, or signing in sign language, \eg in \textcite{yoon2020evaluating}.
\end{itemize}

\fig[0.65]{sensorimotor_continuum}{
The sensorimotor continuum of the hand function proposed by and adapted from \textcite{jones2006human}.
}[
Functions of the hand are classified into four categories based on the relative importance of sensory and motor components.
Icons are from \href{https://thenounproject.com/creator/leremy/}{Gan Khoon Lay} / \href{https://creativecommons.org/licenses/by/3.0/}{CC BY}.
]

This classification has been further refined by \textcite{bullock2013handcentric} into 15 categories of possible hand interactions with an object.
@@ -113,13 +111,13 @@ Thus the thumb has 5 DoFs, each of the other four fingers has 4 DoFs and the wri

This complex structure enables the hand to perform a wide range of movements and gestures. However, the way we explore and grasp objects follows simpler patterns, depending on the object being touched and the aim of the interaction.

\begin{subfigs}{hand}{Anatomy and motion of the hand. }[][
\item Schema of the hand skeleton. Adapted from \textcite{blausen2014medical}.
\item Kinematic model of the hand with 27 \DoFs \cite{erol2007visionbased}.
]
\subfigsheight{58mm}
\subfig{blausen2014medical_hand}
\subfig{kinematic_hand_model}
\end{subfigs}

\subsubsection{Exploratory Procedures}
@@ -158,7 +156,6 @@ This can be explained by the sensitivity of the fingertips (\secref{haptic_sense

\fig{gonzalez2014analysis}{Taxonomy of grasp types of \textcite{gonzalez2014analysis}}[, classified according to their type (power, precision or intermediate) and the shape of the grasped object. Each grasp shows the area of the palm and fingers in contact with the object, together with an example object.]


\subsection{Haptic Perception of Roughness and Hardness}
\label{object_properties}

@@ -174,7 +171,6 @@ These properties are described and rated\footnotemark using scales opposing two

The most salient and fundamental perceived material properties are the roughness and hardness of the object \cite{hollins1993perceptual,baumgartner2013visual}, which are also the most studied and best understood \cite{bergmanntiest2010tactual}.


\subsubsection{Roughness}
\label{roughness}

@@ -196,17 +192,17 @@ However, the speed of exploration affects the perceived intensity of micro-rough
To establish the relationship between spacing and intensity for macro-roughness, patterned textured surfaces were manufactured: as a linear grating (on one axis) composed of ridges and grooves, \eg in \figref{lawrence2007haptic_1} \cite{lederman1972fingertip,lawrence2007haptic}, or as a surface composed of micro conical elements on two axes, \eg in \figref{klatzky2003feeling_1} \cite{klatzky2003feeling}.
As shown in \figref{lawrence2007haptic_2}, there is a quadratic relationship between the logarithm of the perceived roughness intensity $r$ and the logarithm of the space between the elements $s$ ($a$, $b$ and $c$ are empirical parameters to be estimated) \cite{klatzky2003feeling}:
\begin{equation}{roughness_intensity}
\log(r) \sim a \, \log(s)^2 + b \, \log(s) + c
\end{equation}
A larger spacing between elements increases the perceived roughness, but it plateaus at spacings above \qty{\sim 5}{\mm} for the linear grating \cite{lawrence2007haptic}, while it decreases beyond \qty{\sim 2.5}{\mm} for the conical elements \cite{klatzky2003feeling}.
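This inverted-U shape can be made explicit with a short derivation (a sketch from the fitted quadratic above, assuming a negative curvature $a$ and using an illustrative equation label): the spacing $s^*$ of maximum perceived roughness is where the derivative with respect to $\log(s)$ vanishes, consistent with the decrease reported for the conical elements.
\begin{equation}{roughness_intensity_peak} % illustrative label, not in the original source
\frac{\mathrm{d} \log(r)}{\mathrm{d} \log(s)} = 2 \, a \log(s^*) + b = 0
\quad \Rightarrow \quad
\log(s^*) = -\frac{b}{2a}
\end{equation}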

\begin{subfigs}{lawrence2007hapti}{Estimation of haptic roughness of a linear grating surface by active exploration \cite{lawrence2007haptic}. }[][
\item Schema of a linear grating surface, composed of ridges and grooves.
\item Perceived intensity of roughness (vertical axis) of the surface as a function of the size of the grooves (horizontal axis, interval of \qtyrange{0.125}{4.5}{mm}), the size of the ridges (RW, circles and squares) and the mode of exploration (with the finger in white and via a rigid probe held in hand in black).
]
\subfigsheight{56mm}
\subfig{lawrence2007haptic_1}
\subfig{lawrence2007haptic_2}
\end{subfigs}

It is also possible to perceive the roughness of a surface by \emph{indirect touch}, with a tool held in the hand, for example by writing with a pen on paper \cite{klatzky2003feeling}.
@@ -215,19 +211,19 @@ But this information is sufficient to feel the roughness, which perceived intens
The intensity peak varies with the size of the contact surface of the tool, \eg a small tool allows perceiving finer spaces between the elements than with the finger (\figref{klatzky2003feeling_2}).
However, as the speed of exploration changes the transmitted vibrations, a faster speed shifts the perceived intensity peak slightly to the right, \ie decreasing perceived roughness for fine spacings and increasing it for large spacings \cite{klatzky2003feeling}.

\begin{subfigs}{klatzky2003feeling}{Estimation of haptic roughness of a surface of conical micro-elements by active exploration \cite{klatzky2003feeling}. }[][
\item Electron micrograph of conical micro-elements on the surface.
\item Perceived intensity of roughness (vertical axis) of the surface as a function of the average spacing of the elements (horizontal axis, interval of \qtyrange{0.8}{4.5}{mm}) and the mode of exploration (with the finger in black and via a rigid probe held in hand in white).
]
\subfig[.25]{klatzky2003feeling_1}
\subfig[.5]{klatzky2003feeling_2}
\end{subfigs}

Even when the fingertips are deafferented (absence of cutaneous sensations), the perception of roughness is maintained \cite{libouton2012tactile}, thanks to the propagation of vibrations in the finger, hand and wrist, for both patterned and \enquote{natural} everyday textures \cite{delhaye2012textureinduced}.
The spectrum of vibrations shifts to higher frequencies as the exploration speed increases, but the brain integrates this change with proprioception to maintain a \emph{constant perception} of the texture.
For patterned textures, as illustrated in \figref{delhaye2012textureinduced}, the ratio of the finger speed $v$ to the frequency of the vibration intensity peak $f_p$ is, most of the time, measured to be equal to the period $\lambda$ of the spacing of the elements:
\begin{equation}{grating_vibrations}
\lambda \sim \frac{v}{f_p}
\end{equation}
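As a quick numerical illustration of this relation (values chosen for illustration only): sliding the finger at $v = 100~\mathrm{mm/s}$ across a patterned texture with an element spacing of $\lambda = 1~\mathrm{mm}$ concentrates the vibration energy near
\begin{equation}{grating_vibrations_example} % illustrative label and values, not in the original source
f_p = \frac{v}{\lambda} = \frac{100~\mathrm{mm/s}}{1~\mathrm{mm}} = 100~\mathrm{Hz},
\end{equation}
which falls within the sensitivity range of the Pacinian receptors (\qtyrange{40}{400}{\Hz}).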

The vibrations generated by exploring everyday textures are also very specific to each texture and similar between individuals, making them identifiable by vibration alone \cite{manfredi2014natural,greenspon2020effect}.
@@ -239,7 +235,6 @@ The everyday textures are more complex to study because they are composed of mul
In addition, the perceptions of micro and macro roughness overlap and are difficult to distinguish \cite{okamoto2013psychophysical}.
Thus, individuals have a subjective definition of roughness, with some paying more attention to larger elements and others to smaller ones \cite{bergmanntiest2007haptic}, or even including other perceptual properties such as hardness or friction \cite{bergmanntiest2010tactual}.


\subsubsection{Hardness}
\label{hardness}

@@ -255,20 +250,20 @@ Passive touch (without voluntary hand movements) and tapping allow a perception
Two physical properties of an object determine the haptic perception of its hardness: its stiffness and its elasticity, as shown in \figref{hardness} \cite{bergmanntiest2010tactual}.
The \emph{stiffness} $k$ of an object is the ratio between the applied force $F$ and the resulting \emph{displacement} $D$ of the surface:
\begin{equation}{stiffness}
k = \frac{F}{D}
\end{equation}

The \emph{elasticity} of an object is expressed by its Young's modulus $Y$, which is the ratio between the applied pressure (the force $F$ per unit area $A$) and the resulting deformation $D / l$ (the relative displacement) of the object:
\begin{equation}{young_modulus}
Y = \frac{F / A}{D / l}
\end{equation}
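The two quantities are related: for a homogeneous object of length $l$ compressed uniformly over a contact area $A$ (a sketch under this idealized assumption, with an illustrative equation label), combining the two equations above gives
\begin{equation}{stiffness_young_link} % illustrative label; follows from the two equations above for uniform compression
k = \frac{F}{D} = \frac{Y A}{l}
\end{equation}
\ie the stiffness depends on the geometry of the object, whereas the Young's modulus characterizes the material alone, which is why the two cues can be varied independently.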

\begin{subfigs}{stiffness_young}{Perceived hardness of an object by finger pressure. }[][
\item Diagram of an object with a stiffness coefficient $k$ and a length $l$ compressed by a force $F$ on an area $A$ by a distance $D$.
\item Identical perceived hardness intensity between Young's modulus (horizontal axis) and stiffness (vertical axis). The dashed and dotted lines indicate the objects tested, the arrows the correspondences made between these objects, and the grey lines the predictions of the quadratic relationship \cite{bergmanntiest2009cues}.
]
\subfig[.3]{hardness}
\subfig[.45]{bergmanntiest2009cues}
\end{subfigs}

\textcite{bergmanntiest2009cues} showed the role of these two physical properties in the perception of hardness.
@@ -284,7 +279,6 @@ In addition, an object with low stiffness but high Young's modulus can be percei
%When pressing with the finger, the perceived (subjective) intensity of hardness follows a power law with the stiffness, with an exponent of \num{0.8} \cite{harper1964subjective}, \ie when the stiffness doubles, the perceived hardness increases by a factor of \num{1.7}.
%\textcite{bergmanntiest2009cues} thus observed a quadratic relationship of equal perceived hardness intensity, as illustrated in \figref{bergmanntiest2009cues}.


%\subsubsection{Friction}
%\label{friction}
%
@@ -333,7 +327,6 @@ In addition, an object with low stiffness but high Young's modulus can be percei
%The heat transfer rate, described by $\tau$, and the temperature difference $T_s - T_e$ are the two essential cues for the perception of temperature.
%Under everyday conditions, with a room temperature of \qty{20}{\celsius}, a relative difference in heat transfer rate of \percent{43} or a difference of \qty{2}{\celsius} is necessary to perceive a temperature difference \cite{bergmanntiest2009tactile}.


%\subsubsection{Spatial Properties}
%\label{spatial_properties}

@@ -367,7 +360,6 @@ In addition, an object with low stiffness but high Young's modulus can be percei
% \subfig{plaisier2009salient_2}
%\end{subfigs}


\subsection{Conclusion}
\label{haptic_sense_conclusion}

@@ -17,7 +17,7 @@ An increasing wearability resulting in the loss of the system's kinesthetic feed

\begin{subfigs}{pacchierotti2017wearable}{
Schematic wearability level of haptic devices for the hand \cite{pacchierotti2017wearable}.
}[][
\item World-grounded haptic devices are fixed to the environment to provide kinesthetic feedback to the user.
\item Exoskeletons are body-grounded kinesthetic devices.
\item Wearable haptic devices are grounded on the point of application of the tactile stimulus.
@@ -41,9 +41,7 @@ Such \emph{body-grounded} devices are often heavy and bulky and cannot be consid
An approach is then to move the grounding point very close to the end-effector (\figref{pacchierotti2017wearable_3}): the interface is limited to cutaneous haptic feedback, but its design is more compact, lightweight, comfortable and portable, \eg in \figref{grounded_to_wearable}.
Moreover, as detailed in \secref{object_properties}, cutaneous sensations are necessary and often sufficient for the perception of the haptic properties of an object explored with the hand, as also argued by \textcite{pacchierotti2017wearable}.

\begin{subfigs}{grounded_to_wearable}{Haptic devices for the hand with different wearability levels. }[][
\item Teleoperation of a virtual cube grasped with the thumb and index fingers each attached to a grounded haptic device \cite{pacchierotti2015cutaneous}.
\item A passive exoskeleton for fingers simulating the stiffness of a trumpet's pistons \cite{achibet2017flexifingers}.
\item Manipulation of a virtual cube with the thumb and index fingers each fitted with the 3-RSR wearable haptic device \cite{leonardis20173rsr}.
@@ -84,7 +82,7 @@ Although these two types of effector can be considered wearable, their actuation

\begin{subfigs}{normal_actuators}{
Normal indentation actuators for the fingertip.
}[][
\item A moving platform actuated with cables \cite{gabardi2016new}.
\item A moving platform actuated by articulated limbs \cite{perez2017optimizationbased}.
\item Diagram of a pin-array of tactors \cite{sarakoglou2012high}.
@@ -112,7 +110,7 @@ By turning in opposite directions, the motors shorten the belt and create a sens
Conversely, by turning simultaneously in the same direction, the belt pulls on the skin, creating a shearing sensation.
The simplicity of this approach allows the belt to be placed anywhere on the hand, leaving the fingertip free to interact with the \RE, \eg the hRing on the proximal phalanx in \figref{pacchierotti2016hring} \cite{pacchierotti2016hring} or Tasbi on the wrist in \figref{pezent2022design} \cite{pezent2022design}.

\begin{subfigs}{tangential_belts}{Tangential motion actuators and compression belts. }[][
\item A skin stretch actuator for the fingertip \cite{leonardis2015wearable}.
\item A 3 \DoF actuator capable of normal and tangential motion on the fingertip \cite{schorr2017fingertip}.
%\item A shearing belt actuator for the fingertip \cite{minamizawa2007gravity}.
@@ -137,7 +135,7 @@ Several types of vibrotactile actuators are used in haptics, with different trad

An \ERM is a \DC motor that rotates an off-center mass when a voltage or current is applied (\figref{precisionmicrodrives_erm}). \ERMs are easy to control, inexpensive and can be encapsulated in a cylinder or coin form factor of a few millimeters. However, they have only one \DoF because both the frequency and amplitude of the vibration are coupled to the speed of the rotation, \eg low (high) frequencies are output at low (high) amplitudes, as shown in \figref{precisionmicrodrives_erm_performances}.
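This coupling can be sketched from first principles (standard rotor dynamics, not taken from the original source, with an illustrative label and notation): an eccentric mass $m$ at a distance $e$ from the motor axis, spinning at angular speed $\omega$, generates a rotating centripetal force of magnitude
\begin{equation}{erm_coupling} % illustrative label and notation, not in the original source
F = m \, e \, \omega^2,
\end{equation}
so both the vibration frequency ($\omega / 2\pi$) and its amplitude (through $F \propto \omega^2$) grow with the rotation speed and cannot be set independently.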

\begin{subfigs}{erm}{Diagram and performance of \ERMs. }[][
\item Diagram of a cylindrical encapsulated \ERM. From Precision Microdrives~\footnotemark.
\item Amplitude and frequency output of an \ERM as a function of the input voltage.
]
@@ -158,7 +156,7 @@ Piezoelectric actuators deform a solid material when a voltage is applied.
They are very small and thin and provide two \DoFs of amplitude and frequency control.
However, they require high voltages to operate, limiting their use in wearable devices.

\begin{subfigs}{lra}{Diagram and performance of \LRAs. }[][
\item Diagram. From Precision Microdrives~\footnotemarkrepeat.
\item Force generated by two \LRAs as a function of sinusoidal wave input with different frequencies: both their maximum force and resonant frequency are different \cite{azadi2014vibrotactile}.
]
@@ -238,7 +236,7 @@ Alternative models have been proposed to both render both isotropic and patterne
When comparing real textures felt through a stylus with their virtual models rendered with a voice-coil actuator attached to the stylus (\figref{culbertson2012refined}), the virtual textures were found to accurately reproduce the perception of roughness, but hardness and friction were not rendered properly \cite{culbertson2014modeling}.
\textcite{culbertson2015should} further showed that the perceived realism of the virtual textures, and similarity to the real textures, depended mostly on the user's speed but not on the user's force as inputs to the model, \ie responding to speed is sufficient to render isotropic virtual textures.

\begin{subfigs}{textures_rendering_data}{Augmenting haptic texture perception with voice-coil actuators. }[][
\item Increasing and decreasing the perceived roughness of a real patterned texture in direct touch \cite{asano2015vibrotactile}.
\item Comparing real patterned texture with virtual texture augmentation in direct touch \cite{friesen2024perceived}.
\item Rendering virtual contacts in direct touch with the virtual texture \cite{ando2007fingernailmounted}.
@@ -270,7 +268,9 @@ The displacement $x_r(t)$ is estimated with the reaction force and the tapping v
As shown in \figref{jeon2009haptic_2}, the force $\tilde{f_r}(t)$ perceived by the user is modulated, but not the displacement $x_r(t)$, hence the perceived stiffness is $\tilde{k}(t)$.
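In other words (a sketch using the symbols above, where $f_a(t)$ denotes the additional force commanded to the device; see \textcite{jeon2009haptic} for the exact formulation), the perceived stiffness becomes
\begin{equation}{stiffness_modulation_sketch} % illustrative label and notation, not in the original source
\tilde{k}(t) = \frac{\tilde{f_r}(t)}{x_r(t)} = \frac{f_r(t) + f_a(t)}{x_r(t)},
\end{equation}
which is larger than the real stiffness whenever $f_a(t)$ pushes back against the user, and smaller when it assists the motion.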
This stiffness augmentation technique was then extended to allow tapping and pressing with 3 \DoFs \cite{jeon2010stiffness}, to render friction and weight augmentations \cite{jeon2011extensions}, and to grasp and squeeze the real object with two contact points \cite{jeon2012extending}.

\begin{subfigs}{stiffness_rendering_grounded}{
Augmenting the perceived stiffness of a real surface with a hand-held force-feedback device.
}[][
\item Diagram of a user tapping the surface \cite{jeon2009haptic}.
\item Displacement-force curves of a real rubber ball (dashed line) and when its perceived stiffness $\tilde{k}$ is modulated \cite{jeon2009haptic}.
]
@@ -283,7 +283,9 @@ More importantly, the augmentation proved to be robust to the placement of the d
Conversely, the technique made it possible to \emph{decrease} the perceived stiffness by compressing the phalanx before the contact and reducing the pressure when the user pressed the piston \cite{salazar2020altering}.
\textcite{tao2021altering} proposed instead to restrict the deformation of the fingerpad by pulling a hollow frame around it to decrease perceived stiffness (\figref{tao2021altering}): it augments the finger contact area and thus the perceived Young's modulus of the object (\secref{hardness}).

\begin{subfigs}{stiffness_rendering_wearable}{
Modifying the perceived stiffness with wearable pressure devices.
}[][
\item Modifying the perceived stiffness of a piston by pressing the finger during or prior to the contact \cite{detinguy2018enhancing,salazar2020altering}.
\item Decreasing the perceived stiffness of a hard object by restricting the fingerpad deformation \cite{tao2021altering}.
]
@@ -303,12 +305,12 @@ It has been shown that these material properties perceptually express the stiffn
Therefore, when contacting or tapping a real object through an indirect feel-through interface that provides such vibrations (\figref{choi2021augmenting_control}) using a voice-coil (\secref{vibrotactile_actuators}), the perceived stiffness can be increased or decreased \cite{kuchenbecker2006improving,hachisu2012augmentation,choi2021augmenting}, \eg a sponge feels stiffer or wood feels softer (\figref{choi2021augmenting_results}).
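Such contact transients are commonly modeled, \eg in \textcite{kuchenbecker2006improving} and \textcite{hachisu2012augmentation}, as an exponentially decaying sinusoid (written here in a generic notation, not that of the original papers):
\begin{equation}{contact_transient_model} % illustrative label and notation, not in the original source
q(t) = A(v) \, e^{-B t} \sin(2 \pi f t),
\end{equation}
where the amplitude $A$ scales with the impact velocity $v$, while the decay rate $B$ and frequency $f$ are tuned to evoke a softer or harder material.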
A challenge with this technique is to provide the vibration feedback at the right time to be felt simultaneously with the real contact \cite{park2023perceptual}.

\begin{subfigs}{contact_vibrations}{
Augmenting perceived stiffness using vibrations when touching a real surface \cite{choi2021augmenting}.
}[][
\item Voltage inputs (top) to the voice-coil for soft, medium, and hard vibrations, with the corresponding displacement (middle) and force (bottom) outputs of the actuator.
\item Perceived stiffness intensity of real sponge (\enquote{Sp}) and wood (\enquote{Wd}) surfaces without added vibrations (\enquote{N}) and modified by soft (\enquote{S}), medium (\enquote{M}) and hard (\enquote{H}) vibrations.
]

\subfigsheight{50mm}
\subfig{choi2021augmenting_control}
\subfig{choi2021augmenting_results}
@@ -13,7 +13,6 @@ Immersive systems such as headsets leave the hands free to interact with \VOs, p
% \subfig{sutherland1970computer2}
%\end{subfigs}


\subsection{What is Augmented Reality?}
\label{what_is_ar}

@@ -33,7 +32,6 @@ Yet, most of the research have focused on visual augmentations, and the term \AR
\footnotetext{This third characteristic has been slightly adapted to use the version of \textcite{marchand2016pose}, the original definition was: \enquote{registered in \ThreeD}.}
%For example, \textcite{milgram1994taxonomy} proposed a taxonomy of \MR experiences based on the degree of mixing real and virtual environments, and \textcite{skarbez2021revisiting} revisited this taxonomy to include the user's perception of the experience.


\subsubsection{Applications of AR}
\label{ar_applications}

@@ -43,22 +41,19 @@ It can also guide workers in complex tasks, such as assembly, maintenance or ver
Most (visual) \AR/\VR experiences can now be implemented with commercially available hardware and software solutions, in particular for tracking, rendering and display.
Yet, the user experience in \AR is still highly dependent on the display used.

\begin{subfigs}{ar_applications}{Examples of \AR applications. }[][
\item Visuo-haptic surgery training with cutting into virtual soft tissues \cite{harders2009calibration}.
\item \AR can interactively guide users in document verification tasks by recognizing and comparing documents with virtual references \cite{hartl2013mobile}.
\item SpaceTop is a transparent \AR desktop computer featuring direct hand manipulation of \ThreeD content \cite{lee2013spacetop}.
\item Inner Garden is a spatial \AR zen garden made of real sand visually augmented to create a mini world that can be reshaped by hand \cite{roo2017inner}.
]
\subfigsheight{41mm}
\subfig{harders2009calibration}
\subfig{hartl2013mobile}
\subfig{lee2013spacetop}
\subfig{roo2017inner}
\end{subfigs}


\subsubsection{AR Displays}
\label{ar_displays}

@@ -75,15 +70,15 @@ These displays feature a direct, preserved view of the \RE at the cost of more d
Finally, \emph{projection-based \AR} overlays the virtual images on the real world using a projector, as illustrated in \figref{roo2017one_2}, \eg \figref{roo2017inner}.
It does not require the user to wear the display, but requires a real surface to project the virtual content on, and is vulnerable to shadows created by the user or the real objects \cite{billinghurst2015survey}.

-\begin{subfigs}{ar_displays}{Simplified operating diagram of \AR display methods. }[
+\begin{subfigs}{ar_displays}{Simplified operating diagram of \AR display methods. }[][
\item \VST-\AR \cite{itoh2022indistinguishable}.
\item \OST-\AR \cite{itoh2022indistinguishable}.
\item Spatial \AR \cite{roo2017one}.
]
\subfigsheight{44mm}
\subfig{itoh2022indistinguishable_vst}
\subfig{itoh2022indistinguishable_ost}
\subfig{roo2017one_2}
\end{subfigs}

Regardless of the \AR display, it can be placed at different locations \cite{bimber2005spatial}, as shown in \figref{roo2017one_1}.
@@ -120,13 +115,15 @@ The plausibility can be applied to \AR as is, but the \VOs must additionally hav
%\textcite{skarbez2021revisiting} also named place illusion for \AR as \enquote{immersion} and plausibility as \enquote{coherence}, and these terms will be used in the remainder of this thesis.
%One main issue with presence is how to measure it both in \VR \cite{slater2022separate} and \AR \cite{tran2024survey}.

-\begin{subfigs}{presence}{The sense of immersion in virtual and augmented environments. Adapted from \textcite{stevens2002putting}. }[
+\begin{subfigs}{presence}{
+The sense of immersion in virtual and augmented environments. Adapted from \textcite{stevens2002putting}.
+}[][
\item Place illusion is the user's sense of \enquote{being there} in the \VE.
\item Object illusion is the sense that the \VO \enquote{feels here} in the \RE.
]
\subfigsheight{35mm}
\subfig{presence-vr}
\subfig{presence-ar}
\end{subfigs}

\paragraph{Embodiment}
@@ -138,14 +135,12 @@ This illusion arises when the visual, proprioceptive and (if any) haptic sensati
It can be decomposed into three subcomponents: \emph{Agency}, which is the feeling of controlling the body; \emph{Ownership}, which is the feeling that \enquote{the body is the source of the experienced sensations}; and \emph{Self-Location}, which is the \enquote{spatial experience of being inside [the] body} \cite{kilteni2012sense}.
In \AR, it could take the form of body accessorization, \eg wearing virtual clothes or make-up in overlay, of partial avatarization, \eg using a virtual prosthesis, or of full avatarization \cite{genay2022being}.


\subsection{Direct Hand Manipulation in AR}
\label{ar_interaction}

A user in \AR must be able to interact with the virtual content to fulfil the second point of \textcite{azuma1997survey}'s definition (\secref{ar_definition}) and complete the interaction loop (\figref[introduction]{interaction-loop}).%, \eg through a hand-held controller, a tangible object, or even directly with the hands.
In all examples of \AR applications shown in \secref{ar_applications}, the user interacts with the \VE using their hands, either directly or through a physical interface.


\subsubsection{User Interfaces and Interaction Techniques}
\label{interaction_techniques}

@@ -157,7 +152,6 @@ Choosing useful and efficient \UIs and interaction techniques is crucial for the

\fig[0.5]{interaction-technique}{An interaction technique maps user inputs to actions within a computer system. Adapted from \textcite{billinghurst2005designing}.}


\subsubsection{Tasks with Virtual Environments}
\label{ve_tasks}

@@ -176,20 +170,19 @@ Wayfinding is the cognitive planning of the movement, such as path finding or ro

The \emph{system control tasks} are changes to the system state through commands or menus such as creating, deleting, or modifying \VOs, \eg as in \figref{roo2017onea}. They also include the input of text, numbers, or symbols.

-\begin{subfigs}{interaction-techniques}{Interaction techniques in \AR. }[
+\begin{subfigs}{interaction-techniques}{Interaction techniques in \AR. }[][
\item Spatial selection of a virtual item of an extended display using a hand-held smartphone \cite{grubert2015multifi}.
\item Displaying the route to follow as an overlay registered on the \RE \cite{grubert2017pervasive}.
\item Virtual drawing on a tangible object with a hand-held pen \cite{roo2017onea}.
\item Simultaneous Localization and Mapping (SLAM) algorithms such as KinectFusion \cite{newcombe2011kinectfusion} reconstruct the \RE in real time and enable registering the \VE in it.
]
\subfigsheight{36mm}
\subfig{grubert2015multifi}
\subfig{grubert2017pervasive}
\subfig{roo2017onea}
\subfig{newcombe2011kinectfusion}
\end{subfigs}


\subsubsection{Reducing the Real-Virtual Gap}
\label{real-virtual-gap}

@@ -205,7 +198,6 @@ It enables the \VE to be registered with the \RE and the user simply moves to na
However, direct hand manipulation of virtual content is a challenge that requires specific interaction techniques \cite{billinghurst2021grand}.
It is often achieved using two interaction techniques: \emph{tangible objects} and \emph{virtual hands} \cite{billinghurst2015survey,hertel2021taxonomy}.


\subsubsection{Manipulating with Tangibles}
\label{ar_tangibles}

@@ -224,20 +216,19 @@ In a pick-and-place task with tangibles of different shapes, a difference in siz
This suggests the feasibility of using simplified tangibles in \AR whose spatial properties (\secref{object_properties}) abstract those of the \VOs.
Similarly, in \secref{tactile_rendering} we described how a material property (\secref{object_properties}) of a touched tangible can be modified using wearable haptic devices \cite{detinguy2018enhancing,salazar2020altering}: this could be used to render coherent visuo-haptic material perceptions for tangibles directly touched with the hand in \AR.

-\begin{subfigs}{ar_applications}{Manipulating \VOs with tangibles. }[
+\begin{subfigs}{ar_applications}{Manipulating \VOs with tangibles. }[][
\item Ubi-Touch paired the movements and screw interaction of a virtual drill with a real vaporizer held by the user \cite{jain2023ubitouch}.
\item A tangible cube that can be moved into the \VE and used to grasp and manipulate \VOs \cite{issartel2016tangible}.
\item Size and
\item shape difference between a tangible and a \VO is acceptable for manipulation in \AR \cite{kahl2021investigation,kahl2023using}.
]
\subfigsheight{37.5mm}
\subfig{jain2023ubitouch}
\subfig{issartel2016tangible}
\subfig{kahl2021investigation}
\subfig{kahl2023using}
\end{subfigs}


\subsubsection{Manipulating with Virtual Hands}
\label{ar_virtual_hands}

@@ -259,17 +250,17 @@ The virtual phalanx follows the movements of the real phalanx, but remains const
The forces acting on the object are calculated as a function of the distance between the real and virtual hands (\figref{borst2006spring}).
More advanced techniques simulate the friction phenomena \cite{talvas2013godfinger} and finger deformations \cite{talvas2015aggregate}, allowing highly accurate and realistic interactions, but these can be difficult to compute in real time.

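As a sketch of this coupling (gains and notation are illustrative; \textcite{borst2006spring} attach linear and torsional spring-dampers between the tracked and simulated hand):

\[
\mathbf{F} = k_c \left( \mathbf{x}_{\mathrm{tracked}} - \mathbf{x}_{\mathrm{virtual}} \right) - b_c\, \dot{\mathbf{x}}_{\mathrm{virtual}},
\qquad
\boldsymbol{\tau} = k_t\, \theta\, \mathbf{u} - b_t\, \boldsymbol{\omega}_{\mathrm{virtual}},
\]

where \(\theta\) and \(\mathbf{u}\) are the angle and axis of the rotation from the virtual to the tracked frame: the farther the simulated hand lags behind the tracked one because the \VO blocks it, the larger the force applied to the \VO.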
-\begin{subfigs}{virtual-hand}{Manipulating \VOs with virtual hands. }[
+\begin{subfigs}{virtual-hand}{Manipulating \VOs with virtual hands. }[][
\item Fingertip tracking that allows selecting a \VO by opening the hand \cite{lee2007handy}.
\item Physics-based hand-object manipulation with a virtual hand made of many small rigid-body spheres \cite{hilliges2012holodesk}.
\item Grasping a \VO through gestures, when the fingers are detected as opposing on the \VO \cite{piumsomboon2013userdefined}.
\item A kinematic hand model with rigid-body phalanges (in beige) that follows the real tracked hand (in green) but is kept physically constrained to the \VO. Applied forces are shown as red arrows \cite{borst2006spring}.
]
\subfigsheight{37mm}
\subfig{lee2007handy}
\subfig{hilliges2012holodesk_1}
\subfig{piumsomboon2013userdefined_1}
\subfig{borst2006spring}
\end{subfigs}

However, the lack of physical constraints on the user's hand movements makes manipulation actions tiring \cite{hincapie-ramos2014consumed}.
@@ -277,7 +268,6 @@ While the user's fingers traverse the virtual object, a physics-based virtual ha
Finally, in the absence of haptic feedback on each finger, it is difficult to estimate the contact and forces exerted by the fingers on the object during grasping and manipulation \cite{maisto2017evaluation,meli2018combining}.
While a visual rendering of the virtual hand in \VR can compensate for these issues \cite{prachyabrued2014visual}, the visual and haptic rendering of the virtual hand, or their combination, in \AR is under-researched.


\subsection{Visual Rendering of Hands in AR}
\label{ar_visual_hands}

@@ -316,20 +306,20 @@ Taken together, these results suggest that a visual rendering of the hand in \AR
%\textcite{saito2021contact} found that masking the real hand with a textured 3D opaque virtual hand did not improve performance in a reach-to-grasp task but displaying the points of contact on the \VO did.
%To the best of our knowledge, evaluating the role of a visual rendering of the hand displayed \enquote{and seen} directly above real tracked hands in immersive OST-AR has not been explored, particularly in the context of \VO manipulation.

-\begin{subfigs}{visual-hands}{Visual hand renderings in \AR. }[
+\begin{subfigs}{visual-hands}{Visual hand renderings in \AR. }[][
\item Grasping a \VO in \OST-\AR with no visual hand rendering \cite{hilliges2012holodesk}.
\item Simulated mutual occlusion between the grasping hand and the \VO in \VST-\AR \cite{suzuki2014grasping}.
\item Grasping a real object with a semi-transparent hand in \VST-\AR \cite{buchmann2005interaction}.
\item Skeleton rendering overlaying the real hand in \VST-\AR \cite{blaga2017usability}.
\item Robotic rendering overlaying the real hands in \OST-\AR \cite{genay2021virtual}.
]
\subfigsheight{29.5mm}
\subfig{hilliges2012holodesk_2}
\subfig{suzuki2014grasping}
\subfig{buchmann2005interaction}
\subfig{blaga2017usability}
\subfig{genay2021virtual}
%\subfig{yoon2020evaluating}
\end{subfigs}

\subsection{Conclusion}
@@ -43,7 +43,9 @@ The objective was to determine a \PSE between the comparison and reference bars,
%\figref{ernst2002humans_within} shows the discrimination of participants with only the haptic or visual feedback, and how much the estimation becomes more difficult (thus higher variance) when noise is added to the visual feedback.
\figref{ernst2004merging_results} shows that when the visual noise was low, the visual feedback had more weight, but as visual noise increased, haptic feedback gained more weight, as predicted by the \MLE model.

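In outline (assuming independent Gaussian noise for each sense, as in \textcite{ernst2002humans}), the \MLE model weights each cue by its reliability:

\[
\hat{s} = w_v\, \hat{s}_v + w_h\, \hat{s}_h,
\qquad
w_v = \frac{1/\sigma_v^2}{1/\sigma_v^2 + 1/\sigma_h^2},
\quad
w_h = 1 - w_v,
\]

so increasing the visual noise \(\sigma_v\) shifts weight toward the haptic estimate, and the combined variance \(\sigma_{vh}^2 = \sigma_v^2 \sigma_h^2 / (\sigma_v^2 + \sigma_h^2)\) is never worse than that of the better single cue.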
-\begin{subfigs}{ernst2002humans}{Visuo-haptic perception of height of a virtual bar \cite{ernst2002humans}. }[
+\begin{subfigs}{ernst2002humans}{
+Visuo-haptic perception of the height of a virtual bar \cite{ernst2002humans}.
+}[][
\item Experimental setup.%: Participants estimated height visually with an \OST-\AR display and haptically with force-feedback devices worn on the thumb and index fingers.
%\item with only haptic feedback (red) or only visual feedback (blue, with different added noise),
%\item combined visuo-haptic feedback (purple, with different visual noises).
@@ -82,7 +84,7 @@ For example, in a fixed \VST-\AR screen (\secref{ar_displays}), by visually defo
\textcite{punpongsanon2015softar} used this technique in spatial \AR (\secref{ar_displays}) to induce a softness illusion of a hard tangible object by superimposing a virtual texture that deforms when pressed by the hand (\figref{punpongsanon2015softar}).
\textcite{ujitoko2019modulating} increased the perceived roughness of a virtual patterned texture rendered as vibrations through a hand-held stylus (\secref{texture_rendering}) by adding small oscillations to the visual feedback of the stylus on a screen.

-\begin{subfigs}{pseudo_haptic}{Pseudo-haptic feedback in \AR. }[
+\begin{subfigs}{pseudo_haptic}{Pseudo-haptic feedback in \AR. }[][
\item A virtual soft texture projected on a table, which deforms when pressed by the hand \cite{punpongsanon2015softar}.
\item Visually modifying a tangible object and the hand touching it in \VST-\AR to alter its perceived shape \cite{ban2014displaying}.
]
@@ -104,7 +106,9 @@ In a \TIFC task (\secref{sensations_perception}), participants pressed two pisto
One had a reference stiffness but an additional visual or haptic delay, while the other varied with a comparison stiffness but had no delay.\footnote{Participants were not told about the delays and stiffness tested, nor which piston was the reference or comparison. The order of the pistons (which one was pressed first) was also randomized.}
Adding a visual delay increased the perceived stiffness of the reference piston, while adding a haptic delay decreased it, and adding both delays cancelled each other out (\figref{knorlein2009influence_2}).

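A plausible reading of this effect (an interpretation, not a claim of \textcite{knorlein2009influence}): if stiffness is judged as the ratio of felt force to seen displacement, \(\hat{k} \approx F(t)/x(t-\delta_v)\), a visual delay \(\delta_v\) makes the piston appear less displaced while the force is already building up, inflating \(\hat{k}\), whereas a haptic delay has the symmetric, softening effect.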
-\begin{subfigs}{visuo-haptic-stiffness}{Perception of haptic stiffness in \VST-\AR \cite{knorlein2009influence}. }[
+\begin{subfigs}{visuo-haptic-stiffness}{
+Perception of haptic stiffness in \VST-\AR \cite{knorlein2009influence}.
+}[][
\item Participant pressing a virtual piston rendered by a force-feedback device with their hand.
\item Proportion of comparison piston perceived as stiffer than the reference piston (vertical axis) as a function of the comparison stiffness (horizontal axis) and the visual and haptic delays of the reference (colors).
]
@@ -125,7 +129,7 @@ The reference piston was judged to be stiffer when seen in \VR than in \AR, with
This suggests that the haptic stiffness of \VOs feels \enquote{softer} in an \AE than in a full \VE.
%Two differences that could be worth investigating with the two previous studies are the type of \AR (video or optical) and seeing the hand touching the \VO.

-\begin{subfigs}{gaffary2017ar}{Perception of haptic stiffness in \OST-\AR \vs \VR \cite{gaffary2017ar}. }[
+\begin{subfigs}{gaffary2017ar}{Perception of haptic stiffness in \OST-\AR \vs \VR \cite{gaffary2017ar}. }[][
\item Experimental setup: a virtual piston was pressed with a force-feedback device placed to the side of the participant.
\item View of the virtual piston seen in front of the participant in \OST-\AR and
\item in \VR.
@@ -178,7 +182,7 @@ However, as with \textcite{teng2021touch}, finger speed was not taken into accou
Finally, \textcite{preechayasomboon2021haplets} (\figref{preechayasomboon2021haplets}) and \textcite{sabnis2023haptic} designed Haplets and Haptic Servo, respectively: these are very compact and lightweight vibrotactile \LRA devices designed to provide both integrated finger motion sensing and very low latency haptic feedback (\qty{<5}{ms}).
However, no formal user study has been conducted to evaluate these devices in \AR.

-\begin{subfigs}{ar_wearable}{Nail-mounted wearable haptic devices designed for \AR. }[
+\begin{subfigs}{ar_wearable}{Nail-mounted wearable haptic devices designed for \AR. }[][
%\item A voice-coil rendering a virtual haptic texture on a real sheet of paper \cite{ando2007fingernailmounted}.
\item Touch\&Fold provides contact pressure and vibrations on demand to the fingertip \cite{teng2021touch}.
\item Fingeret is a finger-side wearable haptic device that pulls and pushes the fingertip skin \cite{maeda2022fingeret}.
@@ -211,7 +215,7 @@ The haptic ring was also perceived as more effective than the moving platform.
However, the measured difference in performance could be due to either the device or the device position (proximal \vs fingertip), or both.
These two studies were also conducted in non-immersive setups, where users viewed a screen displaying the visual interactions, and only compared the haptic and visual rendering of the hand-object contacts, but did not examine them together.

-\begin{subfigs}{ar_rings}{Wearable haptic ring devices for \AR. }[
+\begin{subfigs}{ar_rings}{Wearable haptic ring devices for \AR. }[][
\item Rendering the weight of a virtual cube placed on a real surface \cite{scheggi2010shape}.
\item Rendering the contact force exerted by the fingers on a virtual cube \cite{maisto2017evaluation,meli2018combining}.
]
@@ -233,7 +237,7 @@ A user study was conducted in \VR to compare the perception of visuo-haptic stif
%This suggests that in \VR, the haptic pressure is a more important perceptual cue than the visual displacement to render stiffness.
%A short vibration (\qty{25}{\ms} \qty{175}{\Hz} square-wave) was also rendered when contacting the button, but kept constant across all conditions: it may have affected the overall perception when only the visual stiffness changed.

-%\begin{subfigs}{pezent2019tasbi}{Visuo-haptic stiffness rendering of a virtual button in \VR with the Tasbi bracelet. }[
+%\begin{subfigs}{pezent2019tasbi}{Visuo-haptic stiffness rendering of a virtual button in \VR with the Tasbi bracelet. }[][
% \item The \VE seen by the user: the virtual hand (in beige) is constrained by the virtual button. The displacement is proportional to the visual stiffness. The real hand (in green) is hidden by the \VE.
% \item When the rendered visuo-haptic stiffnesses are coherent (in purple) or only the haptic stiffness changes (in blue), participants easily discriminated the different levels.
% \item When varying only the visual stiffness (in red) but keeping the haptic stiffness constant, participants were not able to discriminate the different stiffness levels.
@@ -1,21 +1,19 @@
\section{User Study}
\label{experiment}

-\begin{subfigs}{setup}{%
-User Study.
-}[%
-\item The nine visuo-haptic textures used in the user study, selected from the HaTT database \cite{culbertson2014one}. %
-The texture names were never shown, so as to prevent the use of the user's visual or haptic memory of the textures.
-\item Experimental setup. %
-Participant sat in front of the tangible surfaces, which were augmented with visual textures displayed by the HoloLens~2 AR headset and haptic roughness textures rendered by the vibrotactile haptic device placed on the middle index phalanx. %
-A webcam above the surfaces tracked the finger movements.
-\item First person view of the user study, as seen through the immersive AR headset HoloLens~2. %
-The visual texture overlays are statically displayed on the surfaces, allowing the user to move around to view them from different angles. %
-The haptic roughness texture is generated based on HaTT data-driven texture models and finger speed, and it is rendered on the middle index phalanx as it slides on the considered surface.%
-]
-\subfig[0.32]{experiment/textures}%
-\subfig[0.32]{experiment/setup}%
-\subfig[0.32]{experiment/view}%
+\begin{subfigs}{setup}{User Study. }[][
+\item The nine visuo-haptic textures used in the user study, selected from the HaTT database \cite{culbertson2014one}.
+The texture names were never shown, so as to prevent the use of the user's visual or haptic memory of the textures.
+\item Experimental setup.
+Participants sat in front of the tangible surfaces, which were augmented with visual textures displayed by the HoloLens~2 AR headset and haptic roughness textures rendered by the vibrotactile haptic device placed on the middle index phalanx.
+A webcam above the surfaces tracked the finger movements.
+\item First person view of the user study, as seen through the immersive AR headset HoloLens~2.
+The visual texture overlays are statically displayed on the surfaces, allowing the user to move around to view them from different angles.
+The haptic roughness texture is generated based on HaTT data-driven texture models and finger speed, and it is rendered on the middle index phalanx as it slides on the considered surface.
+]
+\subfig[0.32]{experiment/textures}
+\subfig[0.32]{experiment/setup}
+\subfig[0.32]{experiment/view}
\end{subfigs}

The user study aimed at analyzing the perception of tangible surfaces augmented with a visuo-haptic texture, using AR and vibrotactile haptic feedback provided on the finger touching the surfaces.
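In outline (a sketch after the data-driven models of \textcite{culbertson2014one}; the exact interpolation scheme is theirs and not reproduced here), each haptic texture is an autoregressive process whose coefficients \(c_i\) and excitation variance \(\sigma^2\) are selected according to the current finger speed \(v\):

\[
a[n] = \sum_{i=1}^{p} c_i(v)\, a[n-i] + e[n],
\qquad
e[n] \sim \mathcal{N}\big(0, \sigma^2(v)\big),
\]

and the resulting signal \(a[n]\) drives the vibrotactile actuator on the finger.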
@@ -26,7 +24,6 @@ Nine representative visuo-haptic texture pairs from the HaTT database \cite{culb
%
Our objective was to assess which haptic textures were associated with which visual textures, how the roughness of the visual and haptic textures is perceived, and whether the perceived roughness can explain the matches made between them.


\subsection{The Textures}
\label{textures}

@@ -38,7 +35,6 @@ Nine texture pairs were selected (\figref{setup}, left) to cover various perceiv
%
All these visual and haptic textures are isotropic: their rendering (appearance or roughness) is the same regardless of the direction of the movement on the surface, \ie there are no local deformations (holes, bumps, or breaks).


\subsection{Apparatus}
\label{apparatus}

@@ -68,7 +64,6 @@ This latency was below the \qty{60}{\ms} threshold for vibrotactile feedback \ci
%
The user study was held in a quiet room with no windows, with one light source of \qty{800}{\lumen} placed \qty{70}{\cm} above the table.


\subsection{Procedure and Collected Data}
\label{procedure}

@@ -115,7 +110,6 @@ In an open question, participants also commented on their strategy for completin
%
The user study took one hour on average to complete.


\subsection{Participants}
\label{participants}

@@ -133,7 +127,6 @@ Participants were recruited at the university on a voluntary basis.
%
They all signed an informed consent form before the user study.


\subsection{Design}
\label{design}

@@ -7,17 +7,16 @@
\subsubsection{Confusion Matrix}
\label{results_matching_confusion_matrix}

-\begin{subfigs}{results_matching_ranking}{%
-(Left) Confusion matrix of the matching task, with the presented visual textures as columns and the selected haptic texture in proportion as rows. %
-The number in a cell is the proportion of times the corresponding haptic texture was selected in response to the presentation of the corresponding visual texture. %
-The diagonal represents the expected correct answers. %
+\begin{subfigs}{results_matching_ranking}{Results of the matching and ranking tasks. }[][
+\item Confusion matrix of the matching task, with the presented visual textures as columns and the selected haptic texture in proportion as rows.
+The number in a cell is the proportion of times the corresponding haptic texture was selected in response to the presentation of the corresponding visual texture.
+The diagonal represents the expected correct answers.
Holm-Bonferroni adjusted binomial test results are marked in bold when the proportion is higher than chance (\ie more than 11~\%, \pinf{0.05}).
-%
-(Right) Means with bootstrap 95~\% confidence interval of the three rankings of the haptic textures alone, the visual textures alone, and the visuo-haptic texture pairs. %
-A lower rank means that the texture was considered rougher, a higher rank means smoother. %
-}
+\item Means with bootstrap 95~\% confidence interval of the three rankings of the haptic textures alone, the visual textures alone, and the visuo-haptic texture pairs.
+A lower rank means that the texture was considered rougher, a higher rank means smoother.
+]
\subfig[0.58]{results/matching_confusion_matrix}%
\subfig[0.41]{results/ranking_mean_ci}%
\end{subfigs}

\figref{results_matching_ranking} (left) shows the confusion matrix of the matching task, \ie the proportion of times each haptic texture was selected in response to the presentation of each visual texture.
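As a sketch of the test reported in the caption (illustrative code, not the study's analysis scripts; counts and trial numbers are placeholders): with nine haptic candidates, chance is \(1/9 \approx 11~\%\), each cell is tested with a one-sided binomial test, and the p-values are Holm-Bonferroni corrected.

# Illustrative sketch: one-sided binomial tests of each confusion-matrix cell
# against chance (1/9), with Holm-Bonferroni correction. Placeholder data.
import numpy as np
from scipy.stats import binomtest

n_trials = 60                                    # hypothetical presentations per visual texture
counts = np.random.binomial(n_trials, 1/9, 81)   # placeholder for the flattened 9x9 counts
pvals = np.array([binomtest(int(k), n_trials, p=1/9, alternative='greater').pvalue
                  for k in counts])

# Holm-Bonferroni step-down: compare the i-th smallest p-value to alpha/(m-i)
order = np.argsort(pvals)
m, alpha = len(pvals), 0.05
significant = np.zeros(m, dtype=bool)
for rank, idx in enumerate(order):
    if pvals[idx] > alpha / (m - rank):
        break                                    # first non-rejection stops the procedure
    significant[idx] = True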
@@ -57,7 +56,6 @@ Normality was verified with a QQ-plot of the model residuals.
%
No statistically significant effect of \textit{Visual Texture} was found (\anova{8}{512}{1.9}, \p{0.06}) on \textit{Completion Time} (\geomean{44}{\s}, \ci{42}{46}), indicating equal difficulty and participant behaviour across all the visual textures.


\subsection{Textures Ranking}
\label{results_ranking}

@@ -81,28 +79,27 @@ A Wilcoxon signed-rank test indicated that this difference was statistically sig
%
Together with \figref{results_matching_ranking} (right), these results indicate that the haptic and visual modalities were integrated: the resulting roughness ranking lies between the two single-modality rankings, with haptics predominating.


\subsection{Perceived Similarity of Visual and Haptic Textures}
\label{results_similarity}

\begin{subfigs}{results_similarity}{%
(Left) Correspondence analysis of the matching task confusion matrix (\figref{results_matching_ranking}, left).
The visual textures are represented as blue squares, the haptic textures as red circles. %
The closer the textures are, the more similar they were judged. %
The first dimension (horizontal axis) explains 60~\% of the variance, the second dimension (vertical axis) explains 30~\% of the variance.
(Right) Dendrograms of the hierarchical clusterings of the haptic textures (left) and visual textures (right) of the matching task confusion matrix (\figref{results_matching_ranking}, left), using Euclidean distance and Ward's method. %
The height of the dendrograms represents the distance between the clusters. %
}
\begin{minipage}[c]{0.50\linewidth}%
\centering%
\subfig[1.0]{results/matching_correspondence_analysis}%
\end{minipage}%
\begin{minipage}[c]{0.50\linewidth}%
\centering%
\subfig[0.66]{results/clusters_haptic}%
\par%
\subfig[0.66]{results/clusters_visual}%
\end{minipage}%
\end{subfigs}

The high level of agreement between participants on the three haptic, visual and visuo-haptic rankings (\secref{results_ranking}), as well as the similarity of the within-participant rankings, suggests that participants perceived the roughness of the textures similarly, but differed in their strategies for matching the haptic and visual textures in the matching task (\secref{results_matching}).
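A minimal sketch of how such dendrograms can be produced (illustrative, not the study's code): the rows of the confusion matrix serve as feature vectors, clustered with Ward's method on Euclidean distances.

# Illustrative sketch: hierarchical clustering of confusion-matrix rows
# (Euclidean distance, Ward's method). The matrix below is a placeholder.
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(0)
confusion = rng.random((9, 9))         # placeholder for the 9x9 selection proportions
Z = linkage(confusion, method='ward')  # Ward linkage on the rows
dendrogram(Z, labels=[f'H{i+1}' for i in range(9)])
plt.ylabel('between-cluster distance') # dendrogram height
plt.show()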
@@ -131,12 +128,15 @@ The five identified visual texture clusters were: "Roughest" \{Metal Mesh\}; "Ro
%
They are also easily identifiable in the visual ranking results, which made it possible to name them.

-\begin{subfigs}{results_clusters}{%
-(Left) Confusion matrix of the visual texture clusters with the corresponding haptic texture clusters selected in proportion. %
-(Right) Confusion matrix of the visual texture ranks with the corresponding haptic texture clusters selected in proportion. %
-(Both) Holm-Bonferroni adjusted binomial test results are marked in bold when the proportion is higher than chance (\ie more than 20~\%, \pinf{0.05}).
-}
+\begin{subfigs}{results_clusters}{
+Confusion matrices of the visual textures with the corresponding haptic texture clusters selected in proportion.
+}[
+Holm-Bonferroni adjusted binomial test results are marked in bold when the proportion is higher than chance (\ie more than 20~\%, \pinf{0.05}).
+][
+\item Confusion matrix of the visual texture clusters.
+\item Confusion matrix of the visual texture ranks.
+]
\subfig[1]{results/haptic_visual_clusters_confusion_matrices}%
\end{subfigs}

Based on these results, two alternative confusion matrices were constructed.
@@ -153,19 +153,18 @@ A two-sample Pearson Chi-Squared test (\chisqr{24}{540}{342}, \pinf{0.001}) and
%
This shows that the participants consistently identified the roughness of each visual texture and selected the corresponding haptic texture cluster.


\subsection{Questionnaire}
\label{results_questions}

-\begin{subfigs}{results_questions}{%
-Boxplots of the 7-item Likert scale question results (1=Not at all, 7=Extremely) %
-with Holm-Bonferroni adjusted pairwise Wilcoxon signed-rank tests %
-(*** is \pinf{0.001} and ** is \pinf{0.01}),
-by modality (left) and by task (right). %
-Lower is better for Difficulty and Uncomfortable; higher is better for Realism and Textures Match.%
-}
+\begin{subfigs}{results_questions}{Boxplots of the 7-item Likert scale questionnaire results (1=Not at all, 7=Extremely).}[
+Pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment: * is \pinf{0.05}, ** is \pinf{0.01} and *** is \pinf{0.001}.
+Lower is better for Difficulty and Uncomfortable; higher is better for Realism and Textures Match.%
+][
+\item By modality.
+\item By task.
+]
\subfig[0.32]{results/questions_modalities}%
\subfig[0.49]{results/questions_tasks}%
\end{subfigs}

\figref{results_questions} presents the questionnaire results of the matching and ranking tasks.
@@ -1,26 +1,21 @@
\section{User Study}
\label{experiment}

-\begin{subfigswide}{renderings}{%
+\begin{subfigs}{renderings}{
The three visual rendering conditions and the experimental procedure of the two-alternative forced choice (2AFC) psychophysical study.
-%
-During a trial, two tactile textures were rendered on the augmented area of the paper sheet (black rectangle) for 3\,s each, one after the other, then the participant chose which one was the roughest.
-%
-The visual rendering stayed the same during the trial.
-%
-(\level{Real}) The real environment and real hand view without any visual augmentation.
-%
-(\level{Mixed}) The real environment and hand view with the virtual hand.
-%
-(\level{Virtual}) Virtual environment with the virtual hand.
-%
-%The pictures are captured directly from the Microsoft HoloLens 2 headset.
-}
-\hidesubcaption
+}[
+During a trial, two tactile textures were rendered on the augmented area of the paper sheet (black rectangle) for \qty{3}{\s} each, one after the other, then the participant chose which one felt rougher.
+The visual rendering stayed the same during the trial.
+%The pictures are captured directly from the Microsoft HoloLens 2 headset.
+][
+\item The real environment and real hand view without any visual augmentation.
+\item The real environment and hand view with the virtual hand.
+\item Virtual environment with the virtual hand.
+]
\subfig[0.32]{experiment/real}
\subfig[0.32]{experiment/mixed}
\subfig[0.32]{experiment/virtual}
-\end{subfigswide}
+\end{subfigs}

Our visuo-haptic rendering system, described in \secref{method}, allows free exploration of virtual vibrotactile textures on tangible surfaces directly touched with the bare finger to simulate roughness augmentation, while the visual rendering of the hand and environment can be controlled to be in AR or VR.
%
@@ -34,22 +34,20 @@ The \level{Real} rendering had the lowest JND (\percent{26} \ci{23}{29}), the \l
%
All pairwise differences were statistically significant.

-\begin{subfigs}{discrimination_accuracy}{%
-Generalized Linear Mixed Model (GLMM) results in the vibrotactile texture roughness discrimination task, with non-parametric bootstrap 95\% confidence intervals.
-}[%
-\item Percentage of trials in which the comparison texture was perceived as rougher than the reference texture, as a function of the amplitude difference between the two textures and the visual rendering.
-Curves represent predictions from the GLMM (probit link function) and points are estimated marginal means.
+\begin{subfigs}{discrimination_accuracy}{Results of the vibrotactile texture roughness discrimination task. }[
+Curves represent predictions from the GLMM (probit link function), and points are estimated marginal means with non-parametric bootstrap 95\% confidence intervals.
+][
+\item Proportion of trials in which the comparison texture was perceived as rougher than the reference texture, as a function of the amplitude difference between the two textures and the visual rendering.
\item Estimated points of subjective equality (PSE) of each visual rendering.
%, defined as the amplitude difference at which both reference and comparison textures are perceived to be equivalent, \ie the accuracy in discriminating vibrotactile roughness.
\item Estimated just-noticeable difference (JND) of each visual rendering.
%, defined as the minimum perceptual amplitude difference, \ie the sensitivity to vibrotactile roughness differences.
]
\subfig[0.85]{results/trial_predictions}\\
\subfig[0.45]{results/trial_pses}
\subfig[0.45]{results/trial_jnds}
\end{subfigs}
|
|
||||||
|
|
||||||
\subsubsection{Response Time}
|
\subsubsection{Response Time}
|
||||||
\label{response_time}
|
\label{response_time}
|
||||||
|
|
||||||
|
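For readers reconstructing the analysis summarized in this caption, here is a minimal sketch of the psychometric model it implies, assuming a probit-link GLMM on the amplitude difference; the symbols $\Delta a$, $\beta_0$, $\beta_1$, and $\Phi$ are our notation, not the authors':

% Sketch only: probit psychometric function for the roughness judgments,
% where \Phi is the standard normal CDF and \Delta a the amplitude difference.
\begin{equation*}
  P(\text{comparison judged rougher}) = \Phi(\beta_0 + \beta_1 \Delta a),
  \qquad
  \mathrm{PSE} = -\frac{\beta_0}{\beta_1},
  \qquad
  \mathrm{JND} = \frac{\Phi^{-1}(0.75)}{\beta_1}.
\end{equation*}

The PSE is the amplitude difference at the 50\% point of the fitted curve, matching the commented definition in the caption; taking the JND as the increment from the PSE to the 75\% point is a common convention that we assume here, as the diff itself does not state it.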
@@ -1,26 +1,3 @@
 \section{Introduction}
 \label{introduction}
 
-\begin{subfigswide}{hands}{%
-Experiment \#1. The six considered visual hand renderings, as seen by the user through the AR headset
-during the two-finger grasping of a virtual cube.
-%
-From left to right: %
-no visual rendering \emph{(None)}, %
-cropped virtual content to {enable} hand-cube occlusion \emph{(Occlusion, Occl)}, %
-rings on the fingertips \emph{(Tips)}, %
-thin outline of the hand \emph{(Contour, Cont)}, %
-fingers' joints and phalanges \emph{(Skeleton, Skel)}, and %
-semi-transparent 3D hand model \emph{(Mesh)}.
-}
-\subfig[0.15]{method/hands-none}%[None]
-\subfig[0.15]{method/hands-occlusion}%[Occlusion (Occl)]
-\subfig[0.15]{method/hands-tips}%[Tips]
-\subfig[0.15]{method/hands-contour}%[Contour (Cont)]
-\subfig[0.15]{method/hands-skeleton}%[Skeleton (Skel)]
-\subfig[0.15]{method/hands-mesh}%[Mesh]
-\end{subfigswide}
-
 Augmented reality (AR) integrates virtual content into our real-world surroundings, giving the illusion of a single, unified environment and promising natural and seamless interactions with real and virtual objects.
 %
 Virtual object manipulation is particularly critical for useful and effective AR usage, such as in medical applications, training, or entertainment \cite{laviolajr20173d, kim2018revisiting}.
@@ -62,6 +39,23 @@ We consider two representative manipulation tasks: push-and-slide and grasp-and-
 The main contributions of this work are:
 %
 \begin{itemize}
-\item a first human subject experiment evaluating the performance and user experience of six visual hand renderings superimposed on the real hand; %
+\item a first human subject experiment evaluating the performance and user experience of six visual hand renderings superimposed on the real hand;
 \item a second human subject experiment evaluating the performance and user experience of visuo-haptic hand renderings by comparing two vibrotactile contact techniques provided at four delocalized positions on the hand and combined with the two most representative visual hand renderings established in the first experiment.
 \end{itemize}
 
+\begin{subfigs}{hands}{The six visual hand renderings. }[
+Depicted as seen by the user through the AR headset during the two-finger grasping of a virtual cube.
+][
+\item No visual rendering \emph{(None)}.
+\item Cropped virtual content to enable hand-cube occlusion \emph{(Occlusion, Occl)}.
+\item Rings on the fingertips \emph{(Tips)}.
+\item Thin outline of the hand \emph{(Contour, Cont)}.
+\item Fingers' joints and phalanges \emph{(Skeleton, Skel)}.
+\item Semi-transparent 3D hand model \emph{(Mesh)}.
+]
+\subfig[0.15]{method/hands-none}
+\subfig[0.15]{method/hands-occlusion}
+\subfig[0.15]{method/hands-tips}
+\subfig[0.15]{method/hands-contour}
+\subfig[0.15]{method/hands-skeleton}
+\subfig[0.15]{method/hands-mesh}
+\end{subfigs}
@@ -75,17 +75,15 @@ It can be seen as a filled version of the Contour hand rendering, thus partially
 \subsection{Manipulation Tasks and Virtual Scene}
 \label{tasks}
 
-\begin{subfigs}{tasks}{%
-Experiment \#1. The two manipulation tasks:
-}[
-\item pushing a virtual cube along a table towards a target placed on the same surface; %
-\item grasping and lifting a virtual cube towards a target placed on a 20-cm-higher plane. %
-Both pictures show the cube to manipulate in the middle (5-cm-edge and opaque) and the eight possible targets to
-reach (7-cm-edge volume and semi-transparent). %
-Only one target at a time was shown during the experiments.%
-]
+\begin{subfigs}{tasks}{The two manipulation tasks of the user study. }[
+The cube to manipulate is in the middle of the table (5-cm-edge and opaque) and the eight possible targets to reach are around it (7-cm-edge volume and semi-transparent).
+Only one target at a time was shown during the experiments.
+][
+\item Push task: pushing a virtual cube along a table towards a target placed on the same surface.
+\item Grasp task: grasping and lifting a virtual cube towards a target placed on a 20-cm-higher plane.
+]
 \subfig[0.23]{method/task-push}
 \subfig[0.23]{method/task-grasp}
 \end{subfigs}
 
 Following the guidelines of \textcite{bergstrom2021how} for designing object manipulation tasks, we considered two variations of a 3D pick-and-place task, commonly found in interaction and manipulation studies \cite{prachyabrued2014visual, maisto2017evaluation, meli2018combining, blaga2017usability, vanveldhuizen2021effect}.
@@ -1,18 +1,18 @@
 \subsection{Ranking}
 \label{ranks}
 
-\begin{subfigs}{ranks}{%
-Experiment \#1. Boxplots of the ranking (lower is better) of each visual hand rendering
-%
-and pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment:
-%
-** is \pinf{0.01} and * is \pinf{0.05}.
-}
+\begin{subfigs}{results_ranks}{Boxplots of the ranking for each visual hand rendering. }[
+Lower is better.
+Pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment: ** is \pinf{0.01} and * is \pinf{0.05}.
+][
+\item Push task ranking.
+\item Grasp task ranking.
+]
 \subfig[0.24]{results/Ranks-Push}
 \subfig[0.24]{results/Ranks-Grasp}
 \end{subfigs}
 
-\figref{ranks} shows the ranking of each visual hand rendering for the Push and Grasp tasks.
+\figref{results_ranks} shows the ranking of each visual hand rendering for the Push and Grasp tasks.
 %
 Friedman tests indicated that both rankings had statistically significant differences (\pinf{0.001}).
 %
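Several captions in this commit report pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment; as a reminder, this is the textbook procedure, not something restated in the paper: with $m$ comparisons and sorted p-values $p_{(1)} \le \dots \le p_{(m)}$, hypotheses are rejected in order as long as

% Holm-Bonferroni step-down criterion at family-wise level \alpha.
\begin{equation*}
  p_{(i)} \le \frac{\alpha}{m - i + 1},
\end{equation*}

stopping at the first index where the inequality fails, which controls the family-wise error rate at level $\alpha$.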
@@ -1,21 +1,19 @@
 \subsection{Questionnaire}
 \label{questions}
 
-\begin{subfigswide}{questions}{%
-Experiment \#1. Boxplots of the questionnaire results of each visual hand rendering
-%
-and pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment: ** is \pinf{0.01} and * is \pinf{0.05}.
-%
-Lower is better for Difficulty and Fatigue. Higher is better for Precision, Efficiency, and Rating.
-}
+\begin{subfigs}{results_questions}{Boxplots of the questionnaire results for each visual hand rendering. }[
+Pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment: ** is \pinf{0.01} and * is \pinf{0.05}.
+Lower is better for \textbf{(a)} difficulty and \textbf{(b)} fatigue.
+Higher is better for \textbf{(c)} precision, \textbf{(d)} efficiency, and \textbf{(e)} rating.
+]
 \subfig[0.19]{results/Question-Difficulty}
 \subfig[0.19]{results/Question-Fatigue}
 \subfig[0.19]{results/Question-Precision}
 \subfig[0.19]{results/Question-Efficiency}
 \subfig[0.19]{results/Question-Rating}
-\end{subfigswide}
+\end{subfigs}
 
-\figref{questions} presents the questionnaire results for each visual hand rendering.
+\figref{results_questions} presents the questionnaire results for each visual hand rendering.
 %
 Friedman tests indicated that all questions had statistically significant differences (\pinf{0.001}).
 %
@@ -1,31 +1,33 @@
 \section{Results}
 \label{results}
 
-\begin{subfigs}{push_results}{%
-Experiment \#1: Push task.
-%
-Geometric means with bootstrap 95~\% confidence interval for each visual hand rendering
-%
-and Tukey's HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
-}
-\subfig[0.24]{results/Push-CompletionTime-Hand-Overall-Means}%[Time to complete a trial.]
-\subfig[0.24]{results/Push-ContactsCount-Hand-Overall-Means}%[Number of contacts with the cube.]
-\hspace*{10mm}
-\subfig[0.24]{results/Push-MeanContactTime-Hand-Overall-Means}%[Mean time spent on each contact.]
+\begin{subfigs}{push_results}{Results of the push task performance metrics for each visual hand rendering. }[
+Geometric means with bootstrap 95~\% confidence intervals
+and Tukey's HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
+][
+\item Time to complete a trial.
+\item Number of contacts with the cube.
+\item Mean time spent on each contact.
+]
+\subfig[0.24]{results/Push-CompletionTime-Hand-Overall-Means}
+\subfig[0.24]{results/Push-ContactsCount-Hand-Overall-Means}
+\subfig[0.24]{results/Push-MeanContactTime-Hand-Overall-Means}
 \end{subfigs}
 
-\begin{subfigswide}{grasp_results}{%
-Experiment \#1: Grasp task.
-%
-Geometric means with bootstrap 95~\% confidence interval for each visual hand rendering
-%
-and Tukey's HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
-}
-\subfig[0.24]{results/Grasp-CompletionTime-Hand-Overall-Means}%[Time to complete a trial.]
-\subfig[0.24]{results/Grasp-ContactsCount-Hand-Overall-Means}%[Number of contacts with the cube.]
-\subfig[0.24]{results/Grasp-MeanContactTime-Hand-Overall-Means}%[Mean time spent on each contact.]
-\subfig[0.24]{results/Grasp-GripAperture-Hand-Overall-Means}%[\centering Distance between thumb and the other fingertips when grasping.]
-\end{subfigswide}
+\begin{subfigs}{grasp_results}{Results of the grasp task performance metrics for each visual hand rendering. }[
+Geometric means with bootstrap 95~\% confidence intervals
+and Tukey's HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
+][
+\item Time to complete a trial.
+\item Number of contacts with the cube.
+\item Mean time spent on each contact.
+\item Distance between thumb and the other fingertips when grasping.
+]
+\subfig[0.24]{results/Grasp-CompletionTime-Hand-Overall-Means}
+\subfig[0.24]{results/Grasp-ContactsCount-Hand-Overall-Means}
+\subfig[0.24]{results/Grasp-MeanContactTime-Hand-Overall-Means}
+\subfig[0.24]{results/Grasp-GripAperture-Hand-Overall-Means}
+\end{subfigs}
 
 Results of each trial measure were analyzed with a linear mixed model (LMM), with the order of the two manipulation tasks and the six visual hand renderings (Order), the visual hand renderings (Hand), the target volume position (Target), and their interactions as fixed effects and the Participant as random intercept.
 %
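To make the model description just above concrete, here is one way to write the stated LMM; the notation ($y$, $\beta_0$, $u_p$, $\varepsilon$) is ours and the interaction terms are collapsed into a single placeholder:

% Sketch only: fixed effects Order, Hand, Target plus interactions,
% with a per-participant random intercept u_p.
\begin{equation*}
  y_{p,ijk} = \beta_0 + \mathrm{Order}_i + \mathrm{Hand}_j + \mathrm{Target}_k + (\mathrm{interactions})_{ijk} + u_p + \varepsilon_{p,ijk},
  \qquad
  u_p \sim \mathcal{N}(0, \sigma_u^2),
  \quad
  \varepsilon_{p,ijk} \sim \mathcal{N}(0, \sigma^2).
\end{equation*}

The geometric means reported in the captions would be consistent with fitting the measures on a log scale, but the diff does not say so explicitly.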
@@ -24,11 +24,11 @@ We evaluated both the delocalized positioning and the contact vibration techniqu
 \label{positioning}
 
 \fig[0.30]{method/locations}{%
 Experiment \#2: setup of the vibrotactile devices.
 %
 To ensure minimal encumbrance, we used the same two motors throughout the experiment, moving them to the considered positioning before each new experimental block (in this case, on the co-located proximal phalanges, \emph{Prox}).
 %
 Thin self-gripping straps were placed on the five considered positionings during the entirety of the experiment.
 }
 
 \begin{itemize}
@@ -72,29 +72,31 @@ Similarly, we designed the distance vibration technique (Dist) so that interpene
 \subsection{Experimental Design}
 \label{design}
 
-\begin{subfigs}{tasks}{%
-Experiment \#2. The two manipulation tasks: %
-(a) pushing a virtual cube along a table toward a target placed on the same surface; %
-(b) grasping and lifting a virtual cube toward a target placed on a \qty{20}{\cm} higher plane. %
-Both pictures show the cube to manipulate in the middle (\qty{5}{\cm} and opaque) and the eight possible targets to reach (\qty{7}{\cm} cube and semi-transparent). %
-Only one target at a time was shown during the experiments.%
-}
+\begin{subfigs}{tasks}{The two manipulation tasks of the user study. }[
+Both pictures show the cube to manipulate in the middle (\qty{5}{\cm} and opaque) and the eight possible targets to reach (\qty{7}{\cm} cube and semi-transparent).
+Only one target at a time was shown during the experiments.
+][
+\item Pushing a virtual cube along a table toward a target placed on the same surface.
+\item Grasping and lifting a virtual cube toward a target placed on a \qty{20}{\cm} higher plane.
+]
 \subfig[0.23]{method/task-push}
 \subfig[0.23]{method/task-grasp}
 \end{subfigs}
 
-\begin{subfigswide}{push_results}{%
-Experiment \#2: Push task.
-%
-Geometric means with bootstrap 95~\% confidence interval for each vibrotactile positioning (a, b, and c) or visual hand rendering (d)
-%
-and Tukey's HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
-}
-\subfig[0.24]{results/Push-CompletionTime-Location-Overall-Means}%[Time to complete a trial.]
-\subfig[0.24]{results/Push-Contacts-Location-Overall-Means}%[Number of contacts with the cube.]
-\subfig[0.24]{results/Push-TimePerContact-Location-Overall-Means}%[Mean time spent on each contact.]
-\subfig[0.24]{results/Push-TimePerContact-Hand-Overall-Means}%[Mean time spent on each contact.]
-\end{subfigswide}
+\begin{subfigs}{push_results}{Results of the push task performance metrics. }[
+Geometric means with bootstrap 95~\% confidence intervals for each vibrotactile positioning (a, b, and c) or visual hand rendering (d)
+and Tukey's HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
+][
+\item Time to complete a trial.
+\item Number of contacts with the cube.
+\item Mean time spent on each contact.
+\item Mean time spent on each contact, by visual hand rendering.
+]
+\subfig[0.24]{results/Push-CompletionTime-Location-Overall-Means}
+\subfig[0.24]{results/Push-Contacts-Location-Overall-Means}
+\subfig[0.24]{results/Push-TimePerContact-Location-Overall-Means}
+\subfig[0.24]{results/Push-TimePerContact-Hand-Overall-Means}
+\end{subfigs}
 
 We considered the same two tasks as in Experiment \#1, described in \secref[visual_hand]{tasks}, which we analyzed separately, considering four independent, within-subject variables:
 
@@ -19,16 +19,16 @@ Although the Distance technique provided additional feedback on the interpenetra
 \subsection{Questionnaire}
 \label{questions}
 
-\begin{subfigswide}{questions}{%
-Experiment \#2. Boxplots of the questionnaire results of each vibrotactile positioning
-%
-and pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
-}
+\begin{subfigs}{results_questions}{Boxplots of the questionnaire results for each vibrotactile positioning. }[
+Pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
+Higher is better for \textbf{(a)} vibrotactile rendering rating, \textbf{(b)} usefulness, and \textbf{(c)} realism.
+Lower is better for \textbf{(d)} workload.
+]
 \subfig[0.24]{results/Question-Vibration Rating-Positioning-Overall}
-\subfig[0.24]{results/Question-Workload-Positioning-Overall}
 \subfig[0.24]{results/Question-Usefulness-Positioning-Overall}
 \subfig[0.24]{results/Question-Realism-Positioning-Overall}
-\end{subfigswide}
+\subfig[0.24]{results/Question-Workload-Positioning-Overall}
+\end{subfigs}
 
-\figref{questions} shows the questionnaire results for each vibrotactile positioning.
+\figref{results_questions} shows the questionnaire results for each vibrotactile positioning.
 %
@@ -1,18 +1,19 @@
 \section{Results}
 \label{results}
 
-\begin{subfigswide}{grasp_results}{%
-Experiment \#{2}: Grasp task.
-%
-Geometric means with bootstrap 95~\% confidence interval for each {vibrotactile positioning}
-%
-and Tukey's HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
-}
-\subfig[0.24]{results/Grasp-CompletionTime-Location-Overall-Means}%[Time to complete a trial.]
-\subfig[0.24]{results/Grasp-Contacts-Location-Overall-Means}%[Number of contacts with the cube.]
-\subfig[0.24]{results/Grasp-TimePerContact-Location-Overall-Means}%[Mean time spent on each contact.]
-\subfig[0.24]{results/Grasp-GripAperture-Location-Overall-Means}%[\centering Distance between thumb and the other fingertips when grasping.]
-\end{subfigswide}
+\begin{subfigs}{grasp_results}{Results of the grasp task performance metrics for each vibrotactile positioning. }[
+Geometric means with bootstrap 95~\% confidence intervals and Tukey's HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
+][
+\item Time to complete a trial.
+\item Number of contacts with the cube.
+\item Mean time spent on each contact.
+\item Distance between thumb and the other fingertips when grasping.
+]
+\subfig[0.24]{results/Grasp-CompletionTime-Location-Overall-Means}
+\subfig[0.24]{results/Grasp-Contacts-Location-Overall-Means}
+\subfig[0.24]{results/Grasp-TimePerContact-Location-Overall-Means}
+\subfig[0.24]{results/Grasp-GripAperture-Location-Overall-Means}
+\end{subfigs}
 
 Results were analyzed in the same way as in the first experiment (\secref{results}).
 %
@@ -59,8 +59,8 @@
 }
 
 % Images
-% example: \fig[1]{universe}{The Universe}[Additional caption text, not shown in the list of figures]
-% reference later with: \figref{universe}
+% example: \fig[1]{filename}{Caption}[Additional caption text, not shown in the list of figures]
+% reference later with: \figref{filename}
 % 1 = \linewidth = 150 mm
 \RenewDocumentCommand{\fig}{O{1} O{htbp} m m O{}}{% #1 = width, #2 = position, #3 = filename, #4 = caption, #5 = additional caption
 \begin{figure}[#2]%
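Since the usage comment now shows only placeholders, a concrete call of \fig may help; the filename, caption, and reference below are invented for illustration and follow the documented signature \fig[width][position]{filename}{caption}[additional caption]:

% Hypothetical usage: width 0.75\linewidth, default float position.
\fig[0.75]{method/setup}{Overview of the experimental setup.}[Pictures captured through the AR headset.]
% Referenced later with the filename, per the comment above:
\figref{method/setup}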
@@ -81,20 +81,23 @@
 }
 
 % example:
-% \begin{subfigs}{label}{Fig title}[Subfig titles]
-% \subfig{subfig1}%
-% \subfig[1][htbp]{subfig2}[caption]%
+% \begin{subfigs}{label}{Caption}[Additional caption text, not shown in the list of figures][
+% \item Subfig title 1.
+% \item Subfig title 2.
+% ]
+% \subfig{filename1}%
+% \subfig[1][htbp]{filename2}%
 % \end{subfigs}
 % reference later with: \figref{label}
-\RenewDocumentEnvironment{subfigs}{O{htbp} m m o}{% #1 = position, #2 = label, #3 = filename, #4 = subfig titles
+\RenewDocumentEnvironment{subfigs}{O{htbp} m m O{} o}{% #1 = position, #2 = label, #3 = caption, #4 = additional caption, #5 = subfig titles
 \begin{figure}[#1]%
 \centering%
 }{%
 \caption[#3]{%
-#3%
-\IfValueTF{#4}{%
+#3#4%
+\IfValueTF{#5}{%
 \begin{enumerate*}[label=\textbf{(\alph*)}]%
-#4%
+#5%
 \end{enumerate*}%
 }%
 }%
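To close the loop on the new five-argument signature, here is a complete hypothetical instance of the redefined environment; the label, caption text, and filenames are invented:

% Hypothetical usage of the renewed subfigs environment.
\begin{subfigs}{example}{Caption shown in the list of figures. }[
Additional detail kept out of the list of figures.
][
\item First panel.
\item Second panel.
]
\subfig[0.45]{results/example-a}
\subfig[0.45]{results/example-b}
\end{subfigs}

Note the design choice visible in the definition: #4 is declared as O{} rather than o because it is concatenated directly into the caption as #3#4 and so needs an empty default, while #5 is only probed with \IfValueTF and can remain a plain optional argument.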