Remove "see" before section or figure reference

2024-09-16 12:57:05 +02:00
parent 8705affcc4
commit 3b66b69fa1
21 changed files with 145 additions and 133 deletions

View File

@@ -108,7 +108,7 @@ The most mature devices are \HMDs, which are portable headsets worn directly on
\AR/\VR can also be extended to render for sensory modalities other than vision.
%
\textcite{jeon2009haptic} proposed extending the \RV continuum to include haptic feedback by decoupling into two orthogonal haptic and visual axes (see \figref{visuo-haptic-rv-continuum3}).
\textcite{jeon2009haptic} proposed extending the \RV continuum to include haptic feedback by decoupling into two orthogonal haptic and visual axes (\figref{visuo-haptic-rv-continuum3}).
%
The combination of the two axes defines 9 types of \vh environments, with 3 possible levels of \RV for each \v or \h axis: real, augmented and virtual.
%

View File

@@ -94,7 +94,7 @@ As illustrated in the \figref{sensorimotor_continuum}, \Citeauthor{jones2006huma
]
This classification has been further refined by \textcite{bullock2013handcentric} into 15 categories of possible hand interactions with an object.
In this thesis, we are interested in exploring \vh augmentations (see \partref{perception}) and grasping of \VOs (see \partref{manipulation}) in the context of \AR and \WHs.
In this thesis, we are interested in exploring \vh augmentations (\partref{perception}) and grasping of \VOs (\partref{manipulation}) in the context of \AR and \WHs.
\subsubsection{Hand Anatomy and Motion}
\label{hand_anatomy}
@@ -143,8 +143,8 @@ It takes only \qtyrange{2}{3}{\s} to perform these procedures, except for contou
\subsubsection{Grasp Types}
\label{grasp_types}
Thanks to the degrees of freedom of its skeleton, the hand can take many postures to grasp an object (see \secref{hand_anatomy}).
By placing the thumb or palm against the other fingers (pad or palm grasps, respectively), or by placing the fingers against each other as if holding a cigarette (side grasp), the hand can hold the object securely.
Thanks to the degrees of freedom of its skeleton, the hand can take many postures to grasp an object (\secref{hand_anatomy}).
By placing the thumb or palm against the other fingers (pad or palm opposition, respectively), or by placing the fingers against each other as if holding a cigarette (side opposition), the hand can hold the object securely.
Grasping adapts to the shape of the object and the task to be performed, \eg grasping a pen with the fingertips then holding it to write, or taking a mug by the body to fill it and by the handle to drink from it~\cite{cutkosky1986modeling}.
Three types of grasp are differentiated according to their degree of strength and precision.
In \emph{power grasps}, the object is held firmly and follows the movements of the hand rigidly.
@@ -154,7 +154,7 @@ In \emph{precision grasps}, the fingers can move the object within the hand but
For all possible objects and tasks, the number of grasp types can be reduced to 34 and classified as in the taxonomy of \figref{gonzalez2014analysis}~\cite{gonzalez2014analysis}.\footnote{An updated taxonomy was then proposed by \textcite{feix2016grasp}: it is more complete but harder to present.}
For everyday objects, this number is even smaller, with between 5 and 10 grasp types depending on the activity~\cite{bullock2013grasp}.
Furthermore, the fingertips are the most involved areas of the hand, both in terms of frequency of use and time spent in contact: In particular, the thumb is almost always used, as well as the index and middle fingers, but the other fingers are used less frequently~\cite{gonzalez2014analysis}.
This can be explained by the sensitivity of the fingertips (see \secref{haptic_sense}) and the ease with which the thumb can be opposed to the index and middle fingers compared to the other fingers.
This can be explained by the sensitivity of the fingertips (\secref{haptic_sense}) and the ease with which the thumb can be opposed to the index and middle fingers compared to the other fingers.
\fig{gonzalez2014analysis}{Taxonomy of grasp types of~\textcite{gonzalez2014analysis}}[, classified according to their type (power, precision or intermediate) and the shape of the grasped object. Each grasp shows the areas of the palm and fingers in contact with the object, together with an example object.]
@@ -162,7 +162,7 @@ This can be explained by the sensitivity of the fingertips (see \secref{haptic_s
\subsection{Haptic Perception of Object Properties}
\label{object_properties}
The active exploration of an object with the hand is performed as a sensorimotor loop: The exploratory movements (see \secref{exploratory_procedures}) guide the search for sensory information (see \secref{haptic_sense}) and adapt to it, allowing one to construct a haptic perception of the object's properties.
The active exploration of an object with the hand is performed as a sensorimotor loop: The exploratory movements (\secref{exploratory_procedures}) guide the search for sensory information (\secref{haptic_sense}) and adapt to it, allowing one to construct a haptic perception of the object's properties.
There are two main types of \emph{perceptual properties}.
The \emph{material properties} are the perception of the roughness, hardness, temperature and friction of the surface of the object~\cite{bergmanntiest2010tactual}.
The \emph{spatial properties} are the perception of the weight, shape and size of the object~\cite{lederman2009haptic}.
@@ -181,7 +181,7 @@ It is, for example, the perception of the fibers of fabric or wood and the textu
Roughness is what essentially characterises the perception of the \emph{texture} of the surface~\cite{hollins1993perceptual,baumgartner2013visual}.
When touching a surface in static touch, the asperities deform the skin and cause pressure sensations that allow a good perception of coarse roughness.
But when running the finger over the surface with a lateral movement (see \secref{exploratory_procedures}), vibrations are also caused which give a better discrimination range and precision of roughness~\cite{bensmaia2005pacinian}.
But when running the finger over the surface with a lateral movement (\secref{exploratory_procedures}), vibrations are also caused which give a better discrimination range and precision of roughness~\cite{bensmaia2005pacinian}.
In particular, when the asperities are smaller than \qty{0.1}{mm}, such as paper fibers, the pressure cues are no longer captured and only the movement, \ie the vibrations, can be used to detect the roughness~\cite{hollins2000evidence}.
This limit distinguishes \emph{macro-roughness} from \emph{micro-roughness}.
@@ -211,7 +211,7 @@ A larger spacing between elements increases the perceived roughness, but reaches
It is also possible to perceive the roughness of a surface by \emph{indirect touch}, with a tool held in the hand, for example by writing with a pen on paper~\cite{klatzky2003feeling}.
The skin is no longer deformed and only the vibrations of the tool are transmitted.
But this information is sufficient to feel the roughness, whose perceived intensity follows the same quadratic law.
The intensity peak varies with the size of the contact surface of the tool, \eg a small tool allows finer spacings between the elements to be perceived than with the finger (see \figref{klatzky2003feeling_2}).
The intensity peak varies with the size of the contact surface of the tool, \eg a small tool allows finer spacings between the elements to be perceived than with the finger (\figref{klatzky2003feeling_2}).
However, as the speed of exploration changes the transmitted vibrations, a faster speed shifts the perceived intensity peak slightly to the right, \ie decreasing perceived roughness for fine spacings and increasing it for large spacings~\cite{klatzky2003feeling}.
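This \emph{quadratic law} can be sketched as follows (an illustrative form; the coefficients $a$, $b$ and $c > 0$ are assumed here, fitted per surface and exploration condition):
\begin{equation}
\psi(s) = a + b\,s - c\,s^2,
\end{equation}
where $\psi$ is the perceived roughness magnitude and $s$ the spacing between the surface elements: the perceived intensity peaks at $s = b / (2c)$ and decreases for larger spacings.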
\begin{subfigs}{klatzky2003feeling}{Estimation of haptic roughness of a surface of conical micro-elements by active exploration~\cite{klatzky2003feeling}. }[
@@ -248,7 +248,7 @@ The perceived softness of a fruit allows us to judge its ripeness, while ceramic
When tapping on a surface, metal will be perceived as harder than wood.
If the surface returns to its original shape after being deformed, the object is elastic (like a spring), otherwise it is plastic (like clay).
When the finger presses on an object (see \figref{exploratory_procedures}), its surface will move and deform with some resistance, and the contact area of the skin will also expand, changing the pressure distribution.
When the finger presses on an object (\figref{exploratory_procedures}), its surface will move and deform with some resistance, and the contact area of the skin will also expand, changing the pressure distribution.
When the surface is touched or tapped, vibrations are also transmitted to the skin.
Passive touch (without voluntary hand movements) and tapping allow a perception of hardness as good as active touch~\cite{friedman2008magnitude}.
@@ -290,7 +290,7 @@ Friction (or slipperiness) is the perception of \emph{resistance to movement} on
Sandpaper is typically perceived as sticky because it has a strong resistance to sliding on its surface, while glass is perceived as more slippery.
This perceptual property is closely related to the perception of roughness~\cite{hollins1993perceptual,baumgartner2013visual}.
When running the finger on a surface with a lateral movement (see \secref{exploratory_procedures}), the skin-surface contacts generate frictional forces in the opposite direction to the finger movement, giving kinesthetic cues, and also stretch the skin, giving cutaneous cues.
When running the finger on a surface with a lateral movement (\secref{exploratory_procedures}), the skin-surface contacts generate frictional forces in the opposite direction to the finger movement, giving kinesthetic cues, and also stretch the skin, giving cutaneous cues.
As illustrated in \figref{smith1996subjective_1}, a stick-slip phenomenon can also occur, where the finger is intermittently slowed by friction before continuing to move, on both rough and smooth surfaces~\cite{derler2013stick}.
The amplitude of the frictional force $F_s$ is proportional to the normal force of the finger $F_n$, \ie the force perpendicular to the surface, according to a coefficient of friction $\mu$:
\begin{equation}
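% Equation body assumed from the definition in the preceding sentence:
F_s = \mu\,F_n
\end{equation}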
@@ -340,7 +340,7 @@ For example, a larger object or a smoother surface, which increases the contact
Weight, size and shape are haptic spatial properties that are independent of the material properties described above.
Weight (or heaviness/lightness) is the perceived \emph{mass} of the object~\cite{bergmanntiest2010haptic}.
It is typically estimated by holding the object statically in the palm of the hand to feel the gravitational force (see \secref{exploratory_procedures}).
It is typically estimated by holding the object statically in the palm of the hand to feel the gravitational force (\secref{exploratory_procedures}).
A relative weight difference of \percent{8} is then required to be perceptible~\cite{brodie1985jiggling}.
By lifting the object, it is also possible to feel the object's force of inertia, \ie its resistance to changes in velocity.
This provides an additional perceptual cue to its mass and slightly improves weight discrimination.
@@ -348,15 +348,15 @@ For both gravity and inertia, kinesthetic cues to force are much more important
%Le lien entre le poids physique et l'intensité perçue est variable selon les individus~\cite{kappers2013haptic}.
Size can be perceived as the object's \emph{length} (in one dimension) or its \emph{volume} (in three dimensions)~\cite{kappers2013haptic}.
In both cases, and if the object is small enough, a precision grip (see \figref{gonzalez2014analysis}) between the thumb and index finger can discriminate between sizes with an accuracy of \qty{1}{\mm}, but with an overestimation of length (power law with exponent \num{1.3}).
Otherwise, it is necessary to follow the contours of the object with the fingers to estimate its length (see \secref{exploratory_procedures}), but with ten times less accuracy and an underestimation of length (power law with an exponent of \num{0.9})~\cite{bergmanntiest2011cutaneous}.
In both cases, and if the object is small enough, a precision grip (\figref{gonzalez2014analysis}) between the thumb and index finger can discriminate between sizes with an accuracy of \qty{1}{\mm}, but with an overestimation of length (power law with exponent \num{1.3}).
Otherwise, it is necessary to follow the contours of the object with the fingers to estimate its length (\secref{exploratory_procedures}), but with ten times less accuracy and an underestimation of length (power law with an exponent of \num{0.9})~\cite{bergmanntiest2011cutaneous}.
The perception of the volume of an object that is not small is typically done by hand enclosure, but the estimate is strongly influenced by the size, shape and mass of the object, for an identical volume~\cite{kahrimanovic2010haptic}.
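The two power laws for length mentioned above can be written compactly as (an illustrative formulation, with $c$ an assumed scaling constant):
\begin{equation}
\psi = c\,\ell^{\,n},
\end{equation}
where $\psi$ is the perceived length of a physical length $\ell$, with $n = \num{1.3}$ for the precision grip (overestimation) and $n = \num{0.9}$ for contour following (underestimation)~\cite{bergmanntiest2011cutaneous}.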
The shape of an object can be defined as the perception of its \emph{global geometry}, \ie its form and contours.
This is the case, for example, when looking for a key in a pocket.
The exploration of contours and enclosure are then employed, as for the estimation of length and volume.
If the object is not known in advance, object identification is rather slow, taking several seconds~\cite{norman2004visual}.
Therefore, the exploration of other properties is favoured to recognize the object more quickly, in particular marked edges~\cite{klatzky1987there}, \eg a screw among nails (see \figref{plaisier2009salient_2}), or certain material properties~\cite{lakatos1999haptic,plaisier2009salient}, \eg a metal object among plastic objects.
Therefore, the exploration of other properties is favoured to recognize the object more quickly, in particular marked edges~\cite{klatzky1987there}, \eg a screw among nails (\figref{plaisier2009salient_2}), or certain material properties~\cite{lakatos1999haptic,plaisier2009salient}, \eg a metal object among plastic objects.
\begin{subfigs}{plaisier2009salient}{Identification of a sphere among cubes~\cite{plaisier2009salient}. }[
\item The shape has a significant effect on the perception of the volume of an object, \eg a sphere is perceived as smaller than a cube of the same volume.

View File

@@ -26,17 +26,17 @@ An increasing \emph{wearability} resulting in the loss of the system's kinesthet
\subfig{pacchierotti2017wearable_3}
\end{subfigs}
Haptic research stems from robotics and teleoperation, and historically led to the design of haptic systems that are \emph{grounded} to an external support in the environment, such as a table (see \figref{pacchierotti2017wearable_1}).
These are robotic arms whose end-effector is either held in the hand or worn on a finger, and which simulate interactions with a \VE by providing kinesthetic force and torque feedback (see \figref{pacchierotti2015cutaneous}).
Haptic research stems from robotics and teleoperation, and historically led to the design of haptic systems that are \emph{grounded} to an external support in the environment, such as a table (\figref{pacchierotti2017wearable_1}).
These are robotic arms whose end-effector is either held in the hand or worn on a finger, and which simulate interactions with a \VE by providing kinesthetic force and torque feedback (\figref{pacchierotti2015cutaneous}).
They provide high fidelity haptic feedback but are heavy, bulky and limited to small workspaces~\cite{culbertson2018haptics}.
More portable designs have been developed by moving the grounded part to the user's body.
The entire robotic system is thus mounted on the user, forming an exoskeleton capable of providing kinesthetic feedback to the finger, \eg in \figref{achibet2017flexifingers}.
However, it cannot constrain the movements of the wrist and the reaction force is transmitted to the user where the device is grounded (see \figref{pacchierotti2017wearable_2}).
However, it cannot constrain the movements of the wrist and the reaction force is transmitted to the user where the device is grounded (\figref{pacchierotti2017wearable_2}).
They are often heavy and bulky and cannot be considered wearable.
\textcite{pacchierotti2017wearable} stated that: \enquote{A wearable haptic interface should also be small, easy to carry, comfortable, and it should not impair the motion of the wearer}.
An approach is then to move the grounding point very close to the end-effector (see \figref{pacchierotti2017wearable_3}): the interface is limited to cutaneous haptic feedback, but its design is more compact, lightweight and comfortable, \eg in \figref{leonardis20173rsr}, and the system is wearable.
An approach is then to move the grounding point very close to the end-effector (\figref{pacchierotti2017wearable_3}): the interface is limited to cutaneous haptic feedback, but its design is more compact, lightweight and comfortable, \eg in \figref{leonardis20173rsr}, and the system is wearable.
Moreover, as detailed in \secref{object_properties}, cutaneous sensations are necessary and often sufficient for the perception of the haptic properties of an object explored with the hand, as also argued by \textcite{pacchierotti2017wearable}.
\begin{subfigs}{grounded_to_wearable}{
@@ -134,8 +134,8 @@ They are small, lightweight and can be placed directly on any part of the hand.
All vibrotactile actuators are based on the same principle: generating an oscillating motion from an electric current with a frequency and amplitude high enough to be perceived by cutaneous mechanoreceptors.
Several types of vibrotactile actuators are used in haptics, with different trade-offs between size, available \DoFs and application constraints:
\begin{itemize}
\item An \ERM is a \DC motor that rotates an off-center mass when a voltage or current is applied (see \figref{precisionmicrodrives_erm}). \ERMs are easy to control, inexpensive and can be encapsulated in a cylinder or coin form factor of a few millimeters. However, they have only one \DoF because both the frequency and amplitude of the vibration are coupled to the speed of the rotation, \eg low (high) frequencies output at low (high) amplitudes, as shown on \figref{precisionmicrodrives_erm_performances} and sketched in the equation after this list.
\item A \LRA consists of a coil that creates a magnetic field from an \AC to oscillate a magnet attached to a spring, like an audio loudspeaker (see \figref{precisionmicrodrives_lra}). They are more complex to control and a bit larger than \ERMs. Each \LRA is designed to vibrate with maximum amplitude at a given frequency, but will not vibrate efficiently at other frequencies, \ie their bandwidth is narrow, as shown on \figref{azadi2014vibrotactile}.
\item An \ERM is a \DC motor that rotates an off-center mass when a voltage or current is applied (\figref{precisionmicrodrives_erm}). \ERMs are easy to control, inexpensive and can be encapsulated in a cylinder or coin form factor of a few millimeters. However, they have only one \DoF because both the frequency and amplitude of the vibration are coupled to the speed of the rotation, \eg low (high) frequencies output at low (high) amplitudes, as shown on \figref{precisionmicrodrives_erm_performances} and sketched in the equation after this list.
\item A \LRA consists of a coil that creates a magnetic field from an \AC to oscillate a magnet attached to a spring, like an audio loudspeaker (\figref{precisionmicrodrives_lra}). They are more complex to control and a bit larger than \ERMs. Each \LRA is designed to vibrate with maximum amplitude at a given frequency, but will not vibrate efficiently at other frequencies, \ie their bandwidth is narrow, as shown on \figref{azadi2014vibrotactile}.
\item A \VCA is similar to a \LRA but capable of generating vibration with two \DoFs, \ie independent control of the frequency and amplitude of the vibration over a wide bandwidth. They are larger than \ERMs and \LRAs, but can generate more complex renderings.
\item Piezoelectric actuators deform a solid material when a voltage is applied. They are very small and thin, and allow two \DoFs of amplitude and frequency control. However, they require high voltages to operate, thus limiting their use in wearable devices.
\end{itemize}
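The single \DoF of the \ERM can be sketched with its driving physics (a first-order model, assuming an eccentric mass $m$ at radius $r$ spun at angular speed $\omega$):
\begin{equation}
f = \frac{\omega}{2\pi}, \qquad F = m\,r\,\omega^2,
\end{equation}
where both the vibration frequency $f$ and the centrifugal force amplitude $F$ depend on the single control input $\omega$, hence their coupling.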
@@ -169,8 +169,8 @@ Therefore, the visual rendering of a touched object can also greatly influence t
\textcite{bhatia2024augmenting} categorize the tactile augmentations of real objects into three types: direct touch, touch-through, and tool mediated.
In direct touch, the haptic device does not cover the inside of the hand, so as not to impair the user's interaction with the \RE.
We are interested in direct touch augmentations with wearable haptic devices (see \secref{wearable_haptic_devices}), as their integration with \AR is particularly promising for direct hand interaction with visuo-haptic augmentations.
We also focus on tactile augmentations stimulating the mechanoreceptors of the skin (see \secref{haptic_sense}), as these are the most common existing haptic interfaces, thus excluding temperature perception.
We are interested in direct touch augmentations with wearable haptic devices (\secref{wearable_haptic_devices}), as their integration with \AR is particularly promising for direct hand interaction with visuo-haptic augmentations.
We also focus on tactile augmentations stimulating the mechanoreceptors of the skin (\secref{haptic_sense}), as these are the most common existing haptic interfaces, thus excluding temperature perception.
% \cite{bhatia2024augmenting}. Types of interfaces : direct touch, through touch, through tool. Focus on direct touch, but when no rendering done,
% \cite{klatzky2003feeling} : rendering roughness, friction, deformation, temperatures

View File

@@ -1,7 +1,7 @@
\section{Principles and Capabilities of AR}
\label{augmented_reality}
The first \AR headset was invented by \textcite{sutherland1968headmounted}: With the technology available at the time, it was already capable of displaying virtual objects at a fixed point in space in real time, giving the user the illusion that the content was present in the room (see \figref{sutherland1968headmounted}).
The first \AR headset was invented by \textcite{sutherland1968headmounted}: With the technology available at the time, it was already capable of displaying virtual objects at a fixed point in space in real time, giving the user the illusion that the content was present in the room (\figref{sutherland1968headmounted}).
Fixed to the ceiling, the headset displayed a stereoscopic (one image per eye) perspective projection of the virtual content on a transparent screen, taking into account the user's position, and thus already following the interaction loop presented in \figref[introduction]{interaction-loop}.
\begin{subfigs}{sutherland1968headmounted}{Photos of the first \AR system~\cite{sutherland1968headmounted}. }[
@@ -90,14 +90,14 @@ Despite the clear and acknowledged definition presented in \secref{ar_definition
Presence is one of the key concepts to characterize a \VR experience.
\AR and \VR are both essentially illusions as the virtual content does not physically exist but is just digitally simulated and rendered to the user's perception through a user interface and the user's senses.
Such an experience of suspension of disbelief in \VR is what is called presence, and it can be decomposed into two dimensions: \PI and \PSI~\cite{slater2009place}.
\PI is the sense of the user of \enquote{being there} in the \VE (see \figref{presence-vr}).
\PI is the sense of the user of \enquote{being there} in the \VE (\figref{presence-vr}).
It emerges from the real-time rendering of the \VE from the user's perspective: being able to move around inside the \VE and look from different points of view.
\PSI is the illusion that the virtual events are really happening, even if the user knows that they are not real.
It does not mean that the virtual events are realistic, but that they are plausible and coherent with the user's expectations.
A third strong illusion in \VR is the \SoE, which is the illusion that the virtual body is one's own~\cite{slater2022separate,guy2023sense}.
Presence in \AR is far less defined and studied than in \VR~\cite{tran2024survey}, but it will be useful to design, evaluate and discuss our contributions in the next chapters.
Accordingly, \textcite{slater2022separate} proposed to invert \PI into what we can call \enquote{object illusion}, \ie the sense that the virtual object \enquote{feels here} in the \RE (see \figref{presence-ar}).
Accordingly, \textcite{slater2022separate} proposed to invert \PI into what we can call \enquote{object illusion}, \ie the sense that the virtual object \enquote{feels here} in the \RE (\figref{presence-ar}).
As in \VR, \VOs must be able to be seen from different angles by moving the head but also, which is more difficult, be consistent with the \RE, \eg occlude or be occluded by real objects~\cite{macedo2023occlusion}, cast shadows or reflect lights.
The \PSI can be applied to \AR as is, but the \VOs must additionally have knowledge of the \RE and react to it accordingly.
\textcite{skarbez2021revisiting} also named \PI for \AR as \enquote{immersion} and \PSI as \enquote{coherence}, and these terms will be used in the remainder of this thesis.
@@ -120,12 +120,12 @@ As presence, \SoE in \AR is a recent topic and little is known about its percept
Both \AR/\VR and haptic systems are able to render virtual objects and environments as sensations displayed to the user's senses.
However, as presented in \figref[introduction]{interaction-loop}, the user must be able to manipulate the virtual objects and environments to complete the loop, \eg through a hand-held controller, a tangible object, or even directly with the hands.
An interaction technique is then required to map user inputs to actions on the \VE~\cite{laviola20173d}.
An \emph{interaction technique} is then required to map user inputs to actions on the \VE~\cite{laviola20173d}.
\subsubsection{Interaction Techniques}
For a user to interact with a computer system, they first perceive the state of the system and then act on it using an input interface.
An input interface can be either an active sensing device, physically held or worn, such as a mouse, a touchscreen, or a hand-held controller, or a passive sensing one, requiring no physical contact, such as eye trackers, voice recognition, or hand tracking.
An input interface can be either an \emph{active sensing} device, physically held or worn, such as a mouse, a touchscreen, or a hand-held controller, or a \emph{passive sensing} one, requiring no physical contact, such as eye trackers, voice recognition, or hand tracking.
The sensor data gathered by the input interface are then translated into actions within the computer system by an interaction technique.
For example, a cursor on a screen can be moved either with a mouse or with arrow keys on a keyboard, or a two-finger swipe on a touchscreen can be used to scroll or zoom an image.
Choosing useful and efficient input interfaces and interaction techniques is crucial for the user experience and the tasks that can be performed within the system~\cite{laviola20173d}.
@@ -145,7 +145,7 @@ These three tasks are geometric (rigid) manipulations of the object: they do not
The \emph{navigation tasks} are the movements of the user within the \VE.
Travel is the control of the position and orientation of the viewpoint in the \VE, \eg physical walking, velocity control, or teleportation.
Wayfinding is the cognitive planning of the movement such as pathfinding or route following (see \figref{grubert2017pervasive}).
Wayfinding is the cognitive planning of the movement such as pathfinding or route following (\figref{grubert2017pervasive}).
The \emph{system control tasks} are changes in the system state through commands or menus such as creation, deletion, or modification of objects, \eg as in \figref{roo2017onea}. They also include the input of text, numbers, or symbols.
@@ -161,7 +161,7 @@ As of today, an immersive \AR system track itself with the user in \ThreeD, usin
It enables registering the \VE with the \RE, and the user simply moves to navigate within the virtual content.
%This tracking and mapping of the user and \RE into the \VE is named the \enquote{extent of world knowledge} by \textcite{skarbez2021revisiting}, \ie to what extent the \AR system knows about the \RE and is able to respond to changes in it.
However, direct hand manipulation of the virtual content is a challenge that requires specific interaction techniques~\cite{billinghurst2021grand}.
This is often achieved using two interaction techniques: \emph{tangible objects} and \emph{virtual hands}~\cite{hertel2021taxonomy}.
Such \emph{reality-based interaction}~\cite{jacob2008realitybased} in immersive \AR is often achieved using two interaction techniques: \emph{tangible objects} and \emph{virtual hands}~\cite{billinghurst2015survey,hertel2021taxonomy}.
\begin{subfigs}{interaction-techniques}{Interaction techniques in \AR. }[
\item Spatial selection of a virtual item of an extended display using a hand-held smartphone~\cite{grubert2015multifi}.
@@ -176,24 +176,6 @@ This is often achieved using two interaction techniques: \emph{tangible objects}
\subfig{newcombe2011kinectfusion}
\end{subfigs}
\paragraph{Manipulating with Virtual Hands}
In immersive \AR with \enquote{natural} interaction (cf. \cite{billinghurst2005designing}), selection consists of touching the virtual object with the hands, and manipulation of grasping and moving it with the hands.
This is what is called \enquote{virtual hands}: the user's virtual hands in the \VE.
The input device is not a controller, as is often the case in \VR, but directly the hands.
The hands are thus tracked and reproduced in the \VE.
Nevertheless, the main problem of natural hand interaction in a \VE, besides hand tracking, is the lack of physical constraints on the movements of the hand and fingers, which makes actions tiring~\cite{hincapie-ramos2014consumed}, imprecise (without haptic feedback, one cannot tell whether the virtual object is being touched) and difficult (likewise, without haptic feedback one does not feel the object slipping and gets no confirmation that it is properly in hand). Interaction techniques thus remain necessary, and haptic feedback adapted to the interaction constraints of \AR is essential for a good user experience.
This can also be difficult to understand: \textcite{chan2010touching} propose combining continuous feedback, so that the user can locate the tracking of their body, with discrete feedback to confirm their actions. A visual rendering and display of the hands is a continuous feedback; a brief color change or a haptic cue is a discrete feedback. However, this combination has not been evaluated.
\cite{hilliges2012holodesk}
\cite{piumsomboon2013userdefined}: user-defined gestures for manipulation of virtual objects in AR.
\cite{piumsomboon2014graspshell}: direct hand manipulation of virtual objects in immersive AR vs vocal commands.
\cite{chan2010touching}: cues for touching (selection) virtual objects.
Occlusion problems: virtual objects must always remain visible, either by using a transparent rather than an opaque virtual hand, or by displaying their contours when the hand hides them~\cite{piumsomboon2014graspshell}.
\paragraph{Manipulating with Tangibles}
\cite{issartel2016tangible}
@@ -206,6 +188,33 @@ and visually the object may not match the haptic sensations of the tangible
This is why using wearables to modify the cutaneous sensations of the tangible is a solution that works in \VR~\cite{detinguy2018enhancing,salazar2020altering} and could be adapted to \AR.
But, specific to \AR versus \VR, the tangible and the hand are visible, at least partially, even when hidden by a virtual object: how will haptic augmentation work in \AR versus \VR? Perceptual biases? Seeing one's own hand touch the tangible, versus \VR where it is hidden, hence a potentially stronger illusion in \VR?
\paragraph{Manipulating with Virtual Hands}
So-called \enquote{natural} interaction techniques are those that let the user directly use the movements of their own body as the input interface with the \AR/\VR system~\cite{billinghurst2015survey}.
It is the hand that allows us to manipulate the real objects of everyday life with strength and precision (\secref{hand_anatomy}), and virtual hand interaction techniques are therefore the most natural for manipulating virtual objects~\cite{laviola20173d}.
Initially tracked with motion capture devices in the form of gloves or controllers, a user's hands can now be tracked in real time with cameras and computer vision algorithms natively integrated into \AR headsets~\cite{tong2023survey}.
The user's hand is thus tracked and reconstructed in the \VE as a \emph{virtual hand}~\cite{billinghurst2015survey,laviola20173d}.
The simplest models represent the hand as a rigid 3D object following the movements of the real hand with \qty{6}{\DoF} (position and orientation in space)~\cite{talvas2012novel}.
An alternative is to represent only the fingertips, which makes it possible to perform oppositions between the fingers (\secref{grasp_types}).
Finally, the most common techniques represent the whole skeleton of the hand as an articulated kinematic model:
Each virtual phalanx is then represented with a number of \DoFs relative to the previous phalanx (\secref{hand_anatomy}).
There are several techniques for simulating the contacts and interaction of the virtual hand model with virtual objects~\cite{laviola20173d}.
Techniques with a heuristic approach use rules to determine the selection, manipulation and release of an object~\cite{kim2015physicsbased}.
A selection is made, for example, by performing a predefined hand gesture on the object, such as a grasp type (\secref{grasp_types})~\cite{piumsomboon2013userdefined}.
Physics-based techniques instead simulate the forces at the contact points between the model and the object.
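A minimal sketch of such a heuristic selection/release rule is given below (the object interface \texttt{obj.contains}/\texttt{obj.diameter} and the threshold are illustrative assumptions, not taken from a cited system):
\begin{verbatim}
from math import dist  # Euclidean distance between two points

RELEASE_MARGIN = 0.01  # meters; assumed hysteresis margin

def update_grasp(obj, thumb_tip, finger_tips, grasped):
    """Pad-opposition heuristic: select the object when the thumb and
    at least one other fingertip touch it; keep it grasped until the
    pinch aperture opens beyond the object size plus a margin."""
    if not grasped:
        return obj.contains(thumb_tip) and any(
            obj.contains(f) for f in finger_tips)
    aperture = min(dist(thumb_tip, f) for f in finger_tips)
    return aperture <= obj.diameter + RELEASE_MARGIN
\end{verbatim}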
Nevertheless, the main problem of natural hand interaction in a \VE, besides hand tracking, is the lack of physical constraints on the movements of the hand and fingers, which makes actions tiring~\cite{hincapie-ramos2014consumed}, imprecise (without haptic feedback, one cannot tell whether the virtual object is being touched) and difficult (likewise, without haptic feedback one does not feel the object slipping and gets no confirmation that it is properly in hand). Interaction techniques thus remain necessary, and haptic feedback adapted to the interaction constraints of \AR is essential for a good user experience.
This can also be difficult to understand: \textcite{chan2010touching} propose combining continuous feedback, so that the user can locate the tracking of their body, with discrete feedback to confirm their actions. A visual rendering and display of the hands is a continuous feedback; a brief color change or a haptic cue is a discrete feedback. However, this combination has not been evaluated.
\cite{piumsomboon2013userdefined}: user-defined gestures for manipulation of virtual objects in AR.
\cite{piumsomboon2014graspshell}: direct hand manipulation of virtual objects in immersive AR vs vocal commands.
Occlusion problems: virtual objects must always remain visible, either by using a transparent rather than an opaque virtual hand, or by displaying their contours when the hand hides them~\cite{piumsomboon2014graspshell}.
\subsection{Visual Rendering of Hands in AR}
@@ -218,6 +227,9 @@ It has also been shown that over a realistic avatar, a skeleton rendering can p
\fig{prachyabrued2014visual}{Effect of different hand renderings on a pick-and-place task in VR~\cite{prachyabrued2014visual}.}
\cite{hilliges2012holodesk}
\cite{chan2010touching}: cues for touching (selection) virtual objects.
Mutual visual occlusion between a virtual object and the real hand, \ie hiding the virtual object when the real hand is in front of it and hiding the real hand when it is behind the virtual object, is often presented as natural and realistic, enhancing the blending of real and virtual environments~\cite{piumsomboon2014graspshell, al-kalbani2016analysis}.
In video see-through \AR (\VST-\AR), this can be solved as a masking problem by combining the image of the real world captured by a camera with the generated virtual image~\cite{macedo2023occlusion}.
In \OST-\AR, this is more difficult because the virtual environment is displayed as a transparent 2D image on top of the 3D real world, which cannot be easily masked~\cite{macedo2023occlusion}.
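This per-pixel masking can be sketched as follows (notation assumed here): given the real camera image $R$, the rendered virtual image $V$, and their respective depths $d_R$ and $d_V$ at pixel $p$, the composited image $C$ is
\begin{equation}
C(p) = m(p)\,V(p) + \bigl(1 - m(p)\bigr)\,R(p), \qquad m(p) =
\begin{cases}
1 & \text{if } d_V(p) < d_R(p),\\
0 & \text{otherwise},
\end{cases}
\end{equation}
so the real hand masks the virtual object wherever it is closer to the camera.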

View File

@@ -67,10 +67,10 @@ Some studies have investigated the visuo-haptic perception of virtual objects in
They have shown how the latency of the visual rendering of an object with haptic feedback or the type of environment (\VE or \RE) can affect the perception of an identical haptic rendering.
There are indeed inherent and unavoidable latencies in the visual and haptic rendering of virtual objects, and the visual and haptic feedback may not appear to be simultaneous.
In an immersive \VST-\AR setup, \textcite{knorlein2009influence} rendered a virtual piston using force-feedback haptics that participants pressed directly with their hand (see \figref{visuo-haptic-stiffness}).
In an immersive \VST-\AR setup, \textcite{knorlein2009influence} rendered a virtual piston using force-feedback haptics that participants pressed directly with their hand (\figref{visuo-haptic-stiffness}).
In a \TAFC task, participants pressed two pistons and indicated which was stiffer.
One had a reference stiffness but an additional visual or haptic delay, while the other varied with a comparison stiffness but had no delay.\footnote{Participants were not told about the delays and stiffness tested, nor which piston was the reference or comparison. The order of the pistons (which one was pressed first) was also randomized.}%
Adding a visual delay increased the perceived stiffness of the reference piston, while adding a haptic delay decreased it, and adding both delays cancelled each other out (see \figref{knorlein2009influence_2}).
Adding a visual delay increased the perceived stiffness of the reference piston, while adding a haptic delay decreased it, and adding both delays cancelled each other out (\figref{knorlein2009influence_2}).
\begin{subfigs}{visuo-haptic-stiffness}{Perception of haptic stiffness in \VST-\AR ~\cite{knorlein2009influence}. }[
\item Participant pressing a virtual piston rendered by a force-feedback device with their hand.
@@ -91,7 +91,7 @@ where $t_B = t_A + \Delta t$.
Therefore, a haptic delay (positive $\Delta t$) increases the perceived stiffness $k$, while a visual delay in displacement (negative $\Delta t$) decreases perceived $k$~\cite{diluca2011effects}.
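A minimal sketch consistent with this account (assuming the perceived stiffness $\hat{k}$ is estimated as the ratio of the felt force to the seen displacement sampled at the two instants):
\begin{equation}
\hat{k} = \frac{F(t_B)}{x(t_A)} = k\,\frac{x(t_A + \Delta t)}{x(t_A)},
\end{equation}
so that, for an increasing displacement $x(t)$ and an elastic force $F = k\,x$, a positive $\Delta t$ yields $\hat{k} > k$ and a negative $\Delta t$ yields $\hat{k} < k$.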
In a similar \TAFC user study, participants compared the perceived stiffness of virtual pistons in \OST-\AR and \VR~\cite{gaffary2017ar}.
However, the force-feedback device and the participant's hand were not visible (see \figref{gaffary2017ar}).
However, the force-feedback device and the participant's hand were not visible (\figref{gaffary2017ar}).
The reference piston was judged to be stiffer when seen in \VR than in \AR, without participants noticing this difference, and more force was exerted on the piston overall in \VR.
This suggests that the haptic stiffness of virtual objects feels \enquote{softer} in an \AE than in a full \VE.
%Two differences that could be worth investigating with the two previous studies are the type of \AR (visuo or optical) and to see the hand touching the virtual object.
@@ -118,7 +118,7 @@ No participant (out of 19) was able to detect a \qty{50}{\ms} visual lag and a \
A few wearable haptic devices have been specifically designed or experimentally tested for direct hand interaction in immersive \AR.
The main challenge of wearable haptics for \AR is to provide haptic sensations of virtual or augmented objects that are touched and manipulated directly with the fingers while keeping the fingertips free to interact with the \RE.
Several approaches have been proposed to move the actuator away from the fingertip to another location on the hand.
Yet, they differ greatly in the actuators used (see \secref{wearable_haptic_devices}), and thus in the haptic feedback (see \secref{tactile_rendering}) and the placement of the haptic rendering.
Yet, they differ greatly in the actuators used (\secref{wearable_haptic_devices}), and thus in the haptic feedback (\secref{tactile_rendering}) and the placement of the haptic rendering.
Other wearable haptic actuators have been proposed for \AR but are not detailed here.
A first reason is that they permanently cover the fingertip and affect the interaction with the \RE, such as thin-skin tactile interfaces~\cite{withana2018tacttoo,teng2024haptic} or fluid-based interfaces~\cite{han2018hydroring}.
@@ -126,12 +126,12 @@ Another category of actuators relies on systems that cannot be considered as por
\subsubsection{Nail-Mounted Devices}
\textcite{ando2007fingernailmounted} were the first to propose this approach, which they experimented with using a voice-coil mounted on the index fingernail (see \figref{ando2007fingernailmounted}).
The sensation of crossing the edges of a virtual patterned texture (see \secref{texture_rendering}) on a real sheet of paper was rendered with \qty{20}{\ms} vibration impulses at \qty{130}{\Hz}.
\textcite{ando2007fingernailmounted} were the first to propose this approach, which they experimented with using a voice-coil mounted on the index fingernail (\figref{ando2007fingernailmounted}).
The sensation of crossing the edges of a virtual patterned texture (\secref{texture_rendering}) on a real sheet of paper was rendered with \qty{20}{\ms} vibration impulses at \qty{130}{\Hz}.
Participants were able to match the virtual patterns to their real counterparts of height \qty{0.25}{\mm} and width \qtyrange{1}{10}{\mm}, but systematically overestimated the virtual width by \qty{4}{\mm}.
This approach was later extended by \textcite{teng2021touch} with Touch\&Fold, a haptic device mounted on the nail but able to unfold its end-effector on demand to make contact with the fingertip when touching virtual objects (see \figref{teng2021touch}).
This moving platform also contains a \LRA (see \secref{moving_platforms}) and provides contact pressure (\qty{0.34}{\N} force) and texture (\qtyrange{150}{190}{\Hz} bandwidth) sensations.
This approach was later extended by \textcite{teng2021touch} with Touch\&Fold, a haptic device mounted on the nail but able to unfold its end-effector on demand to make contact with the fingertip when touching virtual objects (\figref{teng2021touch}).
This moving platform also contains a \LRA (\secref{moving_platforms}) and provides contact pressure (\qty{0.34}{\N} force) and texture (\qtyrange{150}{190}{\Hz} bandwidth) sensations.
%The whole system is very compact (\qtyproduct{24 x 24 x 41}{\mm}), lightweight (\qty{9.5}{\g}), and fully portable by including a battery and Bluetooth wireless communication. \qty{20}{\ms} for the Bluetooth
When touching virtual objects in \OST-\AR with the index finger, this device was found to be more realistic overall (5/7) than vibrations with a \LRA at \qty{170}{\Hz} on the nail (3/7).
Still, there is a high (\qty{92}{\ms}) latency for the folding mechanism and this design is not suitable for augmenting real tangible objects.
@@ -139,12 +139,12 @@ Still, there is a high (\qty{92}{\ms}) latency for the folding mechanism and thi
% teng2021touch: (5.27+3.03+5.23+5.5+5.47)/5 = 4.9
% ando2007fingernailmounted: (2.4+2.63+3.63+2.57+3.2)/5 = 2.9
To keep the fingertip always free, \textcite{maeda2022fingeret} proposed with Fingeret to adapt the belt actuators (see \secref{belt_actuators}) into a \enquote{finger-side actuator} (see \figref{maeda2022fingeret}).
To keep the fingertip always free, \textcite{maeda2022fingeret} proposed with Fingeret to adapt the belt actuators (\secref{belt_actuators}) into a \enquote{finger-side actuator} (\figref{maeda2022fingeret}).
Mounted on the nail, the device actuates two rollers, one on each side of the fingertip, to deform the skin: When the rollers both rotate inwards (towards the pad) they pull the skin, simulating a contact sensation, and when they both rotate outwards (towards the nail) they push the skin, simulating a release sensation.
By doing quick rotations, the rollers can also simulate a texture sensation.
%The device is also very compact (\qty{60 x 25 x 36}{\mm}), lightweight (\qty{18}{\g}), and portable with a battery and Bluetooth wireless communication with \qty{83}{\ms} latency.
In a user study not in \AR, but involving touching different images on a tablet, Fingeret was found to be more realistic (4/7) than a \LRA at \qty{100}{\Hz} on the nail (3/7) for rendering buttons and a patterned texture (see \secref{texture_rendering}), but not different from vibrations for rendering high-frequency textures (3.5/7 for both).
However, as for \textcite{teng2021touch}, finger speed was not taken into account for rendering vibrations, which may have been detrimental to texture perception (see \secref{texture_rendering}).
In a user study not in \AR, but involving touching different images on a tablet, Fingeret was found to be more realistic (4/7) than a \LRA at \qty{100}{\Hz} on the nail (3/7) for rendering buttons and a patterned texture (\secref{texture_rendering}), but not different from vibrations for rendering high-frequency textures (3.5/7 for both).
However, as for \textcite{teng2021touch}, finger speed was not taken into account for rendering vibrations, which may have been detrimental to texture perception (\secref{texture_rendering}).
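A first-order sketch of why finger speed matters for texture rendering: for a texture grating of spatial period $\lambda$ explored at finger speed $v$, the vibration frequency to render is
\begin{equation}
f = \frac{v}{\lambda},
\end{equation}
so rendering a fixed-frequency vibration regardless of $v$ makes the apparent spatial period of the texture change with the exploration speed.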
\begin{subfigs}{ar_wearable}{Nail-mounted wearable haptic devices designed for \AR. }[
\item A voice-coil rendering a virtual haptic texture on a real sheet of paper~\cite{ando2007fingernailmounted}.
@@ -161,14 +161,14 @@ However, as for \textcite{teng2021touch}, finger speed was not taken into accoun
The haptic ring belt devices of \textcite{minamizawa2007gravity} and \textcite{pacchierotti2016hring}, presented in \secref{belt_actuators}, have been employed to improve the manipulation of real and virtual objects in \AR.
In a \VST-\AR setup, \textcite{scheggi2010shape} explored the effect of rendering the weight (see \secref{weight_rendering}) of a virtual cube placed on a real surface, held with the thumb, index, and middle fingers (see \figref{scheggi2010shape}).
In a \VST-\AR setup, \textcite{scheggi2010shape} explored the effect of rendering the weight (\secref{weight_rendering}) of a virtual cube placed on a real surface, held with the thumb, index, and middle fingers (\figref{scheggi2010shape}).
The middle phalanx of each of these fingers was equipped with a haptic ring of \textcite{minamizawa2007gravity}.
However, no proper user study was conducted to evaluate this feedback.% on the manipulation of the cube.
%that simulated the weight of the cube.
%A virtual cube that could push on the cube was manipulated with the other hand through a force-feedback device.
%\textcite{scheggi2010shape} report that \percent{80} of the participants appreciated the weight feedback.
In pick-and-place tasks in non-immersive \VST-\AR involving both virtual and real objects (see \figref{maisto2017evaluation}), \textcite{maisto2017evaluation} and \textcite{meli2018combining} compared the effects of providing haptic feedback about contacts at the fingertips using either the haptic ring of \textcite{pacchierotti2016hring} on the proximal phalanx, or the moving platform of \textcite{chinello2020modular} on the fingertip.
In pick-and-place tasks in non-immersive \VST-\AR involving both virtual and real objects (\figref{maisto2017evaluation}), \textcite{maisto2017evaluation} and \textcite{meli2018combining} compared the effects of providing haptic feedback about contacts at the fingertips using either the haptic ring of \textcite{pacchierotti2016hring} on the proximal phalanx, or the moving platform of \textcite{chinello2020modular} on the fingertip.
They showed that the haptic feedback improved performance (completion time) and reduced the force exerted on the cubes, compared to visual feedback alone.
The haptic ring was also perceived by users to be more effective than the moving platform.
However, the measured difference in performance could be attributed to either the device or the device position (proximal vs fingertip), or both.
@@ -188,10 +188,10 @@ These two studies were also conducted in non-immersive setups, where users looke
With their \enquote{Tactile And Squeeze Bracelet Interface} (Tasbi), already mentioned in \secref{belt_actuators}, \textcite{pezent2019tasbi} and \textcite{pezent2022design} explored the use of a wrist-worn bracelet actuator.
It is capable of providing a uniform pressure sensation (up to \qty{15}{\N} and \qty{10}{\Hz}) and vibration with six \LRAs (\qtyrange{150}{200}{\Hz} bandwidth).
A user study was conducted in \VR to compare the perception of visuo-haptic stiffness rendering~\cite{pezent2019tasbi}.
In a \TAFC task, participants pressed a virtual button with different levels of stiffness via a virtual hand constrained by the \VE (see \figref{pezent2019tasbi_2}).
In a \TAFC task, participants pressed a virtual button with different levels of stiffness via a virtual hand constrained by the \VE (\figref{pezent2019tasbi_2}).
A higher visual stiffness required a larger physical displacement to press the button (C/D ratio, see \secref{pseudo_haptic}), while the haptic stiffness controlled the rate of the pressure feedback when pressing.
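This mapping can be sketched as follows (notation assumed here): with $x_c$ the physical (control) displacement of the finger and $x_d$ the displayed button displacement,
\begin{equation}
x_d = \frac{x_c}{r_{C/D}},
\end{equation}
so a larger control-display ratio $r_{C/D}$ requires a larger physical displacement for the same visual press, \ie a visually stiffer button.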
When the visual and haptic stiffness were coherent or when only the haptic stiffness changed, participants easily discriminated two buttons with different stiffness levels (see \figref{pezent2019tasbi_3}).
However, if only the visual stiffness changed, participants were not able to discriminate the different stiffness levels (see \figref{pezent2019tasbi_4}).
When the visual and haptic stiffness were coherent or when only the haptic stiffness changed, participants easily discriminated two buttons with different stiffness levels (\figref{pezent2019tasbi_3}).
However, if only the visual stiffness changed, participants were not able to discriminate the different stiffness levels (\figref{pezent2019tasbi_4}).
This suggests that in \VR, haptic pressure is a more important perceptual cue than visual displacement for rendering stiffness.
A short vibration (a \qty{25}{\ms}, \qty{175}{\Hz} square wave) was also rendered when contacting the button, but was kept constant across all conditions: It may have affected the overall perception when only the visual stiffness changed.
@@ -211,5 +211,5 @@ A short vibration (\qty{25}{\ms} \qty{175}{\Hz} square-wave) was also rendered w
\label{visuo_haptic_conclusion}
% the type of rendered object (real or virtual), the rendered haptic property (contact, hardness, texture, see \secref{tactile_rendering}), and .
%In this context of integrating \WHs with \AR to create a \vh-\AE (see \chapref{introduction}), the definition of \textcite{pacchierotti2017wearable} can be extended to an additional criterion: The wearable haptic interface should not impair the interaction with the \RE, \ie the user should be able to touch and manipulate objects in the real world while wearing the haptic device.
%In this context of integrating \WHs with \AR to create a \vh-\AE (\chapref{introduction}), the definition of \textcite{pacchierotti2017wearable} can be extended to an additional criterion: The wearable haptic interface should not impair the interaction with the \RE, \ie the user should be able to touch and manipulate objects in the real world while wearing the haptic device.
% The haptic feedback is thus rendered de-localized from the point of contact of the finger on the rendered object.