WIP related work

This commit is contained in:
2024-09-16 09:15:19 +02:00
parent 10f23cecfd
commit 8705affcc4
34 changed files with 205 additions and 53 deletions

View File

@@ -253,19 +253,20 @@ When the surface is touched or tapped, vibrations are also transmitted to the sk
Passive touch (without voluntary hand movements) and tapping allow a perception of hardness as good as active touch~\cite{friedman2008magnitude}.
Two physical properties of an object determine the haptic perception of its hardness: its stiffness and its elasticity, as shown in \figref{hardness}~\cite{bergmanntiest2010tactual}.
The \emph{stiffness} $k$ of an object is the ratio between the applied force $F$ and the resulting \emph{displacement} $D$ of the surface:
\begin{equation}
\label{eq:stiffness}
k = \frac{F}{D}
\end{equation}
The \emph{elasticity} of an object is expressed by its Young's modulus $Y$, which is the ratio between the applied pressure (the force $F$ per unit area $A$) and the resulting deformation $D / l$ (the relative displacement) of the object:
\begin{equation}
\label{eq:young_modulus}
Y = \frac{F / A}{D / l}
\end{equation}
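For completeness, the two quantities are directly related through the geometry of the object; this relation is not given in the cited works but follows from combining \eqref{stiffness} and \eqref{young_modulus}:
\begin{equation}
k = \frac{F}{D} = \frac{Y A}{l}
\end{equation}
For a given material (fixed $Y$), an object is thus stiffer if it is wider (larger contact area $A$) or shorter (smaller length $l$).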
\begin{subfigs}{stiffness_young}{Perceived hardness of an object by finger pressure. }[
\item Diagram of an object with a stiffness coefficient $k$ and a length $l$ compressed by a force $F$ on an area $A$ by a distance $D$.
\item Identical perceived hardness intensity between Young's modulus (horizontal axis) and stiffness (vertical axis). The dashed and dotted lines indicate the objects tested, the arrows the correspondences made between these objects, and the grey lines the predictions of the quadratic relationship~\cite{bergmanntiest2009cues}.
]
\subfig[.3]{hardness}
@@ -374,4 +375,4 @@ Haptic perception and manipulation of objects with the hand involves several sim
Exploratory movements of the hand are performed on contact with the object to gather sensory information from multiple cutaneous and kinaesthetic receptors.
These sensations express physical parameters in the form of perceptual cues, which are then integrated to form a perception of the property being explored.
It is often the case that one perceptual cue is particularly important in the perception of a property, but perceptual constancy is possible by compensating for its absence with others.
In turn, these perceptions help to guide the grasping and manipulation of the object by adapting the grasp type and the forces applied to the shape of the object and the task to be performed.

View File

@@ -46,12 +46,12 @@ Yet, the user experience in \AR is still highly dependent on the display used.
\begin{subfigs}{ar_applications}{Examples of \AR applications. }[
\item Neurosurgery \AR visualization of the brain on a patient's head~\cite{watanabe2016transvisible}.
\item SpaceTop is a transparent \AR desktop computer featuring direct hand manipulation of \ThreeD content~\cite{lee2013spacetop}.
\item \AR can interactively guide users in document verification tasks by recognizing documents and comparing them with virtual references~\cite{hartl2013mobile}.
\item Inner Garden is a tangible, spatial \AR zen garden for relaxation and meditation~\cite{roo2017inner}.
]
\subfigsheight{47mm}
\subfig{watanabe2016transvisible}
\subfig{lee2013spacetop}
\subfig{hartl2013mobile}
@@ -78,6 +78,7 @@ Using a VST-AR headset have notable consequences, as the "real" view of the envi
%Distances are underestimated~\cite{adams2022depth,peillard2019studying}.
% billinghurst2021grand
\subsection{Presence and Embodiment in AR}
\label{ar_presence}
@@ -96,14 +97,14 @@ It doesn't mean that the virtual events are realistic, but that they are plausib
A third strong illusion in \VR is the \SoE, which is the illusion that the virtual body is one's own~\cite{slater2022separate,guy2023sense}.
Presence in \AR is far less defined and studied than in \VR~\cite{tran2024survey}, but the concept will be useful to design, evaluate, and discuss our contributions in the next chapters.
Thereby, \textcite{slater2022separate} proposed to invert \PI into what we can call an \enquote{object illusion}, \ie the sense of the virtual object of \enquote{being here} in the \RE (see \figref{presence-ar}).
As in \VR, \VOs must be able to be seen from different angles by moving the head but also, which is more difficult, be consistent with the \RE, \eg occlude or be occluded by real objects~\cite{macedo2023occlusion}, cast shadows, or reflect lights.
The \PSI can be applied to \AR as is, but the \VOs must additionally have knowledge of the \RE and react accordingly to it.
\textcite{skarbez2021revisiting} also named \PI for \AR as \enquote{immersion} and \PSI as \enquote{coherence}, and these terms will be used in the remainder of this thesis.
\begin{subfigs}{presence}{The sense of immersion in virtual and augmented environments. Adapted from \textcite{stevens2002putting}. }[
\item Place Illusion (PI) is the sense of the user of \enquote{being there} in the \VE.
\item Object illusion is the sense of the virtual object of \enquote{being here} in the \RE.
]
\subfigsheight{35mm}
\subfig{presence-vr}
@@ -115,34 +116,67 @@ The \PSI can be applied to \AR as is, but the \VOs must additionally have knowle
As with presence, \SoE in \AR is a recent topic and little is known about its effect on the user experience~\cite{genay2021virtual}.
\subsection{Direct Hand Manipulation in AR}
Returning to the interaction loop, the previous sections presented haptic and \AR interfaces that render the \VE from the system to the user, aiming to recreate perceptual experiences similar and comparable to those of everyday life, \ie to provide the best possible immersion (see \secref{ar_presence}).
Both \AR/\VR and haptic systems are thus able to render virtual objects and environments as sensations displayed to the user's senses.
However, as presented in \figref[introduction]{interaction-loop}, the user must also be able to manipulate the virtual objects and environments to complete the loop, \eg through a hand-held controller, a tangible object, or even directly with the hands, which in turn requires sensing and representing the user within the \VE.
An interaction technique is then required to map user inputs to actions on the \VE~\cite{laviola20173d}.
\subsubsection{Interaction Techniques}
\paragraph{From User Inputs to Virtual Actions}
For a user to interact with a computer system, they first perceive the state of the system and then act on it using an input interface.
An input interface can either rely on active sensing, \ie a physically held or worn device such as a mouse, a touchscreen, or a hand-held controller, or on passive sensing that requires no physical contact, such as eye tracking, voice recognition, or hand tracking.
The sensor data gathered by the input interface are then translated into actions within the computer system by an interaction technique.
For example, a cursor on a screen can be moved either with a mouse or with arrow keys on a keyboard, or a two-finger swipe on a touchscreen can be used to scroll or zoom an image.
Choosing useful and efficient input interfaces and interaction techniques is crucial for the user experience and the tasks that can be performed within the system~\cite{laviola20173d}.
As described by \textcite{billinghurst2005designing}, an interaction technique thus links physical elements used as input to virtual elements produced as output (see \figref{interaction-technique}).
Interaction techniques are crucial for the user experience, as they largely dictate the coherence of the system (see \secref{ar_presence}) through the quality of the actions made possible on the \VE.
A long-standing principle of human-computer interaction is to reduce the gap between these physical and virtual elements [Van Dam, 1997], \ie to make the interaction as \enquote{natural}, and as invisible, as possible.
\fig[0.5]{interaction-technique}{An interaction technique maps user inputs to actions within a computer system. Adapted from \textcite{billinghurst2005designing}.}
\paragraph{Tasks}
\textcite{laviola20173d} classify interaction techniques into three categories based on the tasks they enable users to perform: manipulation, navigation, and system control.
\textcite{hertel2021taxonomy} proposed a revised taxonomy of interaction techniques specifically for immersive \AR, based on the tasks to perform and the input modalities used.
The proposed tasks are creation, selection, (geometric) manipulation, abstract manipulation, and text entry, each of which can potentially be accomplished with different input modalities.
The \emph{manipulation tasks} are the most fundamental tasks in \AR and \VR systems, and the basic building blocks for more complex interactions.
\emph{Selection} is the identification or acquisition of a specific virtual object, \eg pointing at a target as in \figref{grubert2015multifi}, touching a button with a finger, or grasping an object with a hand.
\emph{Positioning} and \emph{rotation} of a selected object are respectively the change of its position and orientation in \ThreeD space.
It is also common to \emph{resize} a virtual object to change its size.
These three tasks are geometric manipulations of the object: they change its pose and scale but not its shape.
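Formally, and as a notation introduced here for illustration only, these manipulations amount to applying a similarity transform to every point $\mathbf{p}$ of the object:
\begin{equation}
\mathbf{p}' = s\,\mathbf{R}\,\mathbf{p} + \mathbf{t}
\end{equation}
where the translation vector $\mathbf{t}$ corresponds to positioning, the rotation matrix $\mathbf{R}$ to rotation, and the uniform scale factor $s$ to resizing.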
The \emph{navigation tasks} are the movements of the user within the \VE.
\emph{Travel} is the control of the position and orientation of the viewpoint in the \VE, \eg by physical walking, velocity control, or teleportation.
\emph{Wayfinding} is the cognitive planning of the movement, such as path finding or route following (see \figref{grubert2017pervasive}).
The \emph{system control tasks} are changes in the system state through commands or menus, such as the creation, deletion, or modification of objects, \eg as in \figref{roo2017onea}, as well as the input of text, numbers, or symbols.
\paragraph{Reducing the Physical-Virtual Gap}
In \AR and \VR, the state of the system is displayed to the user as a \VE seen spatially in \ThreeD.
Within an immersive and portable \AR system, this \VE is experienced at a 1:1 scale and as an integral part of the \RE.
The rendering gap between the physical and virtual elements, as described on the interaction loop in \figref[introduction]{interaction-loop}, is thus experienced as very narrow or even not consciously perceived by the user.
This manifests as a sense of presence of the virtual, as presented in \secref{ar_presence}.
As the physical-virtual rendering gap is reduced, we could expect interaction with the \VE to be as similar and seamless as with a physical environment, which \textcite{jacob2008realitybased} called \emph{reality-based interaction}.
Today, an immersive \AR system tracks itself and the user in \ThreeD using tracking sensors and pose estimation algorithms~\cite{marchand2016pose}, \eg as in \figref{newcombe2011kinectfusion}.
This enables the \VE to be registered with the \RE, and the user simply moves to navigate within the virtual content.
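As an illustration of this registration, with notation introduced here rather than taken from the cited works, a virtual point $\mathbf{x}_w$ defined in the world frame is drawn at the image position $\mathbf{x}_i$ given by the estimated headset pose and the display projection:
\begin{equation}
\mathbf{x}_i \sim \mathbf{K} \left[\, \mathbf{R} \mid \mathbf{t} \,\right] \mathbf{x}_w
\end{equation}
where $[\,\mathbf{R} \mid \mathbf{t}\,]$ is the world-to-headset pose estimated by the tracking algorithms, $\mathbf{K}$ the projection of the display, and $\sim$ denotes equality up to scale in homogeneous coordinates.
As the user moves, only the estimated pose changes, so the virtual content remains anchored in the \RE.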
%This tracking and mapping of the user and \RE into the \VE is named the \enquote{extent of world knowledge} by \textcite{skarbez2021revisiting}, \ie to what extent the \AR system knows about the \RE and is able to respond to changes in it.
However, direct hand manipulation of the virtual content is a challenge that requires specific interaction techniques~\cite{billinghurst2021grand}.
This is often achieved using two interaction techniques: \emph{tangible objects} and \emph{virtual hands}~\cite{hertel2021taxonomy}.
\begin{subfigs}{interaction-techniques}{Interaction techniques in \AR. }[
\item Spatial selection of a virtual item of an extended display using a hand-held smartphone~\cite{grubert2015multifi}.
\item Displaying the route to follow as an overlay registered on the \RE~\cite{grubert2017pervasive}.
\item Virtual drawing on a tangible object with a hand-held pen~\cite{roo2017onea}.
\item Simultaneous Localization and Mapping (SLAM) techniques such as KinectFusion~\cite{newcombe2011kinectfusion} reconstruct the \RE in real time and enable the \VE to be registered in it.
]
\subfigsheight{36mm}
\subfig{grubert2015multifi}
\subfig{grubert2017pervasive}
\subfig{roo2017onea}
\subfig{newcombe2011kinectfusion}
\end{subfigs}
\paragraph{Manipulating with Virtual Hands}
In immersive \AR with \enquote{natural} interaction (cf. \textcite{billinghurst2005designing}), selection consists in touching the virtual object with the hands, and manipulation in grasping and moving it with the hands.
These are so-called \enquote{virtual hands}: the virtual hands of the user in the \VE.
@@ -153,11 +187,6 @@ Maglré tout, le principal problème de l'interaction naturelle avec les mains d
This can also be difficult to understand: \textcite{chan2010touching} proposed combining continuous feedback, so that the user can situate how their body is tracked, with discrete feedback to confirm their actions. A visual rendering of the hands is a continuous feedback, while a brief change of color or a haptic cue is a discrete one; this combination, however, has not been evaluated.
Two natural hand interaction techniques are the most common in immersive \AR: virtual hands and tangibles.
When the object is distant, selection can be performed with pointing techniques or gestures (see \cite{hertel2021taxonomy}).
\cite{hilliges2012holodesk}
\cite{piumsomboon2013userdefined} : user-defined gestures for manipulation of virtual objects in AR.
\cite{piumsomboon2014graspshell} : direct hand manipulation of virtual objects in immersive AR vs vocal commands.
@@ -165,7 +194,7 @@ Quand l'objet est lointain, la sélection peut se faire avec des techniques de p
Occlusion issues also arise: virtual objects must always remain visible, either by using a transparent rather than opaque virtual hand, or by displaying their contours when the hand hides them~\cite{piumsomboon2014graspshell}.
\paragraph{Manipulating with Tangibles}
\cite{issartel2016tangible}
\cite{englmeier2020tangible}
@@ -180,6 +209,15 @@ Mais, spécifique à la RA vs RV, le tangible et la main sont visibles, du moins
\subsection{Visual Rendering of Hands in AR}
In VR, as the user is fully immersed in the \VE and cannot see their real hands, it is necessary to represent them virtually.
Virtual hand rendering is also known to influence how an object is grasped in VR~\cite{prachyabrued2014visual,blaga2020too} and AR, or even how real bumps and holes are perceived in VR~\cite{schwind2018touch}, but its effect on the perception of a haptic texture augmentation has not yet been investigated.
It is known that the virtual hand representation has an impact on perception, interaction performance, and preference of users~\cite{prachyabrued2014visual, argelaguet2016role, grubert2018effects, schwind2018touch}.
In a pick-and-place task in VR, \textcite{prachyabrued2014visual} found that the virtual hand representation whose motion was constrained to the surface of the virtual objects performed the worst, while the virtual hand representation following the tracked human hand (thus penetrating the virtual objects) performed the best, even though it was rather disliked.
The authors also observed that the best compromise was a double rendering, showing both the tracked hand and a hand rendering constrained by the virtual environment.
It has also been shown that, compared to a realistic avatar, a skeleton rendering can provide a stronger sense of being in control~\cite{argelaguet2016role} and that a minimalistic fingertip rendering can be more effective in a typing task~\cite{grubert2018effects}.
\fig{prachyabrued2014visual}{Effect of different hand renderings on a pick-and-place task in VR~\cite{prachyabrued2014visual}.}
Mutual visual occlusion between a virtual object and the real hand, \ie hiding the virtual object when the real hand is in front of it and hiding the real hand when it is behind the virtual object, is often presented as natural and realistic, enhancing the blending of real and virtual environments~\cite{piumsomboon2014graspshell, al-kalbani2016analysis}.
In video see-through AR (VST-AR), this could be solved as a masking problem by combining the image of the real world captured by a camera and the generated virtual image~\cite{macedo2023occlusion}.
In OST-AR, this is more difficult because the virtual environment is displayed as a transparent 2D image on top of the 3D real world, which cannot be easily masked~\cite{macedo2023occlusion}.
@@ -187,16 +225,9 @@ Moreover, in VST-AR, the grip aperture and depth positioning of virtual objects
However, this effect has yet to be verified in an OST-AR setup.
An alternative is to render the virtual objects and the hand semi-transparent, so that they are partially visible even when one is occluding the other, \eg the real hand is behind the virtual cube but still visible.
Although perceived as less natural, this seems to be preferred to a mutual visual occlusion in VST-AR~\cite{buchmann2005interaction,ha2014wearhand,piumsomboon2014graspshell} and VR~\cite{vanveldhuizen2021effect}, but has not yet been evaluated in OST-AR.
However, this effect still causes depth conflicts that make it difficult to determine if one's hand is behind or in front of a virtual object, \eg the thumb is in front of the virtual cube, but it appears to be behind it.
In AR, as the real hand of a user is visible but not physically constrained by the virtual environment, adding a visual hand rendering that can physically interact with virtual objects would achieve a similar result to the promising double-hand rendering of \textcite{prachyabrued2014visual}.
Additionally, \textcite{kahl2021investigation} showed that a virtual object overlaying a tangible object in OST-AR can vary in size without worsening the users' experience nor the performance.
This suggests that a visual hand rendering superimposed on the real hand could be helpful, but should not impair users.
@@ -213,3 +244,11 @@ Mais se pose la question de la représentation, qui a montré des effets sur la
\subsection{Conclusion}
\label{ar_conclusion}
\AR systems integrate virtual objects into the visual perception as if they were part of the \RE.
\AR headsets now enable real-time tracking of the head and hands, and high-quality display of virtual content, while being portable and mobile.
They enable highly immersive \AEs that users can explore with a strong sense of the presence of the virtual content.
But without a direct and seamless interaction with the virtual objects using the hands, the coherence of the \AE experience is compromised.
In particular, there is a lack of mutual occlusion and interaction cues between hands and virtual objects in \OST-\AR that could be mitigated by visual rendering of the hand.
A common alternative approach is to use tangible objects as proxies for interaction with virtual objects, but this raises concerns about their number and association with virtual objects, as well as consistency with the visual rendering.
In this context, the use of wearable haptic systems worn on the hand seems to be a promising solution both for improving direct hand manipulation of virtual objects and for coherent visuo-haptic augmentation of touched tangible objects.

View File

@@ -3,9 +3,13 @@
% Answer the following four questions: “Who else has done work with relevance to this work of yours? What did they do? What did they find? And how is your work here different?”
% spatial and temporal integration of visuo-haptic feedback as perceptual cues vs proprioception and real touch sensations
% delocalized : not at the point of contact = difficult to integrate with other perceptual cues ?
%Go back to the main objective "to understand how immersive visual and \WH feedback compare and complement each other in the context of direct hand perception and manipulation with augmented objects" and the two research challenges: "providing plausible and coherent visuo-haptic augmentations, and enabling effective manipulation of the augmented environment."
%Also go back to the \figref[introduction]{visuo-haptic-rv-continuum3} : we present previous work that either did haptic AR (the middle row), or haptic VR with visual AR, or visuo-haptic AR.
% One of the roles of haptic systems is to render virtual interactions and sensations that are \emph{similar and comparable} to those experienced by the haptic sense with real objects, particularly in \v-\VE~\cite{maclean2008it,culbertson2018haptics}. Moreover, a haptic \AR system should \enquote{modulating the feel of a real object by virtual [haptic] feedback}~\cite{jeon2009haptic}, \ie a touch interaction with a real object whose perception is modified by the addition of virtual haptic feedback.
\subsection{Influence of Visual Rendering on Haptic Perception}
\label{visual_haptic_influence}
@@ -21,6 +25,9 @@ Particularly for real textures, it is known that both touch and sight individual
%
Thus, the overall perception can be modified by changing one of the modalities, as shown by \textcite{yanagisawa2015effects}, who altered the perception of roughness, stiffness and friction of some real tactile textures touched by the finger by superimposing different real visual textures using a half-mirror.
% Spring compliance is perceived by combining the sensed force exerted by the spring with the displacement caused by the action (sensed through vision and proprioception). diluca2011effects
% The ability to discriminate whether two stimuli are simultaneous is important to determine whether stimuli should be bound together and form a single multisensory perceptual object. diluca2019perceptual
Likewise, visual textures were combined in VR with various tangible objects to induce a larger set of visuo-haptic material perceptions, in both active touch~\cite{degraen2019enhancing} and passive touch~\cite{gunther2022smooth} contexts.
@@ -29,6 +36,8 @@ A common finding of these studies is that haptic sensations seem to dominate the
\subsubsection{Pseudo-Haptic Feedback}
\label{pseudo_haptic}
% Visual feedback in VR and AR is known to influence haptic perception [13]. The phenomenon of ”visual dominance” was notably observed when estimating the stiffness of virtual objects. L´ecuyer et al. [13] based their ”pseudo-haptic feedback” approach on this notion of visual dominance gaffary2017ar
A few works have also used pseudo-haptic feedback to change the perception of haptic stimuli to create richer feedback by deforming the visual representation of a user input~\cite{ujitoko2021survey}.
For example, different levels of stiffness can be simulated on a grasped virtual object with the same passive haptic device~\cite{achibet2017flexifingers}, or the perceived softness of tangible objects can be altered by superimposing in AR a virtual texture that deforms when pressed by the hand~\cite{punpongsanon2015softar}, or in combination with vibrotactile rendering in VR~\cite{choi2021augmenting}.
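As a generic formulation of this principle, introduced here for illustration and not tied to a specific cited system, pseudo-haptic stiffness manipulates the control-display (C/D) gain $c$ between the real displacement $D$ of the hand and the displayed displacement $c\,D$ of the virtual object, so that the stiffness inferred visually following \eqref{stiffness} becomes:
\begin{equation}
\hat{k} = \frac{F}{c\,D}
\end{equation}
Reducing $c$ makes the object visually move less for the same hand motion and thus appear stiffer, even though the haptic feedback $F$ is unchanged.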
@@ -51,14 +60,56 @@ Conversely, as discussed by \textcite{ujitoko2021survey} in their review, a co-l
Even before manipulating a visual representation to induce a haptic sensation, shifts and latencies between user input and co-localised visuo-haptic feedback can be experienced differently in AR and VR, which we aim to investigate in this work.
\subsubsection{Perception of Visuo-Haptic Rendering in AR and VR}
\label{AR_vs_VR}
While a large body of literature has investigated the differences between \AR and \VR in visual perception, visuo-haptic perception has received far less attention.
Some studies have nevertheless investigated the visuo-haptic perception of virtual objects in \AR and \VR.
They have shown how the latency of the visual rendering of an object with haptic feedback, or the type of environment (\VE or \RE), can affect the perception of an identical haptic rendering.
There are indeed inherent and unavoidable latencies in the visual and haptic rendering of virtual objects, and the visual and haptic feedback may therefore not appear simultaneous.
In an immersive \VST-\AR setup, \textcite{knorlein2009influence} rendered a virtual piston using force-feedback haptics that participants pressed directly with their hand (see \figref{visuo-haptic-stiffness}).
In a \TAFC task, participants pressed two pistons and indicated which was stiffer.
One had a reference stiffness but an additional visual or haptic delay, while the other varied with a comparison stiffness but had no delay.\footnote{Participants were not told about the delays and stiffness tested, nor which piston was the reference or comparison. The order of the pistons (which one was pressed first) was also randomized.}%
Adding a visual delay increased the perceived stiffness of the reference piston, while adding a haptic delay decreased it, and adding both delays cancelled each other out (see \figref{knorlein2009influence_2}).
\begin{subfigs}{visuo-haptic-stiffness}{Perception of haptic stiffness in \VST-\AR~\cite{knorlein2009influence}. }[
\item Participant pressing a virtual piston rendered by a force-feedback device with their hand.
\item Proportion of comparison piston perceived as stiffer than reference piston (vertical axis) as a function of the comparison stiffness (horizontal axis) and visual and haptic delays of the reference (colors).
]
\subfig[.44]{knorlein2009influence_1}
\subfig[.55]{knorlein2009influence_2}
\end{subfigs}
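Although the analysis is not detailed here, such \TAFC proportions are commonly summarized by fitting a psychometric function, \eg a cumulative Gaussian (a generic formulation given for illustration):
\begin{equation}
\Psi(k_c) = \Phi\!\left(\frac{k_c - \mathrm{PSE}}{\sigma}\right)
\end{equation}
where $k_c$ is the comparison stiffness, the \PSE is the comparison stiffness judged stiffer than the reference half of the time, and the \JND can be taken as the increase in $k_c$ needed to go from 50\% to 75\% of \enquote{stiffer} answers (about $0.67\sigma$ for this choice of function).
A shift of the \PSE away from the reference stiffness then quantifies the bias introduced by the visual or haptic delay.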
%explained how these delays affected the integration of the visual and haptic perceptual cues of stiffness.
The stiffness $k$ of the piston is indeed estimated by both sight and proprioception as the ratio of the exerted force $F$ and the displacement $D$ of the piston, following \eqref{stiffness}.
A delay $\Delta t$ between the two cues, however, modifies the equation to:
\begin{equation}
\label{eq:stiffness_delay}
k = \frac{F(t_A)}{D(t_B)}
\end{equation}
where $t_B = t_A + \Delta t$.
Therefore, a visual delay in the displacement (negative $\Delta t$) increases the perceived stiffness $k$, while a haptic delay in the force (positive $\Delta t$) decreases it~\cite{diluca2011effects}.
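As a minimal illustration, assuming for simplicity linear ramps $F(t) = \alpha t$ and $D(t) = \beta t$ during the press (an assumption made here, not in the cited studies), \eqref{stiffness_delay} becomes:
\begin{equation}
k = \frac{\alpha\,t_A}{\beta\,(t_A + \Delta t)} = \frac{\alpha}{\beta}\,\frac{t_A}{t_A + \Delta t}
\end{equation}
which is above the actual stiffness $\alpha / \beta$ for a visual delay ($\Delta t < 0$) and below it for a haptic delay ($\Delta t > 0$), with a bias that fades as the press goes on ($t_A$ grows).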
In a similar \TAFC user study, participants compared perceived stiffness of virtual pistons in \OST-\AR and \VR~\cite{gaffary2017ar}.
However, the force-feedback device and the participant's hand were not visible (see \figref{gaffary2017ar}).
The reference piston was judged to be stiffer when seen in \VR than in \AR, without participants noticing this difference, and more force was exerted on the piston overall in \VR.
This suggests that the haptic stiffness of virtual objects feels \enquote{softer} in an \AE than in a full \VE.
%Two differences that could be worth investigating with the two previous studies are the type of \AR (visuo or optical) and to see the hand touching the virtual object.
\begin{subfigs}{gaffary2017ar}{Perception of haptic stiffness in \OST-\AR \vs \VR~\cite{gaffary2017ar}. }[
\item Experimental setup: a virtual piston was pressed with a force-feedback device placed to the side of the participant.
\item View of the virtual piston seen in front of the participant in \OST-\AR and
\item in \VR.
]
\subfig[0.35]{gaffary2017ar_1}
\subfig[0.3]{gaffary2017ar_3}
\subfig[0.3]{gaffary2017ar_4}
\end{subfigs}
Finally, \textcite{diluca2019perceptual} investigated perceived simultaneity of visuo-haptic feedback in \VR.
In a user study, participants touched a virtual cube with a virtual hand: the contact was rendered both with a vibrotactile piezoelectric device on the fingertip and with a visual change in the cube color.
The visuo-haptic simultaneity was varied by either adding a visual delay or triggering the haptic feedback earlier.
No participant (out of 19) was able to detect a \qty{50}{\ms} visual lag and a \qty{15}{\ms} haptic lead and only half of them detected a \qty{100}{\ms} visual lag and a \qty{70}{\ms} haptic lead.
\subsection{Wearable Haptics for AR}
@@ -132,21 +183,20 @@ These two studies were also conducted in non-immersive setups, where users looke
\subfig{maisto2017evaluation}
\end{subfigs}
\subsubsection{Wrist Bracelet Devices}
With their \enquote{Tactile And Squeeze Bracelet Interface} (Tasbi), already mentioned in \secref{belt_actuators}, \textcite{pezent2019tasbi} and \textcite{pezent2022design} explored the use of a wrist-worn bracelet actuator.
It is capable of providing a uniform pressure sensation (up to \qty{15}{\N} and \qty{10}{\Hz}) and vibration with six \LRAs (\qtyrange{150}{200}{\Hz} bandwidth).
Although the device has not been tested in \AR, a user study was conducted in \VR to compare the perception of visuo-haptic stiffness rendering~\cite{pezent2019tasbi}.
In a \TAFC task, participants pressed a virtual button with different levels of stiffness via a virtual hand constrained by the \VE (see \figref{pezent2019tasbi_2}).
A higher visual stiffness required a larger physical displacement to press the button (C/D ratio, see \secref{pseudo_haptic}), while the haptic stiffness controlled the rate of the pressure feedback when pressing.
When the visual and haptic stiffness were coherent or when only the haptic stiffness changed, participants easily discriminated two buttons with different stiffness levels (see \figref{pezent2019tasbi_3}).
However, if only the visual stiffness changed, participants were not able to discriminate the different stiffness levels (see \figref{pezent2019tasbi_4}).
This suggests that in \VR, the haptic pressure is a more important perceptual cue than the visual displacement to render stiffness.
A short vibration (\qty{25}{\ms}, \qty{175}{\Hz} square wave) was also rendered when contacting the button, but was kept constant across all conditions: it may have affected the overall perception when only the visual stiffness changed.
\begin{subfigs}{pezent2019tasbi}{Visuo-haptic stiffness rendering of a virtual button in \VR with the Tasbi bracelet. }[
\item The \VE seen by the user: the virtual hand (in beige) is constrained by the virtual button. The displacement is proportional to the visual stiffness. The real hand (in green) is hidden by the \VE.
\item When the rendered visual and haptic stiffness are coherent (in purple) or only the haptic stiffness changes (in blue), participants easily discriminated the different levels.
\item When varying only the visual stiffness (in red) but keeping the haptic stiffness constant, participants were not able to discriminate the different stiffness levels.
]
@@ -156,6 +206,7 @@ A short vibration (\qty{25}{\ms} \qty{175}{\Hz} square-wave) was also rendered w
\subfig{pezent2019tasbi_4}
\end{subfigs}
\subsection{Conclusion}
\label{visuo_haptic_conclusion}

View File

@@ -1,2 +1,7 @@
\chapter{Conclusion}
\mainlabel{conclusion}
% systematic exploration of the parameter space of the haptic rendering to determine the most important parameters their influence on the perception
% measure the difference in sensitivity to the haptic feedback and how much it affects the perception of the object properties
% design, implement and validate procedures to automatically calibrate the haptic feedback to the user's perception in accordance to what it has been designed to represent
% + let user free to easily adjust (eg can't let adjust whole spectrum of vibrotactile, reduce to two or three dimensions with sliders using MDS)

View File

@@ -38,6 +38,7 @@
\let\AE\undefined
\let\v\undefined
\acronym[TAFC]{2AFC}{two-alternative forced choice}
\acronym[TwoD]{2D}{two-dimensional}
\acronym[ThreeD]{3D}{three-dimensional}
\acronym{AE}{augmented environment}
@@ -48,11 +49,13 @@
\acronym{ERM}{eccentric rotating mass}
\acronym{h}{haptic}
\acronym{HMD}{head-mounted display}
\acronym{JND}{just noticeable difference}
\acronym{LRA}{linear resonant actuator}
\acronym{MR}{mixed reality}
\acronym{OST}{optical see-through}
\acronym{PI}{place illusion}
\acronym[PSI]{Psi}{plausibility}
\acronym{PSE}{point of subjective equality}
\acronym{RE}{real environment}
\acronym{RV}{reality-virtuality}
\acronym{SoE}{sense of embodiment}

View File

@@ -33,6 +33,7 @@
% Images
% example: \fig[1]{universe}{The Universe}[Additional caption text, not shown in the list of figures]
% reference later with: \figref{universe}
% 1 = \linewidth = 150 mm
\RenewDocumentCommand{\fig}{O{1} O{htbp} m m O{}}{% #1 = width, #2 = position, #3 = filename, #4 = caption, #5 = additional caption
\begin{figure}[#2]
\centering%

View File

@@ -1165,6 +1165,28 @@
doi = {10/gpxpgk}
}
@inproceedings{grubert2015multifi,
title = {{{MultiFi}}: {{Multi Fidelity Interaction}} with {{Displays On}} and {{Around}} the {{Body}}},
shorttitle = {{{MultiFi}}},
booktitle = {{{ACM Conference}} on {{Human Factors}} in {{Computing Systems}}},
author = {Grubert, Jens and Heinisch, Matthias and Quigley, Aaron and Schmalstieg, Dieter},
date = {2015-04-18},
pages = {3933--3942},
doi = {10/gjzbwr}
}
@article{grubert2017pervasive,
title = {Towards {{Pervasive Augmented Reality}}: {{Context-Awareness}} in {{Augmented Reality}}},
shorttitle = {Towards {{Pervasive Augmented Reality}}},
author = {Grubert, Jens and Langlotz, Tobias and Zollmann, Stefanie and Regenbrecht, Holger},
date = {2017},
journaltitle = {IEEE Transactions on Visualization and Computer Graphics},
volume = {23},
number = {6},
pages = {1706--1724},
doi = {10/f97hkg}
}
@inproceedings{grubert2018effects,
title = {Effects of {{Hand Representations}} for {{Typing}} in {{Virtual Reality}}},
booktitle = {{{IEEE Virtual Reality}}},
@@ -1462,6 +1484,16 @@
doi = {10/gr2fbm}
}
@inproceedings{jacob2008realitybased,
title = {Reality-Based Interaction: A Framework for Post-{{WIMP}} Interfaces},
shorttitle = {Reality-Based Interaction},
booktitle = {{{SIGCHI Conference}} on {{Human Factors}} in {{Computing Systems}}},
author = {Jacob, Robert J.K. and Girouard, Audrey and Hirshfield, Leanne M. and Horn, Michael S. and Shaer, Orit and Solovey, Erin Treacy and Zigelbaum, Jamie},
date = {2008},
pages = {201--210},
doi = {10/bmscgf}
}
@article{jeon2009haptic,
title = {Haptic {{Augmented Reality}}: {{Taxonomy}} and an {{Example}} of {{Stiffness Modulation}}},
shorttitle = {Haptic {{Augmented Reality}}},
@@ -2195,6 +2227,16 @@
doi = {10/gfz8jk}
}
@inproceedings{newcombe2011kinectfusion,
title = {{{KinectFusion}}: {{Real-time}} Dense Surface Mapping and Tracking},
shorttitle = {{{KinectFusion}}},
booktitle = {{{IEEE International Symposium}} on {{Mixed}} and {{Augmented Reality}}},
author = {Newcombe, Richard A. and Fitzgibbon, Andrew and Izadi, Shahram and Hilliges, Otmar and Molyneaux, David and Kim, David and Davison, Andrew J. and Kohi, Pushmeet and Shotton, Jamie and Hodges, Steve},
date = {2011},
pages = {127--136},
doi = {10/dhvm3p}
}
@article{norman2004visual,
title = {The Visual and Haptic Perception of Natural Object Shape},
author = {Norman, J Farley and Norman, Hideko F and Clayton, Anna Marie and Lianekhammy, Joann},
@@ -2635,6 +2677,16 @@
doi = {10/ggrd6q}
}
@inproceedings{roo2017onea,
title = {One {{Reality}}: {{Augmenting How}} the {{Physical World}} Is {{Experienced}} by Combining {{Multiple Mixed Reality Modalities}}},
shorttitle = {One {{Reality}}},
booktitle = {{{ACM Symposium}} on {{User Interface Software}} and {{Technology}}},
author = {Roo, Joan Sol and Hachet, Martin},
date = {2017},
pages = {787--795},
doi = {10/gftg7q}
}
@inproceedings{sabnis2023haptic,
title = {Haptic {{Servos}}: {{Self-Contained Vibrotactile Rendering System}} for {{Creating}} or {{Augmenting Material Experiences}}},
shorttitle = {Haptic {{Servos}}},