2024-09-25 17:25:13 +02:00
parent d6c8184df8
commit 0a21557052
16 changed files with 103 additions and 96 deletions

View File

@@ -188,7 +188,7 @@ Our contributions in these two axes are summarized in \figref{contributions}.
The second axis focuses on \textbf{(II)} improving the manipulation of \VOs with the bare hand using visuo-haptic augmentations of the hand as interaction feedback. The second axis focuses on \textbf{(II)} improving the manipulation of \VOs with the bare hand using visuo-haptic augmentations of the hand as interaction feedback.
] ]
\subsectionstarbookmark{Modifying the Perception of Tangible Surfaces with Visuo-Haptic Texture Augmentations} \subsectionstarbookmark{Axis I: Modifying the Perception of Tangible Surfaces with Visuo-Haptic Texture Augmentations}
Wearable haptic devices have proven to be effective in modifying the perception of a touched tangible surface, without modifying the tangible or covering the fingertip, forming a haptic \AE \cite{bau2012revel,detinguy2018enhancing,salazar2020altering}. Wearable haptic devices have proven to be effective in modifying the perception of a touched tangible surface, without modifying the tangible or covering the fingertip, forming a haptic \AE \cite{bau2012revel,detinguy2018enhancing,salazar2020altering}.
%It is achieved by placing the haptic actuator close to the fingertip, to let it free to touch the surface, and rendering tactile stimuli timely synchronised with the finger movement. %It is achieved by placing the haptic actuator close to the fingertip, to let it free to touch the surface, and rendering tactile stimuli timely synchronised with the finger movement.
@@ -210,64 +210,63 @@ Finally, some visuo-haptic texture databases have been modeled from real texture
However, the rendering of these textures in an immersive and natural visuo-haptic \AR using wearable haptics remains to be investigated. However, the rendering of these textures in an immersive and natural visuo-haptic \AR using wearable haptics remains to be investigated.
Our third objective is to evaluate the perception of simultaneous and co-localized visuo-haptic texture augmentation of tangible surfaces in \AR, directly touched by the hand, and to understand to what extent each sensory modality contributes to the overall perception of the augmented texture. Our third objective is to evaluate the perception of simultaneous and co-localized visuo-haptic texture augmentation of tangible surfaces in \AR, directly touched by the hand, and to understand to what extent each sensory modality contributes to the overall perception of the augmented texture.
\subsectionstarbookmark{Improving Virtual Object Manipulation with Visuo-Haptic Augmentations of the Hand} \subsectionstarbookmark{Axis II: Improving Virtual Object Manipulation with Visuo-Haptic Augmentations of the Hand}
In immersive and wearable visuo-haptic \AR, the hand is free to touch and interact seamlessly with real, augmented, and virtual objects, and one can expect natural and direct contact and manipulation of \VOs with the bare hand. In immersive and wearable visuo-haptic \AR, the hand is free to touch and interact seamlessly with real, augmented, and virtual objects, and one can expect natural and direct contact and manipulation of \VOs with the bare hand.
However, the intangibility of the visual \VE, the many display limitations of current visual \AR systems and wearable haptic devices, and the potential discrepancies between these two types of feedback can make the manipulation of \VOs particularly challenging. However, the intangibility of the visual \VE, the display limitations of current visual \OST-\AR systems and the inherent spatial and temporal discrepancies between the user's hand actions and the visual feedback in the \VE can make the interaction with \VOs particularly challenging.
%However, the intangibility of the virtual visual environment, the lack of kinesthetic feedback of wearable haptics, the visual rendering limitations of current \AR systems, as well as the spatial and temporal discrepancies between the real environment, the visual feedback, and the haptic feedback, can make the interaction with \VOs with bare hands particularly challenging. %However, the intangibility of the virtual visual environment, the lack of kinesthetic feedback of wearable haptics, the visual rendering limitations of current \AR systems, as well as the spatial and temporal discrepancies between the real environment, the visual feedback, and the haptic feedback, can make the interaction with \VOs with bare hands particularly challenging.
Still two types of sensory feedback are known to improve such direct \VO manipulation, but they have not been studied in combination in immersive visual \AE: visual rendering of the hand \cite{piumsomboon2014graspshell,prachyabrued2014visual} and contact rendering with wearable haptics \cite{lopes2018adding,teng2021touch}. Two types of sensory feedback are known to improve such direct \VO manipulation, but they have not been properly investigated together in immersive \AR: visual rendering of the hand \cite{piumsomboon2014graspshell,prachyabrued2014visual} and delocalized haptic rendering \cite{lopes2018adding,teng2021touch}.
For this second axis of research, we propose to design and evaluate the role of visuo-haptic augmentations of the hand as interaction feedback with \VOs. For this second axis of research, we propose to design and evaluate \textbf{the role of visuo-haptic augmentations of the hand as interaction feedback with \VOs in \OST-\AR}.
We consider (1) the effect of different visual augmentations of the hand as \AR avatars and (2) the effect of combination of different visuo-haptic augmentations of the hand. We consider the effect of (1) the visual rendering as a hand augmentation and (2) the combination of different visuo-haptic augmentations of the hand.
First, the visual rendering of the virtual hand is a key element for interacting and manipulating \VOs in \VR \cite{prachyabrued2014visual,grubert2018effects}. First, the visual rendering of the virtual hand is a key element for interacting and manipulating \VOs in \VR \cite{prachyabrued2014visual,grubert2018effects}.
A few works have also investigated the visual rendering of the virtual hand in \AR, from simulating mutual occlusions between the hand and \VOs \cite{piumsomboon2014graspshell,al-kalbani2016analysis} to displaying the virtual hand as an avatar overlay \cite{blaga2017usability,yoon2020evaluating}, augmenting the real hand. A few works have also investigated the visual rendering of the virtual hand in \AR, from simulating mutual occlusions between the hand and \VOs \cite{piumsomboon2014graspshell,al-kalbani2016analysis} to displaying the virtual hand as an avatar overlay \cite{blaga2017usability,yoon2020evaluating}, augmenting the real hand.
But visual \AR has significant perceptual differences from \VR due to the visibility of the real hand and environment, and these visual hand augmentations have not been evaluated in the context of \VO manipulation. But \OST-\AR has significant perceptual differences from \VR due to the visibility of the real hand and environment, and these visual hand augmentations have not been evaluated in the context of \VO manipulation with the bare hand.
Thus, our fourth objective is to evaluate and compare the effect of different visual hand augmentations on direct manipulation of \VOs in \AR. Thus, our fourth objective is to \textbf{investigate the visual rendering as hand augmentation for direct manipulation of \VOs in \OST-\AR}.
Finally, as described above, wearable haptics for visual \AR rely on moving the haptic actuator away from the fingertips to not impair the hand movements, sensations, and interactions with the \RE. Second, as described above, wearable haptics for visual \AR rely on moving the haptic actuator away from the fingertips to not impair the hand movements, sensations, and interactions with the \RE.
Previous works have shown that wearable haptics that provide feedback on the hand manipulation with \VOs in \AR can significantly improve the user performance and experience \cite{maisto2017evaluation,meli2018combining}. Previous works have shown that wearable haptics that provide feedback on the hand manipulation with \VOs in \AR can significantly improve the user performance and experience \cite{maisto2017evaluation,meli2018combining}.
However, it is unclear which positioning of the actuator is the most beneficial, nor how a haptic augmentation of the hand compares with or complements a visual augmentation of the hand. However, it is unclear which positioning of the actuator is the most beneficial, nor how a haptic augmentation of the hand compares with or complements a visual augmentation of the hand.
Our last objective is to investigate the role of visuo-haptic augmentations of the hand in manipulating \VOs directly with the hand in \AR. Our last objective is to \textbf{investigate the role of the visuo-haptic rendering of the hand when manipulating \VOs in \OST-\AR}.
\section{Thesis Overview} \section{Thesis Overview}
\label{thesis_overview} \label{thesis_overview}
This thesis is divided into four parts. This thesis is divided into four parts.
In \partref{context}, we describe the context and background of our research, within which this first current \textit{Introduction} chapter we present the research challenges, and the objectives, approach, and contributions of this thesis. In \textbf{\partref{context}}, we describe the context and background of our research. Within it, this first \textit{Introduction} chapter presents the research challenges, objectives, approach, and contributions of this thesis.
In \chapref{related_work}, we then review previous work on the perception and manipulation with virtual and augmented objects, directly with the hand, using either wearable haptics, \AR, or their combination.
In \textbf{\chapref{related_work}}, we then review previous work on the perception and manipulation of virtual and augmented objects directly with the hand, using either wearable haptics, \AR, or their combination.
First, we overview how the hand perceives and manipulates real everyday objects. First, we overview how the hand perceives and manipulates real everyday objects.
Second, we present wearable haptics and haptic augmentations of roughness and hardness of real objects. Second, we present wearable haptics and haptic augmentations of roughness and hardness of real objects.
Third, we introduce \AR, and how \VOs can be manipulated directly with the hand. Third, we introduce \AR, and how \VOs can be manipulated directly with the hand.
Finally, we describe how multimodal visual and haptic feedback has been combined in \AR to enhance perception and interaction with the hand. Finally, we describe how multimodal visual and haptic feedback has been combined in \AR to enhance perception and interaction with the hand.
Next, we address each of our two research axes in a dedicated part.
\bigskip We then address each of our two research axes in a dedicated part.
In \partref{perception}, we describe our contributions to the first axis of research, augmenting the visuo-haptic texture perception of tangible surfaces. \noindentskip
In \textbf{\partref{perception}}, we describe our contributions to the first axis of research, augmenting the visuo-haptic texture perception of tangible surfaces.
We evaluate how the visual rendering of the hand (real or virtual), the environment (\AR or \VR) and the textures (displayed or hidden) affect the roughness perception of virtual vibrotactile textures rendered on real surfaces and touched directly with the index finger. We evaluate how the visual rendering of the hand (real or virtual), the environment (\AR or \VR) and the textures (displayed or hidden) affect the roughness perception of virtual vibrotactile textures rendered on real surfaces and touched directly with the index finger.
In \chapref{vhar_system}, we detail a system for rendering visuo-haptic virtual textures that augment tangible surfaces using an immersive \AR/\VR headset and a wearable vibrotactile device. In \textbf{\chapref{vhar_system}}, we detail a system for rendering visuo-haptic virtual textures that augment tangible surfaces using an immersive \AR/\VR headset and a wearable vibrotactile device.
The haptic textures are rendered as a real-time vibrotactile signal representing a grating texture, which is provided to the middle phalanx of the index finger touching the texture using a voice-coil actuator. The haptic textures are rendered as a real-time vibrotactile signal representing a grating texture, which is provided to the middle phalanx of the index finger touching the texture using a voice-coil actuator.
The tracking of the real hand and environment is done using a marker-based technique, and the visual rendering of their virtual counterparts is done using the immersive \OST-\AR headset Microsoft HoloLens~2. The tracking of the real hand and environment is done using a marker-based technique, and the visual rendering of their virtual counterparts is done using the immersive \OST-\AR headset Microsoft HoloLens~2.
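For illustration, here is a minimal sketch of such a grating signal, assuming a sinusoidal profile (a common model in vibrotactile texture rendering, not necessarily the exact implementation detailed in \chapref{vhar_system}). For a virtual grating of spatial wavelength $\lambda$ explored at finger position $x(t)$, the actuator command is
\[
a(t) = A \sin\!\left(\frac{2\pi\, x(t)}{\lambda}\right),
\]
whose instantaneous frequency $f(t) = v(t)/\lambda$, with $v(t)$ the finger speed, scales with the exploration speed, as when sliding a finger across a real ridged surface.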
In \chapref{xr_perception}, we investigate, in a user study, how different the perception of virtual haptic textures is in \AR \vs \VR and when touched by a virtual hand \vs one's own hand. In \textbf{\chapref{xr_perception}}, we investigate, in a user study, how different the perception of virtual haptic textures is in \AR \vs \VR and when touched by a virtual hand \vs one's own hand.
We use psychophysical methods to measure the users' roughness perception of the virtual textures, and extensive questionnaires to understand how this perception is affected by the visual rendering of the hand and the environment. We use psychophysical methods to measure the users' roughness perception of the virtual textures, and extensive questionnaires to understand how this perception is affected by the visual rendering of the hand and the environment.
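As a sketch of this psychophysical analysis, assuming the standard cumulative-Gaussian formulation (an illustration, not necessarily the exact fitting procedure used in this chapter): the proportion of trials in which a comparison texture of parameter $x$ is judged rougher than the reference is fitted with
\[
\Psi(x) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{x - \mu}{\sigma\sqrt{2}}\right)\right],
\]
where $\mu$ estimates the point of subjective equality (PSE) and the just-noticeable difference (JND) follows from the slope, \eg $\mathrm{JND} = \sigma\,\Phi^{-1}(0.75) \approx 0.67\,\sigma$.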
In \chapref{ar_textures}, we evaluate the perception of visuo-haptic texture augmentations, touched directly with one's own hand in \AR. In \textbf{\chapref{ar_textures}}, we evaluate the perception of visuo-haptic texture augmentations, touched directly with one's own hand in \AR.
The virtual textures are paired visual and tactile models of real surfaces \cite{culbertson2014one} that we render as visual and haptic overlays on the touched augmented surfaces. The virtual textures are paired visual and tactile models of real surfaces \cite{culbertson2014one} that we render as visual and haptic overlays on the touched augmented surfaces.
Our objective is to assess the perceived realism, coherence and roughness of the combination of nine representative visuo-haptic texture pairs. Our objective is to assess the perceived realism, coherence and roughness of the combination of nine representative visuo-haptic texture pairs.
\bigskip \noindentskip
In \textbf{\partref{manipulation}}, we describe our contributions to the second axis of research, improving direct hand manipulation of \VOs with visuo-haptic augmentations of the hand.
We explore how visual and haptic augmentations of the hand, and their combination, used as interaction feedback with \VOs in \OST-\AR, can improve such manipulations.
In \partref{manipulation}, we describe our contributions to the second axis of research, improving direct hand manipulation of \VOs with visuo-haptic augmentations of the hand. In \textbf{\chapref{visual_hand}}, we conduct a user study to investigate the effect of six visual renderings as hand augmentations for the direct manipulation of \VOs, selected among the most popular hand renderings in the \AR literature.
We evaluate how the visual and haptic augmentation, and their combination, of the hand as feedback of direct manipulation with \VOs can improve such manipulations. Using the \OST-\AR headset Microsoft HoloLens~2, we evaluate the user performance and experience in two representative manipulation tasks: push-and-slide and grasp-and-place a \VO directly with the hand.
In \chapref{visual_hand}, we explore in a user study the effect of six visual hand augmentations that provide contact feedback with the \VO, as a set of the most popular hand renderings in the \AR literature. In \textbf{\chapref{visuo_haptic_hand}}, we evaluate in a user study two vibrotactile contact techniques, provided at four different locations on the real hand, as haptic rendering of the hand-object interaction.
Using the \OST-\AR headset Microsoft HoloLens~2, the user performance and experience are evaluated in two representative manipulation tasks, \ie push-and-slide and grasp-and-place of a \VO directly with the hand.
In \chapref{visuo_haptic_hand}, we evaluate in a user study two vibrotactile contact techniques, provided at four different locations on the real hand, as haptic rendering of the hand-object interaction.
They are compared to the two most representative visual hand augmentations from the previous study, and the user performance and experience are evaluated within the same \OST-\AR setup and manipulation tasks. They are compared to the two most representative visual hand augmentations from the previous study, and the user performance and experience are evaluated within the same \OST-\AR setup and manipulation tasks.
\bigskip \noindentskip
In \textbf{\partref{part:conclusion}}, we conclude this thesis and discuss short-term future work and long-term perspectives for each of our contributions and research axes.
In \partref{part:conclusion}, we conclude this thesis and discusse short-term future work and long-term perspectives for each of our contributions and research axes.

View File

@@ -184,7 +184,7 @@ The \emph{system control tasks} are changes to the system state through commands
\end{subfigs} \end{subfigs}
\subsubsection{Reducing the Real-Virtual Gap} \subsubsection{Reducing the Real-Virtual Gap}
\label{real-virtual-gap} \label{real_virtual_gap}
In \AR and \VR, the state of the system is displayed to the user as a \ThreeD spatial \VE. In \AR and \VR, the state of the system is displayed to the user as a \ThreeD spatial \VE.
In an immersive and portable \AR system, this \VE is experienced at a 1:1 scale and as an integral part of the \RE. In an immersive and portable \AR system, this \VE is experienced at a 1:1 scale and as an integral part of the \RE.
@@ -216,7 +216,7 @@ In a pick-and-place task with tangibles of different shapes, a difference in siz
This suggests the feasibility of using simplified tangibles in \AR whose spatial properties (\secref{object_properties}) abstract those of the \VOs. This suggests the feasibility of using simplified tangibles in \AR whose spatial properties (\secref{object_properties}) abstract those of the \VOs.
Similarly, in \secref{tactile_rendering} we described how a material property (\secref{object_properties}) of a touched tangible can be modified using wearable haptic devices \cite{detinguy2018enhancing,salazar2020altering}: It could be used to render coherent visuo-haptic material perceptions directly touched with the hand in \AR. Similarly, in \secref{tactile_rendering} we described how a material property (\secref{object_properties}) of a touched tangible can be modified using wearable haptic devices \cite{detinguy2018enhancing,salazar2020altering}: It could be used to render coherent visuo-haptic material perceptions directly touched with the hand in \AR.
\begin{subfigs}{ar_applications}{Manipulating \VOs with tangibles. }[][ \begin{subfigs}{ar_tangibles}{Manipulating \VOs with tangibles. }[][
\item Ubi-Touch paired the movements and screw interaction of a virtual drill with a real vaporizer held by the user \cite{jain2023ubitouch}. \item Ubi-Touch paired the movements and screw interaction of a virtual drill with a real vaporizer held by the user \cite{jain2023ubitouch}.
\item A tangible cube that can be moved into the \VE and used to grasp and manipulate \VOs \cite{issartel2016tangible}. \item A tangible cube that can be moved into the \VE and used to grasp and manipulate \VOs \cite{issartel2016tangible}.
\item Size and \item Size and
@@ -254,7 +254,7 @@ More advanced techniques simulate the friction phenomena \cite{talvas2013godfing
\item A fingertip tracking that allows to select a \VO by opening the hand \cite{lee2007handy}. \item A fingertip tracking that allows to select a \VO by opening the hand \cite{lee2007handy}.
\item Physics-based hand-object manipulation with a virtual hand made of many small rigid-body spheres \cite{hilliges2012holodesk}. \item Physics-based hand-object manipulation with a virtual hand made of many small rigid-body spheres \cite{hilliges2012holodesk}.
\item Grasping a \VO through gestures when the fingers are detected as opposing on it \cite{piumsomboon2013userdefined}. \item Grasping a \VO through gestures when the fingers are detected as opposing on it \cite{piumsomboon2013userdefined}.
\item A kinematic hand model with rigid-body phalanges (in beige) taht follows the real tracked hand (in green) but kept physically constrained to the \VO. Applied forces are shown as red arrows \cite{borst2006spring}. \item A kinematic hand model with rigid-body phalanges (in beige) that follows the real tracked hand (in green) but kept physically constrained to the \VO. Applied forces are shown as red arrows \cite{borst2006spring}.
] ]
\subfigsheight{37mm} \subfigsheight{37mm}
\subfig{lee2007handy} \subfig{lee2007handy}

View File

@@ -1,47 +1,22 @@
Augmented reality (AR) integrates virtual content into our real-world surroundings, giving the illusion of one unique environment and promising natural and seamless interactions with real and virtual objects. \noindent Touching, grasping and manipulating \VOs are fundamental interactions in \AR (\secref[related_work]{ve_tasks}) and essential for many of its applications (\secref[related_work]{ar_applications}).
% The most common current \AR systems, in the form of portable and immersive \OST-\AR headsets \cite{hertel2021taxonomy}, allow real-time hand tracking and direct interaction with \VOs with bare hands (\secref[related_work]{real_virtual_gap}).
Virtual object manipulation is particularly critical for useful and effective \AR usage, such as in medical applications, training, or entertainment \cite{laviolajr20173d, kim2018revisiting}. Manipulation of \VOs is achieved using a virtual hand interaction technique that represents the user's hand in the \VE and simulates interaction with \VOs (\secref[related_work]{ar_virtual_hands}).
% However, direct hand manipulation is still challenging due to the intangibility of the \VE, the lack of mutual occlusion between the hand and the \VO in \OST-\AR (\secref[related_work]{ar_displays}), and the inherent delays between the user's hand and the result of the interaction simulation (\secref[related_work]{ar_virtual_hands}).
Hand tracking technologies \cite{xiao2018mrtouch}, grasping techniques \cite{holl2018efficient}, and real-time physics engines permit users to directly manipulate virtual objects with their bare hands as if they were real \cite{piumsomboon2014graspshell}, without requiring controllers \cite{krichenbauer2018augmented}, gloves \cite{prachyabrued2014visual}, or predefined gesture techniques \cite{piumsomboon2013userdefined, ha2014wearhand}.
%
Optical see-through \AR (OST-AR) head-mounted displays (HMDs), such as the Microsoft HoloLens 2 or the Magic Leap, are particularly suited for this type of direct hand interaction \cite{kim2018revisiting}.
However, there are still several haptic and visual limitations that affect manipulation in OST-AR, degrading the user experience. In this chapter, we investigate the \textbf{visual rendering as hand augmentation} for direct manipulation of \VOs in \OST-\AR.
% To this end, we selected in the literature and compared the most popular visual hand renderings used to interact with \VOs in \AR.
For example, it is difficult to estimate the position of one's hand in relation to a virtual content because mutual occlusion between the hand and the virtual object is often lacking \cite{macedo2023occlusion}, the depth of virtual content is underestimated \cite{diaz2017designing, peillard2019studying}, and hand tracking still has a noticeable latency \cite{xiao2018mrtouch}. The virtual hand is \textbf{displayed superimposed} on the user's hand with these visual renderings, providing \textbf{feedback on the tracking} of the real hand, as shown in \figref{hands}.
% The movement of the virtual hand is also \textbf{constrained to the surface} of the \VO, providing an additional \textbf{feedback on the interaction} with the \VO.
Similarly, it is challenging to ensure confident and realistic contact with a virtual object due to the lack of haptic feedback and the intangibility of the virtual environment, which of course cannot apply physical constraints on the hand \cite{maisto2017evaluation, meli2018combining, lopes2018adding, teng2021touch}. We \textbf{evaluate in a user study}, using the \OST-\AR headset Microsoft HoloLens~2, the effect of six visual hand renderings on the user performance and experience in two representative manipulation tasks: push-and-slide and grasp-and-place a \VO directly with the hand.
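One common way to formalize such a virtual hand constrained by the \VE is the proxy (or god-object) formulation, given here as an assumed illustration rather than the exact simulation used in this work: the displayed hand configuration is the closest non-penetrating one to the tracked hand,
\[
\mathbf{p}_{\mathrm{proxy}} = \operatorname*{arg\,min}_{\mathbf{p} \,\notin\, \mathcal{O}} \;\lVert \mathbf{p} - \mathbf{p}_{\mathrm{hand}} \rVert,
\]
where $\mathcal{O}$ is the volume occupied by the \VO, so the rendered hand stays on the surface of the \VO while the real hand may pass through it.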
%
These limitations also make it difficult to confidently move a grasped object towards a target \cite{maisto2017evaluation, meli2018combining}.
To address these haptic and visual limitations, we investigate two types of sensory feedback that are known to improve virtual interactions with hands, but have not been studied together in an \AR context: visual hand rendering and delocalized haptic rendering. \noindentskip
% The main contributions of this chapter are:
A few works explored the effect of a visual hand rendering on interactions in \AR by simulating mutual occlusion between the real hand and virtual objects \cite{ha2014wearhand, piumsomboon2014graspshell, al-kalbani2016analysis}, or displaying a 3D virtual hand model, semi-transparent \cite{ha2014wearhand, piumsomboon2014graspshell} or opaque \cite{blaga2017usability, yoon2020evaluating, saito2021contact}.
%
Indeed, some visual hand renderings are known to improve interactions or user experience in virtual reality (VR), where the real hand is not visible \cite{prachyabrued2014visual, argelaguet2016role, grubert2018effects, schwind2018touch, vanveldhuizen2021effect}.
%
However, the role of a visual hand rendering superimposed and seen above the real tracked hand has not yet been investigated in \AR.
%
Conjointly, several studies have demonstrated that wearable haptics can significantly improve interactions performance and user experience in \AR \cite{maisto2017evaluation, meli2018combining, sarac2022perceived}.
%
But haptic rendering for \AR remains a challenge as it is difficult to provide rich and realistic haptic sensations while limiting their negative impact on hand tracking \cite{pacchierotti2016hring} and keeping the fingertips and palm free to interact with the real environment \cite{lopes2018adding, teng2021touch, sarac2022perceived, palmer2022haptic}.
%
Therefore, the haptic feedback of the fingertip contact with the virtual environment needs to be rendered elsewhere on the hand, it is unclear which positioning should be preferred or which type of haptic feedback is best suited for manipulating virtual objects in \AR.
%
A final question is whether one or the other of these (haptic or visual) hand renderings should be preferred \cite{maisto2017evaluation, meli2018combining}, or whether a combined visuo-haptic rendering is beneficial for users.
%
In fact, both hand renderings can provide sufficient sensory cues for efficient manipulation of virtual objects in \AR, or conversely, they can be shown to be complementary.
In this paper, we investigate the role of the visuo-haptic rendering of the hand during 3D manipulation of virtual objects in OST-AR.
%
We consider two representative manipulation tasks: push-and-slide and grasp-and-place a virtual object.
%
The main contributions of this work are:
\begin{itemize} \begin{itemize}
\item A comparison from the literature of the six most common visual hand renderings used in \AR. \item A comparison from the literature of the six most common visual hand renderings used to interact with \VOs in \AR.
\item A user study evaluating with 24 participants the performance and user experience of the six visual hand renderings superimposed on the real hand during free and direct hand manipulation of \VOs in \OST-\AR. \item A user study evaluating with 24 participants the performance and user experience of the six visual hand renderings superimposed on the real hand during free and direct hand manipulation of \VOs in \OST-\AR.
\end{itemize} \end{itemize}
\noindentskip
In the next sections, we first present the six visual hand renderings considered in this study and gathered from the literature. We then describe the experimental setup and design, the two manipulation tasks, and the metrics used. We present the results of the user study and discuss the implications of these results for the manipulation of \VOs directly with the hand in \AR. In the next sections, we first present the six visual hand renderings considered in this study and gathered from the literature. We then describe the experimental setup and design, the two manipulation tasks, and the metrics used. We present the results of the user study and discuss the implications of these results for the manipulation of \VOs directly with the hand in \AR.
\begin{subfigs}{hands}{The six visual hand renderings.}[ \begin{subfigs}{hands}{The six visual hand renderings.}[

View File

@@ -50,7 +50,7 @@ We aim to investigate whether the chosen visual hand rendering affects the perfo
\subsection{Manipulation Tasks and Virtual Scene} \subsection{Manipulation Tasks and Virtual Scene}
\label{tasks} \label{tasks}
Following the guidelines of \textcite{bergstrom2021how} for designing object manipulation tasks, we considered two variations of a 3D pick-and-place task, commonly found in interaction and manipulation studies \cite{prachyabrued2014visual,maisto2017evaluation,meli2018combining,blaga2017usability,vanveldhuizen2021effect}. Following the guidelines of \textcite{bergstrom2021how} for designing object manipulation tasks, we considered two variations of a 3D pick-and-place task, commonly found in interaction and manipulation studies \cite{prachyabrued2014visual,blaga2017usability,maisto2017evaluation,meli2018combining,vanveldhuizen2021effect}.
\subsubsection{Push Task} \subsubsection{Push Task}
\label{push-task} \label{push-task}
@@ -72,15 +72,15 @@ However, this time, the target volume can spawn in eight different locations on
Users are asked to grasp, lift, and move the cube towards the target volume using their fingertips in any way they prefer. Users are asked to grasp, lift, and move the cube towards the target volume using their fingertips in any way they prefer.
As before, the task is considered completed when the cube is \emph{fully} inside the volume. As before, the task is considered completed when the cube is \emph{fully} inside the volume.
\begin{subfigs}{tasks}{The two manipulation tasks wof the user study. }[ \begin{subfigs}{tasks}{The two manipulation tasks wof the user study.}[
The cube to manipulate is in the middle of the table (5-cm-edge and opaque) and the eight possible targets to reach are arround (7-cm-edge volume and semi-transparent). The cube to manipulate is in the middle of the table (\qty{5}{\cm} edge and opaque) and the eight possible targets to reach are around it (\qty{7}{\cm} edge volume and semi-transparent).
Only one target at a time was shown during the experiments. Only one target at a time was shown during the experiments.
][ ][
\item Push task: pushing the virtual cube along a table towards a target placed on the same surface. \item Push task: pushing the virtual cube along a table towards a target placed on the same surface.
\item Grasp task: grasping and lifting the virtual cube towards a target placed on a \qty{20}{\cm} higher plane. \item Grasp task: grasping and lifting the virtual cube towards a target placed on a \qty{20}{\cm} higher plane.
] ]
\subfig[0.4]{method/task-push} \subfig[0.45]{method/task-push}
\subfig[0.4]{method/task-grasp} \subfig[0.45]{method/task-grasp}
\end{subfigs} \end{subfigs}
\subsection{Experimental Design} \subsection{Experimental Design}
@@ -128,7 +128,7 @@ First, participants were given a consent form that briefed them about the tasks
Then, participants were asked to comfortably sit in front of a table and wear the HoloLens~2 headset as shown in~\figref{tasks}, perform the calibration of the visual hand size as described in~\secref{apparatus}, and complete a \qty{2}{min} training to familiarize with the \AR rendering and the two considered tasks. Then, participants were asked to comfortably sit in front of a table and wear the HoloLens~2 headset as shown in~\figref{tasks}, perform the calibration of the visual hand size as described in~\secref{apparatus}, and complete a \qty{2}{min} training to familiarize with the \AR rendering and the two considered tasks.
During this training, we did not use any of the six hand renderings we want to test, but rather a fully-opaque white hand rendering that completely occluded the real hand of the user. During this training, we did not use any of the six hand renderings we want to test, but rather a fully-opaque white hand rendering that completely occluded the real hand of the user.
Participants were asked to carry out the two tasks as naturally and as fast as possible. Participants were asked to carry out the two tasks as naturally and as fast as possible.
Similarly to \cite{prachyabrued2014visual, maisto2017evaluation, blaga2017usability, vanveldhuizen2021effect}, we only allowed the use of the dominant hand. Similarly to \cite{prachyabrued2014visual,maisto2017evaluation,blaga2017usability,vanveldhuizen2021effect}, we only allowed the use of the dominant hand.
The experiment took around 1 hour and 20 minutes to complete. The experiment took around 1 hour and 20 minutes to complete.
\subsection{Participants} \subsection{Participants}

View File

@@ -41,7 +41,7 @@ On the contrary, the lack of visual hand constrained the participants to give mo
Targets on the left (\level{L}, \level{LF}) and the right (\level{R}) sides had higher \response{Time per Contact} than all the other targets (\p{0.005}). Targets on the left (\level{L}, \level{LF}) and the right (\level{R}) sides had higher \response{Time per Contact} than all the other targets (\p{0.005}).
\begin{subfigs}{push_results}{Results of the push task performance metrics for each visual hand rendering. }[ \begin{subfigs}{push_results}{Results of the push task performance metrics for each visual hand rendering.}[
Geometric means with bootstrap 95~\% \CI Geometric means with bootstrap 95~\% \CI
and Tukey's \HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}. and Tukey's \HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
][ ][
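As a worked note on the statistics in this caption, using standard definitions (assumed here, not taken from the analysis scripts): the geometric mean of $n$ completion times $x_1, \dots, x_n$ is
\[
\bar{x}_g = \exp\!\left(\frac{1}{n}\sum_{i=1}^{n} \ln x_i\right),
\]
its bootstrap 95~\% \CI is obtained by recomputing $\bar{x}_g$ over resamples drawn with replacement and taking the 2.5th and 97.5th percentiles, and Tukey's \HSD then compares all pairs of renderings while controlling the family-wise error rate.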

View File

@@ -54,7 +54,7 @@ The \level{Mesh} rendering seemed to have provided the most confidence to partic
The \response{Grip Aperture} was longer on the right-front (\level{RF}) target volume, indicating a higher confidence, than on back and side targets (\level{R}, \level{RB}, \level{B}, \level{L}, \p{0.03}). The \response{Grip Aperture} was longer on the right-front (\level{RF}) target volume, indicating a higher confidence, than on back and side targets (\level{R}, \level{RB}, \level{B}, \level{L}, \p{0.03}).
\begin{subfigs}{grasp_results}{Results of the grasp task performance metrics for each visual hand rendering. }[ \begin{subfigs}{grasp_results}{Results of the grasp task performance metrics for each visual hand rendering.}[
Geometric means with bootstrap 95~\% \CI Geometric means with bootstrap 95~\% \CI
and Tukey's \HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}. and Tukey's \HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
][ ][

View File

@@ -14,7 +14,7 @@ Pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment were then us
A complete visual hand rendering seemed to be preferred over no visual hand rendering when grasping. A complete visual hand rendering seemed to be preferred over no visual hand rendering when grasping.
\end{itemize} \end{itemize}
\begin{subfigs}{results_ranks}{Boxplots of the ranking for each visual hand rendering. }[ \begin{subfigs}{results_ranks}{Boxplots of the ranking for each visual hand rendering.}[
Lower is better. Lower is better.
Pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment: ** is \pinf{0.01} and * is \pinf{0.05}. Pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment: ** is \pinf{0.01} and * is \pinf{0.05}.
][ ][

View File

@@ -19,7 +19,7 @@ Moreover, having no visible visual \factor{Hand} rendering was felt by users fat
Surprisingly, no clear consensus was found on \response{Rating}. Surprisingly, no clear consensus was found on \response{Rating}.
Each visual hand rendering, except for \level{Occlusion}, had simultaneously received the minimum and maximum possible ratings. Each visual hand rendering, except for \level{Occlusion}, had simultaneously received the minimum and maximum possible ratings.
\begin{subfigs}{results_questions}{Boxplots of the questionnaire results for each visual hand rendering. }[ \begin{subfigs}{results_questions}{Boxplots of the questionnaire results for each visual hand rendering.}[
Pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment: ** is \pinf{0.01} and * is \pinf{0.05}. Pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment: ** is \pinf{0.01} and * is \pinf{0.05}.
Lower is better for \textbf{(a)} difficulty and \textbf{(b)} fatigue. Lower is better for \textbf{(a)} difficulty and \textbf{(b)} fatigue.
Higher is better for \textbf{(c)} performance, \textbf{(d)} precision, \textbf{(e)} efficiency, and \textbf{(f)} rating. Higher is better for \textbf{(c)} performance, \textbf{(d)} precision, \textbf{(e)} efficiency, and \textbf{(f)} rating.

View File

@@ -25,8 +25,8 @@ This result is consistent with \textcite{saito2021contact}, who found that displ
To summarize, when employing a visual hand rendering overlaying the real hand, participants were more performant and confident in manipulating virtual objects with bare hands in \AR. To summarize, when employing a visual hand rendering overlaying the real hand, participants were more performant and confident in manipulating virtual objects with bare hands in \AR.
These results contrast with similar manipulation studies, but in non-immersive, on-screen \AR, where the presence of a visual hand rendering was found by participants to improve the usability of the interaction, but not their performance \cite{blaga2017usability,maisto2017evaluation,meli2018combining}. These results contrast with similar manipulation studies, but in non-immersive, on-screen \AR, where the presence of a visual hand rendering was found by participants to improve the usability of the interaction, but not their performance \cite{blaga2017usability,maisto2017evaluation,meli2018combining}.
Our results show the most effective visual hand rendering to be the \level{Skeleton} one. Participants appreciated that it provided a detailed and precise view of the tracking of the real hand, without hiding or masking it. Our results show the most effective visual hand rendering to be the \level{Skeleton} one.
Participants appreciated that it provided a detailed and precise view of the tracking of the real hand, without hiding or masking it.
Although the \level{Contour} and \level{Mesh} hand renderings were also highly rated, some participants felt that they were too visible and masked the real hand. Although the \level{Contour} and \level{Mesh} hand renderings were also highly rated, some participants felt that they were too visible and masked the real hand.
This result is in line with the results of virtual object manipulation in \VR of \textcite{prachyabrued2014visual}, who found that the most effective visual hand rendering was a double representation of both the real tracked hand and a visual hand physically constrained by the virtual environment. This result is in line with the results of virtual object manipulation in \VR of \textcite{prachyabrued2014visual}, who found that the most effective visual hand rendering was a double representation of both the real tracked hand and a visual hand physically constrained by the virtual environment.
This type of \level{Skeleton} rendering was also the one that provided the best sense of agency (control) in \VR \cite{argelaguet2016role, schwind2018touch}. This type of \level{Skeleton} rendering was also the one that provided the best sense of agency (control) in \VR \cite{argelaguet2016role,schwind2018touch}.

View File

@@ -1,7 +1,18 @@
\section{Conclusion} \section{Conclusion}
\label{conclusion} \label{conclusion}
This paper presented two human subject studies aimed at better understanding the role of visuo-haptic rendering of the hand during virtual object manipulation in OST-AR. In this chapter, we addressed the challenge of touching, grasping and manipulating \VOs directly with the hand in immersive \OST-\AR by providing and evaluating visual renderings as hand augmentations.
The first experiment compared six visual hand renderings in two representative manipulation tasks in \AR, \ie push-and-slide and grasp-and-place of a virtual object. Superimposed on the user's hand, these visual renderings display the virtual hand, which tracks the real hand and simulates the interaction with \VOs as a proxy.
Results show that a visual hand rendering improved the performance, perceived effectiveness, and user confidence. We first selected and compared the six most popular visual hand renderings used to interact with \VOs in \AR.
Then, in a user study with 24 participants and an immersive \OST-\AR headset, we evaluated the effect of these six visual hand renderings on the user performance and experience in two representative manipulation tasks.
Our results showed that a visual hand rendering overlaying the real hand improved the performance, perceived effectiveness and confidence of participants, compared to no rendering at all.
A skeleton rendering, providing a detailed view of the tracked joints and phalanges while not hiding the real hand, was the most performant and effective. A skeleton rendering, providing a detailed view of the tracked joints and phalanges while not hiding the real hand, was the most performant and effective.
The contour and mesh renderings were found to mask the real hand, while opinions on the tips rendering were divided.
The occlusion rendering suffered from too much tracking latency to be effective.
This is consistent with similar manipulation studies in \VR and in non-immersive \VST-\AR setups.
This study suggests that a \ThreeD visual hand rendering is important in \AR when interacting through a virtual hand technique.
It seems particularly required for interaction tasks that involve precise movements of the fingers in relation to virtual content, such as \ThreeD windows, buttons and sliders, or stacking and assembly tasks.
A minimal but detailed rendering of the hand that does not hide the real hand, like the skeleton rendering we evaluated, seems to be the best compromise between provided feedback and effectiveness.
Still, users should be able to choose and adapt the visual hand rendering to their preferences and needs.

View File

@@ -50,7 +50,7 @@ The chosen visuo-haptic hand renderings are the combination of the two most repr
\subsection{Experimental Design} \subsection{Experimental Design}
\label{design} \label{design}
\begin{subfigs}{tasks}{The two manipulation tasks of the user study. }[ \begin{subfigs}{tasks}{The two manipulation tasks of the user study.}[
Both pictures show the cube to manipulate in the middle (\qty{5}{\cm} cube and opaque) and the eight possible targets to reach (\qty{7}{\cm} cube and semi-transparent). Both pictures show the cube to manipulate in the middle (\qty{5}{\cm} cube and opaque) and the eight possible targets to reach (\qty{7}{\cm} cube and semi-transparent).
Only one target at a time was shown during the experiments. Only one target at a time was shown during the experiments.
][ ][
@@ -61,7 +61,7 @@ The chosen visuo-haptic hand renderings are the combination of the two most repr
\subfig[0.23]{method/task-grasp} \subfig[0.23]{method/task-grasp}
\end{subfigs} \end{subfigs}
\begin{subfigs}{push_results}{Results of the grasp task performance metrics. }[ \begin{subfigs}{push_results}{Results of the grasp task performance metrics.}[
Geometric means with bootstrap 95~\% \CI for each vibrotactile positioning (a, b and c) or visual hand rendering (d) Geometric means with bootstrap 95~\% \CI for each vibrotactile positioning (a, b and c) or visual hand rendering (d)
and Tukey's \HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}. and Tukey's \HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
][ ][

View File

@@ -18,7 +18,7 @@ Although the \level{Distance} technique provided additional feedback on the inte
\subsection{Questionnaire} \subsection{Questionnaire}
\label{questions} \label{questions}
\begin{subfigs}{results_questions}{Boxplots of the questionnaire results for each vibrotactile positioning. }[ \begin{subfigs}{results_questions}{Boxplots of the questionnaire results for each vibrotactile positioning.}[
Pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}. Pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
Higher is better for \textbf{(a)} vibrotactile rendering rating, \textbf{(b)} usefulness and \textbf{(c)} fatigue. Higher is better for \textbf{(a)} vibrotactile rendering rating, \textbf{(b)} usefulness and \textbf{(c)} fatigue.
Lower is better for \textbf{(d)} workload. Lower is better for \textbf{(d)} workload.

View File

@@ -1,7 +1,7 @@
\section{Results} \section{Results}
\label{results} \label{results}
\begin{subfigs}{grasp_results}{Results of the grasp task performance metrics for each vibrotactile positioning. }[ \begin{subfigs}{grasp_results}{Results of the grasp task performance metrics for each vibrotactile positioning.}[
Geometric means with bootstrap 95~\% \CI and Tukey's \HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}. Geometric means with bootstrap 95~\% \CI and Tukey's \HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
][ ][
\item Time to complete a trial. \item Time to complete a trial.

View File

@@ -3,10 +3,32 @@
\section*{Summary} \section*{Summary}
In this thesis, entitled \enquote{\ThesisTitle}, we presented our research on direct hand interaction with real and virtual everyday objects, visually and haptically augmented using immersive \AR and wearable haptic devices.
\noindentskip \partref{manipulation}
\noindentskip In \chapref{visual_hand}, we addressed the challenge of manipulating \VOs directly with the hand by providing visual renderings as hand augmentations.
Seen as an overlay on the user's hand, such visual hand renderings provide feedback on the hand tracking and the interaction with \VOs.
We compared the six most commonly used renderings in the \AR literature in a user study with 24 participants, where we evaluated their effect on the user performance and experience in two representative manipulation tasks.
Results showed that a visual hand rendering improved the user performance, perceived effectiveness and confidence, with a skeleton-like rendering being the most performant and effective.
This rendering provided a detailed view of the tracked phalanges while being thin enough not to hide the real hand.
\section*{Future Work} \section*{Future Work}
The visuo-haptic renderings we presented and the user studies we conducted in this thesis have of course some limitations.
We present in this section some future work that could address these limitations.
\subsection*{Visual Rendering of the Hand for Manipulating Virtual Objects in Augmented Reality} \subsection*{Visual Rendering of the Hand for Manipulating Virtual Objects in Augmented Reality}
\paragraph{Other AR Displays}
The visual hand renderings we evaluated were displayed on the Microsoft HoloLens~2, which is a common \OST-\AR headset. Evaluating them on other displays, such as \VST-\AR headsets, would assess how well our results generalize.
\paragraph{More Ecological Conditions}
We evaluated the effect of the visual hand rendering with two manipulation tasks involving placing a virtual cube into a target volume, either by pushing it along a table or by grasping it.
%While these tasks are fundamental and basics
These results have of course some limitations as they only address limited types of manipulation tasks and visual hand characteristics, evaluated in a specific \OST-\AR setup. These results have of course some limitations as they only address limited types of manipulation tasks and visual hand characteristics, evaluated in a specific \OST-\AR setup.
The two manipulation tasks were also limited to placing a virtual cube in predefined target volumes. The two manipulation tasks were also limited to placing a virtual cube in predefined target volumes.
Testing a wider range of virtual objects and more ecological tasks, \eg stacking or assembly, as well as considering bimanual manipulation, will ensure a greater applicability of the results obtained in this work. Testing a wider range of virtual objects and more ecological tasks, \eg stacking or assembly, as well as considering bimanual manipulation, will ensure a greater applicability of the results obtained in this work.

View File

@@ -9,6 +9,5 @@
\textbf{Erwan Normand}, Claudio Pacchierotti, Eric Marchand, and Maud Marchal. \enquote{Augmenting the Texture Perception of Tangible Surfaces in Augmented Reality using Vibrotactile Haptic Stimuli}. To appear in \textit{Proceedings of EuroHaptics 2024}, 2024. \textbf{Erwan Normand}, Claudio Pacchierotti, Eric Marchand, and Maud Marchal. \enquote{Augmenting the Texture Perception of Tangible Surfaces in Augmented Reality using Vibrotactile Haptic Stimuli}. To appear in \textit{Proceedings of EuroHaptics 2024}, 2024.
\bigskip \noindentskip
\textbf{Erwan Normand}, Claudio Pacchierotti, Eric Marchand, and Maud Marchal. \enquote{How Different Is the Perception of Vibrotactile Texture Roughness in Augmented versus Virtual Reality?}. To appear in \textit{Proceedings of the 30th ACM Symposium on Virtual Reality Software and Technology (VRST '24)}, 2024.
\noindent \textbf{Erwan Normand}, Claudio Pacchierotti, Eric Marchand, and Maud Marchal. \enquote{How Different Is the Perception of Vibrotactile Texture Roughness in Augmented versus Virtual Reality?}. To appear in \textit{Proceedings of the 30th ACM Symposium on Virtual Reality Software and Technology (VRST '24)}, 2024.

View File

@@ -14,9 +14,10 @@
% Content % Content
\input{config/content} \input{config/content}
\newcommand{\ThesisTitle}{Study of the Perception and Manipulation of Virtual Objects in Augmented Reality using Wearable Haptics}
\hypersetup{ \hypersetup{
pdfauthor = {Erwan NORMAND}, pdfauthor = {Erwan NORMAND},
pdftitle = {Study of the Perception and Manipulation of Virtual Objects in Augmented Reality using Wearable Haptics}, pdftitle = \ThesisTitle,
pdfsubject = {Ph.D. Thesis of Erwan NORMAND}, pdfsubject = {Ph.D. Thesis of Erwan NORMAND},
pdfkeywords = {Augmented Reality, Wearable Haptics, Perception, Interaction, Textures, Virtual Hand}, pdfkeywords = {Augmented Reality, Wearable Haptics, Perception, Interaction, Textures, Virtual Hand},
} }
@@ -32,13 +33,13 @@
\frontmatter \frontmatter
\import{0-front}{cover} \import{0-front}{cover}
%\importchapter{0-front}{acknowledgement} \importchapter{0-front}{acknowledgement}
\importchapter{0-front}{toc} \importchapter{0-front}{toc}
\mainmatter \mainmatter
\import{1-introduction}{part} \import{1-introduction}{part}
\importchapter{1-introduction/introduction}{introduction} \importchapter{1-introduction/introduction}{introduction}
%\importchapter{1-introduction/related-work}{related-work} \importchapter{1-introduction/related-work}{related-work}
\import{2-perception}{perception} \import{2-perception}{perception}
\importchapter{2-perception/vhar-system}{vhar-system} \importchapter{2-perception/vhar-system}{vhar-system}
@@ -54,7 +55,7 @@
\appendix \appendix
\importchapter{4-conclusion}{publications} \importchapter{4-conclusion}{publications}
\importchapter{4-conclusion}{résumé} %\importchapter{4-conclusion}{résumé}
\backmatter \backmatter
\importchapter{5-back}{bibliography} \importchapter{5-back}{bibliography}