2024-09-27 22:10:59 +02:00
parent 8a85b14d3b
commit a9319210df
13 changed files with 51 additions and 47 deletions

View File

@@ -120,8 +120,8 @@ The interactions between the virtual hand and objects are then simulated and ren
Because the visuo-haptic \VE is displayed in real time, co-localized and aligned with the real one, the user is given the illusion of directly perceiving and interacting with the virtual content as if it were part of the \RE.
\fig{interaction-loop}{The interaction loop between a user and a visuo-haptic augmented environment.}[
One interacts with the visual (in blue) and haptic (in red) virtual environment through a virtual hand (in purple) interaction technique that tracks real hand movements and simulates contact with \VOs.
The virtual environment is rendered back to the user co-localized with the real one (in gray) using a visual \AR headset and a wearable haptic device.
One interacts with the visual (in blue) and haptic (in red) \VE through a virtual hand (in purple) interaction technique that tracks real hand movements and simulates contact with \VOs.
The \VE is rendered back to the user co-localized with the real one (in gray) using a visual \AR headset and a wearable haptic device.
]
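A minimal sketch of this interaction loop is given below; all interface and type names are hypothetical placeholders for the tracking, simulation, and rendering components, not the actual system's API.

// Hypothetical interfaces for the four stages of the interaction loop:
// hand tracking, physics simulation, visual rendering, haptic rendering.
public sealed class HandPose { }       // tracked pose of the real hand
public sealed class ContactState { }   // simulated virtual hand and contacts

public interface IHandTracker { HandPose Track(); }
public interface ISimulation { ContactState Step(HandPose realHand); }
public interface IVisualDisplay { void Render(ContactState state); }
public interface IHapticDevice { void Render(ContactState state); }

public static class InteractionLoop
{
    public static void Run(IHandTracker tracker, ISimulation simulation,
                           IVisualDisplay headset, IHapticDevice wearable)
    {
        while (true)
        {
            HandPose hand = tracker.Track();             // real hand movements
            ContactState state = simulation.Step(hand);  // contacts with virtual objects
            headset.Render(state);   // co-localized visual AR rendering
            wearable.Render(state);  // wearable haptic rendering
        }
    }
}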
In this context, we identify two main research challenges that we address in this thesis:

View File

@@ -10,9 +10,9 @@ By providing timely vibrations synchronized with the movement of the tool or the
%
In that sense, data-driven haptic textures have been developed as captures and models of real surfaces, resulting in the Penn Haptic Texture Toolkit (HaTT) database \cite{culbertson2014one}.
%
While these virtual haptic textures are perceived as similar to real textures \cite{culbertson2015should}, they have been evaluated using hand-held tools but not yet in a context of direct finger contact with the surface, in particular combined with visual textures in an immersive virtual environment.
While these virtual haptic textures are perceived as similar to real textures \cite{culbertson2015should}, they have been evaluated using hand-held tools but not yet in a context of direct finger contact with the surface, in particular combined with visual textures in an immersive \VE.
Combined with virtual reality (VR), where the user is immersed in a visual virtual environment, wearable haptic devices have also proven to be effective in modifying the visuo-haptic perception of tangible objects touched with the finger, without needing to modify the object \cite{asano2012vibrotactile,asano2015vibrotactile,salazar2020altering}.
Combined with virtual reality (VR), where the user is immersed in a visual \VE, wearable haptic devices have also proven to be effective in modifying the visuo-haptic perception of tangible objects touched with the finger, without needing to modify the object \cite{asano2012vibrotactile,asano2015vibrotactile,salazar2020altering}.
%
Worn on the finger, but not directly on the fingertip to keep it free to interact with tangible objects, they have been used to alter perceived stiffness, softness, friction and local deformations \cite{detinguy2018enhancing,salazar2020altering}.
%
@@ -26,7 +26,7 @@ These two factors have been shown to influence the perception of haptic stiffnes
%
It remains to be investigated whether simultaneous and co-localized visual and haptic texture augmentation of tangible surfaces in \AR can be perceived in a coherent and realistic manner, and to what extent each sensory modality would contribute to the overall perception of the augmented texture.
%
Being able to coherently substitute the visuo-haptic texture of an everyday surface directly touched by a finger is an important step towards new \AR applications capable of visually and haptically augmenting the real environment of a user in a plausible way.
Being able to coherently substitute the visuo-haptic texture of an everyday surface directly touched by a finger is an important step towards new \AR applications capable of visually and haptically augmenting the \RE of a user in a plausible way.
In this paper, we investigate how users perceive a tangible surface touched with the index finger when it is augmented with a visuo-haptic roughness texture using immersive optical see-through \AR (OST-AR) and wearable vibrotactile stimuli provided on the index finger.
%

View File

@@ -40,11 +40,11 @@ All these visual and haptic textures are isotropic: their rendering (appearance
\figref{setup} shows the experimental setup (middle) and the first-person view (right) of the user study.
%
Nine 5-cm square cardboards with a smooth, white melamine surface, arranged in a 3 \x 3 grid, were used as real tangible surfaces to augment.
Nine \qty{5}{\cm} square cardboards with a smooth, white melamine surface, arranged in a \numproduct{3 x 3} grid, were used as real tangible surfaces to augment.
%
Their poses were estimated with three 2-cm-square AprilTag fiducial markers glued on the surfaces grid.
Their poses were estimated with three \qty{2}{\cm} square AprilTag fiducial markers glued on the surfaces grid.
%
Similarly, a 2-cm-square fiducial marker was glued on top of the vibrotactile actuator to detect the finger pose.
Similarly, a \qty{2}{\cm} square fiducial marker was glued on top of the vibrotactile actuator to detect the finger pose.
%
Positioned \qty{20}{\cm} above the surfaces, a webcam (StreamCam, Logitech) filmed the markers to track finger movements relative to the surfaces.
%
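To illustrate the tracking step, the following is a hedged sketch of how the finger pose and scan speed could be derived from the two marker poses; the camera-space poses are assumed to come from an AprilTag detector, and all names are illustrative.

using UnityEngine;

// Sketch: pose of the finger marker expressed in the surface-grid frame,
// and the resulting scan speed that drives the haptic texture rendering.
public class FingerTracker
{
    private Vector3 lastPosition;
    private float lastTime;

    // cameraToSurface and cameraToFinger are assumed to be provided by an
    // AprilTag detection running on the webcam images.
    public float UpdateSpeed(Matrix4x4 cameraToSurface, Matrix4x4 cameraToFinger, float now)
    {
        // Finger pose relative to the surface grid.
        Matrix4x4 surfaceToFinger = cameraToSurface.inverse * cameraToFinger;
        Vector3 position = surfaceToFinger.GetColumn(3);

        float dt = Mathf.Max(now - lastTime, 1e-4f);
        float speed = (position - lastPosition).magnitude / dt; // in m/s
        lastPosition = position;
        lastTime = now;
        return speed;
    }
}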
@@ -54,7 +54,7 @@ When a haptic texture was touched, a \qty{48}{kHz} audio signal was generated us
%
The normal force on the texture was assumed to be constant at \qty{1.2}{\N} to generate the audio signal from the model, as in Culbertson \etal \cite{culbertson2015should}, who found that the HaTT textures can be rendered using only the speed as input without decreasing their perceived realism.
%
An amplifier (XY-502, not branded) converted this audio signal to a current transmitted to the vibrotactile voice-coil actuator (HapCoil-One, Actronika), which was encased in a 3D-printed plastic shell firmly attached to the middle index phalanx of the participant's dominant hand, similarly to previous studies \cite{asano2015vibrotactile,friesen2024perceived}.
An amplifier (XY-502, not branded) converted this audio signal to a current transmitted to the vibrotactile voice-coil actuator (HapCoil-One, Actronika), which was encased in a \ThreeD-printed plastic shell firmly attached to the middle index phalanx of the participant's dominant hand, similarly to previous studies \cite{asano2015vibrotactile,friesen2024perceived}.
%
This voice-coil actuator was chosen for its wide frequency range (\qtyrange{10}{1000}{\Hz}) and its relatively low acceleration distortion, as specified by the manufacturer\footnoteurl{https://www.actronika.com/haptic-solutions}.
%
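As an illustration of this rendering pipeline, the sketch below synthesizes one block of the \qty{48}{kHz} vibration signal from a speed-driven autoregressive texture model, in the spirit of the HaTT renderer; the model interface and coefficient handling are assumptions, not the exact toolkit API.

using System;

// Assumed interface: for a given speed and normal force, the data-driven
// model returns autoregressive (AR) coefficients and an excitation variance.
public interface ITextureModel
{
    void Lookup(double speed, double force, double[] arCoeffs, out double variance);
}

public sealed class TextureSynthesizer
{
    public const double SampleRate = 48000.0; // output audio signal rate
    public const double NormalForce = 1.2;    // assumed constant, in newtons

    private readonly Random rng = new Random();
    private readonly double[] history = new double[32]; // past output samples
    private readonly double[] coeffs = new double[32];

    // Fill one audio block from the current finger speed (m/s).
    public void FillBlock(ITextureModel model, double speed, double[] block)
    {
        model.Lookup(speed, NormalForce, coeffs, out double variance);
        double std = Math.Sqrt(variance);
        for (int i = 0; i < block.Length; i++)
        {
            // AR synthesis: white-noise excitation filtered by the model.
            double sample = std * NextGaussian();
            for (int k = 0; k < coeffs.Length; k++)
                sample -= coeffs[k] * history[k];
            for (int k = history.Length - 1; k > 0; k--)
                history[k] = history[k - 1];
            history[0] = sample;
            block[i] = sample;
        }
    }

    private double NextGaussian()
    {
        // Box-Muller transform for standard normal samples.
        double u1 = 1.0 - rng.NextDouble();
        double u2 = rng.NextDouble();
        return Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Sin(2.0 * Math.PI * u2);
    }
}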

View File

@@ -45,7 +45,7 @@ While visual sensation did influence perception, as observed in previous haptic
%
This indicates that participants were more confident and relied more on the haptic roughness perception than on the visual roughness perception when integrating both in one coherent perception.
%
Several participants also described attempting to identify visual and haptic textures using spatial breaks, edges, or patterns that were not observed when these textures were displayed in non-immersive virtual environments with a screen \cite{culbertson2014modeling,culbertson2015should}.
Several participants also described attempting to identify visual and haptic textures using spatial breaks, edges, or patterns that were not observed when these textures were displayed in non-immersive \VEs with a screen \cite{culbertson2014modeling,culbertson2015should}.
%
A few participants even reported that they clearly sensed patterns on haptic textures.
%
@@ -63,7 +63,7 @@ The perception of surface roughness with the finger is actually more complex bec
%
Another limitation that may have affected the perception of haptic textures is the lack of compensation for the frequency response of the actuator and amplifier \cite{asano2012vibrotactile,culbertson2014modeling,friesen2024perceived}.
%
Finally, the visual textures used were also simple color captures not meant to be used in an immersive virtual environment.
Finally, the visual textures used were also simple color captures not meant to be used in an immersive \VE.
%
However, our objective was not to accurately reproduce real textures, but to alter the perception of simultaneous visual and haptic roughness augmentation of a real surface directly touched by the finger in \AR.
%

View File

@@ -17,10 +17,10 @@ The results showed that participants consistently identified and matched cluster
%
The texture rankings did indeed show that participants perceived the roughness of haptic textures to be very similar, but less so for visual textures, and the haptic roughness perception dominated the final roughness perception ranking of the original visuo-haptic pairs.
%
There are still many improvements to be made to the respective renderings of the haptic and visual textures used in this work to make them more realistic for finger perception and immersive virtual environment contexts.
There are still many improvements to be made to the respective renderings of the haptic and visual textures used in this work to make them more realistic for finger perception and immersive \VE contexts.
%
However, these results suggest that \AR visual textures that augment tangible surfaces can be enhanced with a set of data-driven vibrotactile haptic textures in a coherent and realistic manner.
%
This paves the way for new \AR applications capable of augmenting a real environment with virtual visuo-haptic textures, such as visuo-haptic painting in artistic, object design or interior design contexts.
This paves the way for new \AR applications capable of augmenting a \RE with virtual visuo-haptic textures, such as visuo-haptic painting in artistic, object design or interior design contexts.
%
The latter is illustrated in \figref{experiment/use_case}, where a user applies different visuo-haptic textures to a wall to compare them visually and by touch.

View File

@@ -17,6 +17,8 @@ We \textbf{evaluate in a user study}, using the \OST-\AR headset Microsoft HoloL
\noindentskip In the next sections, we first present the six visual hand renderings we considered and gathered from the literature. We then describe the experimental setup and design, the two manipulation tasks, and the metrics used. We present the results of the user study and discuss the implications of these results for the manipulation of \VOs directly with the hand in \AR.
\bigskip
\begin{subfigs}{hands}{The six visual hand renderings.}[
As seen by the user through the \AR headset during the two-finger grasping of a virtual cube.
][
@@ -25,7 +27,7 @@ We \textbf{evaluate in a user study}, using the \OST-\AR headset Microsoft HoloL
\item Rings on the fingertips \level{(Tips)}.
\item Thin outline of the hand \level{(Contour, Cont)}.
\item Fingers' joints and phalanges \level{(Skeleton, Skel)}.
\item Semi-transparent 3D hand model \level{(Mesh)}.
\item Semi-transparent \ThreeD hand model \level{(Mesh)}.
]
\subfig[0.22]{method/hands-none}
\subfig[0.22]{method/hands-occlusion}

View File

@@ -5,30 +5,30 @@ We compared a set of the most popular visual hand renderings, as found in the li
Since we address hand-centered manipulation tasks, we only considered renderings including the fingertips (\secref[related_work]{grasp_types}).
Moreover, so as to keep the focus on the hand rendering itself, we used neutral semi-transparent grey meshes, consistent with the choices made in \cite{yoon2020evaluating,vanveldhuizen2021effect}.
All considered hand renderings are drawn following the tracked pose of the user's real hand.
However, while the real hand can of course penetrate virtual objects, the visual hand is always constrained by the virtual environment (\secref[related_work]{ar_virtual_hands}).
However, while the real hand can of course penetrate \VOs, the visual hand is always constrained by the \VE (\secref[related_work]{ar_virtual_hands}).
They are shown in \figref{hands} and described below, with an abbreviation in parentheses when needed.
\paragraph{None}
As a reference, we considered no visual hand rendering (\figref{method/hands-none}), as is common in \AR \cite{hettiarachchi2016annexing,blaga2017usability,xiao2018mrtouch,teng2021touch}.
Users have no information about hand tracking and no feedback about contact with the virtual objects, other than their movement when touched.
As virtual content is rendered on top of the real environment, the hand of the user can be hidden by the virtual objects when manipulating them (\secref[related_work]{ar_displays}).
Users have no information about hand tracking and no feedback about contact with the \VOs, other than their movement when touched.
As virtual content is rendered on top of the \RE, the hand of the user can be hidden by the \VOs when manipulating them (\secref[related_work]{ar_displays}).
\paragraph{Occlusion (Occl)}
To avoid the abovementioned undesired occlusions due to the virtual content being rendered on top of the real environment, we can carefully crop the former whenever it hides real content that should be visible \cite{macedo2023occlusion}, \eg the thumb of the user in \figref{method/hands-occlusion}.
To avoid the abovementioned undesired occlusions due to the virtual content being rendered on top of the \RE, we can carefully crop the former whenever it hides real content that should be visible \cite{macedo2023occlusion}, \eg the thumb of the user in \figref{method/hands-occlusion}.
This approach is frequent in works using \VST-\AR headsets \cite{knorlein2009influence,ha2014wearhand,piumsomboon2014graspshell,suzuki2014grasping,al-kalbani2016analysis}.
\paragraph{Tips}
This rendering shows small visual rings around the fingertips of the user (\figref{method/hands-tips}), highlighting the most important parts of the hand and contact with virtual objects during fine manipulation (\secref[related_work]{grasp_types}).
This rendering shows small visual rings around the fingertips of the user (\figref{method/hands-tips}), highlighting the most important parts of the hand and contact with \VOs during fine manipulation (\secref[related_work]{grasp_types}).
Unlike work using small spheres \cite{maisto2017evaluation,meli2014wearable,grubert2018effects,normand2018enlarging,schwind2018touch}, this ring rendering also provides information about the orientation of the fingertips.
\paragraph{Contour (Cont)}
This rendering is a \qty{1}{\mm} thick outline contouring the user's hands, providing information about the whole hand while leaving its inside visible.
Unlike the other renderings, it is not occluded by the virtual objects, as shown in \figref{method/hands-contour}.
Unlike the other renderings, it is not occluded by the \VOs, as shown in \figref{method/hands-contour}.
This rendering is less common in the literature than the previous ones \cite{kang2020comparative}.
\paragraph{Skeleton (Skel)}
@@ -39,24 +39,24 @@ It is widely used in \VR \cite{argelaguet2016role,schwind2018touch,chessa2019gra
\paragraph{Mesh}
This rendering is a 3D semi-transparent ($\alpha=0.2$) hand model (\figref{method/hands-mesh}), which is common in \VR \cite{prachyabrued2014visual,argelaguet2016role,schwind2018touch,chessa2019grasping,yoon2020evaluating,vanveldhuizen2021effect}.
This rendering is a \ThreeD semi-transparent ($\alpha=0.2$) hand model (\figref{method/hands-mesh}), which is common in \VR \cite{prachyabrued2014visual,argelaguet2016role,schwind2018touch,chessa2019grasping,yoon2020evaluating,vanveldhuizen2021effect}.
It can be seen as a filled version of the Contour hand rendering, thus partially covering the view of the real hand.
\section{User Study}
\label{method}
We aim to investigate whether the chosen visual hand rendering affects the performance and user experience of manipulating virtual objects with free hands in \AR.
We aim to investigate whether the chosen visual hand rendering affects the performance and user experience of manipulating \VOs with free hands in \AR.
\subsection{Manipulation Tasks and Virtual Scene}
\label{tasks}
Following the guidelines of \textcite{bergstrom2021how} for designing object manipulation tasks, we considered two variations of a 3D pick-and-place task, commonly found in interaction and manipulation studies \cite{prachyabrued2014visual,blaga2017usability,maisto2017evaluation,meli2018combining,vanveldhuizen2021effect}.
Following the guidelines of \textcite{bergstrom2021how} for designing object manipulation tasks, we considered two variations of a \ThreeD pick-and-place task, commonly found in interaction and manipulation studies \cite{prachyabrued2014visual,blaga2017usability,maisto2017evaluation,meli2018combining,vanveldhuizen2021effect}.
\subsubsection{Push Task}
\label{push-task}
The first manipulation task consists in pushing a virtual object along a real flat surface towards a target placed on the same plane (\figref{method/task-push}).
The virtual object to manipulate is a small \qty{50}{\mm} opaque blue cube, while the target is a slightly bigger \qty{70}{\mm} semi-transparent blue volume.
The first manipulation task consists in pushing a \VO along a real flat surface towards a target placed on the same plane (\figref{method/task-push}).
The \VO to manipulate is a small \qty{50}{\mm} opaque blue cube, while the target is a slightly bigger \qty{70}{\mm} semi-transparent blue volume.
At every repetition of the task, the cube to manipulate always spawns at the same place, on top of a real table in front of the user.
On the other hand, the target volume can spawn in eight different locations on the same table, located on a \qty{20}{\cm} radius circle centred on the cube, at \qty{45}{\degree} from each other (again \figref{method/task-push}).
Users are asked to push the cube towards the target volume using their fingertips in any way they prefer.
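For illustration, the eight possible target positions can be computed as in the following Unity C# sketch; the helper name is ours, not the experiment's actual code.

using UnityEngine;

public static class TargetSpawner
{
    // The eight target locations: a 20 cm radius circle centred on the
    // cube's spawn position, spaced 45 degrees apart on the table plane.
    public static Vector3[] TargetPositions(Vector3 cubeCenter)
    {
        var positions = new Vector3[8];
        for (int i = 0; i < 8; i++)
        {
            float angle = i * 45f * Mathf.Deg2Rad;
            positions[i] = cubeCenter
                + new Vector3(Mathf.Cos(angle), 0f, Mathf.Sin(angle)) * 0.20f;
        }
        return positions;
    }
}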
@@ -66,7 +66,7 @@ The task is considered completed when the cube is \emph{fully} inside the target
\subsubsection{Grasp Task}
\label{grasp-task}
The second manipulation task consists in grasping, lifting, and placing a virtual object in a target placed on a different (higher) plane (\figref{method/task-grasp}).
The second manipulation task consists in grasping, lifting, and placing a \VO in a target placed on a different (higher) plane (\figref{method/task-grasp}).
The cube to manipulate and target volume are the same as in the previous task.
However, this time, the target volume can spawn in eight different locations on a plane \qty{10}{\cm} \emph{above} the table, still located on a \qty{20}{\cm} radius circle at \qty{45}{\degree} from each other.
Users are asked to grasp, lift, and move the cube towards the target volume using their fingertips in any way they prefer.
@@ -97,7 +97,7 @@ Each condition was repeated three times.
To control learning effects, we counter-balanced the orders of the two manipulation tasks and visual hand renderings following a 6 \x 6 Latin square, leading to six blocks where the position of the target volume was in turn randomized.
This design led to a total of 2 manipulation tasks \x 6 visual hand renderings \x 8 targets \x 3 repetitions $=$ 288 trials per participant.
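A balanced 6 \x 6 Latin square of this kind can be generated with the standard construction for an even number of conditions; the sketch below is our illustration, not the experiment's actual code.

public static class LatinSquare
{
    // One row of a balanced Latin square (standard construction for even n):
    // the condition order for a given participant index.
    public static int[] Row(int participant, int n = 6)
    {
        var row = new int[n];
        for (int j = 0; j < n; j++)
        {
            // Offsets follow the pattern 0, +1, -1, +2, -2, +3, ...
            int offset = (j % 2 == 1) ? (j + 1) / 2 : -(j / 2);
            row[j] = ((participant + offset) % n + n) % n;
        }
        return row;
    }
}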
\subsection{Apparatus and Implementation}
\subsection{Apparatus}
\label{apparatus}
We used the \OST-\AR headset HoloLens~2, as described in \secref[vhar_system]{virtual_real_alignment}.
@@ -105,13 +105,13 @@ We used the \OST-\AR headset HoloLens~2, as described in \secref[vhar_system]{vi
It is also able to track the user's fingers.
We measured the latency of the hand tracking at \qty{15}{\ms}, independent of the hand movement speed.
The implementation of our experiment was done in C\# using Unity 2022.1, PhysX 4.1, and the Mixed Reality Toolkit (MRTK) 2.8\footnoteurl{https://learn.microsoft.com/windows/mixed-reality/mrtk-unity}.
The implementation of our experiment was done using Unity 2022.1, PhysX 4.1, and the Mixed Reality Toolkit (MRTK) 2.8\footnoteurl{https://learn.microsoft.com/windows/mixed-reality/mrtk-unity}.
The compiled application ran directly on the HoloLens~2 at \qty{60}{FPS}.
The default 3D hand model from MRTK was used for all visual hand renderings.
The default \ThreeD hand model from MRTK was used for all visual hand renderings.
By changing the material properties of this hand model, we were able to achieve the six renderings shown in \figref{hands}.
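As a hedged sketch of this material-based approach (component and material names are ours, not MRTK's), the six renderings could be selected as follows.

using UnityEngine;

public enum HandRendering { None, Occlusion, Tips, Contour, Skeleton, Mesh }

public class HandRenderingSwitcher : MonoBehaviour
{
    public Renderer handRenderer; // renderer of the default MRTK hand model
    public Material occlusionMaterial, tipsMaterial, contourMaterial,
                    skeletonMaterial, meshMaterial;

    public void Apply(HandRendering rendering)
    {
        // None: the hand model is simply not drawn.
        handRenderer.enabled = rendering != HandRendering.None;
        switch (rendering)
        {
            case HandRendering.Occlusion: handRenderer.material = occlusionMaterial; break; // depth-only mask
            case HandRendering.Tips:      handRenderer.material = tipsMaterial; break;      // fingertip rings
            case HandRendering.Contour:   handRenderer.material = contourMaterial; break;   // 1 mm outline
            case HandRendering.Skeleton:  handRenderer.material = skeletonMaterial; break;  // joints and phalanges
            case HandRendering.Mesh:
                Color c = meshMaterial.color;
                meshMaterial.color = new Color(c.r, c.g, c.b, 0.2f); // alpha = 0.2
                handRenderer.material = meshMaterial;
                break;
        }
    }
}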
A calibration was performed for every participant, to best adapt the size of the visual hand rendering to their real hand.
A set of empirical tests enabled us to choose the best rendering characteristics in terms of transparency and brightness for the virtual objects and hand renderings, which were applied throughout the experiment.
A set of empirical tests enabled us to choose the best rendering characteristics in terms of transparency and brightness for the \VOs and hand renderings, which were applied throughout the experiment.
The hand tracking information provided by MRTK was used to construct a virtual articulated physics-enabled hand (\secref[related_work]{ar_virtual_hands}) using PhysX.
It featured 25 DoFs, including the fingers' proximal, middle, and distal phalanges.
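A minimal sketch of how one such physics-enabled phalanx could be built and driven toward the tracked pose with PhysX joints in Unity is given below; the spring and damping values, and the sign conventions of the joint target, are assumptions.

using UnityEngine;

public static class PhysicsHandBuilder
{
    // Create one phalanx body attached to its parent bone.
    public static ConfigurableJoint AddPhalanx(Rigidbody parent, Vector3 localPosition)
    {
        var bone = new GameObject("Phalanx");
        bone.transform.SetParent(parent.transform, false);
        bone.transform.localPosition = localPosition;
        bone.AddComponent<CapsuleCollider>().radius = 0.008f; // approx. finger width

        var body = bone.AddComponent<Rigidbody>();
        body.useGravity = false;

        var joint = bone.AddComponent<ConfigurableJoint>();
        joint.connectedBody = parent;
        joint.rotationDriveMode = RotationDriveMode.Slerp;
        joint.slerpDrive = new JointDrive
        {
            positionSpring = 1e4f, positionDamper = 1e2f, maximumForce = 1e3f // assumed gains
        };
        return joint;
    }

    // Each frame, drive the joint toward the rotation tracked by MRTK;
    // the physics simulation keeps the visual hand outside virtual objects.
    public static void Drive(ConfigurableJoint joint, Quaternion trackedLocalRotation)
    {
        joint.targetRotation = trackedLocalRotation;
    }
}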
@@ -121,10 +121,10 @@ As before, a set of empirical tests have been used to select the most effective
The room where the experiment was held had no windows, with one light source of \qty{800}{\lumen} placed \qty{70}{\cm} above the table.
This setup enabled a good and consistent tracking of the user's fingers.
\subsection{Protocol}
\label{protocol}
\subsection{Procedure}
\label{procedure}
First, participants were given a consent form that briefed them about the tasks and the protocol of the experiment.
First, participants were given a consent form that briefed them about the tasks and the procedure of the experiment.
Then, participants were asked to comfortably sit in front of a table and wear the HoloLens~2 headset as shown in~\figref{tasks}, perform the calibration of the visual hand size as described in~\secref{apparatus}, and complete a \qty{2}{min} training to familiarize themselves with the \AR rendering and the two considered tasks.
During this training, we did not use any of the six hand renderings we wanted to test, but rather a fully opaque white hand rendering that completely occluded the real hand of the user.
Participants were asked to carry out the two tasks as naturally and as fast as possible.

View File

@@ -1,7 +1,7 @@
\section{Discussion}
\label{discussion}
We evaluated six visual hand renderings, as described in \secref{hands}, displayed on top of the real hand, in two virtual object manipulation tasks in \AR.
We evaluated six visual hand renderings, as described in \secref{hands}, displayed on top of the real hand, in two \VO manipulation tasks in \AR.
During the \level{Push} task, the \level{Skeleton} hand rendering was the fastest (\figref{results/Push-CompletionTime-Hand-Overall-Means}), as participants employed fewer and longer contacts to adjust the cube inside the target volume (\figref{results/Push-ContactsCount-Hand-Overall-Means} and \figref{results/Push-MeanContactTime-Hand-Overall-Means}).
Participants consistently used few and continuous contacts for all visual hand renderings (Fig. 3b), with fewer than ten trials, carried out by two participants, quickly completed with multiple discrete touches.
@@ -21,12 +21,12 @@ However, due to the latency of the hand tracking and the visual hand reacting to
The \level{Tips} rendering, which showed the contacts made on the virtual cube, was controversial as it received both the minimum and the maximum score on every question.
Many participants reported difficulties in seeing the orientation of the visual fingers,
while others found that it gave them a better sense of the contact points and improved their concentration on the task.
This result is consistent with \textcite{saito2021contact}, who found that displaying the points of contact was beneficial for grasping a virtual object over an opaque visual hand overlay.
This result is consistent with \textcite{saito2021contact}, who found that displaying the points of contact was beneficial for grasping a \VO over an opaque visual hand overlay.
To summarize, when employing a visual hand rendering overlaying the real hand, participants performed better and were more confident when manipulating virtual objects with bare hands in \AR.
To summarize, when employing a visual hand rendering overlaying the real hand, participants performed better and were more confident when manipulating \VOs with bare hands in \AR.
These results contrast with similar manipulation studies, but in non-immersive, on-screen \AR, where the presence of a visual hand rendering was found by participants to improve the usability of the interaction, but not their performance \cite{blaga2017usability,maisto2017evaluation,meli2018combining}.
Our results show the most effective visual hand rendering to be the \level{Skeleton} one.
Participants appreciated that it provided a detailed and precise view of the tracking of the real hand, without hiding or masking it.
Although the \level{Contour} and \level{Mesh} hand renderings were also highly rated, some participants felt that they were too visible and masked the real hand.
This result is in line with the results of virtual object manipulation in \VR of \textcite{prachyabrued2014visual}, who found that the most effective visual hand rendering was a double representation of both the real tracked hand and a visual hand physically constrained by the virtual environment.
This result is in line with the results of \VO manipulation in \VR of \textcite{prachyabrued2014visual}, who found that the most effective visual hand rendering was a double representation of both the real tracked hand and a visual hand physically constrained by the \VE.
This type of \level{Skeleton} rendering was also the one that provided the best sense of agency (control) in \VR \cite{argelaguet2016role,schwind2018touch}.

View File

@@ -1,4 +1,4 @@
\chapter{Visual Rendering of the Hand for Manipulating Virtual Objects in Augmented Reality}
\chapter{Visual Rendering of the Hand for Manipulating Virtual Objects in AR}
\mainlabel{visual_hand}
\chaptertoc

View File

@@ -1,4 +1,4 @@
Providing haptic feedback during free-hand manipulation in \AR is not trivial, as wearing haptic devices on the hand might affect the tracking capabilities of the system \cite{pacchierotti2016hring}.
\noindent Providing haptic feedback during free-hand manipulation in \AR is not trivial, as wearing haptic devices on the hand might affect the tracking capabilities of the system \cite{pacchierotti2016hring}.
Moreover, it is important to leave the user capable of interacting with both virtual and real objects, avoiding the use of haptic interfaces that cover the fingertips or palm.
For this reason, it is often considered beneficial to move the point of application of the haptic rendering elsewhere on the hand (\secref[related_work]{vhar_haptics}).
However, the impact of the positioning of the haptic rendering on the hand during direct hand manipulation in \AR has not been systematically studied.
@@ -24,6 +24,8 @@ We additionally compared these vibrotactile renderings with the \textbf{skeleton
\noindentskip In the next sections, we first describe the four delocalized positionings and the two contact vibration techniques we considered, based on previous work. We then present the experimental setup and design of the user study. Finally, we report the results and discuss them in the context of the free hand interaction with virtual content in \AR.
\bigskip
\fig[0.6]{method/locations}{Setup of the vibrotactile positionings on the hand.}[
To ensure minimal encumbrance, we used the same two motors throughout the experiment, moving them to the considered positioning before each new experimental block (in this case, on the co-located \level{Proximal} phalanx).
Thin self-gripping straps were placed on the five considered positionings during the entirety of the experiment.

View File

@@ -80,10 +80,10 @@ As we did not find any relevant effect of the order in which the tasks were perf
This design led to a total of 5 vibrotactile positionings \x 2 vibration contact techniques \x 2 visual hand renderings \x (2 targets on the Push task + 4 targets on the Grasp task) \x 3 repetitions $=$ 420 trials per participant.
\subsection{Apparatus and Protocol}
\subsection{Apparatus and Procedure}
\label{apparatus}
Apparatus and protocol were very similar to those of the \chapref{visual_hand}, as described in \secref[visual_hand]{apparatus} and \secref[visual_hand]{protocol}, respectively.
Apparatus and experimental procedure were very similar to those of the \chapref{visual_hand}, as described in \secref[visual_hand]{apparatus} and \secref[visual_hand]{protocol}, respectively.
We report here only the differences.
We employed the same vibrotactile device used by \cite{devigne2020power}.

View File

@@ -1,7 +1,7 @@
\section{Discussion}
\label{discussion}
We evaluated sixteen visuo-haptic renderings of the hand, in the same two virtual object manipulation tasks in \AR as in the \chapref{visual_hand}, as the combination of two vibrotactile contact techniques provided at four delocalized positions on the hand with the two most representative visual hand renderings established in the \chapref{visual_hand}.
We evaluated sixteen visuo-haptic renderings of the hand, in the same two \VO manipulation tasks in \AR as in the \chapref{visual_hand}, as the combination of two vibrotactile contact techniques provided at four delocalized positions on the hand with the two most representative visual hand renderings established in the \chapref{visual_hand}.
In the \level{Push} task, vibrotactile haptic hand rendering proved beneficial with the \level{Proximal} positioning, which registered a low completion time, but detrimental with the \level{Fingertips} positioning, which performed worse (\figref{results/Push-CompletionTime-Location-Overall-Means}) than the \level{Proximal} and \level{Opposite} (on the contralateral hand) positionings.
The cause might be the intensity of vibrations, which many participants found rather strong and possibly distracting when provided at the fingertips.
@@ -33,18 +33,18 @@ Additionally, the \level{Skeleton} rendering was appreciated and perceived as mo
Participants reported that this visual hand rendering provided good feedback on the status of the hand tracking while being constrained to the cube, and helped with rotation adjustment in both tasks.
However, many also felt that it was a bit redundant with the vibrotactile hand rendering.
Indeed, participants found the vibrotactile hand rendering to provide more accurate and reliable information regarding the contact with the cube than simply seeing the cube and the visual hand react to the manipulation.
This result suggests that providing a visual hand rendering may not be useful during the grasping phase, but may be beneficial prior to contact with the virtual object and during position and rotation adjustment, providing valuable information about the hand pose.
This result suggests that providing a visual hand rendering may not be useful during the grasping phase, but may be beneficial prior to contact with the \VO and during position and rotation adjustment, providing valuable information about the hand pose.
It is also worth noting that the improved hand tracking and grasp helper facilitated the manipulation of the cube with respect to the \chapref{visual_hand}, as shown by the shorter completion time during the \level{Grasp} task.
This improvement could also be the reason for the smaller differences between the \level{Skeleton} and the \level{None} visual hand renderings in this second experiment.
In summary, the positioning of the vibrotactile haptic rendering of the hand affected the performance and experience of users manipulating virtual objects with their bare hands in \AR.
In summary, the positioning of the vibrotactile haptic rendering of the hand affected the performance and experience of users manipulating \VOs with their bare hands in \AR.
The closer the vibrotactile hand rendering was to the point of contact, the better it was perceived in terms of effectiveness, usefulness, and realism.
These subjective appreciations of wearable haptic hand rendering for manipulating virtual objects in \AR were also observed by \textcite{maisto2017evaluation} and \textcite{meli2018combining}.
These subjective appreciations of wearable haptic hand rendering for manipulating \VOs in \AR were also observed by \textcite{maisto2017evaluation} and \textcite{meli2018combining}.
However, the best performance was obtained with the farthest positioning on the contralateral hand (\level{Opposite}), which is somewhat surprising.
This apparent paradox could be explained in two ways.
On the one hand, participants behaved differently when the haptic rendering was given on the fingers (\level{Fingertips} and \level{Proximal}), close to the contact point, with shorter pushes and larger grip apertures.
This behavior likely gave them a better experience of the tasks and more confidence in their actions, as well as leading to a lower interpenetration/force applied to the cube \cite{pacchierotti2015cutaneous}.
On the other hand, the unfamiliarity of the contralateral hand positioning (\level{Opposite}) caused participants to spend more time understanding the haptic stimuli, which might have made them more focused on performing the task.
In terms of the contact vibration technique, the continuous vibration based on the finger interpenetration (\level{Distance}) did not make a difference to performance, although it provided more information.
Participants felt that vibration bursts were sufficient to confirm contact with the virtual object.
Participants felt that vibration bursts were sufficient to confirm contact with the \VO.
Finally, it was interesting to note that the visual hand rendering was appreciated but felt less necessary when provided together with vibrotactile hand rendering, as the latter was deemed sufficient for acknowledging the contact.
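As a hedged sketch of how we understand these two contact vibration techniques (burst duration, interpenetration threshold, and names are assumptions, not the experiment's actual code):

using UnityEngine;

public class ContactVibration
{
    public bool continuous;              // false: single burst at contact onset
    public float burstDuration = 0.05f;  // in seconds, assumed
    public float maxPenetration = 0.01f; // in meters, assumed

    private float burstEnd;
    private bool wasTouching;

    // Returns the vibration amplitude in [0, 1] for the current frame.
    public float Amplitude(bool touching, float penetration, float now)
    {
        if (continuous) // amplitude scaled by finger interpenetration
            return touching ? Mathf.Clamp01(penetration / maxPenetration) : 0f;

        if (touching && !wasTouching) // short burst on contact onset
            burstEnd = now + burstDuration;
        wasTouching = touching;
        return now < burstEnd ? 1f : 0f;
    }
}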

View File

@@ -1,4 +1,4 @@
\chapter{Visuo-Haptic Rendering of Hand Manipulation With Virtual Objects in Augmented Reality}
\chapter{Visuo-Haptic Rendering of Hand Manipulation with Virtual Objects in AR}
\mainlabel{visuo_haptic_hand}
\chaptertoc