Remove \VO and \AE acronym

2024-10-18 14:49:07 +02:00
parent 625892eb9c
commit bd4a436f18
17 changed files with 164 additions and 168 deletions


@@ -1,24 +1,24 @@
\section{Introduction}
\label{intro}
Touching, grasping and manipulating virtual objects are fundamental interactions in \AR (\secref[related_work]{ve_tasks}) and essential for many of its applications (\secref[related_work]{ar_applications}).
The most common current \AR systems, in the form of portable and immersive \OST-\AR headsets \cite{hertel2021taxonomy}, allow real-time hand tracking and direct bare-hand interaction with virtual objects (\secref[related_work]{real_virtual_gap}).
Manipulation of virtual objects is achieved using a virtual hand interaction technique that represents the user's hand in the \VE and simulates interaction with virtual objects (\secref[related_work]{ar_virtual_hands}).
However, direct hand manipulation is still challenging due to the intangibility of the \VE, the lack of mutual occlusion between the hand and the virtual object in \OST-\AR (\secref[related_work]{ar_displays}), and the inherent delays between the user's hand and the result of the interaction simulation (\secref[related_work]{ar_virtual_hands}).
In this chapter, we investigate the \textbf{visual rendering as hand augmentation} for direct manipulation of virtual objects in \OST-\AR.
To this end, we selected from the literature and compared the most popular visual hand renderings used to interact with virtual objects in \AR.
The virtual hand is \textbf{displayed superimposed} on the user's hand with these visual renderings, providing \textbf{feedback on the tracking} of the real hand, as shown in \figref{hands}.
The movement of the virtual hand is also \textbf{constrained to the surface} of the virtual object, providing additional \textbf{feedback on the interaction} with the virtual object.
We \textbf{evaluate in a user study}, using the \OST-\AR headset Microsoft HoloLens~2, the effect of six visual hand renderings on user performance and experience in two representative manipulation tasks: push-and-slide and grasp-and-place, performed on a virtual object directly with the hand.
\noindentskip The main contributions of this chapter are:
\begin{itemize}
\item A comparison from the literature of the six most common visual hand renderings used to interact with virtual objects in \AR.
\item A user study evaluating with 24 participants the performance and user experience of the six visual hand renderings as augmentation of the real hand during free and direct hand manipulation of virtual objects in \OST-\AR.
\end{itemize}
\noindentskip In the next sections, we first present the six visual hand renderings we considered and gathered from the literature. We then describe the experimental setup and design, the two manipulation tasks, and the metrics used. We present the results of the user study and discuss the implications of these results for the manipulation of virtual objects directly with the hand in \AR.
\bigskip


@@ -5,15 +5,15 @@ We compared a set of the most popular visual hand renderings, as found in the li
Since we address hand-centered manipulation tasks, we only considered renderings including the fingertips (\secref[related_work]{grasp_types}).
Moreover, so as to keep the focus on the hand rendering itself, we used neutral semi-transparent grey meshes, consistent with the choices made in \cite{yoon2020evaluating,vanveldhuizen2021effect}.
All considered hand renderings are drawn following the tracked pose of the user's real hand.
However, while the real hand can of course penetrate virtual objects, the visual hand is always constrained by the \VE (\secref[related_work]{ar_virtual_hands}).
They are shown in \figref{hands} and described below, with an abbreviation in parentheses when needed.
\paragraph{None}
As a reference, we considered no visual hand rendering (\figref{method/hands-none}), as is common in \AR \cite{hettiarachchi2016annexing,blaga2017usability,xiao2018mrtouch,teng2021touch}.
Users have no information about hand tracking and no feedback about contact with the virtual objects, other than the objects' movement when touched.
As virtual content is rendered on top of the \RE, the user's hand can be hidden by the virtual objects when manipulating them (\secref[related_work]{ar_displays}).
\paragraph{Occlusion (Occl)}
@@ -22,13 +22,13 @@ This approach is frequent in works using \VST-\AR headsets \cite{knorlein2009inf
\paragraph{Tips}
This rendering shows small visual rings around the fingertips of the user (\figref{method/hands-tips}), highlighting the most important parts of the hand and contact with virtual objects during fine manipulation (\secref[related_work]{grasp_types}).
Unlike work using small spheres \cite{maisto2017evaluation,meli2014wearable,grubert2018effects,normand2018enlarging,schwind2018touch}, this ring rendering also provides information about the orientation of the fingertips.
\paragraph{Contour (Cont)}
This rendering is a \qty{1}{\mm} thick outline contouring the user's hands, providing information about the whole hand while leaving its inside visible.
Unlike the other renderings, it is not occluded by the virtual objects, as shown in \figref{method/hands-contour}.
This rendering is less common in the literature than the previous ones \cite{kang2020comparative}.
\paragraph{Skeleton (Skel)}
@@ -45,7 +45,7 @@ It can be seen as a filled version of the Contour hand rendering, thus partially
\section{User Study}
\label{method}
We aim to investigate whether the chosen visual hand rendering affects the performance and user experience of manipulating virtual objects with free hands in \AR.
\subsection{Manipulation Tasks and Virtual Scene}
\label{tasks}
@@ -55,8 +55,8 @@ Following the guidelines of \textcite{bergstrom2021how} for designing object man
\subsubsection{Push Task}
\label{push-task}
The first manipulation task consists in pushing a virtual object along a real flat surface towards a target placed on the same plane (\figref{method/task-push}).
The virtual object to manipulate is a small \qty{50}{\mm} blue and opaque cube, while the target is a (slightly) bigger \qty{70}{\mm} blue and semi-transparent volume.
At every repetition of the task, the cube to manipulate always spawns at the same place, on top of a real table in front of the user.
In contrast, the target volume can spawn at eight different locations on the same table, on a \qty{20}{\cm} radius circle centred on the cube, at \qty{45}{\degree} from each other (again \figref{method/task-push}).
Users are asked to push the cube towards the target volume using their fingertips in any way they prefer.
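The eight target locations described above can be generated as points on a circle around the cube; a minimal sketch in table-plane coordinates (the coordinate frame and origin are assumptions, not the study's implementation):

```python
import math

def target_locations(radius_m=0.20, count=8):
    """Candidate target spawn positions on a circle centred on the cube,
    45 degrees apart, expressed in table-plane coordinates (metres).
    Illustrative sketch only; the study's actual frame is not specified here."""
    step_deg = 360 / count  # 45 degrees for eight locations
    return [(radius_m * math.cos(math.radians(i * step_deg)),
             radius_m * math.sin(math.radians(i * step_deg)))
            for i in range(count)]
```

At each trial, one of these eight positions would then be drawn (e.g. at random or counterbalanced) for the target volume.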
@@ -66,7 +66,7 @@ The task is considered completed when the cube is \emph{fully} inside the target
\subsubsection{Grasp Task}
\label{grasp-task}
The second manipulation task consists in grasping, lifting, and placing a virtual object in a target placed on a different (higher) plane (\figref{method/task-grasp}).
The cube to manipulate and target volume are the same as in the previous task.
However, this time, the target volume can spawn in eight different locations on a plane \qty{10}{\cm} \emph{above} the table, still located on a \qty{20}{\cm} radius circle at \qty{45}{\degree} from each other.
Users are asked to grasp, lift, and move the cube towards the target volume using their fingertips in any way they prefer.
@@ -111,7 +111,7 @@ The compiled application ran directly on the HoloLens~2 at \qty{60}{FPS}.
The default \ThreeD hand model from MRTK was used for all visual hand renderings.
By changing the material properties of this hand model, we were able to achieve the six renderings shown in \figref{hands}.
A calibration was performed for every participant, to best adapt the size of the visual hand rendering to their real hand.
A set of empirical tests enabled us to choose the best rendering characteristics in terms of transparency and brightness for the virtual objects and hand renderings, which were applied throughout the experiment.
The hand tracking information provided by MRTK was used to construct a virtual articulated physics-enabled hand (\secref[related_work]{ar_virtual_hands}) using PhysX.
It featured 25 DoFs, including the fingers' proximal, middle, and distal phalanges.
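As a toy illustration of the physics constraint mentioned above (the visual hand cannot penetrate virtual objects, unlike the real hand), the sketch below projects a penetrating fingertip position back onto the surface of an axis-aligned cube. This is a simplified stand-in for the PhysX contact resolution; all names and the resolution strategy are illustrative assumptions:

```python
def constrain_to_cube(p, center, half):
    """Project a tracked fingertip position p back onto the surface of an
    axis-aligned cube (given by its center and half-extent) if p penetrates
    the cube. Toy stand-in for a physics engine's contact resolution."""
    # Coordinates relative to the cube center.
    d = [p[i] - center[i] for i in range(3)]
    if any(abs(c) >= half for c in d):
        return p  # outside (or on the surface): no constraint needed
    # Inside: push the point out along the axis of least penetration.
    axis = max(range(3), key=lambda i: abs(d[i]))
    out = list(p)
    out[axis] = center[axis] + (half if d[axis] >= 0 else -half)
    return tuple(out)
```

A full simulation would instead drive articulated rigid bodies toward the tracked joint poses, but the visible effect is the same: the virtual fingertip stays on the object surface while the real fingertip passes through.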


@@ -1,7 +1,7 @@
\section{Discussion}
\label{discussion}
We evaluated six visual hand renderings, as described in \secref{hands}, displayed on top of the real hand, in two virtual object manipulation tasks in \AR.
During the \level{Push} task, the \level{Skeleton} hand rendering was the fastest (\figref{results/Push-CompletionTime-Hand-Overall-Means}), as participants employed fewer and longer contacts to adjust the cube inside the target volume (\figref{results/Push-ContactsCount-Hand-Overall-Means} and \figref{results/Push-MeanContactTime-Hand-Overall-Means}).
Participants consistently used few and continuous contacts for all visual hand renderings (Fig. 3b); fewer than ten trials, carried out by two participants, were instead quickly completed with multiple discrete touches.
@@ -21,12 +21,12 @@ However, due to the latency of the hand tracking and the visual hand reacting to
The \level{Tips} rendering, which showed the contacts made on the virtual cube, was controversial as it received the minimum and the maximum score on every question.
Many participants reported difficulties in seeing the orientation of the visual fingers,
while others found that it gave them a better sense of the contact points and improved their concentration on the task.
This result is consistent with \textcite{saito2021contact}, who found that displaying the points of contact was beneficial for grasping a virtual object over an opaque visual hand overlay.
To summarize, when employing a visual hand rendering overlaying the real hand, participants were more effective and confident in manipulating virtual objects with bare hands in \AR.
These results contrast with similar manipulation studies, but in non-immersive, on-screen \AR, where the presence of a visual hand rendering was found by participants to improve the usability of the interaction, but not their performance \cite{blaga2017usability,maisto2017evaluation,meli2018combining}.
Our results show the most effective visual hand rendering to be the \level{Skeleton} one.
Participants appreciated that it provided a detailed and precise view of the tracking of the real hand, without hiding or masking it.
Although the \level{Contour} and \level{Mesh} hand renderings were also highly rated, some participants felt that they were too visible and masked the real hand.
This result is in line with the findings on virtual object manipulation in \VR by \textcite{prachyabrued2014visual}, who found that the most effective visual hand rendering was a double representation of both the real tracked hand and a visual hand physically constrained by the \VE.
This type of \level{Skeleton} rendering was also the one that provided the best sense of agency (control) in \VR \cite{argelaguet2016role,schwind2018touch}.


@@ -1,9 +1,9 @@
\section{Conclusion}
\label{conclusion}
In this chapter, we addressed the challenge of touching, grasping and manipulating virtual objects directly with the hand in immersive \OST-\AR by providing and evaluating visual renderings as augmentation of the real hand.
Superimposed on the user's hand, these visual renderings provide feedback from the virtual hand, which tracks the real hand, and simulates the interaction with virtual objects as a proxy.
We first selected and compared the six most popular visual hand renderings used to interact with virtual objects in \AR.
Then, in a user study with 24 participants and an immersive \OST-\AR headset, we evaluated the effect of these six visual hand renderings on the user performance and experience in two representative manipulation tasks.
Our results showed that a visual hand augmentation improved the performance, perceived effectiveness and confidence of participants compared to no augmentation.


@@ -6,22 +6,22 @@ Moreover, it is important to leave the user capable of interacting with both vir
For this reason, it is often considered beneficial to move the point of application of the haptic feedback elsewhere on the hand (\secref[related_work]{vhar_haptics}).
However, the impact of the positioning of the haptic feedback on the hand during direct hand manipulation in \AR has not been systematically studied.
In parallel, a few studies have explored and compared the effects of visual and haptic feedback in tasks involving the manipulation of virtual objects with the hand.
\textcite{sarac2022perceived} and \textcite{palmer2022haptic} studied the effects of providing haptic feedback about contacts at the fingertips using haptic devices worn at the wrist, testing different mappings.
Their results showed that moving the haptic feedback away from the point(s) of contact is possible and effective, and that its impact is more significant when the visual feedback is limited.
A final question is whether one of these two types of hand feedback (haptic or visual) should be preferred \cite{maisto2017evaluation,meli2018combining}, or whether a combined visuo-haptic feedback is beneficial for users.
However, these studies were conducted in non-immersive setups, with a screen displaying the \VE view.
In fact, either type of hand feedback may provide sufficient sensory feedback for efficient direct hand manipulation of virtual objects in \AR, or, conversely, the two may prove complementary.
In this chapter, we aim to investigate the role of \textbf{visuo-haptic feedback of the hand when manipulating virtual objects} in immersive \OST-\AR using wearable vibrotactile haptics.
We selected \textbf{four different delocalized positionings on the hand} that have been previously proposed in the literature for direct hand interaction in \AR using wearable haptic devices (\secref[related_work]{vhar_haptics}): on the nails, the proximal phalanges, the wrist, and the nails of the opposite hand.
We focused on vibrotactile feedback, as it is used in most wearable haptic devices and has the lowest encumbrance.
In a \textbf{user study}, using the \OST-\AR headset Microsoft HoloLens~2 and two \ERM vibrotactile motors, we evaluated the effect of the four positionings with \textbf{two contact vibration techniques} on the user performance and experience with the same two manipulation tasks as in \chapref{visual_hand}.
We additionally compared these vibrotactile renderings with the \textbf{skeleton-like visual hand augmentation} established in the \chapref{visual_hand} as a complementary visuo-haptic feedback of the hand interaction with the virtual objects.
\noindentskip The contributions of this chapter are:
\begin{itemize}
\item The evaluation in a user study with 20 participants of the effect of providing a vibrotactile feedback of the fingertip contacts with virtual objects, during direct manipulation with bare hands in \AR, at four different delocalized positionings of the haptic feedback on the hand and with two contact vibration techniques.
\item The comparison of these vibrotactile positionings and rendering techniques with the two most representative visual hand augmentations established in the \chapref{visual_hand}.
\end{itemize}


@@ -1,13 +1,13 @@
\section{Vibrotactile Renderings of the Hand-Object Contacts}
\label{vibration}
The vibrotactile hand rendering provided information about the contacts between the virtual object and the thumb and index fingers of the user, as they are the two fingers most used for grasping (\secref[related_work]{grasp_types}).
We evaluated both the delocalized positioning and the contact vibration technique of the vibrotactile hand rendering.
\subsection{Vibrotactile Positionings}
\label{positioning}
We considered five different positionings for providing the vibrotactile rendering as feedback of the contacts between the virtual hand and the virtual objects, as shown in \figref{method/locations}.
They are representative of the most common locations used by wearable haptic devices in \AR to place their end-effector, as found in the literature (\secref[related_work]{vhar_haptics}), as well as other positionings that have been employed for manipulation tasks.
For each positioning, we used two vibrating actuators, for the thumb and index finger, respectively.
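For the contact vibration techniques, a continuous distance-based mapping can be sketched as a linear relation between fingertip interpenetration and normalized actuator amplitude. This is an illustrative mapping; the \qty{1}{\cm} saturation depth is an assumption, not the study's parameter:

```python
def distance_amplitude(penetration_m, max_penetration_m=0.01):
    """Continuous distance-based contact technique: vibration amplitude
    grows linearly with fingertip interpenetration, saturating at a
    maximum depth (illustrative mapping and clamp value)."""
    if penetration_m <= 0.0:
        return 0.0  # no contact: actuator off
    return min(penetration_m / max_penetration_m, 1.0)
```

A burst-based contact technique would instead trigger a short, fixed vibration at the instant penetration first becomes positive.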
@@ -44,7 +44,7 @@ Similarly, we designed the distance vibration technique (Dist) so that interpene
\section{User Study}
\label{method}
This user study aims to evaluate whether a visuo-haptic rendering of the hand affects the user performance and experience of manipulating virtual objects with bare hands in \OST-\AR.
The chosen visuo-haptic hand renderings are the combination of the two most representative visual hand renderings established in the \chapref{visual_hand}, \ie \level{Skeleton} and \level{No Hand}, described in \secref[visual_hand]{hands}, with the two contact vibration techniques provided at the four delocalized positions on the hand described in \secref{vibration}.
\subsection{Experimental Design}


@@ -1,7 +1,7 @@
\section{Discussion}
\label{discussion}
We evaluated twenty visuo-haptic renderings of the hand, in the same two virtual object manipulation tasks in \AR as in the \chapref{visual_hand}, as the combination of two vibrotactile contact techniques provided at five delocalized positions on the hand with the two most representative visual hand renderings established in the \chapref{visual_hand}.
In the \level{Push} task, vibrotactile haptic hand rendering proved beneficial with the \level{Proximal} positioning, which registered a low completion time, but detrimental with the \level{Fingertips} positioning, which performed worse (\figref{results/Push-CompletionTime-Location-Overall-Means}) than the \level{Proximal} and \level{Opposite} (on the contralateral hand) positionings.
The cause might be the intensity of vibrations, which many participants found rather strong and possibly distracting when provided at the fingertips.
@@ -33,18 +33,18 @@ Additionally, the \level{Skeleton} rendering was appreciated and perceived as mo
Participants reported that this visual hand rendering provided good feedback on the status of the hand tracking while being constrained to the cube, and helped with rotation adjustment in both tasks.
However, many also felt that it was a bit redundant with the vibrotactile hand rendering.
Indeed, participants found the vibrotactile hand rendering to provide more accurate and reliable information regarding the contact with the cube than simply seeing the cube and the visual hand react to the manipulation.
This result suggests that providing a visual hand rendering may not be useful during the grasping phase, but may be beneficial prior to contact with the virtual object and during position and rotation adjustment, providing valuable information about the hand pose.
It is also worth noting that the improved hand tracking and grasp helper enhanced the manipulation of the cube with respect to the \chapref{visual_hand}, as shown by the shorter completion time during the \level{Grasp} task.
This improvement could also be the reason for the smaller differences between the \level{Skeleton} and the \level{None} visual hand renderings in this second experiment.
In summary, the positioning of the vibrotactile haptic rendering of the hand affected the performance and experience of users manipulating virtual objects with their bare hands in \AR.
The closer the vibrotactile hand rendering was to the point of contact, the better it was perceived in terms of effectiveness, usefulness, and realism.
These subjective appreciations of wearable haptic hand rendering for manipulating virtual objects in \AR were also observed by \textcite{maisto2017evaluation} and \textcite{meli2018combining}.
However, the best performance was obtained with the farthest positioning on the contralateral hand (\level{Opposite}), which is somewhat surprising.
This apparent paradox could be explained in two ways.
On the one hand, participants behaved differently when the haptic rendering was given on the fingers (\level{Fingertips} and \level{Proximal}), close to the contact point, with shorter pushes and larger grip apertures.
This behavior likely gave them a better experience of the tasks and more confidence in their actions, as well as leading to a lower interpenetration/force applied to the cube \cite{pacchierotti2015cutaneous}.
On the other hand, the unfamiliarity of the contralateral hand positioning (\level{Opposite}) caused participants to spend more time understanding the haptic stimuli, which might have made them more focused on performing the task.
In terms of contact vibration technique, the continuous vibration based on finger interpenetration (\level{Distance}) did not make a difference to performance, although it provided more information.
Participants felt that vibration bursts were sufficient to confirm contact with the virtual object.
Finally, it was interesting to note that the visual hand rendering was appreciated but felt less necessary when provided together with vibrotactile hand rendering, as the latter was deemed sufficient for acknowledging the contact.


@@ -1,8 +1,8 @@
\section{Conclusion}
\label{conclusion}
In this chapter, we investigated the visuo-haptic feedback of the hand when manipulating virtual objects in immersive \OST-\AR using wearable vibrotactile haptics.
To do so, we provided vibrotactile feedback of the fingertip contacts with virtual objects while moving the haptic actuators away to positions that do not cover the inside of the hand: on the nails, the proximal phalanges, the wrist, and the nails of the opposite hand.
We selected these four different delocalized positions on the hand from the literature for direct hand interaction in \AR using wearable haptic devices.
In a user study, we compared twenty visuo-haptic renderings of the hand, as the combination of two vibrotactile contact techniques, provided at five different delocalized positions on the user's hand, with the two most representative visual hand augmentations established in the \chapref{visual_hand}, \ie the skeleton hand rendering and no hand rendering.
@@ -13,7 +13,7 @@ This study provide evidence that moving away the feedback from the inside of the
If integration with the hand tracking system allows it, and if the task requires it, a haptic ring worn on the middle or proximal phalanx seems preferable.
However, a wrist-mounted haptic device will be able to provide richer feedback by embedding more diverse haptic actuators with larger bandwidths and maximum amplitudes, while being less obtrusive than a ring.
Finally, we think that the visual hand augmentation complements the haptic contact rendering well by providing continuous feedback on the hand tracking, and that it can be disabled during the grasping phase to avoid redundancy with the haptic feedback of the contact with the virtual object.
\noindentskip This work was published in Transactions on Haptics: