Complete visuo-haptic-hand chapter

This commit is contained in:
2024-09-26 20:49:03 +02:00
parent ac5b773065
commit ccbd2f3135
10 changed files with 126 additions and 101 deletions

View File

@@ -1,20 +1,30 @@
%Providing haptic feedback during free-hand manipulation in \AR is not trivial, as wearing haptic devices on the hand might affect the tracking capabilities of the system.
%Moreover, it is important to leave the user capable of interacting with both virtual and real objects, avoiding the use of haptic interfaces that cover the fingertips or palm.
%For this reason, it is often considered beneficial to move the point of application of the haptic rendering elsewhere on the hand.% (\secref{haptics}).
Providing haptic feedback during free-hand manipulation in \AR is not trivial, as wearing haptic devices on the hand might affect the tracking capabilities of the system \cite{pacchierotti2016hring}.
Moreover, it is important to leave the user capable of interacting with both virtual and real objects, avoiding the use of haptic interfaces that cover the fingertips or palm.
For this reason, it is often considered beneficial to move the point of application of the haptic rendering elsewhere on the hand (\secref[related_work]{vhar_haptics}).
However, the impact of the positioning of the haptic rendering on the hand during direct hand manipulation in \AR has not been systematically studied.
% Conjointly, a few studies have explored and compared the effects of visual and haptic feedback in tasks involving the manipulation of virtual objects with the hand.
% \textcite{sarac2022perceived} and \textcite{palmer2022haptic} studied the effects of providing haptic feedback about contacts at the fingertips using haptic devices worn at the wrist, testing different mappings.
% Results proved that moving the haptic feedback away from the point(s) of contact is possible and effective, and that its impact is more significant when the visual feedback is limited.
%A final question is whether one or the other of these (haptic or visual) hand renderings should be preferred \cite{maisto2017evaluation, meli2018combining}, or whether a combined visuo-haptic rendering is beneficial for users.
%In fact, both hand renderings can provide sufficient sensory cues for efficient manipulation of virtual objects in \AR, or conversely, they can be shown to be complementary.
Conjointly, a few studies have explored and compared the effects of visual and haptic feedback in tasks involving the manipulation of \VOs with the hand.
\textcite{sarac2022perceived} and \textcite{palmer2022haptic} studied the effects of providing haptic feedback about contacts at the fingertips using haptic devices worn at the wrist, testing different mappings.
Their results proved that moving the haptic feedback away from the point(s) of contact is possible and effective, and that its impact is more significant when the visual feedback is limited.
A final question is whether one or the other of these (haptic or visual) hand renderings should be preferred \cite{maisto2017evaluation,meli2018combining}, or whether a combined visuo-haptic rendering is beneficial for users.
However, these studies were conducted in non-immersive setups, with a screen displaying the \VE view.
In fact, both hand renderings can provide sufficient sensory feedback for efficient direct hand manipulation of \VOs in \AR, or conversely, they can be shown to be complementary.
The contributions of this chapter are:
In this chapter, we aim to investigate the role of \textbf{visuo-haptic rendering of the hand manipulation} with \VOs in immersive \OST-\AR using wearable vibrotactile haptics.
We selected \textbf{four different delocalized positionings} that have been previously proposed in the literature for direct hand interaction in \AR using wearable haptic devices (\secref[related_work]{vhar_haptics}): on the nails, the proximal phalanges, the wrist, and the nails of the opposite hand.
We focused on vibrotactile feedback, as it is used in most wearable haptic devices and has the lowest encumbrance.
In a \textbf{user study}, using the \OST-\AR headset Microsoft HoloLens~2 and two \ERM vibrotactile motors, we evaluated the effect of the four positionings with \textbf{two contact vibration techniques} on user performance and experience in the same two manipulation tasks as in \chapref{visual_hand}.
We additionally compared these vibrotactile renderings with the \textbf{skeleton-like visual hand rendering} established in \chapref{visual_hand}, as complementary visuo-haptic feedback of the hand interaction with \VOs.
\noindentskip The contributions of this chapter are:
\begin{itemize}
\item The evaluation, in a user study with 20 participants, of the effect of providing vibrotactile feedback of the fingertip contacts with \VOs during direct manipulation with the bare hand in \AR, at four different delocalized positionings of the haptic rendering on the hand and with two contact vibration techniques.
\item The comparison of these vibrotactile positionings and renderings techniques with the two most representative visual renderings of the hand established in the \chapref{visual_hand}.
\item The comparison of these vibrotactile positionings and rendering techniques with the two most representative visual hand renderings established in \chapref{visual_hand}.
\end{itemize}
\fig[0.6]{method/locations}{Setup of the vibrotactile positionings on the hand. }{
To ensure minimal encumbrance, we used the same two motors throughout the experiment, moving them to the considered positioning before each new experimental block (in this case, on the co-located proximal phalanges, \emph{Prox}).
\noindentskip In the next sections, we first describe the four delocalized positionings and the two contact vibration techniques we considered, based on previous work. We then present the experimental setup and design of the user study. Finally, we report the results and discuss them in the context of free-hand interaction with virtual content in \AR.
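As a rough illustration of these two techniques, a minimal sketch of one plausible actuator-driving logic is given below: a short burst at contact onset (Impact) and a continuous vibration modulated by the interpenetration depth (Distance). All names and parameter values are hypothetical and do not correspond to the actual implementation described in \secref{technique}.
\begin{verbatim}
# Illustrative sketch only; hypothetical parameter values.
BURST_DURATION_S = 0.08     # assumed length of the Impact burst
MAX_PENETRATION_M = 0.02    # assumed depth at which the Distance amplitude saturates

def impact_amplitude(time_since_contact_s: float) -> float:
    """Impact: full-amplitude burst when the fingertip first touches the object."""
    return 1.0 if 0.0 <= time_since_contact_s < BURST_DURATION_S else 0.0

def distance_amplitude(penetration_m: float) -> float:
    """Distance: continuous vibration growing with the fingertip interpenetration."""
    if penetration_m <= 0.0:
        return 0.0
    return min(penetration_m / MAX_PENETRATION_M, 1.0)
\end{verbatim}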
\fig[0.6]{method/locations}{Setup of the vibrotactile positionings on the hand.}[
To ensure minimal encumbrance, we used the same two motors throughout the experiment, moving them to the considered positioning before each new experimental block (in this case, on the co-located \level{Proximal} phalanges).
Thin self-gripping straps were placed on the five considered positionings during the entirety of the experiment.
}
]

View File

@@ -7,7 +7,7 @@ We evaluated both the delocalized positioning and the contact vibration techniqu
\subsection{Vibrotactile Positionings}
\label{positioning}
We considered five different positionings for providing the vibrotactile rendering as feedback of the contacts between the virtual hand and the \VO, as shown in \figref{method/locations}.
We considered five different positionings for providing the vibrotactile rendering as feedback of the contacts between the virtual hand and the \VOs, as shown in \figref{method/locations}.
They are representative of the most common locations used by wearable haptic devices in \AR to place their end-effector, as found in the literature (\secref[related_work]{vhar_haptics}), as well as other positionings that have been employed for manipulation tasks.
For each positioning, we used two vibrating actuators, for the thumb and index finger, respectively.
@@ -45,7 +45,7 @@ Similarly, we designed the distance vibration technique (Dist) so that interpene
\label{method}
This user study aims to evaluate whether a visuo-haptic rendering of the hand affects the user performance and experience of manipulation of \VOs with bare hands in \OST-\AR.
The chosen visuo-haptic hand renderings are the combination of the two most representative visual hand renderings established in the \chapref{visual_hand}, \ie \level{Skeleton} and \level{None}, described in \secref[visual_hand]{hands}, with the two contact vibration techniques provided at the four delocalized positions on the hand described in \secref{vibration}.
The chosen visuo-haptic hand renderings are the combination of the two most representative visual hand renderings established in \chapref{visual_hand}, \ie \level{Skeleton} and \level{No Hand}, described in \secref[visual_hand]{hands}, with the two contact vibration techniques provided at the four delocalized positions on the hand described in \secref{vibration}.
\subsection{Experimental Design}
\label{design}
@@ -55,7 +55,7 @@ We considered the same two \level{Push} and \level{Grasp} tasks as described in
\begin{itemize}
\item \factor{Positioning}: the five positionings for providing vibrotactile hand rendering of the virtual contacts, as described in \secref{positioning}.
\item \factor{Vibration Technique}: the two contact vibration techniques, as described in \secref{technique}.
\item \factor{Hand}: two visual hand renderings from the first experiment, \level{Skeleton} (Skel) and \level{None}, as described in \secref[visual_hand]{hands}; we considered \level{Skeleton} as it performed the best in terms of performance and perceived effectiveness and \level{None} as reference.
\item \factor{Hand}: two visual hand renderings from \chapref{visual_hand}, \level{Skeleton} (Skel) and \level{No Hand}, as described in \secref[visual_hand]{hands}; we considered \level{Skeleton} as it performed best in terms of performance and perceived effectiveness, and \level{No Hand} as a reference.
\item \factor{Target}: we considered the target volumes (\figref{tasks}), from the participant's point of view, located at:
\begin{itemize}
\item left-back (\level{LB}) and left-front (\level{LF}) during the \level{Push} task; and
@@ -76,18 +76,18 @@ We considered the same two \level{Push} and \level{Grasp} tasks as described in
To account for learning and fatigue effects, the order of the \factor{Positioning} conditions was counter-balanced using a balanced \numproduct{10 x 10} Latin square.
In these ten blocks, all possible \factor{Technique} \x \factor{Hand} \x \factor{Target} combinations of conditions were repeated three times in a random order.
As we did not find any relevant effect of the order in which the tasks were performed in the first experiment, we fixed the order of the tasks: first, the \level{Push} task and then the \level{Grasp} task.
As we did not find any relevant effect of the order in which the tasks were performed in \chapref{visual_hand}, we fixed the order of the tasks: first, the \level{Push} task and then the \level{Grasp} task.
This design led to a total of 5 vibrotactile positionings \x 2 contact vibration techniques \x 2 visual hand renderings \x (2 targets in the Push task + 4 targets in the Grasp task) \x 3 repetitions $=$ 420 trials per participant.
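For reference, a balanced Latin square of even order can be generated with the standard Williams construction, as in the sketch below; it is only illustrative, and the mapping of the five positionings and their repetitions onto the ten rows is not detailed here.
\begin{verbatim}
def balanced_latin_square(n: int) -> list[list[int]]:
    """Williams design: for even n, each condition appears once per row and per
    column, and directly follows every other condition equally often."""
    return [
        [(row + (j // 2 + 1 if j % 2 else -(j // 2))) % n for j in range(n)]
        for row in range(n)
    ]

# Ten block orders, one per row (hypothetical assignment of rows to participants).
for order in balanced_latin_square(10):
    print(order)
\end{verbatim}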
\subsection{Apparatus and Protocol}
\label{apparatus}
Apparatus and protocol were very similar to the first experiment, as described in \secref[visual_hand]{apparatus} and \secref[visual_hand]{protocol}, respectively.
The apparatus and protocol were very similar to those of \chapref{visual_hand}, as described in \secref[visual_hand]{apparatus} and \secref[visual_hand]{protocol}, respectively.
We report here only the differences.
We employed the same vibrotactile device as used by \textcite{devigne2020power}.
It is composed of two encapsulated Eccentric Rotating Mass (ERM) vibration motors (Pico-Vibe 304-116, Precision Microdrive, UK).
It is composed of two encapsulated \ERM (\secref[related_work]{vibrotactile_actuators}) vibration motors (Pico-Vibe 304-116, Precision Microdrive, UK).
They are small and very light (\qty{5}{\mm} \x \qty{20}{\mm}, \qty{1.2}{\g}) actuators capable of vibration frequencies from \qtyrange{120}{285}{\Hz} and
amplitudes from \qtyrange{0.2}{1.15}{\g}.
They have a latency of \qty{20}{\ms} that we partially compensated for at the software level with slightly larger colliders, to trigger the vibrations very close to the moment the finger touched the cube.
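As a minimal sketch of this compensation, the extra collider thickness corresponds to the distance travelled by the finger during the actuator latency; the approach speed below is an assumption, not a measured value.
\begin{verbatim}
ACTUATOR_LATENCY_S = 0.020        # ERM latency reported above
ASSUMED_FINGER_SPEED_MPS = 0.25   # hypothetical fingertip approach speed

def collider_margin_m(speed_mps: float = ASSUMED_FINGER_SPEED_MPS,
                      latency_s: float = ACTUATOR_LATENCY_S) -> float:
    """Extra collider thickness so the vibration command is sent ~latency earlier."""
    return speed_mps * latency_s

print(f"{collider_margin_m() * 1000:.1f} mm")   # 5.0 mm with these assumed values
\end{verbatim}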
@@ -127,7 +127,7 @@ Other expertise correlations were low ($r<0.35$).
\subsection{Collected Data}
\label{metrics}
During the experiment, we collected the same data as in the first experiment, see \secref[visual_hand]{metrics}.
During the experiment, we collected the same data as in \chapref{visual_hand} (see \secref[visual_hand]{metrics}).
At the end of the experiment, participants were asked if they recognized the different contact vibration techniques.
They then rated the ten combinations of \factor{Positioning} \x \factor{Vibration Technique} using a 7-point Likert scale (1=Not at all, 7=Extremely):
\begin{itemize}

View File

@@ -1,20 +1,6 @@
\section{Results}
\label{results}
\begin{subfigs}{grasp_results}{Results of the grasp task performance metrics for each vibrotactile positioning.}[
Geometric means with bootstrap \percent{95} confidence and Tukey's \HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
][
\item Time to complete a trial.
\item Number of contacts with the cube.
\item Time spent on each contact.
\item Distance between thumb and the other fingertips when grasping.
]
\subfig[0.24]{results/Grasp-CompletionTime-Location-Overall-Means}
\subfig[0.24]{results/Grasp-Contacts-Location-Overall-Means}
\subfig[0.24]{results/Grasp-TimePerContact-Location-Overall-Means}
\subfig[0.24]{results/Grasp-GripAperture-Location-Overall-Means}
\end{subfigs}
Results were analyzed similarly as for the first experiment (\secref{results}).
Results were analyzed in the same way as in the user study on the visual hand renderings (\chapref{visual_hand}).
The \LMM were fitted with the order of the five vibrotactile positionings (\factor{Order}), the vibrotactile positionings (\factor{Positioning}), the visual hand rendering (\factor{Hand}), the contact vibration technique (\factor{Vibration Technique}), and the target volume position (\factor{Target}), and their interactions as fixed effects, and Participant as random intercept.
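As an illustrative sketch only (file and column names are hypothetical, and the log-transformed response is an assumption suggested by the geometric means reported in the figures), such a model could be fitted as follows:
\begin{verbatim}
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

trials = pd.read_csv("trials.csv")   # one row per trial (hypothetical file)
trials["log_time"] = np.log(trials["CompletionTime"])

# Fixed effects and their interactions as listed above; random intercept per participant.
model = smf.mixedlm(
    "log_time ~ C(Order) + C(Positioning) * C(Hand) * C(VibrationTechnique) * C(Target)",
    data=trials,
    groups=trials["Participant"],
)
print(model.fit().summary())
\end{verbatim}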

View File

@@ -1,8 +1,7 @@
\subsection{Push Task}
\label{push}
\subsubsection{Completion Time}
\label{push_tct}
\paragraph{Completion Time}
On the time to complete a trial, there were two statistically significant effects:
\factor{Positioning} (\anova{4}{1990}{3.8}, \p{0.004}, see \figref{results/Push-CompletionTime-Location-Overall-Means}) %
@@ -12,16 +11,14 @@ There was no evidence of an advantage of \level{Proximal} or \level{Opposite} on
Yet, there was a tendency toward faster trials with \level{Proximal} and \level{Opposite}.
The \level{LB} target volume was also faster than the \level{LF} (\p{0.05}).
\subsubsection{Contacts}
\label{push_contacts_count}
\paragraph{Contacts}
On the number of contacts, there was one statistically significant effect of
\factor{Positioning} (\anova{4}{1990}{2.4}, \p{0.05}, see \figref{results/Push-Contacts-Location-Overall-Means}).
More contacts were made with \level{Fingertips} than with \level{Opposite} (\percent{+12}, \p{0.03}).
This could indicate more difficulty in adjusting the virtual cube inside the target volume.
\subsubsection{Time per Contact}
\label{push_time_per_contact}
\paragraph{Time per Contact}
On the mean time spent on each contact, there were two statistically significant effects of
\factor{Positioning} (\anova{4}{1990}{11.5}, \pinf{0.001}, see \figref{results/Push-TimePerContact-Location-Overall-Means}) %
@@ -31,7 +28,7 @@ It was shorter with \level{Fingertips} than with \level{Wrist} (\percent{-15}, \
and shorter with \level{Proximal} than with \level{Wrist} (\percent{-16}, \pinf{0.001}), \level{Opposite} (\percent{-12}, \p{0.005}), or \level{Nowhere} (\percent{-16}, \pinf{0.001}).
This showed different strategies to adjust the cube inside the target volume, with faster repeated pushes with the \level{Fingertips} and \level{Proximal} positionings.
It was also shorter with \level{No Hand} than with \level{Skeleton} (\percent{-9}, \pinf{0.001}).
This indicates, as for the first experiment, more confidence with a visual hand rendering.
This indicates, as in \chapref{visual_hand}, more confidence with a visual hand rendering.
\begin{subfigs}{push_results}{Results of the push task performance metrics.}[
Geometric means with bootstrap \percent{95} \CI for each vibrotactile positioning (a, b and c) or visual hand rendering (d)
@@ -42,8 +39,9 @@ This indicates, as for the first experiment, more confidence with a visual hand
\item Mean time spent on each contact, per vibrotactile positioning.
\item Mean time spent on each contact, per visual hand rendering.
]
\subfig[0.24]{results/Push-CompletionTime-Location-Overall-Means}
\subfig[0.24]{results/Push-Contacts-Location-Overall-Means}
\subfig[0.24]{results/Push-TimePerContact-Location-Overall-Means}
\subfig[0.24]{results/Push-TimePerContact-Hand-Overall-Means}
\subfig[0.4]{results/Push-CompletionTime-Location-Overall-Means}
\subfig[0.4]{results/Push-Contacts-Location-Overall-Means}
\par
\subfig[0.4]{results/Push-TimePerContact-Location-Overall-Means}
\subfig[0.4]{results/Push-TimePerContact-Hand-Overall-Means}
\end{subfigs}

View File

@@ -1,8 +1,7 @@
\subsection{Grasp Task}
\label{grasp}
\subsubsection{Completion Time}
\label{grasp_tct}
\paragraph{Completion Time}
On the time to complete a trial, there were two statistically significant effects:
\factor{Positioning} (\anova{4}{3990}{13.6}, \pinf{0.001}, see \figref{results/Grasp-CompletionTime-Location-Overall-Means})
@@ -12,8 +11,7 @@ and \factor{Target} (\anova{3}{3990}{18.8}, \pinf{0.001}).
\level{RF} was faster than \level{RB} (\pinf{0.001}), \level{LB} (\pinf{0.001}), and \level{LF} (\pinf{0.001});
and \level{LF} was faster than \level{RB} (\p{0.03}).
\subsubsection{Contacts}
\label{grasp_contacts_count}
\paragraph{Contacts}
On the number of contacts, there were two statistically significant effects:
\factor{Positioning} (\anova{4}{3990}{15.1}, \pinf{0.001}, see \figref{results/Grasp-Contacts-Location-Overall-Means}) %
@@ -22,8 +20,7 @@ Fewer contacts were made with \level{Opposite} than with \level{Fingertips} (\pe
but more with \level{Fingertips} than with \level{Wrist} (\percent{+13}, \p{0.002}) or \level{Nowhere} (\percent{+17}, \pinf{0.001}).
It was also easier on \level{LF} than on \level{RB} (\pinf{0.001}), \level{LB} (\p{0.006}), or \level{RF} (\p{0.03}).
\subsubsection{Time per Contact}
\label{grasp_time_per_contact}
\paragraph{Time per Contact}
On the mean time spent on each contact, there were two statistically significant effects:
\factor{Positioning} (\anova{4}{3990}{2.9}, \p{0.02}, see \figref{results/Grasp-TimePerContact-Location-Overall-Means})
@@ -32,8 +29,7 @@ It was shorter with \level{Fingertips} than with \level{Opposite} (\percent{+7},
It was also shorter on \level{RF} than on \level{RB}, \level{LB} or \level{LF} (\pinf{0.001});
but longer on \level{LF} than on \level{RB} or \level{LB} (\pinf{0.001}).
\subsubsection{Grip Aperture}
\label{grasp_grip_aperture}
\paragraph{Grip Aperture}
On the average distance between the thumb's fingertip and the other fingertips during grasping, there were two
statistically significant effects:
@@ -43,3 +39,18 @@ It was longer with \level{Fingertips} than with \level{Proximal} (\pinf{0.001}),
and longer with \level{Proximal} than with \level{Wrist} (\pinf{0.001}) or \level{Nowhere} (\pinf{0.001}).
But, it was shorter with \level{RB} than with \level{LB} or \level{LF} (\pinf{0.001});
and shorter with \level{RF} than with \level{LB} or \level{LF} (\pinf{0.001}).
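For clarity, the sketch below shows how such a grip aperture metric could be computed from the tracked fingertip positions; it is an illustration only, with assumed array shapes and units.
\begin{verbatim}
import numpy as np

def grip_aperture_m(thumb_tip: np.ndarray, other_tips: np.ndarray) -> float:
    """Mean Euclidean distance between the thumb tip (shape (3,), metres)
    and the other tracked fingertips (shape (k, 3))."""
    return float(np.mean(np.linalg.norm(other_tips - thumb_tip, axis=1)))
\end{verbatim}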
\begin{subfigs}{grasp_results}{Results of the grasp task performance metrics for each vibrotactile positioning.}[
Geometric means with bootstrap \percent{95} \CI and Tukey's \HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
][
\item Time to complete a trial.
\item Number of contacts with the cube.
\item Time spent on each contact.
\item Distance between thumb and the other fingertips when grasping.
]
\subfig[0.4]{results/Grasp-CompletionTime-Location-Overall-Means}
\subfig[0.4]{results/Grasp-Contacts-Location-Overall-Means}
\par
\subfig[0.4]{results/Grasp-TimePerContact-Location-Overall-Means}
\subfig[0.4]{results/Grasp-GripAperture-Location-Overall-Means}
\end{subfigs}
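The geometric means and bootstrap confidence intervals shown in these figures can be obtained as in the following sketch; a percentile bootstrap is assumed, and the input values are purely illustrative, not study data.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def geometric_mean(x: np.ndarray) -> float:
    return float(np.exp(np.mean(np.log(x))))

def bootstrap_ci(x: np.ndarray, n_boot: int = 10_000, alpha: float = 0.05):
    """Percentile bootstrap confidence interval of the geometric mean."""
    boots = [geometric_mean(rng.choice(x, size=len(x), replace=True))
             for _ in range(n_boot)]
    return tuple(np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)]))

times = np.array([2.1, 1.8, 2.6, 3.0, 1.9, 2.4])   # illustrative values only
print(geometric_mean(times), bootstrap_ci(times))
\end{verbatim}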

View File

@@ -18,42 +18,42 @@ Statistically significant effects were further analyzed with post-hoc pairwise c
Wilcoxon signed-rank tests were used for main effects and the \ART contrasts procedure for interaction effects.
Only significant results are reported.
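As an illustration of the post-hoc procedure for main effects (the data structure is hypothetical and no correction for multiple comparisons is shown), paired Wilcoxon signed-rank tests over the \factor{Positioning} levels could be run as follows:
\begin{verbatim}
from itertools import combinations
from scipy.stats import wilcoxon

def pairwise_wilcoxon(ratings: dict[str, list[int]]) -> dict[tuple[str, str], float]:
    """ratings maps each Positioning level to its per-participant ratings,
    paired by participant order (hypothetical structure)."""
    return {(a, b): wilcoxon(ratings[a], ratings[b]).pvalue
            for a, b in combinations(ratings, 2)}
\end{verbatim}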
\subsubsection{Vibrotactile Rendering Rating}
\paragraph{Vibrotactile Rendering Rating}
\label{vibration_ratings}
There was a main effect of \factor{Positioning} (\anova{4}{171}{27.0}, \pinf{0.001}).
There was a main effect of \factor{Positioning} (\anova{4}{171}{27.0}, \pinf{0.001}, see \figref{results/Question-Vibration Rating-Positioning-Overall}).
Participants preferred \level{Fingertips} more than \level{Wrist} (\p{0.01}), \level{Opposite} (\pinf{0.001}), and \level{Nowhere} (\pinf{0.001});
\level{Proximal} more than \level{Wrist} (\p{0.007}), \level{Opposite} (\pinf{0.001}), and \level{Nowhere} (\pinf{0.001});
And \level{Wrist} more than \level{Opposite} (\p{0.01}) and \level{Nowhere} (\pinf{0.001}).
\subsubsection{Positioning \x Hand Rating}
\paragraph{Positioning \x Hand Rating}
\label{positioning_hand}
There were two main effects of \factor{Positioning} (\anova{4}{171}{20.6}, \pinf{0.001}) and of \factor{Hand} (\anova{1}{171}{12.2}, \pinf{0.001}).
There were two main effects of \factor{Positioning} (\anova{4}{171}{20.6}, \pinf{0.001}, see \figref{results/Question-Positioning-Overall}) and of \factor{Hand} (\anova{1}{171}{12.2}, \pinf{0.001}).
Participants preferred \level{Fingertips} more than \level{Wrist} (\p{0.03}), \level{Opposite} (\pinf{0.001}), and \level{Nowhere} (\pinf{0.001});
\level{Proximal} more than \level{Wrist} (\p{0.003}), \level{Opposite} (\pinf{0.001}), and \level{Nowhere} (\pinf{0.001});
\level{Wrist} more than \level{Opposite} (\p{0.03}) and \level{Nowhere} (\pinf{0.001});
And \level{Skeleton} more than No \factor{Hand} (\pinf{0.001}).
And \level{Skeleton} more than \level{No Hand} (\pinf{0.001}).
\subsubsection{Workload}
\paragraph{Workload}
\label{workload}
There was a main effect of \factor{Positioning} (\anova{4}{171}{3.9}, \p{0.004}).
There was a main effect of \factor{Positioning} (\anova{4}{171}{3.9}, \p{0.004}, see \figref{results/Question-Workload-Positioning-Overall}).
Participants found \level{Opposite} more fatiguing than \level{Fingertips} (\p{0.01}), \level{Proximal} (\p{0.003}), and \level{Wrist} (\p{0.02}).
\subsubsection{Usefulness}
\paragraph{Usefulness}
\label{usefulness}
There was a main effect of \factor{Positioning} (\anova{4}{171}{38.0}, \p{0.041}).
There was a main effect of \factor{Positioning} (\anova{4}{171}{38.0}, \p{0.041}, see \figref{results/Question-Usefulness-Positioning-Overall}).
Participants found \level{Fingertips} the most useful, more than \level{Proximal} (\p{0.02}), \level{Wrist} (\pinf{0.001}), \level{Opposite} (\pinf{0.001}), and \level{Nowhere} (\pinf{0.001});
\level{Proximal} more than \level{Wrist} (\p{0.008}), \level{Opposite} (\pinf{0.001}), and \level{Nowhere} (\pinf{0.001});
\level{Wrist} more than \level{Opposite} (\p{0.008}) and \level{Nowhere} (\pinf{0.001});
And \level{Opposite} more than \level{Nowhere} (\p{0.004}).
\subsubsection{Realism}
\paragraph{Realism}
\label{realism}
There was a main effect of \factor{Positioning} (\anova{4}{171}{28.8}, \pinf{0.001}).
There was a main effect of \factor{Positioning} (\anova{4}{171}{28.8}, \pinf{0.001}, see \figref{results/Question-Realism-Positioning-Overall}).
Participants found \level{Fingertips} the most realistic, more than \level{Proximal} (\p{0.05}), \level{Wrist} (\p{0.004}), \level{Opposite} (\pinf{0.001}), and \level{Nowhere} (\pinf{0.001});
\level{Proximal} more than \level{Wrist} (\p{0.03}), \level{Opposite} (\pinf{0.001}), and \level{Nowhere} (\pinf{0.001});
\level{Wrist} more than \level{Opposite} (\p{0.03}) and \level{Nowhere} (\pinf{0.001});
@@ -64,8 +64,9 @@ And \level{Opposite} more than \level{Nowhere} (\p{0.03}).
Higher is better for \textbf{(a)} vibrotactile rendering rating, \textbf{(c)} usefulness, and \textbf{(d)} realism.
Lower is better for \textbf{(b)} workload.
]
\subfig[0.24]{results/Question-Vibration Rating-Positioning-Overall}
\subfig[0.24]{results/Question-Usefulness-Positioning-Overall}
\subfig[0.24]{results/Question-Realism-Positioning-Overall}
\subfig[0.24]{results/Question-Workload-Positioning-Overall}
\subfig[0.4]{results/Question-Vibration Rating-Positioning-Overall}
\subfig[0.4]{results/Question-Workload-Positioning-Overall}
\par
\subfig[0.4]{results/Question-Usefulness-Positioning-Overall}
\subfig[0.4]{results/Question-Realism-Positioning-Overall}
\end{subfigs}

View File

@@ -1,16 +1,16 @@
\section{Discussion}
\label{discussion}
We evaluated sixteen visuo-haptic renderings of the hand, in the same two virtual object manipulation tasks in \AR as in the first experiment, as the combination of two vibrotactile contact techniques provided at four delocalized positions on the hand with the two most representative visual hand renderings established in the first experiment.
We evaluated sixteen visuo-haptic renderings of the hand, in the same two virtual object manipulation tasks in \AR as in \chapref{visual_hand}, as the combination of two vibrotactile contact techniques provided at four delocalized positions on the hand with the two most representative visual hand renderings established in \chapref{visual_hand}.
In the \level{Push} task, the vibrotactile haptic hand rendering proved beneficial with the \level{Proximal} positioning, which registered a low completion time, but detrimental with the \level{Fingertips} positioning, which performed worse (\figref{results/Push-CompletionTime-Location-Overall-Means}) than the \level{Proximal} and \level{Opposite} (on the contralateral hand) positionings.
The cause might be the intensity of vibrations, which many participants found rather strong and possibly distracting when provided at the fingertips.
This result was also observed by \textcite{bermejo2021exploring}, who provided vibrotactile cues when pressing a virtual keypad.
Another reason could be the visual obstruction caused by the vibrotactile motors worn on the fingertips, which could have disturbed the visualization of the virtual cube.
We observed different strategies than in the first experiment for the two tasks.
We observed different strategies than in the \chapref{visual_hand} for the two tasks.
During the \level{Push} task, participants made more and shorter contacts to adjust the cube inside the target volume (\figref{results/Push-Contacts-Location-Overall-Means} and \figref{results/Push-TimePerContact-Location-Overall-Means}).
During the \level{Grasp} task, participants pressed the cube 25~ harder on average (\figref{results/Grasp-GripAperture-Location-Overall-Means}).
During the \level{Grasp} task, participants pressed the cube \percent{25} harder on average (\figref{results/Grasp-GripAperture-Location-Overall-Means}).
The \level{Fingertips} and \level{Proximal} positionings led to a slightly larger grip aperture than the others.
We think that the proximity of the vibrotactile rendering to the point of contact made users take more time to adjust their grip in a more realistic manner, \ie closer to the surface of the cube.
This could also be the cause of the higher number of failed grasps or cube drops: indeed, we observed that the larger the grip aperture, the higher the number of contacts.
@@ -29,12 +29,12 @@ While the majority of participants discriminated the two different techniques, o
It seemed that the \level{Impact} technique was sufficient to provide contact information compared to the \level{Distance} technique, which provided additional feedback on interpenetration, as reported by participants.
No difference in performance was found between the two visual hand renderings, except for the \level{Push} task, where the \level{Skeleton} hand rendering resulted again in longer contacts.
Additionally, the \level{Skeleton} rendering was appreciated and perceived as more effective than having no visual hand rendering, confirming the results of our first experiment.
Additionally, the \level{Skeleton} rendering was appreciated and perceived as more effective than having no visual hand rendering, confirming the results of \chapref{visual_hand}.
Participants reported that this visual hand rendering provided good feedback on the status of the hand tracking while being constrained to the cube, and helped with rotation adjustment in both tasks.
However, many also felt that it was a bit redundant with the vibrotactile hand rendering.
Indeed, participants found the vibrotactile hand rendering to be more accurate and reliable information regarding the contact with the cube than simply seeing the cube and the visual hand reacting to the manipulation.
This result suggests that providing a visual hand rendering may not be useful during the grasping phase, but may be beneficial prior to contact with the virtual object and during position and rotation adjustment, providing valuable information about the hand pose.
It is also worth noting that the improved hand tracking and grasp helper improved the manipulation of the cube with respect to the first experiment, as shown by the shorter completion time during the \level{Grasp} task.
It is also worth noting that the improved hand tracking and the grasp helper enhanced the manipulation of the cube with respect to \chapref{visual_hand}, as shown by the shorter completion time during the \level{Grasp} task.
This improvement could also be the reason for the smaller differences between the \level{Skeleton} and \level{No Hand} visual hand renderings in this user study.
In summary, the positioning of the vibrotactile haptic rendering of the hand affected the performance and experience of users manipulating virtual objects with their bare hands in \AR.
@@ -47,4 +47,4 @@ This behavior has likely given them a better experience of the tasks and more co
On the other hand, the unfamiliarity of the contralateral hand positioning (\level{Opposite}) caused participants to spend more time understanding the haptic stimuli, which might have made them more focused on performing the task.
In terms of the contact vibration technique, the continuous vibration technique based on the finger interpenetration (\level{Distance}) did not make a difference to performance, although it provided more information.
Participants felt that vibration bursts (\level{Impact}) were sufficient to confirm contact with the virtual object.
Finally, it was interesting to note that the visual hand renderings was appreciated but felt less necessary when provided together with vibrotactile hand rendering, as the latter was deemed sufficient for acknowledging the contact.
Finally, it was interesting to note that the visual hand rendering was appreciated but felt less necessary when provided together with vibrotactile hand rendering, as the latter was deemed sufficient for acknowledging the contact.

View File

@@ -1,16 +1,16 @@
\section{Conclusion}
\label{conclusion}
This paper presented two human subject studies aimed at better understanding the role of visuo-haptic rendering of the hand during virtual object manipulation in OST-AR.
%
The second experiment compared, in the same two manipulation tasks as before, sixteen visuo-haptic renderings of the hand as the combination of two vibrotactile contact techniques, provided at four different delocalized positions on the hand, and with the two most representative visual hand renderings established in the first experiment, \ie the skeleton hand rendering and no hand rendering.
%
Results show that delocalized vibrotactile haptic hand rendering improved the perceived effectiveness, realism, and usefulness when it is provided close to the contact point.
%
However, the farthest positioning on the contralateral hand gave the best performance even though it was disliked: the unfamiliarity of the positioning probably caused the participants to take more effort to consider the haptic stimuli and to focus more on the task.
%
The visual hand rendering was perceived less necessary than the vibrotactile haptic hand rendering, but still provided a useful feedback on the hand tracking.
In this chapter, we investigated the visuo-haptic rendering as feedback of the hand manipulation with \VOs in immersive \OST-\AR using wearable vibrotactile haptics.
To do so, we provided vibrotactile feedback of the fingertip contacts with \VOs during direct hand manipulation by moving the haptic actuators to positionings that do not cover the inside of the hand: the nails, the proximal phalanges, the wrist, and the nails of the opposite hand.
%We selected these four different delocalized positions on the hand from the literature for direct hand interaction in \AR using wearable haptic devices.
In a user study, we compared sixteen visuo-haptic renderings of the hand as the combination of two vibrotactile contact techniques, provided at four different delocalized positions on the user's hand, and with the two most representative visual hand renderings established in \chapref{visual_hand}, \ie the skeleton hand rendering and no hand rendering.
Future work will focus on including richer types of haptic feedback, such as pressure and skin stretch, analyzing the best compromise between well-round haptic feedback and wearability of the system with respect to \AR constraints.
%
As delocalizing haptic feedback seems to be a simple but very promising approach for haptic-enabled \AR, we will keep including this dimension in our future study, even when considering other types of haptic sensations.
Results showed that delocalized vibrotactile haptic hand rendering improved the perceived effectiveness, realism, and usefulness when provided close to the contact point.
%However, the farthest positioning on the contralateral hand gave the best performance even though it was disliked: the unfamiliarity of the positioning probably caused the participants to take more effort to consider the haptic stimuli and to focus more on the task.
The visual hand rendering was perceived as less necessary than the vibrotactile haptic hand rendering, but still provided useful feedback on the hand tracking.
This study provides evidence that moving the feedback away from the inside of the hand is a simple but very promising approach for wearable haptics in \AR.
If integration with the hand tracking system allows it, and if the task requires it, a haptic ring worn on the middle or proximal phalanx seems preferable.
However, a wrist-mounted haptic device will be able to provide richer feedback by embedding more diverse haptic actuators with larger bandwidths and maximum amplitudes, while being less obtrusive than a ring.
Finally, we think that the visual hand rendering complements the haptic hand rendering very well by providing continuous feedback on the hand tracking, and that it can be disabled during the grasping phase to avoid redundancy with the haptic feedback of the contact with the \VO.

View File

@@ -1,11 +1,13 @@
\chapter{Conclusion}
\mainlabel{conclusion}
\section*{Summary}
\section{Summary}
In this thesis, entitled \enquote{\ThesisTitle}, we presented our research on direct hand interaction with real and virtual everyday objects, visually and haptically augmented using immersive \AR and wearable haptic devices.
\noindentskip \partref{manipulation}
\noindentskip In \partref{manipulation}, we addressed the challenge of improving the manipulation of \VOs directly with the hand in immersive \OST-\AR.
Our approach was to design, based on the literature, and evaluate in user studies the effects of the visual rendering of the hand and of the delocalized haptic rendering of the contacts with \VOs.
We first focused on (1) \textbf{the visual rendering as hand augmentation} and then on (2) the \textbf{combination of different visuo-haptic renderings of the hand manipulation with \VOs}.
\noindentskip In \chapref{visual_hand}, we investigated the visual rendering as hand augmentation.
Seen as an \textbf{overlay on the user's hand}, such a visual hand rendering provides feedback on the hand tracking and the interaction with \VOs.
@@ -13,10 +15,14 @@ We compared the six commonly used renderings in the \AR litterature in a user st
The results showed that a visual hand rendering improved the user performance, perceived effectiveness and confidence, with a \textbf{skeleton-like rendering being the most performant and effective}.
This rendering provided a detailed view of the tracked phalanges while being thin enough not to hide the real hand.
\section*{Future Work}
\noindentskip In \chapref{visuo_haptic_hand}, we then investigated the visuo-haptic rendering as feedback of the direct hand manipulation with \VOs using wearable vibrotactile haptics.
In a user study with a similar design and 20 participants, we compared two vibrotactile contact techniques, provided at \textbf{four different delocalized positions on the user's hand}, and combined with the two most representative visual hand renderings from the previous chapter.
The results showed that providing vibrotactile feedback \textbf{improved the perceived effectiveness, realism, and usefulness when provided close to the fingertips}, and that the visual hand rendering complemented the haptic hand rendering well in giving continuous feedback on the hand tracking.
The visuo-haptic renderings we presented and the user studies we conducted in this thesis have of course some limitations.
We present in this section some future work that could address these.
\section{Future Work}
The wearable visuo-haptic augmentations of perception and manipulation we presented and the user studies we conducted in this thesis of course have some limitations.
In this section, we present some future work for each chapter that could address them.
\subsection*{Visual Rendering of the Hand for Manipulating Virtual Objects in Augmented Reality}
@@ -35,19 +41,32 @@ While these tasks are fundamental building blocks for more complex manipulation
Similarly, a broader experimental study might shed light on the role of gender and age, as our subject pool was not sufficiently diverse in this regard.
Finally, all visual hand renderings received both low and high ranks from different participants, suggesting that users should be able to choose and personalize some aspects of the visual hand rendering according to their preferences or needs, and this should also be evaluated.
\subsection*{Haptic Rendering of the Hand for Manipulating Virtual Objects in Augmented Reality}
\subsection*{Visuo-Haptic Rendering of Hand Manipulation With Virtual Objects in Augmented Reality}
As we already said in \secref[visual_hand]{discussion}, these results have some limitations as they address limited types of visuo-haptic renderings and manipulations were restricted to the thumb and index fingertips.
While the simpler vibration technique (Impact technique) was sufficient to confirm contacts with the cube, richer vibrotactile renderings may be required for more complex interactions, such as collision or friction renderings between objects \cite{kuchenbecker2006improving, pacchierotti2015cutaneous} or texture rendering \cite{culbertson2014one, asano2015vibrotactile}.
More generally, a broader range of haptic sensations should be considered, such as pressure or stretching of the skin \cite{maisto2017evaluation, teng2021touch}.
However, moving the point of application of the sensation away may be challenging for some types of haptic rendering.
Similarly, as the interactions were limited to the thumb and index fingertips, positioning a delocalized haptic rendering over a larger area of the hand could be challenging and remains to be explored.
Also, given that some users found the vibration rendering too strong, adapting/personalizing the haptic feedback to one's preference (and body positioning) might also be a promising approach.
Indeed, personalized haptics is recently gaining interest in the community \cite{malvezzi2021design, umair2021exploring}.
\paragraph{Richer Haptic Feedback}
\section*{Perspectives}
The haptic rendering we considered was limited to vibrotactile feedback using \ERM motors.
While the simpler contact vibration technique (Impact technique) was sufficient to confirm contacts with the cube, richer vibrotactile renderings may be required for more complex interactions, such as rendering hardness (\secref[related_work]{hardness_rendering}), textures (\secref[related_work]{texture_rendering}), friction \cite{konyo2008alternative,jeon2011extensions,salazar2020altering}, or edges and shape of \VOs.
This will require considering a broader range of haptic actuators and sensations (\secref[related_work]{wearable_haptic_devices}), such as pressure or stretching of the skin.
More importantly, the best compromise between well-rounded haptic feedback and wearability of the system with respect to \AR constraints should be analyzed (\secref[related_work]{vhar_haptics}).
\paragraph{Personalized Haptics}
Some users found the vibration rendering to be too strong, suggesting that adapting and personalizing the haptic feedback to one's preference is a promising approach.
Indeed, personalized haptics is gaining interest in the community \cite{malvezzi2021design, umair2021exploring}.
In addition, although it was perceived as more effective and realistic when provided close to the point of contact, other positionings, such as the wrist, may be preferred and still be sufficient for a given task.
The interactions in our user study were also restricted to the thumb and index fingertips, with the haptic feedback provided only for these contact points, as these are the most commonly used parts of the hand for manipulation tasks.
It remains to be explored how to support rendering for different and larger areas of the hand; positioning a delocalized rendering for contact points other than the fingertips could be challenging.
\section{Perspectives}
\subsection*{Towards Universal Wearable Haptic Augmentation}
% systematic exploration of the parameter space of the haptic rendering to determine the most important parameters their influence on the perception
% measure the difference in sensitivity to the haptic feedback and how much it affects the perception of the object properties
\subsection*{Responsive Visuo-Haptic Augmented Reality}
%Given these three points, and the diversity of haptic actuators and renderings, one might be able to interact with the \VOs with any haptic device, worn anywhere on the body and providing personalized feedback on any other part of the hand, and the visuo-haptic system should be able to support such an adapted usage.
% design, implement and validate procedures to automatically calibrate the haptic feedback to the user's perception in accordance to what it has been designed to represent
% + let user free to easily adjust (eg can't let adjust whole spectrum of vibrotactile, reduce to two or three dimensions with sliders using MDS)