\section{Vibrotactile Renderings of the Hand-Object Contacts}
\label{vibration}
The vibrotactile hand rendering provided information about the contacts between the virtual object and the thumb and index fingers of the user, as they are the two fingers most used for grasping (\secref[related_work]{grasp_types}).
We evaluated both the delocalized positioning of the actuators and the contact vibration technique of the vibrotactile hand rendering.
\subsection{Vibrotactile Positionings}
\label{positioning}
We considered five different positionings for providing the vibrotactile rendering as feedback of the contacts between the virtual hand and the virtual objects, as shown in \figref{method/locations}.
They are representative of the most common locations used by wearable haptic devices in \AR to place their end-effector, as found in the literature (\secref[related_work]{vhar_haptics}), as well as other positionings that have been employed for manipulation tasks.
For each positioning, we used two vibrating actuators, for the thumb and index finger, respectively.
They are described as follows, with the corresponding abbreviation in brackets:
\begin{itemize}
\item \level{Fingertips} (Tips): Vibrating actuators were placed right above the nails, similarly to \cite{ando2007fingernailmounted}. This is the positioning closest to the fingertips.
\item \level{Proximal} Phalanges (Prox): Vibrating actuators were placed on the dorsal side of the proximal phalanges, similarly to \cite{maisto2017evaluation,meli2018combining,chinello2020modular}.
\item \level{Wrist} (Wris): Vibrating actuators providing contacts rendering for the index and thumb were placed on the ulnar and radial sides of the wrist, similarly to \cite{pezent2019tasbi,palmer2022haptic,sarac2022perceived}.
\item \level{Opposite} Fingertips (Oppo): Vibrating actuators were placed on the fingertips of the contralateral hand, also above the nails, similarly to \cite{prattichizzo2012cutaneous,detinguy2018enhancing}.
\item \level{Nowhere} (Nowh): As a reference, we also considered the case where we provided no vibrotactile rendering, as in \chapref{visual_hand}.
\end{itemize}
\subsection{Contact Vibration Techniques}
\label{technique}
When a fingertip contacts the virtual cube, we activate the corresponding vibrating actuator.
We considered two representative contact vibration techniques, \ie two ways of rendering such contacts through vibrations:
\begin{itemize}
\item \level{Impact} (Impa): a \qty{200}{\ms} vibration burst is applied when the fingertip makes contact with the object.
The amplitude of the vibration is proportional to the speed of the fingertip at the moment of the contact.
This technique is inspired by the impact vibrations produced when tapping on real surfaces, as described in \secref[related_work]{hardness_rendering}.
\item \level{Distance} (Dist): a continuous vibration is applied whenever the fingertip is in contact with the object.
The amplitude of the vibration is proportional to the interpenetration between the fingertip and the virtual cube surface.
\end{itemize}
The implementation of these two techniques was tuned according to the results of a preliminary experiment.
Three participants were asked to carry out a series of push and grasp tasks similar to those used in the actual experiment.
Results showed that \percent{95} of the contacts between the fingertip and the virtual cube happened at speeds below \qty{1.5}{\m\per\s}.
We also measured the minimum perceivable amplitude to be \percent{15} (\qty{0.6}{\g}) of the maximum amplitude of the motors we used.
For this reason, we designed the Impact vibration technique (Impa) so that contact speeds from \qtyrange{0}{1.5}{\m\per\s} are linearly mapped into \qtyrange{15}{100}{\%} amplitude commands for the motors.
Similarly, we designed the distance vibration technique (Dist) so that interpenetrations from \qtyrange{0}{2.5}{\cm} are linearly mapped into \qtyrange{15}{100}{\%} amplitude commands for the motors, recalling that the virtual cube has an edge of \qty{5}{\cm}.
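These two linear mappings can be sketched as follows; this is a minimal illustration under the saturation values reported above, with function and variable names of our own choosing rather than the actual implementation:

```python
def amplitude_command(value, max_value, min_amplitude=15.0, max_amplitude=100.0):
    """Linearly map a contact quantity in [0, max_value] to a motor
    amplitude command in [min_amplitude, max_amplitude] (percent)."""
    ratio = min(max(value / max_value, 0.0), 1.0)  # clamp to [0, 1]
    return min_amplitude + ratio * (max_amplitude - min_amplitude)

# Impact (Impa): amplitude from the contact speed, saturated at 1.5 m/s
impa = amplitude_command(0.75, max_value=1.5)     # mid-range speed -> 57.5 %

# Distance (Dist): amplitude from the interpenetration, saturated at 2.5 cm
dist = amplitude_command(0.025, max_value=0.025)  # full interpenetration -> 100 %
```

Speeds or interpenetrations beyond the saturation value are clamped to the maximum amplitude, and any contact always produces at least the \percent{15} minimum perceivable amplitude.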
\section{User Study}
\label{method}
This user study aims to evaluate whether a visuo-haptic rendering of the hand affects user performance and experience when manipulating virtual objects with bare hands in \OST-\AR.
The chosen visuo-haptic hand renderings are the combination of the two most representative visual hand augmentations established in the \chapref{visual_hand}, \ie \level{Skeleton} and \level{No Hand}, described in \secref[visual_hand]{hands}, with the two contact vibration techniques provided at the four delocalized positionings described in \secref{vibration}.
\subsection{Experimental Design}
\label{design}
We considered the same two \level{Push} and \level{Grasp} tasks described in \secref[visual_hand]{tasks}, which we analyzed separately, considering four independent within-subject variables:
\begin{itemize}
\item \factor{Positioning}: the five positionings for providing vibrotactile hand rendering of the virtual contacts, as described in \secref{positioning}.
\item \factor{Vibration Technique}: the two contact vibration techniques, as described in \secref{technique}.
\item \factor{Hand}: two visual hand augmentations from the \chapref{visual_hand}, \level{Skeleton} (Skel) and \level{No Hand}, as described in \secref[visual_hand]{hands}; we considered \level{Skeleton} as it performed the best in terms of performance and perceived effectiveness and \level{No Hand} as reference.
\item \factor{Target}: we considered the target volumes (\figref{tasks}), from the participant's point of view, located at:
\begin{itemize}
\item left-bottom (\level{LB}) and left-front (\level{LF}) during the \level{Push} task; and
\item right-bottom (\level{RB}), left-bottom (\level{LB}), left-front (\level{LF}), and right-front (\level{RF}) during the \level{Grasp} task.
\end{itemize}
We considered these targets because they presented different levels of difficulty.
\end{itemize}
\begin{subfigs}{tasks}{The two manipulation tasks of the user study.}[
Both pictures show the cube to manipulate in the middle (\qty{5}{\cm} edge, opaque) and the eight possible targets to reach (\qty{7}{\cm} edge, semi-transparent).
Only one target at a time was shown during the experiments.
][
\item Push task: pushing the virtual cube along a table towards a target placed on the same surface.
\item Grasp task: grasping and lifting the virtual cube towards a target placed on a \qty{20}{\cm} higher plane.
]
\subfig[0.45]{method/task-push-2}
\subfig[0.45]{method/task-grasp-2}
\end{subfigs}
To account for learning and fatigue effects, the order of the \factor{Positioning} conditions was counterbalanced using a balanced \numproduct{10 x 10} Latin square.
In each of these ten blocks, all possible \factor{Technique} \x \factor{Hand} \x \factor{Target} combinations of conditions were repeated three times in a random order.
As we did not find any relevant effect of the order in which the tasks were performed in the \chapref{visual_hand}, we fixed the order of the tasks: first, the \level{Push} task and then the \level{Grasp} task.
This design led to a total of 5 vibrotactile positionings \x 2 contact vibration techniques \x 2 visual hand augmentations \x (2 targets in the Push task + 4 targets in the Grasp task) \x 3 repetitions $=$ 420 trials per participant.
\subsection{Apparatus and Procedure}
\label{apparatus}
Apparatus and experimental procedure were similar to those of the \chapref{visual_hand}, as described in \secref[visual_hand]{apparatus} and \secref[visual_hand]{procedure}, respectively.
We report here only the differences.
We employed the same vibrotactile device used by \cite{devigne2020power}.
It is composed of two encapsulated \ERM (\secref[related_work]{vibrotactile_actuators}) vibration motors (Pico Vibe 304-116, Precision Microdrives, UK).
They are small and light (\qty{5}{\mm} \x \qty{20}{\mm}, \qty{1.2}{\g}) actuators capable of vibration frequencies from \qtyrange{120}{285}{\Hz} and
amplitudes from \qtyrange{0.2}{1.15}{\g}.
They have a latency of \qty{20}{\ms}, which we partially compensated for at the software level by using slightly larger colliders to trigger the vibrations close to the moment the finger touched the cube.
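This compensation can be illustrated with a back-of-the-envelope sketch of our own (not the actual implementation): during the actuator delay the finger travels a distance of speed \x latency, so enlarging the collider by a margin of that order triggers the vibration roughly when the finger reaches the true surface.

```python
ACTUATOR_LATENCY = 0.020  # s, latency of the ERM motors

def collider_margin(contact_speed):
    """Extra collider thickness (m) so that a vibration triggered at the
    enlarged boundary is felt when the finger reaches the real surface."""
    return contact_speed * ACTUATOR_LATENCY

margin = collider_margin(0.5)  # 0.01 m: a 1 cm margin at a 0.5 m/s approach
```

A fixed margin sized for typical approach speeds trades a slightly early vibration at slow contacts against a reduced perceived delay at fast ones.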
Vibration frequency and amplitude vary linearly together with the applied voltage.
The motors were controlled by an Arduino Pro Mini (\qty{3.3}{\V}) and a custom board that delivered the voltage independently to each motor.
A small \qty{400}{mAh} Li-ion battery allowed for 4 hours of constant vibration at maximum intensity.
A Bluetooth module (RN42XV module, Microchip Technology Inc., USA) mounted on the Arduino ensured wireless communication with the HoloLens~2.
To ensure minimal encumbrance, we used the same two motors throughout the experiment, moving them to the considered positioning before each new block.
Thin self-gripping straps were placed on the five positionings, with an elastic strap stitched on top to place the motor, as shown in \figref{method/locations}.
The straps were fixed during the entirety of the experiment to ensure similar hand tracking conditions.
We confirmed that this setup ensured a good transmission of the vibrations and guaranteed a good hand tracking performance, which was measured to be constant (\qty{15}{\ms}) with and without motors, regardless of their positioning.
The control board was fastened to the arm with an elastic strap.
Finally, participants wore headphones diffusing brown noise to mask the sound of the vibrotactile motors.
We improved the hand tracking performance of the system by placing a black sheet on the table to absorb infrared light, and by seating the participants in front of a wall to ensure more constant lighting conditions.
We also made grasping easier by adding a grasping helper, similar to UltraLeap's Physics Hands.\footnoteurl{https://docs.ultraleap.com/unity-api/Preview/physics-hands.html}
When a phalanx collider of the tracked hand contacts the virtual cube,
a spring with a low stiffness is created and attached between the cube and the collider.
The spring gently pulls the cube toward the phalanges in contact with the object, helping maintain a natural and stable grasp.
When the contact is lost, the spring is destroyed.
Preliminary tests confirmed the effectiveness of this approach.
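The life cycle of this helper spring can be sketched as follows; this is a simplified one-dimensional illustration with hypothetical names and an arbitrary stiffness value, whereas the actual implementation relies on the joints of the physics engine:

```python
class GraspHelper:
    """Attach a weak spring between the cube and a phalanx collider on
    contact, pull the cube toward it, and destroy the spring on release."""

    def __init__(self, stiffness=5.0):  # low stiffness (N/m), illustrative
        self.stiffness = stiffness
        self.springs = {}  # phalanx id -> anchor position at contact

    def on_contact_enter(self, phalanx_id, anchor_position):
        self.springs[phalanx_id] = anchor_position  # create the spring

    def on_contact_exit(self, phalanx_id):
        self.springs.pop(phalanx_id, None)  # destroy the spring

    def force_on_cube(self, cube_position):
        # Sum of the gentle pulls toward each phalanx still in contact
        return sum(self.stiffness * (anchor - cube_position)
                   for anchor in self.springs.values())
```

Because the stiffness is low, the springs only bias the cube toward the grasping fingers without overriding the contact forces computed by the physics engine.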
\subsection{Participants}
\label{participants}
Twenty subjects participated in the study (mean age = 26.8, \sd{4.1}; 19~males, 1~female).
One was left-handed, while the other nineteen were right-handed. They all used their dominant hand during the trials.
They all had a normal or corrected-to-normal vision.
Thirteen subjects had also participated in the previous experiment.
Participants rated their expertise (\enquote{I use it more than once a year}) with \VR, \AR, and haptics in a pre-experiment questionnaire.
Twelve were experienced with \VR, eight with \AR, and ten with haptics.
\VR and haptics expertise were highly correlated (\pearson{0.9}), as were \AR and haptics expertise (\pearson{0.6}).
Other expertise correlations were low ($r<0.35$).
\subsection{Collected Data}
\label{metrics}
During the experiment, we collected the same data as in the \chapref{visual_hand}, see \secref[visual_hand]{metrics}.
At the end of the experiment, participants were asked if they recognized the different contact vibration techniques.
They then rated the ten combinations of \factor{Positioning} \x \factor{Vibration Technique} using a 7-point Likert scale (1=Not at all, 7=Extremely):
\begin{itemize}
\item \response{Vibration Rating}: How much do you like each vibrotactile rendering?
\item \response{Workload}: How demanding or frustrating was each vibrotactile rendering?
\item \response{Usefulness}: How useful was each vibrotactile rendering?
\item \response{Realism}: How realistic was each vibrotactile rendering?
\end{itemize}
Finally, they rated the ten combinations of \factor{Positioning} \x \factor{Hand} on a 7-point Likert scale (1=Not at all, 7=Extremely):
\begin{itemize}
\item \response{Positioning \x Hand Rating}: How much do you like each combination of vibrotactile positioning for each visual hand rendering?
\end{itemize}