Split xr-perception chapter

This commit is contained in:
2024-09-24 15:09:19 +02:00
parent b8b799df3d
commit 20a37dd955
21 changed files with 198 additions and 147 deletions

View File

@@ -1,16 +1,5 @@
\section{Introduction}
\label{introduction}
% Delivers the motivation for your paper. It explains why you did the work you did.
% Insist on the advantage of wearable : augment any surface see bau2012revel
\fig[1]{teaser/teaser2}{%
Vibrotactile textures were rendered in real time on a real surface using a wearable vibrotactile device worn on the finger.
%
Participants explored this haptic roughness augmentation with (Real) their real hand alone, (Mixed) a realistic virtual hand overlay in AR, and (Virtual) the same virtual hand in VR.
}
% Even before manipulating a visual representation to induce a haptic sensation, shifts and latencies between user input and co-localised visuo-haptic feedback can be experienced differently in AR and VR, which we aim to investigate in this work.
%Imagine you're an archaeologist or in a museum, and you want to examine an ancient object.
@@ -56,12 +45,17 @@ By understanding how these visual factors influence the perception of haptically
Our contributions are:
%
\begin{itemize}
\item A system for rendering virtual vibrotactile roughness textures in real time on a tangible surface touched directly with the finger, integrated with an immersive visual AR/VR headset to provide a coherent multimodal visuo-haptic augmentation of the real environment.
\item A psychophysical study with 20 participants to evaluate the perception of these virtual roughness textures in three visual rendering conditions: without visual augmentation, with a realistic virtual hand rendering in AR, and with the same virtual hand in VR.
\end{itemize}
%First, we present a system for rendering virtual vibrotactile textures in real time without constraints on hand movements and integrated with an immersive visual AR/VR headset to provide a coherent multimodal visuo-haptic augmentation of the real environment.
%
%An experimental setup is then presented to compare haptic roughness augmentation with an optical AR headset (Microsoft HoloLens~2) that can be transformed into a VR headset using a cardboard mask.
%
%We then conduct a psychophysical study with 20 participants, where various virtual haptic textures on a tangible surface directly touched with the finger are compared in a two-alternative forced choice (2AFC) task in three visual rendering conditions: (1) without visual augmentation, (2) with a realistic virtual hand rendering in AR, and (3) with the same virtual hand in VR.
%
\fig[1]{teaser/teaser2}{%
Vibrotactile textures were rendered in real time on a real surface using a wearable vibrotactile device worn on the finger.
%
Participants explored this haptic roughness augmentation with (Real) their real hand alone, (Mixed) a realistic virtual hand overlay in AR, and (Virtual) the same virtual hand in VR.
}

View File

@@ -1,183 +0,0 @@
\section{Visuo-Haptic Texture Rendering in Mixed Reality}
\label{method}
\figwide[1]{method/diagram}{%
Diagram of the visuo-haptic texture rendering system.
%
Fiducial markers attached to the voice-coil actuator and to tangible surfaces to track are captured by a camera.
%
The positions and rotations (the poses) ${}^c\mathbf{T}_i$, $i = 1, \dots, n$, of the $n$ defined markers in the camera frame $\mathcal{F}_c$ are estimated, then filtered with an adaptive low-pass filter.
%
%These poses are transformed to the AR/VR headset frame $\mathcal{F}_h$ and applied to the virtual model replicas to display them superimposed and aligned with the real environment.
These poses are used to move and display the virtual model replicas aligned with the real environment.
%
A collision detection algorithm detects a contact of the virtual hand with the virtual textures.
%
If so, the velocity of the finger marker ${}^c\dot{\mathbf{X}}_f$ is estimated using the discrete derivative of the position and adaptive low-pass filtering, then transformed into the texture frame $\mathcal{F}_t$.
%
The vibrotactile signal $s_k$ is generated by modulating the signal frequency with the finger velocity ${}^t\hat{\dot{X}}_f$ in the texture direction and the texture period $\lambda$ (\eqref{signal}).
%
The signal is sampled at 48~kHz and sent to the voice-coil actuator via an audio amplifier.
%
All computation steps except signal sampling are performed at 60~Hz and in separate threads to parallelize them.
}
%With a vibrotactile actuator attached to a hand-held device or directly on the finger, it is possible to simulate virtual haptic sensations as vibrations, such as texture, friction or contact vibrations \cite{culbertson2018haptics}.
%
In this section, we describe a system for rendering vibrotactile roughness textures in real time, on any tangible surface, touched directly with the index fingertip, with no constraints on hand movement and using a simple camera to track the finger pose.
%
We also describe how to pair this tactile rendering with an immersive AR or VR headset visual display to provide a coherent, multimodal visuo-haptic augmentation of the real environment.
The visuo-haptic texture rendering system is based on
%
\begin{enumerate*}[label=(\arabic*)]
\item a real-time interaction loop between the finger movements and a coherent visuo-haptic feedback simulating the sensation of a touched texture,
\item a precise alignment of the virtual environment with its real counterpart,
\item and a modulation of the signal frequency by the estimated finger speed, with phase matching.
\end{enumerate*}
%
\figref{method/diagram} shows the diagram of the interaction loop and \eqref{signal} the definition of the vibrotactile signal.
%
The system is composed of three main components: the pose estimation of the tracked real elements, the visual rendering of the virtual environment, and the vibrotactile signal generation and rendering.
\subsection{Pose Estimation and Virtual Environment Alignment}
\label{virtual_real_alignment}
\begin{subfigs}{setup}{Visuo-haptic texture rendering system setup}[%
\item HapCoil-One voice-coil actuator with a fiducial marker on top attached to a participant's right index finger. %
\item HoloLens~2 AR headset, the two cardboard masks used to switch between the real and virtual environments with the same field of view, and the 3D-printed piece attaching the masks to the headset. %
\item User exploring a virtual vibrotactile texture on a tangible sheet of paper.
]
\hidesubcaption
\subfig[0.325]{method/device}
\subfig[0.65]{method/headset}
\par\vspace{2.5pt}
\subfig[0.992]{method/apparatus}
\end{subfigs}
A fiducial marker (AprilTag) is glued to the top of the actuator (\figref{method/device}) to track the finger pose with a camera (StreamCam, Logitech) placed above the experimental setup, capturing \qtyproduct{1280 x 720}{px} images at \qty{60}{\hertz} (\figref{method/apparatus}).
%
Other markers are placed on the tangible surfaces to be augmented, in order to estimate the relative position of the finger with respect to the surfaces (\figref{setup}).
%
Contrary to similar works that either constrained the hand to a constant speed to keep the signal frequency constant \cite{asano2015vibrotactile,friesen2024perceived}, or used mechanical sensors attached to the hand \cite{friesen2024perceived,strohmeier2017generating}, vision-based tracking both frees the hand movements and allows any tangible surface to be augmented.
%
A camera external to the AR/VR headset, combined with a marker-based technique, is employed to provide accurate and robust tracking with a constant view of the markers \cite{marchand2016pose}.
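%
As an illustration of this step, the pose of a square marker can be recovered from its four detected corner points with a planar PnP solver; the following Python sketch assumes OpenCV, a calibrated camera, and a hypothetical \qty{30}{\mm} tag size.
\begin{verbatim}
import cv2
import numpy as np

TAG_SIZE = 0.03  # hypothetical marker side length (m)
# Marker corners in the marker frame (z = 0 plane), in metres.
OBJECT_POINTS = np.array([
    [-TAG_SIZE / 2,  TAG_SIZE / 2, 0.0],
    [ TAG_SIZE / 2,  TAG_SIZE / 2, 0.0],
    [ TAG_SIZE / 2, -TAG_SIZE / 2, 0.0],
    [-TAG_SIZE / 2, -TAG_SIZE / 2, 0.0],
])

def marker_pose(corners_px, camera_matrix, dist_coeffs):
    """Estimate the pose cTi of a marker in the camera frame
    from its four detected corners (4x2 pixel coordinates)."""
    ok, rvec, tvec = cv2.solvePnP(
        OBJECT_POINTS, corners_px, camera_matrix, dist_coeffs,
        flags=cv2.SOLVEPNP_IPPE_SQUARE)  # planar square-marker solver
    return ok, rvec, tvec  # rotation (Rodrigues vector), translation (m)
\end{verbatim}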
%
To reduce the noise in the pose estimation while maintaining a good responsiveness, the 1€ filter \cite{casiez2012filter} is applied.
%
It is a low-pass filter with an adaptive cutoff frequency, specifically designed for tracking human motion.
%
The optimal filter parameters were determined using the method of \textcite{casiez2012filter}, with a minimum cutoff frequency of \qty{10}{\hertz} and a slope of \num{0.01}.
%
The velocity of the marker is estimated using the discrete derivative of the position and another 1€ filter with the same parameters.
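%
For illustration, a scalar version of the 1€ filter can be sketched as follows in Python; the structure follows \textcite{casiez2012filter}, while the class interface and the fixed \qty{60}{\hertz} rate are our assumptions.
\begin{verbatim}
import math

class LowPass:
    """First-order low-pass filter; the first sample passes through."""
    def __init__(self):
        self.y = None
    def apply(self, x, alpha):
        self.y = x if self.y is None else alpha * x + (1.0 - alpha) * self.y
        return self.y

class OneEuroFilter:
    def __init__(self, rate=60.0, min_cutoff=10.0, beta=0.01, d_cutoff=1.0):
        self.rate = rate              # sampling rate (Hz)
        self.min_cutoff = min_cutoff  # minimum cutoff frequency (Hz)
        self.beta = beta              # cutoff slope
        self.d_cutoff = d_cutoff      # cutoff of the derivative filter (Hz)
        self.x_filter, self.dx_filter = LowPass(), LowPass()
        self.x_prev = None

    def alpha(self, cutoff):
        # Smoothing factor of a first-order filter at the given cutoff.
        tau = 1.0 / (2.0 * math.pi * cutoff)
        return 1.0 / (1.0 + tau * self.rate)

    def apply(self, x):
        # Derivative of the raw signal, itself low-pass filtered.
        dx = 0.0 if self.x_prev is None else (x - self.x_prev) * self.rate
        self.x_prev = x
        edx = self.dx_filter.apply(dx, self.alpha(self.d_cutoff))
        # Adaptive cutoff: more smoothing at low speed, less lag at high speed.
        cutoff = self.min_cutoff + self.beta * abs(edx)
        return self.x_filter.apply(x, self.alpha(cutoff))
\end{verbatim}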
To be able to compare virtual and augmented realities, we then create a virtual environment that closely replicates the real one.
%Before a user interacts with the system, it is necessary to design a virtual environment that will be registered with the real environment during the experiment.
%
Each real element tracked by a marker is modelled virtually, \ie the hand and the augmented tangible surface (\figref{renderings}).
%
In addition, the pose and size of the virtual textures are defined on the virtual replicas.
%
During the experiment, the system uses marker pose estimates to align the virtual models with their real-world counterparts. %, according to the condition being tested.
%
This allows the system to detect whether the finger touches a virtual texture, using a collision detection algorithm (Nvidia PhysX), and to display the virtual elements and textures in real time, aligned with the real environment (\figref{renderings}), on the considered AR or VR headset.
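%
For illustration, the frame transformation and a simplified contact test could look as follows; the actual system delegates collision detection to PhysX, so this Python sketch, with a hypothetical contact tolerance, is only a proxy.
\begin{verbatim}
import numpy as np

def pose_in_texture_frame(cT_texture, cT_finger):
    # Express the finger pose in the texture frame: tTf = inv(cTt) . cTf.
    return np.linalg.inv(cT_texture) @ cT_finger

def finger_touches_texture(tT_finger, size_x, size_y, tolerance=0.005):
    """Simplified contact test: the fingertip lies within the texture
    rectangle and close enough to its plane (tolerance in metres)."""
    x, y, z = tT_finger[:3, 3]
    return (abs(x) <= size_x / 2.0 and
            abs(y) <= size_y / 2.0 and
            abs(z) <= tolerance)
\end{verbatim}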
In our implementation, the virtual hand and environment are designed with Unity and the Mixed Reality Toolkit (MRTK).
%
The visual rendering is achieved using the Microsoft HoloLens~2, an OST-AR headset with a \qtyproduct{43 x 29}{\degree} field of view (FoV), a \qty{60}{\Hz} refresh rate, and self-localisation capabilities.
%
It was chosen over VST-AR because OST-AR only adds virtual content to the real environment, while VST-AR streams a real-time video capture of the real environment \cite{macedo2023occlusion}.
%
Indeed, one of our objectives (\secref{experiment}) is to directly compare a real environment with a virtual environment that replicates it. %, rather than a video feed that introduces many supplementary visual limitations.
%
To simulate a VR headset, a cardboard mask (with holes for sensors) is attached to the headset to block the view of the real environment (\figref{method/headset}).
\subsection{Vibrotactile Signal Generation and Rendering}
\label{texture_generation}
A voice-coil actuator (HapCoil-One, Actronika) is used to display the vibrotactile signal, as it allows the frequency and amplitude of the signal to be controlled independently over time, covers a wide frequency range (\qtyrange{10}{1000}{\Hz}), and outputs the signal accurately with relatively low acceleration distortion\footnote{HapCoil-One specific characteristics are described in its data sheet: \url{https://web.archive.org/web/20240228161416/https://tactilelabs.com/wp-content/uploads/2023/11/HapCoil_One_datasheet.pdf}}.
%
The voice-coil actuator is encased in a 3D-printed plastic shell and firmly attached to the middle phalanx of the user's index finger with a Velcro strap, to enable the fingertip to directly touch the environment (\figref{method/device}).
%
The actuator is driven by a Class D audio amplifier (XY-502 / TPA3116D2, Texas Instruments). %, which has proven to be an effective type of amplifier for driving moving-coil \cite{mcmahan2014dynamic}.
%
The amplifier is connected to the audio output of a computer that generates the signal using the WASAPI driver in exclusive mode and the NAudio library.
The represented haptic texture is a series of parallel virtual grooves and ridges, similar to real grating textures manufactured for psychophysical roughness perception studies \cite{friesen2024perceived,klatzky2003feeling,unger2011roughness}.
%
It is generated as a square wave audio signal, sampled at \qty{48}{\kilo\hertz}, with a period $\lambda$ (usually in the millimetre range) and an amplitude $A$.
%
A sample $s_k$ of the audio signal at sampling time $t_k$ is given by:
%
\begin{subequations}
\label{eq:signal}
\begin{align}
s(x_{f,j}, t_k) & = A \operatorname{sgn} \left( \sin \left( 2 \pi \frac{\dot{x}_{f,j}}{\lambda} t_k + \phi_j \right) \right) & \label{eq:signal_speed} \\
\phi_j & = \phi_{j-1} + 2 \pi \frac{\dot{x}_{f,j-1} - \dot{x}_{f,j}}{\lambda} t_j & \label{eq:signal_phase}
\end{align}
\end{subequations}
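%
Equation~\eqref{signal_phase} follows from imposing the continuity of the argument of the sine at the update time $t_j$, when the frequency switches from $\dot{x}_{f,j-1}/\lambda$ to $\dot{x}_{f,j}/\lambda$:
\begin{equation*}
2 \pi \frac{\dot{x}_{f,j-1}}{\lambda} t_j + \phi_{j-1} = 2 \pi \frac{\dot{x}_{f,j}}{\lambda} t_j + \phi_j .
\end{equation*}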
%
This is a common rendering method for vibrotactile textures, with well-defined parameters, that has been employed to modify perceived haptic roughness of a tangible surface \cite{asano2015vibrotactile,konyo2005tactile,ujitoko2019modulating}.
%
As the finger position is estimated at a far lower rate (\qty{60}{\hertz}) than the audio signal, the finger position $x_f$ cannot be directly used to render the signal if the finger moves fast or if the texture period is small.
%
The best strategy instead is to modulate the frequency of the signal $s$ as a ratio of the finger velocity $\dot{x}_f$ and the texture period $\lambda$ \cite{friesen2024perceived}.
%
This is important because it preserves the sensation of a constant spatial frequency of the virtual texture while the finger moves at various speeds, which is crucial for the perception of roughness \cite{klatzky2003feeling,unger2011roughness}.
%
Note that the finger position and velocity are transformed from the camera frame $\mathcal{F}_c$ to the texture frame $\mathcal{F}_t$, with the $x$ axis aligned with the texture direction.
%
However, when a new finger position is estimated at time $t_j$, the phase $\phi_j$ must be adjusted along with the frequency to ensure the continuity of the signal, as described in \eqref{signal}.
%
This approach avoids sudden changes in the actuator movement that would affect the texture perception in an uncontrolled way (\figref{method/phase_adjustment}) and, contrary to previous work \cite{asano2015vibrotactile,friesen2024perceived}, it enables a free exploration of the texture by the user, with no constraints on the finger speed.
%
Finally, following \textcite{ujitoko2019modulating}, a square wave is chosen over a sine wave to get a rendering closer to a real grating texture, with the sensation of crossing edges, and because the roughness perception of sine wave textures has been shown not to reproduce that of real grating textures \cite{unger2011roughness}.
%
%And secondly, to be able to render low frequencies that occurs when the finger moves slowly or the texture period is large, as the actuator cannot render frequencies below \qty{\approx 20}{\Hz} with enough amplitude to be perceived with a pure sine wave signal.
%
The tactile texture is described and rendered in this work as a one-dimensional signal, by integrating the finger movement relative to the texture along a single direction, but it is easily extended to a two-dimensional texture by generating a second signal for the orthogonal direction and summing the two signals in the rendering.
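%
As an illustration of \eqref{signal}, the generation loop can be sketched as follows in Python; the names and structure are ours, not those of the actual implementation.
\begin{verbatim}
import math

class TextureSignal:
    """Square-wave texture signal with velocity-modulated frequency
    and phase matching (see the signal equation above)."""
    def __init__(self, amplitude, period):
        self.A = amplitude   # signal amplitude
        self.lam = period    # texture spatial period (m)
        self.freq = 0.0      # current temporal frequency (Hz)
        self.phase = 0.0     # current phase offset (rad)

    def on_pose_update(self, finger_speed, t):
        # Called at 60 Hz with the filtered finger speed along the texture.
        new_freq = finger_speed / self.lam
        # Phase matching: keep the signal continuous across the change.
        self.phase += 2.0 * math.pi * (self.freq - new_freq) * t
        self.freq = new_freq

    def sample(self, t):
        # Called at 48 kHz by the audio callback.
        s = math.sin(2.0 * math.pi * self.freq * t + self.phase)
        return self.A if s >= 0.0 else -self.A
\end{verbatim}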
\fig[1]{method/phase_adjustment}{%
Change in frequency of a sinusoidal signal with (light green) and without (dark green) phase matching.
%
The phase matching ensures the continuity of the signal and avoids glitches in the rendering.
%
A sinusoidal signal is shown here for clarity, but a different waveform, such as a square wave, gives a similar effect.
}
\subsection{System Latency}
\label{latency}
%As shown in \figref{method/diagram} and described above, the system includes various haptic and visual sensors and rendering devices linked by software processes for image processing, 3D rendering and audio generation.
%
Because the chosen AR headset is a standalone device (like most current AR/VR headsets) and cannot directly control the sound card and haptic actuator, the image capture, pose estimation and audio signal generation steps are performed on an external computer.
%
All computation steps run in separate threads to parallelize them and reduce latency, and are synchronised with the headset via a local network and the ZeroMQ library.
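%
For illustration, the pose-streaming link between the computer and the headset could be sketched with pyzmq as follows; the PUB/SUB pattern and the port are our assumptions.
\begin{verbatim}
import json
import zmq

context = zmq.Context()
publisher = context.socket(zmq.PUB)
publisher.bind("tcp://*:5555")  # hypothetical port on the local network

def publish_poses(poses):
    """Send the latest marker poses (id -> flattened 4x4 matrix) at 60 Hz;
    the headset subscribes to the 'poses' topic to update the replicas."""
    publisher.send_multipart([b"poses", json.dumps(poses).encode("utf-8")])
\end{verbatim}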
%
This complex assembly inevitably introduces latency, which must be measured.
The rendering system provides a user with two interaction loops between the movements of their hand and the visual (loop 1) and haptic (loop 2) feedback.
%
Measures are shown as mean $\pm$ standard deviation (when known).
%
The end-to-end latency from finger movement to feedback is measured at \qty{36 +- 4}{\ms} in the haptic loop and \qty{43 +- 9}{\ms} in the visual loop.
%
Both include the latency of image capture (\qty{16 +- 1}{\ms}), marker tracking (\qty{2 +- 1}{\ms}) and network communication (\qty{4 +- 1}{\ms}).
%
The haptic loop also includes the voice-coil latency (\qty{15}{\ms}, as specified by the manufacturer\footnotemark[1]), whereas the visual loop includes the latency of 3D rendering (\qty{16 +- 5}{\ms}, at 60 frames per second) and display (\qty{5}{\ms}).
%
The total haptic latency is below the \qty{60}{\ms} detection threshold in vibrotactile feedback \cite{okamoto2009detectability}.
%
The total visual latency can be considered slightly high, yet it is typical for an AR rendering involving vision-based tracking \cite{knorlein2009influence}.
The two filters also introduce a constant lag between the finger movement and the estimated position and velocity, measured at \qty{160 +- 30}{\ms}.
%
This lag causes a distance error between the real hand position and the displayed virtual hand position, and thus a delay in the triggering of the vibrotactile signal.
%
This error is proportional to the speed of the finger, \eg a distance error of \qty{12 +- 2.3}{\mm} when the finger moves at \qty{75}{\mm\per\second}.
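%
As a worked example, this distance error $e$ is simply the finger speed $v$ multiplied by the filtering lag $\Delta t$:
\begin{equation*}
e = v \, \Delta t \approx \qty{75}{\mm\per\second} \times \qty{0.16}{\second} = \qty{12}{\mm} .
\end{equation*}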
%
%and of the vibrotactile signal frequency with respect to the finger speed.%, that is proportional to the speed of the finger.
%

View File

@@ -25,7 +25,6 @@ In a two-alternative forced choice (2AFC) task, participants compared the roughn
%
In order not to influence the perception, as vision is an important source of information and influence for the perception of texture \cite{bergmanntiest2007haptic,yanagisawa2015effects,vardar2019fingertip}, the touched surface was visually a uniform white; thus, only the visual aspect of the hand and of the surrounding environment was changed.
\subsection{Participants}
\label{participants}
@@ -43,7 +42,6 @@ Participants were recruited at the university on a voluntary basis.
%
They all signed an informed consent form before the user study and were unaware of its purpose.
\subsection{Apparatus}
\label{apparatus}
@@ -99,7 +97,6 @@ They also wore headphones playing pink noise to mask the sound of the voice-coil.
%
The user study was held in a quiet room with no windows.
\subsection{Procedure}
\label{procedure}
@@ -135,15 +132,14 @@ All textures were rendered as described in \secref{texture_generation} with peri
%
Preliminary studies allowed us to determine a range of amplitudes that could be felt by the participants and were not too uncomfortable, and the reference texture was chosen to be the one with the middle amplitude.
\subsection{Experimental Design}
\label{experimental_design}
The user study was a within-subjects design with two factors:
%
\begin{itemize}
\item \factor{Visual Rendering}, consisting of the augmented or virtual view of the environment, the hand and the wearable haptic device, with 3 levels: real environment and real hand view without any visual augmentation (\figref{renderings}, \level{Real}), real environment and hand view with the virtual hand (\figref{renderings}, \level{Mixed}) and virtual environment with the virtual hand (\figref{renderings}, \level{Virtual}).
\item \factor{Amplitude Difference}, consisting of the difference in amplitude between the comparison and the reference textures, with 6 levels: \qtylist{0; +-12.5; +-25.0; +-37.5}{\%}.
\end{itemize}
A trial consisted of a two-alternative forced choice (2AFC) task where a participant had to touch two virtual vibrotactile textures one after the other and decide which one was rougher.
@@ -154,7 +150,6 @@ Within each condition, the order of presentation of the reference and comparison
%
A total of 3 visual renderings \x 6 amplitude differences \x 2 texture presentation orders \x 3 repetitions = 108 trials were performed by each participant.
\subsection{Collected Data}
\label{collected_data}
@@ -172,44 +167,44 @@ For all questions, participants were shown only labels (\eg \enquote{Not at all}
\newcommand{\scalegroup}[2]{\multirow{#1}{1\linewidth}{#2}}
\begin{tabwide}{questions}
{Questions asked to participants after each \factor{Visual Rendering} block of trials.}
[
Unipolar scale questions were 5-point Likert scales (1 = Not at all, 2 = Slightly, 3 = Moderately, 4 = Very and 5 = Extremely), and %
bipolar scale questions were 7-point Likert scales (1 = Extremely A, 2 = Moderately A, 3 = Slightly A, 4 = Neither A nor B, 5 = Slightly B, 6 = Moderately B, 7 = Extremely B), %
where A and B are the two poles of the scale (indicated in parentheses in the Scale column of the questions).
%, and NASA TLX questions were bipolar 100-points scales (0 = Very Low and 100 = Very High, except for Performance where 0 = Perfect and 100 = Failure). %
Participants were shown only the labels for all questions.
]
\begin{tabularx}{\linewidth}{l X p{0.2\linewidth}}
\toprule
\textbf{Code} & \textbf{Question} & \textbf{Scale} \\
\midrule
Texture Agency & Did the tactile sensations of texture seem to be caused by your movements? & \scalegroup{4}{Unipolar (1-5)} \\
Texture Realism & How realistic were the tactile textures? & \\
Texture Plausibility & Did you feel like you were actually touching textures? & \\
Texture Latency & Did the sensations of texture seem to lag behind your movements? & \\
\midrule
Vibration Location & Did the vibrations seem to come from the surface you were touching or did you feel them on the top of your finger? & Bipolar (1=surface, 7=top of finger) \\
Vibration Strength & Overall, how weak or strong were the vibrations? & Bipolar (1=weak, 7=strong) \\
Device Distraction & To what extent did the vibrotactile device distract you from the task? & \scalegroup{2}{Unipolar (1-5)} \\
Device Discomfort & How uncomfortable was it to use the vibrotactile device? & \\
\midrule
Hand Agency & Did the movements of the virtual hand seem to be caused by your movements? & \scalegroup{5}{Unipolar (1-5)} \\
Hand Similarity & How similar was the virtual hand to your own hand in appearance? & \\
Hand Ownership & Did you feel the virtual hand was your own hand? & \\
Hand Latency & Did the virtual hand seem to lag behind your movements? & \\
Hand Distraction & To what extent did the virtual hand distract you from the task? & \\
Hand Reference & Overall, did you focus on your own hand or the virtual hand to complete the task? & Bipolar (1=own hand, 7=virtual hand) \\
\midrule
Virtual Realism & How realistic was the virtual environment? & \scalegroup{2}{Unipolar (1-5)} \\
Virtual Similarity & How similar was the virtual environment to the real one? & \\
%\midrule
%Mental Demand & How mentally demanding was the task? & \scalegroup{6}{Bipolar (0-100)} \\
%Temporal Demand & How hurried or rushed was the pace of the task? & \\
%Physical Demand & How physically demanding was the task? & \\
%Performance & How successful were you in accomplishing what you were asked to do? & \\
%Effort & How hard did you have to work to accomplish your level of performance? & \\
%Frustration & How insecure, discouraged, irritated, stressed, and annoyed were you? & \\
\bottomrule
\end{tabularx}
\end{tabwide}

View File

@@ -14,7 +14,6 @@ Post-hoc pairwise comparisons were performed using Tukey's Honest Significan
%
Each estimate is reported with its 95\% confidence interval (CI) as follows: \ci{\textrm{lower limit}}{\textrm{upper limit}}.
\subsubsection{Discrimination Accuracy}
\label{discrimination_accuracy}
@@ -57,7 +56,6 @@ Participants took longer on average to respond with the \level{Virtual} renderin
%
The \level{Mixed} rendering was in between (\geomean{1.56}{s} \ci{1.49}{1.63}).
\subsubsection{Finger Position and Speed}
\label{finger_position_speed}
@@ -75,19 +73,18 @@ All pairwise differences were statistically significant: \level{Real} \vs \level
%
%This means that within the same time window on the same surface, participants explored the comparison texture on average at a greater distance and at a higher speed when in the real environment without visual representation of the hand (\level{Real} condition) than when in VR (\level{Virtual} condition).
\begin{subfigs}{results_finger}{%
Boxplots and geometric means of response time at the end of a trial, and finger position and finger speed measures when exploring the comparison texture, with pairwise Tukey's HSD tests: * is \pinf{0.05}, ** is \pinf{0.01} and *** is \pinf{0.001}.
}[%
\item Response time of a trial.
\item Distance traveled by the finger in a trial.
\item Speed of the finger in a trial.
]
\subfig[0.32]{results/trial_response_times}
\subfig[0.32]{results/trial_distances}
\subfig[0.32]{results/trial_speeds}
\begin{subfigs}{results_finger}{Results of the performance metrics for the three rendering conditions.}[
Boxplots and geometric means with bootstrap 95\% confidence intervals, and pairwise Tukey's HSD tests: * is \pinf{0.05}, ** is \pinf{0.01} and *** is \pinf{0.001}.
][
\item Response time at the end of a trial.
\item Distance travelled by the finger in a trial.
\item Speed of the finger in a trial.
]
\subfig[0.32]{results/trial_response_times}
\subfig[0.32]{results/trial_distances}
\subfig[0.32]{results/trial_speeds}
\end{subfigs}
\subsection{Questionnaires}
\label{questions}
@@ -95,13 +92,13 @@ All pairwise differences were statistically significant: \level{Real} \vs \level
%
Friedman tests were employed to compare the ratings for the questions (\tabref{questions}), with post-hoc Wilcoxon signed-rank tests and Holm-Bonferroni adjustment, except for the questions regarding the virtual hand, which were directly compared with Wilcoxon signed-rank tests.
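%
As an illustration of this analysis pipeline, a Python sketch with SciPy and statsmodels could be (function and variable names are ours):
\begin{verbatim}
from scipy.stats import friedmanchisquare, wilcoxon
from statsmodels.stats.multitest import multipletests

def compare_ratings(real, mixed, virtual):
    """Friedman omnibus test over the three renderings, then pairwise
    Wilcoxon signed-rank tests with Holm-Bonferroni adjustment."""
    _, p_omnibus = friedmanchisquare(real, mixed, virtual)
    pairs = {"Real-Mixed": (real, mixed),
             "Real-Virtual": (real, virtual),
             "Mixed-Virtual": (mixed, virtual)}
    raw_p = [wilcoxon(a, b).pvalue for a, b in pairs.values()]
    _, adj_p, _, _ = multipletests(raw_p, method="holm")
    return p_omnibus, dict(zip(pairs.keys(), adj_p))
\end{verbatim}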
%
\figref{question_plots} shows these ratings for questions where statistically significant differences were found (results are shown as mean $\pm$ standard deviation):
\figref{results_questions} shows these ratings for questions where statistically significant differences were found (results are shown as mean $\pm$ standard deviation):
%
\begin{itemize}
\item \response{Hand Ownership}: participants only slightly felt the virtual hand as their own with the \level{Mixed} rendering (\num{2.3 +- 1.0}) but moderately so with the \level{Virtual} rendering (\num{3.5 +- 0.9}, \pinf{0.001}).
\item \response{Hand Latency}: the virtual hand was found to have a moderate latency with the \level{Mixed} rendering (\num{2.8 +- 1.2}) but a low one with the \level{Virtual} rendering (\num{1.9 +- 0.7}, \pinf{0.001}).
\item \response{Hand Reference}: participants focused slightly more on their own hand with the \level{Mixed} rendering (\num{3.2 +- 2.0}) but slightly more on the virtual hand with the \level{Virtual} rendering (\num{5.3 +- 2.1}, \pinf{0.001}).
\item \response{Hand Distraction}: the virtual hand was slightly distracting with the \level{Mixed} rendering (\num{2.1 +- 1.1}) but not at all with the \level{Virtual} rendering (\num{1.2 +- 0.4}, \p{0.004}).
\end{itemize}
%
Overall, participants reported a very high sense of control over the virtual hand (\response{Hand Agency}, \num{4.4 +- 0.6}), felt the virtual hand was quite similar to their own hand (\response{Hand Similarity}, \num{3.5 +- 0.9}), and found the virtual environment very realistic (\response{Virtual Realism}, \num{4.2 +- 0.7}) and very similar to the real one (\response{Virtual Similarity}, \num{4.5 +- 0.7}).
@@ -125,11 +122,16 @@ The vibrations were felt as slightly weak overall (\response{Vibration Strength},
% (Right) Load Index (NASA-TLX) questionnaire (lower values are better).
%}
\begin{subfigs}{question_plots}{%
Boxplots of responses to questions with significant differences and pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment: * is \pinf{0.05}, ** is \pinf{0.01} and *** is \pinf{0.001}.
}
\subfig[0.24]{results/questions_hand_ownership}
\subfig[0.24]{results/questions_hand_latency}
\subfig[0.24]{results/questions_hand_reference}
\subfig[0.24]{results/questions_hand_distraction}
\begin{subfigs}{results_questions}{Boxplots of the questionnaire responses regarding the virtual hand.}[
Pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment: * is \pinf{0.05}, ** is \pinf{0.01} and *** is \pinf{0.001}.
][
\item Hand ownership.
\item Hand latency.
\item Hand reference.
\item Hand distraction.
]
\subfig[0.24]{results/questions_hand_ownership}
\subfig[0.24]{results/questions_hand_latency}
\subfig[0.24]{results/questions_hand_reference}
\subfig[0.24]{results/questions_hand_distraction}
\end{subfigs}


View File

@@ -4,7 +4,6 @@
\chaptertoc
\input{1-introduction}
\input{2-method}
\input{3-experiment}
\input{4-results}
\input{5-discussion}