diff --git a/2-perception/perception.tex b/2-perception/perception.tex index 78efee5..94912cf 100644 --- a/2-perception/perception.tex +++ b/2-perception/perception.tex @@ -1,2 +1,2 @@ \part{Augmenting the Visuo-haptic Texture Perception of Tangible Surfaces} -\label{part:texture} \ No newline at end of file +\label{part:texture} diff --git a/2-perception/xr-perception/1-introduction.tex b/2-perception/xr-perception/1-introduction.tex new file mode 100644 index 0000000..0cd0067 --- /dev/null +++ b/2-perception/xr-perception/1-introduction.tex @@ -0,0 +1,65 @@ +\section{Introduction} +\sublabel{introduction} + +% Delivers the motivation for your paper. It explains why you did the work you did. + +% Insist on the advantage of wearable : augment any surface see bau2012revel + +\fig[1]{teaser/teaser2}{% + Vibrotactile textures were rendered in real time on a real surface using a wearable vibrotactile device worn on the finger. + % + Participants explored this haptic roughness augmentation with (Real) their real hand alone, (Mixed) a realistic virtual hand overlay in AR, and (Virtual) the same virtual hand in VR. +} + +%Imagine you're an archaeologist or in a museum, and you want to examine an ancient object. +% +%But it is too fragile to touch directly. +% +%What if you could still grasp it and manipulate it through a tangible object in your hand, whose visual appearance has been modified using Augmented Reality (AR)? +% +%And what if you could also feel its shape or texture? +% +%Such tactile augmentation is made possible by wearable haptic devices, which are worn directly on the finger or hand and can provide a variety of sensations on the skin, while being small, light and discreet~\cite{pacchierotti2017wearable}. +% +Wearable haptic devices, worn directly on the finger or hand, have been used to render a variety of tactile sensations to virtual objects seen in VR~\cite{choi2018claw,detinguy2018enhancing,pezent2019tasbi} or AR~\cite{maisto2017evaluation,meli2018combining,teng2021touch}. +% +They have also been used to alter the perception of roughness, stiffness, friction, and local shape of real tangible objects~\cite{asano2015vibrotactile,detinguy2018enhancing,normand2024augmenting,salazar2020altering}. +% +Such techniques place the actuator \emph{close} to the point of contact with the real environment, leaving the user free to directly touch the tangible. +% +This combined use of wearable haptics with tangible objects enables a haptic \emph{augmented} reality (HAR)~\cite{bhatia2024augmenting} that can provide rich and varied haptic feedback. + +The degree of reality/virtuality in both visual and haptic sensory modalities can be varied independently, but wearable haptic AR has rarely been combined with visual AR or VR~\cite{choi2021augmenting,normand2024augmenting}. +% +Although AR and VR are closely related, they have significant differences that can affect the user experience~\cite{genay2021virtual,macedo2023occlusion}. +% +%By integrating visual virtual content into the real environment, AR keeps the hand of the user, the haptic devices worn and the tangibles touched visible, unlike VR where they are hidden by immersing the user into a visual virtual environment. +% +%Current AR systems also suffer from display and rendering limitations not present in VR, affecting the user experience with virtual content that may be less realistic or inconsistent with the real augmented environment~\cite{kim2018revisiting,macedo2023occlusion}. 
+% +It therefore seems necessary to investigate and understand the potential effect of these differences in visual rendering on the perception of haptically augmented tangible objects. +% +Previous works have shown, for example, that the stiffness of a virtual piston rendered with a force feedback haptic system seen in AR is perceived as less rigid than in VR~\cite{gaffary2017ar} or when the visual rendering is ahead of the haptic rendering~\cite{diluca2011effects,knorlein2009influence}. +% +%Taking our example from the beginning of this introduction, you now want to learn more about the context of the discovery of the ancient object or its use at the time of its creation by immersing yourself in a virtual environment in VR. +% +%But how different is the perception of the haptic augmentation in AR compared to VR, with a virtual hand instead of the real hand? + +The goal of this paper is to study the role of the visual rendering of the hand (real or virtual) and its environment (AR or VR) on the perception of a tangible surface whose texture is augmented with a wearable vibrotactile device worn on the finger. +% +We focus on the perception of roughness, one of the main tactile sensations of materials~\cite{baumgartner2013visual,hollins1993perceptual,okamoto2013psychophysical} and one of the most studied haptic augmentations~\cite{asano2015vibrotactile,culbertson2014modeling,friesen2024perceived,normand2024augmenting,strohmeier2017generating,ujitoko2019modulating}. +% +By understanding how these visual factors influence the perception of haptically augmented tangible objects, the many wearable haptic systems that already exist but have not yet been fully explored with AR can be better applied and new visuo-haptic renderings adapted to AR can be designed. + +Our contributions are: +% +\begin{itemize} + \item A system for rendering virtual vibrotactile roughness textures in real time on a tangible surface touched directly with the finger, integrated with an immersive visual AR/VR headset to provide a coherent multimodal visuo-haptic augmentation of the real environment. + \item A psychophysical study with 20 participants to evaluate the perception of these virtual roughness textures in three visual rendering conditions: without visual augmentation, with a realistic virtual hand rendering in AR, and with the same virtual hand in VR. +\end{itemize} +%First, we present a system for rendering virtual vibrotactile textures in real time without constraints on hand movements and integrated with an immersive visual AR/VR headset to provide a coherent multimodal visuo-haptic augmentation of the real environment. +% +%An experimental setup is then presented to compare haptic roughness augmentation with an optical AR headset (Microsoft HoloLens~2) that can be transformed into a VR headset using a cardboard mask. +% +%We then conduct a psychophysical study with 20 participants, where various virtual haptic textures on a tangible surface directly touched with the finger are compared in a two-alternative forced choice (2AFC) task in three visual rendering conditions: (1) without visual augmentation, (2) with a realistic virtual hand rendering in AR, and (3) with the same virtual hand in VR. 
+% diff --git a/2-perception/xr-perception/2-related-work.tex b/2-perception/xr-perception/2-related-work.tex new file mode 100644 index 0000000..bf4d4ec --- /dev/null +++ b/2-perception/xr-perception/2-related-work.tex @@ -0,0 +1,102 @@ +\section{Related Work} +\sublabel{related_work} + +% Answer the following four questions: “Who else has done work with relevance to this work of yours? What did they do? What did they find? And how is your work here different?” + +Many works have investigated the haptic rendering of virtual textures to modify the perception of real, tangible surfaces, but few have considered the influence of visual rendering, or integrated both in an immersive virtual environment such as AR or VR. +% +Yet visual and haptic sensations are often combined in everyday life, and it is important to understand how they interact to design more realistic and effective interfaces. + + +\subsection{Augmenting Haptic Texture Roughness} +\sublabel{vibrotactile_roughness} + +When running a finger over a surface, the deformations and vibrations of the skin caused by the micro-height differences of the material induce the sensation of roughness~\cite{klatzky2003feeling}. +% +%Several approaches have been proposed to render virtual haptic texture~\cite{culbertson2018haptics}. +% +%High-fidelity force feedback devices can reproduce patterned textures with great precision and provide similar perceptions to real textures, but they are expensive, have a limited workspace, and impose to hold a probe to explore the texture~\cite{unger2011roughness}. +% +%As more traditional force feedback systems are unable to accurately render such micro-details on a simulated surface, vibrotactile devices attached to the end effector instead generate vibrations to simulate interaction with the virtual texture~\cite{culbertson2018haptics}. +% +%In this way, physics-based models~\cite{chan2021hasti,okamura1998vibration} and data-based models~\cite{culbertson2015should,romano2010automatic} have been developed and evaluated, the former being simpler but more approximate to real textures, and the latter being more realistic but limited to the captured textures. +% +%Notably, \citeauthorcite{okamura1998vibration} rendered grating textures with exponentially decaying sinusoids that simulated the strokes of the grooves and ridges of the surface, while \citeauthorcite{culbertson2014modeling} captured and modelled the roughness of real surfaces to render them using the speed and force of the user. +% +An effective approach to rendering virtual roughness is to generate vibrations to simulate interaction with the virtual texture~\cite{culbertson2018haptics}, relying on real-time measurements of the user's position, velocity and force. % to modulate the frequencies and amplitudes of the vibrations, with position and velocity being the most important parameters~\cite{culbertson2015should}. +% +The perceived roughness of real surfaces can then be modified when touched by a tool with a vibrotactile actuator attached~\cite{culbertson2014modeling,ujitoko2019modulating} or directly with the finger wearing the vibrotactile actuator~\cite{asano2015vibrotactile,normand2024augmenting}, creating a haptic texture augmentation. +% +%The objective is not just to render a virtual texture, but to alter the perception of a real, tangible surface, usually with wearable haptic devices, in what is known as haptic augmented reality (HAR)~\cite{bhatia2024augmenting,jeon2009haptic}. 
+% +One additional challenge of augmenting the finger touch is to keep the fingertip free to touch the real environment, which requires relocating the actuator elsewhere on the hand~\cite{ando2007fingernailmounted,friesen2024perceived,normand2024visuohaptic,teng2021touch}. +% +Of course, the fingertip skin is not deformed by the virtual texture and only vibrations are felt, but it has been shown that the vibrations produced on the fingertip skin running over a real surface are texture specific and similar between individuals~\cite{manfredi2014natural}. +% +A common method for vibrotactile rendering of texture is to use a sinusoidal signal whose frequency is modulated by the finger position or velocity~\cite{asano2015vibrotactile,friesen2024perceived,strohmeier2017generating,ujitoko2019modulating}. +% +It remains unclear whether such vibrotactile texture augmentation is perceived in the same way when integrated into visual AR or VR environments or touched with a virtual hand instead of the real hand. +% +%We also add a phase adjustment to this sinusoidal signal to allow free exploration movements of the finger with a simple camera-based tracking system. + +%Another approach is to use ultrasonic vibrating screens, which are able to modulate their friction~\cite{brahimaj2023crossmodal,rekik2017localized}. +% +%Combined with vibrotactile rendering of roughness using a voice-coil actuator attached to the screen, they can produce realistic haptic texture sensations~\cite{ito2019tactile}. +% +%However, this method is limited to the screen and does not allow to easily render textures on virtual (visual) objects or to alter the perception of real surfaces. + +%In our study, we attached a voice-coil actuator to the middle phalanx of the finger and used a squared sinusoidal signal to render grating textures sensations, but we corrected its phase to allow a simple camera-based tracking and free exploration movements of the finger. + +\subsection{Influence of Visual Rendering on Haptic Perception} +\sublabel{influence_visual_haptic} + +When the same object property is sensed simultaneously by vision and touch, the two modalities are integrated into a single perception. +% +The psychophysical model of \citeauthorcite{ernst2002humans} established that the sense with the least variability dominates perception. +% +%In particular, this effect has been used to better understand the visuo-haptic perception of texture and to design better feedback for virtual objects. +Particularly for real textures, it is known that both touch and sight individually perceive textures equally well and similarly~\cite{bergmanntiest2007haptic,baumgartner2013visual,vardar2019fingertip}. +% +Thus, the overall perception can be modified by changing one of the modalities, as shown by \citeauthorcite{yanagisawa2015effects}, who altered the perception of roughness, stiffness and friction of some real tactile textures touched by the finger by superimposing different real visual textures using a half-mirror. +% +%Similarly but in VR, \citeauthorcite{degraen2019enhancing} combined visual textures with different passive haptic hair-like structure that were touched with the finger to induce a larger set of visuo-haptic materials perception. +% +%\citeauthorcite{gunther2022smooth} studied in a complementary way how the visual rendering of a virtual object touching the arm with a tangible object influenced the perception of roughness. 
+Likewise, visual textures were combined in VR with various tangible objects to induce a larger set of visuo-haptic material perceptions, in both active touch~\cite{degraen2019enhancing} and passive touch~\cite{gunther2022smooth} contexts. +% +\citeauthorcite{normand2024augmenting} also investigated the roughness perception of tangible surfaces touched with the finger and augmented with visual textures in AR and with wearable vibrotactile textures. +% +%A common finding of these studies is that haptic sensations seem to dominate the perception of roughness, suggesting that a smaller set of haptic textures can support a larger set of visual textures. +% +Conversely, virtual hand rendering is also known to influence how an object is grasped in VR~\cite{prachyabrued2014visual,blaga2020too} and AR~\cite{normand2024visuohaptic}, or even how real bumps and holes are perceived in VR~\cite{schwind2018touch}, but its effect on the perception of a haptic texture augmentation has not yet been investigated. + +% \cite{degraen2019enhancing} and \cite{gunther2022smooth} showed that the visual rendering of a virtual object can influence the perception of its haptic properties. +% \cite{yanagisawa2015effects} with real visual textures superimposed on touched real textures affected the perception of the touched textures. + +A few works have also used pseudo-haptic feedback to change the perception of haptic stimuli to create richer feedback by deforming the visual representation of a user input~\cite{ujitoko2021survey}. +% +For example, %different levels of stiffness can be simulated on a grasped virtual object with the same passive haptic device~\cite{achibet2017flexifingers} or +the perceived softness of tangible objects can be altered by superimposing in AR a virtual texture that deforms when pressed by the hand~\cite{punpongsanon2015softar}, or in combination with vibrotactile rendering in VR~\cite{choi2021augmenting}. +% +The vibrotactile sinusoidal rendering of virtual texture cited above was also combined with visual oscillations of a cursor on a screen to increase the roughness perception of the texture~\cite{ujitoko2019modulating}. +% +%However, the visual representation was a virtual cursor seen on a screen while the haptic feedback was felt with a hand-held device. +% +%Conversely, as discussed by \citeauthorcite{ujitoko2021survey} in their review, a co-localised visuo-haptic rendering can cause the user to notice the mismatch between their real movements and the visuo-haptic feedback. +% +Even before manipulating a visual representation to induce a haptic sensation, shifts and latencies between user input and co-localised visuo-haptic feedback can be experienced differently in AR and VR, which we aim to investigate in this work. +%it remains unclear whether touching the same tactile texture augmentation in immersive AR or VR with one's own hand or with a virtual hand can be perceived differently. + +A few studies specifically compared visuo-haptic perception in AR \vs VR. +% +Rendering a virtual piston pressed with one's real hand using a video see-through (VST) AR headset and a force feedback haptic device, \citeauthorcite{diluca2011effects} showed that a visual delay increased the perceived stiffness of the piston, whereas a haptic delay decreased it. +% +%\citeauthorcite{diluca2011effects} went on to explain how these delays affected the weighting of visual and haptic information in perceived stiffness. 
+% +In a similar setup, but with an optical see-through (OST) AR headset, \citeauthorcite{gaffary2017ar} found that the virtual piston was perceived as less stiff in AR than in VR, without participants noticing this difference. +% +Using a VST-AR headset has notable consequences, as the \enquote{real} view of the environment and the hand is actually a visual stream from a camera, which has a noticeable delay and lower quality (\eg resolution, frame rate, field of view) compared to the direct view of the real environment with OST-AR~\cite{macedo2023occlusion}. +% +While a large literature has investigated these differences in visual perception in AR and VR, showing \eg that distances are underestimated~\cite{adams2022depth,peillard2019studying}, less is known about visuo-haptic perception in these environments. +% +In this work we studied (1) the perception of a \emph{haptic texture augmentation} of a tangible surface and (2) the possible influence of the visual rendering of the environment (OST-AR or VR) and the hand touching the surface (real or virtual) on this perception. diff --git a/2-perception/xr-perception/3-method.tex b/2-perception/xr-perception/3-method.tex new file mode 100644 index 0000000..062a535 --- /dev/null +++ b/2-perception/xr-perception/3-method.tex @@ -0,0 +1,187 @@ +\section{Visuo-Haptic Texture Rendering in Mixed Reality} +\sublabel{method} + +\figwide[1]{method/diagram}{% + Diagram of the visuo-haptic texture rendering system. + % + Fiducial markers attached to the voice-coil actuator and to the tangible surfaces to be tracked are captured by a camera. + % + The positions and rotations (the poses) ${}^c\mathbf{T}_i$, $i=1..n$ of the $n$ defined markers in the camera frame $\mathcal{F}_c$ are estimated, then filtered with an adaptive low-pass filter. + % + %These poses are transformed to the AR/VR headset frame $\mathcal{F}_h$ and applied to the virtual model replicas to display them superimposed and aligned with the real environment. + These poses are used to move and display the virtual model replicas aligned with the real environment. + % + A collision detection algorithm detects contact of the virtual hand with the virtual textures. + % + If so, the velocity of the finger marker ${}^c\dot{\mathbf{X}}_f$ is estimated using a discrete derivative of the position and adaptive low-pass filtering, then transformed to the texture frame $\mathcal{F}_t$. + % + The vibrotactile signal $s_k$ is generated by modulating the finger velocity ${}^t\hat{\dot{X}}_f$ in the texture direction with the texture period $\lambda$ (see \eqref{signal}). + % + The signal is sampled at 48~kHz and sent to the voice-coil actuator via an audio amplifier. + % + All computation steps except signal sampling are performed at 60~Hz and in separate threads to parallelize them. +} + +%With a vibrotactile actuator attached to a hand-held device or directly on the finger, it is possible to simulate virtual haptic sensations as vibrations, such as texture, friction or contact vibrations~\cite{culbertson2018haptics}. +% +In this section, we describe a system for rendering vibrotactile roughness textures in real time, on any tangible surface, touched directly with the index fingertip, with no constraints on hand movement and using a simple camera to track the finger pose. +% +We also describe how to pair this tactile rendering with an immersive AR or VR headset visual display to provide a coherent, multimodal visuo-haptic augmentation of the real environment. 
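To give a concrete preview of the haptic rendering described in this section, the short Python sketch below generates the velocity-modulated square-wave samples of a grating texture while keeping the phase continuous (cf. \eqref{signal} below); the constants, function name and example speeds are illustrative only, and the actual system streams these samples through the audio pipeline detailed in the following subsections.
\begin{verbatim}
import math

AUDIO_RATE = 48_000        # audio sampling rate (Hz)
TRACKING_RATE = 60         # finger pose estimation rate (Hz)
SAMPLES_PER_UPDATE = AUDIO_RATE // TRACKING_RATE   # 800 samples

def square_texture_signal(finger_speeds, period=0.002, amplitude=1.0):
    """Generate vibrotactile samples for a virtual grating texture.

    `finger_speeds` holds the filtered finger speeds (m/s) along the
    texture direction, one value per tracking update; `period` is the
    spatial period (lambda) of the virtual grooves (m).  The temporal
    frequency is speed / period, and the running phase accumulation
    plays the role of the phase adjustment phi_j of Eq. (signal),
    keeping the signal continuous across speed updates.
    """
    samples = []
    phase = 0.0
    for speed in finger_speeds:
        frequency = abs(speed) / period        # temporal frequency (Hz)
        for _ in range(SAMPLES_PER_UPDATE):
            phase += 2.0 * math.pi * frequency / AUDIO_RATE
            samples.append(amplitude * math.copysign(1.0, math.sin(phase)))
    return samples

# Example: a finger accelerating from about 10 to 150 mm/s over one second.
speeds = [0.01 + 0.14 * i / TRACKING_RATE for i in range(TRACKING_RATE)]
signal = square_texture_signal(speeds)   # 48 000 samples, ready for playback
\end{verbatim}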
+ +The visuo-haptic texture rendering system is based on +% +\begin{enumerate*}[label=(\arabic*)] + \item a real-time interaction loop between the finger movements and a coherent visuo-haptic feedback simulating the sensation of a touched texture, + \item a precise alignment of the virtual environment with its real counterpart, + \item and a modulation of the signal frequency by the estimated finger speed with phase matching. +\end{enumerate*} +% +\figref{method/diagram} shows the diagram of the interaction loop and \eqref{signal} the definition of the vibrotactile signal. +% +The system is composed of three main components: the pose estimation of the tracked real elements, the visual rendering of the virtual environment, and the vibrotactile signal generation and rendering. + + +\subsection{Pose Estimation and Virtual Environment Alignment} +\sublabel{virtual_real_alignment} + +\begin{subfigs}{setup}{% + Visuo-haptic texture rendering system setup. + % + (a) HapCoil-One voice-coil actuator with a fiducial marker on top attached to a participant's right index finger. % + % + (b) HoloLens~2 AR headset, the two cardboard masks to switch between the real and virtual environments with the same field of view, and the 3D-printed piece for attaching the masks to the headset. % + % + (c) User exploring a virtual vibrotactile texture on a tangible sheet of paper. + } + \hidesubcaption + \subfig[0.325][]{method/device} + \subfig[0.65][]{method/headset} + \par\vspace{2.5pt} + \subfig[0.992][]{method/apparatus} +\end{subfigs} + +A fiducial marker (AprilTag) is glued to the top of the actuator (see \figref{method/device}) to track the finger pose with a camera (StreamCam, Logitech) which is placed above the experimental setup and captures \qtyproduct{1280 x 720}{px} images at \qty{60}{\hertz} (see \figref{method/apparatus}). +% +Other markers are placed on the tangible surfaces to be augmented in order to estimate the relative position of the finger with respect to these surfaces (see \figref{setup}). +% +Contrary to similar work which either constrained the hand to a constant speed to keep the signal frequency constant~\cite{asano2015vibrotactile,friesen2024perceived}, or used mechanical sensors attached to the hand~\cite{friesen2024perceived,strohmeier2017generating}, vision-based tracking both leaves the hand movements free and allows any tangible surface to be augmented. +% +A camera external to the AR/VR headset, combined with a marker-based technique, is employed to provide accurate and robust tracking with a constant view of the markers~\cite{marchand2016pose}. +% +To reduce the noise in the pose estimation while maintaining a good responsiveness, the 1€ filter~\cite{casiez2012filter} is applied. +% +It is a low-pass filter with an adaptive cutoff frequency, specifically designed for tracking human motion. +% +The optimal filter parameters were determined using the method of \citeauthorcite{casiez2012filter}, with a minimum cutoff frequency of \qty{10}{\hertz} and a slope of \num{0.01}. +% +The velocity of the marker is estimated using the discrete derivative of the position and another 1€ filter with the same parameters.
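As an illustration, a minimal scalar version of this filter with the parameters used here could be implemented as follows (a simplified sketch; in the actual system the filter is applied to each component of the estimated poses, and the finger velocity is obtained by filtering the discrete derivative of the position with a second instance):
\begin{verbatim}
import math

class OneEuroFilter:
    """Minimal 1 euro filter (Casiez et al.) for a scalar signal."""

    def __init__(self, rate, min_cutoff=10.0, beta=0.01, d_cutoff=1.0):
        self.dt = 1.0 / rate           # sampling period (s)
        self.min_cutoff = min_cutoff   # minimum cutoff frequency (Hz)
        self.beta = beta               # speed coefficient ("slope")
        self.d_cutoff = d_cutoff       # cutoff for the derivative filter
        self.x_prev = None
        self.dx_prev = 0.0

    def _alpha(self, cutoff):
        tau = 1.0 / (2.0 * math.pi * cutoff)
        return 1.0 / (1.0 + tau / self.dt)

    def filter(self, x):
        if self.x_prev is None:        # first sample: no smoothing yet
            self.x_prev = x
            return x
        # The filtered derivative of the signal drives the adaptive cutoff.
        dx = (x - self.x_prev) / self.dt
        a_d = self._alpha(self.d_cutoff)
        dx_hat = a_d * dx + (1.0 - a_d) * self.dx_prev
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)
        # Exponential smoothing with the adaptive cutoff.
        a = self._alpha(cutoff)
        x_hat = a * x + (1.0 - a) * self.x_prev
        self.x_prev, self.dx_prev = x_hat, dx_hat
        return x_hat

position_filter = OneEuroFilter(rate=60, min_cutoff=10.0, beta=0.01)
\end{verbatim}
The adaptive cutoff reduces the smoothing lag during fast movements while keeping the estimated pose stable when the finger is nearly still.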
+To be able to compare virtual and augmented realities, we then create a virtual environment that closely replicates the real one. +%Before a user interacts with the system, it is necessary to design a virtual environment that will be registered with the real environment during the experiment. +% +Each real element tracked by a marker is modelled virtually, \ie the hand and the augmented tangible surface (see \figref{renderings}). +% +In addition, the pose and size of the virtual textures are defined on the virtual replicas. +% +During the experiment, the system uses marker pose estimates to align the virtual models with their real-world counterparts. %, according to the condition being tested. +% +This allows the system to detect whether the finger touches a virtual texture using a collision detection algorithm (Nvidia PhysX), and to show the virtual elements and textures in real time, aligned with the real environment (see \figref{renderings}), using the considered AR or VR headset. + +In our implementation, the virtual hand and environment are designed with Unity and the Mixed Reality Toolkit (MRTK). +% +The visual rendering is achieved using the Microsoft HoloLens~2, an OST-AR headset with a \qtyproduct{43 x 29}{\degree} field of view (FoV), a \qty{60}{\Hz} refresh rate, and self-localisation capabilities. +% +It was chosen over VST-AR because OST-AR only adds virtual content to the real environment, while VST-AR streams a real-time video capture of the real environment~\cite{macedo2023occlusion}. +% +Indeed, one of our objectives (see \secref{xr_perception:experiment}) is to directly compare a real environment with a virtual environment that replicates it. %, rather than a video feed that introduces many supplementary visual limitations. +% +To simulate a VR headset, a cardboard mask (with holes for sensors) is attached to the headset to block the view of the real environment (see \figref{method/headset}). + + +\subsection{Vibrotactile Signal Generation and Rendering} +\sublabel{texture_generation} + +A voice-coil actuator (HapCoil-One, Actronika) is used to display the vibrotactile signal, as it allows the frequency and amplitude of the signal to be controlled independently over time, covers a wide frequency range (\qtyrange{10}{1000}{\Hz}), and outputs the signal accurately with relatively low acceleration distortion\footnote{HapCoil-One specific characteristics are described in its data sheet: \url{https://web.archive.org/web/20240228161416/https://tactilelabs.com/wp-content/uploads/2023/11/HapCoil_One_datasheet.pdf}}. +% +The voice-coil actuator is encased in a 3D printed plastic shell and firmly attached to the middle phalanx of the user's index finger with a Velcro strap, to enable the fingertip to directly touch the environment (see \figref{method/device}). +% +The actuator is driven by a Class D audio amplifier (XY-502 / TPA3116D2, Texas Instruments). %, which has proven to be an effective type of amplifier for driving moving-coil~\cite{mcmahan2014dynamic}. +% +The amplifier is connected to the audio output of a computer that generates the signal using the WASAPI driver in exclusive mode and the NAudio library. + +The represented haptic texture is a series of parallel virtual grooves and ridges, similar to real grating textures manufactured for psychophysical roughness perception studies~\cite{friesen2024perceived,klatzky2003feeling,unger2011roughness}. +% +It is generated as a square wave audio signal, sampled at \qty{48}{\kilo\hertz}, with a period $\lambda$ (usually in the millimetre range) and an amplitude $A$. 
+% +A sample $s_k$ of the audio signal at sampling time $t_k$ is given by: +% +\begin{subequations} + \label{eq:signal} + \begin{align} + s(x_{f,j}, t_k) & = A \text{\,sgn} ( \sin (2 \pi \frac{\dot{x}_{f,j}}{\lambda} t_k + \phi_j) ) & \label{eq:signal_speed} \\ + \phi_j & = \phi_{j-1} + 2 \pi \frac{x_{f,j} - x_{f,{j-1}}}{\lambda} t_k & \label{eq:signal_phase} + \end{align} +\end{subequations} +% +This is a common rendering method for vibrotactile textures, with well-defined parameters, that has been employed to modify the perceived haptic roughness of a tangible surface~\cite{asano2015vibrotactile,konyo2005tactile,ujitoko2019modulating}. +% +As the finger position is estimated at a far lower rate (\qty{60}{\hertz}) than the audio signal, the finger position $x_f$ cannot be directly used to render the signal if the finger moves fast or if the texture period is small. +% +The best strategy instead is to modulate the frequency of the signal $s$ as a ratio of the finger velocity $\dot{x}_f$ and the texture period $\lambda$~\cite{friesen2024perceived}. +% +This is important because it preserves the sensation of a constant spatial frequency of the virtual texture while the finger moves at various speeds, which is crucial for the perception of roughness~\cite{klatzky2003feeling,unger2011roughness}. +% +Note that the finger position and velocity are transformed from the camera frame $\mathcal{F}_c$ to the texture frame $\mathcal{F}_t$, with the $x$ axis aligned with the texture direction. +% +However, when a new finger position is estimated at time $t_j$, the phase $\phi_j$ needs to be adjusted along with the frequency to ensure continuity of the signal, as described in \eqref{signal_phase}. +% +This approach avoids sudden changes in the actuator movement that would affect the texture perception in an uncontrolled way (see \figref{method/phase_adjustment}) and, contrary to previous work~\cite{asano2015vibrotactile,friesen2024perceived}, it enables free exploration of the texture by the user with no constraints on the finger speed. +% +Finally, as in \citeauthorcite{ujitoko2019modulating}, a square wave is chosen over a sine wave to get a rendering closer to a real grating texture with the sensation of crossing edges, and because the roughness perception of sine wave textures has been shown not to reproduce the roughness perception of real grating textures~\cite{unger2011roughness}. +% +%And secondly, to be able to render low frequencies that occurs when the finger moves slowly or the texture period is large, as the actuator cannot render frequencies below \qty{\approx 20}{\Hz} with enough amplitude to be perceived with a pure sine wave signal. +% +The tactile texture is described and rendered in this work as a one-dimensional signal by projecting the finger movement relative to the texture onto a single direction, but it is easily extended to a two-dimensional texture by simply generating a second signal for the orthogonal direction and summing the two signals in the rendering. + +\fig[1]{method/phase_adjustment}{% + Change in frequency of a sinusoidal signal with phase matching (light green) and without (dark green). + % + The phase matching ensures continuity of the signal and avoids glitches in the rendering of the signal. + % + A sinusoidal signal is shown here for clarity, but a different waveform, such as a square wave, will give a similar effect. 
+} + + +\subsection{System Latency} +\sublabel{latency} + +%As shown in \figref{method/diagram} and described above, the system includes various haptic and visual sensors and rendering devices linked by software processes for image processing, 3D rendering and audio generation. +% +Because the chosen AR headset is a standalone device (like most current AR/VR headsets) and cannot directly control the sound card and haptic actuator, the image capture, pose estimation and audio signal generation steps are performed on an external computer. +% +All computation steps run in a separate thread to parallelize them and reduce latency, and are synchronised with the headset via a local network and the ZeroMQ library. +% +This complex assembly inevitably introduces latency, which must be measured. + +The rendering system provides a user with two interaction loops between the movements of their hand and the visual (loop 1) and haptic (loop 2) feedbacks. +% +Measures are shown as mean $\pm$ standard deviation (when it is known). +% +The end-to-end latency from finger movement to feedback is measured at \qty{36 +- 4}{\ms} in the haptic loop and \qty{43 +- 9}{\ms} in the visual loop. +% +Both are the result of latency in image capture \qty{16 +- 1}{\ms}, markers tracking \qty{2 +- 1}{\ms} and network communication \qty{4 +- 1}{\ms}. +% +The haptic loop also includes the voice-coil latency \qty{15}{\ms} (as specified by the manufacturer\footnotemark[1]), whereas the visual loop includes the latency in 3D rendering \qty{16 +- 5}{\ms} (60 frames per second) and display \qty{5}{\ms}. +% +The total haptic latency is below the \qty{60}{\ms} detection threshold in vibrotactile feedback~\cite{okamoto2009detectability}. +% +The total visual latency can be considered slightly high, yet it is typical for an AR rendering involving vision-based tracking~\cite{knorlein2009influence}. + +The two filters also introduce a constant lag between the finger movement and the estimated position and velocity, measured at \qty{160 +- 30}{\ms}. +% +With respect to the real hand position, it causes a distance error in the displayed virtual hand position, and thus a delay in the triggering of the vibrotactile signal. +% +This is proportional to the speed of the finger, \eg distance error is \qty{12 +- 2.3}{\mm} when the finger moves at \qty{75}{\mm\per\second}. +% +%and of the vibrotactile signal frequency with respect to the finger speed.%, that is proportional to the speed of the finger. +% diff --git a/2-perception/xr-perception/4-experiment.tex b/2-perception/xr-perception/4-experiment.tex new file mode 100644 index 0000000..581f814 --- /dev/null +++ b/2-perception/xr-perception/4-experiment.tex @@ -0,0 +1,219 @@ +\section{User Study} +\sublabel{experiment} + +\begin{subfigswide}{renderings}{% + The three visual rendering conditions and the experimental procedure of the two-alternative forced choice (2AFC) psychophysical study. + % + During a trial, two tactile textures were rendered on the augmented area of the paper sheet (black rectangle) for 3\,s each, one after the other, then the participant chose which one was the roughest. + % + The visual rendering stayed the same during the trial. + % + (\level{Real}) The real environment and real hand view without any visual augmentation. + % + (\level{Mixed}) The real environment and hand view with the virtual hand. + % + (\level{Virtual}) Virtual environment with the virtual hand. + % + %The pictures are captured directly from the Microsoft HoloLens 2 headset. 
+ } + \hidesubcaption + \subfig[0.32][]{experiment/real} + \subfig[0.32][]{experiment/mixed} + \subfig[0.32][]{experiment/virtual} +\end{subfigswide} + +Our visuo-haptic rendering system, described in \secref{xr_perception:method}, allows free exploration of virtual vibrotactile textures on tangible surfaces directly touched with the bare finger to simulate roughness augmentation, while the visual rendering of the hand and environment can be controlled to be in AR or VR. +% +The user study aimed to investigate the effect of visual hand rendering in AR or VR on the perception of roughness texture augmentation. % of a touched tangible surface. +% +In a two-alternative forced choice (2AFC) task, participants compared the roughness of different tactile texture augmentations in three visual rendering conditions: without any visual augmentation (see \figref{renderings}, \level{Real}), in AR with a realistic virtual hand superimposed on the real hand (see \figref{renderings}, \level{Mixed}), and in VR with the same virtual hand as an avatar (see \figref{renderings}, \level{Virtual}). +% +In order not to influence the perception, as vision is an important source of information and influence for the perception of texture~\cite{bergmanntiest2007haptic,yanagisawa2015effects,normand2024augmenting,vardar2019fingertip}, the touched surface was visually a uniform white; thus only the visual aspect of the hand and the surrounding environment was changed. + + +\subsection{Participants} +\sublabel{participants} + +Twenty participants were recruited for the study (16 males, 3 females, 1 preferred not to say), aged between 18 and 61 years old (\median{26}{}, \iqr{6.8}{}). +% +All participants had normal or corrected-to-normal vision, and none of them had a known hand or finger impairment. +% +One was left-handed while the rest were right-handed; they all performed the task with their right index finger. +% +When rating their experience with haptics, AR and VR, 12 participants were experienced with haptics (at least \enquote{I use it several times a year}), 5 with AR, and 10 with VR. +% +Experiences were correlated between haptics and VR (\pearson{0.59}), and AR and VR (\pearson{0.67}) but not haptics and AR (\pearson{0.20}) nor haptics, AR, or VR with age (\pearson{0.05} to \pearson{0.12}). +% +Participants were recruited at the university on a voluntary basis. +% +They all signed an informed consent form before the user study and were unaware of its purpose. + + +\subsection{Apparatus} +\sublabel{apparatus} + +An experimental environment similar to that of \citeauthorcite{gaffary2017ar} was created to ensure a similar visual rendering in AR and VR (see \figref{renderings}). +% +It consisted of a \qtyproduct{300 x 210 x 400}{\mm} medium-density fibreboard (MDF) box with a paper sheet glued inside, and a \qtyproduct{15 x 5}{\mm} rectangle printed on the sheet to delimit the area where the tactile textures were rendered. +% +A single light source of \qty{800}{\lumen} placed \qty{70}{\cm} above the table fully illuminated the inside of the box. +% +Participants rated the roughness of the paper (without any texture augmentation) before the experiment on a 7-point Likert scale (1 = Extremely smooth, 7 = Extremely rough) as quite smooth (\mean{2.5}, \sd{1.3}). 
+ +%The visual rendering of the virtual hand and environment was achieved using the Microsoft HoloLens~2, an OST-AR headset with a \qtyproduct{43 x 29}{\degree} field of view (FoV) and a \qty{60}{\Hz} refresh rate, running a custom application made with Unity 2021.1.0f1 and Mixed Reality Toolkit (MRTK) 2.7.2. +% +The virtual environment carefully reproduced the real environment, including the geometry of the box, the textures, the lighting, and the shadows (see \figref{renderings}, \level{Virtual}). +% +The virtual hand model was a gender-neutral human right hand with realistic skin texture, similar to the one used by \citeauthorcite{schwind2017these}. +% +Its size was adjusted to match the real hand of the participants before the experiment. +% +%An OST-AR headset (Microsoft HoloLens~2) was chosen over a VST-AR headset because the former only adds virtual content to the real environment, while the latter streams a real-time video capture of the real environment, and one of our objectives was to directly compare a virtual environment replicating a real one, not to a video feed that introduces many other visual limitations~\cite{macedo2023occlusion}. +% +The visual rendering of the virtual hand and environment is described in \secref{xr_perception:xr_perception:virtual_real_alignment}. +% +%In the \level{Virtual} rendering, a cardboard mask (with holes for sensors) was attached to the headset to block the view of the real environment and simulate a VR headset (see \figref{method/headset}). +% +To ensure the same FoV in all \factor{Visual Rendering} conditions, a cardboard mask was attached to the AR headset (see \figref{method/headset}). +% +In the \level{Virtual} rendering, the mask had holes only for the sensors, blocking the view of the real environment and simulating a VR headset. +% +In the \level{Mixed} and \level{Real} conditions, the mask had two additional holes for the eyes that matched the FoV of the HoloLens~2 (see \figref{method/headset}). +% +\figref{renderings} shows the resulting views in the three considered \factor{Visual Rendering} conditions. + +%A vibrotactile voice-coil device (HapCoil-One, Actronika), incased in a 3D-printed plastic shell, was firmly attached to the right index finger of the participants using a Velcro strap (see \figref{method/device}), was used to render the textures +% +%This voice-coil was chosen for its wide frequency range (\qtyrange{10}{1000}{\Hz}) and its relatively low acceleration distortion, as specified by the manufacturer\footnotemark[1]. +% +%It was driven by an audio amplifier (XY-502, not branded) connected to a computer that generated the audio signal of the textures as described in \secref{xr_perception:xr_perception:method}, using the NAudio library and the WASAPI driver in exclusive mode. +% +%The position of the finger relative to the sheet was estimated using a webcam placed on top of the box (StreamCam, Logitech) and the OpenCV library by tracking a \qty{2}{\cm} square fiducial marker (AprilTag) glued to top of the vibrotactile actuator. +% +%The total texture latency was measured to \qty{36 \pm 4}{\ms}, as a result of latency in image acquisition \qty{16 \pm 1}{\ms}, fiducial marker detection \qty{2 \pm 1}{\ms}, audio sampling \qty{3 \pm 1}{\ms}, and the vibrotactile actuator latency (\qty{15}{\ms}, as specified by the manufacturer\footnotemark[1]), and was below the \qty{60}{\ms} threshold for vibrotactile feedback \cite{okamoto2009detectability}. 
+% +%The virtual hand followed the position of the fiducial marker with a slightly higher latency due to the network synchronization \qty{4 \pm 1}{\ms} between the computer and the HoloLens~2. + +Participants sat comfortably in front of the box at a distance of \qty{30}{\cm}, wearing the HoloLens~2 with a cardboard mask attached, so that only the inside of the box was visible, as shown in \figref{method/apparatus}. +% +%A vibrotactile voice-coil actuator (HapCoil-One, Actronika) was encased in a 3D printed plastic shell with a \qty{2}{\cm} AprilTag glued to top, and firmly attached to the middle phalanx of the right index finger of the participants using a Velcro strap. +% +The generation of the virtual texture and the control of the virtual hand are described in \secref{xr_perception:xr_perception:method}. +% +They also wore headphones playing pink noise to mask the sound of the voice-coil. +% +The user study was held in a quiet room with no windows. + + +\subsection{Procedure} +\sublabel{procedure} + +Participants were first given written instructions about the experimental setup and procedure, the informed consent form to sign, and a demographic questionnaire. +% +%They were then asked to sit in front of the box and wear the HoloLens~2 and headphones while the experimenter firmly attached the vibrotactile device to the middle phalanx of their right index finger (see \figref{method/apparatus}). +% +A calibration was then performed to adjust the HoloLens~2 to the participant's interpupillary distance, the virtual hand to the real hand size, and the fiducial marker to the finger position. +% +They familiarised themselves with the task by completing four training trials with the most different pair of textures. +% +The trials were divided into three blocks, one for each \factor{Visual Rendering} condition, with a break and questionnaire between each block. +% +Before each block, the experimenter ensured that the virtual environment and the virtual hand were correctly aligned with their real equivalents, that the haptic device was in place, and attached the cardboard mask corresponding to the next \factor{Visual Rendering} condition to the headset. + +The participant started each trial by clicking the middle button of a mouse with the left hand. +% +The first texture was then rendered on the augmented area of the paper sheet for \qty{3}{\s} and, after a \qty{1}{\s} pause, the second texture was also rendered for \qty{3}{\s}. +% +The participant then had to decide which texture was the roughest by clicking the left (for the first texture) or right (for the second texture) button of the mouse and confirming their choice by clicking the middle button again. +% +If the participant moved their finger away from the texture area, the texture timer was paused until they returned. +% +Participants were asked to explore the textures as they would in real life by moving their finger back and forth over the texture area at different speeds. + +One of the textures in the tested pair was always the reference texture, while the other was the comparison texture. +% +Participants were not told that there was a reference and a comparison texture. +% +The order of presentation was randomised and not revealed to the participants. +% +All textures were rendered as described in \secref{xr_perception:xr_perception:texture_generation} with a period $\lambda$ of \qty{2}{\mm}, but with different amplitudes $A$ to create different levels of roughness. 
+% +Preliminary studies allowed us to determine a range of amplitudes that could be felt by the participants and were not too uncomfortable, and the reference texture was chosen to be the one with the middle amplitude. + + +\subsection{Experimental Design} +\sublabel{experimental_design} + +The user study was a within-subjects design with two factors: +% +\begin{itemize} + \item \factor{Visual Rendering}, consisting of the augmented or virtual view of the environment, the hand and the wearable haptic device, with 3 levels: real environment and real hand view without any visual augmentation (see \figref{renderings}, \level{Real}), real environment and hand view with the virtual hand (see \figref{renderings}, \level{Mixed}) and virtual environment with the virtual hand (see \figref{renderings}, \level{Virtual}). + \item \factor{Amplitude Difference}, consisting of the difference in amplitude between the comparison and the reference textures, with 6 levels: \qtylist{0; +-12.5; +-25.0; +-37.5}{\%}. +\end{itemize} + +A trial consisted of a two-alternative forced choice (2AFC) task where a participant had to touch two virtual vibrotactile textures one after the other and decide which one was the roughest. +% +To avoid any order effect, the order of \factor{Visual Rendering} conditions was counterbalanced between participants using a balanced Latin square design. +% +Within each condition, the order of presentation of the reference and comparison textures was also counterbalanced, and all possible texture pairs were presented in random order and repeated three times. +% +A total of 3 visual renderings \x 6 amplitude differences \x 2 texture presentation orders \x 3 repetitions = 107 trials were performed by each participant. + + +\subsection{Collected Data} +\sublabel{collected_data} + +For each trial, the \textit{Texture Choice} made by the participant as the roughest of the pair was recorded. +% +The \textit{Response Time} between the end of the trial and the choice of the participant was also measured as an indicator of the difficulty of the task. +% +At each frame the \textit{Finger Position} and \textit{Finger Speed} were recorded to control for possible differences in texture exploration behaviour. +% +After each \factor{Visual Rendering} block of trials, participants rated their experience with the vibrotactile textures (all blocks), the vibrotactile device (all blocks), the virtual hand rendering (all except \level{Real} block) and the virtual environment (\level{Virtual} block) using the questions shown in \tabref{questions}. +% +%They also assessed their workload with the NASA Task Load Index (\textit{NASA-TLX}) questionnaire after each blocks of trials. +% +For all questions, participants were shown only labels (\eg \enquote{Not at all} or \enquote{Extremely}) and not the actual scale values (\eg 1 or 5), following the recommendations of \citeauthorcite{muller2014survey}. + +\newcommand{\scalegroup}[2]{\multirow{#1}{1\linewidth}{#2}} +\begin{tabwide}{questions}{% + Questions asked to participants after each \factor{Visual Rendering} block of trials. % + Unipolar scale questions were 5-point Likert scales (1 = Not at all, 2 = Slightly, 3 = Moderately, 4 = Very and 5 = Extremely), and % + bipolar scale questions were 7-point Likert scales (1 = Extremely A, 2 = Moderately A, 3 = Slightly A, 4 = Neither A nor B, 5 = Slightly B, 6 = Moderately B, 7 = Extremely B), % + where A and B are the two poles of the scale (indicated in parentheses in the Scale column of the questions). 
+ %, and NASA TLX questions were bipolar 100-points scales (0 = Very Low and 100 = Very High, except for Performance where 0 = Perfect and 100 = Failure). % + Participants were shown only the labels for all questions. + } + \begin{tabularx}{\linewidth}{l X p{0.2\linewidth}} + \toprule + \textbf{Code} & \textbf{Question} & \textbf{Scale} \\ + \midrule + Texture Agency & Did the tactile sensations of texture seem to be caused by your movements? & \scalegroup{4}{Unipolar (1-5)} \\ + Texture Realism & How realistic were the tactile textures? & \\ + Texture Plausibility & Did you feel like you were actually touching textures? & \\ + Texture Latency & Did the sensations of texture seem to lag behind your movements? & \\ + \midrule + Vibration Location & Did the vibrations seem to come from the surface you were touching or did you feel them on the top of your finger? & Bipolar (1=surface, 7=top of finger) \\ + Vibration Strength & Overall, how weak or strong were the vibrations? & Bipolar (1=weak, 7=strong) \\ + Device Distraction & To what extent did the vibrotactile device distract you from the task? & \scalegroup{2}{Unipolar (1-5)} \\ + Device Discomfort & How uncomfortable was it to use the vibrotactile device? & \\ + \midrule + Hand Agency & Did the movements of the virtual hand seem to be caused by your movements? & \scalegroup{5}{Unipolar (1-5)} \\ + Hand Similarity & How similar was the virtual hand to your own hand in appearance? & \\ + Hand Ownership & Did you feel the virtual hand was your own hand? & \\ + Hand Latency & Did the virtual hand seem to lag behind your movements? & \\ + Hand Distraction & To what extent did the virtual hand distract you from the task? & \\ + Hand Reference & Overall, did you focus on your own hand or the virtual hand to complete the task? & Bipolar (1=own hand, 7=virtual hand) \\ + \midrule + Virtual Realism & How realistic was the virtual environment? & \scalegroup{2}{Unipolar (1-5)} \\ + Virtual Similarity & How similar was the virtual environment to the real one? & \\ + %\midrule + %Mental Demand & How mentally demanding was the task? & \scalegroup{6}{Bipolar (0-100)} \\ + %Temporal Demand & How hurried or rushed was the pace of the task? & \\ + %Physical Demand & How physically demanding was the task? & \\ + %Performance & How successful were you in accomplishing what you were asked to do? & \\ + %Effort & How hard did you have to work to accomplish your level of performance? & \\ + %Frustration & How insecure, discouraged, irritated, stressed, and annoyed were you? & \\ + \bottomrule + \end{tabularx} +\end{tabwide} diff --git a/2-perception/xr-perception/5-results.tex b/2-perception/xr-perception/5-results.tex new file mode 100644 index 0000000..d92ad89 --- /dev/null +++ b/2-perception/xr-perception/5-results.tex @@ -0,0 +1,142 @@ +\section{Results} +\sublabel{results} + +\subsection{Trial Measures} +\sublabel{results_trials} + +All measures from trials were analysed using linear mixed models (LMM) or generalised linear mixed models (GLMM) with \factor{Visual Rendering}, \factor{Amplitude Difference} and their interaction as within-participant factors, and by-participant random intercepts. +% +Depending on the data, different random effect structures were tested. +% +Only the best converging models are reported, with the lowest Akaike Information Criterion (AIC) values. +% +Post-hoc pairwise comparisons were performed using the Tukey's Honest Significant Difference (HSD) test. 
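As an illustration of how the points of subjective equality and just-noticeable differences reported in the next subsection relate to a fitted psychometric curve, the following simplified Python sketch fits a cumulative Gaussian to the proportion of \enquote{comparison rougher} responses of a single condition; the proportions below are made-up placeholders, and the actual analysis used the GLMM with random effects and the bootstrap confidence intervals described here.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Proportion of "comparison rougher" responses per amplitude difference (%),
# for one visual rendering (illustrative placeholder values only).
amp_diff = np.array([-37.5, -25.0, -12.5, 0.0, 12.5, 25.0, 37.5])
p_rougher = np.array([0.15, 0.25, 0.40, 0.55, 0.70, 0.80, 0.92])

def probit(x, pse, sigma):
    """Cumulative Gaussian psychometric function."""
    return norm.cdf((x - pse) / sigma)

(pse, sigma), _ = curve_fit(probit, amp_diff, p_rougher, p0=(0.0, 20.0))

# PSE: amplitude difference judged rougher 50% of the time.
# JND: difference between the 84% point and the PSE, i.e. one SD.
jnd = sigma * (norm.ppf(0.84) - norm.ppf(0.50))   # approximately sigma
print(f"PSE = {pse:.1f}%, JND = {jnd:.1f}%")
\end{verbatim}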
+% +Each estimate is reported with its 95\% confidence interval (CI) as follows: \ci{\textrm{lower limit}}{\textrm{upper limit}}. + + +\subsubsection{Discrimination Accuracy} +\sublabel{discrimination_accuracy} + +A GLMM was fitted to the \response{Texture Choice} in the 2AFC vibrotactile texture roughness discrimination task, with by-participant random intercepts but no random slopes, and a probit link function (see \figref{results/trial_predictions}). +% +The points of subjective equality (PSEs, see \figref{results/trial_pses}) and just-noticeable differences (JNDs, see \figref{results/trial_jnds}) for each visual rendering and their respective differences were estimated from the model, along with their corresponding 95\% CI, using a non-parametric bootstrap procedure (1000 samples). +% +The PSE represents the estimated amplitude difference at which the comparison texture was perceived as rougher than the reference texture 50\% of the time. %, \ie it is the accuracy of participants in discriminating vibrotactile roughness. +% +The \level{Real} rendering had the highest PSE (\percent{7.9} \ci{1.2}{4.1}) and was statistically significantly different from the \level{Mixed} rendering (\percent{1.9} \ci{-2.4}{6.1}) and from the \level{Virtual} rendering (\percent{5.1} \ci{2.4}{7.6}). +% +The JND represents the estimated minimum amplitude difference between the comparison and reference textures that participants could perceive, +% \ie the sensitivity to vibrotactile roughness differences, +calculated at the 84th percentile of the predictions of the GLMM (\ie one standard deviation of the normal distribution)~\cite{ernst2002humans}. +% +The \level{Real} rendering had the lowest JND (\percent{26} \ci{23}{29}), the \level{Mixed} rendering had the highest (\percent{33} \ci{30}{37}), and the \level{Virtual} rendering was in between (\percent{30} \ci{28}{32}). +% +All pairwise differences were statistically significant. + +\begin{subfigs}{discrimination_accuracy}{% + Generalized Linear Mixed Model (GLMM) results in the vibrotactile texture roughness discrimination task, with non-parametric bootstrap 95\% confidence intervals. + % + (a) Percentage of trials in which the comparison texture was perceived as rougher than the reference texture, as a function of the amplitude difference between the two textures and the visual rendering. + % + Curves represent predictions from the GLMM (probit link function) and points are estimated marginal means. + % + (b) Estimated points of subjective equality (PSE) of each visual rendering. + %, defined as the amplitude difference at which both reference and comparison textures are perceived to be equivalent, \ie the accuracy in discriminating vibrotactile roughness. + % + (c) Estimated just-noticeable difference (JND) of each visual rendering. + %, defined as the minimum perceptual amplitude difference, \ie the sensitivity to vibrotactile roughness differences. + } + \subfig[0.85][]{results/trial_predictions}\\ + \subfig[0.45][]{results/trial_pses} + \subfig[0.45][]{results/trial_jnds} +\end{subfigs} + + +\subsubsection{Response Time} +\sublabel{response_time} + +An LMM analysis of variance (AOV) with by-participant random slopes for \factor{Visual Rendering}, and a log transformation (as \response{Response Time} measures were gamma distributed) indicated a statistically significant effect of \factor{Visual Rendering} on \response{Response Time} (\anova{2}{18}{6.2}, \p{0.009}, see \figref{results/trial_response_times}).
+% +Participants took longer on average to respond with the \level{Virtual} rendering (\geomean{1.65}{s} \ci{1.59}{1.72}) than with the \level{Real} rendering (\geomean{1.38}{s} \ci{1.32}{1.43}), which is the only statistically significant difference (\ttest{19}{0.3}, \p{0.005}). +% +The \level{Mixed} rendering was in between (\geomean{1.56}{s} \ci{1.49}{1.63}). + + +\subsubsection{Finger Position and Speed} +\sublabel{finger_position_speed} + +The frames analysed were those in which the participants actively touched the comparison textures with a finger speed greater than \SI{1}{\mm\per\second}. +% +An LMM AOV with by-participant random slopes for \factor{Visual Rendering} indicated only one statistically significant effect on the total distance traveled by the finger in a trial, that of \factor{Visual Rendering} (\anova{2}{18}{3.9}, \p{0.04}, see \figref{results/trial_distances}). +% +On average, participants explored a larger distance with the \level{Real} rendering (\geomean{20.0}{\cm} \ci{19.4}{20.7}) than with the \level{Virtual} rendering (\geomean{16.5}{\cm} \ci{15.8}{17.1}), which is the only statistically significant difference (\ttest{19}{1.2}, \p{0.03}), with the \level{Mixed} rendering (\geomean{17.4}{\cm} \ci{16.8}{18.0}) in between. +% +Another LMM AOV with by-trial and by-participant random intercepts but no random slopes indicated only one statistically significant effect on \response{Finger Speed}, that of \factor{Visual Rendering} (\anova{2}{2142}{2.0}, \pinf{0.001}, see \figref{results/trial_speeds}). +% +On average, the textures were explored with the highest speed with the \level{Real} rendering (\geomean{5.12}{\cm\per\second} \ci{5.08}{5.17}), the lowest with the \level{Virtual} rendering (\geomean{4.40}{\cm\per\second} \ci{4.35}{4.45}), and the \level{Mixed} rendering (\geomean{4.67}{\cm\per\second} \ci{4.63}{4.71}) in between. +% +All pairwise differences were statistically significant: \level{Real} \vs \level{Virtual} (\ttest{19}{1.17}, \pinf{0.001}), \level{Real} \vs \level{Mixed} (\ttest{19}{1.10}, \pinf{0.001}), and \level{Mixed} \vs \level{Virtual} (\ttest{19}{1.07}, \p{0.02}). +% +%This means that within the same time window on the same surface, participants explored the comparison texture on average at a greater distance and at a higher speed when in the real environment without visual representation of the hand (\level{Real} condition) than when in VR (\level{Virtual} condition). + +\begin{subfigs}{results_finger}{% + Boxplots and geometric means of response time at the end of a trial, and finger position and finger speed measures when exploring the comparison texture, with pairwise Tukey's HSD tests: * is \pinf{0.05}, ** is \pinf{0.01} and *** is \pinf{0.001}. + % + (a) Response time of a trial. + % + (b) Distance traveled by the finger in a trial. + % + (c) Speed of the finger in a trial. 
+%
+\figref{question_plots} shows these ratings for the questions where statistically significant differences were found (results are reported as mean $\pm$ standard deviation):
+%
+\begin{itemize}
+  \item \response{Hand Ownership}: participants felt only slight ownership of the virtual hand with the \level{Mixed} rendering (\num{2.3 +- 1.0}) but moderate to strong ownership with the \level{Virtual} rendering (\num{3.5 +- 0.9}, \pinf{0.001}).
+  \item \response{Hand Latency}: the virtual hand was found to have a moderate latency with the \level{Mixed} rendering (\num{2.8 +- 1.2}) but a low latency with the \level{Virtual} rendering (\num{1.9 +- 0.7}, \pinf{0.001}).
+  \item \response{Hand Reference}: participants focused slightly more on their own hand with the \level{Mixed} rendering (\num{3.2 +- 2.0}) but slightly more on the virtual hand with the \level{Virtual} rendering (\num{5.3 +- 2.1}, \pinf{0.001}).
+  \item \response{Hand Distraction}: the virtual hand was slightly distracting with the \level{Mixed} rendering (\num{2.1 +- 1.1}) but not at all with the \level{Virtual} rendering (\num{1.2 +- 0.4}, \p{0.004}).
+\end{itemize}
+%
+Overall, participants reported a very high sense of control over the virtual hand (\response{Hand Agency}, \num{4.4 +- 0.6}), felt that the virtual hand was quite similar to their own hand (\response{Hand Similarity}, \num{3.5 +- 0.9}), and found the virtual environment very realistic (\response{Virtual Realism}, \num{4.2 +- 0.7}) and very similar to the real one (\response{Virtual Similarity}, \num{4.5 +- 0.7}).
+%
+The textures were also overall felt to be very much caused by the finger movements (\response{Texture Agency}, \num{4.5 +- 1.0}), with a very low perceived latency (\response{Texture Latency}, \num{1.6 +- 0.8}), and to be quite realistic (\response{Texture Realism}, \num{3.6 +- 0.9}) and quite plausible (\response{Texture Plausibility}, \num{3.6 +- 1.0}).
+%
+Participants were mixed as to whether they felt the vibrations on the surface or on the top of their finger (\response{Vibration Location}, \num{3.9 +- 1.7}): with the \level{Real} and \level{Mixed} renderings, the scores were split between the two poles of the scale (42.5\% towards each pole, 15\% neutral), whereas with the \level{Virtual} rendering there was a trend towards the top of the finger (65\%, \vs 25\% towards the surface and 10\% neutral), although this difference was not statistically significant either.
+%
+The vibrations were overall felt to be slightly weak (\response{Vibration Strength}, \num{4.2 +- 1.1}), and the vibrotactile device was perceived as neither distracting (\response{Device Distraction}, \num{1.2 +- 0.4}) nor uncomfortable (\response{Device Discomfort}, \num{1.3 +- 0.6}).
+%
+%Finally, the overall workload (mean NASA-TLX score) was low (\num{21 +- 14}), with no statistically significant differences found between the visual renderings for any of the subscales or the overall score.
+
+%\figwide{results/question_heatmaps}{%
+%
+%  Heatmaps of the questionnaire responses, with the median rating and the interquartile range in parentheses on each cell.
+%
+%  (Left) 5-point Likert scale questions (1=Not at all, 2=Slightly, 3=Moderately, 4=Very, 5=Extremely).
+%
+%  (Middle) 7-point Likert scale questions (1=Extremely A, 2=Moderately A, 3=Slightly A, 4=Neither A nor B, 5=Slightly B, 6=Moderately B, 7=Extremely B) with A and B being the two poles of the scale.
+%
+%  (Right) Task Load Index (NASA-TLX) questionnaire (lower values are better).
+%}
+
+\begin{subfigs}{question_plots}{%
+  Boxplots of responses to the questions with statistically significant differences, with pairwise Wilcoxon signed-rank tests and Holm-Bonferroni adjustment: * is \pinf{0.05}, ** is \pinf{0.01} and *** is \pinf{0.001}.
+  }
+  \subfig[0.24][]{results/questions_hand_ownership}
+  \subfig[0.24][]{results/questions_hand_latency}
+  \subfig[0.24][]{results/questions_hand_reference}
+  \subfig[0.24][]{results/questions_hand_distraction}
+\end{subfigs}
diff --git a/2-perception/xr-perception/6-discussion.tex b/2-perception/xr-perception/6-discussion.tex
new file mode 100644
index 0000000..6d35cde
--- /dev/null
+++ b/2-perception/xr-perception/6-discussion.tex
@@ -0,0 +1,67 @@
+\section{Discussion}
+\sublabel{discussion}
+
+%Interpret the findings in results, answer to the problem asked in the introduction, contrast with previous articles, draw possible implications. Give limitations of the study.
+
+% But how different is the perception of the haptic augmentation in AR compared to VR, with a virtual hand instead of the real hand?
+% The goal of this paper is to study the visual rendering of the hand (real or virtual) and its environment (AR or VR) on the perception of a tangible surface whose texture is augmented with a wearable vibrotactile device mounted on the finger.
+
+The results showed a difference in vibrotactile roughness perception between the three visual rendering conditions.
+%
+Given the estimated points of subjective equality (PSEs), the textures in the \level{Real} rendering were on average perceived as \enquote{rougher} than in the \level{Virtual} (\percent{-2.8}) and \level{Mixed} (\percent{-6.0}) renderings (see \figref{results/trial_pses}).
+%
+\citeauthorcite{gaffary2017ar} found a PSE difference in the same range between AR and VR for perceived stiffness, with the virtual piston perceived as \enquote{stiffer} in VR and as \enquote{softer} in AR.
+%
+%However, the difference between the \level{Virtual} and \level{Mixed} conditions was not significant.
+%
+Surprisingly, the PSE of the \level{Real} rendering was shifted to the right of the reference texture (\ie \enquote{rougher}, \percent{7.9}), whereas the PSEs of the \level{Virtual} (\percent{5.1}) and \level{Mixed} (\percent{1.9}) renderings were closer to the reference texture, being perceived as \enquote{smoother} (see \figref{results/trial_predictions}).
+%
+The sensitivity of participants to roughness differences (just-noticeable differences, JND) also varied across the visual renderings, with the \level{Real} rendering having the lowest (best) JND (\percent{26}), followed by the \level{Virtual} (\percent{30}) and \level{Mixed} (\percent{33}) renderings (see \figref{results/trial_jnds}).
+%
+These JND values are in line with, although at the upper end of, the range reported in previous studies~\cite{choi2013vibrotactile}, which may be due to the location of the actuator on the top of the middle phalanx of the finger, a location less sensitive to vibration than the fingertip.
+%
+Thus, compared to no visual rendering (\level{Real}), the addition of a visual rendering of the hand or environment reduced both the roughness sensitivity (JND) and the average roughness perception (PSE), as if the virtual haptic textures felt \enquote{smoother}.
+
+Differences in user behaviour were also observed between the visual renderings (but not between the haptic textures).
+%
+On average, participants responded faster (\percent{-16}), and explored the textures over a greater distance (\percent{+21}) and at a higher speed (\percent{+16}), without visual augmentation (\level{Real} rendering) than in VR (\level{Virtual} rendering) (see \figref{results_finger}).
+%
+The \level{Mixed} rendering, displaying both the real and virtual hands, was always in between, with no significant difference from the other two renderings.
+%
+This suggests that touching a virtual vibrotactile texture on a tangible surface with a virtual hand in VR is different from touching it with one's own hand: users were more cautious or less confident in their exploration in VR.
+%
+This does not seem to be due to the realism of the virtual hand or environment, nor to the sense of control over the virtual hand, which were all rated high to very high by the participants (see \secref{xr_perception:questions}) in both the \level{Mixed} and \level{Virtual} renderings.
+%
+Interestingly, the evaluation of the vibrotactile device and textures was also similar across the visual renderings, with a very high sense of control, high realism, and a very low perceived latency of the textures (see \secref{xr_perception:questions}).
+%
+However, the perceived latency of the virtual hand (\response{Hand Latency} question) seems to be related to the perceived roughness of the textures (\ie to the PSEs).
+%
+The \level{Mixed} rendering had the lowest PSE and the highest perceived latency, the \level{Virtual} rendering had a higher PSE and a lower perceived latency, and the \level{Real} rendering had the highest PSE and no virtual hand latency (as the virtual hand was not displayed).
+
+Our visuo-haptic augmentation system aimed to provide a coherent multimodal virtual rendering integrated with the real environment.
+%
+Yet, it involves different sensory interaction loops between the user's movements and the visuo-haptic feedback (see \figref{method/diagram}), which are subject to different latencies and may not be synchronised with each other, or may even be inconsistent with other sensory modalities such as proprioception.
+%
+When a user runs their finger over a vibrotactile virtual texture, the haptic sensations and the displayed virtual hand (when present) lag behind the visual displacement and proprioceptive sensations of the real hand.
+%
+Conversely, when interacting with a real texture, there is no lag between any of these sensory modalities.
+%
+We therefore hypothesise that the differences in the perception of vibrotactile roughness are due less to the visual rendering of the hand or environment, and the associated differences in exploration behaviour, than to the difference in perceived latency between one's own hand (visually and proprioceptively) and the virtual hand (visually and haptically).
+%
+\citeauthorcite{diluca2011effects} demonstrated, in a VST-AR setup, how visual latency relative to proprioception increased the perceived stiffness of a virtual piston, while haptic latency decreased it.
+%
+A complementary explanation could be a pseudo-haptic effect of the displacement of the virtual hand, as already observed with this vibrotactile texture rendering displayed on a screen in a non-immersive context~\cite{ujitoko2019modulating}.
+%
+Such hypotheses could be tested by manipulating the latency and tracking accuracy of the virtual hand or the vibrotactile feedback. % to observe their effects on the roughness perception of the virtual textures.
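+%
+As a rough illustration of the mismatch at stake, and assuming for the sake of argument a position-driven sinusoidal grating of spatial wavelength $\lambda$ (a common model for such vibrotactile textures, not necessarily our exact rendering), an end-to-end latency $\tau$ between the tracked finger position $x(t)$ and the vibrotactile output yields
+%
+\begin{equation*}
+  a(t) \propto A \sin\!\left(\frac{2\pi\, x(t-\tau)}{\lambda}\right) \approx A \sin\!\left(\frac{2\pi \left(x(t) - v\,\tau\right)}{\lambda}\right),
+\end{equation*}
+%
+where $v$ is the instantaneous finger speed, \ie the latency acts as a speed-dependent spatial offset $v\,\tau$ of the haptic texture relative to the finger.
+%
+At the average exploration speeds observed here (about \SIrange{4.4}{5.1}{\cm\per\second}), every \SI{10}{\ms} of latency would correspond to an offset of roughly \SI{0.5}{\mm}, giving an order of magnitude of the asynchrony that such a manipulation would need to control.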
+
+The main limitation of our study is, of course, the absence of a visual representation of the touched virtual texture.
+%
+Visual information is indeed as important as haptic sensations for the perception of both real textures~\cite{baumgartner2013visual,bergmanntiest2007haptic,vardar2019fingertip} and virtual textures~\cite{degraen2019enhancing,gunther2022smooth,normand2024augmenting}.
+%
+%Specifically, it remains to be investigated how to visually represent vibrotactile textures in an immersive AR or VR context, as the visuo-haptic coupling of such grating textures is not trivial~\cite{unger2011roughness} even with real textures~\cite{klatzky2003feeling}.
+%
+The interaction between the visual and haptic sensory modalities is complex and deserves further investigation, in particular in the context of visuo-haptic AR.
+%
+Also, our study was conducted with an OST-AR headset, but the results may differ with a VST-AR headset.
+%
+More generally, we focused on the perception of roughness sensations using wearable haptics in AR \vs VR, but many other types of haptic feedback could be investigated using the same system and methodology, such as stiffness, friction, local deformations, or temperature.
diff --git a/2-perception/xr-perception/7-conclusion.tex b/2-perception/xr-perception/7-conclusion.tex
new file mode 100644
index 0000000..c3dede3
--- /dev/null
+++ b/2-perception/xr-perception/7-conclusion.tex
@@ -0,0 +1,15 @@
+\section{Conclusion}
+\sublabel{conclusion}
+
+%Summary of the research problem, method, main findings, and implications.
+
+We designed and implemented a system for rendering virtual haptic grating textures on a real tangible surface touched directly with the fingertip, using a wearable vibrotactile voice-coil device mounted on the middle phalanx of the finger. %, and allowing free explorative movements of the hand on the surface.
+%
+This tactile feedback was integrated with an immersive visual virtual environment, using an OST-AR headset, to provide users with a coherent multimodal visuo-haptic augmentation of the real environment that can be switched between an AR and a VR view.
+%
+We then investigated, in a psychophysical user study, the effect of the visual rendering of the hand and its environment on the roughness perception of the designed tactile texture augmentations: without visual augmentation (\level{Real} rendering), in AR with a realistic virtual hand superimposed on the real hand (\level{Mixed} rendering), and in VR with the same virtual hand as an avatar (\level{Virtual} rendering).
+%
+%Only the amplitude $A$ varied between the reference and comparison textures to create the different levels of roughness.
+%
+%Participants were not informed there was a reference and comparison textures, and
+No texture was represented visually, to avoid any visual influence on the perception~\cite{bergmanntiest2007haptic,normand2024augmenting,yanagisawa2015effects}.
\ No newline at end of file diff --git a/2-perception/xr-perception/figures/experiment/conditions.jpg b/2-perception/xr-perception/figures/experiment/conditions.jpg new file mode 100644 index 0000000..8a5c65c Binary files /dev/null and b/2-perception/xr-perception/figures/experiment/conditions.jpg differ diff --git a/2-perception/xr-perception/figures/experiment/mixed.jpg b/2-perception/xr-perception/figures/experiment/mixed.jpg new file mode 100644 index 0000000..10f263e Binary files /dev/null and b/2-perception/xr-perception/figures/experiment/mixed.jpg differ diff --git a/2-perception/xr-perception/figures/experiment/mixed.odg b/2-perception/xr-perception/figures/experiment/mixed.odg new file mode 100644 index 0000000..4aacb53 Binary files /dev/null and b/2-perception/xr-perception/figures/experiment/mixed.odg differ diff --git a/2-perception/xr-perception/figures/experiment/mixed.pdf b/2-perception/xr-perception/figures/experiment/mixed.pdf new file mode 100644 index 0000000..f9a7539 Binary files /dev/null and b/2-perception/xr-perception/figures/experiment/mixed.pdf differ diff --git a/2-perception/xr-perception/figures/experiment/real.jpg b/2-perception/xr-perception/figures/experiment/real.jpg new file mode 100644 index 0000000..0edf334 Binary files /dev/null and b/2-perception/xr-perception/figures/experiment/real.jpg differ diff --git a/2-perception/xr-perception/figures/experiment/real.odg b/2-perception/xr-perception/figures/experiment/real.odg new file mode 100644 index 0000000..0b139ae Binary files /dev/null and b/2-perception/xr-perception/figures/experiment/real.odg differ diff --git a/2-perception/xr-perception/figures/experiment/real.pdf b/2-perception/xr-perception/figures/experiment/real.pdf new file mode 100644 index 0000000..80eac64 Binary files /dev/null and b/2-perception/xr-perception/figures/experiment/real.pdf differ diff --git a/2-perception/xr-perception/figures/experiment/reference.jpg b/2-perception/xr-perception/figures/experiment/reference.jpg new file mode 100644 index 0000000..6673097 Binary files /dev/null and b/2-perception/xr-perception/figures/experiment/reference.jpg differ diff --git a/2-perception/xr-perception/figures/experiment/virtual.jpg b/2-perception/xr-perception/figures/experiment/virtual.jpg new file mode 100644 index 0000000..42c3a38 Binary files /dev/null and b/2-perception/xr-perception/figures/experiment/virtual.jpg differ diff --git a/2-perception/xr-perception/figures/experiment/virtual.odg b/2-perception/xr-perception/figures/experiment/virtual.odg new file mode 100644 index 0000000..55fb892 Binary files /dev/null and b/2-perception/xr-perception/figures/experiment/virtual.odg differ diff --git a/2-perception/xr-perception/figures/experiment/virtual.pdf b/2-perception/xr-perception/figures/experiment/virtual.pdf new file mode 100644 index 0000000..d231c89 Binary files /dev/null and b/2-perception/xr-perception/figures/experiment/virtual.pdf differ diff --git a/2-perception/xr-perception/figures/experiment/visual_renderings.xcf b/2-perception/xr-perception/figures/experiment/visual_renderings.xcf new file mode 100644 index 0000000..f22c709 Binary files /dev/null and b/2-perception/xr-perception/figures/experiment/visual_renderings.xcf differ diff --git a/2-perception/xr-perception/figures/method/apparatus.jpg b/2-perception/xr-perception/figures/method/apparatus.jpg new file mode 100644 index 0000000..224811c Binary files /dev/null and b/2-perception/xr-perception/figures/method/apparatus.jpg differ diff --git 
a/2-perception/xr-perception/figures/method/apparatus.odg b/2-perception/xr-perception/figures/method/apparatus.odg new file mode 100644 index 0000000..e0cec5a Binary files /dev/null and b/2-perception/xr-perception/figures/method/apparatus.odg differ diff --git a/2-perception/xr-perception/figures/method/apparatus.pdf b/2-perception/xr-perception/figures/method/apparatus.pdf new file mode 100644 index 0000000..46cb61c Binary files /dev/null and b/2-perception/xr-perception/figures/method/apparatus.pdf differ diff --git a/2-perception/xr-perception/figures/method/device.jpg b/2-perception/xr-perception/figures/method/device.jpg new file mode 100644 index 0000000..183739f Binary files /dev/null and b/2-perception/xr-perception/figures/method/device.jpg differ diff --git a/2-perception/xr-perception/figures/method/device.odg b/2-perception/xr-perception/figures/method/device.odg new file mode 100644 index 0000000..12c7631 Binary files /dev/null and b/2-perception/xr-perception/figures/method/device.odg differ diff --git a/2-perception/xr-perception/figures/method/device.pdf b/2-perception/xr-perception/figures/method/device.pdf new file mode 100644 index 0000000..cc6b96f Binary files /dev/null and b/2-perception/xr-perception/figures/method/device.pdf differ diff --git a/2-perception/xr-perception/figures/method/diagram.odg b/2-perception/xr-perception/figures/method/diagram.odg new file mode 100644 index 0000000..1bdef4b Binary files /dev/null and b/2-perception/xr-perception/figures/method/diagram.odg differ diff --git a/2-perception/xr-perception/figures/method/diagram.pdf b/2-perception/xr-perception/figures/method/diagram.pdf new file mode 100644 index 0000000..88139d6 Binary files /dev/null and b/2-perception/xr-perception/figures/method/diagram.pdf differ diff --git a/2-perception/xr-perception/figures/method/headset.jpg b/2-perception/xr-perception/figures/method/headset.jpg new file mode 100644 index 0000000..405161c Binary files /dev/null and b/2-perception/xr-perception/figures/method/headset.jpg differ diff --git a/2-perception/xr-perception/figures/method/headset.odg b/2-perception/xr-perception/figures/method/headset.odg new file mode 100644 index 0000000..b43192d Binary files /dev/null and b/2-perception/xr-perception/figures/method/headset.odg differ diff --git a/2-perception/xr-perception/figures/method/headset.pdf b/2-perception/xr-perception/figures/method/headset.pdf new file mode 100644 index 0000000..073140f Binary files /dev/null and b/2-perception/xr-perception/figures/method/headset.pdf differ diff --git a/2-perception/xr-perception/figures/method/phase_adjustment.pdf b/2-perception/xr-perception/figures/method/phase_adjustment.pdf new file mode 100644 index 0000000..d7be5d4 Binary files /dev/null and b/2-perception/xr-perception/figures/method/phase_adjustment.pdf differ diff --git a/2-perception/xr-perception/figures/results/questions_hand_distraction.odg b/2-perception/xr-perception/figures/results/questions_hand_distraction.odg new file mode 100644 index 0000000..08d0bab Binary files /dev/null and b/2-perception/xr-perception/figures/results/questions_hand_distraction.odg differ diff --git a/2-perception/xr-perception/figures/results/questions_hand_distraction.pdf b/2-perception/xr-perception/figures/results/questions_hand_distraction.pdf new file mode 100644 index 0000000..6c12320 Binary files /dev/null and b/2-perception/xr-perception/figures/results/questions_hand_distraction.pdf differ diff --git 
a/2-perception/xr-perception/figures/results/questions_hand_latency.odg b/2-perception/xr-perception/figures/results/questions_hand_latency.odg new file mode 100644 index 0000000..14b39de Binary files /dev/null and b/2-perception/xr-perception/figures/results/questions_hand_latency.odg differ diff --git a/2-perception/xr-perception/figures/results/questions_hand_latency.pdf b/2-perception/xr-perception/figures/results/questions_hand_latency.pdf new file mode 100644 index 0000000..9a634a9 Binary files /dev/null and b/2-perception/xr-perception/figures/results/questions_hand_latency.pdf differ diff --git a/2-perception/xr-perception/figures/results/questions_hand_ownership.odg b/2-perception/xr-perception/figures/results/questions_hand_ownership.odg new file mode 100644 index 0000000..db42200 Binary files /dev/null and b/2-perception/xr-perception/figures/results/questions_hand_ownership.odg differ diff --git a/2-perception/xr-perception/figures/results/questions_hand_ownership.pdf b/2-perception/xr-perception/figures/results/questions_hand_ownership.pdf new file mode 100644 index 0000000..fdf94d8 Binary files /dev/null and b/2-perception/xr-perception/figures/results/questions_hand_ownership.pdf differ diff --git a/2-perception/xr-perception/figures/results/questions_hand_reference.odg b/2-perception/xr-perception/figures/results/questions_hand_reference.odg new file mode 100644 index 0000000..9f380e4 Binary files /dev/null and b/2-perception/xr-perception/figures/results/questions_hand_reference.odg differ diff --git a/2-perception/xr-perception/figures/results/questions_hand_reference.pdf b/2-perception/xr-perception/figures/results/questions_hand_reference.pdf new file mode 100644 index 0000000..d7a548a Binary files /dev/null and b/2-perception/xr-perception/figures/results/questions_hand_reference.pdf differ diff --git a/2-perception/xr-perception/figures/results/trial_distances.odg b/2-perception/xr-perception/figures/results/trial_distances.odg new file mode 100644 index 0000000..f884ac2 Binary files /dev/null and b/2-perception/xr-perception/figures/results/trial_distances.odg differ diff --git a/2-perception/xr-perception/figures/results/trial_distances.pdf b/2-perception/xr-perception/figures/results/trial_distances.pdf new file mode 100644 index 0000000..7cd382e Binary files /dev/null and b/2-perception/xr-perception/figures/results/trial_distances.pdf differ diff --git a/2-perception/xr-perception/figures/results/trial_jnds.odg b/2-perception/xr-perception/figures/results/trial_jnds.odg new file mode 100644 index 0000000..8ad0511 Binary files /dev/null and b/2-perception/xr-perception/figures/results/trial_jnds.odg differ diff --git a/2-perception/xr-perception/figures/results/trial_jnds.pdf b/2-perception/xr-perception/figures/results/trial_jnds.pdf new file mode 100644 index 0000000..9ed8322 Binary files /dev/null and b/2-perception/xr-perception/figures/results/trial_jnds.pdf differ diff --git a/2-perception/xr-perception/figures/results/trial_predictions.odg b/2-perception/xr-perception/figures/results/trial_predictions.odg new file mode 100644 index 0000000..8f12fb1 Binary files /dev/null and b/2-perception/xr-perception/figures/results/trial_predictions.odg differ diff --git a/2-perception/xr-perception/figures/results/trial_predictions.pdf b/2-perception/xr-perception/figures/results/trial_predictions.pdf new file mode 100644 index 0000000..cc0cbe5 Binary files /dev/null and b/2-perception/xr-perception/figures/results/trial_predictions.pdf differ diff --git 
a/2-perception/xr-perception/figures/results/trial_pses.odg b/2-perception/xr-perception/figures/results/trial_pses.odg new file mode 100644 index 0000000..c855d88 Binary files /dev/null and b/2-perception/xr-perception/figures/results/trial_pses.odg differ diff --git a/2-perception/xr-perception/figures/results/trial_pses.pdf b/2-perception/xr-perception/figures/results/trial_pses.pdf new file mode 100644 index 0000000..feadcaa Binary files /dev/null and b/2-perception/xr-perception/figures/results/trial_pses.pdf differ diff --git a/2-perception/xr-perception/figures/results/trial_response_times.odg b/2-perception/xr-perception/figures/results/trial_response_times.odg new file mode 100644 index 0000000..9a86299 Binary files /dev/null and b/2-perception/xr-perception/figures/results/trial_response_times.odg differ diff --git a/2-perception/xr-perception/figures/results/trial_response_times.pdf b/2-perception/xr-perception/figures/results/trial_response_times.pdf new file mode 100644 index 0000000..48a005c Binary files /dev/null and b/2-perception/xr-perception/figures/results/trial_response_times.pdf differ diff --git a/2-perception/xr-perception/figures/results/trial_speeds.odg b/2-perception/xr-perception/figures/results/trial_speeds.odg new file mode 100644 index 0000000..80c1508 Binary files /dev/null and b/2-perception/xr-perception/figures/results/trial_speeds.odg differ diff --git a/2-perception/xr-perception/figures/results/trial_speeds.pdf b/2-perception/xr-perception/figures/results/trial_speeds.pdf new file mode 100644 index 0000000..9d385e3 Binary files /dev/null and b/2-perception/xr-perception/figures/results/trial_speeds.pdf differ diff --git a/2-perception/xr-perception/figures/teaser/teaser1.jpg b/2-perception/xr-perception/figures/teaser/teaser1.jpg new file mode 100644 index 0000000..21e86db Binary files /dev/null and b/2-perception/xr-perception/figures/teaser/teaser1.jpg differ diff --git a/2-perception/xr-perception/figures/teaser/teaser1.xcf b/2-perception/xr-perception/figures/teaser/teaser1.xcf new file mode 100644 index 0000000..5105bb5 Binary files /dev/null and b/2-perception/xr-perception/figures/teaser/teaser1.xcf differ diff --git a/2-perception/xr-perception/figures/teaser/teaser2.odg b/2-perception/xr-perception/figures/teaser/teaser2.odg new file mode 100644 index 0000000..33bf2ef Binary files /dev/null and b/2-perception/xr-perception/figures/teaser/teaser2.odg differ diff --git a/2-perception/xr-perception/figures/teaser/teaser2.pdf b/2-perception/xr-perception/figures/teaser/teaser2.pdf new file mode 100644 index 0000000..aa026d0 Binary files /dev/null and b/2-perception/xr-perception/figures/teaser/teaser2.pdf differ diff --git a/2-perception/xr-perception/figures/teaser/teaser2.xcf b/2-perception/xr-perception/figures/teaser/teaser2.xcf new file mode 100644 index 0000000..a6c31b7 Binary files /dev/null and b/2-perception/xr-perception/figures/teaser/teaser2.xcf differ diff --git a/2-perception/xr-perception/figures/teaser/teaser2_augmented.jpg b/2-perception/xr-perception/figures/teaser/teaser2_augmented.jpg new file mode 100644 index 0000000..1dab194 Binary files /dev/null and b/2-perception/xr-perception/figures/teaser/teaser2_augmented.jpg differ diff --git a/2-perception/xr-perception/figures/teaser/teaser2_real.jpg b/2-perception/xr-perception/figures/teaser/teaser2_real.jpg new file mode 100644 index 0000000..067fb3f Binary files /dev/null and b/2-perception/xr-perception/figures/teaser/teaser2_real.jpg differ diff --git 
a/2-perception/xr-perception/figures/teaser/teaser2_virtual.jpg b/2-perception/xr-perception/figures/teaser/teaser2_virtual.jpg new file mode 100644 index 0000000..83b73ca Binary files /dev/null and b/2-perception/xr-perception/figures/teaser/teaser2_virtual.jpg differ diff --git a/2-perception/xr-perception/figures/teaser/texture.odg b/2-perception/xr-perception/figures/teaser/texture.odg new file mode 100644 index 0000000..a9dfbf8 Binary files /dev/null and b/2-perception/xr-perception/figures/teaser/texture.odg differ diff --git a/2-perception/xr-perception/figures/teaser/texture.pdf b/2-perception/xr-perception/figures/teaser/texture.pdf new file mode 100644 index 0000000..d2180ab Binary files /dev/null and b/2-perception/xr-perception/figures/teaser/texture.pdf differ diff --git a/2-perception/xr-perception/xr-perception.tex b/2-perception/xr-perception/xr-perception.tex index 59876cf..508ac05 100644 --- a/2-perception/xr-perception/xr-perception.tex +++ b/2-perception/xr-perception/xr-perception.tex @@ -1,3 +1,12 @@ \mainchapter{Perception of Visual-Haptic Texture Augmentation in Augmented and Virtual Reality} -\label{ch:xr-perception} +\renewcommand{\labelprefix}{xr_perception} +\label{ch:\labelprefix} + +\input{1-introduction} +\input{2-related-work} +\input{3-method} +\input{4-experiment} +\input{5-results} +\input{6-discussion} +\input{7-conclusion}