Split xr-perception chapter
2-perception/vhar-system/1-introduction.tex (new file, +59)
@@ -0,0 +1,59 @@
% Even before manipulating a visual representation to induce a haptic sensation, shifts and latencies between user input and co-localised visuo-haptic feedback can be experienced differently in AR and VR, which we aim to investigate in this work.

%Imagine you're an archaeologist or in a museum, and you want to examine an ancient object.
%
%But it is too fragile to touch directly.
%
%What if you could still grasp it and manipulate it through a tangible object in your hand, whose visual appearance has been modified using Augmented Reality (AR)?
%
%And what if you could also feel its shape or texture?
%
%Such tactile augmentation is made possible by wearable haptic devices, which are worn directly on the finger or hand and can provide a variety of sensations on the skin, while being small, light and discreet \cite{pacchierotti2017wearable}.
%
Wearable haptic devices, worn directly on the finger or hand, have been used to render a variety of tactile sensations on virtual objects seen in VR \cite{choi2018claw,detinguy2018enhancing,pezent2019tasbi} or AR \cite{maisto2017evaluation,meli2018combining,teng2021touch}.
%
They have also been used to alter the perceived roughness, stiffness, friction, and local shape of real tangible objects \cite{asano2015vibrotactile,detinguy2018enhancing,salazar2020altering}.
%
Such techniques place the actuator \emph{close} to the point of contact with the real environment, leaving the user free to directly touch the tangible object.
%
This combined use of wearable haptics with tangible objects enables a haptic \emph{augmented} reality (HAR) \cite{bhatia2024augmenting} that can provide rich and varied haptic feedback.

The degree of reality/virtuality of the visual and haptic sensory modalities can be varied independently, yet wearable haptic AR has been little explored in combination with visual VR and AR \cite{choi2021augmenting}.
%
Although AR and VR are closely related, they have significant differences that can affect the user experience \cite{genay2021virtual,macedo2023occlusion}.
%
%By integrating visual virtual content into the real environment, AR keeps the hand of the user, the worn haptic devices and the touched tangibles visible, unlike VR, where they are hidden by immersing the user in a visual virtual environment.
%
%Current AR systems also suffer from display and rendering limitations not present in VR, affecting the user experience with virtual content that may be less realistic or inconsistent with the real augmented environment \cite{kim2018revisiting,macedo2023occlusion}.
%
It therefore seems necessary to investigate and understand the potential effect of these differences in visual rendering on the perception of haptically augmented tangible objects.
%
Previous works have shown, for example, that the stiffness of a virtual piston rendered with a force-feedback haptic system is perceived as less rigid in AR than in VR \cite{gaffary2017ar}, or when the visual rendering is ahead of the haptic rendering \cite{diluca2011effects,knorlein2009influence}.
%
%Taking our example from the beginning of this introduction, you now want to learn more about the context of the discovery of the ancient object or its use at the time of its creation by immersing yourself in a virtual environment in VR.
%
%But how different is the perception of the haptic augmentation in AR compared to VR, with a virtual hand instead of the real hand?

The goal of this paper is to study the role of the visual rendering of the hand (real or virtual) and of its environment (AR or VR) in the perception of a tangible surface whose texture is augmented with a wearable vibrotactile device worn on the finger.
%
We focus on the perception of roughness, one of the main tactile sensations of materials \cite{baumgartner2013visual,hollins1993perceptual,okamoto2013psychophysical} and one of the most studied haptic augmentations \cite{asano2015vibrotactile,culbertson2014modeling,friesen2024perceived,strohmeier2017generating,ujitoko2019modulating}.
%
By understanding how these visual factors influence the perception of haptically augmented tangible objects, the many existing wearable haptic systems that have not yet been fully explored in AR can be better applied, and new visuo-haptic renderings adapted to AR can be designed.

Our contributions are:
%
\begin{itemize}
\item A system for rendering virtual vibrotactile roughness textures in real time on a tangible surface touched directly with the finger, integrated with an immersive visual AR/VR headset to provide a coherent multimodal visuo-haptic augmentation of the real environment.
\item A psychophysical study with 20 participants evaluating the perception of these virtual roughness textures in three visual rendering conditions: without visual augmentation, with a realistic virtual hand rendering in AR, and with the same virtual hand in VR.
\end{itemize}
%First, we present a system for rendering virtual vibrotactile textures in real time without constraints on hand movements, integrated with an immersive visual AR/VR headset to provide a coherent multimodal visuo-haptic augmentation of the real environment.
%
%An experimental setup is then presented to compare haptic roughness augmentation with an optical AR headset (Microsoft HoloLens~2) that can be transformed into a VR headset using a cardboard mask.
%
%We then conduct a psychophysical study with 20 participants, where various virtual haptic textures on a tangible surface directly touched with the finger are compared in a two-alternative forced choice (2AFC) task in three visual rendering conditions: (1) without visual augmentation, (2) with a realistic virtual hand rendering in AR, and (3) with the same virtual hand in VR.

%\fig[1]{teaser/teaser2}{%
%Vibrotactile textures were rendered in real time on a real surface using a wearable vibrotactile device worn on the finger.
%%
%Participants explored this haptic roughness augmentation with (Real) their real hand alone, (Mixed) a realistic virtual hand overlay in AR, and (Virtual) the same virtual hand in VR.
%}
@@ -1,33 +1,12 @@
-\section{Visuo-Haptic Texture Rendering in Mixed Reality}
-\label{method}
-
-\figwide[1]{method/diagram}{%
-Diagram of the visuo-haptic texture rendering system.
-%
-Fiducial markers attached to the voice-coil actuator and to tangible surfaces to track are captured by a camera.
-%
-The positions and rotations (the poses) ${}^c\mathbf{T}_i$, $i=1..n$ of the $n$ defined markers in the camera frame $\mathcal{F}_c$ are estimated, then filtered with an adaptive low-pass filter.
-%
-%These poses are transformed to the AR/VR headset frame $\mathcal{F}_h$ and applied to the virtual model replicas to display them superimposed and aligned with the real environment.
-These poses are used to move and display the virtual model replicas aligned with the real environment.
-%
-A collision detection algorithm detects a contact of the virtual hand with the virtual textures.
-%
-If so, the velocity of the finger marker ${}^c\dot{\mathbf{X}}_f$ is estimated using discrete derivative of position and adaptive low-pass filtering, then transformed onto the texture frame $\mathcal{F}_t$.
-%
-The vibrotactile signal $s_k$ is generated by modulating the finger velocity ${}^t\hat{\dot{X}}_f$ in the texture direction with the texture period $\lambda$ (\eqref{signal}).
-%
-The signal is sampled at 48~kHz and sent to the voice-coil actuator via an audio amplifier.
-%
-All computation steps except signal sampling are performed at 60~Hz and in separate threads to parallelize them.
-}
-
%With a vibrotactile actuator attached to a hand-held device or directly on the finger, it is possible to simulate virtual haptic sensations as vibrations, such as texture, friction or contact vibrations \cite{culbertson2018haptics}.
%
In this section, we describe a system for rendering vibrotactile roughness textures in real time, on any tangible surface touched directly with the index fingertip, with no constraints on hand movement, using a simple camera to track the finger pose.
%
We also describe how to pair this tactile rendering with an immersive AR or VR headset visual display to provide a coherent, multimodal visuo-haptic augmentation of the real environment.

+\section{Principle}
+\label{principle}
+
The visuo-haptic texture rendering system is based on
%
\begin{enumerate*}[label=(\arabic*)]
@@ -36,27 +15,37 @@ The visuo-haptic texture rendering system is based on
\item and a modulation of the signal frequency by the estimated finger speed with phase matching.
\end{enumerate*}
%
-\figref{method/diagram} shows the diagram of the interaction loop and \eqref{signal} the definition of the vibrotactile signal.
+\figref{diagram} shows the diagram of the interaction loop and \eqref{signal} the definition of the vibrotactile signal.
%
The system is composed of three main components: the pose estimation of the tracked real elements, the visual rendering of the virtual environment, and the vibrotactile signal generation and rendering.

+\figwide[1]{diagram}{Diagram of the visuo-haptic texture rendering system. }[
+Fiducial markers attached to the voice-coil actuator and to the tangible surfaces to track are captured by a camera.
+The positions and rotations (the poses) ${}^c\mathbf{T}_i$, $i = 1, \dots, n$ of the $n$ defined markers in the camera frame $\mathcal{F}_c$ are estimated, then filtered with an adaptive low-pass filter.
+%These poses are transformed to the AR/VR headset frame $\mathcal{F}_h$ and applied to the virtual model replicas to display them superimposed and aligned with the real environment.
+These poses are used to move and display the virtual model replicas aligned with the real environment.
+A collision detection algorithm detects a contact of the virtual hand with the virtual textures.
+If so, the velocity of the finger marker ${}^c\dot{\mathbf{X}}_f$ is estimated using a discrete derivative of the position and adaptive low-pass filtering, then transformed into the texture frame $\mathcal{F}_t$.
+The vibrotactile signal $s_k$ is generated by modulating the finger velocity ${}^t\hat{\dot{X}}_f$ in the texture direction with the texture period $\lambda$ (\eqref{signal}).
+The signal is sampled at 48~kHz and sent to the voice-coil actuator via an audio amplifier.
+All computation steps except signal sampling are performed at 60~Hz and in separate threads to parallelize them.
+]

-\subsection{Pose Estimation and Virtual Environment Alignment}
+\section{Pose Estimation and Virtual Environment Alignment}
\label{virtual_real_alignment}

-\begin{subfigs}{setup}{Visuo-haptic texture rendering system setup}[%
+\begin{subfigs}{setup}{Visuo-haptic texture rendering system setup. }[][
\item HapCoil-One voice-coil actuator with a fiducial marker on top, attached to a participant's right index finger.
\item HoloLens~2 AR headset, the two cardboard masks used to switch between the real and virtual environments with the same field of view, and the 3D-printed piece for attaching the masks to the headset.
\item User exploring a virtual vibrotactile texture on a tangible sheet of paper.
]
-\hidesubcaption
-\subfig[0.325]{method/device}
-\subfig[0.65]{method/headset}
+\subfig[0.325]{device}
+\subfig[0.65]{headset}
\par\vspace{2.5pt}
-\subfig[0.992]{method/apparatus}
+\subfig[0.992]{apparatus}
\end{subfigs}

-A fiducial marker (AprilTag) is glued to the top of the actuator (\figref{method/device}) to track the finger pose with a camera (StreamCam, Logitech), which is placed above the experimental setup and captures \qtyproduct{1280 x 720}{px} images at \qty{60}{\hertz} (\figref{method/apparatus}).
+A fiducial marker (AprilTag) is glued to the top of the actuator (\figref{device}) to track the finger pose with a camera (StreamCam, Logitech), which is placed above the experimental setup and captures \qtyproduct{1280 x 720}{px} images at \qty{60}{\hertz} (\figref{apparatus}).
%
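The adaptive low-pass filter used on the marker poses is not detailed here; a common choice for such tracking, sketched below purely as an assumption, is a speed-adaptive exponential filter in the style of the One Euro filter:
\begin{equation*}
    \hat{x}_j = \alpha_j x_j + \left(1 - \alpha_j\right)\hat{x}_{j-1},
    \qquad
    \alpha_j = \frac{1}{1 + \tau_j / T_e},
    \qquad
    \tau_j = \frac{1}{2\pi \left(f_{c,\mathrm{min}} + \beta \lvert \hat{\dot{x}}_j \rvert\right)},
\end{equation*}
where $T_e$ is the sampling period (here $1/60$~s) and the parameters $f_{c,\mathrm{min}}$ and $\beta$ trade jitter reduction at rest against lag during fast motion.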
Other markers are placed on the tangible surfaces to be augmented, in order to estimate the relative position of the finger with respect to these surfaces (\figref{setup}).
%
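This relative position follows from the two camera-frame estimates by composing homogeneous transforms; in our notation, with $f$ the finger marker and $s$ a surface marker,
\begin{equation*}
    {}^s\mathbf{T}_f = \left({}^c\mathbf{T}_s\right)^{-1} {}^c\mathbf{T}_f .
\end{equation*}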
@@ -91,15 +80,14 @@ It was chosen over VST-AR because OST-AR only adds virtual content to the real e
%
Indeed, one of our objectives (\secref{experiment}) is to directly compare a virtual environment with the real environment it replicates. %, rather than a video feed that introduces many supplementary visual limitations.
%
-To simulate a VR headset, a cardboard mask (with holes for sensors) is attached to the headset to block the view of the real environment (\figref{method/headset}).
+To simulate a VR headset, a cardboard mask (with holes for sensors) is attached to the headset to block the view of the real environment (\figref{headset}).

-\subsection{Vibrotactile Signal Generation and Rendering}
+\section{Vibrotactile Signal Generation and Rendering}
\label{texture_generation}

A voice-coil actuator (HapCoil-One, Actronika) is used to display the vibrotactile signal, as it allows the frequency and amplitude of the signal to be controlled independently over time, covers a wide frequency range (\qtyrange{10}{1000}{\Hz}), and outputs the signal accurately with relatively low acceleration distortion\footnote{HapCoil-One specific characteristics are described in its data sheet: \url{https://web.archive.org/web/20240228161416/https://tactilelabs.com/wp-content/uploads/2023/11/HapCoil_One_datasheet.pdf}}.
%
-The voice-coil actuator is encased in a 3D-printed plastic shell and firmly attached to the middle phalanx of the user's index finger with a Velcro strap, to enable the fingertip to directly touch the environment (\figref{method/device}).
+The voice-coil actuator is encased in a 3D-printed plastic shell and firmly attached to the middle phalanx of the user's index finger with a Velcro strap, to enable the fingertip to directly touch the environment (\figref{device}).
%
The actuator is driven by a Class~D audio amplifier (XY-502 / TPA3116D2, Texas Instruments). %, which has proven to be an effective type of amplifier for driving moving-coil actuators \cite{mcmahan2014dynamic}.
%
@@ -131,7 +119,7 @@ Note that the finger position and velocity are transformed from the camera frame
%
However, when a new finger position is estimated at time $t_j$, the phase $\phi_j$ must also be adjusted along with the frequency to ensure continuity of the signal, as described in \eqref{signal}.
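For instance, with a carrier of the form $\sin\left(2\pi f_j \left(t - t_j\right) + \phi_j\right)$, continuity at each update is obtained by accumulating the phase of the previous segment (a sketch in our notation; the authoritative definition remains \eqref{signal}):
\begin{equation*}
    \phi_j = \phi_{j-1} + 2\pi f_{j-1} \left(t_j - t_{j-1}\right) \pmod{2\pi},
\end{equation*}
so that the signal value is unchanged at $t_j$ while the frequency switches to $f_j$.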
%
-This approach avoids sudden changes in the actuator movement that would affect the texture perception in an uncontrolled way (\figref{method/phase_adjustment}) and, contrary to previous work \cite{asano2015vibrotactile,friesen2024perceived}, it enables a free exploration of the texture by the user, with no constraints on the finger speed.
+This approach avoids sudden changes in the actuator movement that would affect the texture perception in an uncontrolled way (\figref{phase_adjustment}) and, contrary to previous work \cite{asano2015vibrotactile,friesen2024perceived}, it enables a free exploration of the texture by the user, with no constraints on the finger speed.
%
Finally, following \textcite{ujitoko2019modulating}, a square wave is chosen over a sine wave to get a rendering closer to a real grating texture, with the sensation of crossing edges, and because the roughness perception of sine-wave textures has been shown not to reproduce that of real grating textures \cite{unger2011roughness}.
%
@@ -139,19 +127,17 @@ Finally, as \textcite{ujitoko2019modulating}, a square wave is chosen over a sin
%
The tactile texture is described and rendered in this work as a one-dimensional signal, by integrating the finger movement relative to the texture along a single direction; it is easily extended to a two-dimensional texture by generating a second signal for the orthogonal direction and summing the two signals in the rendering.

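Under that extension, with $u$ and $v$ denoting the two orthogonal texture directions (hypothetical notation), the rendered signal would simply be the sum
\begin{equation*}
    s_k = s_k^{(u)} + s_k^{(v)},
\end{equation*}
each component being generated from the corresponding velocity component as described above.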
-\fig[1]{method/phase_adjustment}{%
+\fig[0.7]{phase_adjustment}{
Change in frequency of a sinusoidal signal with (light green) and without (dark green) phase matching.
-%
+}[
The phase matching ensures continuity of the signal and avoids glitches in the rendering.
-%
A sinusoidal signal is shown here for clarity, but a different waveform, such as a square wave, will give a similar effect.
-}
+]

-\subsection{System Latency}
+\section{System Latency}
\label{latency}

-%As shown in \figref{method/diagram} and described above, the system includes various haptic and visual sensors and rendering devices linked by software processes for image processing, 3D rendering and audio generation.
+%As shown in \figref{diagram} and described above, the system includes various haptic and visual sensors and rendering devices linked by software processes for image processing, 3D rendering and audio generation.
%
Because the chosen AR headset is a standalone device (like most current AR/VR headsets) and cannot directly control the sound card and haptic actuator, the image capture, pose estimation and audio signal generation steps are performed on an external computer.
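Note that at the 48~kHz audio rate and the 60~Hz update rate, each pose update spans $48000 / 60 = 800$ audio samples, so the signal generation must bridge consecutive updates smoothly; this is what the phase matching of \secref{texture_generation} is designed to provide.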
%
2-perception/vhar-system/6-conclusion.tex (new file, +7)
@@ -0,0 +1,7 @@
\section{Conclusion}
\label{conclusion}

%Summary of the research problem, method, main findings, and implications.

We designed and implemented a system for rendering virtual haptic grating textures on a real tangible surface touched directly with the fingertip, using a wearable vibrotactile voice-coil device mounted on the middle phalanx of the finger. %, and allowing free explorative movements of the hand on the surface.
This tactile feedback was integrated with an immersive visual virtual environment, using an OST-AR headset, to provide users with a coherent multimodal visuo-haptic augmentation of the real environment that can be switched between an AR and a VR view.
(Three image files moved unchanged: 4.3 MiB, 994 KiB and 1.7 MiB.)
2-perception/vhar-system/vhar-system.tex (new file, +8)
@@ -0,0 +1,8 @@
\chapter{Visuo-Haptic Texture Augmentation in Mixed Reality}
\mainlabel{vhar_system}

\chaptertoc

\input{1-introduction}
\input{2-method}
\input{6-conclusion}
@@ -1,16 +1,5 @@
-\section{Introduction}
-\label{introduction}
-
-% Delivers the motivation for your paper. It explains why you did the work you did.
-
% Insist on the advantage of wearable : augment any surface see bau2012revel

-\fig[1]{teaser/teaser2}{%
-Vibrotactile textures were rendered in real time on a real surface using a wearable vibrotactile device worn on the finger.
-%
-Participants explored this haptic roughness augmentation with (Real) their real hand alone, (Mixed) a realistic virtual hand overlay in AR, and (Virtual) the same virtual hand in VR.
-}
-
% Even before manipulating a visual representation to induce a haptic sensation, shifts and latencies between user input and co-localised visuo-haptic feedback can be experienced differently in AR and VR, which we aim to investigate in this work.

%Imagine you're an archaeologist or in a museum, and you want to examine an ancient object.
@@ -64,4 +53,9 @@ Our contributions are:
%An experimental setup is then presented to compare haptic roughness augmentation with an optical AR headset (Microsoft HoloLens~2) that can be transformed into a VR headset using a cardboard mask.
%
%We then conduct a psychophysical study with 20 participants, where various virtual haptic textures on a tangible surface directly touched with the finger are compared in a two-alternative forced choice (2AFC) task in three visual rendering conditions: (1) without visual augmentation, (2) with a realistic virtual hand rendering in AR, and (3) with the same virtual hand in VR.

+\fig[1]{teaser/teaser2}{%
+Vibrotactile textures were rendered in real time on a real surface using a wearable vibrotactile device worn on the finger.
+%
+Participants explored this haptic roughness augmentation with (Real) their real hand alone, (Mixed) a realistic virtual hand overlay in AR, and (Virtual) the same virtual hand in VR.
+}
@@ -25,7 +25,6 @@ In a two-alternative forced choice (2AFC) task, participants compared the roughn
%
Because vision is an important source of information and influence for the perception of texture \cite{bergmanntiest2007haptic,yanagisawa2015effects,vardar2019fingertip}, the touched surface was kept a visually uniform white so as not to influence the perception; thus only the visual appearance of the hand and the surrounding environment changed.

-
\subsection{Participants}
\label{participants}

|
%
They all signed an informed consent form before the user study and were unaware of its purpose.

-
\subsection{Apparatus}
\label{apparatus}

@@ -99,7 +97,6 @@ They also wore headphones with a pink noise masking the sound of the voice-coil.
%
The user study was held in a quiet room with no windows.

-
\subsection{Procedure}
\label{procedure}

@@ -135,7 +132,6 @@ All textures were rendered as described in \secref{texture_generation} with peri
%
Preliminary studies allowed us to determine a range of amplitudes that participants could feel without discomfort, and the reference texture was chosen as the one with the middle amplitude.

-
\subsection{Experimental Design}
\label{experimental_design}

@@ -154,7 +150,6 @@ Within each condition, the order of presentation of the reference and comparison
%
A total of 3 visual renderings \x 6 amplitude differences \x 2 texture presentation orders \x 3 repetitions = 108 trials were performed by each participant.

-
\subsection{Collected Data}
\label{collected_data}

@@ -14,7 +14,6 @@ Post-hoc pairwise comparisons were performed using the Tukey's Honest Significan
%
Each estimate is reported with its 95\% confidence interval (CI) as follows: \ci{\textrm{lower limit}}{\textrm{upper limit}}.

-
\subsubsection{Discrimination Accuracy}
\label{discrimination_accuracy}

@@ -57,7 +56,6 @@ Participants took longer on average to respond with the \level{Virtual} renderin
%
The \level{Mixed} rendering was in between (\geomean{1.56}{s} \ci{1.49}{1.63}).

-
\subsubsection{Finger Position and Speed}
\label{finger_position_speed}

@@ -75,11 +73,11 @@ All pairwise differences were statistically significant: \level{Real} \vs \level
%
%This means that within the same time window on the same surface, participants explored the comparison texture on average at a greater distance and at a higher speed when in the real environment without visual representation of the hand (\level{Real} condition) than when in VR (\level{Virtual} condition).

-\begin{subfigs}{results_finger}{%
-Boxplots and geometric means of response time at the end of a trial, and finger position and finger speed measures when exploring the comparison texture, with pairwise Tukey's HSD tests: * is \pinf{0.05}, ** is \pinf{0.01} and *** is \pinf{0.001}.
-}[%
-\item Response time of a trial.
-\item Distance traveled by the finger in a trial.
+\begin{subfigs}{results_finger}{Results of the performance metrics for the rendering condition. }[
+Boxplots and geometric means with bootstrap 95~\% confidence intervals, with pairwise Tukey's HSD tests: * is \pinf{0.05}, ** is \pinf{0.01} and *** is \pinf{0.001}.
+][
+\item Response time at the end of a trial.
+\item Distance travelled by the finger in a trial.
\item Speed of the finger in a trial.
]
\subfig[0.32]{results/trial_response_times}
@@ -87,7 +85,6 @@ All pairwise differences were statistically significant: \level{Real} \vs \level
\subfig[0.32]{results/trial_speeds}
\end{subfigs}

-
\subsection{Questionnaires}
\label{questions}

@@ -95,7 +92,7 @@ All pairwise differences were statistically significant: \level{Real} \vs \level
%
Friedman tests were employed to compare the ratings to the questions (\tabref{questions}), with post-hoc Wilcoxon signed-rank tests and Holm-Bonferroni adjustment, except for the questions regarding the virtual hand, which were directly compared with Wilcoxon signed-rank tests.
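As an illustration only, this pipeline could look as follows in Python with scipy and statsmodels (variable names and placeholder data are ours, not the authors' analysis script):

# Hypothetical sketch of the ratings analysis; placeholder data, not the study's.
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# One row per participant (20), one column per rendering (Real, Mixed, Virtual).
ratings = rng.integers(1, 6, size=(20, 3))

# Omnibus Friedman test across the three renderings.
stat, p = friedmanchisquare(ratings[:, 0], ratings[:, 1], ratings[:, 2])
if p < 0.05:
    # Post-hoc pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment.
    pairs = [(0, 1), (0, 2), (1, 2)]
    p_raw = [wilcoxon(ratings[:, a], ratings[:, b]).pvalue for a, b in pairs]
    reject, p_adj, _, _ = multipletests(p_raw, method="holm")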
%
-\figref{question_plots} shows these ratings for questions where statistically significant differences were found (results are shown as mean $\pm$ standard deviation):
+\figref{results_questions} shows these ratings for questions where statistically significant differences were found (results are shown as mean $\pm$ standard deviation):
%
\begin{itemize}
\item \response{Hand Ownership}: participants felt the virtual hand as their own only slightly with the \level{Mixed} rendering (\num{2.3 +- 1.0}) but quite strongly with the \level{Virtual} rendering (\num{3.5 +- 0.9}, \pinf{0.001}).
@@ -125,9 +122,14 @@ The vibrations were felt a slightly weak overall (\response{Vibration Strength},
% (Right) Load Index (NASA-TLX) questionnaire (lower values are better).
%}

-\begin{subfigs}{question_plots}{%
-Boxplots of responses to questions with significant differences and pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment: * is \pinf{0.05}, ** is \pinf{0.01} and *** is \pinf{0.001}.
-}
+\begin{subfigs}{results_questions}{Boxplots of the questionnaire results for the virtual hand renderings. }[
+Pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment: * is \pinf{0.05}, ** is \pinf{0.01} and *** is \pinf{0.001}.
+][
+\item Hand ownership.
+\item Hand latency.
+\item Hand reference.
+\item Hand distraction.
+]
\subfig[0.24]{results/questions_hand_ownership}
\subfig[0.24]{results/questions_hand_latency}
\subfig[0.24]{results/questions_hand_reference}
@@ -4,7 +4,6 @@
\chaptertoc

\input{1-introduction}
-\input{2-method}
\input{3-experiment}
\input{4-results}
\input{5-discussion}
main.tex (+1)
@@ -56,6 +56,7 @@
\importchapter{1-introduction/related-work}{related-work}

\import{2-perception}{perception}
+\importchapter{2-perception/vhar-system}{vhar-system}
\importchapter{2-perception/xr-perception}{xr-perception}
\importchapter{2-perception/ar-textures}{ar-textures}
