Auto add chapter as prefix to labels
@@ -1,5 +1,5 @@
 \section{Introduction}
-\sublabel{introduction}
+\label{sec:introduction}
 
 % Delivers the motivation for your paper. It explains why you did the work you did.
 
@@ -1,5 +1,5 @@
 \section{Related Work}
-\sublabel{related_work}
+\label{sec:related_work}
 
 % Answer the following four questions: “Who else has done work with relevance to this work of yours? What did they do? What did they find? And how is your work here different?”
 
@@ -9,7 +9,7 @@ Yet visual and haptic sensations are often combined in everyday life, and it is
 
 
 \subsection{Augmenting Haptic Texture Roughness}
-\sublabel{vibrotactile_roughness}
+\label{sec:vibrotactile_roughness}
 
 When running a finger over a surface, the deformations and vibrations of the skin caused by the micro-height differences of the material induce the sensation of roughness~\autocite{klatzky2003feeling}.
 %
@@ -48,7 +48,7 @@ It remains unclear whether such vibrotactile texture augmentation is perceived t
 %In our study, we attached a voice-coil actuator to the middle phalanx of the finger and used a squared sinusoidal signal to render grating texture sensations, but we corrected its phase to allow a simple camera-based tracking and free exploration movements of the finger.
 
 \subsection{Influence of Visual Rendering on Haptic Perception}
-\sublabel{influence_visual_haptic}
+\label{sec:influence_visual_haptic}
 
 When the same object property is sensed simultaneously by vision and touch, the two modalities are integrated into a single perception.
 %
@@ -1,5 +1,5 @@
 \section{Visuo-Haptic Texture Rendering in Mixed Reality}
-\sublabel{method}
+\label{sec:method}
 
 \figwide[1]{method/diagram}{%
 Diagram of the visuo-haptic texture rendering system.
@@ -36,13 +36,13 @@ The visuo-haptic texture rendering system is based on
 \item and a modulation of the signal frequency by the estimated finger speed with a phase matching.
 \end{enumerate*}
 %
-\figref{method/diagram} shows the diagram of the interaction loop and \eqref{xr_perception:signal} the definition of the vibrotactile signal.
+\figref{method/diagram} shows the diagram of the interaction loop and \eqref{signal} the definition of the vibrotactile signal.
 %
 The system is composed of three main components: the pose estimation of the tracked real elements, the visual rendering of the virtual environment, and the vibrotactile signal generation and rendering.
 
 
 \subsection{Pose Estimation and Virtual Environment Alignment}
-\sublabel{virtual_real_alignment}
+\label{sec:virtual_real_alignment}
 
 \begin{subfigs}{setup}{%
 Visuo-haptic texture rendering system setup.
@@ -93,13 +93,13 @@ The visual rendering is achieved using the Microsoft HoloLens~2, an OST-AR heads
 %
 It was chosen over VST-AR because OST-AR only adds virtual content to the real environment, while VST-AR streams a real-time video capture of the real environment~\autocite{macedo2023occlusion}.
 %
-Indeed, one of our objectives (see \secref{xr_perception:experiment}) is to directly compare a real environment and a virtual environment that replicates it. %, rather than a video feed that introduces many supplementary visual limitations.
+Indeed, one of our objectives (see \secref{experiment}) is to directly compare a real environment and a virtual environment that replicates it. %, rather than a video feed that introduces many supplementary visual limitations.
 %
 To simulate a VR headset, a cardboard mask (with holes for sensors) is attached to the headset to block the view of the real environment (see \figref{method/headset}).
 
 
 \subsection{Vibrotactile Signal Generation and Rendering}
-\sublabel{texture_generation}
+\label{sec:texture_generation}
 
 A voice-coil actuator (HapCoil-One, Actronika) is used to display the vibrotactile signal, as it allows the frequency and amplitude of the signal to be controlled independently over time, covers a wide frequency range (\qtyrange{10}{1000}{\Hz}), and outputs the signal accurately with relatively low acceleration distortion\footnote{HapCoil-One specific characteristics are described in its data sheet: \url{https://web.archive.org/web/20240228161416/https://tactilelabs.com/wp-content/uploads/2023/11/HapCoil_One_datasheet.pdf}}.
 %
@@ -116,10 +116,10 @@ It is generated as a square wave audio signal, sampled at \qty{48}{\kilo\hertz},
 A sample $s_k$ of the audio signal at sampling time $t_k$ is given by:
 %
 \begin{subequations}
-\label{eq:\labelprefix:signal}
+\label{eq:signal}
 \begin{align}
-s(x_{f,j}, t_k) & = A \text{\,sgn} ( \sin (2 \pi \frac{\dot{x}_{f,j}}{\lambda} t_k + \phi_j) ) & \label{eq:\labelprefix:signal_speed} \\
-\phi_j & = \phi_{j-1} + 2 \pi \frac{x_{f,j} - x_{f,{j-1}}}{\lambda} t_k & \label{eq:\labelprefix:signal_phase}
+s(x_{f,j}, t_k) & = A \text{\,sgn} ( \sin (2 \pi \frac{\dot{x}_{f,j}}{\lambda} t_k + \phi_j) ) & \label{eq:signal_speed} \\
+\phi_j & = \phi_{j-1} + 2 \pi \frac{x_{f,j} - x_{f,{j-1}}}{\lambda} t_k & \label{eq:signal_phase}
 \end{align}
 \end{subequations}
 %
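The first equation above sets the square wave's instantaneous frequency to the finger speed divided by the spatial period, $f = \dot{x}_{f,j} / \lambda$, so the vibration frequency tracks the exploration speed. A worked example, using the \qty{2}{\mm} period from the user study and an exploration speed of \qty{50}{\mm\per\s} (an assumed, illustrative value, not a measurement from the study):

\[
  f = \frac{\dot{x}_{f,j}}{\lambda}
    = \frac{\qty{50}{\mm\per\s}}{\qty{2}{\mm}}
    = \qty{25}{\Hz}
\]

Halving the speed halves the frequency, which is what preserves the sensation of a constant spatial period $\lambda$ on the surface.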
@@ -133,7 +133,7 @@ This is important because it preserves the sensation of a constant spatial frequ
 %
 Note that the finger position and velocity are transformed from the camera frame $\mathcal{F}_c$ to the texture frame $\mathcal{F}_t$, with the $x$ axis aligned with the texture direction.
 %
-However, when a new finger position is estimated at time $t_j$, the phase $\phi_j$ needs to be adjusted along with the frequency to ensure the continuity of the signal, as described in \eqref{xr_perception:signal_phase}.
+However, when a new finger position is estimated at time $t_j$, the phase $\phi_j$ needs to be adjusted along with the frequency to ensure the continuity of the signal, as described in \eqref{signal}.
 %
 This approach avoids sudden changes in the actuator movement that would affect the texture perception in an uncontrolled way (see \figref{method/phase_adjustment}) and, contrary to previous work~\autocite{asano2015vibrotactile,friesen2024perceived}, it enables a free exploration of the texture by the user with no constraints on the finger speed.
 %
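What the phase matching is meant to enforce can be made explicit (one reading of the equations above, in their notation): at an update instant $t_j$, the argument of the sine must be continuous across the change of speed estimate,

\[
  2 \pi \frac{\dot{x}_{f,j}}{\lambda} t_j + \phi_j
  = 2 \pi \frac{\dot{x}_{f,j-1}}{\lambda} t_j + \phi_{j-1} ,
\]

so the updated phase $\phi_j$ absorbs the offset that the new frequency $\dot{x}_{f,j} / \lambda$ would otherwise introduce as a jump in the actuator signal.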
@@ -153,7 +153,7 @@ The tactile texture is described and rendered in this work as a one dimensional
 
 
 \subsection{System Latency}
-\sublabel{latency}
+\label{sec:latency}
 
 %As shown in \figref{method/diagram} and described above, the system includes various haptic and visual sensors and rendering devices linked by software processes for image processing, 3D rendering and audio generation.
 %
@@ -1,5 +1,5 @@
 \section{User Study}
-\sublabel{experiment}
+\label{sec:experiment}
 
 \begin{subfigswide}{renderings}{%
 The three visual rendering conditions and the experimental procedure of the two-alternative forced choice (2AFC) psychophysical study.
@@ -22,7 +22,7 @@
 \subfig[0.32][]{experiment/virtual}
 \end{subfigswide}
 
-Our visuo-haptic rendering system, described in \secref{xr_perception:method}, allows free exploration of virtual vibrotactile textures on tangible surfaces directly touched with the bare finger to simulate roughness augmentation, while the visual rendering of the hand and environment can be controlled to be in AR or VR.
+Our visuo-haptic rendering system, described in \secref{method}, allows free exploration of virtual vibrotactile textures on tangible surfaces directly touched with the bare finger to simulate roughness augmentation, while the visual rendering of the hand and environment can be controlled to be in AR or VR.
 %
 The user study aimed to investigate the effect of visual hand rendering in AR or VR on the perception of roughness texture augmentation. % of a touched tangible surface.
 %
@@ -32,7 +32,7 @@ In order not to influence the perception, as vision is an important source of in
 
 
 \subsection{Participants}
-\sublabel{participants}
+\label{sec:participants}
 
 Twenty participants were recruited for the study (16 males, 3 females, 1 preferred not to say), aged between 18 and 61 years (\median{26}{}, \iqr{6.8}{}).
 %
@@ -50,7 +50,7 @@ They all signed an informed consent form before the user study and were unaware
 
 
 \subsection{Apparatus}
-\sublabel{apparatus}
+\label{sec:apparatus}
 
 An experimental environment similar to that of \citeauthorcite{gaffary2017ar} was created to ensure a similar visual rendering in AR and VR (see \figref{renderings}).
 %
@@ -70,7 +70,7 @@ Its size was adjusted to match the real hand of the participants before the expe
 %
 %An OST-AR headset (Microsoft HoloLens~2) was chosen over a VST-AR headset because the former only adds virtual content to the real environment, while the latter streams a real-time video capture of the real environment, and one of our objectives was to directly compare a virtual environment replicating a real one, not to a video feed that introduces many other visual limitations~\autocite{macedo2023occlusion}.
 %
-The visual rendering of the virtual hand and environment is described in \secref{xr_perception:virtual_real_alignment}.
+The visual rendering of the virtual hand and environment is described in \secref{virtual_real_alignment}.
 %
 %In the \level{Virtual} rendering, a cardboard mask (with holes for sensors) was attached to the headset to block the view of the real environment and simulate a VR headset (see \figref{method/headset}).
 %
@@ -98,7 +98,7 @@ Participants sat comfortably in front of the box at a distance of \qty{30}{\cm},
 %
 %A vibrotactile voice-coil actuator (HapCoil-One, Actronika) was encased in a 3D printed plastic shell with a \qty{2}{\cm} AprilTag glued on top, and firmly attached to the middle phalanx of the right index finger of the participants using a Velcro strap.
 %
-The generation of the virtual texture and the control of the virtual hand are described in \secref{xr_perception:method}.
+The generation of the virtual texture and the control of the virtual hand are described in \secref{method}.
 %
 They also wore headphones playing pink noise to mask the sound of the voice-coil.
 %
@@ -106,7 +106,7 @@ The user study was held in a quiet room with no windows.
 
 
 \subsection{Procedure}
-\sublabel{procedure}
+\label{sec:procedure}
 
 Participants were first given written instructions about the experimental setup and procedure, the informed consent form to sign, and a demographic questionnaire.
 %
@@ -136,13 +136,13 @@ Participants were not told that there was a reference and a comparison texture.
 %
 The order of presentation was randomised and not revealed to the participants.
 %
-All textures were rendered as described in \secref{xr_perception:texture_generation} with a period $\lambda$ of \qty{2}{\mm}, but with different amplitudes $A$ to create different levels of roughness.
+All textures were rendered as described in \secref{texture_generation} with a period $\lambda$ of \qty{2}{\mm}, but with different amplitudes $A$ to create different levels of roughness.
 %
 Preliminary studies allowed us to determine a range of amplitudes that could be felt by the participants without being too uncomfortable, and the reference texture was chosen as the one with the middle amplitude.
 
 
 \subsection{Experimental Design}
-\sublabel{experimental_design}
+\label{sec:experimental_design}
 
 The user study was a within-subjects design with two factors:
 %
@@ -161,7 +161,7 @@ A total of 3 visual renderings \x 6 amplitude differences \x 2 texture presentat
 
 
 \subsection{Collected Data}
-\sublabel{collected_data}
+\label{sec:collected_data}
 
 For each trial, the \textit{Texture Choice} made by the participant as the roughest of the pair was recorded.
 %
@@ -1,8 +1,8 @@
 \section{Results}
-\sublabel{results}
+\label{sec:results}
 
 \subsection{Trial Measures}
-\sublabel{results_trials}
+\label{sec:results_trials}
 
 All measures from trials were analysed using linear mixed models (LMM) or generalised linear mixed models (GLMM) with \factor{Visual Rendering}, \factor{Amplitude Difference} and their interaction as within-participant factors, and by-participant random intercepts.
 %
@@ -16,7 +16,7 @@ Each estimate is reported with its 95\% confidence interval (CI) as follows: \ci
 
 
 \subsubsection{Discrimination Accuracy}
-\sublabel{discrimination_accuracy}
+\label{sec:discrimination_accuracy}
 
 A GLMM was fitted to the \response{Texture Choice} in the 2AFC vibrotactile texture roughness discrimination task, with by-participant random intercepts but no random slopes, and a probit link function (see \figref{results/trial_predictions}).
 %
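Written out, the model implied by this description is as follows (a sketch using notation introduced here, not taken from the authors' code): for participant $i$ on a trial with visual rendering $r$ and amplitude difference $d$,

\[
  \Phi^{-1}\!\big( \Pr(\text{comparison chosen as rougher}) \big)
  = (\beta_0 + \alpha_r) + (\beta_1 + \gamma_r)\, d + u_i ,
  \qquad u_i \sim \mathcal{N}(0, \sigma_u^2) ,
\]

where $\Phi$ is the standard normal CDF (the probit link), $\alpha_r$ and $\gamma_r$ capture the \factor{Visual Rendering} main effect and its interaction with \factor{Amplitude Difference}, and $u_i$ is the by-participant random intercept. The PSE for a given rendering is then the amplitude difference at which this probability equals 0.5, i.e. $d = -(\beta_0 + \alpha_r) / (\beta_1 + \gamma_r)$.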
@@ -54,7 +54,7 @@ All pairwise differences were statistically significant.
 
 
 \subsubsection{Response Time}
-\sublabel{response_time}
+\label{sec:response_time}
 
 An LMM analysis of variance (AOV) with by-participant random slopes for \factor{Visual Rendering}, and a log transformation (as \response{Response Time} measures were gamma distributed) indicated a statistically significant effect on \response{Response Time} of \factor{Visual Rendering} (\anova{2}{18}{6.2}, \p{0.009}, see \figref{results/trial_response_times}).
 %
@@ -64,7 +64,7 @@ The \level{Mixed} rendering was in between (\geomean{1.56}{s} \ci{1.49}{1.63}).
 
 
 \subsubsection{Finger Position and Speed}
-\sublabel{finger_position_speed}
+\label{sec:finger_position_speed}
 
 The frames analysed were those in which the participants actively touched the comparison textures with a finger speed greater than \qty{1}{\mm\per\second}.
 %
@@ -96,7 +96,7 @@ All pairwise differences were statistically significant: \level{Real} \vs \level
 
 
 \subsection{Questionnaires}
-\sublabel{questions}
+\label{sec:questions}
 
 %\figref{results/question_heatmaps} shows the median and interquartile range (IQR) ratings to the questions in \tabref{questions} and to the NASA-TLX questionnaire.
 %
@@ -1,5 +1,5 @@
 \section{Discussion}
-\sublabel{discussion}
+\label{sec:discussion}
 
 %Interpret the findings in the results, answer the problem posed in the introduction, contrast with previous articles, and draw possible implications. Give the limitations of the study.
 
@@ -30,9 +30,9 @@ The \level{Mixed} rendering, displaying both the real and virtual hands, was alw
 %
 This suggests that touching a virtual vibrotactile texture on a tangible surface with a virtual hand in VR is different from touching it with one's own hand: users were more cautious or less confident in their exploration in VR.
 %
-This does not seem to be due to the realism of the virtual hand or environment, nor to the control of the virtual hand, which were all rated high to very high by the participants (see \secref{xr_perception:questions}) in both the \level{Mixed} and \level{Virtual} renderings.
+This does not seem to be due to the realism of the virtual hand or environment, nor to the control of the virtual hand, which were all rated high to very high by the participants (see \secref{questions}) in both the \level{Mixed} and \level{Virtual} renderings.
 %
-Very interestingly, the evaluation of the vibrotactile device and textures was also the same across the visual renderings, with a very high sensation of control, good realism and a very low perceived latency of the textures (see \secref{xr_perception:questions}).
+Very interestingly, the evaluation of the vibrotactile device and textures was also the same across the visual renderings, with a very high sensation of control, good realism and a very low perceived latency of the textures (see \secref{questions}).
 %
 However, the perceived latency of the virtual hand (\response{Hand Latency} question) seems to be related to the perceived roughness of the textures (with the PSEs).
 %
@@ -1,5 +1,5 @@
 \section{Conclusion}
-\sublabel{conclusion}
+\label{sec:conclusion}
 
 %Summary of the research problem, method, main findings, and implications.
 
@@ -1,7 +1,5 @@
 \mainchapter{Perception of Visual-Haptic Texture Augmentation in Augmented and Virtual Reality}
 
-\renewcommand{\labelprefix}{xr_perception}
-\label{ch:\labelprefix}
+\mainlabel{xr_perception}
 
 \input{1-introduction}
 \input{2-related-work}
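The definitions behind the new \mainlabel mechanism are not part of this commit (they live in the class or preamble). A minimal, hypothetical sketch of how such auto-prefixing macros could be written, assuming prefixed keys of the form xr_perception:sec:introduction:

\documentclass{article}
\makeatletter
\newcommand{\labelprefix}{}% current chapter prefix, empty by default
\let\orig@label\label      % keep LaTeX's original \label

% \mainlabel{xr_perception}: record the prefix and label the chapter itself,
% replacing the old \renewcommand{\labelprefix}{...} + \label{ch:\labelprefix}.
\newcommand{\mainlabel}[1]{%
  \renewcommand{\labelprefix}{#1}%
  \orig@label{ch:#1}%
}

% Auto-prefix every subsequent \label: \label{sec:introduction} is written
% to the .aux file as xr_perception:sec:introduction.
\renewcommand{\label}[1]{\orig@label{\labelprefix:#1}}

% Prefix-aware reference helper: \secref{method} then resolves without
% spelling out the chapter prefix, matching the + lines of this commit.
\newcommand{\secref}[1]{Section~\ref{\labelprefix:sec:#1}}
\makeatother

\begin{document}
\mainlabel{xr_perception}
\section{Introduction}\label{sec:introduction}
See \secref{introduction}.
\end{document}

Under the same assumption, \eqref{signal} and \figref{method/diagram} would be handled by analogous helpers over the eq: and fig: key families.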