\section{Visuo-Haptic Texture Rendering in Mixed Reality}
\label{sec:method}

\figwide[1]{method/diagram}{%
Diagram of the visuo-haptic texture rendering system.

The visuo-haptic texture rendering system is based on
\item and a modulation of the signal frequency by the estimated finger speed, with phase matching.
\end{enumerate*}
%
\figref{method/diagram} shows the diagram of the interaction loop and \eqref{signal} the definition of the vibrotactile signal.
%
The system is composed of three main components: the pose estimation of the tracked real elements, the visual rendering of the virtual environment, and the vibrotactile signal generation and rendering.
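
To make the data flow concrete, the following minimal Python sketch outlines one iteration of such an interaction loop; the component and function names are illustrative placeholders, not those of the actual implementation.
\begin{verbatim}
def interaction_loop(tracker, renderer, haptics):
    # Minimal sketch of the interaction loop (illustrative only);
    # each step stands for one of the three main components.
    while True:
        # 1. Pose estimation of the tracked real elements.
        poses = tracker.estimate_poses()

        # 2. Visual rendering of the virtual environment,
        #    aligned with the estimated poses.
        renderer.draw(poses)

        # 3. Vibrotactile signal generation from the estimated
        #    finger position and speed on the texture.
        haptics.update_signal(poses.finger_position,
                              poses.finger_speed)
\end{verbatim}
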
\subsection{Pose Estimation and Virtual Environment Alignment}
\label{sec:virtual_real_alignment}

\begin{subfigs}{setup}{%
Visuo-haptic texture rendering system setup.

The visual rendering is achieved using the Microsoft HoloLens~2, an OST-AR headset.
%
It was chosen over VST-AR because OST-AR only adds virtual content to the real environment, while VST-AR streams a real-time video capture of the real environment~\autocite{macedo2023occlusion}.
%
Indeed, one of our objectives (see \secref{experiment}) is to directly compare a virtual environment with the real one it replicates. %, rather than a video feed that introduces many supplementary visual limitations.
%
To simulate a VR headset, a cardboard mask (with holes for sensors) is attached to the headset to block the view of the real environment (see \figref{method/headset}).
\subsection{Vibrotactile Signal Generation and Rendering}
\label{sec:texture_generation}

A voice-coil actuator (HapCoil-One, Actronika) is used to display the vibrotactile signal, as it allows the frequency and amplitude of the signal to be controlled independently over time, covers a wide frequency range (\qtyrange{10}{1000}{\Hz}), and outputs the signal accurately with relatively low acceleration distortion\footnote{HapCoil-One specific characteristics are described in its data sheet: \url{https://web.archive.org/web/20240228161416/https://tactilelabs.com/wp-content/uploads/2023/11/HapCoil_One_datasheet.pdf}}.
%
It is generated as a square wave audio signal, sampled at \qty{48}{\kilo\hertz}.
%
A sample $s_k$ of the audio signal at sampling time $t_k$ is given by:
%
\begin{subequations}
\label{eq:signal}
\begin{align}
s(x_{f,j}, t_k) & = A \operatorname{sgn}\!\left( \sin\!\left( 2 \pi \frac{\dot{x}_{f,j}}{\lambda} t_k + \phi_j \right) \right) & \label{eq:signal_speed} \\
\phi_j & = \phi_{j-1} + 2 \pi \frac{x_{f,j} - x_{f,j-1}}{\lambda} t_k & \label{eq:signal_phase}
\end{align}
\end{subequations}
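%
For example, with a texture wavelength of $\lambda = \qty{2}{\mm}$ and a finger speed of $\dot{x}_{f,j} = \qty{100}{\mm\per\s}$ (illustrative values, not parameters of the system), the square wave has an instantaneous frequency of $\dot{x}_{f,j} / \lambda = \qty{50}{\Hz}$.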
%
This is important because it preserves the sensation of a constant spatial frequency.
%
Note that the finger position and velocity are transformed from the camera frame $\mathcal{F}_c$ to the texture frame $\mathcal{F}_t$, with the $x$ axis aligned with the texture direction.
%
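A minimal sketch of this change of frame, assuming the pose estimation provides a rigid transform (rotation $R_{ct}$ and origin $o_{ct}$ of $\mathcal{F}_t$ expressed in $\mathcal{F}_c$); the names are illustrative, not those of the actual implementation.
\begin{verbatim}
import numpy as np

def to_texture_frame(p_c, v_c, R_ct, o_ct):
    """Express finger position p_c and velocity v_c (camera frame)
    in the texture frame, whose x axis is the texture direction."""
    p_t = R_ct.T @ (p_c - o_ct)   # position in texture frame
    v_t = R_ct.T @ v_c            # velocity in texture frame
    return p_t[0], v_t[0]         # x components: x_f and its derivative
\end{verbatim}
%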
However, when a new finger position is estimated at time $t_j$, the phase $\phi_j$ needs to be adjusted along with the frequency to ensure the continuity of the signal, as described in \eqref{signal_phase}.
%
This approach avoids sudden changes in the actuator movement that would affect the texture perception in an uncontrolled way (see \figref{method/phase_adjustment}) and, contrary to previous work~\autocite{asano2015vibrotactile,friesen2024perceived}, it enables a free exploration of the texture by the user, with no constraints on the finger speed.
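%
To illustrate, the following minimal Python sketch implements the phase-matching idea of \eqref{signal} in an incremental form: the phase is accumulated block by block, so that each new finger speed estimate changes the frequency without introducing a discontinuity. The sampling rate matches the text; the wavelength, block size, and speed values are illustrative assumptions, not parameters of the actual system.
\begin{verbatim}
import numpy as np

FS = 48_000         # audio sampling rate (Hz), as in the text
A = 1.0             # signal amplitude (illustrative)
WAVELENGTH = 0.002  # texture wavelength lambda (m), illustrative

def render_block(finger_speed, n_samples, phase):
    """Render one block of the square-wave signal for a constant
    finger speed estimate (m/s), carrying the phase across blocks
    so that frequency updates keep the signal continuous."""
    freq = finger_speed / WAVELENGTH         # instantaneous frequency (Hz)
    t = np.arange(1, n_samples + 1) / FS     # time within the block (s)
    theta = phase + 2 * np.pi * freq * t     # accumulated phase
    samples = A * np.sign(np.sin(theta))     # square wave via sgn(sin(.))
    return samples, theta[-1] % (2 * np.pi)  # samples and carried phase

# Each new finger speed estimate renders the next 10 ms block,
# starting from the phase where the previous block ended.
phase = 0.0
for speed in (0.05, 0.08, 0.03):             # example speeds (m/s)
    block, phase = render_block(speed, FS // 100, phase)
\end{verbatim}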
%
The tactile texture is described and rendered in this work as a one-dimensional grating.

\subsection{System Latency}
\label{sec:latency}

%As shown in \figref{method/diagram} and described above, the system includes various haptic and visual sensors and rendering devices linked by software processes for image processing, 3D rendering and audio generation.
%