Improve vhar_system equation and diagram
@@ -18,9 +18,9 @@ The visuo-haptic texture rendering system is based on:
The system consists of three main components: the pose estimation of the tracked real elements, the visual rendering of the \VE, and the vibrotactile signal generation and rendering.
\figwide{diagram}{Diagram of the visuo-haptic texture rendering system.}[
\setstretch{1.2}
Fiducial markers, attached to the voice-coil actuator and to the augmented surfaces to be tracked, are captured by a camera.
The positions and rotations (the poses) ${}^c\mathbf{T}_i$, $i = 1, \dots, n$ of the $n$ defined markers in the camera frame $\poseFrame{c}$ are estimated, then filtered with an adaptive low-pass filter.
%These poses are transformed to the \AR/\VR headset frame $\poseFrame{h}$ and applied to the virtual model replicas to display them superimposed and aligned with the \RE.
These poses are used to move and display the virtual model replicas aligned with the \RE.
A collision detection algorithm detects contact between the virtual hand and the virtual textures.
When a contact is detected, the velocity of the finger marker ${}^c\dot{\mathbf{X}}_f$ is estimated using a discrete derivative of the position and adaptive low-pass filtering (sketched below), then transformed to the texture frame $\poseFrame{t}$.
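The pose filtering and finger-velocity estimation steps can be sketched as follows. The text does not name the adaptive low-pass filter, so this minimal Python sketch assumes a One-Euro-style filter, applied per component at the \qty{60}{\hertz} estimation rate; the parameter values (min_cutoff, beta) and function names are illustrative, not the paper's implementation.

import math

class AdaptiveLowPass:
    # One-Euro-style adaptive low-pass filter (assumed; the exact
    # filter is not specified). The cutoff rises with the signal's
    # speed: strong smoothing when slow, low latency when fast.
    def __init__(self, rate_hz=60.0, min_cutoff=1.0, beta=0.05, d_cutoff=1.0):
        self.rate = rate_hz           # pose estimation rate (Hz)
        self.min_cutoff = min_cutoff  # baseline cutoff (Hz)
        self.beta = beta              # speed coefficient
        self.d_cutoff = d_cutoff      # cutoff for the derivative filter
        self.x_prev = None
        self.dx_prev = 0.0

    def _alpha(self, cutoff):
        # Smoothing factor of a first-order low-pass at this cutoff.
        tau = 1.0 / (2.0 * math.pi * cutoff)
        return 1.0 / (1.0 + tau * self.rate)

    def __call__(self, x):
        if self.x_prev is None:
            self.x_prev = x
            return x
        dx = (x - self.x_prev) * self.rate       # discrete derivative
        a_d = self._alpha(self.d_cutoff)
        dx_hat = a_d * dx + (1.0 - a_d) * self.dx_prev
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)
        a = self._alpha(cutoff)
        x_hat = a * x + (1.0 - a) * self.x_prev
        self.x_prev, self.dx_prev = x_hat, dx_hat
        return x_hat

# Finger velocity: backward difference of the filtered X position in
# the texture frame, followed by the same kind of adaptive filtering.
pos_filter, vel_filter = AdaptiveLowPass(), AdaptiveLowPass()

def finger_velocity(x_raw, x_prev, rate_hz=60.0):
    x = pos_filter(x_raw)                        # filtered position
    return vel_filter((x - x_prev) * rate_hz), x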
@@ -74,7 +74,7 @@ The amplifier is connected to the audio output of a computer that generates the
The represented haptic texture is a 1D series of parallel virtual grooves and ridges, similar to the real linear grating textures manufactured for psychophysical roughness perception studies \secref[related_work]{roughness}. %\cite{friesen2024perceived,klatzky2003feeling,unger2011roughness}.
It is generated as a square wave audio signal $r$, sampled at \qty{48}{\kilo\hertz}, with a texture period $\lambda$ and an amplitude $A$, similar to \eqref[related_work]{grating_rendering}.
Its frequency is the ratio of the absolute filtered (scalar) finger velocity $\dot{x} = \poseX{s}{|\hat{\dot{X}}|}{f}$ to the texture period $\lambda$ \cite{friesen2024perceived}: for instance, a finger velocity of \qty{100}{\milli\metre\per\second} on a texture with a \qty{2}{\milli\metre} period yields a \qty{50}{\hertz} square wave.
As the finger moves horizontally on the texture, only the $X$ component of the velocity is used.
This velocity modulation strategy is necessary because the finger position is estimated at a far lower rate (\qty{60}{\hertz}) than the audio signal is sampled (unlike with high-fidelity force-feedback devices \cite{unger2011roughness}).
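To make the rate mismatch concrete, a short sketch of the numbers involved, assuming only the rates stated above (the function name is illustrative):

POSE_RATE_HZ = 60        # finger pose / velocity update rate
AUDIO_RATE_HZ = 48_000   # audio sampling rate
SAMPLES_PER_UPDATE = AUDIO_RATE_HZ // POSE_RATE_HZ   # 800 samples

def texture_frequency(x_dot, wavelength):
    # f = |x_dot| / lambda: 0.1 m/s over a 2 mm period gives 50 Hz,
    # held constant for the next 800 audio samples.
    return abs(x_dot) / wavelength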
@@ -82,14 +82,14 @@ This velocity modulation strategy is necessary as the finger position is estimat
%
%The best strategy instead is to modulate the frequency of the signal as a ratio of the filtered finger velocity ${}^t\hat{\dot{\mathbf{X}}}_f$ and the texture period $\lambda$ \cite{friesen2024perceived}.
%
When a new finger velocity $\dot{x}\,(t_j)$ is estimated at time $t_j$, the phase $\phi_j$ of the signal $r$ must also be adjusted to ensure the continuity of the signal.
In other words, the audio signal is sampled at \qty{48}{\kilo\hertz}, while its frequency and phase are updated at the far lower rate of \qty{60}{\hertz}, each time a new finger velocity is estimated.
A sample $r(t_j, t_k)$ of the audio signal at sampling time $t_k$, with $t_k \geq t_j$, is thus given by:
\begin{subequations}
\label{eq:signal}
\begin{align}
r(t_j, t_k) & = A\, \operatorname{sgn} \left( \sin \left( 2 \pi \frac{\dot{x}\,(t_j)}{\lambda}\, t_k + \phi_j \right) \right) & \label{eq:signal_speed} \\
\phi_j & = \phi_{j-1} - 2 \pi \frac{\dot{x}\,(t_j) - \dot{x}\,(t_{j-1})}{\lambda}\, t_j & \label{eq:signal_phase}
\end{align}
\end{subequations}
The phase update \eqref{eq:signal_phase} offsets $\phi_j$ so that the argument of the sine in \eqref{eq:signal_speed} remains continuous across the velocity update at time $t_j$.
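A minimal sketch of the corresponding audio-block generation, assuming numpy; the parameter values and the name render_block are hypothetical, not the paper's implementation:

import numpy as np

AUDIO_RATE_HZ = 48_000   # audio sampling rate
A = 1.0                  # signal amplitude (illustrative)
LAMBDA = 0.002           # texture period in metres (illustrative)

phi = 0.0                # phase phi_j, carried between updates
x_dot_prev = 0.0         # previous velocity estimate x_dot(t_{j-1})

def render_block(x_dot, t_j, n_samples=800):
    # One block of the square wave r between two velocity updates,
    # with the phase offset of eq. (signal_phase) keeping the sine
    # argument continuous at the update time t_j.
    global phi, x_dot_prev
    phi -= 2 * np.pi * (x_dot - x_dot_prev) / LAMBDA * t_j
    x_dot_prev = x_dot
    t_k = t_j + np.arange(n_samples) / AUDIO_RATE_HZ   # t_k >= t_j
    return A * np.sign(np.sin(2 * np.pi * x_dot / LAMBDA * t_k + phi))

At \qty{60}{\hertz} velocity updates, each call produces the 800 audio samples that play until the next estimate arrives.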