2024-12-26 19:38:46 +01:00
parent 0cde049bfc
commit fe0da6a83b
15 changed files with 31 additions and 32 deletions


@@ -24,9 +24,9 @@ The system consists of three main components: the pose estimation of the tracked
These poses are used to move and display the virtual model replicas aligned with the \RE.
A collision detection algorithm detects a contact of the virtual hand with the virtual textures.
If so, the velocity of the finger marker ${}^c\dot{\mathbf{X}}_f$ is estimated using a discrete derivative of position with adaptive low-pass filtering, then transformed into the texture frame $\poseFrame{t}$.
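The velocity-estimation step described here (discrete derivative of position followed by adaptive low-pass filtering) can be sketched as below. The commit does not specify the filter, so this sketch assumes a 1\,\texteuro-filter-style scheme in which the cutoff frequency grows with the filtered speed; the class name and the parameter values (`min_cutoff`, `beta`) are illustrative assumptions, not taken from the source.

```python
import math

def lowpass_alpha(cutoff_hz, dt):
    # Smoothing factor of a first-order low-pass filter at the given cutoff.
    tau = 1.0 / (2.0 * math.pi * cutoff_hz)
    return dt / (dt + tau)

class AdaptiveVelocityEstimator:
    """Finite-difference velocity with an adaptive low-pass filter:
    the cutoff rises with the current speed, trading smoothing at rest
    for low latency during fast motion (1-Euro-style; assumed scheme)."""

    def __init__(self, min_cutoff=1.0, beta=0.5):
        self.min_cutoff = min_cutoff  # Hz, cutoff at rest (assumed value)
        self.beta = beta              # speed coefficient (assumed value)
        self.prev_pos = None
        self.vel = 0.0

    def update(self, pos, dt):
        if self.prev_pos is None:
            self.prev_pos = pos
            return 0.0
        raw_vel = (pos - self.prev_pos) / dt        # discrete derivative
        self.prev_pos = pos
        cutoff = self.min_cutoff + self.beta * abs(self.vel)
        a = lowpass_alpha(cutoff, dt)
        self.vel += a * (raw_vel - self.vel)        # exponential smoothing
        return self.vel
```

At the 60 Hz update rate mentioned below, `dt` would be 1/60 s; the filtered scalar value then plays the role of ${}^t\hat{\dot{X}}_f$ after projection onto the texture direction.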
-The vibrotactile signal $s_k$ is generated by modulating the (scalar) finger velocity ${}^t\hat{\dot{X}}_f$ in the texture direction with the texture period $\lambda$ \eqref{signal}.
-The signal is sampled at 48~kHz and sent to the voice-coil actuator via an audio amplifier.
-All computation steps except signal sampling are performed at 60~Hz and in separate threads to parallelize them.
+The vibrotactile signal $r$ is generated by modulating the (scalar) finger velocity ${}^t\hat{\dot{X}}_f$ in the texture direction with the texture period $\lambda$ \eqref{signal}.
+The signal is sampled at \qty{48}{\kilo\hertz} and sent to the voice-coil actuator via an audio amplifier.
+All computation steps except signal sampling are performed at \qty{60}{\hertz} and in separate threads to parallelize them.
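Modulating the finger velocity with the texture period can be read as driving the instantaneous vibration frequency at $\hat{\dot{X}}_f / \lambda$, the rate at which texture ridges pass under the finger. A minimal sketch, assuming a sinusoidal waveform and that the 60 Hz velocity estimate has already been resampled to one value per 48 kHz output sample (the function name, amplitude, and waveform shape are assumptions):

```python
import math

def texture_signal(velocities, wavelength, fs=48000, amplitude=1.0):
    """Generate a vibrotactile waveform whose instantaneous frequency is
    v(t) / wavelength. `velocities` holds one (scalar) finger-speed value
    per output sample; `wavelength` is the texture period in the same
    length unit per second as the velocities."""
    phase = 0.0
    out = []
    for v in velocities:
        # Per-sample phase increment: 2*pi * f_inst / fs, with f_inst = v / lambda.
        phase += 2.0 * math.pi * (v / wavelength) / fs
        out.append(amplitude * math.sin(phase))
    return out
```

Accumulating phase (rather than computing `sin(2*pi*f*t)` directly) keeps the waveform continuous when the velocity, and hence the frequency, changes between samples, which avoids audible and palpable clicks at the actuator.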
]
\section{Description of the System Components}