%
%We describe a system for rendering vibrotactile roughness textures in real time, on any real surface, touched directly with the index fingertip, with no constraints on hand movement, using only a simple camera to track the finger pose.
%
%We also describe how to pair this tactile rendering with an immersive \AR or \VR headset visual display to provide a coherent visuo-haptic augmentation of the \RE.

\section{Principle}
\label{principle}

\figref{diagram} shows the interaction loop and \eqref{signal} gives the definition of the vibrotactile signal.
The system consists of three main components: pose estimation of the tracked real elements, visual rendering of the \VE, and generation and rendering of the vibrotactile signal.
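As an illustration only, the following minimal Python sketch shows how the three components could be wired together in a per-frame loop; every identifier in it (grab_frame, estimate_poses, filter_pose, render_ve, play_vibration) is a hypothetical placeholder, not the actual implementation.

\begin{verbatim}
from typing import Callable, Sequence

Pose = tuple  # placeholder for a 6-DoF pose (rotation + translation)

def interaction_loop(
    grab_frame: Callable[[], object],             # camera image acquisition
    estimate_poses: Callable[[object], Sequence[Pose]],  # marker poses in F_c
    filter_pose: Callable[[Pose], Pose],          # adaptive low-pass filter
    render_ve: Callable[[Sequence[Pose]], None],  # visual rendering of the VE
    play_vibration: Callable[[Sequence[Pose]], None],  # vibrotactile output
    running: Callable[[], bool],
) -> None:
    # One iteration per camera frame: track, draw, vibrate.
    while running():
        frame = grab_frame()
        poses = [filter_pose(p) for p in estimate_poses(frame)]
        render_ve(poses)       # virtual replicas aligned with the real scene
        play_vibration(poses)  # vibrotactile signal generation and playback
\end{verbatim}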
\figwide{diagram}{Diagram of the visuo-haptic texture rendering system.}[
Fiducial markers attached to the voice-coil actuator and to the augmented surfaces to be tracked are captured by a camera.
The positions and rotations (the poses) ${}^c\mathbf{T}_i$, $i = 1, \dots, n$, of the $n$ defined markers in the camera frame $\mathcal{F}_c$ are estimated, then filtered with an adaptive low-pass filter.
%These poses are transformed to the \AR/\VR headset frame $\mathcal{F}_h$ and applied to the virtual model replicas to display them superimposed and aligned with the \RE.
]
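The marker capture and pose estimation step described in the caption of \figref{diagram} could be prototyped, for instance, with OpenCV's ArUco module; the sketch below is a minimal illustration under assumed parameters (camera intrinsics, distortion, marker dictionary, and a 3 cm marker side are placeholders), not a description of the actual system.

\begin{verbatim}
import cv2
import numpy as np

# Assumed calibration and marker parameters (placeholders); a real
# setup would use intrinsics from a camera calibration procedure.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # assume negligible lens distortion
half = 0.03 / 2.0          # half of an assumed 3 cm marker side

# Marker corners in the marker's own frame (top-left, top-right,
# bottom-right, bottom-left), as expected by solvePnP.
object_points = np.array([[-half,  half, 0.0],
                          [ half,  half, 0.0],
                          [ half, -half, 0.0],
                          [-half, -half, 0.0]])

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary,
                                   cv2.aruco.DetectorParameters())

def marker_poses(frame):
    """Return {id: (rvec, tvec)}, the pose of each detected marker
    in the camera frame F_c."""
    corners, ids, _rejected = detector.detectMarkers(frame)
    poses = {}
    if ids is not None:
        for image_points, marker_id in zip(corners, ids.flatten()):
            ok, rvec, tvec = cv2.solvePnP(
                object_points, image_points.reshape(-1, 2),
                camera_matrix, dist_coeffs)
            if ok:
                poses[int(marker_id)] = (rvec, tvec)
    return poses
\end{verbatim}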
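The text does not specify which adaptive low-pass filter is used; a common choice for this kind of pose smoothing is a 1\,\euro{}-style filter (Casiez et al., 2012), whose cutoff frequency grows with the signal's speed so that jitter is suppressed at rest without adding lag during fast motion. The following sketch applies this assumed scheme to one scalar pose component; min_cutoff and beta are tuning values chosen for illustration.

\begin{verbatim}
import math

# Assumed sketch of an adaptive low-pass filter in the spirit of the
# 1-euro filter: the cutoff rises with the estimated signal speed.
class AdaptiveLowPass:
    def __init__(self, rate_hz, min_cutoff=1.0, beta=0.05, d_cutoff=1.0):
        self.dt = 1.0 / rate_hz       # sampling period (camera frame rate)
        self.min_cutoff = min_cutoff  # baseline cutoff in Hz (assumed)
        self.beta = beta              # speed coefficient (assumed)
        self.d_cutoff = d_cutoff      # cutoff for the derivative estimate
        self.x_prev = None
        self.dx_prev = 0.0

    def _alpha(self, cutoff):
        # First-order low-pass smoothing factor for a given cutoff.
        tau = 1.0 / (2.0 * math.pi * cutoff)
        return 1.0 / (1.0 + tau / self.dt)

    def __call__(self, x):
        if self.x_prev is None:
            self.x_prev = x
            return x
        # Smoothed estimate of the signal's derivative.
        dx = (x - self.x_prev) / self.dt
        a_d = self._alpha(self.d_cutoff)
        dx_hat = a_d * dx + (1.0 - a_d) * self.dx_prev
        # Cutoff adapts to the estimated speed.
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)
        a = self._alpha(cutoff)
        x_hat = a * x + (1.0 - a) * self.x_prev
        self.x_prev, self.dx_prev = x_hat, dx_hat
        return x_hat
\end{verbatim}

One filter instance would be kept per pose component (e.g., each translation coordinate) and reset whenever a marker reappears after a tracking loss; rotations need extra care, for example filtering a quaternion representation and renormalizing.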