tangible -> real
@@ -1,6 +1,6 @@
 %With a vibrotactile actuator attached to a hand-held device or directly on the finger, it is possible to simulate virtual haptic sensations as vibrations, such as texture, friction or contact vibrations \cite{culbertson2018haptics}.
 %
-%We describe a system for rendering vibrotactile roughness textures in real time, on any tangible surface, touched directly with the index fingertip, with no constraints on hand movement and using a simple camera to track the finger pose.
+%We describe a system for rendering vibrotactile roughness textures in real time, on any real surface, touched directly with the index fingertip, with no constraints on hand movement and using a simple camera to track the finger pose.
 %
 %We also describe how to pair this tactile rendering with an immersive \AR or \VR headset visual display to provide a coherent, multimodal visuo-haptic augmentation of the \RE.
 
@@ -18,7 +18,7 @@ The visuo-haptic texture rendering system is based on:
 The system consists of three main components: the pose estimation of the tracked real elements, the visual rendering of the \VE, and the vibrotactile signal generation and rendering.
 
 \figwide[1]{diagram}{Diagram of the visuo-haptic texture rendering system.}[
-Fiducial markers attached to the voice-coil actuator and to tangible surfaces to track are captured by a camera.
+Fiducial markers attached to the voice-coil actuator and to augmented surfaces to track are captured by a camera.
 The positions and rotations (the poses) ${}^c\mathbf{T}_i$, $i=1,\dots,n$ of the $n$ defined markers in the camera frame $\mathcal{F}_c$ are estimated, then filtered with an adaptive low-pass filter.
 %These poses are transformed to the \AR/\VR headset frame $\mathcal{F}_h$ and applied to the virtual model replicas to display them superimposed and aligned with the \RE.
 These poses are used to move and display the virtual model replicas aligned with the \RE.
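Note on the adaptive low-pass filter mentioned in the hunk above: a minimal sketch of one standard choice, the 1€ filter (Casiez et al.), is given below. Whether the system uses exactly this filter, and the parameter values (min_cutoff, beta), are assumptions for illustration. The idea is that the cutoff frequency grows with the estimated signal speed, so poses are smooth when the finger is still yet responsive when it moves fast.

    import math

    class OneEuroFilter:
        """Adaptive low-pass filter sketch (1-euro-style scheme; not
        necessarily the authors' exact filter)."""

        def __init__(self, freq, min_cutoff=1.0, beta=0.01, d_cutoff=1.0):
            self.freq = freq              # sampling rate, e.g. the camera's 60 Hz
            self.min_cutoff = min_cutoff  # baseline cutoff (Hz), illustrative value
            self.beta = beta              # speed coefficient, illustrative value
            self.d_cutoff = d_cutoff      # cutoff used when smoothing the derivative
            self.x_prev = None
            self.dx_prev = 0.0

        def _alpha(self, cutoff):
            # Smoothing factor of a first-order low-pass at the given cutoff.
            tau = 1.0 / (2.0 * math.pi * cutoff)
            return 1.0 / (1.0 + tau * self.freq)

        def __call__(self, x):
            if self.x_prev is None:
                self.x_prev = x
                return x
            # Estimate and smooth the derivative, then adapt the cutoff to it.
            dx = (x - self.x_prev) * self.freq
            a_d = self._alpha(self.d_cutoff)
            dx_hat = a_d * dx + (1.0 - a_d) * self.dx_prev
            a = self._alpha(self.min_cutoff + self.beta * abs(dx_hat))
            x_hat = a * x + (1.0 - a) * self.x_prev
            self.x_prev, self.dx_prev = x_hat, dx_hat
            return x_hat

Each translation component of a pose would be filtered independently; rotations require a quaternion-aware variant of the same idea.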
@@ -36,8 +36,8 @@ The system consists of three main components: the pose estimation of the tracked
 \label{pose_estimation}
 
 A \qty{2}{\cm} AprilTag fiducial marker \cite{wang2016apriltag} is glued to the top of the actuator (\figref{device}) to track the finger pose with a camera (StreamCam, Logitech), which is placed above the experimental setup and captures \qtyproduct{1280 x 720}{px} images at \qty{60}{\hertz} (\figref{apparatus}).
-Other markers are placed on the tangible surfaces to be augmented (\figref{setup}), to estimate the relative position of the finger with respect to the surfaces.
-Contrary to similar work, using vision-based tracking both frees the hand movements and allows augmenting any tangible surface.
+Other markers are placed on the real surfaces to be augmented (\figref{setup}), to estimate the relative position of the finger with respect to the surfaces.
+Contrary to similar work, using vision-based tracking both frees the hand movements and allows augmenting any real surface.
 A camera external to the \AR headset, combined with a marker-based technique, is employed to provide accurate and robust tracking with a constant view of the markers \cite{marchand2016pose}.
 We denote ${}^c\mathbf{T}_i$, $i=1,\dots,n$ the homogeneous transformation matrix that defines the position and rotation of the $i$-th marker out of the $n$ defined markers in the camera frame $\mathcal{F}_c$, \eg the finger pose ${}^c\mathbf{T}_f$ and the texture pose ${}^c\mathbf{T}_t$.
 
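A sketch of this pose-estimation step, assuming the pupil_apriltags Python bindings and known camera intrinsics fx, fy, cx, cy (neither is specified in the text; the paper only fixes AprilTag markers, the 2 cm tag size, and the 1280x720 @ 60 Hz camera):

    import numpy as np
    from pupil_apriltags import Detector  # assumed binding; any AprilTag detector works

    detector = Detector(families="tag36h11")  # assumed tag family

    def marker_poses(gray, fx, fy, cx, cy, tag_size=0.02):
        """Return {tag_id: cT_i}, the 4x4 pose of each marker detected in the
        uint8 grayscale image, expressed in the camera frame Fc."""
        poses = {}
        for det in detector.detect(gray, estimate_tag_pose=True,
                                   camera_params=(fx, fy, cx, cy),
                                   tag_size=tag_size):
            T = np.eye(4)
            T[:3, :3] = det.pose_R          # rotation of marker i in Fc
            T[:3, 3] = det.pose_t.ravel()   # translation in metres
            poses[det.tag_id] = T
        return poses

    def finger_in_texture(cT_t, cT_f):
        """Relative pose of the finger in the texture frame: tT_f = (cT_t)^-1 cT_f."""
        return np.linalg.inv(cT_t) @ cT_f

The translation of tT_f locates the fingertip relative to the augmented surface; presumably its filtered time derivative is the marker velocity that the next hunk refers to.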
@@ -51,7 +51,7 @@ The velocity (without angular velocity) of the marker, denoted as ${}^c\dot{\mat
 
 %To be able to compare virtual and augmented realities, we then create a \VE that closely replicates the real one.
 Before a user interacts with the system, it is necessary to design a \VE that will be registered with the \RE during the experiment.
-Each real element tracked by a marker is modelled virtually, \eg the hand and the augmented tangible surface (\figref{device}).
+Each real element tracked by a marker is modelled virtually, \eg the hand and the augmented surface (\figref{device}).
 In addition, the pose and size of the virtual textures are defined on the virtual replicas.
 During the experiment, the system uses marker pose estimates to align the virtual models with their real-world counterparts. %, according to the condition being tested.
 This makes it possible to detect whether a finger touches a virtual texture using a collision detection algorithm (Nvidia PhysX), and to show the virtual elements and textures in real time, aligned with the \RE, using the considered \AR or \VR headset.
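The collision query itself is delegated to Nvidia PhysX; as a conceptual stand-in only, a planar texture patch can be tested against the fingertip position taken from tT_f. The names hx, hy (patch half-extents in the texture frame) and contact_eps (contact tolerance) are hypothetical, not from the paper:

    import numpy as np

    def touches_texture(tT_f, hx, hy, contact_eps=0.002):
        """Simplified stand-in for the PhysX query: True when the fingertip,
        expressed in the texture frame, lies inside the rectangular patch
        (|x| <= hx, |y| <= hy) and within contact_eps metres of its plane z = 0."""
        x, y, z = tT_f[:3, 3]
        return abs(x) <= hx and abs(y) <= hy and abs(z) <= contact_eps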