A \qty{2}{\cm} AprilTag fiducial marker \cite{wang2016apriltag} is glued to the top of the actuator (\figref{device}) to track the finger pose with a camera (StreamCam, Logitech) placed above the experimental setup, capturing \qtyproduct{1280 x 720}{px} images at \qty{60}{\hertz} (\figref{apparatus}).
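As a concrete illustration of this tracking stage, the following minimal Python sketch grabs \qtyproduct{1280 x 720}{px} frames at \qty{60}{\hertz} and estimates the tag pose with the \texttt{pupil-apriltags} bindings; the camera intrinsics and the tag family are assumptions for the example, not the calibration actually used in the study.

\begin{verbatim}
# Minimal sketch (not the authors' code): AprilTag pose estimation
# from the overhead camera stream. Intrinsics below are placeholders.
import cv2
from pupil_apriltags import Detector

TAG_SIZE = 0.02                               # 2 cm marker edge length (m)
FX, FY, CX, CY = 900.0, 900.0, 640.0, 360.0   # assumed calibration values

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
cap.set(cv2.CAP_PROP_FPS, 60)

detector = Detector(families="tag36h11")      # assumed tag family

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect tags and estimate their 6-DoF pose in the camera frame.
    for det in detector.detect(gray, estimate_tag_pose=True,
                               camera_params=(FX, FY, CX, CY),
                               tag_size=TAG_SIZE):
        print(det.tag_id, det.pose_t.ravel())  # translation in metres
\end{verbatim}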
Other markers are placed on the real surfaces to be augmented (\figref{setup}) so that the relative position of the finger with respect to these surfaces can be estimated.
Contrary to similar work, vision-based tracking both frees the hand movements and makes it possible to augment any real surface.
A camera external to the \AR headset, combined with a marker-based technique, provides accurate and robust tracking with a constant view of the markers \cite{marchand2016pose}.
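To illustrate such a marker-based computation, the sketch below recovers the homogeneous transform of one square marker from its four detected corners with OpenCV's \texttt{solvePnP}; the corner pixel coordinates and intrinsics are placeholder values, and this is one possible instance of the techniques surveyed in \cite{marchand2016pose}, not necessarily the implementation used here.

\begin{verbatim}
# Sketch: pose of one square marker in the camera frame via solvePnP.
# Corner coordinates and intrinsics are illustrative placeholders.
import numpy as np
import cv2

TAG_SIZE = 0.02
h = TAG_SIZE / 2.0
# Marker corners in the marker frame (z = 0 plane), in the order
# expected by SOLVEPNP_IPPE_SQUARE.
object_points = np.array([[-h,  h, 0], [ h,  h, 0],
                          [ h, -h, 0], [-h, -h, 0]], dtype=np.float64)
image_points = np.array([[610, 340], [660, 342],
                         [658, 392], [608, 390]], dtype=np.float64)
K = np.array([[900.0,   0.0, 640.0],
              [  0.0, 900.0, 360.0],
              [  0.0,   0.0,   1.0]])   # assumed intrinsics
dist = np.zeros(5)                       # distortion neglected

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_IPPE_SQUARE)
R, _ = cv2.Rodrigues(rvec)               # rotation vector -> 3x3 matrix
cTi = np.eye(4)                          # homogeneous marker pose
cTi[:3, :3] = R
cTi[:3, 3] = tvec.ravel()
print(cTi)
\end{verbatim}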
We denote by ${}^c\mathbf{T}_i$, $i = 1, \dots, n$, the homogeneous transformation matrix that defines the position and rotation of the $i$-th of the $n$ defined markers in the camera frame $\mathcal{F}_c$, \eg the finger pose ${}^c\mathbf{T}_f$ and the texture pose ${}^c\mathbf{T}_t$.
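For instance, the pose of the finger relative to the texture follows from the standard composition of homogeneous transforms; the block decomposition into a rotation ${}^c\mathbf{R}_i$ and a translation ${}^c\mathbf{t}_i$ is written out here for illustration:

\begin{equation*}
{}^{t}\mathbf{T}_{f} = \left({}^{c}\mathbf{T}_{t}\right)^{-1} {}^{c}\mathbf{T}_{f},
\qquad
{}^{c}\mathbf{T}_{i} =
\begin{bmatrix}
{}^{c}\mathbf{R}_{i} & {}^{c}\mathbf{t}_{i} \\
\mathbf{0}^{\top} & 1
\end{bmatrix}.
\end{equation*}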