Improve registration description
@@ -4,7 +4,7 @@
The visuo-haptic texture rendering system is based on:
\begin{enumerate}[label=(\arabic*)]
\item a real-time interaction loop between the finger movements and coherent visuo-haptic feedback simulating the sensation of a touched texture,
-\item a precise alignment of the \VE with its real counterpart, and
+\item a precise registration of the \VE with its real counterpart, and
\item a modulation of the signal frequency by the estimated finger speed, with phase matching (a minimal sketch is given after this list).
\end{enumerate}
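
To make this last point concrete, the following minimal Python sketch shows one way to modulate the frequency of a sinusoidal vibrotactile signal with the estimated finger speed while keeping its phase continuous between updates; the spatial period and amplitude used below are illustrative placeholders, not values from our implementation.
\begin{verbatim}
import math

def vibrotactile_sample(speed, phase, dt, wavelength=0.002, amplitude=1.0):
    """One update of a speed-modulated sinusoid with phase matching.

    speed      -- estimated finger speed (m/s)
    phase      -- phase accumulated at the previous update (rad)
    dt         -- time elapsed since the previous update (s)
    wavelength -- assumed spatial period of the virtual texture (m)
    """
    frequency = speed / wavelength            # temporal frequency follows the finger speed
    phase += 2.0 * math.pi * frequency * dt   # integrating the phase keeps the signal continuous
    return amplitude * math.sin(phase), phase
\end{verbatim}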
@@ -44,14 +44,24 @@ The velocity (without angular velocity) of the finger marker, denoted as $\pose{
It is then filtered with another 1€ filter with the same parameters, and denoted as $\pose{c}{\hat{\dot{X}}}{f}$.
Finally, this filtered finger velocity is transformed into the augmented surface frame $\poseFrame{s}$ to be used in the vibrotactile signal generation, such as $\pose{s}{\hat{\dot{X}}}{f} = \pose{s}{T}{c} \, \pose{c}{\hat{\dot{X}}}{f}$.
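
As an illustration of these two steps, the following minimal Python sketch implements a scalar 1€ filter (applied independently to each component of the velocity) followed by the change of frame of $\pose{c}{\hat{\dot{X}}}{f}$; the filter parameters are placeholders, and representing $\pose{s}{T}{c}$ as a $4 \times 4$ homogeneous matrix is an assumption of the sketch.
\begin{verbatim}
import math
import numpy as np

def _alpha(cutoff, dt):
    """Smoothing factor of an exponential low-pass filter for a cutoff in Hz."""
    tau = 1.0 / (2.0 * math.pi * cutoff)
    return 1.0 / (1.0 + tau / dt)

class OneEuroFilter:
    """Scalar 1-euro filter: a low-pass filter whose cutoff grows with speed."""
    def __init__(self, min_cutoff=1.0, beta=0.0, d_cutoff=1.0):
        self.min_cutoff, self.beta, self.d_cutoff = min_cutoff, beta, d_cutoff
        self.x_prev, self.x_hat, self.dx_hat = None, None, 0.0

    def __call__(self, x, dt):
        if self.x_prev is None:                 # first sample: nothing to smooth yet
            self.x_prev = self.x_hat = x
            return x
        dx = (x - self.x_prev) / dt             # raw derivative of the input signal
        a_d = _alpha(self.d_cutoff, dt)
        self.dx_hat = a_d * dx + (1.0 - a_d) * self.dx_hat
        cutoff = self.min_cutoff + self.beta * abs(self.dx_hat)
        a = _alpha(cutoff, dt)                  # faster motion -> higher cutoff -> less lag
        self.x_hat = a * x + (1.0 - a) * self.x_hat
        self.x_prev = x
        return self.x_hat

def velocity_in_surface_frame(T_s_c, v_c_filtered):
    """A free vector such as a linear velocity is only affected by the
    rotational part of the 4x4 camera-to-surface transform."""
    return T_s_c[:3, :3] @ np.asarray(v_c_filtered)
\end{verbatim}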
-\subsection{Virtual Environment Alignment}
-\label{virtual_real_alignment}
+\subsection{Virtual Environment Registration}
+\label{virtual_real_registration}
\comans{JG}{The registration process between the external camera, the finger, surface and HoloLens could have been described in more detail. Specifically, it could have been described clearer how the HoloLens coordinate system was aligned (e.g., by also tracking the fiducials on the surface and or finger).}{This has been better described.}
Before a user interacts with the system, it is necessary to design a \VE that will be registered with the \RE during the experiment.
Each real element tracked by a marker is modelled virtually, \eg the hand and the augmented surface (\figref{device}).
In addition, the pose and size of the virtual textures are defined on the virtual replicas.
During the experiment, the system uses marker pose estimates to align the virtual models with their real-world counterparts.
This makes it possible to detect whether a finger touches a virtual texture, using a collision detection algorithm (Nvidia PhysX), and to show the virtual elements and textures in real time, aligned with the \RE, through the considered \AR or \VR headset.
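
In our implementation this test is delegated to the PhysX colliders attached to the virtual replicas; purely as an illustration, a minimal sketch of such a test for a rectangular texture patch lying on the augmented surface could look as follows, where the patch size and contact threshold are arbitrary placeholder values.
\begin{verbatim}
import numpy as np

def finger_touches_patch(p_s, half_extents=(0.02, 0.02), contact_threshold=0.002):
    """Check whether a finger position, expressed in the surface frame, lies on
    a rectangular texture patch centred at the surface origin.

    p_s               -- finger position in the surface frame (x, y on the surface, z normal)
    half_extents      -- assumed half-size of the patch along x and y (m)
    contact_threshold -- assumed maximum distance to the surface counted as contact (m)
    """
    p_s = np.asarray(p_s)
    on_patch = abs(p_s[0]) <= half_extents[0] and abs(p_s[1]) <= half_extents[1]
    return on_patch and abs(p_s[2]) <= contact_threshold
\end{verbatim}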
Prior to any usage, it is necessary to register the \VE with the \RE.
First, the coordinate system of the headset is manually aligned with that of the external camera by the experimenter using a raycast from the headset to the origin point of the camera (the center of the black rectangle in the box in \figref{apparatus}).
This resulted in a spatial alignment error of \qty{\pm 0.5}{\cm} between the \RE and the \VE.
While this was sufficient for our use cases, other methods can achieve better accuracy if needed \cite{grubert2018survey}.
Registering the coordinate systems of the camera and the headset thus allows the marker pose estimates obtained with the camera to be used to display, in the headset, the virtual models aligned with their real-world counterparts.
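
With this registration available, displaying a marker-tracked virtual model amounts to chaining two transforms; a minimal sketch, assuming both poses are given as $4 \times 4$ homogeneous matrices (the function and variable names are illustrative):
\begin{verbatim}
import numpy as np

def replica_pose_in_headset(T_headset_camera, T_camera_marker):
    """Pose of a marker-tracked object in the headset frame.

    T_headset_camera -- camera-to-headset transform from the raycast registration
    T_camera_marker  -- marker pose estimated by the external camera
    """
    return T_headset_camera @ T_camera_marker   # headset <- camera <- marker
\end{verbatim}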
\comans{JG}{A description if and how the offset between the lower side of the fingertip touching the surface and the fiducial mounted on the top of the finger was calibrated / compensated is missing}{This has been better described.}
An additional calibration is performed to compensate for the offset between the finger contact point and the estimated marker pose \cite{son2022effect}.
The current user then places their index finger on the origin point; the respective poses of the finger and the origin point are known from the attached fiducial markers.
The transformation between the finger marker pose and the finger contact point can then be estimated and compensated for by applying its inverse.
This makes it possible to detect whether the calibrated real finger touches a virtual texture using a collision detection algorithm (Nvidia PhysX).
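
A minimal sketch of this calibration, assuming the poses are expressed in the camera frame as $4 \times 4$ homogeneous matrices (function and variable names are illustrative): the constant marker-to-contact offset is estimated once while the fingertip rests on the origin point, and is then applied to every subsequent finger marker pose.
\begin{verbatim}
import numpy as np

def calibrate_finger_offset(T_camera_marker, T_camera_origin):
    """Constant offset from the finger marker to the contact point, estimated
    while the fingertip rests on the origin point."""
    return np.linalg.inv(T_camera_marker) @ T_camera_origin

def finger_contact_pose(T_camera_marker, T_marker_contact):
    """Apply the calibrated offset to a newly estimated finger marker pose."""
    return T_camera_marker @ T_marker_contact
\end{verbatim}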
In our implementation, the \VE is designed with Unity (v2021.1) and the Mixed Reality Toolkit (v2.7)\footnoteurl{https://learn.microsoft.com/windows/mixed-reality/mrtk-unity}.
The visual rendering is achieved using the Microsoft HoloLens~2, an \OST-\AR headset with a \qtyproduct{43 x 29}{\degree} \FoV, a \qty{60}{\Hz} refresh rate, and self-localisation capabilities.