Remove review comments

This commit is contained in:
2025-05-01 22:25:00 +02:00
parent 43037c8407
commit 0202efeb06
11 changed files with 0 additions and 31 deletions

@@ -15,7 +15,6 @@ The visuo-haptic augmentations rendered with this design allow a user to \textbf
To ensure both real-time and reliable renderings, the hand and the real surfaces are tracked using a webcam and marker-based pose estimation.
The haptic textures are rendered as a vibrotactile signal representing a patterned grating texture that is synchronized with the finger movement on the augmented surface.
The goal of this design is to enable new \AR applications capable of augmenting real objects with virtual visuo-haptic textures in a portable, on-demand manner, without impairing the user's interaction with the \RE.
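The synchronization described above (a vibrotactile signal driven by finger movement over a patterned grating) can be sketched as follows. This is a minimal illustration, not the thesis implementation: the grating period and the sinusoidal profile are assumed values chosen for clarity.

```python
import math

# Hypothetical spatial period of the virtual grating texture (2 mm).
GRATING_PERIOD_M = 0.002

def grating_sample(finger_x_m):
    """Vibrotactile amplitude in [-1, 1] as a function of finger
    position along the grating axis: one oscillation per period,
    so the perceived frequency scales with finger speed."""
    return math.sin(2.0 * math.pi * finger_x_m / GRATING_PERIOD_M)

# Sliding the finger across one full period produces one oscillation.
samples = [grating_sample(i * GRATING_PERIOD_M / 8) for i in range(9)]
```

Because the signal is indexed by position rather than time, a faster finger sweep naturally yields a higher vibration frequency, which is what makes the rendering feel like a spatial texture.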
\comans{SJ}{The rationale behind the proposed design is not provided. Since there are multiple ways to implement mechanically transparent haptic devices, the thesis should at least clarify why this design is considered optimal for a specific purpose at this stage.}{This has been better explained in the introduction.}
\noindentskip The contributions of this chapter are:
\begin{itemize}

@@ -55,13 +55,11 @@ First, the coordinate system of the headset is manually aligned with that of the
This resulted in a \qty{\pm .5}{\cm} spatial alignment error between the \RE and the \VE.
While this was sufficient for our use cases, other methods can achieve better accuracy if needed \cite{grubert2018survey}.
Registering the coordinate systems of the camera and the headset thus allows the marker poses estimated with the camera to be used to display, in the headset, the virtual models aligned with their real-world counterparts.
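The registration step amounts to chaining rigid transforms: a marker pose estimated in the camera frame is mapped into the headset frame through the one-off camera-to-headset transform. A minimal sketch, with hypothetical numeric values (the actual transforms come from the manual alignment described above):

```python
def matmul(a, b):
    """4x4 homogeneous matrix product (pure-Python, for illustration)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Hypothetical registration: headset frame offset from the camera
# frame by (0.1, 0, 0.2) m, identity rotation.
T_headset_camera = [
    [1, 0, 0, 0.1],
    [0, 1, 0, 0.0],
    [0, 0, 1, 0.2],
    [0, 0, 0, 1.0],
]

# Marker pose estimated by the camera (identity rotation, 0.5 m ahead).
T_camera_marker = [
    [1, 0, 0, 0.0],
    [0, 1, 0, 0.0],
    [0, 0, 1, 0.5],
    [0, 0, 0, 1.0],
]

# Same marker pose, expressed in the headset frame for rendering.
T_headset_marker = matmul(T_headset_camera, T_camera_marker)
# Translation component is approximately (0.1, 0.0, 0.7).
```

Any residual error in `T_headset_camera` propagates directly into the visual alignment, which is why the \qty{\pm .5}{\cm} registration error bounds the overall spatial accuracy.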
\comans{JG}{The registration process between the external camera, the finger, surface and HoloLens could have been described in more detail. Specifically, it could have been described clearer how the HoloLens coordinate system was aligned (e.g., by also tracking the fiducials on the surface and or finger).}{This has been better described.}
An additional calibration is performed to compensate for the offset between the finger contact point and the estimated marker pose \cite{son2022effect}.
The current user then places their index finger on the origin point; the respective poses of the finger and the origin are known from their attached fiducial markers.
The transformation between the marker pose of the finger and the finger contact point can be estimated and compensated with an inverse transformation.
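This offset calibration can be sketched as follows, using a translation-only simplification (the thesis estimates a full transformation; the numeric values here are hypothetical):

```python
def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def add(a, b):
    return [x + y for x, y in zip(a, b)]

# Calibration frame: the fingertip rests on a known origin point while
# the finger-mounted marker is tracked (hypothetical values, in metres).
origin_point = [0.00, 0.00, 0.00]
marker_at_calibration = [0.00, 0.015, 0.01]  # marker sits above the fingertip

# Constant marker-to-contact offset, valid while the marker
# stays rigidly attached to the finger.
offset = sub(origin_point, marker_at_calibration)

# Later frame: a new marker pose yields a compensated
# contact-point estimate.
marker_now = [0.10, 0.215, 0.31]
contact_now = add(marker_now, offset)  # approximately (0.1, 0.2, 0.3)
```

The same idea extends to a full rigid transform: the offset is the inverse of the calibration-frame marker pose composed with the origin pose, applied to every subsequent marker pose.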
This makes it possible to detect whether the calibrated real finger touches a virtual texture using a collision detection algorithm (Nvidia PhysX).
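The collision query itself can be illustrated with a simple axis-aligned bounding-box test. This is only a stand-in for the PhysX query used in the thesis, with hypothetical texture dimensions:

```python
def finger_touches_texture(finger_pos, box_min, box_max):
    """True if the calibrated fingertip point lies inside the
    virtual texture's bounding volume (point-vs-AABB test)."""
    return all(lo <= p <= hi
               for p, lo, hi in zip(finger_pos, box_min, box_max))

# Hypothetical texture patch: 10 x 10 cm, 2 mm thick, lying on the surface.
box_min = [0.00, 0.000, 0.00]
box_max = [0.10, 0.002, 0.10]

touching = finger_touches_texture([0.05, 0.001, 0.05], box_min, box_max)
hovering = finger_touches_texture([0.05, 0.050, 0.05], box_min, box_max)
```

When the test fires, the vibrotactile rendering is enabled and driven by the finger's position on the patch; a physics engine generalises this to arbitrary collider shapes.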
\comans{JG}{A description if and how the offset between the lower side of the fingertip touching the surface and the fiducial mounted on the top of the finger was calibrated / compensated is missing}{This has been better described.}
In our implementation, the \VE is designed with Unity (v2021.1) and the Mixed Reality Toolkit (v2.7)\footnoteurl{https://learn.microsoft.com/windows/mixed-reality/mrtk-unity}.
The visual rendering is achieved using the Microsoft HoloLens~2, an \OST-\AR headset with a \qtyproduct{43 x 29}{\degree} \FoV, a \qty{60}{\Hz} refresh rate, and self-localisation capabilities.