More visible comments
@@ -5,9 +5,9 @@
\bigskip
\comans{JG}{I was wondering what the difference between an immersive AR headset and a non-immersive AR headset should be. If there is a difference (e.g., derived through headset properties by FoV), it should be stated. If there is none, I would suggest not using the term immersive AR headset but simply AR headset. On this account, in Figure 1.5 another term (“Visual AR Headset”) is introduced (and later OST-AR systems, c.f. also section 2.3.1.3).}{The terms "immersive AR headset" and "visual AR headset" have been replaced by the more appropriate term "AR headset".}
In this thesis manuscript, we show how an \AR headset, which integrates visual virtual content into the perception of the real world, and wearable haptics, which provide tactile sensations on the skin, can improve direct hand interaction with virtual and augmented objects.
Our goal is to enable users to perceive and interact with wearable visuo-haptic augmentations in a more realistic and effective way, as if they were real.
\section{Visual and Haptic Object Augmentations}
\label{visuo_haptic_augmentations}
@@ -157,8 +157,8 @@ Wayfinding is the cognitive planning of the movement, such as path finding or ro
\emph{System control tasks} are changes to the system state through commands or menus, such as creating, deleting, or modifying virtual objects, \eg as in \figref{roo2017onea}. They also include the input of text, numbers, or symbols.
In this thesis, we focus on manipulation tasks of virtual content performed directly with the hands, more specifically on touching visuo-haptic textures with a finger (\partref{perception}) and on positioning and rotating virtual objects pushed and grasped by the hand (\partref{manipulation}).
\comans{JG}{In, Figure 2.24 I suggest removing d. or presenting it as separate figure as it shows no interaction technique (The caption is “Interaction techniques in AR” but a visualization of a spatial registration technique).}{It has been removed and replaced by an example of resizing a virtual object.}
\begin{subfigs}{interaction-techniques}{Interaction techniques in \AR. }[][
\item Spatial selection of a virtual item on an extended display using a hand-held smartphone \cite{grubert2015multifi}.
@@ -7,9 +7,9 @@
\paragraph{Confusion Matrix}
\label{results_matching_confusion_matrix}
\comans{JG}{For the two-sample Chi-Squared tests in the matching task, the number of samples reported is 540 due to 20 participants conducting 3 trials for 9 textures each. However, this would only hold true if the repetitions per participant would be independent and not correlated (and then, one could theoretically also run 10 participants with 6 trials each, or 5 participants with 12 trials each). If they are not independent, this would lead to an artificial inflated sample size and Type I error. If the trials are not independent (please double check), I suggest either aggregating data on the participant level or to use alternative models that account for the within-subject correlation (as was done in other chapters).}{Data of the three confusion matrices have been aggregated on the participant level and analyzed using a Poisson regression.}
\figref{results/matching_confusion_matrix} shows the confusion matrix of the \level{Matching} task, \ie the proportion of times each \response{Haptic Texture} was selected in response to the presentation of each \factor{Visual Texture}.
To determine which haptic textures were selected most often, the repetitions of the trials were first aggregated by counting the number of selections per participant for each (\factor{Visual Texture}, \response{Haptic Texture}) pair.
An \ANOVA based on a Poisson regression (no overdispersion was detected) indicated a statistically significant effect of the \factor{Visual Texture} \x \response{Haptic Texture} interaction on the number of selections (\chisqr{64}{180}{414}, \pinf{0.001}).
Post-hoc pairwise comparisons using Tukey's \HSD test then indicated statistically significant differences for the following visual textures:
\begin{itemize}
@@ -65,9 +65,9 @@ This only allowed us to estimate poses of the index finger and the surface to be
In fact, preliminary tests we conducted showed that the built-in tracking capabilities of the HoloLens~2 were not able to track hands wearing a vibrotactile voice-coil device.
A more robust hand pose estimation system would support wearing haptic devices on the hand as well as holding real objects.
\comans{JG}{I [...] also want to highlight the opportunity to study the effect of visual registration error as noted already in chapter 4.}{Sentences along these lines have been added.}
The spatial registration error \cite{grubert2018survey} and the temporal latency \cite{diluca2019perceptual} between the real and the virtual content should also be reduced to be imperceptible.
The effect of these spatial and temporal errors on the perception and manipulation of the virtual content should be systematically investigated.
Prediction of hand movements should also be considered to overcome such issues \cite{klein2020predicting,gamage2021predictable}.
A complementary solution would be to embed tracking sensors in the wearable haptic devices, such as an inertial measurement unit (IMU) or cameras \cite{preechayasomboon2021haplets}.
@@ -102,8 +102,8 @@ As in the previous chapter, our aim was not to accurately reproduce real texture
However, the results also have some limitations, as they addressed a small set of visuo-haptic textures that augmented the perception of smooth and white real surfaces.
Visuo-haptic texture augmentation might be difficult on surfaces that already have strong visual or haptic patterns \cite{asano2012vibrotactile}, or on objects with complex shapes.
The role of visuo-haptic texture augmentation should also be evaluated in more complex tasks, such as object recognition and assembly, or in more concrete use cases, such as displaying and touching a museum object or a 3D printed object before it is manufactured.
Finally, the visual textures used were simple color images not intended for use in an \ThreeD \VE, and enhancing their visual quality could improve the perception of visuo-haptic texture augmentation.
\comans{JG}{As future work, the effect of visual quality of the rendered textures on texture perception could also be of interest.}{A sentence along these lines has been added.}
\paragraph{Specificities of Direct Touch.}