Commit 8957f70343 (parent bb25e3db38), 2024-10-21 11:39:31 +02:00
11 changed files with 96 additions and 19 deletions

.vscode/ltex.dictionary.en.txt (vendored, new file, 3 lines)

@@ -0,0 +1,3 @@
vhar
visuo-haptic
vibrotactile

.vscode/ltex.disabledRules.en.txt (vendored, new file, 4 lines)

@@ -0,0 +1,4 @@
COMMA_PARENTHESIS_WHITESPACE
UPPERCASE_SENTENCE_START
UPPERCASE_SENTENCE_START
DOUBLE_PUNCTUATION

.vscode/ltex.hiddenFalsePositives.en.txt (vendored, new file, 47 lines)

@@ -0,0 +1,47 @@
{"rule":"EN_A_VS_AN","sentence":"^\\QHowever, this method has not yet been integrated in an context, where the user should be able to freely touch and explore the visuo-haptic texture augmentations.\\E$"}
{"rule":"UPPERCASE_SENTENCE_START","sentence":"^\\Qdevice apparatus\\E$"}
{"rule":"ALLOW_TO","sentence":"^\\QAmong the various haptic texture augmentations, data-driven methods allow to capture, model and reproduce the roughness perception of real surfaces when touched touched by a hand-held stylus ([related_work]texture_rendering).\\E$"}
{"rule":"ALLOW_TO","sentence":"^\\QAmong the various haptic texture augmentations, data-driven methods allow to capture, model and reproduce the roughness perception of real surfaces when touched by a hand-held stylus ([related_work]texture_rendering).\\E$"}
{"rule":"COMMA_PARENTHESIS_WHITESPACE","sentence":"^\\QThe contributions of this chapter are: Transposition of data-driven visuo-haptic textures to augment real objects in a direct touch context in immersive .\\E$"}
{"rule":"COMMA_PARENTHESIS_WHITESPACE","sentence":"^\\Q[0.65]experiment/view First person view of the user study.[ As seen through the immersive headset Microsoft HoloLens 2.\\E$"}
{"rule":"COMMA_PARENTHESIS_WHITESPACE","sentence":"^\\Q]\\E$"}
{"rule":"COMMA_PARENTHESIS_WHITESPACE","sentence":"^\\Q[0.7]results/trial_predictions Proportion of trials in which the comparison texture was perceived as rougher than the reference texture, as a function of the amplitude difference between the two textures and the visual rendering.[ Curves represent predictions from the model (probit link function), and points are estimated marginal means with non-parametric bootstrap 95 .\\E$"}
{"rule":"COMMA_PARENTHESIS_WHITESPACE","sentence":"^\\QThe (results/trial_pses) and (results/trial_jnds) for each visual rendering and their respective differences were estimated from the model, along with their corresponding 95 , using a non-parametric bootstrap procedure (1000 samples).\\E$"}
{"rule":"COMMA_PARENTHESIS_WHITESPACE","sentence":"^\\QReported response times are .\\E$"}
{"rule":"COMMA_PARENTHESIS_WHITESPACE","sentence":"^\\QThe frames analyzed were those in which the participants actively touched the comparison textures with a finger speed greater than 1 .\\E$"}
{"rule":"MISSING_GENITIVE","sentence":"^\\QFriedman tests were employed to compare the ratings to the questions (questions1 and questions2), with post-hoc Wilcoxon signed-rank tests and Holm-Bonferroni adjustment, except for the questions regarding the virtual hand that were directly compared with Wilcoxon signed-rank tests.\\E$"}
{"rule":"UPPERCASE_SENTENCE_START","sentence":"^\\Qresults_questions shows these ratings for questions where statistically significant differences were found (results are shown as mean \\E(?:Dummy|Ina|Jimmy-)[0-9]+\\Q standard deviation): Hand Ownership: participants slightly feel the virtual hand as their own with the Mixed rendering (2.3 1.0) but quite with the Virtual rendering (3.5 0.9, 0.001).\\E$"}
{"rule":"UPPERCASE_SENTENCE_START","sentence":"^\\Qresults_questions presents the questionnaire results of the Matching and Ranking tasks.\\E$"}
{"rule":"EN_A_VS_AN","sentence":"^\\QHowever, this method has not yet been integrated in an context, where the user should be able to freely touch and explore the visuo-haptic texture augmentations.\\E$"}
{"rule":"ALLOW_TO","sentence":"^\\QAmong the various haptic texture augmentations, data-driven methods allow to capture, model and reproduce the roughness perception of real surfaces when touched by a hand-held stylus ([related_work]texture_rendering).\\E$"}
{"rule":"UPPERCASE_SENTENCE_START","sentence":"^\\Qresults/matching_confusion_matrix shows the confusion matrix of the Matching task with the visual textures and the proportion of haptic texture selected in response, i.e. the proportion of times the corresponding haptic texture was selected in response to the presentation of the corresponding visual texture.\\E$"}
{"rule":"COMMA_PARENTHESIS_WHITESPACE","sentence":"^\\Q[0.82]results/matching_confusion_matrix Confusion matrix of the Matching task.[ With the presented visual textures as columns and the selected haptic texture in proportion as rows.\\E$"}
{"rule":"EN_A_VS_AN","sentence":"^\\QA on the log Completion Time with the Visual Texture as fixed effect and the participant as random intercept was performed.\\E$"}
{"rule":"COMMA_PARENTHESIS_WHITESPACE","sentence":"^\\QNo statistical significant effect of Visual Texture was found (8 512 1.9, 0.06) on Completion Time (44 , 42 46), indicating an equal difficulty and participant behaviour for all the visual textures.\\E$"}
{"rule":"UPPERCASE_SENTENCE_START","sentence":"^\\Qresults/ranking_mean_ci presents the results of the three rankings of the haptic textures alone, the visual textures alone, and the visuo-haptic texture pairs.\\E$"}
{"rule":"COMMA_PARENTHESIS_WHITESPACE","sentence":"^\\Q[ A lower rank means that the texture was considered rougher, a higher rank means smoother.\\E$"}
{"rule":"UPPERCASE_SENTENCE_START","sentence":"^\\Qresults/matching_correspondence_analysis shows the first two dimensions with the 18 haptic and visual textures.\\E$"}
{"rule":"UPPERCASE_SENTENCE_START","sentence":"^\\Qresults_clusters shows the dendrograms of the two hierarchical clusterings of the haptic and visual textures, constructed using the Euclidean distance and the Ward's method on squared distance.\\E$"}
{"rule":"COMMA_PARENTHESIS_WHITESPACE","sentence":"^\\Q[ The closer the haptic and visual textures are, the more similar they were judged.\\E$"}
{"rule":"UPPERCASE_SENTENCE_START","sentence":"^\\Qresults/haptic_visual_clusters_confusion_matrices (left) shows the confusion matrix of the Matching task with visual texture clusters and the proportion of haptic texture clusters selected in response.\\E$"}
{"rule":"UPPERCASE_SENTENCE_START","sentence":"^\\Qresults/haptic_visual_clusters_confusion_matrices (right) shows the confusion matrix of the Matching task with visual texture ranks and the proportion of haptic texture clusters selected in response.\\E$"}
{"rule":"UPPERCASE_SENTENCE_START","sentence":"^\\Qresults/haptic_visual_clusters_confusion_matrices Confusion matrices of the visual texture (left) or rank (right) with the corresponding haptic texture clusters selected in proportion.\\E$"}
{"rule":"COMMA_PARENTHESIS_WHITESPACE","sentence":"^\\Q[ Holm-Bonferroni adjusted binomial test results are marked in bold when the proportion is higher than chance (i.e. more than 20, 0.05).\\E$"}
{"rule":"EN_A_VS_AN","sentence":"^\\QA non-parametric on an model was used on the Difficulty and Realism question results, while the other question results were analyzed using Wilcoxon signed-rank tests.\\E$"}
{"rule":"COMMA_PARENTHESIS_WHITESPACE","sentence":"^\\QOn Difficulty, there were statistically significant effects of Task (1 57 13, 0.001) and of Modality (1 57 8, 0.007), but no interaction effect Task Modality (1 57 2, ).\\E$"}
{"rule":"UPPERCASE_SENTENCE_START","sentence":"^\\Qresults/questions_modalitiesresults/questions_tasks\\E$"}
{"rule":"COMMA_PARENTHESIS_WHITESPACE","sentence":"^\\Q[ Questions were bipolar 100-points scales (0 = Very Low and 100 = Very High, except for Performance where 0 = Perfect and 100 = Failure), with increments of 5.\\E$"}
{"rule":"COMMA_PARENTHESIS_WHITESPACE","sentence":"^\\Q] Code Question Mental Demand How mentally demanding was the task?\\E$"}
{"rule":"NON_STANDARD_WORD","sentence":"^\\QTouching, grasping and manipulating virtual objects are fundamental interactions in ([related_work]ve_tasks) and essential for many of its applications ([related_work]ar_applications).\\E$"}
{"rule":"AND_END","sentence":"^\\QManipulation of virtual objects is achieved using a virtual hand interaction technique that represents the user's hand in the and simulates interaction with virtual objects ([related_work]ar_virtual_hands).\\E$"}
{"rule":"THE_PUNCT","sentence":"^\\QHowever, direct hand manipulation is still challenging due to the intangibility of the , the lack of mutual occlusion between the hand and the virtual object in - ([related_work]ar_displays), and the inherent delays between the user's hand and the result of the interaction simulation ([related_work]ar_virtual_hands).\\E$"}
{"rule":"COMMA_PARENTHESIS_WHITESPACE","sentence":"^\\QTo this end, we selected in the literature and compared the most popular visual hand renderings used to interact with virtual objects in .\\E$"}
{"rule":"COMMA_PARENTHESIS_WHITESPACE","sentence":"^\\QThe main contributions of this chapter are: A comparison from the literature of six common visual hand renderings used to interact with virtual objects in .\\E$"}
{"rule":"COMMA_PARENTHESIS_WHITESPACE","sentence":"^\\QWe present the results of the user study and discuss the implications of these results for the manipulation of virtual objects directly with the hand in .\\E$"}
{"rule":"COMMA_PARENTHESIS_WHITESPACE","sentence":"^\\QResults showed that 95 of the contacts between the fingertip and the virtual cube happened at speeds below 1.5 .\\E$"}
{"rule":"DOUBLE_PUNCTUATION","sentence":"^\\QParticipants rated their expertise (I use it more than once a year) with , , and haptics in a pre-experiment questionnaire.\\E$"}
{"rule":"THE_SENT_END","sentence":"^\\QIt is technically and conceptually closely related to , which completely replaces perception with a .\\E$"}
{"rule":"THE_PUNCT","sentence":"^\\QOn the original reality-virtuality continuum of \\E(?:Dummy|Ina|Jimmy-)[0-9]+\\Q, augmented virtuality is also considered, as the incorporation of real objects into a , and is placed between and .\\E$"}
{"rule":"THE_SENT_END","sentence":"^\\QIn this thesis we call / systems the computational set of hardware (input devices, sensors, displays and haptic devices) and software (tracking, simulation and rendering) that allows the user to interact with the .\\E$"}
{"rule":"THE_PUNCT","sentence":"^\\QBecause the visuo-haptic is displayed in real time and aligned with the , the user is given the illusion of directly perceiving and interacting with the virtual content as if it were part of the .\\E$"}
{"rule":"THE_SENT_END","sentence":"^\\QBecause the visuo-haptic is displayed in real time and aligned with the , the user is given the illusion of directly perceiving and interacting with the virtual content as if it were part of the .\\E$"}

.vscode/settings.json (vendored, new file, 29 lines)

@@ -0,0 +1,29 @@
{
"cSpell.words": [
"accessorization",
"avatarization",
"deafferented",
"electrovibration",
"haptically",
"Impa",
"jeon",
"jones",
"mechanoreceptors",
"Meissner",
"Microdrive",
"milgram",
"Nowh",
"Oppo",
"Pacini",
"Pacinian",
"Pico",
"retargeting",
"Skel",
"subcomponents",
"teleoperation",
"Wayfinding",
"wearability",
"Wearability",
"Wris"
]
}
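
These .vscode files work together: cSpell takes its extra words inline in settings.json, while LTeX reads the dictionary, disabled rules, and hidden false positives from the external files added above. A sketch of how such files could be wired up explicitly, assuming vscode-ltex's external-file syntax (setting entries prefixed with ":" are read from a file; by default the extension creates and reads these files on its own):

    {
      // Hypothetical explicit wiring; the paths match the files added in
      // this commit, the ":" prefix marks them as external setting files.
      "ltex.dictionary": { "en": [":.vscode/ltex.dictionary.en.txt"] },
      "ltex.disabledRules": { "en": [":.vscode/ltex.disabledRules.en.txt"] },
      "ltex.hiddenFalsePositives": { "en": [":.vscode/ltex.hiddenFalsePositives.en.txt"] }
    }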

Another changed file:

@@ -184,7 +184,7 @@ However, \textbf{manipulating a purely virtual object with the bare hand can be
In addition, current \AR systems have visual rendering limitations that also affect interaction with virtual objects. %, due to depth underestimation, a lack of mutual occlusions, and hand tracking latency.
\AR is the display of superimposed images of the virtual world, synchronized with the user's current view of the real world.
However, the depth perception of virtual objects is often underestimated \cite{peillard2019studying,adams2022depth}.
-There is also often \textbf{a lack of mutual occlusion between the hand and a virtual object}, that is the hand can hide the object or be hidden by the object \cite{macedo2023occlusion}.
+There is also often \textbf{a lack of mutual occlusions between the hand and a virtual object}, that is the hand can hide the object or be hidden by the object \cite{macedo2023occlusion}.
Finally, as illustrated in our interaction loop \figref{interaction-loop}, interaction with a virtual object is an illusion, because the real hand controls in real time a virtual hand, like an avatar, whose contacts with virtual objects are then simulated in the \VE.
Therefore, there is inevitably a latency between the movements of the real hand and the feedback movements of the virtual object, and a spatial shift between the real hand and the virtual hand, whose movements are constrained to the virtual object touched \cite{prachyabrued2014visual}.
These three rendering limitations make it \textbf{difficult to perceive the position of the fingers relative to the object} before touching or grasping it, but also to estimate the force required to grasp the virtual object and move it to a desired location.
@@ -219,7 +219,7 @@ Our contributions are summarized in \figref{contributions}.
\subsectionstarbookmark{Axis I: Augmenting the Texture Perception of Real Surfaces}
Wearable haptic devices have proven effective in modifying the perception of a touched real surface, without altering the object or covering the fingertip, forming haptic augmentation \cite{bau2012revel,detinguy2018enhancing,salazar2020altering}.
-%It is achieved by placing the haptic actuator close to the fingertip, to let it free to touch the surface, and rendering tactile stimuli timely synchronised with the finger movement.
+%It is achieved by placing the haptic actuator close to the fingertip, to let it free to touch the surface, and rendering tactile stimuli timely synchronized with the finger movement.
%It enables rich haptic feedback as the combination of kinesthetic sensation from the real and cutaneous sensation from the actuator.
However, wearable haptic augmentation with \AR has been little explored, as well as the visuo-haptic augmentation of texture.
Texture is indeed one of the most fundamental perceived properties of a surface material \cite{hollins1993perceptual,okamoto2013psychophysical}, perceived equally well by sight and touch \cite{bergmanntiest2007haptic,baumgartner2013visual}, and one of the most studied haptic (only, without visual) augmentation \cite{unger2011roughness,culbertson2014modeling,asano2015vibrotactile,strohmeier2017generating,friesen2024perceived}.
@@ -277,7 +277,7 @@ Finally, we describe how visuo-haptic feedback has augmented direct hand interac
We then address each of our two research axes in a dedicated part.
\noindentskip
-In \textbf{\partref{perception}} we present our contributions to the first axis of research: modifying the visuo-haptic texture perception of real surfaces.
+In \textbf{\partref{perception}}, we present our contributions to the first axis of research: modifying the visuo-haptic texture perception of real surfaces.
We evaluate how the visual feedback of the hand (real or virtual), the environment (\AR or \VR) and the textures (coherent, different or not shown) affect the perception of virtual vibrotactile textures rendered on real surfaces and touched directly with the index finger.
In \textbf{\chapref{vhar_system}}, we design and implement a system for rendering visuo-haptic virtual textures that augment real surfaces. %, using an immersive \OST-\AR headset and a wearable vibrotactile device.

Another changed file:

@@ -118,7 +118,7 @@ The joints at the base of each phalanx allow flexion and extension, \ie folding
The proximal phalanges can also adduct and abduct, \ie move the fingers towards and away from each other.
Finally, the metacarpal of the thumb is capable of flexion/extension and adduction/abduction, which allows the thumb to oppose the other fingers.
These axes of movement are called DoFs and can be represented by a \emph{kinematic model} of the hand with 27 DoFs as shown in \figref{blausen2014medical_hand}.
-Thus the thumb has 5 DoFs, each of the other four fingers has 4 DoFs and the wrist has 6 DoFs and can take any position (3 DoFs) or orientation (3 DoFs) in space \cite{erol2007visionbased}.
+Thus, the thumb has 5 DoFs, each of the other four fingers has 4 DoFs and the wrist has 6 DoFs and can take any position (3 DoFs) or orientation (3 DoFs) in space \cite{erol2007visionbased}.
This complex structure enables the hand to perform a wide range of movements and gestures. However, the way we explore and grasp objects follows simpler patterns, depending on the object being touched and the aim of the interaction.
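
The 27-DoF total of the kinematic model above decomposes as 5 (thumb) + 4 x 4 (other fingers) + 6 (wrist pose); a trivial check:

    # Per-joint breakdown from the text; the wrist's 6 DoFs are
    # 3 for position and 3 for orientation.
    dofs = {"thumb": 5, "index": 4, "middle": 4, "ring": 4, "little": 4, "wrist": 6}
    assert sum(dofs.values()) == 27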
@@ -144,11 +144,6 @@ It takes only \qtyrange{2}{3}{\s} to perform these procedures, except for contou
\fig{exploratory_procedures}{Exploratory procedures and their associated object properties (in brackets). Adapted from \textcite{lederman2009haptic}.}
-%The haptic sense alone (without vision) thus allows us to recognize objects and materials with great precision.
-%The recognition of material properties, \ie the surface and its texture, stiffness and temperature, is better than with the visual sense alone.
-%But the recognition of spatial properties, the shape and size of the object, is poorer with haptics than with vision \cite{lederman2009haptic}.
-%A few seconds (\qtyrange{2}{3}{\s}) are enough to perform these procedures, except for contour following, which can take around ten seconds \cite{jones2006human}.
\subsubsection{Grasp Types}
\label{grasp_types}

Another changed file:

@@ -151,7 +151,7 @@ A \LRA consists of a coil that creates a magnetic field from an alternating current
They are more complex to control and a bit larger than \ERMs.
Each \LRA is designed to vibrate with maximum amplitude at a given resonant frequency, but won't vibrate efficiently at other frequencies, \ie their bandwidth is narrow, as shown in \figref{azadi2014vibrotactile}.
A voice-coil actuator is a \LRA but capable of generating vibration at two \DoF, with an independent control of the frequency and amplitude of the vibration on a wide bandwidth.
-They are larger in size than \ERMs and \LRAs, but can generate more complex renderings.
+They are larger than \ERMs and \LRAs, but can generate more complex renderings.
Piezoelectric actuators deform a solid material when a voltage is applied.
They are small and thin and provide two \DoFs of amplitude and frequency control.
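
To make the control difference in this hunk concrete, here is a short sketch contrasting an LRA drive signal (amplitude envelope at a fixed resonant frequency) with a 2-DoF voice-coil signal whose frequency and amplitude vary independently; all numeric values are illustrative assumptions:

    import numpy as np

    fs = 10_000                               # drive-signal sample rate (Hz)
    t = np.arange(0, 0.5, 1 / fs)

    # LRA: fixed resonant frequency (here 175 Hz), only amplitude is free.
    lra = 0.8 * np.sin(2 * np.pi * 175 * t)

    # Voice coil: independent frequency and amplitude trajectories.
    freq = np.linspace(40, 400, t.size)       # arbitrary frequency sweep (Hz)
    amp = np.linspace(0.2, 1.0, t.size)
    phase = 2 * np.pi * np.cumsum(freq) / fs  # integrate frequency to phase
    voice_coil = amp * np.sin(phase)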

Another changed file:

@@ -303,7 +303,7 @@ While in \VST-\AR, this could be solved as a masking problem by combining the re
%However, this effect still causes depth conflicts that make it difficult to determine if one's hand is behind or in front of a virtual object, \eg the thumb is in front of the virtual cube, but could be perceived to be behind it.
Since the \VE is intangible, adding a visual feedback of the virtual hand in \AR that is physically constrained to the virtual objects would achieve a similar result to the double-hand feedback of \textcite{prachyabrued2014visual}.
-A virtual object overlaying a real object object in \OST-\AR can vary in size and shape without degrading user experience or manipulation performance \cite{kahl2021investigation,kahl2023using}.
+A virtual object overlaying a real object in \OST-\AR can vary in size and shape without degrading user experience or manipulation performance \cite{kahl2021investigation,kahl2023using}.
This suggests that a visual hand feedback superimposed on the real hand as a partial avatarization (\secref{ar_embodiment}) might be helpful without impairing the user.
Few works have compared different visual feedback of the virtual hand in \AR or with wearable haptic feedback.
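
The masking idea mentioned at the top of this hunk can be sketched as per-pixel depth compositing, assuming aligned color and depth buffers for the real hand and the virtual scene (a toy illustration, not a \VST-\AR pipeline):

    import numpy as np

    def composite(real_rgb, real_depth, virt_rgb, virt_depth):
        # Keep whichever layer is closer to the camera at each pixel, so
        # the real hand can occlude a virtual object and vice versa.
        hand_in_front = (real_depth < virt_depth)[..., None]
        return np.where(hand_in_front, real_rgb, virt_rgb)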

Another changed file:

@@ -1,12 +1,12 @@
\section{Introduction}
\label{intro}
-When we look at the surface of an everyday object, we then touch it to confirm or contrast our initial visual impression and to estimate the properties of the object, particularly its texture \secref[related_work]{visual_haptic_influence}.
-Among the various haptic texture augmentations, data-driven methods allow to capture, model and reproduce the roughness perception of real surfaces when touched touched by a hand-held stylus \secref[related_work]{texture_rendering}.
+When we look at the surface of an everyday object, we then touch it to confirm or contrast our initial visual impression and to estimate the properties of the object, particularly its texture (\secref[related_work]{visual_haptic_influence}).
+Among the various haptic texture augmentations, data-driven methods allow to capture, model and reproduce the roughness perception of real surfaces when touched by a hand-held stylus (\secref[related_work]{texture_rendering}).
Databases of visuo-haptic textures have been developed in this way \cite{culbertson2014one,balasubramanian2024sens3}, but they have not yet been explored in an immersive and direct touch context with \AR and wearable haptics.
In this chapter, we investigate whether simultaneous and \textbf{co-localized visual and wearable haptic texture augmentation of real surfaces} in \AR can be perceived in a coherent and realistic manner, and to what extent each sensory modality would contribute to the overall perception of the augmented texture.
-We used nine pairs of \textbf{data-driven visuo-haptic textures} from the \HaTT database \cite{culbertson2014one}, which we rendered using the wearable visuo-haptic augmentatio nsystem presented in \chapref{vhar_system}. %, an \OST-\AR headset, and a wearable voice-coil device worn on the finger.
+We used nine pairs of \textbf{data-driven visuo-haptic textures} from the \HaTT database \cite{culbertson2014one}, which we rendered using the wearable visuo-haptic augmentation system presented in \chapref{vhar_system}. %, an \OST-\AR headset, and a wearable voice-coil device worn on the finger.
In a \textbf{user study}, 20 participants freely explored in direct touch the combination of the visuo-haptic texture pairs to rate their coherence, realism and perceived roughness.
We aimed to assess \textbf{which haptic textures were matched with which visual textures}, how the roughness of the visual and haptic textures was perceived, and whether \textbf{the perceived roughness} could explain the matches made between them.
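
Data-driven texture models of the kind referenced here are often implemented as autoregressive filters driven by white noise, with coefficients and noise power interpolated from a grid indexed by finger speed and normal force. A minimal sketch of the synthesis step only; the coefficients, noise level, and their lookup are assumptions, not the \HaTT implementation:

    import numpy as np

    def synthesize(ar_coeffs, noise_std, n_samples, seed=0):
        # Each output sample is a weighted sum of past samples plus fresh
        # white-noise excitation; a full renderer would update ar_coeffs
        # and noise_std from the current (speed, force) measurements.
        rng = np.random.default_rng(seed)
        order = len(ar_coeffs)
        out = np.zeros(n_samples + order)
        for n in range(order, n_samples + order):
            out[n] = ar_coeffs @ out[n - order:n][::-1] + rng.normal(0.0, noise_std)
        return out[order:]

    vibration = synthesize(np.array([0.6, -0.2]), noise_std=0.05, n_samples=1000)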

Another changed file:

@@ -58,7 +58,7 @@ Even though the consensus was high (\kendall{0.61}, \ci{0.58}{0.64}), the roughn
\paragraph{Visuo-Haptic Textures Ranking}
Also, almost all the texture pairs in the visuo-haptic textures ranking results were statistically significantly different (\chisqr{8}{20}{140}, \pinf{0.001}; \pinf{0.05} for each comparison), except for the following groups: \{\level{Sandpaper~100}, \level{Cork}\}; \{\level{Cork}, \level{Brick~2}\}; and \{\level{Plastic Mesh~1}, \level{Velcro Hooks}, \level{Sandpaper~320}\}.
-The consezsus between the participants was also high \kendall{0.77}, \ci{0.74}{0.79}.
+The consensus between the participants was also high \kendall{0.77}, \ci{0.74}{0.79}.
Finally, calculating the similarity of the three rankings of each participant, the \textit{Visuo-Haptic Textures Ranking} was on average highly similar to the \textit{Haptic Textures Ranking} (\kendall{0.79}, \ci{0.72}{0.86}) and moderately to the \textit{Visual Textures Ranking} (\kendall{0.48}, \ci{0.39}{0.56}).
A Wilcoxon signed-rank test indicated that this difference was statistically significant (\wilcoxon{190}, \p{0.002}).
These results indicate that the two haptic and visual modalities were integrated together, the resulting roughness ranking being between the two rankings of the modalities alone, but with haptics predominating.
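
The consensus values quoted in this hunk (\kendall{...}) appear to be Kendall's coefficient of concordance W; for a complete, tie-free ranking matrix it reduces to the sketch below (the thesis's exact computation, e.g. tie correction, may differ):

    import numpy as np

    def kendalls_w(ranks):
        # ranks: (n_participants, n_textures), each row a permutation 1..n;
        # W = 12 * S / (m^2 * (n^3 - n)), S = squared deviations of rank sums.
        m, n = ranks.shape
        rank_sums = ranks.sum(axis=0)
        s = ((rank_sums - rank_sums.mean()) ** 2).sum()
        return 12 * s / (m ** 2 * (n ** 3 - n))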
@@ -85,7 +85,6 @@ Stiffness is indeed an important perceptual dimension of a material (\secref[rel
\fig[0.6]{results/matching_correspondence_analysis}{
Correspondence analysis of the confusion matrix of the \level{Matching} task.
}[
-%The haptic textures are represented as green squares, the haptic textures as red circles. %
The closer the haptic and visual textures are, the more similar they were judged. %
The first dimension (horizontal axis) explains \percent{60} of the variance, the second dimension (vertical axis) explains \percent{30} of the variance.
The confusion matrix is \figref{results/matching_confusion_matrix}.
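
The correspondence analysis in the caption above can be sketched as an SVD of the standardized residuals of the confusion matrix; the quoted 60%/30% explained variance corresponds to the normalized squared singular values (a generic CA sketch, not the thesis's code):

    import numpy as np

    def correspondence_analysis(confusion):
        p = confusion / confusion.sum()            # joint proportions
        r, c = p.sum(axis=1), p.sum(axis=0)        # row / column masses
        resid = (p - np.outer(r, c)) / np.sqrt(np.outer(r, c))
        u, s, vt = np.linalg.svd(resid, full_matrices=False)
        rows = u * s / np.sqrt(r)[:, None]         # haptic-texture coordinates
        cols = vt.T * s / np.sqrt(c)[:, None]      # visual-texture coordinates
        explained = s**2 / (s**2).sum()            # variance per dimension
        return rows[:, :2], cols[:, :2], explained[:2]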
@@ -139,7 +138,7 @@ This shows that the participants consistently identified the roughness of each v
A non-parametric \ANOVA on an \ART model was used on the \response{Difficulty} and \response{Realism} question results, while the other question results were analyzed using Wilcoxon signed-rank tests.
On \response{Difficulty}, there were statistically significant effects of \factor{Task} (\anova{1}{57}{13}, \pinf{0.001}) and of \response{Modality} (\anova{1}{57}{8}, \p{0.007}), but no interaction effect \factor{Task} \x \factor{Modality} (\anova{1}{57}{2}, \ns).
-The \level{Ranking} task was found easier (\mean{2.9}, \sd{1.2}) than the \level{Matching} task (\mean{3.9}, \sd{1.5}), and the Haptic textures were found easier to discrimate (\mean{3.0}, \sd{1.3}) than the Visual ones (\mean{3.8}, \sd{1.5}).
+The \level{Ranking} task was found easier (\mean{2.9}, \sd{1.2}) than the \level{Matching} task (\mean{3.9}, \sd{1.5}), and the Haptic textures were found easier to discriminate (\mean{3.0}, \sd{1.3}) than the Visual ones (\mean{3.8}, \sd{1.5}).
Both haptic and visual textures were judged moderately realistic for both tasks (\mean{4.2}, \sd{1.3}), with no statistically significant effect of \factor{Task}, \factor{Modality} or their interaction on \response{Realism}.
No statistically significant effects of \factor{Task} on \response{Textures Match} and \response{Uncomfort} were found either.
The coherence of the texture pairs was considered moderate (\mean{4.6}, \sd{1.2}) and the haptic device was not felt uncomfortable (\mean{2.4}, \sd{1.4}).

Another changed file:

@@ -8,13 +8,13 @@ However, direct hand manipulation is still challenging due to the intangibility
In this chapter, we investigate the \textbf{visual rendering as hand augmentation} for direct manipulation of virtual objects in \OST-\AR.
To this end, we selected in the literature and compared the most popular visual hand renderings used to interact with virtual objects in \AR.
-The virtual hand is \textbf{displayed superimposed} on the user's hand with these visual rendering, providing a \textbf{feedback on the tracking} of the real hand, as shown in \figref{hands}.
+The virtual hand is \textbf{displayed superimposed} on the user's hand with these visual renderings, providing \textbf{feedback on the tracking} of the real hand, as shown in \figref{hands}.
The movement of the virtual hand is also \textbf{constrained to the surface} of the virtual object, providing an additional \textbf{feedback on the interaction} with the virtual object.
We \textbf{evaluate in a user study}, using the \OST-\AR headset Microsoft HoloLens~2, the effect of six visual hand renderings on the user performance and experience in two representative manipulation tasks: push-and-slide and grasp-and-place a virtual object directly with the hand.
\noindentskip The main contributions of this chapter are:
\begin{itemize}
-\item A comparison from the literature of the six most common visual hand renderings used to interact with virtual objects in \AR.
+\item A comparison from the literature of six common visual hand renderings used to interact with virtual objects in \AR.
\item A user study evaluating with 24 participants the performance and user experience of the six visual hand renderings as augmentation of the real hand during free and direct hand manipulation of virtual objects in \OST-\AR.
\end{itemize}