Simpler section reference labels
@@ -1,5 +1,5 @@
 \section{Introduction}
-\label{sec:introduction}
+\label{introduction}
 
 When we look at the surface of an everyday object, we then touch it to confirm or contrast our initial visual impression and to estimate the properties of the object~\autocite{ernst2002humans}.
 %
 
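Every file hunk below applies the same two-line substitution: the sec: type prefix is dropped from each section \label, matching the \secref macro change in the final hunk of this commit. Schematically (how \labelprefix gets prepended to labels is an assumption inferred from the macro definitions, not shown per file):

```latex
% Before: section labels carry an explicit "sec:" type prefix, which
% \secref re-inserts when resolving a reference.
\label{sec:introduction}  % \secref{introduction} -> Section~\ref{\labelprefix:sec:introduction}

% After: section labels are bare; \secref resolves them without a type infix.
\label{introduction}      % \secref{introduction} -> Section~\ref{\labelprefix:introduction}
```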
@@ -1,5 +1,5 @@
 \section{User Study}
-\label{sec:experiment}
+\label{experiment}
 
 \begin{subfigs}{setup}{%
 User Study.
@@ -28,7 +28,7 @@ Our objective is to assess which haptic textures were associated with which visu
 
 
 \subsection{The textures}
-\label{sec::textures}
+\label{textures}
 
 The 100 visuo-haptic texture pairs of the HaTT database~\autocite{culbertson2014one} were preliminarily tested and compared using AR and vibrotactile haptic feedback on the finger on a tangible surface.
 %
@@ -40,7 +40,7 @@ All these visual and haptic textures are isotropic: their rendering (appearance
 
 
 \subsection{Apparatus}
-\label{sec::apparatus}
+\label{apparatus}
 
 \figref{setup} shows the experimental setup (middle) and the first-person view (right) of the user study.
 %
@@ -70,7 +70,7 @@ The user study was held in a quiet room with no windows, with one light source o
 
 
 \subsection{Procedure and Collected Data}
-\label{sec::procedure}
+\label{procedure}
 
 Participants were first given written instructions about the experimental setup, the tasks, and the procedure of the user study.
 %
@@ -117,7 +117,7 @@ The user study took on average 1 hour to complete.
 
 
 \subsection{Participants}
-\label{sec::participants}
+\label{participants}
 
 Twenty participants took part in the user study (12 males, 7 females, 1 preferred not to say), aged between 20 and 60 years (M=29.1, SD=9.4).
 %
@@ -135,7 +135,7 @@ They all signed an informed consent form before the user study.
 
 
 \subsection{Design}
-\label{sec::design}
+\label{design}
 
 The matching task was a single-factor within-subjects design, \textit{Visual Texture}, with the following levels:
 %
 
@@ -1,11 +1,11 @@
 \section{Results}
-\label{sec:results}
+\label{results}
 
 \subsection{Textures Matching}
-\label{sec:results_matching}
+\label{results_matching}
 
 \subsubsection{Confusion Matrix}
-\label{sec:results_matching_confusion_matrix}
+\label{results_matching_confusion_matrix}
 
 \begin{subfigs}{results_matching_ranking}{%
 (Left) Confusion matrix of the matching task, with the presented visual textures as columns and the proportions of selected haptic textures as rows. %
@@ -45,7 +45,7 @@ Another explanation could be that the participants had difficulties to estimate
 Indeed, many participants explained that they tried to identify or imagine the roughness of a given visual texture, then select the most plausible haptic texture in terms of frequency and/or amplitude of vibrations.
 
 \subsubsection{Completion Time}
-\label{sec:results_matching_time}
+\label{results_matching_time}
 
 To verify that the difficulty of the matching task was the same across all the visual textures, the \textit{Completion Time} of a trial, \ie the time between the visual texture display and the haptic texture selection, was analyzed.
 %
@@ -59,7 +59,7 @@ No statistical significant effect of \textit{Visual Texture} was found (\anova{8
 
 
 \subsection{Textures Ranking}
-\label{sec:results_ranking}
+\label{results_ranking}
 
 \figref{results_matching_ranking} (right) presents the results of the three rankings of the haptic textures alone, the visual textures alone, and the visuo-haptic texture pairs.
 %
@@ -83,7 +83,7 @@ These results indicate, with \figref{results_matching_ranking} (right), that the
 
 
 \subsection{Perceived Similarity of Visual and Haptic Textures}
-\label{sec:results_similarity}
+\label{results_similarity}
 
 \begin{subfigs}{results_similarity}{%
 (Left) Correspondence analysis of the matching task confusion matrix (see \figref{results_matching_ranking}, left).
@@ -155,7 +155,7 @@ This shows that the participants consistently identified the roughness of each v
 
 
 \subsection{Questionnaire}
-\label{sec:results_questions}
+\label{results_questions}
 
 \begin{subfigs}{results_questions}{%
 Boxplots of the 7-item Likert scale question results (1=Not at all, 7=Extremely) %
 
@@ -1,5 +1,5 @@
 \section{Discussion}
-\label{sec:discussion}
+\label{discussion}
 
 In this study, we investigated the perception of visuo-haptic texture augmentation of tangible surfaces touched directly with the index fingertip, using visual texture overlays in AR and haptic roughness textures generated by a vibrotactile device worn on the middle phalanx of the index finger.
 %
 
@@ -1,5 +1,5 @@
 \section{Conclusion}
-\label{sec:conclusion}
+\label{conclusion}
 
 \fig[0.6]{experiment/use_case}{%
 Illustration of the texture augmentation in AR through an interior design scenario. %
 
@@ -1,5 +1,5 @@
 \section{Introduction}
-\label{sec:introduction}
+\label{introduction}
 
 % Delivers the motivation for your paper. It explains why you did the work you did.
 
 
@@ -1,5 +1,5 @@
 \section{Visuo-Haptic Texture Rendering in Mixed Reality}
-\label{sec:method}
+\label{method}
 
 \figwide[1]{method/diagram}{%
 Diagram of the visuo-haptic texture rendering system.
@@ -42,7 +42,7 @@ The system is composed of three main components: the pose estimation of the trac
 
 
 \subsection{Pose Estimation and Virtual Environment Alignment}
-\label{sec:virtual_real_alignment}
+\label{virtual_real_alignment}
 
 \begin{subfigs}{setup}{Visuo-haptic texture rendering system setup}[%
 \item HapCoil-One voice-coil actuator with a fiducial marker on top, attached to a participant's right index finger. %
@@ -95,7 +95,7 @@ To simulate a VR headset, a cardboard mask (with holes for sensors) is attached
 
 
 \subsection{Vibrotactile Signal Generation and Rendering}
-\label{sec:texture_generation}
+\label{texture_generation}
 
 A voice-coil actuator (HapCoil-One, Actronika) is used to display the vibrotactile signal, as it allows the frequency and amplitude of the signal to be controlled independently over time, covers a wide frequency range (\qtyrange{10}{1000}{\Hz}), and outputs the signal accurately with relatively low acceleration distortion\footnote{The HapCoil-One-specific characteristics are described in its data sheet: \url{https://web.archive.org/web/20240228161416/https://tactilelabs.com/wp-content/uploads/2023/11/HapCoil_One_datasheet.pdf}}.
 %
@@ -149,7 +149,7 @@ The tactile texture is described and rendered in this work as a one dimensional
 
 
 \subsection{System Latency}
-\label{sec:latency}
+\label{latency}
 
 %As shown in \figref{method/diagram} and described above, the system includes various haptic and visual sensors and rendering devices linked by software processes for image processing, 3D rendering and audio generation.
 %
 
@@ -1,5 +1,5 @@
 \section{User Study}
-\label{sec:experiment}
+\label{experiment}
 
 \begin{subfigswide}{renderings}{%
 The three visual rendering conditions and the experimental procedure of the two-alternative forced choice (2AFC) psychophysical study.
@@ -32,7 +32,7 @@ In order not to influence the perception, as vision is an important source of in
 
 
 \subsection{Participants}
-\label{sec:participants}
+\label{participants}
 
 Twenty participants were recruited for the study (16 males, 3 females, 1 preferred not to say), aged between 18 and 61 years (\median{26}{}, \iqr{6.8}{}).
 %
@@ -50,7 +50,7 @@ They all signed an informed consent form before the user study and were unaware
 
 
 \subsection{Apparatus}
-\label{sec:apparatus}
+\label{apparatus}
 
 An experimental environment similar to that of \textcite{gaffary2017ar} was created to ensure a similar visual rendering in AR and VR (see \figref{renderings}).
 %
@@ -106,7 +106,7 @@ The user study was held in a quiet room with no windows.
 
 
 \subsection{Procedure}
-\label{sec:procedure}
+\label{procedure}
 
 Participants were first given written instructions about the experimental setup and procedure, an informed consent form to sign, and a demographic questionnaire.
 %
@@ -142,7 +142,7 @@ Preliminary studies allowed us to determine a range of amplitudes that could be
 
 
 \subsection{Experimental Design}
-\label{sec:experimental_design}
+\label{experimental_design}
 
 The user study was a within-subjects design with two factors:
 %
@@ -161,7 +161,7 @@ A total of 3 visual renderings \x 6 amplitude differences \x 2 texture presentat
 
 
 \subsection{Collected Data}
-\label{sec:collected_data}
+\label{collected_data}
 
 For each trial, the \textit{Texture Choice}, \ie the texture selected by the participant as the roughest of the pair, was recorded.
 %
 
@@ -1,8 +1,8 @@
 \section{Results}
-\label{sec:results}
+\label{results}
 
 \subsection{Trial Measures}
-\label{sec:results_trials}
+\label{results_trials}
 
 All measures from trials were analysed using linear mixed models (LMM) or generalised linear mixed models (GLMM) with \factor{Visual Rendering}, \factor{Amplitude Difference} and their interaction as within-participant factors, and by-participant random intercepts.
 %
@@ -16,7 +16,7 @@ Each estimate is reported with its 95\% confidence interval (CI) as follows: \ci
 
 
 \subsubsection{Discrimination Accuracy}
-\label{sec:discrimination_accuracy}
+\label{discrimination_accuracy}
 
 A GLMM was fitted to the \response{Texture Choice} in the 2AFC vibrotactile texture roughness discrimination task, with by-participant random intercepts but no random slopes, and a probit link function (see \figref{results/trial_predictions}).
 %
@@ -51,7 +51,7 @@ All pairwise differences were statistically significant.
 
 
 \subsubsection{Response Time}
-\label{sec:response_time}
+\label{response_time}
 
 An LMM analysis of variance (AOV) with by-participant random slopes for \factor{Visual Rendering}, and a log transformation (as \response{Response Time} measures were gamma distributed), indicated a statistically significant effect on \response{Response Time} of \factor{Visual Rendering} (\anova{2}{18}{6.2}, \p{0.009}, see \figref{results/trial_response_times}).
 %
@@ -61,7 +61,7 @@ The \level{Mixed} rendering was in between (\geomean{1.56}{s} \ci{1.49}{1.63}).
 
 
 \subsubsection{Finger Position and Speed}
-\label{sec:finger_position_speed}
+\label{finger_position_speed}
 
 The frames analysed were those in which the participants actively touched the comparison textures with a finger speed greater than \SI{1}{\mm\per\second}.
 %
@@ -91,7 +91,7 @@ All pairwise differences were statistically significant: \level{Real} \vs \level
 
 
 \subsection{Questionnaires}
-\label{sec:questions}
+\label{questions}
 
 %\figref{results/question_heatmaps} shows the median and interquartile range (IQR) ratings to the questions in \tabref{questions} and to the NASA-TLX questionnaire.
 %
 
@@ -1,5 +1,5 @@
 \section{Discussion}
-\label{sec:discussion}
+\label{discussion}
 
 %Interpret the findings in the results, answer the problem posed in the introduction, contrast with previous articles, and draw possible implications. Give the limitations of the study.
 
 
@@ -1,5 +1,5 @@
 \section{Conclusion}
-\label{sec:conclusion}
+\label{conclusion}
 
 %Summary of the research problem, method, main findings, and implications.
 
 
@@ -1,5 +1,5 @@
 \section{Introduction}
-\label{sec:introduction}
+\label{introduction}
 
 \begin{subfigswide}{hands}{%
 Experiment \#1. The six considered visual hand renderings, as seen by the user through the AR headset
 
@@ -1,11 +1,11 @@
 \section{User Study}
-\label{sec:method}
+\label{method}
 
 This first experiment aims to analyze whether the chosen visual hand rendering affects the performance and user experience of manipulating virtual objects with bare hands in AR.
 
 
 \subsection{Visual Hand Renderings}
-\label{sec:hands}
+\label{hands}
 
 We compared a set of the most popular visual hand renderings.%, as also presented in \secref{hands}.
 %
@@ -19,7 +19,7 @@ However, while the real hand can of course penetrate virtual objects, the visual
 
 
 \subsubsection{None~(\figref{method/hands-none})}
-\label{sec:hands_none}
+\label{hands_none}
 
 As a reference, we considered no visual hand rendering, as is common in AR~\autocite{hettiarachchi2016annexing, blaga2017usability, xiao2018mrtouch, teng2021touch}.
 %
@@ -29,7 +29,7 @@ As virtual content is rendered on top of the real environment, the hand of the u
 
 
 \subsubsection{Occlusion (Occl,~\figref{method/hands-occlusion})}
-\label{sec:hands_occlusion}
+\label{hands_occlusion}
 
 To avoid the aforementioned undesired occlusions due to the virtual content being rendered on top of the real environment, we can carefully crop the former whenever it hides real content that should be visible~\autocite{macedo2023occlusion}, \eg the thumb of the user in \figref{method/hands-occlusion}.
 %
@@ -37,7 +37,7 @@ This approach is frequent in works using VST-AR headsets~\autocite{knorlein2009i
 
 
 \subsubsection{Tips (\figref{method/hands-tips})}
-\label{sec:hands_tips}
+\label{hands_tips}
 
 This rendering shows small visual rings around the fingertips of the user, highlighting the most important parts of the hand and the contacts with virtual objects during fine manipulation.
 %
@@ -45,7 +45,7 @@ Unlike work using small spheres~\autocite{maisto2017evaluation, meli2014wearable
 
 
 \subsubsection{Contour (Cont,~\figref{method/hands-contour})}
-\label{sec:hands_contour}
+\label{hands_contour}
 
 This rendering is a {1-mm-thick} outline contouring the user's hands, providing information about the whole hand while leaving its inside visible.
 %
@@ -55,7 +55,7 @@ This rendering is not as usual as the previous others in the literature~\autocit
 
 
 \subsubsection{Skeleton (Skel,~\figref{method/hands-skeleton})}
-\label{sec:hands_skeleton}
+\label{hands_skeleton}
 
 This rendering schematically depicts the joints and phalanges of the fingers with small spheres and cylinders, respectively, leaving the outside of the hand visible.
 %
@@ -65,7 +65,7 @@ It is widely used in VR~\autocite{argelaguet2016role, schwind2018touch, chessa20
 
 
 \subsubsection{Mesh (\figref{method/hands-mesh})}
-\label{sec:hands_mesh}
+\label{hands_mesh}
 
 This rendering is a 3D semi-transparent ($a=0.2$) hand model, which is common in VR~\autocite{prachyabrued2014visual, argelaguet2016role, schwind2018touch, chessa2019grasping, yoon2020evaluating, vanveldhuizen2021effect}.
 %
@@ -73,17 +73,17 @@ It can be seen as a filled version of the Contour hand rendering, thus partially
 
 
 \subsection{Manipulation Tasks and Virtual Scene}
-\label{sec:tasks}
+\label{tasks}
 
 \begin{subfigs}{tasks}{%
 Experiment \#1. The two manipulation tasks:
 }[
 \item pushing a virtual cube along a table towards a target placed on the same surface; %
 \item grasping and lifting a virtual cube towards a target placed on a 20-cm-higher plane. %
 Both pictures show the cube to manipulate in the middle (5-cm-edge and opaque) and the eight possible targets to reach (7-cm-edge volume and semi-transparent). %
 Only one target at a time was shown during the experiments.%
 ]
 \subfig[0.23]{method/task-push}
 \subfig[0.23]{method/task-grasp}
 \end{subfigs}
@@ -92,7 +92,7 @@ Following the guidelines of \textcite{bergstrom2021how} for designing object man
 
 
 \subsubsection{Push Task}
-\label{sec:push-task}
+\label{push-task}
 
 The first manipulation task consists in pushing a virtual object along a real flat surface towards a target placed on the same plane (see \figref{method/task-push}).
 %
@@ -110,7 +110,7 @@ The task is considered completed when the cube is \emph{fully} inside the target
 
 
 \subsubsection{Grasp Task}
-\label{sec:grasp-task}
+\label{grasp-task}
 
 The second manipulation task consists in grasping, lifting, and placing a virtual object in a target located on a different (higher) plane (see \figref{method/task-grasp}).
 %
@@ -122,7 +122,7 @@ As before, the task is considered completed when the cube is \emph{fully} inside
 
 
 \subsection{Experimental Design}
-\label{sec:design}
+\label{design}
 
 We analyzed the two tasks separately. For each of them, we considered two independent within-subjects variables:
 %
@@ -140,7 +140,7 @@ This design led to a total of 2 manipulation tasks \x 6 visual hand renderings \
 
 
 \subsection{Apparatus and Implementation}
-\label{sec:apparatus}
+\label{apparatus}
 
 We used the OST-AR headset HoloLens~2.
 %
@@ -174,7 +174,7 @@ This setup enabled a good and consistent tracking of the user's fingers.
 
 
 \subsection{Protocol}
-\label{sec:protocol}
+\label{protocol}
 
 First, participants were given a consent form that briefed them about the tasks and the protocol of the experiment.
 %
@@ -190,7 +190,7 @@ The experiment took around 1 hour and 20 minutes to complete.
 
 
 \subsection{Participants}
-\label{sec:participants}
+\label{participants}
 
 Twenty-four subjects participated in the study (eight aged between 18 and 24, fourteen aged between 25 and 34, and two aged between 35 and 44; 22~males, 1~female, 1~preferred not to say).
 %
@@ -206,7 +206,7 @@ Participants signed an informed consent, including the declaration of having no
 
 
 \subsection{Collected Data}
-\label{sec:metrics}
+\label{metrics}
 
 Inspired by \textcite{laviolajr20173d}, we collected the following metrics during the experiment.
 %
 
@@ -1,8 +1,8 @@
 \subsection{Push Task}
-\label{sec:push}
+\label{push}
 
 \subsubsection{Completion Time}
-\label{sec:push_tct}
+\label{push_tct}
 
 On the time to complete a trial, there were two statistically significant effects: %
 Hand (\anova{5}{2868}{24.8}, \pinf{0.001}, see \figref{results/Push-ContactsCount-Hand-Overall-Means}) %
@@ -20,7 +20,7 @@ and (3) back N and NW targets were the slowest (\p{0.04}).
 
 
 \subsubsection{Contacts}
-\label{sec:push_contacts_count}
+\label{push_contacts_count}
 
 On the number of contacts, there were two statistically significant effects: %
 Hand (\anova{5}{2868}{6.7}, \pinf{0.001}, see \figref{results/Push-ContactsCount-Hand-Overall-Means}) %
@@ -38,7 +38,7 @@ Targets on the left (W) and the right (E, SW) were easier to reach than the back
 
 
 \subsubsection{Time per Contact}
-\label{sec:push_time_per_contact}
+\label{push_time_per_contact}
 
 On the mean time spent on each contact, there were two statistically significant effects: %
 Hand (\anova{5}{2868}{8.4}, \pinf{0.001}, see \figref{results/Push-MeanContactTime-Hand-Overall-Means}) %
 
@@ -1,8 +1,8 @@
 \subsection{Grasp Task}
-\label{sec:grasp}
+\label{grasp}
 
 \subsubsection{Completion Time}
-\label{sec:grasp_tct}
+\label{grasp_tct}
 
 On the time to complete a trial, there was one statistically significant effect %
 of Target (\anova{7}{2868}{37.2}, \pinf{0.001}) %
@@ -12,7 +12,7 @@ Targets on the back and the left (N, NW, and W) were slower than targets on the
 
 
 \subsubsection{Contacts}
-\label{sec:grasp_contacts_count}
+\label{grasp_contacts_count}
 
 On the number of contacts, there were two statistically significant effects: %
 Hand (\anova{5}{2868}{5.2}, \pinf{0.001}, see \figref{results/Grasp-ContactsCount-Hand-Overall-Means}) %
@@ -30,7 +30,7 @@ Targets on the back and left were more difficult (N, NW, and W) than targets on
 
 
 \subsubsection{Time per Contact}
-\label{sec:grasp_time_per_contact}
+\label{grasp_time_per_contact}
 
 On the mean time spent on each contact, there were two statistically significant effects: %
 Hand (\anova{5}{2868}{9.6}, \pinf{0.001}, see \figref{results/Grasp-MeanContactTime-Hand-Overall-Means}) %
@@ -50,7 +50,7 @@ This time was the shortest on the front S than on the other target volumes (\pin
 
 
 \subsubsection{Grip Aperture}
-\label{sec:grasp_grip_aperture}
+\label{grasp_grip_aperture}
 
 On the average distance between the thumb's fingertip and the other fingertips during grasping, there were two statistically significant effects: %
 
@@ -1,5 +1,5 @@
 \subsection{Ranking}
-\label{sec:ranks}
+\label{ranks}
 
 \begin{subfigs}{ranks}{%
 Experiment \#1. Boxplots of the ranking (lower is better) of each visual hand rendering
 
@@ -1,5 +1,5 @@
 \subsection{Questionnaire}
-\label{sec:questions}
+\label{questions}
 
 \begin{subfigswide}{questions}{%
 Experiment \#1. Boxplots of the questionnaire results of each visual hand rendering
 
@@ -1,5 +1,5 @@
 \section{Results}
-\label{sec:results}
+\label{results}
 
 \begin{subfigs}{push_results}{%
 Experiment \#1: Push task.
 
@@ -1,5 +1,5 @@
 \section{Discussion}
-\label{sec:discussion}
+\label{discussion}
 
 We evaluated six visual hand renderings, as described in \secref{hands}, displayed on top of the real hand, in two virtual object manipulation tasks in AR.
 
 
@@ -1,5 +1,5 @@
 \section{Conclusion}
-\label{sec:conclusion}
+\label{conclusion}
 
 This paper presented two human subject studies aimed at better understanding the role of visuo-haptic rendering of the hand during virtual object manipulation in OST-AR.
 %
 
@@ -1,2 +1,2 @@
 \section{Introduction}
-\label{sec:introduction}
+\label{introduction}
 
@@ -1,5 +1,5 @@
 \section{User Study}
-\label{sec:method}
+\label{method}
 
 Providing haptic feedback during free-hand manipulation in AR is not trivial, as wearing haptic devices on the hand might affect the tracking capabilities of the system.
 %
@@ -13,7 +13,7 @@ The chosen visuo-haptic hand renderings are the combination of the two most repr
 
 
 \subsection{Vibrotactile Renderings}
-\label{sec:vibration}
+\label{vibration}
 
 The vibrotactile hand rendering provided information about the contacts between the virtual object and the user's thumb and index finger, as they were the two fingers most used for grasping in our first experiment.
 %
@@ -21,7 +21,7 @@ We evaluated both the delocalized positioning and the contact vibration techniqu
 
 
 \subsubsection{Vibrotactile Positionings}
-\label{sec:positioning}
+\label{positioning}
 
 \fig[0.30]{method/locations}{%
 Experiment \#2: setup of the vibrotactile devices.
@@ -45,7 +45,7 @@ We evaluated both the delocalized positioning and the contact vibration techniqu
 
 
 \subsubsection{Contact Vibration Techniques}
-\label{sec:technique}
+\label{technique}
 
 When a fingertip contacts the virtual cube, we activate the corresponding vibrating actuator.
 %
@@ -70,7 +70,7 @@ Similarly, we designed the distance vibration technique (Dist) so that interpene
 
 
 \subsection{Experimental Design}
-\label{sec:design}
+\label{design}
 
 \begin{subfigs}{tasks}{%
 Experiment \#2. The two manipulation tasks: %
@@ -115,7 +115,7 @@ This design led to a total of 5 vibrotactile positionings \x 2 vibration contact
 
 
 \subsection{Apparatus and Protocol}
-\label{sec:apparatus}
+\label{apparatus}
 
 Apparatus and protocol were very similar to those of the first experiment, as described in \secref[visual_hand]{apparatus} and \secref[visual_hand]{protocol}, respectively.
 %
@@ -166,7 +166,7 @@ Preliminary tests confirmed this approach.
 
 
 \subsection{Collected Data}
-\label{sec:metrics}
+\label{metrics}
 
 During the experiment, we collected the same data as in the first experiment, see \secref[visual_hand]{metrics}.
 %
@@ -184,7 +184,7 @@ Finally, they rated the ten combinations of Positioning \x Hand on a 7-item Like
 
 
 \subsection{Participants}
-\label{sec:participants}
+\label{participants}
 
 Twenty subjects participated in the study (mean age = 26.8, SD = 4.1; 19~males, 1~female).
 %
 
@@ -1,8 +1,8 @@
 \subsection{Push Task}
-\label{sec:push}
+\label{push}
 
 \subsubsection{Completion Time}
-\label{sec:push_tct}
+\label{push_tct}
 
 On the time to complete a trial, there were two statistically significant effects: %
 Positioning (\anova{4}{1990}{3.8}, \p{0.004}, see \figref{results/Push-CompletionTime-Location-Overall-Means}) %
@@ -18,7 +18,7 @@ The NW target volume was also faster than the SW (\p{0.05}).
 
 
 \subsubsection{Contacts}
-\label{sec:push_contacts_count}
+\label{push_contacts_count}
 
 On the number of contacts, there was one statistically significant effect of %
 Positioning (\anova{4}{1990}{2.4}, \p{0.05}, see \figref{results/Push-Contacts-Location-Overall-Means}).
@@ -29,7 +29,7 @@ This could indicate more difficulties to adjust the virtual cube inside the targ
 
 
 \subsubsection{Time per Contact}
-\label{sec:push_time_per_contact}
+\label{push_time_per_contact}
 
 On the mean time spent on each contact, there were two statistically significant effects of %
 Positioning (\anova{4}{1990}{11.5}, \pinf{0.001}, see \figref{results/Push-TimePerContact-Location-Overall-Means}) %
 
@@ -1,8 +1,8 @@
 \subsection{Grasp Task}
-\label{sec:grasp}
+\label{grasp}
 
 \subsubsection{Completion Time}
-\label{sec:grasp_tct}
+\label{grasp_tct}
 
 On the time to complete a trial, there were two statistically significant effects: %
 Positioning (\anova{4}{3990}{13.6}, \pinf{0.001}, see \figref{results/Grasp-CompletionTime-Location-Overall-Means}) %
@@ -18,7 +18,7 @@ and SW was faster than NE (\p{0.03}).
 
 
 \subsubsection{Contacts}
-\label{sec:grasp_contacts_count}
+\label{grasp_contacts_count}
 
 On the number of contacts, there were two statistically significant effects: %
 Positioning (\anova{4}{3990}{15.1}, \pinf{0.001}, see \figref{results/Grasp-Contacts-Location-Overall-Means}) %
@@ -32,7 +32,7 @@ It was also easier on SW than on NE (\pinf{0.001}), NW (\p{0.006}), or SE (\p{0.
 
 
 \subsubsection{Time per Contact}
-\label{sec:grasp_time_per_contact}
+\label{grasp_time_per_contact}
 
 On the mean time spent on each contact, there were two statistically significant effects: %
 Positioning (\anova{4}{3990}{2.9}, \p{0.02}, see \figref{results/Grasp-TimePerContact-Location-Overall-Means}) %
@@ -46,7 +46,7 @@ but longer on SW than on NE or NW (\pinf{0.001}).
 
 
 \subsubsection{Grip Aperture}
-\label{sec:grasp_grip_aperture}
+\label{grasp_grip_aperture}
 
 On the average distance between the thumb's fingertip and the other fingertips during grasping, there were two statistically significant effects: %
 
@@ -1,5 +1,5 @@
 \subsection{Discrimination of Vibration Techniques}
-\label{sec:technique_results}
+\label{technique_results}
 
 Seven participants were able to correctly discriminate between the two vibration techniques, which they described as a contact vibration (the Impact technique) and a continuous vibration (the Distance technique), respectively.
 %
@@ -17,7 +17,7 @@ Although the Distance technique provided additional feedback on the interpenetra
 
 
 \subsection{Questionnaire}
-\label{sec:questions}
+\label{questions}
 
 \begin{subfigswide}{questions}{%
 Experiment \#2. Boxplots of the questionnaire results of each vibrotactile positioning
@@ -42,7 +42,7 @@ Only significant results are reported.
 
 
 \subsubsection{Vibrotactile Rendering Rating}
-\label{sec:vibration_ratings}
+\label{vibration_ratings}
 
 There was a main effect of Positioning (\anova{4}{171}{27.0}, \pinf{0.001}).
 %
@@ -54,7 +54,7 @@ And Wrist more than Opposite (\p{0.01}) and No Vibration (\pinf{0.001}).
 
 
 \subsubsection{Positioning \x Hand Rating}
-\label{sec:positioning_hand}
+\label{positioning_hand}
 
 There were two main effects of Positioning (\anova{4}{171}{20.6}, \pinf{0.001}) and of Hand (\anova{1}{171}{12.2}, \pinf{0.001}).
 %
@@ -68,7 +68,7 @@ And Skeleton more than No Hand (\pinf{0.001}).
 
 
 \subsubsection{Workload}
-\label{sec:workload}
+\label{workload}
 
 There was a main effect of Positioning (\anova{4}{171}{3.9}, \p{0.004}).
 %
@@ -76,7 +76,7 @@ Participants found Opposite more fatiguing than Fingertips (\p{0.01}), Proximal
 
 
 \subsubsection{Usefulness}
-\label{sec:usefulness}
+\label{usefulness}
 
 There was a main effect of Positioning (\anova{4}{171}{38.0}, \p{0.041}).
 %
@@ -90,7 +90,7 @@ And Opposite more than No Vibrations (\p{0.004}).
 
 
 \subsubsection{Realism}
-\label{sec:realism}
+\label{realism}
 
 There was a main effect of Positioning (\anova{4}{171}{28.8}, \pinf{0.001}).
 %
 
@@ -1,5 +1,5 @@
 \section{Results}
-\label{sec:results}
+\label{results}
 
 \begin{subfigswide}{grasp_results}{%
 Experiment \#2: Grasp task.
 
@@ -1,5 +1,5 @@
 \section{Discussion}
-\label{sec:discussion}
+\label{discussion}
 
 We evaluated sixteen visuo-haptic renderings of the hand in the same two virtual object manipulation tasks in AR as in the first experiment, combining two vibrotactile contact techniques, provided at four delocalized positions on the hand, with the two most representative visual hand renderings established in the first experiment.
 
 
@@ -1,5 +1,5 @@
 \section{Conclusion}
-\label{sec:conclusion}
+\label{conclusion}
 
 This paper presented two human subject studies aimed at better understanding the role of visuo-haptic rendering of the hand during virtual object manipulation in OST-AR.
 %
 
@@ -26,7 +26,7 @@
 \renewcommand{\eqref}[2][\labelprefix]{Equation~\ref{#1:eq:#2}}
 \renewcommand{\figref}[2][\labelprefix]{Figure~\ref{#1:fig:#2}}
 \newcommand{\partref}[1]{Part~\ref{#1}}
-\renewcommand{\secref}[2][\labelprefix]{Section~\ref{#1:sec:#2}}
+\renewcommand{\secref}[2][\labelprefix]{Section~\ref{#1:#2}}
 \renewcommand{\tabref}[2][\labelprefix]{Table~\ref{#1:tab:#2}}
 
 % Images
 
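The macro hunk above is what makes the bare labels resolve: \secref no longer inserts a sec: infix, while \figref, \tabref, and \eqref keep their typed prefixes. A minimal self-contained sketch of the behaviour (the "main" default value of \labelprefix and the explicit prefix in the \label are assumptions for illustration, inferred from references such as \secref[visual_hand]{apparatus} elsewhere in this diff):

```latex
\documentclass{article}

% Assumed per-paper prefix; each included paper would set its own.
\newcommand{\labelprefix}{main}

% Reference macros as they stand after this commit: only \secref
% drops the type infix, so section labels are written bare.
\newcommand{\figref}[2][\labelprefix]{Figure~\ref{#1:fig:#2}}
\newcommand{\secref}[2][\labelprefix]{Section~\ref{#1:#2}}
\newcommand{\tabref}[2][\labelprefix]{Table~\ref{#1:tab:#2}}

\begin{document}

\section{Apparatus}
\label{main:apparatus}% prefix written out here only for the sketch

\secref{apparatus} resolves within the current paper, while
\secref[visual_hand]{apparatus} points into another paper's labels.

\end{document}
```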