Add packages used for stats
For each trial of the \level{Matching} task, the chosen \response{Haptic Texture} was recorded.
The \response{Completion Time} was also measured as the time between the visual texture display and the haptic texture selection.
For each modality of the \level{Ranking} task, the \response{Rank} of each of the visual, haptic, or visuo-haptic pairs of the textures presented was recorded.
\noindentskip After each of the two tasks, participants answered the following 7-point Likert scale questions (1=Not at all, 7=Extremely):
\begin{itemize}
\item \response{Haptic Difficulty}: How difficult was it to differentiate the tactile textures?
\item \response{Visual Difficulty}: How difficult was it to differentiate the visual textures?
\item \response{Uncomfort}: How uncomfortable was it to use the haptic device?
\end{itemize}
\noindentskip In an open question, participants also commented on their strategy for completing the \level{Matching} task (\enquote{How did you associate the tactile textures with the visual textures?}) and the \level{Ranking} task (\enquote{How did you rank the textures?}).
\comans{JG}{I suggest to also report on [...] the software packages used for statistical analysis (this holds also for the subsequent chapters).}{This has been added to all chapters where necessary.}
The results were analyzed using R (v4.4) and the packages \textit{afex} (v1.4), \textit{ARTool} (v0.11), \textit{corrr} (v0.4), \textit{FactoMineR} (v2.11), \textit{lme4} (v1.1), and \textit{performance} (v0.13).
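Among these packages, \textit{ARTool} implements the Aligned Rank Transform (ART) for nonparametric factorial analyses: responses are aligned for each effect and then replaced by ranks (ties receiving the average, or mid, rank) before a standard ANOVA is fitted. As a purely illustrative sketch of that ranking step (this is not ARTool's API, and the function name is hypothetical):

```python
def midranks(values):
    """Rank values from 1 to n; tied values share the average (mid) rank,
    as in the ranking step of the Aligned Rank Transform."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        # Find the run of tied values starting at position i.
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        # Average of the 1-based positions i+1 .. j+1.
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

print(midranks([2.5, 1.0, 1.0, 4.0]))  # the tied 1.0s share a mid rank
```

The aligned-and-ranked responses can then be analyzed with an ordinary linear mixed model, which is what ARTool delegates to \textit{lme4} for.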
After each \factor{Visual Rendering} block of trials, participants rated their e[...]
They also assessed their workload with the NASA Task Load Index (\response{NASA-TLX}) questionnaire after each block of trials (\tabref{questions2}).
For all questions, participants were shown only labels (\eg \enquote{Not at all} or \enquote{Extremely}) and not the actual scale values (\eg 1 or 5) \cite{muller2014survey}.
The results were analyzed using R (v4.4) and the packages \textit{afex} (v1.4), \textit{ARTool} (v0.11), \textit{MixedPsy} (v1.2), \textit{lme4} (v1.1), and \textit{performance} (v0.13).
\newcommand{\scalegroup}[2]{\multirow{#1}{1\linewidth}{#2}}
\afterpage{
\begin{tabwide}{questions1}
Participants also rated each visual hand augmentation individually on six questions:
\end{itemize}
Finally, participants were encouraged to comment aloud on the conditions throughout the experiment, as well as in an open-ended question at its end, to gather additional qualitative information.
The results were analyzed using R (v4.4) and the packages \textit{afex} (v1.4), \textit{ARTool} (v0.11), and \textit{performance} (v0.13).
They then rated the ten combinations of \factor{Positioning} \x \factor{Vibration} [...]
Finally, they rated the ten combinations of \factor{Positioning} \x \factor{Hand} on a 7-point Likert scale (1=Not at all, 7=Extremely):
\response{Positioning \x Hand Rating}: How much do you like each combination of vibrotactile location for each visual hand rendering?
The results were analyzed using R (v4.4) and the packages \textit{afex} (v1.4), \textit{ARTool} (v0.11), and \textit{performance} (v0.13).