Typo
@@ -91,9 +91,7 @@ As illustrated in \figref{sensorimotor_continuum}, \textcite{jones2006human} del
 \item \emph{Gestures}, or non-prehensile skilled movements, are motor activities without constant contact with an object. Examples include pointing at a target, typing on a keyboard, accompanying speech with gestures, or signing in sign language \cite{yoon2020evaluating}.
 \end{itemize}

-\fig[0.65]{sensorimotor_continuum}{
-The sensorimotor continuum of the hand function proposed by and adapted from \textcite{jones2006human}.
-}[
+\fig[0.65]{sensorimotor_continuum}{The sensorimotor continuum of the hand function proposed by and adapted from \textcite{jones2006human}.}[%
 Functions of the hand are classified into four categories based on the relative importance of sensory and motor components.
 \protect\footnotemark
 ]

@@ -162,8 +162,8 @@ Choosing useful and efficient \UIs and interaction techniques is crucial for the
 \subsubsection{Tasks with Virtual Environments}
 \label{ve_tasks}

-\textcite[p.385]{laviolajr20173d} classify interaction techniques into three categories based on the tasks they enable users to perform: manipulation, navigation, and system control.
-\textcite{hertel2021taxonomy} proposed a taxonomy of interaction techniques specifically for immersive \AR.
+\textcite{laviolajr20173d} (p.385) classify interaction techniques into three categories based on the tasks they enable users to perform: manipulation, navigation, and system control.
+\textcite{hertel2021taxonomy} proposed a similar taxonomy of interaction techniques specifically for immersive \AR.

 The \emph{manipulation tasks} are the most fundamental tasks in \AR and \VR systems, and the building blocks for more complex interactions.
 \emph{Selection} is the identification or acquisition of a specific virtual object, \eg pointing at a target as in \figref{grubert2015multifi}, touching a button with a finger, or grasping an object with a hand.
@@ -177,7 +177,7 @@ Wayfinding is the cognitive planning of the movement, such as path finding or ro
 The \emph{system control tasks} are changes to the system state through commands or menus, such as creating, deleting, or modifying virtual objects, \eg as in \figref{roo2017onea}. They also include the input of text, numbers, or symbols.

-In this thesis we focus on manipulation tasks of virtual content directly with the hands, more specifically on touching visuo-haptic textures with a finger (\partref{perception}) and positioning and rotating virtual objects pushed and grasp by the hand.
+In this thesis we focus on manipulation tasks of virtual content directly with the hands, more specifically on touching visuo-haptic textures with a finger (\partref{perception}) and positioning and rotating virtual objects pushed and grasped by the hand (\partref{manipulation}).

 \begin{subfigs}{interaction-techniques}{Interaction techniques in \AR. }[][
 \item Spatial selection of a virtual item of an extended display using a hand-held smartphone \cite{grubert2015multifi}.
@@ -247,7 +247,7 @@ Similarly, in \secref{tactile_rendering} we described how a material property (\
 %can track the user's movements and use them as inputs to the \VE \textcite[p.172]{billinghurst2015survey}.
 Initially tracked with active sensing devices such as gloves or controllers, hands can now be tracked in real time using passive sensing (\secref{interaction_techniques}) and computer vision algorithms natively integrated into \AR/\VR headsets \cite{tong2023survey}.
-Our hands allow us to manipulate real everyday objects (\secref{grasp_types}), so virtual hand interaction techniques seem to be the most natural way to manipulate virtual objects \cite[p.400]{laviolajr20173d}.
+Our hands allow us to manipulate real everyday objects (\secref{grasp_types}), hence virtual hand interaction techniques seem to be the most natural way to manipulate virtual objects \cite[p.400]{laviolajr20173d}.

 The user's tracked hand is reconstructed as a \emph{virtual hand} model in the \VE \cite[p.405]{laviolajr20173d}.
 The simplest models represent the hand as a rigid \ThreeD object that follows the movements of the real hand with \qty{6}{DoF} (position and orientation in space) \cite{talvas2012novel}.
@@ -24,9 +24,9 @@ The system consists of three main components: the pose estimation of the tracked
 These poses are used to move and display the virtual model replicas aligned with the \RE.
 A collision detection algorithm detects contact of the virtual hand with the virtual textures.
 If so, the velocity of the finger marker ${}^c\dot{\mathbf{X}}_f$ is estimated using a discrete derivative of the position and adaptive low-pass filtering, then transformed into the texture frame $\poseFrame{t}$.
-The vibrotactile signal $s_k$ is generated by modulating the (scalar) finger velocity ${}^t\hat{\dot{X}}_f$ in the texture direction with the texture period $\lambda$ (\eqref{signal}).
-The signal is sampled at 48~kHz and sent to the voice-coil actuator via an audio amplifier.
-All computation steps except signal sampling are performed at 60~Hz and in separate threads to parallelize them.
+The vibrotactile signal $r$ is generated by modulating the (scalar) finger velocity ${}^t\hat{\dot{X}}_f$ in the texture direction with the texture period $\lambda$ (\eqref{signal}).
+The signal is sampled at \qty{48}{\kilo\hertz} and sent to the voice-coil actuator via an audio amplifier.
+All computation steps except signal sampling are performed at \qty{60}{\hertz} in separate threads to parallelize them.
 ]
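As an aside, the modulation step described in this caption can be sketched in Python. This is an illustrative sketch, not the thesis implementation: the function name, parameter names, and default values are assumptions; only the velocity-to-frequency relation, the 60 Hz update rate, and the 48 kHz sampling rate come from the text above.

```python
import numpy as np

def synthesize_texture_signal(speeds_60hz, wavelength=0.002,
                              audio_rate=48_000, update_rate=60):
    """Velocity-modulated texture vibration (illustrative sketch).

    speeds_60hz : finger-speed samples (m/s) along the texture direction,
                  one per update of the 60 Hz tracking loop.
    wavelength  : spatial period lambda of the virtual grating (m).
    Returns the vibrotactile signal sampled at `audio_rate` Hz.
    """
    samples_per_update = audio_rate // update_rate  # 800 samples per update
    phase = 0.0
    chunks = []
    for v in speeds_60hz:
        # Sliding over a grating of period lambda at speed v excites
        # a temporal frequency f = v / lambda.
        f = v / wavelength
        t = np.arange(samples_per_update) / audio_rate
        chunks.append(np.sin(2 * np.pi * f * t + phase))
        # Carry the phase over so the waveform stays continuous
        # across velocity updates.
        phase = (phase
                 + 2 * np.pi * f * samples_per_update / audio_rate) \
            % (2 * np.pi)
    return np.concatenate(chunks)

# One second of sliding at a constant 0.1 m/s over a 2 mm period
# yields a 50 Hz vibration (f = v / lambda).
sig = synthesize_texture_signal([0.1] * 60)
```

The phase accumulator is the important detail: recomputing each 800-sample chunk from phase zero would introduce audible clicks at every 60 Hz velocity update.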

 \section{Description of the System Components}

@@ -16,7 +16,7 @@ Indeed, the majority of users explained that, based on the roughness, granularit
 Several strategies were used, as some participants reported using vibration frequency and/or amplitude to match a haptic texture.
 It should be noted that the task was rather difficult (\figref{results_questions}), as participants had no prior knowledge of the textures, there were no additional visual cues such as the shape of an object, and the term \enquote{roughness} had not been used by the experimenter prior to the \level{Ranking} task.

-The correspondence analysis (\figref{results/matching_correspondence_analysis}) highlighted that participants did indeed match visual and haptic textures primarily on the basis of their perceived roughness (\percent{60} of variance), which is in line with previous perception studies on real \cite{baumgartner2013visual} and virtual \cite{culbertson2014modeling} textures.
+The correspondence analysis (\figref{results/matching_correspondence_analysis}) highlighted that participants did indeed match visual and haptic textures primarily on the basis of their perceived roughness (\percent{60} of variance), which is in line with previous perception studies on real textures \cite{baumgartner2013visual} and virtual textures \cite{culbertson2014modeling}.
 The rankings (\figref{results/ranking_mean_ci}) confirmed that the participants all perceived the roughness of haptic textures very similarly, but that there was less consensus for visual textures, which is also in line with roughness rankings for real haptic and visual textures \cite{bergmanntiest2007haptic}.
 These results made it possible to identify and name groups of textures in the form of clusters (\figref{results_clusters}), and to construct confusion matrices between these clusters and between visual texture ranks and haptic clusters (\figref{results/haptic_visual_clusters_confusion_matrices}), showing that participants consistently identified and matched haptic and visual textures.
 \percent{30} of the matching variance of the correspondence analysis was also captured by a second dimension, opposing the roughest textures (\level{Metal Mesh}, \level{Sandpaper~100}), and to a lesser extent the smoothest (\level{Coffee Filter}, \level{Sandpaper~320}), with all other textures (\figref{results/matching_correspondence_analysis}).
@@ -6,7 +6,7 @@ Touching, grasping and manipulating virtual objects are fundamental interactions
 Manipulation of virtual objects is achieved using a virtual hand interaction technique that represents the user's hand in the \VE and simulates interaction with virtual objects (\secref[related_work]{ar_virtual_hands}).
 The visual feedback of the virtual hand is a key element for interacting with and manipulating virtual objects in \VR \cite{prachyabrued2014visual,grubert2018effects}.
 Some work has also investigated the visual feedback of the virtual hand in \AR, but either not in an immersive context of virtual object manipulation \cite{blaga2017usability,yoon2020evaluating} or limited to a single visual hand augmentation \cite{piumsomboon2014graspshell,maisto2017evaluation}.
-\OST-\AR also has significant perceptual differences from \VR due the lack of mutual occlusion between the hand and the virtual object in \OST-\AR (\secref[related_work]{ar_displays}), and the inherent delays between the user's hand and the result of the interaction simulation (\secref[related_work]{ar_virtual_hands}).
+\Gls{OST}-\AR also has significant perceptual differences from \VR due to the lack of mutual occlusion between the hand and the virtual object in \OST-\AR (\secref[related_work]{ar_displays}), and the inherent delays between the user's hand and the result of the interaction simulation (\secref[related_work]{ar_virtual_hands}).

 In this chapter, we investigate the \textbf{visual rendering of the virtual hand as an augmentation of the real hand} for direct hand manipulation of virtual objects in \OST-\AR.
 To this end, we selected from the literature and compared the most popular visual hand augmentations used to interact with virtual objects in \AR.
@@ -4,7 +4,7 @@
 We evaluated six visual hand augmentations, as described in \secref{hands}, displayed on top of the real hand, in two virtual object manipulation tasks in \AR.

 During the \level{Push} task, the \level{Skeleton} hand rendering was the fastest (\figref{results/Push-CompletionTime}), as participants employed fewer and longer contacts to adjust the cube inside the target volume (\figref{results/Push-ContactsCount} and \figref{results/Push-MeanContactTime}).
-Participants consistently used few and continuous contacts for all visual hand augmentations (Fig. 3b), with only less than ten trials, carried out by two participants, quickly completed with multiple discrete touches.
+%Participants consistently used few and continuous contacts for all visual hand augmentations (\figref{results/Push-ContactsCount}), with fewer than ten trials, carried out by two participants, quickly completed with multiple discrete touches.
 However, during the \level{Grasp} task, despite no difference in \response{Completion Time}, providing no visible hand rendering (\level{None} and \level{Occlusion} renderings) led to more failed grasps or cube drops (\figref{results/Grasp-ContactsCount} and \figref{results/Grasp-MeanContactTime}).
 Indeed, participants found the \level{None} and \level{Occlusion} renderings less effective (\figref{results/Ranks-Grasp}) and less precise (\figref{results_questions}).
 To understand whether the participants' previous experience might have played a role, we also carried out an additional statistical analysis considering \VR experience as an additional between-subjects factor, \ie \VR novices vs. \VR experts (\enquote{I use it every week}, see \secref{participants}).
@@ -27,10 +27,10 @@ They are described as follows, with the corresponding abbreviation in brackets:
 When a fingertip contacts the virtual cube, we activate the corresponding vibrating actuator.
 We considered two representative contact vibration techniques, \ie two ways of rendering such contacts through vibrations:
 \begin{itemize}
-\item \level{Impact} (Impa): a \qty{200}{\ms}--long vibration burst is applied when the fingertip makes contact with the object.
+\item \level{Impact}: a \qty{200}{\ms}--long vibration burst is applied when the fingertip makes contact with the object.
 The amplitude of the vibration is proportional to the speed of the fingertip at the moment of the contact.
 This technique is inspired by the impact vibrations modelled by tapping on real surfaces, as described in \secref[related_work]{hardness_rendering}.
-\item \level{Distance} (Dist): a continuous vibration is applied whenever the fingertip is in contact with the object.
+\item \level{Distance}: a continuous vibration is applied whenever the fingertip is in contact with the object.
 The amplitude of the vibration is proportional to the interpenetration between the fingertip and the virtual cube surface.
 \end{itemize}
@@ -38,8 +38,8 @@ The implementation of these two techniques have been tuned according to the resu
 Three participants were asked to carry out a series of push and grasp tasks similar to those used in the actual experiment.
 Results showed that \percent{95} of the contacts between the fingertip and the virtual cube happened at speeds below \qty{1.5}{\m\per\s}.
 We also measured the perceived minimum amplitude to be \percent{15} (\qty{0.6}{\g}) of the maximum amplitude of the motors we used.
-For this reason, we designed the Impact vibration technique (Impa) so that contact speeds from \qtyrange{0}{1.5}{\m\per\s} are linearly mapped into \qtyrange{15}{100}{\%} amplitude commands for the motors.
-Similarly, we designed the distance vibration technique (Dist) so that interpenetrations from \qtyrange{0}{2.5}{\cm} are linearly mapped into \qtyrange{15}{100}{\%} amplitude commands for the motors, recalling that the virtual cube has an edge of \qty{5}{\cm}.
+For this reason, we designed the \level{Impact} vibration technique so that contact speeds from \qtyrange{0}{1.5}{\m\per\s} are linearly mapped into \qtyrange{15}{100}{\%} amplitude commands for the motors.
+Similarly, we designed the \level{Distance} vibration technique so that interpenetrations from \qtyrange{0}{2.5}{\cm} are linearly mapped into \qtyrange{15}{100}{\%} amplitude commands for the motors, recalling that the virtual cube has an edge of \qty{5}{\cm}.
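The two linear mappings can be written down directly. This is a sketch with illustrative function names; the numeric values (0 to 1.5 m/s, 0 to 2.5 cm, 15 % perceptual floor) are the ones reported in the hunk above.

```python
def impact_amplitude(contact_speed):
    """Impact: map contact speed (m/s) to a motor amplitude command (%).

    Speeds from 0 to 1.5 m/s map linearly onto 15-100 % of the motor's
    maximum amplitude; 15 % was the measured perception threshold.
    """
    s = min(max(contact_speed, 0.0), 1.5)  # clamp to the calibrated range
    return 15.0 + (100.0 - 15.0) * s / 1.5

def distance_amplitude(interpenetration):
    """Distance: map fingertip interpenetration (m) to an amplitude command (%).

    Depths from 0 to 2.5 cm map linearly onto 15-100 %.
    """
    d = min(max(interpenetration, 0.0), 0.025)  # clamp to the calibrated range
    return 15.0 + (100.0 - 15.0) * d / 0.025

# A contact at 0.75 m/s and an interpenetration of 1.25 cm both sit at
# the middle of their ranges and command 57.5 % amplitude.
```

Clamping before mapping keeps out-of-range inputs (fast taps above 1.5 m/s, deep interpenetrations) at the motor's maximum rather than over-driving the command.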

 \section{User Study}
 \label{method}
@@ -60,7 +60,8 @@ We considered the same two \level{Push} and \level{Grasp} tasks as described in
 \begin{itemize}
 \item left-bottom (\level{LB}) and left-front (\level{LF}) during the \level{Push} task; and
 \item right-bottom (\level{RB}), left-bottom (\level{LB}), left-front (\level{LF}) and right-front (\level{RF}) during the \level{Grasp} task.
-\end{itemize}. We considered these targets because they presented different difficulties.
+\end{itemize}
+We considered these targets because they presented different difficulties in the previous user study (\chapref{visual_hand}).
 \end{itemize}

 \begin{subfigs}{tasks}{The two manipulation tasks of the user study.}[
@@ -114,10 +115,10 @@ Preliminary tests confirmed this approach.
 \subsection{Participants}
 \label{participants}

-Twenty subjects participated in the study (mean age = 26.8, \sd{4.1}; 19~males, 1~female).
+Twenty participants were recruited for the study (19 males, 1 female), aged between 20 and 35 years (\median{26}{}, \iqr{5.3}{}).
 One was left-handed, while the other nineteen were right-handed. They all used their dominant hand during the trials.
 They all had normal or corrected-to-normal vision.
-Thirteen subjects participated also in the previous experiment.
+Thirteen participants also took part in the previous experiment.

 Participants rated their expertise (\enquote{I use it more than once a year}) with \VR, \AR, and haptics in a pre-experiment questionnaire.
 Twelve were experienced with \VR, eight with \AR, and ten with haptics.
@@ -137,5 +138,5 @@ They then rated the ten combinations of \factor{Positioning} \x \factor{Vibratio
 \item \response{Realism}: How realistic was each vibrotactile rendering?
 \end{itemize}

-Finally, they rated the ten combinations of \factor{Positioning} \x factor{Hand} on a 7-item Likert scale (1=Not at all, 7=Extremely):
+Finally, they rated the ten combinations of \factor{Positioning} \x \factor{Hand} on a 7-item Likert scale (1=Not at all, 7=Extremely):
 \response{Positioning \x Hand Rating}: How much do you like each combination of vibrotactile location for each visual hand rendering?
@@ -33,7 +33,7 @@ This showed different strategies to adjust the cube inside the target volume, wi
 It was also shorter with \level{None} than with \level{Skeleton} (\percent{-9}, \pinf{0.001}).
 This indicates, as in \chapref{visual_hand}, more confidence with a visual hand augmentation.

-\begin{subfigs}{push_results}{Results of the grasp task performance metrics.}[
+\begin{subfigs}{push_results}{Results of the push task performance metrics.}[
 Geometric means with bootstrap \percent{95} \CI for each vibrotactile positioning (a, b and c) or visual hand augmentation (d)
 and Tukey's \HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
 ][
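The "geometric means with bootstrap 95% CI" reported in this caption can be computed as in the following sketch (illustrative code and sample data, not the actual analysis scripts of the thesis):

```python
import numpy as np

def geometric_mean_bootstrap_ci(samples, n_boot=10_000, seed=0):
    """Geometric mean of positive data with a percentile bootstrap 95% CI.

    Completion times are positive and right-skewed, which is why the
    geometric mean (the exponential of the mean of the logs) is a common
    summary statistic for them.
    """
    x = np.asarray(samples, dtype=float)
    rng = np.random.default_rng(seed)
    gmean = np.exp(np.mean(np.log(x)))
    # Resample with replacement and recompute the statistic n_boot times.
    idx = rng.integers(0, len(x), size=(n_boot, len(x)))
    boot = np.exp(np.mean(np.log(x[idx]), axis=1))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    return gmean, (lo, hi)

# Hypothetical completion times in seconds.
times = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.0]
g, (lo, hi) = geometric_mean_bootstrap_ci(times)
```

The percentile bootstrap makes no normality assumption, which matches the use of a geometric mean for skewed timing data.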
@@ -60,8 +60,8 @@ And \level{Opposite} more than \level{Nowhere} (\p{0.03}).
 \begin{subfigs}{results_questions}{Boxplots of the questionnaire results for each vibrotactile positioning.}[
 Pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
-Higher is better for \textbf{(a)} vibrotactile rendering rating, \textbf{(c)} usefulness and \textbf{(c)} fatigue.
-Lower is better for \textbf{(d)} workload.
+Higher is better for \textbf{(a)} vibrotactile rendering rating, \textbf{(c)} usefulness and \textbf{(d)} realism.
+Lower is better for \textbf{(b)} workload.
 ]
 \subfig[0.24]{results/Question-Vibration Rating-Positioning-Overall}
 \subfig[0.24]{results/Question-Workload-Positioning-Overall}
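The pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment mentioned in this caption can be sketched as follows. The function name, the condition names, and the Likert ratings are made up for illustration; only the statistical procedure comes from the caption.

```python
from itertools import combinations

import numpy as np
from scipy.stats import wilcoxon

def pairwise_wilcoxon_holm(ratings):
    """Pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment.

    ratings : dict mapping condition name -> paired per-participant scores.
    Returns a dict {(cond_a, cond_b): Holm-adjusted p-value}.
    """
    pairs = list(combinations(ratings, 2))
    raw = [wilcoxon(ratings[a], ratings[b]).pvalue for a, b in pairs]
    order = np.argsort(raw)
    m = len(raw)
    adjusted = np.empty(m)
    running_max = 0.0
    for rank, i in enumerate(order):
        # Holm: the k-th smallest raw p-value is multiplied by (m - k + 1),
        # with monotonicity enforced and results capped at 1.
        running_max = max(running_max, (m - rank) * raw[i])
        adjusted[i] = min(1.0, running_max)
    return dict(zip(pairs, adjusted))

# Made-up 7-point Likert ratings for three vibrotactile positionings.
ratings = {
    "Index":    [5, 6, 5, 7, 6, 5, 6, 7, 5, 6],
    "Opposite": [3, 4, 2, 4, 3, 3, 4, 2, 3, 4],
    "Nowhere":  [4, 5, 4, 6, 5, 4, 5, 6, 4, 5],
}
adjusted_p = pairwise_wilcoxon_holm(ratings)
```

Holm's step-down procedure controls the family-wise error rate like Bonferroni but is uniformly more powerful, which is why it is preferred for small families of pairwise tests.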
@@ -26,7 +26,7 @@ This seemed inversely correlated with the performance, except for the \level{Now
 Considering the two tasks, no clear difference in performance or appreciation was found between the two contact vibration techniques.
 While the majority of participants discriminated between the two techniques, only a minority identified them correctly (\secref{technique_results}).
-It seemed that the Impact technique was sufficient to provide contact information compared to the \level{Distance} technique, which provided additional feedback on interpenetration, as reported by participants.
+It seemed that the \level{Impact} technique was sufficient to provide contact information compared to the \level{Distance} technique, which provided additional feedback on interpenetration, as reported by participants.

 No difference in performance was found between the two visual hand augmentations, except for the \level{Push} task, where the \level{Skeleton} hand rendering again resulted in longer contacts.
 Additionally, the \level{Skeleton} rendering was appreciated and perceived as more effective than having no visual hand augmentation, confirming the results of \chapref{visual_hand}.
@@ -46,5 +46,5 @@ On the one hand, participants behave differently when the haptic rendering was g
 This behavior has likely given them a better experience of the tasks and more confidence in their actions, as well as leading to a lower interpenetration/force applied to the cube \cite{pacchierotti2015cutaneous}.
 On the other hand, the unfamiliarity of the contralateral hand positioning (\level{Opposite}) caused participants to spend more time understanding the haptic stimuli, which might have made them more focused on performing the task.
 In terms of the contact vibration technique, the continuous vibration based on finger interpenetration (\level{Distance}) did not make a difference to performance, although it provided more information.
-Participants felt that vibration bursts were sufficient (\level{Distance}) to confirm contact with the virtual object.
+Participants felt that vibration bursts were sufficient (\level{Impact}) to confirm contact with the virtual object.
 Finally, it was interesting to note that the visual hand augmentation was appreciated but felt less necessary when provided together with vibrotactile hand rendering, as the latter was deemed sufficient for acknowledging the contact.
@@ -103,10 +103,10 @@ The role of visuo-haptic texture augmentation should also be evaluated in more c
 \paragraph{Specificities of Direct Touch.}

 The haptic textures used were recordings and models of the vibrations of a hand-held probe sliding over real surfaces.
-We generated the vibrotactile textures only from finger speed \cite{culbertson2015should}, but the perceived roughness of real textures also depends on other factors such as the contact force, angle, posture or surface of the contact \cite{schafer2017transfer}.
+We generated the vibrotactile textures from the velocity magnitude of the finger, but the perceived roughness of real textures also depends on other factors such as the contact force, angle, posture or surface of the contact \cite{schafer2017transfer}.
 The respective importance of these factors on haptic texture perception is not yet fully understood \cite{richardson2022learning}.
 It would be interesting to determine the importance of these factors on the perceived realism of virtual vibrotactile textures in the context of bare-finger touch.
-We finger based captures of real textures should also be considered \cite{balasubramanian2024sens3}.
+Finger-based captures of real textures should also be considered \cite{balasubramanian2024sens3}.
 Finally, the virtual texture models should also be adaptable to individual sensitivities \cite{malvezzi2021design,young2020compensating}.

 \subsection*{Visual Augmentation of the Hand for Manipulating Virtual Objects in AR}
@@ -5,7 +5,7 @@
 \textbf{Erwan Normand}, Claudio Pacchierotti, Eric Marchand, and Maud Marchal.
 \enquote{Visuo-Haptic Rendering of the Hand during 3D Manipulation in Augmented Reality}.
-In: \textit{IEEE Transactions on Haptics}. 27.4 (2024), pp. 2481--2487.
+In: \textit{IEEE Transactions on Haptics (ToH)}. 27.4 (2024), pp. 2481--2487.
 %\textsc{doi}: \href{https://doi.org/10/gtqcfz}{10/gtqcfz}

 \section*{International Conferences}
@@ -18,5 +18,5 @@ In: \textit{EuroHaptics}. Lille, France, July 2024. pp. 469--484.
 \noindentskip
 \textbf{Erwan Normand}, Claudio Pacchierotti, Eric Marchand, and Maud Marchal.
 \enquote{How Different Is the Perception of Vibrotactile Texture Roughness in Augmented versus Virtual Reality?}.
-In: \textit{ACM Symposium on Virtual Reality Software and Technology}. Trier, Germany, October 2024. pp. 287--296.
+In: \textit{ACM Symposium on Virtual Reality Software and Technology (VRST)}. Trier, Germany, October 2024. pp. 287--296.
 %\textsc{doi}: \href{https://doi.org/10/g5rr49}{10/g5rr49}
@@ -5,7 +5,7 @@
 \selectlanguage{french}

-In this thesis manuscript, we show how \emph{augmented reality (AR)}, which integrates virtual visual content into the perception of the real world, and \emph{wearable haptics}, which provides tactile sensations on the skin, can improve hand interaction with virtual objects.
+In this thesis manuscript, we show how \emph{augmented reality (AR)}, which integrates virtual visual content into the perception of the real world, and \emph{wearable haptics}, which provides tactile sensations on the skin, can improve hand interactions with virtual and augmented objects.
 Our goal is to enable users to perceive and manipulate wearable visuo-haptic augmentations, as if they were real, directly with their hands.

 \section{Introduction}
@@ -31,7 +31,7 @@ An important aspect of the illusion of AR (and of VR) is the \emph{plausibili
 In this context, we define an \emph{AR system} as the set of hardware devices (input devices, sensors, displays, and haptic devices) and software (tracking, simulation, and rendering) that allow the user to interact with the augmented environment.
 AR headsets are the most promising display technology, as they are wearable, provide the user with an \emph{immersive} augmented environment, and leave the hands free to interact \cite{hertel2021taxonomy}.
 Haptic feedback is then indispensable to ensure plausible and coherent interaction with the virtual visual content.
-This is why wearable haptics seems pARticularly well suited to immersive AR.
+This is why wearable haptics seems particularly well suited to immersive AR.

 \subsectionstarbookmark{Challenges of Wearable Visuo-Haptic Augmented Reality}