diff --git a/2-perception/xr-perception/4-experiment.tex b/2-perception/xr-perception/4-experiment.tex
index 581f814..dfb4823 100644
--- a/2-perception/xr-perception/4-experiment.tex
+++ b/2-perception/xr-perception/4-experiment.tex
@@ -70,7 +70,7 @@ Its size was adjusted to match the real hand of the participants before the expe
 %
 %An OST-AR headset (Microsoft HoloLens~2) was chosen over a VST-AR headset because the former only adds virtual content to the real environment, while the latter streams a real-time video capture of the real environment, and one of our objectives was to directly compare a virtual environment replicating a real one, not to a video feed that introduces many other visual limitations~\cite{macedo2023occlusion}.
 %
-The visual rendering of the virtual hand and environment is described in \secref{xr_perception:xr_perception:virtual_real_alignment}.
+The visual rendering of the virtual hand and environment is described in \secref{xr_perception:virtual_real_alignment}.
 %
 %In the \level{Virtual} rendering, a cardboard mask (with holes for sensors) was attached to the headset to block the view of the real environment and simulate a VR headset (see \figref{method/headset}).
 %
@@ -86,7 +86,7 @@ In the \level{Mixed} and \level{Real} conditions, the mask had two additional ho
 %
 %This voice-coil was chosen for its wide frequency range (\qtyrange{10}{1000}{\Hz}) and its relatively low acceleration distortion, as specified by the manufacturer\footnotemark[1].
 %
-%It was driven by an audio amplifier (XY-502, not branded) connected to a computer that generated the audio signal of the textures as described in \secref{xr_perception:xr_perception:method}, using the NAudio library and the WASAPI driver in exclusive mode.
+%It was driven by an audio amplifier (XY-502, not branded) connected to a computer that generated the audio signal of the textures as described in \secref{xr_perception:method}, using the NAudio library and the WASAPI driver in exclusive mode.
 %
 %The position of the finger relative to the sheet was estimated using a webcam placed on top of the box (StreamCam, Logitech) and the OpenCV library by tracking a \qty{2}{\cm} square fiducial marker (AprilTag) glued to top of the vibrotactile actuator.
 %
@@ -98,7 +98,7 @@ Participants sat comfortably in front of the box at a distance of \qty{30}{\cm},
 %
 %A vibrotactile voice-coil actuator (HapCoil-One, Actronika) was encased in a 3D printed plastic shell with a \qty{2}{\cm} AprilTag glued to top, and firmly attached to the middle phalanx of the right index finger of the participants using a Velcro strap.
 %
-The generation of the virtual texture and the control of the virtual hand is described in \secref{xr_perception:xr_perception:method}.
+The generation of the virtual texture and the control of the virtual hand is described in \secref{xr_perception:method}.
 %
 They also wore headphones with a pink noise masking the sound of the voice-coil.
 %
@@ -136,7 +136,7 @@ Participants were not told that there was a reference and a comparison texture.
 %
 The order of presentation was randomised and not revealed to the participants.
 %
-All textures were rendered as described in \secref{xr_perception:xr_perception:texture_generation} with period $\lambda$ of \qty{2}{\mm}, but with different amplitudes $A$ to create different levels of roughness.
+All textures were rendered as described in \secref{xr_perception:texture_generation} with period $\lambda$ of \qty{2}{\mm}, but with different amplitudes $A$ to create different levels of roughness.
 %
 Preliminary studies allowed us to determine a range of amplitudes that could be felt by the participants and were not too uncomfortable, and the reference texture was chosen to be the one with the middle amplitude.