diff --git a/1-introduction/related-work/3-augmented-reality.tex b/1-introduction/related-work/3-augmented-reality.tex index 25c86ef..df87c7a 100644 --- a/1-introduction/related-work/3-augmented-reality.tex +++ b/1-introduction/related-work/3-augmented-reality.tex @@ -42,22 +42,22 @@ Yet, most of the research have focused on visual augmentations, and the term \AR \label{ar_applications} Advances in technology, research and development have enabled many usages of \AR, including medicine, education, industrial, navigation, collaboration and entertainment applications~\cite{dey2018systematic}. -For example, \AR can help surgeons to visualize \ThreeD images of the brain overlaid on the patient's head prior or during surgery, \eg in \figref{watanabe2016transvisible}~\cite{watanabe2016transvisible}, or improve the learning of students with complex concepts and phenomena such as optics or chemistry~\cite{bousquet2024reconfigurable}. -It can also guide workers in complex tasks, such as assembly, maintenance or verification, \eg in \figref{hartl2013mobile}~\cite{hartl2013mobile}, reinvent the way we interact with desktop computers, \eg in \figref{lee2013spacetop}~\cite{lee2013spacetop}, or can create complete new forms of gaming or tourism experiences, \eg in \figref{roo2017inner}~\cite{roo2017inner}. +For example, \AR can help surgeons to visualize \ThreeD images of the brain overlaid on the patient's head prior to or during surgery~\cite{watanabe2016transvisible} (\figref{watanabe2016transvisible}), or improve the learning of students with complex concepts and phenomena such as optics or chemistry~\cite{bousquet2024reconfigurable}. +It can also guide workers in complex tasks, such as assembly, maintenance or verification~\cite{hartl2013mobile} (\figref{hartl2013mobile}), reinvent the way we interact with desktop computers~\cite{lee2013spacetop} (\figref{lee2013spacetop}), or create completely new forms of gaming or tourism experiences~\cite{roo2017inner} (\figref{roo2017inner}). Most of (visual) \AR/\VR experience can now be implemented with commercially available hardware and software solutions, in particular for tracking, rendering and display. Yet, the user experience in \AR is still highly dependent on the display used. \begin{subfigs}{ar_applications}{Examples of \AR applications. }[ \item Neurosurgery \AR visualization of the brain on a patient's head~\cite{watanabe2016transvisible}. + %\item HOBIT is a spatial, tangible \AR table simulating an optical bench for educational experimentations~\cite{bousquet2024reconfigurable}. + \item \AR can interactively guide users in document verification tasks by recognizing the document and comparing it with virtual references~\cite{hartl2013mobile}. \item SpaceTop is transparent \AR desktop computer featuring direct hand manipulation of \ThreeD content~\cite{lee2013spacetop}. - \item \AR can interactively guide in document verification tasks by recognizing and comparing with virtual references - ~\cite{hartl2013mobile}. - \item Inner Garden is a tangible, spatial \AR zen garden for relaxation and meditation~\cite{roo2017inner}. + \item Inner Garden is a spatial \AR zen garden made of real sand visually augmented to create a mini world that can be reshaped by hand~\cite{roo2017inner}.
] - \subfigsheight{47mm} + \subfigsheight{41mm} \subfig{watanabe2016transvisible} - \subfig{lee2013spacetop} \subfig{hartl2013mobile} + \subfig{lee2013spacetop} \subfig{roo2017inner} \end{subfigs} @@ -125,46 +125,46 @@ As presence, \SoE in \AR is a recent topic and little is known about its percept \subsection{Direct Hand Manipulation in AR} \label{ar_interaction} -Both \AR/\VR and haptic systems are able to render \VOs and environments as sensations displayed to the user's senses. -A user must also be able in turn to manipulate the \VOs and environments to complete the loop interaction (\figref[introduction]{interaction-loop}), \eg through a hand-held controller, a tangible object, or even directly with the hands. -An \emph{interaction technique} is then required to map the user inputs to actions on the \VE~\cite{laviola20173d}. +Both \AR/\VR and haptic systems render \VOs as visual or haptic sensations that are presented to the user's senses. +In turn, a user must be able to manipulate the \VOs and environments to complete the interaction loop (\figref[introduction]{interaction-loop}), \eg through a hand-held controller, a tangible object, or even directly with the hands. +An \emph{interaction technique} is then required to map the user input to actions on the \VE~\cite{laviola20173d}. \subsubsection{User Interfaces and Interaction Techniques} \label{interaction_techniques} -For a user to interact with a computer system, they first perceive the state of the system and then act on it with inputs through a \UI. -An input \UI can be either an \emph{active sensing}, physically held or worn device, such as a mouse, a touchscreen, or a hand-held controller, or a \emph{passive sensing}, not requiring any physical contact, such as eye trackers, voice recognition, or hand tracking. -The sensors' information gathered by the \UI are then translated into actions within the computer system by an interaction technique. -For example, a cursor on a screen can be moved either with a mouse or with arrow keys on a keyboard, or a two-finger swipe on a touchscreen can be used to scroll or zoom an image. +For a user to interact with a computer system, they first perceive the state of the system and then act upon it through an input \UI. +Input interfaces can be either \emph{active sensing}, physically held or worn devices, such as a mouse, a touchscreen, or a hand-held controller, or \emph{passive sensing}, which do not require physical contact, such as eye trackers, voice recognition, or hand tracking. +The information gathered from the sensors by the \UI is then translated into actions within the computer system by an interaction technique. +For example, a cursor on a screen can be moved using either a mouse or the arrow keys on a keyboard, or a two-finger swipe on a touchscreen can be used to scroll or zoom an image. Choosing useful and efficient \UIs and interaction techniques is crucial for the user experience and the tasks that can be performed within the system~\cite{laviola20173d}. \fig[0.5]{interaction-technique}{An interaction technique map user inputs to actions within a computer system. Adapted from \textcite{billinghurst2005designing}.} -\subsubsection{Tasks} +\subsubsection{Tasks with Virtual Environments} \label{ve_tasks} \textcite{laviola20173d} classify interaction techniques into three categories based on the tasks they enable users to perform: manipulation, navigation, and system control.
-\textcite{hertel2021taxonomy} proposed a revised taxonomy of interaction techniques specifically for immersive \AR. +\textcite{hertel2021taxonomy} proposed a taxonomy of interaction techniques specifically for immersive \AR. -The \emph{manipulation tasks} are the most fundamental tasks in \AR and \VR systems, and the basic blocks for more complex interactions. +The \emph{manipulation tasks} are the most fundamental tasks in \AR and \VR systems, and the building blocks for more complex interactions. \emph{Selection} is the identification or acquisition of a specific \VO, \eg pointing at a target as in \figref{grubert2015multifi}, touching a button with a finger, or grasping an object with a hand. -\emph{Positioning} and \emph{rotation} of a selected object are respectively the change of its position and orientation in \ThreeD space. +\emph{Positioning} and \emph{rotation} of a selected object are the changes of its position and orientation in \ThreeD space, respectively. It is also common to \emph{resize} a \VO to change its size. -These three tasks are geometric (rigid) manipulations of the object: they do not change its shape. +These three operations are geometric (rigid) manipulations of the object: they do not change its shape. The \emph{navigation tasks} are the movements of the user within the \VE. Travel is the control of the position and orientation of the viewpoint in the \VE, \eg physical walking, velocity control, or teleportation. -Wayfinding is the cognitive planning of the movement such as pathfinding or route following (\figref{grubert2017pervasive}). +Wayfinding is the cognitive planning of the movement, such as pathfinding or route following (\figref{grubert2017pervasive}). -The \emph{system control tasks} are changes in the system state through commands or menus such as creation, deletion, or modification of objects, \eg as in \figref{roo2017onea}. It is also the input of text, numbers, or symbols. +The \emph{system control tasks} are changes to the system state through commands or menus such as creating, deleting, or modifying \VOs, \eg as in \figref{roo2017onea}. They also include the input of text, numbers, or symbols. \begin{subfigs}{interaction-techniques}{Interaction techniques in \AR. }[ \item Spatial selection of virtual item of an extended display using a hand-held smartphone~\cite{grubert2015multifi}. \item Displaying as an overlay registered on the \RE the route to follow~\cite{grubert2017pervasive}. \item Virtual drawing on a tangible object with a hand-held pen~\cite{roo2017onea}. - \item Simultaneous Localization and Mapping (SLAM) techniques such as KinectFusion~\cite{newcombe2011kinectfusion} reconstruct the \RE in real time and enables to register the \VE in it. + \item Simultaneous Localization and Mapping (SLAM) techniques such as KinectFusion~\cite{newcombe2011kinectfusion} reconstruct the \RE in real time and enable the \VE to be registered in it. ] \subfigsheight{36mm} \subfig{grubert2015multifi} @@ -177,55 +177,58 @@ The \emph{system control tasks} are changes in the system state through commands \subsubsection{Reducing the Physical-Virtual Gap} \label{physical-virtual-gap} -In \AR and \VR, the state of the system is displayed to the user as a \VE seen spatially in 3D. -Within an immersive and portable \AR system, this \VE is experienced at a 1:1 scale and as an integral part of the \RE. +In \AR and \VR, the state of the system is displayed to the user as a \ThreeD spatial \VE.
+In an immersive and portable \AR system, this \VE is experienced at a 1:1 scale and as an integral part of the \RE. The rendering gap between the physical and virtual elements, as described on the interaction loop in \figref[introduction]{interaction-loop}, is thus experienced as very narrow or even not consciously perceived by the user. -This manifests as a sense of presence of the virtual, as presented in \secref{ar_presence}. +This manifests as a sense of presence of the virtual, as described in \secref{ar_presence}. -As the physical-virtual rendering gap is reduced, we could expect a similar and seamless interaction with the \VE as with a physical environment that \cite{jacob2008realitybased} called \emph{reality based interactions}. +As the physical-virtual rendering gap is reduced, we could expect a similarly seamless interaction with the \VE as with a physical environment, which \textcite{jacob2008realitybased} called \emph{reality-based interactions}. As of today, an immersive \AR system track itself with the user in \ThreeD, using tracking sensors and pose estimation algorithms~\cite{marchand2016pose}, \eg as in \figref{newcombe2011kinectfusion}. -It enables to register the \VE with the \RE and the user simply moves themselves to navigate within the virtual content. +It enables the \VE to be registered with the \RE, and the user simply moves to navigate within the virtual content. %This tracking and mapping of the user and \RE into the \VE is named the \enquote{extent of world knowledge} by \textcite{skarbez2021revisiting}, \ie to what extent the \AR system knows about the \RE and is able to respond to changes in it. -However, direct hand manipulation of the virtual content is a challenge that requires specific interaction techniques~\cite{billinghurst2021grand}. -Such \emph{reality based interaction}~\cite{jacob2008realitybased} in immersive \AR is often achieved using two interaction techniques: \emph{tangible objects} and \emph{virtual hands}~\cite{billinghurst2015survey,hertel2021taxonomy}. +However, direct hand manipulation of virtual content is a challenge that requires specific interaction techniques~\cite{billinghurst2021grand}. +It is often achieved using two interaction techniques: \emph{tangible objects} and \emph{virtual hands}~\cite{billinghurst2015survey,hertel2021taxonomy}. \subsubsection{Manipulating with Tangibles} \label{ar_tangibles} -As \AR integrates visual virtual content into the \RE perception, it can involve real surrounding objects as a \UI: to visually augment them, \eg by superimposing a visual texture~\cite{gupta2020replicate}, and to use them as physical proxies to support the interaction with \VOs~\cite{ishii1997tangible}. -According to \textcite{billinghurst2005designing}, each \VO is coupled with a tangible object, and the \VO is physically manipulated via the tangible object, providing a direct, efficient and seamless interactions with both the real and virtual content. +As \AR integrates visual virtual content into \RE perception, it can involve real surrounding objects as \UIs: to visually augment them, \eg by superimposing visual textures~\cite{roo2017inner} (\figref{roo2017inner}), and to use them as physical proxies to support interaction with \VOs~\cite{ishii1997tangible}. +According to \textcite{billinghurst2005designing}, each \VO is coupled to a tangible object, and the \VO is physically manipulated through the tangible object, providing a direct, efficient and seamless interaction with both the real and virtual content.
This is a technique similar to mapping a physical mouse movement to a virtual cursor on a screen. -Methods have been developed to automatically pair and adapt the \VOs to render with available tangibles of similar shape and size~\cite{simeone2015substitutional,hettiarachchi2016annexing}. -The issue with these \enquote{space-multiplexed} interfaces is the high number and the diversity of tangibles required. -An alternative is to use a single \enquote{universal} tangible object, such as a cube~\cite{issartel2016tangible} or a sphere~\cite{englmeier2020tangible}, like a hand-held controller -Such \enquote{time-multiplexed} interfaces require interaction techniques to allow the user to pair the tangible with any \VO, \eg by placing the tangible into the \VO and pressing the fingers~\cite{issartel2016tangible}, similar to a real grasp (\secref{grasp_types}). +Methods have been developed to automatically pair and adapt the \VOs to be rendered with available tangibles of similar shape and size~\cite{hettiarachchi2016annexing,jain2023ubitouch} (\figref{jain2023ubitouch}). +The issue with these \enquote{space-multiplexed} interfaces is the high number and variety of tangibles required. +An alternative is to use a single \enquote{universal} tangible object, such as a cube~\cite{issartel2016tangible} or a sphere~\cite{englmeier2020tangible}, used like a hand-held controller. +These \enquote{time-multiplexed} interfaces require interaction techniques that allow the user to pair the tangible with any \VO, \eg by placing the tangible into the \VO and pressing it with the fingers~\cite{issartel2016tangible} (\figref{issartel2016tangible}), similar to a real grasp (\secref{grasp_types}). Still, the virtual visual rendering and the tangible haptic sensations can be inconsistent. -When performing a precision grasp (\secref{grasp_types}) in \VR, only a certain relative difference between the tangible and the \VO is noticeable: \percent{6} for the object width, \percent{44} for the surface orientation, and \percent{67} for the surface curvature~\cite{detinguy2019how}. -Similarly, in immersive \OST-\AR, +This is especially the case in \OST-\AR, where the \VOs are slightly transparent, allowing the paired tangibles to be seen through them. +Yet, in pick-and-place tasks, differences in size~\cite{kahl2021investigation} (\figref{kahl2021investigation}) and shape~\cite{kahl2023using} (\figref{kahl2023using}) between the tangibles and the \VOs did not affect user performance or presence, and small variations (\percent{\sim 10} in size) were not even noticed by the users. +This suggests the feasibility of using simplified tangibles in \AR whose spatial properties (\secref{spatial_properties}) abstract those of the \VOs. +Similarly, we described in \secref{tactile_rendering} how a material property (\secref{object_properties}) of a touched tangible can be modified using wearable haptic devices~\cite{detinguy2018enhancing,salazar2020altering}: such an approach could be used to render coherent visuo-haptic material perceptions when directly touching \VOs with the hand in \AR. -Triple problème : -il faut un tangible par objet, problème de l'association qui ne fonctionne pas toujours (\cite{hettiarachchi2016annexing}) et du nombre de tangibles à avoir -et l'objet visuellement peut ne pas correspondre aux sensations haptiques du tangible manipulé (\cite{detinguy2019how}).
-C'est pourquoi utiliser du wearable pour modifier les sensations cutanées du tangible est une solution qui fonctionne en VR (\cite{detinguy2018enhancing,salazar2020altering}) et pourrait être adaptée à la RA. -Mais, spécifique à la RA vs RV, le tangible et la main sont visibles, du moins partiellement, même si caché par un objet virtuel : comment va fonctionner l'augmentation haptique en RA vs RV ? Biais perceptuels ? Le fait de voir toucher avec sa propre main le tangible vs en RV où il est caché, donc illusion potentiellement plus forte en RV ? +\begin{subfigs}{ar-tangibles}{Manipulating \VOs with tangibles. }[ + \item Ubi-Touch pairs the movements and screw interaction of a virtual drill with a real vaporizer held by the user~\cite{jain2023ubitouch}. + \item A tangible cube that can be moved into the \VE and used to grasp and manipulate \VOs~\cite{issartel2016tangible}. + \item Size and + \item shape differences between a tangible and a \VO are acceptable for manipulation in \AR~\cite{kahl2021investigation,kahl2023using}. + ] + \subfigsheight{37.5mm} + \subfig{jain2023ubitouch} + \subfig{issartel2016tangible} + \subfig{kahl2021investigation} + \subfig{kahl2023using} +\end{subfigs} \subsubsection{Manipulating with Virtual Hands} \label{ar_virtual_hands} -Natural UI allow the user to use their body movements directly as inputs with the \VE~\cite{billinghurst2015survey}. +Natural UIs allow the user to use their body movements directly as inputs to the \VE~\cite{billinghurst2015survey}. Our hands allow us to manipulate real everyday objects with both strength and precision (\secref{grasp_types}), hence virtual hand interaction techniques seem the most natural way to manipulate virtual objects~\cite{laviola20173d}. Initially tracked by active sensing devices such as gloves or controllers, it is now possible to track hands in real time using cameras and computer vision algorithms natively integrated into \AR/\VR headsets~\cite{tong2023survey}. -La main de l'utilisateur est donc suivie et reconstruite dans le \VE sous forme d'une \emph{main virtuelle}~\cite{billinghurst2015survey,laviola20173d}. -Les modèles les plus simples représentent la main sous forme d'un objet 3D rigide suivant les mouvements de la main réelle avec \qty{6}{\DoF} (position et orientation dans l'espace)~\cite{talvas2012novel}. -Une alternative est de représenter seulement les bouts des doigts, as in \figref{lee2007handy}, voire de représenter la main sous forme d'un nuage de points (\figref{hilliges2012holodesk_1}). -Enfin, les techniques les plus courantes représentent l'ensemble du squelette de la main sous forme d'un modèle cinématique articulé (\secref{hand_anatomy}): -Chaque phalange virtuelle est alors représentée avec certain \DoFs de rotations par rapport à la phalange précédente~\cite{borst2006spring}. - The user's hand is therefore tracked and reconstructed as a \emph{virtual hand} model in the \VE ~\cite{billinghurst2015survey,laviola20173d}. The simplest models represent the hand as a rigid 3D object that follows the movements of the real hand with \qty{6}{\DoF} (position and orientation in space)~\cite{talvas2012novel}. An alternative is to model only the fingertips (\figref{lee2007handy}) or the whole hand (\figref{hilliges2012holodesk_1}) as points.
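+For articulated hand skeleton models, where each virtual phalanx is given rotation \DoFs relative to the previous one (\secref{hand_anatomy}), an illustrative formulation (a generic sketch, not taken from any cited work) is:
+\[
+\mathbf{T}_i = \mathbf{T}_{\mathrm{wrist}} \prod_{j=1}^{i} \mathbf{T}_{j}(\theta_j),
+\]
+where $\mathbf{T}_i$ is the pose of the $i$-th phalanx of a finger, $\mathbf{T}_{\mathrm{wrist}}$ the tracked wrist pose, and each joint transform $\mathbf{T}_{j}(\theta_j)$ combines the fixed offset along the previous phalanx with a rotation by the joint angles $\theta_j$, using one or two rotational \DoFs per joint.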
@@ -238,9 +241,9 @@ Physics-based techniques simulate forces at the contact points between the virtu In particular, \textcite{borst2006spring} have proposed an articulated kinematic model in which each phalanx is a rigid body simulated with the god-object~\cite{zilles1995constraintbased} method: the virtual phalanx follows the movements of the real phalanx, but remains constrained to the surface of the virtual objects during contact. The forces acting on the object are calculated as a function of the distance between the real and virtual hands (\figref{borst2006spring}). More advanced techniques simulate the friction phenomena described in \secref{friction}~\cite{talvas2013godfinger} and finger deformations~\cite{talvas2015aggregate}, allowing highly accurate and realistic interactions, but which can be difficult to compute in real time. -\begin{subfigs}{virtual-hand}{Virtual hand interactions in \AR. }[ +\begin{subfigs}{virtual-hand}{Manipulating \VOs with virtual hands. }[ \item A fingertip tracking that enables to select a \VO by opening the hand~\cite{lee2007handy}. - \item Physics-based hand-object interactions with a virtual hand made of numerous many small rigid-body spheres~\cite{hilliges2012holodesk}. + \item Physics-based hand-object manipulation with a virtual hand made of many small rigid-body spheres~\cite{hilliges2012holodesk}. \item Grasping a through gestures when the fingers are detected as opposing on the \VO~\cite{piumsomboon2013userdefined}. \item A kinematic hand model with rigid-body phalanges (in beige) following the real tracked hand (in green) but kept physically constrained to the \VO. Applied force are displayed as red arrows~\cite{borst2006spring}. ] @@ -254,7 +257,7 @@ However, the lack of physical constraints on the user's hand movements makes manipulation actions tiring~\cite{hincapie-ramos2014consumed}. While the fingers of the user traverse the virtual object, a physics-based virtual hand remains in contact with the object, a discrepancy that may degrade the user's performance in \VR~\cite{prachyabrued2012virtual}. Finally, in the absence of haptic feedback on each finger, it is difficult to estimate the contact and forces exerted by the fingers on the object during grasping and manipulation~\cite{maisto2017evaluation,meli2018combining}. -While a visual rendering of the virtual hand in \VR can compensate for these issues~\cite{prachyabrued2014visual},, the visual and haptic rendering of the virtual hand, or their combination, in \AR is under-researched. +While a visual rendering of the virtual hand in \VR can compensate for these issues~\cite{prachyabrued2014visual}, the visual and haptic rendering of the virtual hand, or their combination, in \AR is under-researched. \subsection{Visual Rendering of Hands in AR} @@ -294,7 +297,7 @@ Taken together, these results suggest that a visual hand rendering in \AR could %\textcite{saito2021contact} found that masking the real hand with a textured 3D opaque virtual hand did not improve performance in a reach-to-grasp task but displaying the points of contact on the \VO did. %To the best of our knowledge, evaluating the role of a visual rendering of the hand displayed \enquote{and seen} directly above real tracked hands in immersive OST-AR has not been explored, particularly in the context of \VO manipulation. -\begin{subfigs}{visual-hands}{Visual hand renderings of virtual hands in \AR. 
}[ +\begin{subfigs}{visual-hands}{Visual hand renderings in \AR. }[ \item Grasping a \VO in \OST-\AR with no visual hand rendering~\cite{hilliges2012holodesk}. \item Simulated mutual-occlusion between the hand grasping and the \VO in \VST-\AR~\cite{suzuki2014grasping}. \item Grasping a real object with a semi-transparent hand in \VST-\AR~\cite{buchmann2005interaction}. diff --git a/1-introduction/related-work/4-visuo-haptic-ar.tex b/1-introduction/related-work/4-visuo-haptic-ar.tex index 67f1fbb..552b0df 100644 --- a/1-introduction/related-work/4-visuo-haptic-ar.tex +++ b/1-introduction/related-work/4-visuo-haptic-ar.tex @@ -30,8 +30,10 @@ Thus, the overall perception can be modified by changing one of the modalities, Similarly but in VR, \textcite{degraen2019enhancing} combined visual textures with different passive haptic hair-like structure that were touched with the finger to induce a larger set of visuo-haptic materials perception. \textcite{gunther2022smooth} studied in a complementary way how the visual rendering of a \VO touching the arm with a tangible object influenced the perception of roughness. -Likewise, visual textures were combined in VR with various tangible objects to induce a larger set of visuo-haptic material perceptions, in both active touch~\cite{degraen2019enhancing} and passive touch~\cite{gunther2022smooth} contexts. A common finding of these studies is that haptic sensations seem to dominate the perception of roughness, suggesting that a smaller set of haptic textures can support a larger set of visual textures. +When performing a precision grasp (\secref{grasp_types}) in \VR, some discrepancy in spatial properties (\secref{spatial_properties}) between a tangible and a \VO is not noticeable to users: it took a relative difference of \percent{6} in object width, \percent{44} in surface orientation, and \percent{67} in surface curvature to be perceived~\cite{detinguy2019how}. +%When performing a precision grasp (\secref{grasp_types}) in \VR, only a certain relative difference between the tangible and the \VO is noticeable: \percent{6} for the object width, \percent{44} for the surface orientation, and \percent{67} for the surface curvature~\cite{detinguy2019how}.
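+To give a sense of scale (an illustrative calculation, assuming a tangible about 4 cm wide), the \percent{6} width threshold means that only a width difference larger than roughly 2.4 mm between the tangible and the \VO would be noticed.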
+ \subsubsection{Pseudo-Haptic Feedback} \label{pseudo_haptic} diff --git a/1-introduction/related-work/figures/hartl2013mobile.jpg b/1-introduction/related-work/figures/hartl2013mobile.jpg index ced0879..69282a9 100644 Binary files a/1-introduction/related-work/figures/hartl2013mobile.jpg and b/1-introduction/related-work/figures/hartl2013mobile.jpg differ diff --git a/1-introduction/related-work/figures/issartel2016tangible.jpg b/1-introduction/related-work/figures/issartel2016tangible.jpg new file mode 100644 index 0000000..36799d7 Binary files /dev/null and b/1-introduction/related-work/figures/issartel2016tangible.jpg differ diff --git a/1-introduction/related-work/figures/jain2023ubitouch.jpg b/1-introduction/related-work/figures/jain2023ubitouch.jpg new file mode 100644 index 0000000..a9ee0dd Binary files /dev/null and b/1-introduction/related-work/figures/jain2023ubitouch.jpg differ diff --git a/1-introduction/related-work/figures/kahl2021investigation.jpg b/1-introduction/related-work/figures/kahl2021investigation.jpg new file mode 100644 index 0000000..03c2f0b Binary files /dev/null and b/1-introduction/related-work/figures/kahl2021investigation.jpg differ diff --git a/1-introduction/related-work/figures/kahl2023using.jpg b/1-introduction/related-work/figures/kahl2023using.jpg new file mode 100644 index 0000000..e341158 Binary files /dev/null and b/1-introduction/related-work/figures/kahl2023using.jpg differ diff --git a/1-introduction/related-work/figures/roo2017inner.jpg b/1-introduction/related-work/figures/roo2017inner.jpg index 3c949fd..94eef9d 100644 Binary files a/1-introduction/related-work/figures/roo2017inner.jpg and b/1-introduction/related-work/figures/roo2017inner.jpg differ diff --git a/1-introduction/related-work/figures/watanabe2016transvisible.jpg b/1-introduction/related-work/figures/watanabe2016transvisible.jpg index 23daa06..0bb4f43 100644 Binary files a/1-introduction/related-work/figures/watanabe2016transvisible.jpg and b/1-introduction/related-work/figures/watanabe2016transvisible.jpg differ