Interface Techniques for Tangible Augmented Reality in a Mobile Device Setup for Magic Lens Experiences

Dagny C. Döring, Robin Horst, Linda Rau, Ralf Dörner

RheinMain University of Applied Sciences, Wiesbaden, Germany
E-Mails: Dagny.C.Doering@student.hs-rm.de; Robin.Horst/Linda.Rau/Ralf.Dörner@hs-rm.de

Abstract: Tangible Augmented Reality (TAR) is a subclass of Augmented Reality (AR)
that includes tangible objects within AR interfaces. For example, utilizing object tracking
and methodologies from AR, a real object such as a steering wheel can be used to steer a
virtual car and thus becomes part of the user interface of an AR game. In this paper, we
introduce four TAR-based user interface techniques for a setup that is based on handheld
mobile device technology to create a magic lens AR experience. We relate these interfaces
to basic interaction scenarios and state lessons learned during their implementation. Two interfaces are evaluated within a comparative user study. The study indicates that touch interactions on the mobile device performed better in terms of usability and that interactions virtually projected onto a planar surface around the tangible object involved challenges regarding hand coordination.

Keywords: Tangible Augmented Reality, Interface Design, Magic Lens Experiences

Figure 1: Illustration of four different tangible augmented reality interfaces, using the example of augmenting a tangible skin model through a mobile tablet device. The (1) screen interface provides touch interactions on the screen, whereas the (2) surface interface projects a virtual interface onto a planar surface, such as a table, around the augmented tangible. The (3) tangible interface technique shifts the interaction space directly to the tangible object, for example by projecting a virtual interface onto the tangible itself. The (4) additional tangible interface extends our setup with a generic tangible object that serves as a user interface for a TAR application.

Published by the Gesellschaft für Informatik e.V. 2020 in B. Weyers, C. Lürig, D. Zielasko (Eds.):
GI VR / AR Workshop 2020, 24-25 September 2020, Trier.
Copyright © 2020 with the authors.
http://dx.doi.org/10.18420/vrar2020_1
1    Introduction

Particularly since the advent of the Internet of Things, IT in private living environments is no longer confined to dedicated computer devices. This provides an opportunity to employ tangible objects for designing user interfaces that non-experts in IT find, quite literally, easier to grasp. One subset of tangible user interfaces can be built with Tangible Augmented Reality (TAR).
    TAR can mediate information through a virtual extension of a real-world tangible object.
For example, a hand-held mobile device can be used as a magic lens [BSP+ 93] to annotate
a tangible object. Users of this TAR setup can interact with both the tangible object and
virtual application features. In this work, we differentiate between two classes of TAR interactions. Tangible interactions include interactions with the tangible itself, such as grabbing and rotating. Virtual interactions relate directly to the virtual features of the TAR application that augments the real-world object. While virtual interactions do not influence the state of the tangible, tangible interactions can influence the state of the application. For example, rotating a tangible can intuitively reveal virtual information on its backside.
    In this paper, we make the following contributions:

    • We explore input and output possibilities and affordances that the mobile-device-based setup provides and introduce four TAR-based interface techniques: (1) a touch screen interface, (2) an interface virtually projected through a handheld screen onto a planar surface, such as a table, that the tangible object stands on, (3) an interface that uses only the tangible object itself to provide interactions, and (4) an additional tangible object that serves as a generic interaction device (Fig. 1). We illustrate the interface designs in three basic use cases.

    • We state lessons learned during the implementation process of three interfaces, give practical insights into how to implement them, and demonstrate their feasibility.

    • We conducted a comparative user study that evaluates the (1) screen interface and
      the (2) surface interface to draw conclusions on our techniques and their suitability
      for pattern-based applications.

   This paper is organized as follows: The next section discusses related work. In section 3, we present our TAR-based interface concepts and then describe their implementations in the fourth section. The user study is presented in section 5. The last section provides a conclusion and points out directions for future work.

2    Related Work

Pioneering work by Ishii and Ullmer [IU97] and the Tangible Media Group [Tan] suggests principles of tangible user interfaces, which utilize real-world objects as computer input and output devices. In later work, they proposed the model-control-representation (physical and digital) interaction model for such tangible interfaces [UI00], highlighting the integration of physical representations and controls. Koleva et al. [KBNR03] describe the physical objects in tangible UIs according to their coherence with the digital objects. A high coherence exists when linked physical and digital objects can be perceived as the same thing by the user and the physical object is a common object within the application domain. General-purpose tools represent a low coherence in their work. Work by Billinghurst et al. [BKP+ 08] shapes the term TAR. They build on the existing principles of tangible user interfaces suggested by Ishii and Ullmer [IU97] and the Tangible Media Group [Tan]. They use an AR visual display, such as a mobile device, and couple it to a tangible physical interface.
    TAR interfaces involve hand-based interactions. With mobile AR displays, such as a tablet, users usually hold the device during the interactions. Datcu et al. [DLB15] compare different types of hand-based interaction in AR for navigation. Their study indicates that neither two-hand nor one-hand interaction is more effective than tangible interaction. Still, the positioning of the interface in the user's field of view was considered disturbing. In their study, they used glasses as an AR display, so that the user interface could make use of both hands. Handheld mobile devices, however, restrict the interaction possibilities to one hand. Research on TAR interfaces for a mobile device setup is not considered in their work.
    Henderson and Feiner [HF09, HF08] describe in their work examples of what they call Opportunistic Controls, which they have designed and implemented. They use optical marker tracking and extend existing tactile features on domain objects. These otherwise unused affordances in the environment are used as input and output devices for interactions within AR. The considered affordances are located in the domain-specific environment; affordances of more common objects or environments, such as a table that a tangible lies on, are not within the scope of their work. Work by Becker et al. [BKMS19] introduces a method to create personalized tangible user interfaces from plain paper. However, AR is not targeted here. Recent work by Valkov et al. [VML19] presents a semi-passive tangible object that can be used to control virtual information in a tabletop setup such as interactive workbenches or fish-tank VR. Their haptic object provides programmable friction and can therefore be seen as a more generic tangible interface that is not tied to specific use cases.
    The mentioned work shows the variety of potential tangible interfaces that can successfully be used in different setups. Exploring the affordances that a mobile device setup for magic lens experiences can offer for TAR-based interface techniques can be valuable for designing interfaces as well, especially since mobile devices and magic lens experiences such as Pokemon Go [Inc] already belong to our everyday living environment.

3   Mobile Tangible Augmented Reality Interface Techniques

This section presents four TAR-based interface concepts. We show how to design these interfaces for specific scenarios of a domain and illustrate this with three examples. Universal patterns from the knowledge communication domain are used as representatives for other basic cases of TAR applications. These patterns are the following (after [HD19]): The show and
tell pattern annotates an object to inform a user about its composition. Compare contrasts
aspects of two objects. A chronological sequence pattern illustrates a temporal relation.

Figure 2: Conceptual illustration of our four proposed TAR interfaces regarding their application
for short pattern-based AR experiences.

3.1   Screen Interface

Using a visual display for interacting with the virtual content is an established concept
for mobile AR technology such as a tablet device. The screen interface is closely related
to the typical interactions performed on these touch devices. Most interaction takes place
on the mobile screen itself. Regarding our screen interface for TAR, app interactions are
performed solely on the touch screen as well. To hold the mobile device and simultaneously interact with its touch screen or the tangible, the device can only be held with one hand. The second hand is free for either app or tangible interactions. Another possibility is to use a stand for the mobile device, so that users have both hands free for simultaneous interaction with both the screen and the tangible. We focused on the hand-held setup, as users of mobile devices are already used to holding them themselves. More detailed information on this aspect is described in work by Rau et al. [RHL+ 20].
    For the show and tell pattern, text annotations are displayed on the screen and connected
to the real-world object (Fig. 2). In addition, these annotations serve as buttons that can
be triggered to show further details or to activate animations of the sub-objects that they
annotate. A menu at the side of the mobile TAR application provides system functionality,
such as closing the app or switching to another pattern implementation. The compare pat-
tern visualizes similar parts of the tangible object and a virtual 3D model or an image by
drawing lines between them. Additionally, corresponding features are marked, and lines can
be activated or deactivated to avoid visual clutter. The comparison object is displayed on
one side of the screen, whereas the remaining screen space is used for the see-through augmentation. This comparison object can also be placed within the three-dimensional space of the tangible, although this requires tracking its environment as well. Otherwise, haptic interactions, such as rotating the tangible, will also affect the virtual object. By displaying it in screen space, both the tangible and the comparison object can be modified separately. In the presentation of a chronological sequence, the user receives information while navigating through different states of the tangible. Temporal relations to other objects can be visualized by lines as well, but for TAR, we chose to rely on different states of our single tangible, as our concepts focus on using one tangible as the main object. If necessary, a relation to another object is displayed virtually, similar to the compare pattern.
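
As a concrete illustration of the screen interface idea, the following minimal Unity (C#) sketch shows a screen-space annotation that also acts as a button and triggers an animation on the sub-object it annotates. This is not the paper's actual implementation; the class, field, and trigger names (AnnotationButton, detailPanel, "Reveal") are hypothetical.

// Minimal Unity (C#) sketch: a touchable screen-space annotation that reveals details
// and plays an animation on the annotated sub-object of the tracked tangible.
using UnityEngine;
using UnityEngine.UI;

public class AnnotationButton : MonoBehaviour
{
    [SerializeField] private Button button;              // the touchable UI annotation
    [SerializeField] private Animator subObjectAnimator; // animator on the annotated sub-object
    [SerializeField] private GameObject detailPanel;     // further details shown on demand

    private void Awake()
    {
        // Touching the annotation toggles the detail panel and plays the sub-object animation.
        button.onClick.AddListener(() =>
        {
            detailPanel.SetActive(!detailPanel.activeSelf);
            subObjectAnimator.SetTrigger("Reveal");
        });
    }
}
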

3.2   Surface Interface

The surface interface (Fig. 2) shifts the app interactions from the mobile screen to a planar
surface around the tangible, such as a table. Therefore, the mobile camera must record
both the tracked tangible and the surface. By placing the action triggers, such as buttons,
spatially between the mobile screen and the tangible, it brings the app interaction space
and the tangible interaction space closer together. Compared to the screen interface, the free hand does not have to travel as far when switching between the tangible and the action triggers. Besides, tangible interactions are not affected. As interactions are mapped to a
surface within the 3D space in front of the mobile device, users can decide when to include
action triggers in their view and when to use the screen for augmenting the tangible.
    Similar to the screen interface, the text labels for show and tell are connected to the
tangible with lines. In contrast to the previous approach, the free-floating labels are not
interactive. Therefore, they do not necessarily have to point towards the screen from every
angle, so that they can be located at fixed positions around the object. In addition to buttons
on the table, we chose to define areas distributed over the table that trigger actions when
placing the tangible within them. For example, they can be used for system functionalities,
such as switching between the patterns. Regarding the presentation of comparisons and chronological sequences, the surface interface requires tracking the table surface, so that this interface can include virtual objects within the tangible's 3D space without coupling the movement of the tangible to the virtual objects.
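
To illustrate the trigger areas described above, the following minimal Unity (C#) sketch fires an action when the tracked tangible is placed inside a rectangular region that is anchored to the tracked table surface. The names (SurfaceTriggerArea, onTangiblePlaced) are hypothetical; the prototype in section 4 instead realizes such triggers with Vuforia image targets.

// Minimal Unity (C#) sketch: an action fires when the tracked tangible enters a
// rectangular area defined in the local frame of an object lying on the table.
using UnityEngine;
using UnityEngine.Events;

public class SurfaceTriggerArea : MonoBehaviour
{
    [SerializeField] private Transform tangible;       // pose of the tracked tangible
    [SerializeField] private Vector2 halfExtents = new Vector2(0.08f, 0.08f); // area half-size in meters (x, z)
    [SerializeField] private UnityEvent onTangiblePlaced; // e.g., switch to the compare pattern

    private bool wasInside;

    private void Update()
    {
        // Express the tangible's position in the area's local frame (this object lies on the table).
        Vector3 local = transform.InverseTransformPoint(tangible.position);
        bool inside = Mathf.Abs(local.x) <= halfExtents.x && Mathf.Abs(local.z) <= halfExtents.y;

        // Fire only on the transition from outside to inside to avoid repeated triggering.
        if (inside && !wasInside)
            onTangiblePlaced.Invoke();
        wasInside = inside;
    }
}
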

3.3   Tangible Interface

The tangible interface (Fig. 2) uses the tangible itself as an interface technology for both
app interactions and tangible interactions. Therefore, it merges the app interaction space
and the tangible interaction space. Analogous to the screen and surface interfaces, the user
has one hand to interact with the tangible. One possibility to use the tangible as the overall interface is to make its movements trigger actions. For instance, shaking the object can reveal additional information, while turning it to the left or right can navigate through the states of a chronological sequence. However, this interaction technique focuses on the entire tangible, making interactions with sub-objects non-intuitive for users. We chose to use the tangible as a remote control instead. Action triggers can be displayed on the tangible itself. For show and tell, more details on an annotated part can be obtained by touching the sub-object directly.
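
For the movement-triggered variant mentioned above (which we did not pursue further), a possible realization is sketched in the following minimal Unity (C#) example: the yaw of the tracked tangible is accumulated, and turning it past a threshold angle steps through the sequence states. The class and event names are hypothetical and not part of our prototype.

// Minimal Unity (C#) sketch: turning the tracked tangible left or right past a
// threshold angle navigates backward or forward through sequence states.
using UnityEngine;
using UnityEngine.Events;

public class TangibleTurnNavigator : MonoBehaviour
{
    [SerializeField] private Transform tangible;        // pose of the tracked tangible
    [SerializeField] private float triggerAngle = 35f;  // accumulated yaw (degrees) that triggers a step
    [SerializeField] private UnityEvent onNextState;    // e.g., show the next sequence state
    [SerializeField] private UnityEvent onPreviousState;

    private float lastYaw;
    private float accumulatedYaw;

    private void Start()
    {
        lastYaw = tangible.eulerAngles.y;
    }

    private void Update()
    {
        // Accumulate the signed yaw change of the tangible since the last frame.
        float yaw = tangible.eulerAngles.y;
        accumulatedYaw += Mathf.DeltaAngle(lastYaw, yaw);
        lastYaw = yaw;

        // A turn beyond the threshold steps through the sequence; the gesture then resets.
        if (Mathf.Abs(accumulatedYaw) >= triggerAngle)
        {
            if (accumulatedYaw > 0f) onNextState.Invoke();
            else onPreviousState.Invoke();
            accumulatedYaw = 0f;
        }
    }
}
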
    System interactions, such as closing the app or switching between patterns, are provided
by virtual buttons on one dedicated side of the object. As many objects have a flat bottom
side to stand on, we found it convenient to place these virtual buttons at the flat bottom
side of the tangible. These buttons are also used for the chronological sequence to switch
between states. In the case of a comparison, a virtual object can be shown beside the tangible or in screen space as for the screen interface. Single comparisons can be triggered by touching the specific parts of the tangible that are compared to the virtual parts.

3.4   Additional Tangible Interface

The additional tangible interface makes use of a generic tangible object that is used in addition to the tangible about which information is provided. This interface separates the app interaction space and the tangible interaction space again but places them close to each other, since both can be used in front of the mobile screen. The shape of such an additional object can vary. In this work, we refer to a cuboid shape because of its even sides, which can be utilized separately to project virtual UI elements onto them.
    Using an additional cube gives several possibilities for interactions. On a basic level, its
position and orientation can be used to decide which information is displayed. For instance,
each side can correspond to a specific feature. Its separately tracked orientation and position
can also be used for the compare pattern to control the virtual object, which can be moved and rotated by attaching it to the cube in object space. In contrast to the solely virtual extensions mentioned in the descriptions of the other interfaces, the additional tangible can also carry haptic buttons or sensors. For example, rotation values can be used when a gyroscope or rotary encoder is attached, even if the additional tangible is located outside the mobile device's camera frustum. In summary, for each use case and each teaching pattern, many variations of additional objects can be used.
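
One basic interaction mentioned above is that the cube's orientation selects which information is displayed. The following minimal Unity (C#) sketch picks the cube face that currently points toward the camera and enables the content assigned to it. The names (CubeFaceSelector, contentPerFace) are hypothetical; the cube's transform is assumed to be driven by the tracking solution, for example via an image target on each side.

// Minimal Unity (C#) sketch: the cube face oriented toward the AR camera selects
// which of six content objects is shown.
using UnityEngine;

public class CubeFaceSelector : MonoBehaviour
{
    [SerializeField] private Transform cube;   // pose of the tracked cube
    [SerializeField] private Camera arCamera;  // the device camera used as the magic lens
    [SerializeField] private GameObject[] contentPerFace = new GameObject[6]; // one content object per face

    // Face normals in the cube's local space: +x, -x, +y, -y, +z, -z.
    private static readonly Vector3[] FaceNormals =
        { Vector3.right, Vector3.left, Vector3.up, Vector3.down, Vector3.forward, Vector3.back };

    private void Update()
    {
        Vector3 toCamera = (arCamera.transform.position - cube.position).normalized;

        // Pick the face whose world-space normal points most directly at the camera.
        int best = 0;
        float bestDot = float.MinValue;
        for (int i = 0; i < FaceNormals.Length; i++)
        {
            float dot = Vector3.Dot(cube.TransformDirection(FaceNormals[i]), toCamera);
            if (dot > bestDot) { bestDot = dot; best = i; }
        }

        // Enable only the content assigned to the selected face.
        for (int i = 0; i < contentPerFace.Length; i++)
            contentPerFace[i].SetActive(i == best);
    }
}
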

4     Implementation

We focused on implementing three of the proposed interfaces: (1) the screen interface, (2) the
surface interface and (3) the tangible interface. By validating these concepts, we establish
the foundation for the additional tangible interface, which can then be adapted for any use
case. We implemented a prototype by means of a use case that informs users about the
structure and functionality of the human skin. The tangible object was a simplified model
of the human skin (Fig. 3 left). We used a Samsung Galaxy Tab S5e tablet as the mobile
device. The prototype was implemented as a Unity [Tec] app and the Vuforia integration
[PTC] provided the tracking functionality. To track the tangible skin model, a 3D object target was created using the Vuforia Object Scanner.
    The interface consists of two types of elements: Interactive action triggers on the touch
surface, e.g., buttons or panels, and game objects that are placed in the 3D world. The
tangible is represented as a Vuforia game object. For text labels, Unity UI elements were used. The built-in Unity line renderer component only offers drawing lines between game objects and not between specific positions on the tangible object or on UI elements. To draw lines from screen space (UI) to the object space of the tangible, empty Unity game objects were attached to sub-components of the tangible and to the UI elements, respectively (Fig. 3 left). A similar technique can be used for connecting two objects for a comparison. However, when lines are drawn between the tangible in 3D space and a UI element in screen space, the lines always attach to the back of the image, so users cannot see where they are connected. We therefore chose to draw a line from the point of interest to the image border and another line from this border to the point of interest on the tangible object. We additionally marked active comparisons with a red circle indicator.
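
A minimal Unity (C#) sketch of this line-drawing approach is given below, assuming a canvas in Screen Space - Camera mode so that the UI anchor has a meaningful world position. The class and anchor names (AnnotationLine, uiAnchor, tangibleAnchor) are hypothetical.

// Minimal Unity (C#) sketch: a LineRenderer connects an empty anchor parented to a UI
// label with an empty anchor parented to a sub-component of the tracked tangible.
using UnityEngine;

[RequireComponent(typeof(LineRenderer))]
public class AnnotationLine : MonoBehaviour
{
    [SerializeField] private Transform uiAnchor;       // empty child of the UI label
    [SerializeField] private Transform tangibleAnchor; // empty child of the tangible sub-component

    private LineRenderer line;

    private void Awake()
    {
        line = GetComponent<LineRenderer>();
        line.positionCount = 2;
        line.useWorldSpace = true;
    }

    private void LateUpdate()
    {
        // Update the endpoints after tracking and UI layout have moved the anchors this frame.
        line.SetPosition(0, uiAnchor.position);
        line.SetPosition(1, tangibleAnchor.position);
    }
}
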

Figure 3: Screenshot of our prototype implementation that was used within the user study. Left:
screen interface with a show and tell pattern active. Right: surface interface with a compare pattern
enabled and the comparison lines hidden.

    For placing action triggers on a planar surface, Vuforia does not currently support projecting buttons directly onto planar table surfaces. However, it supports tracking predefined planar image targets that lie on the table, which we used for placing the virtual buttons (Fig. 3 right). They are triggered by covering the button's image features, for example with a finger. We also used different image targets to implement system interactions, such as switching between patterns, and designed a suitable image target for each pattern. For example, the compare image target (Fig. 3 right) indicates a place that the tangible can be placed on and leaves space for a virtual 3D model. To place buttons on the image target and use Vuforia's button triggers, we had to use Vuforia's virtual buttons instead of Unity UI buttons. A challenge in implementing the interface for our patterns was the selection of single show and tell annotations (e.g., to trigger animations). With an increasing number of annotations, the number of buttons increased as well. The arrangement of one button for each annotation was feasible in our use case but may not be suitable in future implementations of the interface, as the space needed for the buttons could exceed the space of the image target.
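
As a sketch of how such a virtual button on an image target could trigger an app interaction, the following Unity (C#) example follows the pattern of the classic Vuforia samples (IVirtualButtonEventHandler registered via RegisterEventHandler). It is an assumption rather than our exact implementation; depending on the Vuforia Engine version, the registration API differs (newer versions expose RegisterOnButtonPressed/RegisterOnButtonReleased instead). The class and event names (PatternSwitchButton, onPressed) are hypothetical.

// Minimal Unity (C#) sketch: a Vuforia virtual button on an image target triggers a
// pattern switch when its image features are covered, e.g., by a finger.
using UnityEngine;
using UnityEngine.Events;
using Vuforia;

public class PatternSwitchButton : MonoBehaviour, IVirtualButtonEventHandler
{
    [SerializeField] private UnityEvent onPressed; // e.g., activate the compare pattern

    private void Start()
    {
        // Register for press/release events of all virtual buttons below this image target.
        foreach (var vb in GetComponentsInChildren<VirtualButtonBehaviour>())
            vb.RegisterEventHandler(this);
    }

    // Called by Vuforia when the button's image features are covered.
    public void OnButtonPressed(VirtualButtonBehaviour vb) => onPressed.Invoke();

    public void OnButtonReleased(VirtualButtonBehaviour vb) { }
}
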
    For the tangible interface (and similarly for the additional tangible interface), Vuforia does not support adding virtual buttons directly to a Vuforia 3D object target, such as the tangible. It only supports attaching virtual buttons to image targets. Virtual buttons on Vuforia multi targets are not supported either, even though these correspond to a spatial arrangement of image targets. Therefore, we took photos of each side of the tangible skin model, included them as image targets, and attached them to the corresponding sides of the object tracking target of the skin. Thus, we used Vuforia to track both the object target, for adding augmented information based on the rotation and movement of the skin as before, and multiple image targets. On the tracked image targets, we placed the virtual buttons. For our use case, other actions, such as navigating through the sequence states or switching between patterns, were placed on the planar sides of the skin model, particularly at its bottom. However, these sides were also less feature-rich, making it more difficult for Vuforia to detect when buttons were triggered. Hence, we created small additional markers, attached them to the bottom of the skin model, and used them as image targets as well. This supported the tracking on these sides, but the combination of the object tracking, the image target tracking of the photos, and the marker tracking was still not robust enough to be used within a user study, specifically under varying lighting conditions.

5   Evaluation

To underline the feasibility of our interface techniques, we conducted a user study that
evaluates the surface and the screen interface. The study involved 12 unpaid, voluntary
participants (6 females, Ø 26.09 years old, SD 2.25 years). Our participants stated their
experience with AR on a 7-point scale (0 = no experience, 6 = regular usage). Most of them were new to AR technology (Ø 1.54, SD 1.91).
    At the beginning of the study, each participant was given a short introduction explaining
the setting and the prototype. During the study, each participant performed the following
five tasks with both the screen and the surface interface, in random order: (T1) Find the
skin model with the tablet (warm-up). (T2) Discover different phases of the object using
the sequence (chronological sequence). (T3) Display labels for the individual skin elements
(show and tell ). (T4) Display an animation of an object part (show and tell ). (T5) Compare
the model to a virtual version of it and display the corresponding features (compare).
    After the study, our participants were asked to fill out a questionnaire that consisted
of five questions and items of the abbreviated AttrakDiff questionnaire [HBK03], to draw
conclusions about the hedonic and pragmatic qualities of each interface. The questions Q1-Q5, each relating to a specific aspect of our interfaces, were: (Q1) How do you rate the simultaneous use of the mobile device, the tangible, and the image targets (coordination)? (Q2) Does
the handling of the interface distract from the accomplishment of the task (distraction)?
(Q3) Does the interface leave enough free space to explore the tangible (freedom)? (Q4)
What challenges have you encountered while using it (challenges)? (Q5) What other use
cases could you imagine for this interface (fields of application)? The questions Q1-Q3
were answered using a 7-point (0-6) semantic differential scale, where 0 indicates a negative
notion and 6 a positive one. Q4 and Q5 were answered on free text forms. At the end of the
questionnaire, we asked the participants how they liked each interface for the three distinct
use cases.
    The analysis of the study shows that our participants assessed the screen interface more positively than the surface interface regarding the aspects A1-A3 addressed by Q1-Q3 (Fig. 4 top). Our participants specifically found the coordination challenging while interacting with the surface interface. They occasionally triggered buttons by mistake when they performed tangible interactions and also when they tried to reach a button behind another button. We conducted a non-parametric, dependent Wilcoxon signed-rank test [WW64] for each aspect A1-A3 to compare the interfaces. With a threshold for statistical significance of 5%, the tests indicate significant differences for two of the three aspects, both in favor of the screen interface. These significant differences occurred for A1 (coordination) with p = 0.00424 and A2 (distraction) with p = 0.01078. For A3 (freedom), the test did not confirm a significant difference (p = 0.08364). Free-text comments for A4 (challenges) indicate further coordination-related difficulties, specifically for the surface interface, but also some for the screen interface. We could also observe that some participants tried to interact with the mobile device with the hand that held the device, so that they did not have to switch their free hand between app and tangible interactions with the screen interface. We observed that this worked well in most cases. It could not be determined that one interface was better suited for a particular pattern. Regarding A5, the participants stated that they would use our applications mainly in museums or other teaching-related domains (fields of application).
    The portfolio presentation of the AttrakDiff questionnaire [HBK03] (Fig. 4 bottom right)
compares our interfaces regarding hedonic and pragmatic qualities. It shows that the screen
interface was placed within the ’desired’ region with a slight shift towards ’task-oriented’.
The surface interface was placed within the ’self-oriented’ area, indicating higher hedonic
than pragmatic qualities. The description of word pairs (Fig. 4 bottom left) shows the mean scores of the single items that the AttrakDiff consists of. The scores were similar for three items (’unpredictable-predictable’, ’tacky-stylish’ and ’dull-captivating’), and only one item had a higher score for the surface interface (’unimaginative-creative’).
    Regarding the use cases, our participants rated the patterns for the screen interface on a 3-point scale (-1, 0, 1) with Ø 0.06 for show and tell, Ø 0.58 for compare, and Ø 1 for chronological sequence. For the surface interface, they rated show and tell with Ø -0.25 and both compare and chronological sequence with Ø 0. Our participants thus indicated that they liked the pattern implementations with the screen interface, specifically the chronological sequence with a unanimously positive rating. The surface interface was rated mostly neutral, with show and tell even being rated negatively on average.

Figure 4: Top: Box plots comparing the scores of our screen interface (green) with our surface interface (blue) relating to the three aspects A1-A3. Bottom left: Analysis of the description of word pairs [HBK03]. Bottom right: Portfolio presentation [HBK03].

6   Conclusions and Future Work

In this paper, we explored input and output possibilities and affordances that a mobile
device TAR setup offers for magic lens experiences. We introduced four TAR-based interface
techniques and illustrated how to design them for specific scenarios using the example of three basic TAR use cases.
    Regarding their implementation, we have shown how to realize the interfaces and stated
lessons learned. The screen and the surface interface could successfully be implemented.
We could show alternative workarounds for realizing a tangible interface and concluded that
it is feasible to implement buttons directly attached to a tangible using Vuforia, but that
the tracking quality did not suffice for further testing. Our user study indicates that our
screen interface and our surface interface were both perceived positively by our participants. However, they had difficulties regarding hand coordination, specifically with the surface interface. Overall, our participants considered the screen interface more suitable than the surface interface.
    Future work will focus on two directions. First, we will continue work on our interface techniques and specifically elaborate on concepts for a generic tangible for TAR. We will also investigate other tracking toolkits that can be used within Unity to stabilize the tracking for our tangible interface. Second, it can also be valuable to investigate combinations of our interfaces. The proposed concepts do not exclude each other and provide different aspects regarding their app and tangible interactions. Depending on the tangible object, the identified interaction spaces can differ and can be more or less suitable for some techniques. For example, a tangible without a specific flat side is not appropriate for projecting the system interactions onto one of its sides; however, it can still be suitable to provide these actions on a surface next to the tangible.
    Acknowledgments: This project (HA project no. 690/19-10) is financed with funds
of LOEWE – Landes-Offensive zur Entwicklung Wissenschaftlich-ökonomischer Exzellenz,
Förderlinie 3: KMU-Verbundvorhaben (State Offensive for the Development of Scientific
and Economic Excellence).

References

[BKMS19] Vincent Becker, Sandro Kalbermatter, Simon Mayer, and Gábor Sörös. Tai-
         lored controls: Creating personalized tangible user interfaces from paper. In
         Proceedings of the 2019 ACM International Conference on Interactive Surfaces
         and Spaces, pages 289–301, 2019.

[BKP+ 08] Mark Billinghurst, Hirokazu Kato, Ivan Poupyrev, et al. Tangible augmented
          reality. ACM SIGGRAPH ASIA, 7(2), 2008.

[BSP+ 93] Eric A Bier, Maureen C Stone, Ken Pier, William Buxton, and Tony D DeRose.
          Toolglass and magic lenses: the see-through interface. In Proceedings of the 20th
          conference on Computer graphics and interactive techniques, pages 73–80, 1993.

[DLB15]    Dragoş Datcu, Stephan Lukosch, and Frances Brazier. On the usability and
           effectiveness of different interaction types in augmented reality. International
           Journal of Human-Computer Interaction, 31(3):193–209, 2015.

[HBK03]    Marc Hassenzahl, Michael Burmester, and Franz Koller. Attrakdiff: Ein fragebo-
           gen zur messung wahrgenommener hedonischer und pragmatischer qualität. In
           Mensch & computer 2003, pages 187–196. Springer, 2003.

[HD19]     Robin Horst and Ralf Dörner. Integration of bite-sized virtual reality applications
           into pattern-based knowledge demonstration. In Proceedings of the 16th Work-
           shop Virtual and Augmented Reality of the GI Group VR/AR, pages 137–148.
           Gesellschaft für Informatik, Shaker Verlag, 2019.
[HF08]     Steven J Henderson and Steven Feiner. Opportunistic controls: leveraging nat-
           ural affordances as tangible user interfaces for augmented reality. In Proceedings
           of the 2008 ACM symposium on Virtual reality software and technology, pages
           211–218, 2008.

[HF09]     Steven Henderson and Steven Feiner. Opportunistic tangible user interfaces for
           augmented reality. IEEE Transactions on Visualization and Computer Graphics,
           16(1):4–16, 2009.

[Inc]      Niantic Inc. Pokemon go. https://www.pokemongo.com/. Accessed: 17th July
           2020.

[IU97]     Hiroshi Ishii and Brygg Ullmer. Tangible bits: towards seamless interfaces be-
           tween people, bits and atoms. In Proceedings of the ACM SIGCHI Conference
           on Human factors in computing systems, pages 234–241, 1997.

[KBNR03] Boriana Koleva, Steve Benford, Kher Hui Ng, and Tom Rodden. A framework
         for tangible user interfaces. In Physical Interaction (PI03) Workshop on Real
         World User Interfaces, pages 46–50, 2003.

[PTC]      PTC. Vuforia. https://developer.vuforia.com/. Accessed: 17th July 2020.

[RHL+ 20] Linda Rau, Robin Horst, Yu Liu, Ralf Dörner, and Ulrike Spierling. A tangible
          object for general purposes in mobile augmented reality applications. In INFOR-
          MATIK 2020, page to appear (8 pages). Gesellschaft für Informatik, 2020.

[Tan]      Tangible Media Group (MIT). Tangible media site. http://www.media.mit.
           edu/tangible/. Accessed: 17th July 2020.

[Tec]      Unity Technologies. Unity game engine description. https://unity.com/. Ac-
           cessed: 17th July 2020.

[UI00]     Brygg Ullmer and Hiroshi Ishii. Emerging frameworks for tangible user interfaces.
           IBM systems journal, 39(3.4):915–931, 2000.

[VML19]    Dimitar Valkov, Andreas Mantler, and Lars Linsen. Haptic prop: A tangible
           prop for semi-passive haptic interaction. In 2019 IEEE Conference on Virtual
           Reality and 3D User Interfaces (VR), pages 1744–1748. IEEE, 2019.

[WW64]     Frank Wilcoxon and Roberta A Wilcox. Some rapid approximate statistical pro-
           cedures. Lederle Laboratories, 1964.