Demystifying Deception Technology: A Survey


Daniel Fraunholz¹, Simon Duque Anton¹, Christoph Lipps¹, Daniel Reti¹, Daniel Krohmer¹, Frederic Pohl¹, Matthias Tammen¹, and Hans Dieter Schotten¹,²

¹ Intelligent Networks Research Group
German Research Center for Artificial Intelligence
Kaiserslautern, Germany
{firstname}.{lastname}@dfki.de

² Institute for Wireless Communication and Navigation
University of Kaiserslautern
Kaiserslautern, Germany
schotten@eit.uni-kl.de

arXiv:1804.06196v1 [cs.CR] 17 Apr 2018

Abstract. Deception boosts security for systems and components by denial, deceit, misinformation, camouflage and obfuscation. In this work, an extensive overview of the deception technology environment is presented. Taxonomies, theoretical backgrounds, psychological aspects as well as concepts, implementations, legal aspects and ethics are discussed and compared.

Keywords: Information Security, Network Security, Deception, Deception Technology, Cyber-Deception, Honeytokens, Honeypots

1 Introduction

Sun Tzu wrote some 2,500 years ago that “all warfare is based on deception” [1], long before the first digital devices. Ever since, deception has been an essential aspect of many fields, most notably military operations. In information security (IS), social engineering, as extensively described by Mitnick [2], was the first application of deception. In 1986 and 1991, Stoll [3] and Cheswick [4], respectively, transferred the concept of deception to defensive applications. These applications were called honeypots (HPs). Later on, the concept was generalized to deception technology (DT), a superset of HPs and all other technologies relying on simulation and dissimulation. The Deception Toolkit published by Cohen was the first publicly available deception software [5]. Over the last three decades, the concepts of deception have experienced rising popularity in information security. Perimeter-based security measures, such as firewalls and authentication mechanisms, do not provide a proper level of security in the context of insider threats and social engineering. Defense-in-depth strategies, such as signature-based intrusion detection and prevention, often suffer from a large number of false-positive detections, resulting in alarm fatigue
of security information and event management systems. DTs come with several major advantages, such as low false-positive rates and 0-day detection capabilities, making them a promising solution for advanced security tasks such as intrusion detection [6] and attribution [7, 8]. This work is structured as follows: In Section 2, an overview of recent definitions and taxonomies related to DT is given and a taxonomy for this work is specified. DT is also put into the context of the current IT-security environment. Furthermore, cognitive vulnerabilities are introduced and formal deception models are reviewed. Section 3 gives a comparative overview of important and recent advances in deceptive software, honeytokens (HTs) and HPs, as well as field studies based on such technologies. Legal considerations, ethics and baseline security are discussed in Section 4. In Section 5, this work is concluded.

2     Background and Theory

In this section, research on the theoretical background of deception is reviewed.
First, definitions and taxonomies are presented and determined for this work.
DT is then integrated in the information security environment and research on
psychological aspects of deception is discussed. The section closes with a review of formal approaches to modeling deception as a game.

2.1   Definitions and Taxonomies

Whaley [9] defines deception as a misperception that is intentionally induced by
other entities. In his typology of perception, deception has three requirements:
1) not a pluperception, 2) not self-induced and 3) not induced unintentionally.
Around ten years after this publication Bell and Whaley [10] published a taxon-
omy of deception. This taxonomy classifies deception into two major categories:
dissimulation and simulation. Dissimulation consists of three classes: masking, repackaging and dazzling, whereas mimicking, inventing and decoying are within the
simulation category. This taxonomy is the most frequently employed taxonomy
for deception in the information security domain. In this work, the taxonomy
from Bell and Whaley is used. Dunnigan and Nofi [11] categorized deception
based on the six principles of deception introduced by Fowler and Nesbit [12]: De-
ception should reinforce enemy expectations, have realistic timing and duration,
be integrated with operation, be coordinated with concealment of true intentions,
be tailored to the needs of the setting, and be imaginative and creative. Their taxonomy differentiates nine classes of deception: Concealment, camouflage, false and
planted information, lies, displays, ruses, demonstrations, feints and insight. Rowe
and Rothstein [13] proposed a taxonomy based on the works of Fillmore [14],
Copeck [15] and Austin [16] on linguistic cases. Their taxonomy consists of 32
cases in 7 groups. Stech et al. [17] found that deception is defined in literature
in a number of ways. They published a scientometric analysis of the concept of
deception in the cyber-space domain. Monroe [18] compared several competing
definitions of deception. He introduced a taxonomy that closely matches U.S.
Army doctrinal concepts. The taxonomy is structured in three layers, where
the two main categories (active deception and cover) comprise 13 atomic pecu-
liarities of deception. In 2012 the U.S. Army [19] published a document that
grouped MILDEC as part of information operation and related to electronical
warfare (EW), psychological operation (PSYOPS), computer network operation
(CNO) and operation security (OPSEC). Shim and Arkin [20] compared several
taxonomies for deception in respect to the domain. In the information security
domain, the previously introduced taxonomy from Rowe based on linguistic
cases was chosen. Other domains are: Philosophy, psychology, economics, military
(taxonomy from Bell and Whaley) and biology. Almeshekah and Spafford [21]
published a work on the planning and integration of deception. They proposed a taxonomy with a focus on the deception targets. Later on, Almeshekah [22] extended
the previous work by grouping deception and other security mechanisms into
four categories. He then identified intersections of the categories and mapped the
mechanisms to the famous kill chain proposed by Hutchins et al. [23]. Pawlick et
al. [24] created a taxonomy by grouping existing game-theoretical approaches to
deception as discussed later in this section. Their taxonomy divides deception
into: Perturbation, moving target defense (MTD), obfuscation, mixing, honey-x
and attacker engagement.
More specific taxonomies, for example for HPs, are given by Seifert et al. [25] or Pouget et al. [26]. The term deception technology has recently gained popularity and is currently used for deception-based security that goes beyond the 30-year-old HP technology. It is frequently associated with holistic frameworks that comprise adaptive HT and decoy generation, automated deployment and monitoring, as well as integrated visualization and reporting capabilities.

2.2   Deception in the IT-Security Environment
The term IT security summarizes a broad range of technical and organizational
measures aiming for the protection of a predefined set of assets. These assets
can be classified according to the so-called CIAA security goals. CIAA stands
for Confidentiality, Integrity, Authentication and Availability and describes the
protection goals of common IT-systems. DT cannot be mapped onto those
security targets, but is commonly used for more abstract objectives, e.g. intrusion
detection or analyzing attacker behavior. Another fundamental difference between
classic IT security and DT is the use of Security by Obscurity. This term describes
the intentional use of methods more complex than necessary or not published in
order to prevent an attacker from gaining knowledge about the system. While it
is not advised, and has repeatedly backfired in classic IT security, it is a valid
technique to slow down attackers or distract them from possible targets. DT
does not necessarily depend on obscurity, but is significantly more effective if its presence is unknown. The relation of classic IT security and DT is shown in Figure 1.
    In this figure, three different layers of resources to protect, as well as three
different protection mechanisms are shown. The resources can be grouped into
network-, system and data-resources. Protection can be done by preventing an
Fig. 1. Overview of relevant security techniques and primitives on different layers, grey:
deception-based technologies

attack, by detecting it, and by responding to it in a way that mitigates its consequences, so-called correction. There are many classic IT security building blocks that
deal mostly with prevention, such as Virtual Private Networks (VPN), Firewalls,
Data Execution Prevention (DEP), Identity and Access Management (IAM) and
encryption. A few of the well-established mechanisms, such as (Kernel) Address
Space Layout Randomization ((K)ASLR) or steganography, can be considered as
DTs according to the taxonomies as presented in the previous subsection, as they
harden systems and protect resources by making it harder for an attacker to find an
attack surface. A few more novel DTs address prevention, such as MTD, Random
Host Mutation (RHM), Dynamic Instruction Set (DIS) and Unified Architecture
(UA). More well-known DTs, such as HPs and HTs, address detection. This area
is furthermore only addressed by a few classic IT security-mechanisms, such as
Network and Host Intrusion Detection (and Prevention) Systems (N/HID(P)S),
Security Information and Event Management Systems (SIEMs) and antivirus-
software. More novel technologies, such as anomaly detection algorithms, address
this protection goal as well, with similar methodologies as DTs. According to
Corey, HPs, in their very essence, act as an anomaly-based intrusion detection system [27]. Even though there is an overlap, this statement is only true for the detection mechanism, as all incoming traffic can be considered an anomaly and can automatically be classified as an attack. In general, the characteristic of an
HP, as well as DTs in general, is to send misleading and wrong data, whereas
anomaly detection is only receiving and analyzing information.
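Corey's observation can be made concrete with a minimal sketch (the port number, alert format and helper name are illustrative choices, not from the surveyed works): because a decoy service has no legitimate users, every connection can be raised as an alert without any model of normal traffic.

```python
import socket
import threading
import time

def run_honeypot(port, alerts, stop_event):
    """Minimal TCP honeypot: the decoy service has no legitimate
    users, so every incoming connection is logged as an alert."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(5)
    srv.settimeout(0.2)          # poll so the thread can be stopped
    while not stop_event.is_set():
        try:
            conn, addr = srv.accept()
        except socket.timeout:
            continue
        # No false-positive reasoning needed: any contact is suspicious.
        alerts.append({"src": addr[0], "time": time.time()})
        conn.close()
    srv.close()
```

Every TCP handshake on the decoy port produces exactly one alert; an anomaly detector, in contrast, must first learn what normal traffic looks like before it can flag deviations.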

2.3    Cognitive Vulnerabilities
Deception as defined by Whaley [9] as well as other authors, strongly relies on
human psychology. In 1994 Libicki [28] coined the term semantic attacks. He
distinguished it from physical and syntactic attacks and discussed it in the context
of IS. Later on, 18 relevant propositions for deception in the context of general
communication were identified and described by Buller and Burgoon [29]. They
also investigated relevant factors for deception. Six years later, Cybenko et al. [30] introduced the term cognitive hacking and proposed several countermeasures
such as source authentication, trajectory modeling and Ulam games. The term
cognitive hacking is mostly referred to in the context of deception in the news or
social media. Mitnick [2] published an extensive work on social engineering, which
he defined as the use of influence and persuasion to deceive people. To this day, his work is often referred to as the definitive book on deception, even though
it only focuses on social engineering. The automated training of techniques to
detect deception was investigated by Cao et al. [31]. Cranor and Simson [32]
stated that the security of any sociotechnical system is based on three elements:
product, process and panorama. In their work, they analyzed the correlation of
usability and security. The impact of design on deception was further analyzed by
Yuill [33]. Ramsbrock et al. [34] conducted an empirical study to investigate the behavior of intruders after they were granted access to a honeypot. This idea
was later on employed by Sobesto [35] as well as by Howell [36] to investigate the
impact of design characteristics such as the welcome banner or system resources
on the behavior of attackers in empirical studies. A more systematic approach to
identify a set of relevant cognitive biases was proposed by Fraunholz et al. [37].
They analyzed cognitive biases such as those introduced by Kahneman [38],
aiming to match biases and design characteristics in order to finally derive basic
design principles.

2.4   Formal Deception Models

Several researchers targeted the formal description of deception in games and
models. As introduced in a previous subsection, Pawlick et al. [24] recently
published a game-theoretical taxonomy for deception. In their work, they also
reviewed and categorized 24 publications of deception games. Two works that
were published after their review are discussed in this subsection. Wang et al. [39]
published a deception game for a specific use case. They focus on smart grid
applications and the security benefits of deception against distributed denial of
service attacks. In 2018, Fraunholz and Schotten [40] published a game that allows probes to be modeled within the game. An attacker as well as a defender are
able to choose the effort they want to invest in the obfuscation of the deception
systems or the examination of systems of unknown nature. After the strategy is
chosen, the attacker decides whether to attack, probe or ignore the system. They
provided a heuristic solution for their game.
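A toy probe game can illustrate the attack/probe/ignore decision described above. The payoff values, the detection function p = a/(a+d) and the fifty-fifty prior over real systems and honeypots are illustrative assumptions, not the actual formulation of [40]:

```python
def p_detect(a, d):
    """Probability that a probe of effort a exposes a honeypot
    obfuscated with effort d (illustrative contest function)."""
    return a / (a + d) if (a + d) > 0 else 0.5

def attacker_payoffs(a, d, gain=10.0, trap_cost=8.0, probe_cost=1.0):
    """Expected payoffs, assuming half of all systems are honeypots
    and probes never misclassify a real system."""
    p = p_detect(a, d)
    return {
        # attack blindly: win on real systems, get trapped on honeypots
        "attack": 0.5 * gain - 0.5 * trap_cost,
        # probe first: pay the probe, then attack unless a honeypot is exposed
        "probe": -probe_cost + 0.5 * gain - 0.5 * (1.0 - p) * trap_cost,
        "ignore": 0.0,
    }

def best_response(a, d):
    """The attacker's attack/probe/ignore decision for given efforts."""
    payoffs = attacker_payoffs(a, d)
    return max(payoffs, key=payoffs.get)
```

With balanced efforts probing pays off, but if the defender invests heavily in obfuscation (d much larger than a) the probe becomes uninformative and the blind attack dominates; analyzing the stable points of such dynamics is what the game-theoretic treatments aim at.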

3     Deception Technology and Implementations

In this section, three noteworthy topics are discussed. First, an overview of research
on deceptive software and HTs is given. This overview excludes HPs, as they
are reviewed in the second subsection. Finally, an overview of field studies and
technological surveys on DTs is given.

3.1   Deceptive Software and Honeytokens
The term honeytoken was coined in 2007 by de Barros [41]. The idea can
be traced back to Spitzner [42], who first defined a HT as the equivalent to a HP,
with the constraint of representing an entity different from a computer. Since the
inception of deception in information security, a vast number of concepts have
been published. They cover different types of entities and focus on the generation
of deceptive twins and their deployment. Many of these entities were given specific
names referencing the term honeypot, e.g. honeywords for fake passwords in a
database. The range of domains for HTs can be distinguished by the categories
Server, Database, Authentication and File. As a further abstraction, HTs can be
issued the labels host-based (H) or network-based (N). Both distinctions have been applied in Table 1, which gives an overview of recent deception and HT techniques. An effective deception to protect files is giving them fake metadata, such as a false author, creation and modification dates, and file size. The latter has been approached by Spafford [43], using the Unix sparse file structure to deceive copy applications. Rowe [44] presented the concept of making targets appear like HPs, taking advantage of the attacker's fear of having their techniques
exposed and resources wasted. The deceptive elements he suggests for fake HPs
are the usage of HP tools, mysterious processes, nonstandard system calls in key
security subroutines, dropping and modification of packets and manipulation
of metadata to make it seem abandoned. He also anticipates that this will lead
to a further level of deception by making HPs appear like fake HPs. Laurén et
al. [45] suggest system call manipulation by changing the system call numbers,
while monitoring the original ones, which the attacker will possibly try to use.
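The system-call renumbering suggested by Laurén et al. can be mimicked by a small user-space toy (the class, number range and alert list are illustrative; the actual proposal operates on the kernel's system-call table):

```python
import random

class ShuffledSyscallTable:
    """Toy model of system-call renumbering: legitimate code is given
    the shuffled numbers, while the well-known original numbers become
    monitored tripwires."""

    def __init__(self, handlers, seed=0, base=1000):
        rng = random.Random(seed)
        originals = sorted(handlers)
        # draw fresh numbers from an otherwise unused range
        new_numbers = rng.sample(range(base, base + 10 * len(handlers)),
                                 len(handlers))
        self.table = dict(zip(new_numbers, (handlers[o] for o in originals)))
        self.mapping = dict(zip(originals, new_numbers))  # known to legit code
        self.trapped = set(originals)                     # monitored decoys
        self.alerts = []

    def call(self, number, *args):
        if number in self.trapped:
            # a caller using the original number did not go through the
            # rebuilt interface -- likely injected code
            self.alerts.append(number)
            raise PermissionError(f"trapped syscall {number}")
        return self.table[number](*args)
```

Legitimate callers look up the shuffled number via the mapping; an attacker hardcoding the well-known numbers trips the monitor instead.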
Bercovitch [46] developed a technique of automatically generating database entries
through a machine learning algorithm that learns from existing entries. This HT
generator protects manually marked sensitive data by changing it prior to the
learning routine. Juels and Rivest [47] coined the term honeyword for the idea of
assigning multiple false passwords together with the real password to an account
to decrease the value of data breaches. For the distinction between honeywords
and the real password, a dedicated authentication system was proposed and
named honeychecker. An idea to forward suspicious authentication attempts
into a honey account was proposed by Almeshekah et al. [22]. Lazarov [48]
analyzed malicious and ordinary behavior by publishing and tracking Google
spreadsheets that contained fake bank accounts and URLs on paste sites. The
idea of patching a system's security vulnerability while making it appear unpatched was proposed by Araujo et al. [49] and termed honeypatch. When an attacker tries to exploit the patched vulnerability, the connection is redirected to a HP. They suggest that wide-scale adoption would reduce vulnerability probing and
other attacking activities. A similar approach, named ghost patches, was taken by Avery and Spafford [50]. They propose to prevent security patches from
exposing vulnerabilities by inserting fake patches into actual security patches.
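A minimal sketch of the honeyword and honeychecker idea described above (function names and the plain SHA-256 hashing are illustrative simplifications; [47] discusses proper password hashing and honeyword generation strategies):

```python
import hashlib
import secrets

def _h(word):
    return hashlib.sha256(word.encode()).hexdigest()

def enroll(real_password, decoys):
    """Store the real password among decoy honeywords in random order.
    All words must be distinct.  The hash list goes to the credential
    file; the index goes only to the separate honeychecker."""
    sweetwords = decoys + [real_password]
    secrets.SystemRandom().shuffle(sweetwords)
    return [_h(w) for w in sweetwords], sweetwords.index(real_password)

def check_login(attempt, hashes, real_index, alarm):
    h = _h(attempt)
    if h not in hashes:
        return False              # ordinary failed login
    if hashes.index(h) == real_index:
        return True               # correct password
    alarm.append(attempt)         # honeyword hit: credential DB was leaked
    return False
```

A stolen credential file now yields several plausible candidates per account; submitting a decoy triggers a silent alarm at the honeychecker.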
For the protection of web servers, Fraunholz et al. [51] proposed deceptive
response messages to mitigate reconnaissance activities. Their deceptive web
server provides fake banners with false server and operating system versions,
fake entries in the robots.txt file, dynamically injected honeylinks and false error responses on file access to mitigate brute-forcing attempts on the file system. A
deception technique called MTD does not consist of fake entities, but rather relies on the randomization and mutation of the network topology and host configuration. Okhravi et al. [52] distinguished MTD into the categories dynamic
networks, platforms, runtime environments, software and data. Commonly used
MTD techniques are ASLR, Instruction Set Randomization and Code Sequence
Randomization. Al-Shaer et al. [53] proposed a MTD technique they call RHM,
allowing host randomization using DNS without requiring changes in the network
infrastructure. Park et al. [54] combined the principles of MTD, dynamically generating network topologies and injecting decoys.

Table 1. Overview of deceptive software techniques

Technique          Deceptive Entity                    Domain          H  N  Ref
Fake honeypot      Honeypot                            Server          ✗  ✓  [44]
Honeyentries       Table, data set                     Database        ✓  ✗  [46, 55, 56]
MTD                Topo., net. interf., memory, arch.  Versatile       ✓  ✓  [52–54]
Honeyword          Password                            Authentication  ✓  ✗  [47]
Honeyaccount       User account                        Authentication  ✓  ✗  [22, 55]
Honeyfile          (Cloud-)File                        File system     ✓  ✓  [48, 55]
Honeypatch         Vulnerability                       Server          ✓  ✓  [49, 50]
-                  Memory                              Server          ✓  ✗  [57]
-                  Metadata                            File            ✓  ✗  [44]
HoneyURL           URL                                 File            ✗  ✓  [48]
Honeymail          E-mail address                      File            ✗  ✓  [55, 58]
Honeypeople        Social network profile              File            ✗  ✗  [59]
Honeyport          Network port                        Server          ✗  ✓  [55]
Decep. web server  Error codes, robots.txt             Server          ✗  ✓  [51]
OS interf.         System call                         Server          ✓  ✗  [44]

3.2    Honeypots
Nawrocki et al. [60] recently reviewed honeypot systems and data analysis. This subsection therefore focuses on topics that are neglected in their work but, in the authors' opinion, important. First, industrial honeypots are briefly described.
After that, Virtual Machine Introspection (VMI)-based systems are reviewed.
Intelligent honeypots and automated deployment are important topics as well, but
they were extensively reviewed by Mohammadzadeh et al. [61] and Zakaria [62,63]
in 2012 and 2013 and are therefore not considered in this work. More recent
work in this domain includes adaptive deployment strategies [64,65] and machine
learning-based analysis [66].

Industrial Honeypots An interesting field of application for HPs is the industrial environment. Industrial systems and critical infrastructure are essential
for modern societies [67]. In 2004, the Critical Infrastructure Assurance Group
from Cisco Systems Inc. published a HP especially designed for industrial net-
works [68]. Their system supported Modbus, FTP, telnet, HTTP and was based
on honeyd [69]. Four years later, the first industrial high-interaction honeypot
(HIHP) was introduced by Digital Bond [70]. The most prominent industrial HP is Conpot [71], published by Rist in 2013. Conpot is classified as a low-interaction
HP but supports a vast number of industrial communication protocols such as
Modbus, IPMI, SNMP, S7 and BACnet. Many industrial honeypots are based
on either honeyd or Conpot. Conpot-based systems are extended to HIHPs by
combining them with a Matlab/Simulink-based simulation of an air conditioning
system [72], the power grid simulator GridLAB-D [73, 74] and the IMUNES network simulator [75]. Wilhoit and Hilt [76] conducted an experiment with a HP imitating gas station monitoring software. They captured several attacks against the system, e.g. a DDoS attack that they attributed to the Syrian Electronic Army (SEA). It was pointed out by Winn et al. [77] that cost-effective deployment of
HPs is crucial for their establishment as security mechanisms. They experimented
with honeyd for cost-effective deployment of different real world industrial control
systems. Another technique for this was proposed in 2017 by Guarnizo et al. [78].
They forwarded incoming traffic from globally deployed so-called wormhole servers
to a limited number of IoT-devices. This technique creates a vast number of
deployed systems with only a small number of real hardware devices.
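The wormhole idea of Guarnizo et al. can be sketched as a simple TCP relay (the single request/response round trip, ports and function name are illustrative simplifications of their architecture):

```python
import socket
import threading

def run_wormhole(listen_port, backend_addr, stop_event):
    """Accept connections on a front-end 'wormhole' port and relay one
    request/response round trip to the shared back-end device.  Many
    such front ends, possibly in different networks, can point at the
    same small pool of real hardware."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(5)
    srv.settimeout(0.2)                     # poll so the loop can stop
    while not stop_event.is_set():
        try:
            client, _ = srv.accept()
        except socket.timeout:
            continue
        with socket.create_connection(backend_addr) as backend:
            data = client.recv(4096)        # request from the visitor
            backend.sendall(data)           # relayed to the real device
            client.sendall(backend.recv(4096))  # response relayed back
        client.close()
    srv.close()
```

To an external scanner, every wormhole endpoint looks like a fully functional device, although only one physical device answers behind all of them.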

          Table 2. Comparison of the different research for industrial HPs

                                                    Basis HP
                                     Conpot       Honeyd             Other
                              Low   [71, 75, 79] [68, 77, 80]      [76, 81–85]
                Interaction
                              High [73, 74, 86]       -         [70, 72, 78, 87–91]

Jicha et al. [92] conducted an in-depth analysis of Conpot. A six-month long-term study with Conpot was conducted by Cao et al. [79]. Serbanescu et al. [93], in contrast, employed a large-scale honeynet offering a variety of different industrial protocols for a 28-day experiment.

VMI-based High-interaction Honeypot The authors believe that VMI-
based HPs are the most recent and also most future-proof technology for HIHPs.
In 2017, Sentanoe et al. [94] compared different HIHP technologies. They also chose VMI as the technology for their experiments with SSH HPs. Vrable
et al. [95] introduced the Potemkin honeyfarm. The honeyfarm is based on Xen
and is able to clone a reference virtual machine for each attack. Argos was
published by Portokalidis et al. [96] in 2006. It is able to conduct a taint analysis
to fingerprint malware. Later on, the Honeynet Project [97] modified Sebek to
include VMscope [98]. VMwatcher [99] is able to clone a VM. The cloned VM
is monitored by AV, IDS or high-level forensic methods. Hay and Nance [100]
proposed another idea for high-level forensic methods by running the forensic
tools directly on the VM while it is paused. Dolan-Gavitt et al. [101] automated
the generation of introspection scripts by translating in-guest applications into
out-guest introspection code. The work of Pfoh et al. [102] focused on performance
and flexibility for QEMU/KVM -based system call tracing. Timescope [103] was
proposed in 2011. It is able to record an intrusion and replay it several times to
observe different aspects. Biedermann et al. [104] published a framework which
is able to live-clone a VM in case of an occurring attack. The attacker is then
redirected to the cloned instance and system calls as well as network activity
and the memory state are monitored. A hybrid architecture was proposed by
Lengyel et al. [105], they focus on the detection of malware with a combination
of low-interaction honeypots (LIHPs) and HIHPs. Later on, they published an
advanced version of their architecture with the cloning mechanism from the
Potemkin honeyfarm [95] and the monitoring mechanism from their previous
work. In 2014, Lengyel et al. [106], again, proposed an advanced version of their
framework with additional features for automated deplyoment and malware
analysis. The advantages of VMI -based HPs for deception systems based on
MTD was pointed out by Urias et al. [107]. Shi et al. [108] introduced a framework
with an integrated module for the analysis of system call traces.
Recently, the idea of using Linux containers as an alternative to virtual machines was investigated by Kedrowitsch [112]. They concluded that containers
are well suited for the deployment on low-powered devices but are trivial to
detect.

Anti-Honeypot Like any software, honeypots are prone to bugs. In 2004, Krawetz [113] introduced the idea of identifying honeypots by probing the functionality of the simulated services. In his work, he proposed to send e-mails from SMTP HPs to himself. If no mails are received, the advertised functionality is not available and the system's environment is suspicious. Fu et al. [114] propose
several methods to detect virtual HPs such as honeyd based on their temporal behavior. They use ping-, TCP- and UDP-based approaches to determine the
round trip time of a packet. Mukkamala et al. [115] integrated machine learning
into the temporal behavior-based detection of honeypots. Countermeasures against this type of fingerprinting were developed by Shiue and Kao. They propose honeyanole to mitigate fingerprinting based on temporal behavior
by traffic redirection. Bahram et al. [116] investigated VMI and found that it is
prone to changes in the kernel memory layout. Changes in the layout increase
the semantic gap and render VMI infeasible. The issue of missing customization was discussed by Sysman et al. [117]. They used Shodan to identify the addresses of over 1000 Conpot deployments based on the fictional default company name. A
taxonomy on anti-virtualization and anti-debugging techniques was published by Chen et al. [118]. They divide these techniques by abstraction, artifact, accuracy,
access level, complexity, evasion and imitation. Additionally, they published a
remote fingerprinting method to fingerprint virtualized hosts. Another taxonomy
on HP detection techniques was published by Uitto et al. [119]. They define
Table 3. Comparison of the different research for VMI-based honeypots

Author                      Year  Research obj.               Virt. engine                Monitoring
Vrable et al. [95]          2005  Generation                  QEMU/KVM                    Unknown
Portokalidis et al. [96]    2006  Signatures                  QEMU/KVM                    Volatile memory, taint analysis
Jiang and Wang [98]         2007  Stealthiness                QEMU/KVM                    System calls
Jiang et al. [99]           2007  Semantic gap                VMware, Xen, QEMU/KVM, UML  Full system
Hay and Nance [100]         2008  High-level forensics        Xen                         Emulated Unix utilities
Tymoshyk et al. [109]       2009  Semantic gap                QEMU/KVM                    System calls
Honeynet Project [97]       2010  VMI-Sebek                   QEMU/KVM                    System calls
Dolan-Gavitt et al. [101]   2011  Automation                  QEMU/KVM                    System calls
Pfoh et al. [102]           2011  Performance                 QEMU/KVM                    System calls
Srinivasan and Jiang [103]  2011  Record and replay           QEMU/KVM                    System calls
Biedermann et al. [104]     2012  Generation                  Xen                         System calls, network monitoring, memory state
Lengyel et al. [105]        2012  Generation, file capturing  Xen                         Memory state
Lengyel et al. [110]        2013  Routing                     Xen                         Memory state
Beham et al. [111]          2013  Visualization, performance  KVM, Xen                    VMI-honeymon [105]
Lengyel et al. [106]        2014  Automation                  Xen                         System calls, file system
Urias et al. [107]          2015  Generation                  KVM                         System calls, processes, file system
Shi et al. [108]            2015  System call analysis        KVM                         System calls
Sentanoe et al. [94]        2017  SSH                         Unknown                     System calls

temporal, operational, hardware and environment as fundamental classes. In
2016, Dahbul et al. [120] introduced a threat model for HPs. This model groups attacks into three classes: poisoning, compromising and learning. They also propose
a number of fixes for honeyd, Dianaea and Kippo.
    Hayatle et al. [126] proposed a method based on Dempster-Shafer evidence
combining to use multi-factor decision making to detect HPs. Later on, they pub-
lished a work on the detection of HIHPs based on Markov models [127]. Honeypot-
aware attackers or botnets are investigated by Wang [128] and Costarella et
al. [129]. The risks of reflected attacks by the abuse of HPs was discussed by
Husak [130].

3.3      Field studies and technological surveys
In this subsection, field studies and technological surveys are considered. First,
applications and deployments are examined. The kind and duration of deployment
were analyzed, as well as the DT used, success, attack vector and format of retrieved
Demystifying Deception Technology                11

                  Table 4. Overview of counter measures against HPs

Method                 Detail                                                    Target                                        Mitigation                                       Ref.
Temporal behavior      Measure RTT to expose correlations between IP addresses  honeyd, virtual honeypots                     Simulating timing behavior                       [114, 115, 118, 121, 122]
Stack fingerprinting   Send corrupted packets and analyze responses              Simulated communication stacks                Implementation of full TCP/IP stack              [27, 123]
Functional probing     Use provided functions and verify status                  SMTP and DNS                                  Implementation of full functionality             [113, 121]
System call behavior   Anomalies in temporal behavior or memory locations        Linux systems                                 Simulating timing behavior, KASLR                [27, 121, 122, 124]
Network traffic        Analyze RX and TX network traffic, e.g. number of bytes   Network-based data exfiltration, e.g. Sebek   Hinder network monitoring, VMI, proxy            [121]
UML detection          dmesg output, network device, /proc/, memory layout       UML-based host isolation                      Manipulating tools to show related information   [122]
VMware detection       Hardware, e.g. MAC address, I/O backdoor                  VMware-based host isolation                   Customize hardware, patch I/O backdoor [27]      [27, 122]
Debugger detection     ptrace(), IsDebuggerPresent() or memory search for 0xCC   e.g. Cuckoo                                   -                                                [122, 125]
Semantic gap           Manipulate kernel data structure                          VMI                                           -                                                [116]
Customization          Search for default strings                                -                                             Customize systems                                [27, 117, 120–122]
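The temporal-behavior entry in table 4 exploits the fact that IP addresses simulated by a single honeyd host share one clock and one network path, so their response latencies jitter in step. A minimal check over recorded RTT series might look as follows; the threshold and the use of plain Pearson correlation are illustrative simplifications of the cited detection papers.

```python
# Sketch of RTT-correlation-based honeypot detection: if the round-trip-time
# jitter of two supposedly distinct hosts is strongly correlated, they are
# likely simulated on the same physical machine (e.g. by honeyd).
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient; assumes non-constant input series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def likely_same_host(rtts_a, rtts_b, threshold=0.9):
    """Flag two IPs as probably co-hosted if their RTT series correlate."""
    return pearson(rtts_a, rtts_b) > threshold
```

The corresponding mitigation from the table, simulating timing behavior, amounts to adding independent artificial jitter per simulated IP so that this correlation disappears.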

data. After that, works that evaluate deception resources are analyzed. They
are grouped according to the deception resources they consider, as well as the
metrics and characteristics considered by the authors. A summary of technological
surveys of deception resources is given in table 5. Fourteen noteworthy examples
of field studies about deception technologies
were found. Cohen et al. [131] performed an analysis of real attackers’ behavior in
a vulnerable system. They introduced security experts to a system under attack
in which they had deployed deception resources and monitored their behavior.
Results were derived from a questionnaire answered by the experts after the
experiment. Fraunholz et al. performed two field studies. In the first study,
Fraunholz and Schotten [51] proposed server-based deception mechanisms in
order to hinder attackers. Fake banners, a fake robots.txt, tampered error responses,
an adaptive delay and honey files were presented in order to study attacker
behavior under these circumstances. A total of 1,200 accesses were monitored. In the
second study [132, 133], they analyzed attacker behavior monitored by six honeypots
deployed on one consumer server and five web hosting servers during a period of 222
days. Almost 12 million access attempts were monitored by the LIHPs used.
Common protocols, such as HTTP, HTTPS, FTP, POP3, SMTP, SSH and
Telnet were offered by the honeypots. In addition to that, industrial protocols
Bacnet, Modbus and S7 were emulated in order to derive insight about the threat
landscape for industrial applications. Lazarov et al. [48] purposely leaked forged
confidential information in Google spreadsheets. IP addresses were contained in
these spreadsheets and were supposed to lure attackers. 174 clicks were monitored,
as well as 44 visits by 39 unique IP-addresses. Liu et al. [134] followed a similar
approach of publishing apparently confidential information. SSH keys were leaked
on GitHub, luring attackers to connect to Cowrie-based HPs. About 31,000 unique
passwords were monitored, as well as the user behavior after login, over a
duration of two weeks. Zohar et al. [135] set up a comprehensive organizational
network consisting of users, mail data, documents, browser profiles and other IT
resources. Traps and decoys were introduced into this network. Similar to the
work of Cohen et al. [131], 52 security professionals took a Capture the Flag (CTF)
challenge in this network and tried to compromise it. The goal of this experiment
was to determine the best suited means for different organizational networks, as
well as the best deployment strategies for traps and decoys in computer networks.
Howell [36] used Sebek in order to derive insight about attacker behavior after
compromise. The dataset of Jones [136], gathered by HIHPs, was used for this
experiment, containing 1,548 accesses by 478 attackers. Sobesto [137] analyzed
attackers' reactions to system configurations and banners. Unused IP addresses of
a university network were used to deploy Dionaea honeypots with a number
of vulnerabilities and to monitor attacker behavior after intrusion. This took place
from May 17 to October 31, during which 624 sessions were monitored
with the honeypot tool Spy. Maimon et al. [138] performed two runs of their
experiments: one for two and one for six months. During that time, 86 and 502
computers, respectively, were set up and made available from the internet. Upon
connection, some presented warning banners and some did not. They contained
different vulnerable entry points, created with Sebek and OpenVZ as a gateway,
which attracted 1,058 and 3,768 trespassing incidents, respectively. The aim was to
determine the effect of warnings on attackers. Kheirkhak et al. [139] analyzed the
credentials used by login attempts. Eight HPs with enabled SSH connectivity were
introduced to six different university campus networks over a duration of seven
weeks. A total of 98,180 connections from 1,153 unique IPs in 79 countries were captured.
Zhan et al. [140] used different kinds of HPs, namely Dionaea, Mwcollector, Amun
and Nepenthes. These were used to perform statistical analysis about attack
patterns. Five periods were distinguished, during each of which 166 attackers
attacked the vulnerable services: SMB, NetBIOS, HTTP, MySQL and SSH. Salles-
Loustau et al. [141] analyzed the attacker behavior based on keystroke patterns.
Three honeypots with different configurations captured 211 attack sessions during
a period of 167 days. Berthier et al. [142] focused on the behavior and actions
of an attacker after compromising the system. In order to do so, 24 different
actions were monitored as indicators for different types of behavior. The HPs
were set up for a duration of eight months in university networks, were available
via SSH and captured 20335 typed commands in 1171 attack sessions. Ramsbrock
et al. [143] followed a similar approach. Four Linux HPs were introduced into
a university network, available via SSH with easily guessable credentials for a
duration of 24 days. Attacker actions were monitored with syslog-ng to capture
commands, strace to log system calls and Sebek to collect keystrokes. A total of
269,262 attack attempts from 229 unique IPs were collected. It can be seen that
a strong focus in the application of deception technologies lies on HPs. Apart
from that, real systems extended with monitoring capabilities are often employed.
By definition, they count as HIHPs. Only two of the works described
above employ significantly different DTs, namely Fraunholz and Schotten [51]
and Lazarov et al. [48]. Seven noteworthy examples of technological surveys
about DTs were found. They are summarized in table 5. In these surveys, eleven
evaluation features that were shared by at least two surveys were identified.
They are numbered from F1 to F11 and are defined as follows: Interactivity
(F1), scalability (F2), legal or ethical considerations (F3), type (F4), deployment
(F5), advantages and disadvantages in comparison with other kinds of defense
technologies (F6), quality and type of data and the derived insights (F7), type
of the DT resource (F8), technical way of deployment and kind of DT (F9),
detectability and anti-detection capabilities (F10) and extensibility (F11).

                     Table 5. Overview of technological survey papers

    Work                      Year   DT      F1   F2   F3   F4   F5   F6   F7   F8   F9   F10  F11  other
    Smith [144]               2016   HP/HT   ✓    ✗    ✓    ✓    ✗    ✗    ✓    ✓    ✓    ✓    ✗    -
    Nawrocki et al. [60]      2016   HP      ✓    ✗    ✗    ✓    ✗    ✓    ✓    ✓    ✓    ✗    ✗    -
    Grudziecki et al. [145]   2012   HP      ✓    ✓    ✗    ✓    ✓    ✓    ✓    ✗    ✓    ✗    ✓    Rel./Sup.
    Girdhar and Kaur [146]    2012   HP      ✓    ✗    ✗    ✓    ✗    ✓    ✗    ✗    ✗    ✗    ✗    -
    Gorzelak et al. [147]     2011   HP      ✗    ✓    ✗    ✗    ✓    ✗    ✓    ✗    ✓    ✗    ✓    -
    Lakhani [148]             2003   HP      ✗    ✗    ✓    ✓    ✓    ✗    ✗    ✗    ✓    ✗    ✗    -

     Smith [144] considers a variety of DTs: HPs as well as HTs, honeycreds,
honeytraps and honeynets are examined as general concepts. In terms of tools,
he analyzes VMWare, Chroot and Honeyd with respect to their usability as DT
tools. Nawrocki et al. [60] analyze and compare 68 different HPs. Grudziecki et
al. [145] analyze 33 different HPs. They are the only ones considering reliability
and support of the tools as an evaluation feature. Bringer et al. [149] survey
more than 80 papers that present HP technologies, 60 of which supposedly had a
significant impact on the field of DT. Girdhar and Kaur [146] analyze five
different HPs: ManTrap, Back Officer Friendly, Specter, Honeyd
and Honeynet. Gorzelak et al. [147] compare honeypots with other incident
detection and response tools. 30 different tools or services and twelve different
methodologies are considered in their work. Lakhani [148] compares four different
honeypots in his work: LaBrea, Specter, Honeyd and ManTrap. In summary,
most works distinguished between types of HP, namely research and production.
The two next most important evaluation features are the kind of deception
technology and the quality and type of data generated by the resource. Only
Grudziecki et al. [145] considered reliability and the technical support of a DT.

4      Legal Considerations and Ethics

This section discusses legal and ethical aspects. Entrapment, privacy and liability
are found to be discussed in most of the literature. Therefore, table 6 distinguishes the
reviewed literature by these subjects. Furthermore, the works are grouped by the
country they consider and by whether ethical aspects are also taken into account.
    Spitzner [42] and Mokube et al. [150] discussed the Fourth Amendment to
the US Constitution, while the Wiretap Act was analyzed by Burstein [151], Ohm
et al. [152] and Spitzner [42]. Dornseif et al. [153] discussed aspects of the Criminal
Law and the Tort Law. The Patriot Act was examined by Spitzner [42]. Burstein [151]
links the Digital Millennium Copyright Act, the Computer Fraud and Abuse
Act and the Stored Communications Act. Scottberg et al. [154] and Jain [155]
concluded that only law enforcement officers can entrap anyone. Furthermore,
Jain concludes that “organizations or educational institutions cannot be charged
with entrapment”. Privacy is granted by the European convention on Human
Rights [152] and addressed in the context of deception by Jain [155], Mokube
[150], Schaufenbuel [156] and Sokol [157, 158]. Fraunholz et al. [159] address
privacy, copyright, self-defence and domain-specific law, such as the rights of law
enforcement, research institutes and telecommunication providers. Liability is
covered in Schaufenbuel [156], Campbell [160], Fraunholz et al. [159] and Nawrocki
et al. [60]. Schaufenbuel [156] proposed suggestions for the operation and use of
DTs to mitigate legal risks. The 16 works covering aspects of US jurisdiction,
in contrast to the five covering European law, show a comparatively higher
scientific interest in US law. Finally, Campbell [160] compares the US with
the African jurisdiction, while Warren et al. [161] address Australian law.

                  Table 6. An overview of legal and ethical studies on DT

                                  Country                       Legal aspects
     Work                     US   Europe   Others            Entrapment   Privacy   Liability   Ethics
     Dornseif [153]           ✓    ✗        ✗                 ✗            ✓         ✗           ✓
     Spitzner [42]            ✓    ✗        ✗                 ✓            ✗         ✗           ✗
     Scottberg [154]          ✓    ✗        ✗                 ✓            ✓         ✗           ✗
     Jain [155]               ✓    ✗        ✗                 ✓            ✓         ✗           ✗
     Mokube [150]             ✓    ✗        ✗                 ✓            ✓         ✓           ✗
     Bringer [149]            ✓    ✗        ✗                 ✗            ✗         ✗           ✓
     Schaufenbuel [156]       ✓    ✗        ✗                 ✓            ✓         ✓           ✗
     Sokol [157]              ✗    ✗        ✗                 ✗            ✓         ✗           ✗
     Campbell [160]           ✗    ✗        African vs. US    ✓            ✓         ✓           ✓
     Nawrocki [60]            ✓    ✓        ✗                 ✓            ✓         ✓           ✗
     Rowe [162]               ✓    ✗        ✗                 ✓            ✗         ✗           ✓
     Burstein [151]           ✓    ✗        ✗                 ✗            ✗         ✗           ✗
     Radcliffe [163]          ✓    ✗        ✗                 ✗            ✗         ✗           ✗
     Rubin [164]              ✓    ✗        ✗                 ✗            ✓         ✓           ✗
     Karyda [165]             ✓    ✗        ✗                 ✗            ✗         ✗           ✗
     Belloni [166]            ✓    ✗        ✗                 ✗            ✗         ✗           ✗
     Sokol [158]              ✗    ✓        ✗                 ✗            ✓         ✗           ✗
     Ohm [152]                ✗    ✓        ✗                 ✗            ✓         ✓           ✗
     Nance [167]              ✓    ✗        ✗                 ✗            ✗         ✗           ✓
     Warren [161]             ✗    ✗        Australia         ✗            ✗         ✗           ✗
     Dornseif [153]           ✗    ✓        ✗                 ✗            ✗         ✗           ✓
     Fraunholz et al. [159]   ✗    ✓        ✗                 ✓            ✓         ✓           ✗

   In contrast to the 22 works covering the legal aspects, there are only six
works addressing ethical issues with DTs as well. Dornseif et al. [153] describe
the problem of making the internet safer by introducing vulnerabilities to the
public internet. Holz [168] asks how it is possible to secure the internet by
adding weaknesses, and questions the moral soundness of making people part of
an experiment without their knowledge and consent. Campbell [160]
questions the moral implications of enticing someone to commit a crime and
mentions the problem of “adding fuel to the fire” when making a system
vulnerable to attacks. Rowe and Rrushi [162] attribute the ethical responsibility
of DT to the programmer, while addressing the issue of self-modifying code and
artificial intelligence.
    Inclusion of DT into guidelines is only just starting. The German Federal Office
for Information Security (BSI) published a baseline protection guideline [169],
which mentions the use of HPs. It considers them anomaly detection mechanisms,
while not mentioning DTs explicitly. Implicitly, however, DTs are described, for
example in the form of spoofed server banners.
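The spoofed server banners that the guideline implicitly describes can be sketched as a minimal decoy listener; the deliberately outdated version string and the one-shot design are illustrative assumptions.

```python
# Sketch of a banner-spoofing decoy: a TCP listener that greets every client
# with a fake, outdated-looking SSH identification string, so that scanners
# fingerprint the host incorrectly and connections can be logged as tripwires.
import socket
import threading

FAKE_BANNER = b"SSH-2.0-OpenSSH_5.1p1 Debian-5\r\n"  # made-up "old" version

def start_decoy(host="127.0.0.1"):
    """Start a one-shot decoy listener and return the ephemeral port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))  # port 0: let the OS choose a free port
    srv.listen(1)

    def handle_one():
        conn, addr = srv.accept()   # addr would be logged as an indicator
        conn.sendall(FAKE_BANNER)   # the only thing a visitor ever sees
        conn.close()
        srv.close()

    threading.Thread(target=handle_one, daemon=True).start()
    return srv.getsockname()[1]
```

A production deployment would keep listening and record the peer address; the listener here serves a single connection to keep the sketch small.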

5     Conclusion

In this work, the current state of the art in DT and adjacent domains is presented.
Where applicable, previous surveys are referenced and supplemented with recent
research. It was pointed out that DT is a beneficial extension for traditional IT-
security. Emphasis was placed on requirement categories, such as psychological,
formal, legal and ethical, as well as on recent trends, such as VMI and the field
of industrial and critical infrastructure security.

Acknowledgment

This work has been supported by the Federal Ministry of Education and Research
of the Federal Republic of Germany (Foerderkennzeichen KIS4ITS0001, IUNO).
The authors alone are responsible for the content of the paper.

References
    1. Wu Sun. The Art of War. Mens sana. Knaur, München, 2001.
    2. Kevin David Mitnick and William L. Simon. The art of deception: Controlling the
       human element of security. Safari Books Online. Wiley, Indianapolis, Ind, 2002.
    3. Clifford Stoll. The Cuckoo’s Egg: Tracking a Spy Through the Maze of Computer
       Espionage. 1989.
    4. Bill Cheswick. An evening with berferd in which a cracker is lured, endured, and
       studied, 1991.
    5. Fred Cohen. The deception toolkit home page and mailing list, 1995.
    6. Simon Duque Anton, Daniel Fraunholz, and Hans Dieter Schotten. Angriffserken-
       nung für industrielle netze innerhalb des projektes iuno. Mobilkommunikation -
       Technologien und Anwendungen, 2017.
    7. Daniel Fraunholz, Simon Duque Anton, and Hans Dieter Schotten. Introducing
       gamfis: A generic attacker model for information security. International Conference
       on Software, Telecommunications and Computer Networks, 25, 2017.
  8. Daniel Fraunholz, Daniel Krohmer, Simon Duque Anton, and Hans Dieter Schotten.
     Yaas - on the attribution of honeypot data. International Journal on Cyber
     Situational Awareness, 2(1), 2017.
  9. Barton Whaley. Toward a general theory of deception. Journal of Strategic Studies,
     5(1):178–192, 2008.
10. John B. Bell and Barton Whaley. Cheating and deception. Transaction Publ, New
     Brunswick, repr edition, 1991.
 11. James F. Dunnigan and Albert A. Nofi. Victory and deceit: Deception and trickery
     at war. Writers Club Press, San Jose [etc.], 2nd ed. edition, op. 2001.
12. Charles A. Fowler and Robert F. Nesbit. Tactical deception in air-land warfare.
     Journal of Electronic Defense, 18(6):37–40, 1995.
13. Neil C. Rowe and Hy S. Rothstein. Two taxonomies of deception for attacks on
     information systems. Journal of Information Warfare, 3(2), 2004.
14. Charles Fillmore. The case for case. Universals in Linguistic Theory, pages 1–25,
     1992.
15. Terry Copeck, Sylvain Delisle, and Stan Szpakowicz. Parsing and case analysis in
     tanka. Proc. of COLING-92, pages 1008–1012, 1992.
16. John L. Austin and James O. Urmson, editors. How to do things with words: [the
     William James lectures delivered at Harvard University in 1955]. Harvard Univ.
     Press, Cambridge, Mass., 2. ed., [repr.] edition, ca. 2009.
17. Frank Stech, Kristin Heckman, Phil Hilliard, and Janice Ballo. Scientometrics of
     deception, counter-deception, and deception detection in cyber-space. PsychNology
     Journal, 9(2):79–122, 2011.
18. James D. Monroe. Deception: Theory and practice.
19. U.S. Army. Information operations: Joint publication 3-13, 2014.
20. Jaeeun Shim and Ronald C. Arkin. A taxonomy of robot deception and its benefits
     in hri. In 2013 IEEE International Conference on Systems, Man, and Cybernetics,
     pages 2328–2335. IEEE, 2013.
21. Mohammed H. Almeshekah and Eugene H. Spafford. Planning and integrating
     deception into computer security defenses. The 2014 workshop, pages 127–138,
     2014.
22. Mohammed H. Almeshekah. Using Deception to Enhance Security: A Taxonomy,
     Model, and Novel Uses. Ph.d. thesis, Purdue University, Department of Computer
     Sciences, 2015.
23. Eric M. Hutchins, Michael J. Cloppert, and Rohan M. Amin. Intelligence-driven
     computer network defense informed by analysis of adversary campaigns and
     intrusion kill chains.
24. Jeffrey Pawlick, Edward Colbert, and Quanyan Zhu. A game-theoretic taxonomy
     and survey of defensive deception for cybersecurity and privacy. ACM Computing
     Surveys, 2017.
25. C. Seifert, I. Welch, and P. Komisarczuk. Taxonomy of honeypots: Technical
     report cs-tr-06/12.
26. F. Pouget, M. Dacier, and H. Debar. Honeypot, honeynet, honeytoken: Termino-
     logical issues.
27. Josef Corey. Advanced honey pot identification and exploitation. Phrack Inc., 11,
     2003.
28. Martin C. Libicki. Mesh and the net: Speculations on armed conflict in a time of
     free silicon. Diane Pub Co, [Place of publication not identified], 2004.
29. David B. Buller and Judee K. Burgoon. Interpersonal deception theory. Commu-
     nication Theory, 6(3):203–242, 1996.
30. G. Cybenko, A. Giani, and P. Thompson. Cognitive hacking: A battle for the
     mind. Computer, 35(8):50–56, 2002.
31. Jinwei Cao, Janna M. Crews, Ming Lin, Judee Burgoon, and Jay F. Nunamaker.
     Designing agent99 trainer: A learner-centered, web-based training system for
     deception detection. In G. Goos, J. Hartmanis, J. van Leeuwen, Hsinchun Chen,
     Richard Miranda, Daniel D. Zeng, Chris Demchak, Jenny Schroeder, and Therani
     Madhusudan, editors, Intelligence and Security Informatics, volume 2665 of
    Lecture Notes in Computer Science, pages 358–365. Springer Berlin Heidelberg,
     Berlin, Heidelberg, 2003.
32. Lorrie Cranor and Simson Garfinkel. Security and Usability: Designing Secure
    Systems that People Can Use. O’Reilly Media Inc, Sebastopol, 2008.
33. Jim Yuill, Dorothy Denning, and Fred Feer. Psychological vulnerabilities to
     deception, for use in computer security. DoD Cyber Crime Conference, 2007.
34. Daniel Ramsbrock, Robin Berthier, and Michel Cukier. Profiling attacker behavior
     following ssh compromises. International Conference on Dependable Systems and
    Networks, 2007.
35. Bertrand Sobesto. Empirical Studies based on Honeypots for Characterizing
     Attackers Behavior. PhD thesis, University of Maryland, 2015.
36. Christian Jordan-Michael Howell. The Restrictive Deterrent Effect of Warning
    Banners in a Compromised Computer System. Master thesis, University of South
     Florida, 2011.
37. Daniel Fraunholz, Frederic Pohl, and Hans Dieter Schotten. Towards basic design
     principles for high- and medium-interaction honeypots. European Conference on
    Cyber Warfare and Security, 16, 2017.
38. Daniel Kahneman, editor. Judgment under uncertainty: Heuristics and biases.
     Cambridge Univ. Press, Cambridge, 24. printing edition, 2008.
39. Kun Wang, Miao Du, Sabita Maharjan, and Yanfei Sun. Strategic honeypot
     game model for distributed denial of service attacks in the smart grid. IEEE
    Transactions on Smart Grid, page 1, 2017.
40. Daniel Fraunholz and Hans Dieter Schotten. Strategic defense and attack in decep-
     tion based network security. International Conference on Information Networking,
    32, 2018.
41. Augusto de Barros. Dlp and honeytokens, 2007.
42. Lance Spitzner. The honeynet project: Trapping the hackers. IEEE Security and
    Privacy, 2003.
43. Eugene H. Spafford. More than passive defense, 2011.
44. Neil C. Rowe, John Custy, and Duong Binh. Defending cyberspace with fake
     honeypots. Journal of Computers, 2(2):25–36, 2007.
45. Samuel Laurén, Sampsa Rauti, and Ville Leppänen. An interface diversified
     honeypot for malware analysis. In Rami Bahsoon and Rainer Weinreich, editors,
    Proceedings of the 10th European Conference on Software Architecture Workshops
    - ECSAW ’16, pages 1–6, New York, New York, USA, 2016. ACM Press.
46. Maya Bercovitch, Meir Renford, Lior Hasson, Asaf Shabtai, Lior Rokach, and
    Yuval Elovici. Honeygen: an automated honey tokens generator. International
    Conference on Intelligence and Security Informatics, 2011.
47. Ari Juels and Ronald Rivest. Honeywords: Making password-cracking detectable.
    Proceedings of the 2013 ACM SIGSAC conference on Computer & communications
    security, pages 145–160, 2013.
48. Martin Lazarov, Jeremiah Onaolapo, and Gianluca Stringhini. Honey sheets: What
    happens to leaked google spreadsheets? USENIX Workshop on Cyber Security
    Experimentation and Test, 9, 2016.
49. Frederico Araujo, Kevin W. Hamlen, Sebastian Biedermann, and Stefan Katzen-
    beisser. From patches to honey-patches. In Gail-Joon Ahn, Moti Yung, and
    Ninghui Li, editors, Proceedings of the 2014 ACM SIGSAC Conference on Com-
    puter and Communications Security - CCS ’14, pages 942–953, New York, New
    York, USA, 2014. ACM Press.
50. Jeffrey Avery and Eugene H. Spafford. Ghost patches: Fake patches for fake
    vulnerabilities. Proceedings of the IFIP International Conference on ICT Systems
    Security and Privacy Protection, pages 399–412, 2017.
51. Daniel Fraunholz and Hans Dieter Schotten. Defending web servers with feints,
    distraction and obfuscation. International Conference on Computing, Networking
    and Communications, 2018.
52. Hamed Okhravi, Thomas Hobson, David Bigelow, and William Streilein. Finding
    focus in the blur of moving-target techniques. IEEE SECURITY & PRIVACY,
    12(2):16–26, 2014.
53. Ehab Al-Shaer, Qi Duan, and Jafar Jafarian. Random host mutation for moving
    target defense. Proceedings of the International Conference on Security and
    Privacy in Communication Systems, pages 310–327, 2012.
54. Kyungmin Park, Samuel Woo, Daesung Moon, and Hoon Choi. Secure cyber
    deception architecture and decoy injection to mitigate the insider threat. Symmetry,
    10(1):14, 2018.
55. Daniel Fraunholz, Daniel Krohmer, Frederic Pohl, and Hans Dieter Schotten.
    On the detection and handling of security incidents and perimeter breaches - a
    modular and flexible honeytoken based framework. Proceedings of the 9th IFIP
    International Conference on New Technologies, Mobility & Security, 9, 2018.
56. Michael Hoglund and Shawn Bracken. Inoculator and antibody for computer
    security, 2011.
57. Mark Dowd, John McDonald, and Justin Schuh. The art of software security
    assessment: Identifying and preventing software vulnerabilities. Addison-Wesley,
    Upper Saddle River, N.J., 2007.
58. Neil C. Rowe. Deception in defense of computer systems from cyber attack. In
    Lech Janczewski and Andrew Colarik, editors, Cyber Warfare and Cyber Terrorism,
    pages 97–104. IGI Global, 2007.
59. Nikos Virvillis, Bart Vanautgaerden, and Oscar Serrano. Changing the game: The
     art of deceiving sophisticated attackers. International Conference on Cyber
     Conflict, 6, 2014.
60. Martin Nawrocki, Matthias Wählisch, Thomas Schmidt, Christian Keil, and Jochen
    Schönfelder. A survey on honeypot software and data analysis. CoRR, 2016.
61. Hamid Mohammadzadeh, Roza Honarbakhsh, and Omar Zakaria. A survey
    on dynamic honeypots. International Journal of Information and Electronics
    Engineering, (Vol. 2, No. 2), 2012.
62. Wira Zanoramy Ansiry Zakaria and Laiha Mat Kiah. A Review on Artificial
    Intelligence Techniques for Developing Intelligent Honeypot. PhD thesis, University
    of Malaya, Faculty of Computer Science and Information Technology, Kuala
    Lumpur, 2012.
63. Wira Zanoramy Ansiry Zakaria and Laiha Mat Kiah. A review of dynamic and
    intelligent honeypots. ScienceAsia 39S, 2013.
64. Daniel Fraunholz, Marc Zimmermann, Alexander Hafner, and Hans Dieter Schot-
    ten. Data mining in long-term honeypot data. IEEE International Conference on
    Data Mining series - Workshop on Data Mining for Cyber-Security, 2017.
65. Daniel Fraunholz, Marc Zimmermann, and Hans Dieter Schotten. An adaptive
    honeypot configuration, deployment and maintenance strategy. International
    Conference on Advanced Communications Technology. International Conference
    on Advanced Communications Technology, 19, 2017.
66. Daniel Fraunholz, Daniel Krohmer, Simon Duque Anton, and Hans Dieter Schotten.
    Investigation of cyber crime conducted by abusing weak or default passwords with
    a medium interaction honeypot. International Conference On Cyber Security And
    Protection Of Digital Services, 2017.
67. Simon Duque Anton, Daniel Fraunholz, Christoph Lipps, Frederic Pohl, Marc
    Zimmermann, and Hans Dieter Schotten. Two decades of scada exploitation: A
    brief history. Workshop On The Security Of Industrial Control Systems & Of
    Cyber-Physical Systems, 3, 2017.
68. Venkat Pothamsetty and Matthew Franz. Scada honeynet project: Building
    honeypots for industrial networks, 2004.
69. N. Provos. A virtual honeypot framework. USENIX Security Symposium, 2004.
70. Digital Bond Inc. Digitalbond virtual plc honeynet, 2007.
71. Lukas Rist, Johnny Vestergaard, Daniel Haslinger, Andrea Pasquale, and John
    Smith. Conpot ics/scada honeypot, 2015.
72. Samuel Litchfield. HoneyPhy: A physics-aware CPS honeypot framework. PhD
    thesis, Georgia Tech, 2017.
73. sk4ld. GridPot: Symbolic cyber-physical honeynet framework, 2015.
74. Owen Redwood, Joshua Lawrence, and Mike Burmester. A symbolic honeynet
    framework for scada system threat intelligence. In Mason Rice and Sujeet Shenoi,
    editors, Critical Infrastructure Protection IX, volume 466 of IFIP Advances in
    Information and Communication Technology, pages 103–118. Springer International
    Publishing, Cham, 2015.
75. Stipe Kuman, Stjepan Gros, and Miljenko Mikuc. An experiment in using imunes
    and conpot to emulate honeypot control networks. In 2017 40th International
    Convention on Information and Communication Technology, Electronics and
    Microelectronics (MIPRO), pages 1262–1268. IEEE, 2017.
76. Kyle Wilhoit and Stephen Hilt. The gaspot experiment: Unexamined perils in
     using gas-tank-monitoring systems.
77. Michael Winn, Mason Rice, Stephen Dunlap, Juan Lopez, and Barry Mullins.
    Constructing cost-effective and targetable industrial control system honeypots for
    production networks. International Journal of Critical Infrastructure Protection,
    10:47–58, 2015.
78. Juan David Guarnizo, Amit Tambe, Suman Sankar Bhunia, Martin Ochoa, Nils Ole
    Tippenhauer, Asaf Shabtai, and Yuval Elovici. Siphon. In Jianying Zhou and
    Ernesto Damiani, editors, Proceedings of the 3rd ACM Workshop on Cyber-Physical
    System Security - CPSS ’17, pages 57–68, New York, New York, USA, 2017. ACM
    Press.
79. J. Cao, W. Li, and B. Li. Dipot: A distributed industrial honeypot system. Smart
    Computing and Communication, 10699, 2018.
80. Dhavy Gantsou and Patrick Sondi. Toward a honeypot solution for proactive
    security in vehicular ad hoc networks. In James J. Park, Ivan Stojmenovic, Min
    Choi, and Fatos Xhafa, editors, Future Information Technology, volume 276 of
    Lecture Notes in Electrical Engineering, pages 145–150. Springer Berlin Heidelberg,
    Berlin, Heidelberg, 2014.
81. Paulo Simoes, Jorge Proenca, Tiago Cruz, and Edmundo Monteiro. On the use of
    honeypots for detecting cyber attacks on industrial control networks. European
    Conference on Information Warfare and Security, 12, 2013.
82. Tamas Holczer, Mark Felegyhazi, and Levente Buttyan. The design and implementation of
    a plc honeypot for detecting cyber attacks against industrial control systems.
    Proceedings of International Conference on Computer Security in a Nuclear World:
    Expert Discussion and Exchange, 2015.
83. Michael Haney and Mauricio Papa. A framework for the design and deployment of
    a scada honeynet. Cyber and Information Security Research Conference, 9(1):121–
    124, 2014.
84. Emmanouil Vasilomanolakis, Shreyas Srinivasa, and Max Mühlhäuser. Did you
    really hack a nuclear power plant? an industrial control mobile honeypot. IEEE
    Conference on Communications and Network Security, 2015.
85. Kamil Koltys and Robert Gajewski. SHaPe: A honeypot for electric power
    substation. Journal of Telecommunications and Information Technologies, 4:37–43,
    2015.
86. Charlie Scott. Designing and implementing a honeypot for a scada network. SANS
    Institute InfoSec Reading Room, 2014.
87. Jules Disso, Kevin Jones, and Steven Bailey. A plausible solution to scada
    security: Honeypot systems. International Conference on Broadband, Wireless
    Computing, Communication and Applications, 8:443–448, 2013.
88. Daniel Buza, Ferenc Juhasz, György Miru, Mark Felegyhazi, and Tamas Holczer.
    CryPLH: Protecting smart energy systems from targeted attacks with a plc honeypot.
    International Workshop on Smart Grid Security, pages 181–192, 2014.
89. Yin Minn Pa Pa, Shogo Suzuki, Katsunari Yoshioka, and Tsutomu Matsumoto.
    Iotpot: Analysing the rise of iot compromises. USENIX Workshop on Offensive
    Technologies, 9, 2015.
90. Daniele Antonioli, Anand Agrawal, and Nils Ole Tippenhauer. Towards high-
    interaction virtual ics honeypots-in-a-box. In Edgar Weippl, Stefan Katzenbeisser,
    Mathias Payer, Stefan Mangard, Alvaro Cárdenas, and Rakesh B. Bobba, editors,
    Proceedings of the 2nd ACM Workshop on Cyber-Physical Systems Security and
    Privacy - CPS-SPC ’16, pages 13–22, New York, New York, USA, 2016. ACM
    Press.
91. Celine Irvene, David Formby, Samuel Litchfield, and Raheem Beyah. Honeybot:
    A honeypot for robotic systems. Proceedings of the IEEE, 106(1):61–70, 2018.
92. Arthur Jicha, Mark Patton, and Hsinchun Chen. Scada honeypots: An in-depth
    analysis of conpot. In 2016 IEEE Conference on Intelligence and Security Infor-
    matics (ISI), pages 196–198. IEEE, 2016.
93. Alexandru Vlad Serbanescu, Sebastian Obermeier, and Der-Yeuan Yu. Ics threat
    analysis using a large-scale honeynet. Electronic Workshops in Computing, pages
    20–30. BCS Learning & Development Ltd, 2015.
94. Stewart Sentanoe, Benjamin Taubmann, and Hans P. Reiser. Virtual machine
    introspection based ssh honeypot. In Proceedings of the 4th Workshop on
    Security in Highly Connected IT Systems - SHCIS ’17, pages 13–18,
    New York, New York, USA, 2017. ACM Press.
95. Michael Vrable, Justin Ma, Jay Chen, David Moore, Erik Vandekieft, Alex C. Sno-
    eren, Geoffrey M. Voelker, and Stefan Savage. Scalability, fidelity, and containment
    in the potemkin virtual honeyfarm. In Andrew Herbert and Ken Birman, editors,
    Proceedings of the twentieth ACM symposium on Operating systems principles -
    SOSP ’05, page 148, New York, New York, USA, 2005. ACM Press.