How to identify and measure trolling behaviour and state-sponsored hostile influence operations

ABT Shield, January 2019
Contents
General characteristics of trolling behaviour
Definitions of trolls
General profile of trolls
General profile of cyber-victims
Effect of trolling on people’s online behaviour
Strategies of state-led trolling activity
Characteristics of large-scale influence operations
Introduction and research definitions of trolling
Explanatory and research approaches of trolling
Psychological traits and motivations
The trolling environment
Types of trolls and the difference between trolls and trolling
A general profile of trolls and victims
A general model of trolling
Group trolling and state-sponsored troll activity
Trolling as a group behaviour
State-sponsored manipulation and trolling
Actors
Strategy behind state-led influence operations
Capacity building
Content creation
State-sponsored actions and patterns
How trolling affects voters’ behaviour
Trolling metrics
The TAP measurement model
Hierarchy of measurements
Establishing baselines and platform analysis
Trolling in everyday political discussions
Identifying and predicting hostile influence operations

General characteristics of trolling behaviour

Definitions of trolls
-   Generally, trolling or trolling behaviour can be defined as “the posting of incendiary
    comments with the intent of provoking others into conflict.”
-   In psychological research: trolling is defined as a psychological dysfunction traced
    back to the “dark triad” or “dark tetrad” of personality traits, namely psychopathy,
    narcissism, Machiavellianism and sadism, with sadism being one of the best predictors of
    such behaviour.
-   In hybrid warfare and state-sponsored activity: the NATO Strategic Communications
    Centre of Excellence distinguishes between “classic trolls” who “act in their own
    interests solely with the aim of sowing disagreement and inciting conflict in the
    online environment” and “hybrid trolls” who “are employed as a tool of information
    warfare” by state actors.

General profile of trolls
Opportunistic trolls
- Organizational traits: none; opportunity as described by Routine Activity Theory (RAT)
- Personal traits: anybody
- Online traits: average user; hateful/harmful language; negative thread
- Online strategy: none

Everyday trolls
- Organizational traits: no organization or online grassroots community
- Personal traits: male; antisocial personality; offline behaviour problems; poor interpersonal skills; previous victims
- Online traits: frequent online activity; multiple accounts across platforms; flagged posts; banned accounts; hateful/harmful language; membership in an online trolling community; negative thread
- Online strategy: ad hoc or community-driven behaviour

State-related/hybrid trolls
- Organizational traits: (1) a clear hierarchy and reporting structure; (2) content review by superiors; (3) strong coordination across agencies or teams; (4) weak coordination across agencies or teams; (5) liminal teams; (6) training staff to improve skills and abilities associated with producing and disseminating propaganda; (7) providing rewards or incentives for high-performing individuals; and (8) investing in research and development projects
- Personal traits: the difference lies in social role or actor type, with certain traits: governmental troops; politicians; private contractors; volunteers; paid citizens
- Online traits: platform or ecosystem creation; multiple accounts across platforms; original content creation; bots or use of bot/human networks; intensively reposted messages; repeated messages posted from different IP addresses and/or nicknames; republished information and links; identical messages; unusual message frequency; thread-jacking; hashtag-latching; account or profile data changes; concentrated account locations; type of web client; extremely precise repetitive patterns of messages; content: unusual % of interactive content, Russian URLs, specific world events, organizations, political personalities
- Online strategy: individual targeting and/or large-scale operations; actions can be state-executed, state-coordinated or state-incited; human-directed bot networks; infiltration of online communities; information flooding
General profile of cyber-victims
Typical cyber-victim
- Offline characteristics: female; previous victimization; high levels of depression, helplessness, stress, or loneliness; bullies others offline; isolated by peers
- Online characteristics: frequent Internet activities; online proximity to trolling/trolls; fame or another online character trait that stands out
- Online environment: popular posts; lack of a “guardian” mechanism such as moderation; trolling community; RAT opportunity structure

Effect of trolling on people’s online behaviour
Impact on people’s behaviour may entail: diverting attention (a smoke screen) from
important issues; altering one’s political stance; popularizing specific political actors;
inciting online behaviour, such as hate speech against a common target; enhancing conformity
to perceived “group behaviour”; intensifying group divisions; strengthening the “echo-chamber”
effect of opinion conformity; amplifying marginal voices; flooding an information space or
community with false information; and re-routing community or individual communication to
state-sponsored content, platforms and accounts.

Strategies of state-led trolling activity
Agency: state actors, contracted actors, or otherwise manipulated actors
Organization: hierarchy with clear strategic goals and metrics of online impact
Action: individual targeting of groups; coordinated accounts across different platforms; different political campaigns; re-routing targets’ communication; disinformation flooding; a combination of automated and human networked actors; original content; original applications developed

Characteristics of large-scale influence operations
State-led cyber troops or hybrid trolls, as part of hybrid human and bot networks and the
pre-established capacity of ecosystems made up of original-content-producing fake accounts,
homepages etc. across a variety of platforms, seek out current hot political topics or other
usually divisive social issues on social platforms in order to infiltrate online communities
and amplify existing divisions. Levels of engagement:

1. Infiltrate online communities or discourses to increase the polarisation around
   important social issues, like the #BlackLivesMatter movement.
2. Inject controversial topics (e.g., gender, GMOs, race, religion, or war) into
   debates and communities.
3. Increase real or fake group heterogeneity, which provokes more debate.
4. Flood online platforms and communities with disinformation through networks of trolls or bots.
5. Re-route grassroots communication to original (manipulated) content and fake-news
   ecosystems on major platforms like Facebook, YouTube, Twitter etc.
6. Coordinate a complex human/bot network’s simultaneous campaigns attacking a
   common target across different platforms and countries.

Introduction and research definitions of trolling
The phenomenon of trolling or trolling behaviour can be roughly defined as “the posting of
incendiary comments with the intent of provoking others into conflict” (Hopkinson citing
Hardaker).1 This kind of behaviour is mainly a product of, and can be related to, the
emergence of the internet’s complex infocommunications environment and of computer-mediated
communication (CMC), which makes human communication possible through the use of electronic
devices. The rise of social media and related technologies provided both the sources and the
platforms for the dissemination of a “heterogeneous mass of information” and unprecedented
amounts of grassroots, written disinformation. Thus, the informational environments of the
21st century established all the necessary and sufficient conditions for digital debates
between individuals or groups based on disinformation and misinformation, coupled with
“functional illiteracy, information overload and confirmation bias.”2 Although new
information technologies allow people to access a wide range of different opinions and types
of information, this advantage is diminished by consumers’ tendency, partly as a way of
reducing societal complexity and irritating volumes of information, to look for more extreme
versions of their own opinions and to reproduce these extreme views in a group environment,
thus forming closed-minded echo chambers made up of similar opinions and people. Such group
behaviours and (dis)information reproduction are further facilitated by third parties, be
they PR agencies, social media engines or malicious foreign powers, who wish to
intentionally manipulate societal groups for their business or political interests.
Three main market attributes of the new information society contribute to the
concentration and dissemination of disinformation. Firstly, any algorithm geared toward a
pleasant “user experience” promotes the so-called “Matthew effect” of accumulated advantage,
which tends to reward those who are already in an advantageous social position; algorithms
therefore reward similarity by design, connecting similar opinions, organisations or people
for real or perceived advantages.3 Secondly, the “data is the new oil” approach means that
social media companies are no longer primarily selling “places,” “public airtime” or
“newspaper pages” frequented by people; instead, their business is selling actual consumers,
groups of customers, and all the data (and access to data) that comes with them. As a
consequence, modern PR agencies and data-managing companies like Google or Facebook have
developed elaborate technical skills to follow brands and products across all communication
platforms by collecting data not primarily on specific products or services but on
individual users, so they end up with huge databases on unknown, random individuals and on
group behaviour. Thirdly, to make good (business) use of user data, companies have acquired
the skills to collect and extract business data and to use them to manipulate or target
specific groups of interest. This new digital market of data production, storage and
dissemination, together with the necessary e-infrastructures, allows and enables states to
politically manipulate electoral groups, including through artificial trolling, in ways we
have never seen before.

Explanatory and research approaches of trolling
Research has approached the phenomenon of trolling from two angles so far. The
“individual” approach has focused on the individual side of psychological traits and
behaviour leading to trolling, while “situational” research has tried to uncover
environmental or cultural factors to explain individuals’ destructive and antisocial
behaviour.4 Most of the research has followed experimental quantitative or qualitative
textual methodological designs. The limits of the “individual” approach arise from
environmental impact, because several researchers concluded that “under the right
circumstances all people can act like trolls.” However, the individual explanation cannot be
discarded either, since individual behaviour and psychology transform into group behaviour
and data that can be used by third parties to manipulate large swathes of the population for
political or economic purposes. The “environmental” approach takes into consideration
external factors that are theoretically well defined and show a strong correlation with
individual or group behaviour, such as the technology of instant messaging, lack of physical
or social knowledge of communication partners, and lack of shared norms to guide online
interactions.5 The environmentalist explanation mainly attributes trolling to “anonymity,”
which liberates one from the situations and community attention enforcing everyday
politeness and conformity to rules and regulations. Still, the “routine activity theory”
(RAT) challenges this approach by denying the role that well-known social structural
factors, like inequality or employment, play in driving antisocial online behaviour. The
criminological theory of RAT postulates that crimes arise from the everyday opportunities of
everyday people, so the mundane “opportunity situation” has three core or universal elements
beyond other sociological circumstances: a motivated offender, an attractive target and the
absence of capable guardianship.6 Thus, Golf-Papez and Veer conclude, trolling behaviour is
rather a result of specific places of perpetration where offenders and targets can interact.
Nevertheless, traditional sociological factors should not be eliminated from the explanation
either, since profiles of trolls or trolling behaviour can be described along well-known
sociological variables. Moreover, the targeting of vulnerable individuals or groups by third
parties is based on structural factors supplied by big data companies or platforms (Google,
Facebook), like age, sex and occupation, so these factors still contribute substantially to
the execution of any trolling activity.

Psychological traits and motivations
Psychological researchers have identified a “dark triad” or “dark tetrad” of personality
traits or dysfunctions likely to contribute to trolling behaviour on an individual level.
Psychology started to describe the qualitative personal factors of “evil” when faced with
the Nazi regime and the crimes committed against humanity during World War II. Erich Fromm
coined the term “malignant narcissism” to define "the most severe pathology and the root of
the most vicious destructiveness and inhumanity."7 Today, psychology calls psychopathy (an
antisocial personality disorder based mostly on overt antisocial actions), narcissism (a
self-obsession without much inner self-restraint on self-interested action) and
Machiavellianism (a calculated attitude towards the manipulation of others) the “dark triad”
of overlapping personality factors, recently adding “sadism” (an inclination to cause pain,
humiliation, fear or some form of physical or mental harm to others) as the fourth
psychological trait of antisocial behaviour.8 Psychological literature defines trolling as a
“practice of behaving in a deceptive, destructive, or disruptive manner in a social setting
on the Internet with no apparent instrumental purpose.”9 Research concluded that, while
psychopathy was the best predictor among the “dark triad” of trolling behaviour targeting
popular individuals,10 sadism as the fourth personality trait showed the strongest
relationship with the enjoyment of trolling behaviour; thus, “online trolls are prototypical
everyday sadists.”11 Narcissism, on the other hand, has an impact on becoming a troll, but
narcissistic people are preoccupied with self-promotion first and foremost.

The trolling environment
Research focusing on dark personality traits names anonymity as the main reason why
computer-mediated communication (CMC) factors facilitate destructive behaviour on the
internet. The lack of public oversight and/or control enables trolls to engage in socially
unacceptable behaviour that would violate social norms (tolerance) or would collide with the
perpetrators’ own offline contexts under normal circumstances.12 The RAT model of
criminality, along with other analyses, found that everyday situations enable people to
troll others in the absence of guardian people or technologies capable of deterring or
limiting trolling. Moreover, Coles and West showed that internet users or members of online
communities “do not treat each other as being anonymous – even when posters’ real names and
identities are unknown,”13 whilst trolling occurs more and more on less anonymous sites,
which points to the fact that trolling can only be explained by including additional
environmental factors. Maja Golf-Papez and Ekant Veer mention media discourses, membership
in social groups, monetary rewards and media literacy as such contextual elements to be
taken into consideration.14 From a more general perspective, culture and ideology can also
play an important part in trolling behaviour and in its interpretation. Individuals’
antisocial behaviour can be explained as a product of ideologically disruptive actions, of
minorities’ or individuals’ specific identity constructions, or of status- or fame-enhancing
motivations, all aimed at challenging mainstream cultural norms and values. Thus, trolling,
or behaviours cast as “trolling” by the majority, can also be interpreted or explained
through the status-bound actions of individuals or groups within a cultural setting.15

Types of trolls and the difference between trolls and trolling
Thus, it is important to differentiate between trolls and trolling behaviour, since some
trolling behaviour can be counter-cultural in nature or have positive intent and impact, and
so needs to be treated differently from malicious or state-sponsored trolling activity.
Sanfilippo, Fichman and Yang created two typologies along seven behavioural dimensions
(whether the behaviours (1) communicated serious opinions; (2) were representative of public
opinions; (3) were pseudo-sincere; (4) were intentional; (5) were provocative; (6) were
repeated; or (7) were satirical) to distinguish between seemingly similar trolling-like
behaviours and between trolling and being an actual troll.16

Based on their multidimensional analysis they defined four distinct behaviours of trolling
and non-trolling:

“1) Serious trolling: intentionally provocative and pseudo-sincere behaviors that reflect
serious opinions and values.
2) Serious non-trolling: sincere behaviors intentionally reflecting public opinion and can
be interpreted prima facie.
3) Humorous trolling: intentionally provocative and repetitive behaviors motivated by
personal or social enjoyment or entertainment. Humorous trolls are more effective when
pushing the boundaries of social acceptability, rather than reflecting extreme opinions
(Goel and Nolan 2007; Kirman, Lineham, and Lawson 2012).
4) Humorous non-trolling: repetitive, satirical, and often provocative, yet distinct from
trolling behaviors in that it is not pseudo-sincere.”17

Relationships or correlations can be recognized between the four behavioural types and the
seven dimensions, as displayed in Figure 1; a rough illustrative encoding of these categories
is sketched after the table.

#  | Premise                                                                               | Conclusion                                                             | Relationship
1  | Provocative, satire                                                                   | Repetition                                                             | Bidirectional
2  | Intentionality, provocative                                                           | Serious opinions, representative of public opinions, repetition       | Directional
3  | Pseudo-sincerity                                                                      | Serious opinions, intentionality, satire                               | Directional
4  | Repetition, intentionality, provocative, satire                                       | Serious opinions, representative of public opinions, pseudo-sincerity | Directional
5  | Representative of public opinion                                                      | Serious opinions, provocative                                          | Bidirectional
6  | Serious opinions, satire                                                              | Pseudo-sincerity, intentionality                                       | Directional
7  | Serious opinions, intentionality                                                      | Pseudo-sincerity, satire                                               | Directional
8  | Serious opinions, repetition, provocative, satire                                     | Representative of public opinion, pseudo-sincerity, intentionality    | Directional
9  | Serious opinions, repetition, pseudo-sincerity, intentionality, provocative, satire   | Representative of public opinion                                       | Directional
10 | Serious opinions, representative of public opinions, pseudo-sincerity, intentionality | Repetition                                                             | Directional
11 | Serious opinions, representative of public opinions, repetition, provocative, satire  | Pseudo-sincerity, intentionality                                       | Directional

              Figure 1. Multidimensional relationships between trolling behavioural elements
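The four categories above can be read as particular combinations of the seven dimensions. As a rough illustration only, the Python sketch below encodes one possible mapping from dimension flags to categories; the field names and the decision rules are assumptions drawn from the verbal definitions quoted above, not Sanfilippo, Fichman and Yang’s actual coding scheme.

```python
from dataclasses import dataclass

@dataclass
class Behaviour:
    """Seven behavioural dimensions used by Sanfilippo, Fichman and Yang."""
    serious_opinions: bool
    representative: bool      # representative of public opinions
    pseudo_sincere: bool
    intentional: bool
    provocative: bool
    repeated: bool
    satirical: bool

def classify(b: Behaviour) -> str:
    """Heuristic mapping of dimension flags onto the four categories.

    This is an illustrative reading of the verbal definitions, not the
    authors' own classification rule.
    """
    if b.intentional and b.provocative and b.pseudo_sincere and b.serious_opinions:
        return "serious trolling"
    if b.intentional and b.serious_opinions and b.representative and not b.pseudo_sincere:
        return "serious non-trolling"
    if b.intentional and b.provocative and b.repeated and not b.serious_opinions:
        return "humorous trolling"
    if b.repeated and b.satirical and not b.pseudo_sincere:
        return "humorous non-trolling"
    return "unclassified"

# Example: an intentional, provocative, pseudo-sincere post expressing serious views.
print(classify(Behaviour(True, False, True, True, True, False, False)))  # serious trolling
```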

According to the same research, trolling itself can be categorized into four sub-types based
on harmful intentions and antisocial impact on society (see Figure 2), to distinguish
occasional trolling from a troll identity and from standard negative behaviour.

                     Figure 2. Typology of trolling by humour and social inclusivity18

Thus, it is important to highlight the opportunity model of RAT theory, based on the
virtual place of opportunity and the interactions between perpetrators, victims and
guardians, which can predict all four types of antisocial trolling or troll behaviour. At
the same time, knowledge of the actual community and local context where the trolling is
happening is essential for discriminating between trolling and non-trolling behaviour. In
other words: the same opportunities can result in strikingly different trolling behaviours,
or in behaviours that can be judged as trolling only against local, national or online
communal norms and values (cultures). Essentially, non-personal metadata can successfully
predict or identify antisocial troll behaviour only to a certain degree without taking into
account (individual or community) culture, context and content.

A general profile of trolls and victims
Siying Guo identified 15 predictors of cyberbullying perpetration and victimization based
on a meta-analysis of 77 studies; a simple checklist reading of these markers is sketched
after the two lists below. “The typical cyberbully is likely to
(a) be an older male;
(b) be involved in prior offline bullying behaviors;
(c) exhibit noticeable behavioral problems;
(d) perceive aggression as appropriate, profitable, or even morally justified;
(e) engage in frequent online activities;
(f) experience offline victimization;
(g) report a variety of internalizing symptoms;
(h) have antisocial personality (e.g., narcissism, impulsivity, callous unemotional traits, or
other psychopathic traits);
(i) lack moral values, remorse, or empathy toward others;
(j) come from a family with high parental conflict or low parental supervision;
(k) be in a negative school climate; and
(l) have poor peer relationships, with a susceptibility to deviant or violent peers.” 19
Additionally, “the typical cybervictim is one who is likely to
(a) be female;
(b) experience offline victimization;
(c) demonstrate high levels of depression, helplessness, stress, or loneliness;
(d) engage in frequent Internet activities;
(e) bully others offline;
(f) be involved in a series of problem behaviors;
(g) possess antisocial personality traits;
(h) have low levels of self-satisfaction, self-concept, or self-esteem;
(i) possess relatively positive beliefs or attitudes about aggression;
(j) live in a family with a negative environment;
(k) have less school commitment; and
(l) be noticeably rejected and isolated by peers.”20
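Guo’s predictors are descriptive risk markers from a meta-analysis rather than a detection algorithm, but they can be read as a simple checklist. The sketch below merely counts how many markers from each list are present in a profile; the marker names and the idea of summing them are illustrative assumptions, not a validated risk score.

```python
# Illustrative checklist based on Guo's meta-analytic predictors.
# Counting matching markers is an assumption for illustration, not a validated score.

PERPETRATION_MARKERS = {
    "older_male", "prior_offline_bullying", "behavioural_problems",
    "approves_of_aggression", "frequent_online_activity", "offline_victimization",
    "internalizing_symptoms", "antisocial_personality", "low_empathy",
    "family_conflict_or_low_supervision", "negative_school_climate", "poor_peer_relationships",
}

VICTIMIZATION_MARKERS = {
    "female", "offline_victimization", "depression_or_loneliness",
    "frequent_online_activity", "bullies_others_offline", "problem_behaviours",
    "antisocial_personality", "low_self_esteem", "approves_of_aggression",
    "negative_family_environment", "low_school_commitment", "rejected_by_peers",
}

def marker_counts(profile: set[str]) -> tuple[int, int]:
    """Return (perpetration markers present, victimization markers present)."""
    return (len(profile & PERPETRATION_MARKERS), len(profile & VICTIMIZATION_MARKERS))

example = {"female", "offline_victimization", "frequent_online_activity", "low_self_esteem"}
print(marker_counts(example))  # (2, 4)
```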

A general model of trolling
Golf-Papez and Veer proposed a general model of micro- and macro-factors based on RAT
theory to describe the environment favourable to trolling behaviour, as seen in Figure 3.

Figure 3. The opportunity model of everyday trolling based on three main factors

The model postulates that trolling occurs in online environments where motivated trolls and
reactive targets are present without the oversight of capable guardians controlling the
situation and interactions. This means that a particular social media platform can host a
number of trolls and possible victims, yet trolling activity can still be absent thanks to
proper supervision, exercised through, for example, comment vetting, reporting or flagging
tools, or moderators.
On the micro level, individual structural or psychological traits, such as one’s age, sex,
emotional state or group membership, constitute the opportunity for trolling; macro factors
are those features of the technical platform or online community that structurally allow or
limit certain behaviours on the part of victims, trolls and guardian actors (persons or
algorithms). Both perpetrators and victims can actively contribute to the establishment of a
troll-friendly environment: possible victims may visit places without proper defensive
mechanisms against trolling or keep poor privacy settings, whereas motivated trolls can
search for online communities that tolerate trolls or incapacitate programmed or moderators’
defences against trolling.21
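The opportunity model reduces to a conjunction of three conditions. The minimal sketch below makes that explicit; the boolean inputs stand in for whatever platform signals (motivated accounts present, reactive targets present, guardianship such as moderation active) an analyst would actually measure, and are assumptions rather than defined metrics.

```python
def trolling_opportunity(motivated_troll_present: bool,
                         reactive_target_present: bool,
                         capable_guardian_present: bool) -> bool:
    """RAT-style opportunity: trolling is likely where a motivated troll meets
    a reactive target in the absence of a capable guardian (moderation,
    flagging tools, vetting). Inputs are placeholder signals."""
    return motivated_troll_present and reactive_target_present and not capable_guardian_present

# A platform with active moderation removes the opportunity even if trolls and targets are present.
print(trolling_opportunity(True, True, True))   # False
print(trolling_opportunity(True, True, False))  # True
```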

Group trolling and state-sponsored troll activity

Trolling as a group behaviour can be understood as a “civic activity” of interested people
who bully others online, and as a state-sponsored activity carried out by foreign
adversaries to achieve strategic military or intelligence goals or to influence foreign
audiences in line with specific foreign policy goals. Russia has been notorious for such
“active measures,” which utilized both intelligence operations and public communication
during the US, French, German and Italian elections, among others.

However, as pointed out earlier, state-led influence operations are much more widespread,
extending to democratic governments and parties as well, and they rely on leading ICT
(Information and Communication Technologies) mass-manipulation techniques acquired from
private PR companies that curate audiences and brands. Therefore, the NATO Strategic
Communications Centre of Excellence distinguishes between “classic trolls” who “act in
their own interests solely with the aim of sowing disagreement and inciting conflict in the
online environment” and “hybrid trolls” who “are employed as a tool of information
warfare.”22

Trolling as a group behaviour
Online communities dedicated to trolling express unique features compared to regular social
groups formed around other activities. Nekmat and Lee highlighted that trolling groups on
Facebook are similar to other, prosocial groups in displaying mostly emotional support and
fewer flaming messages as in-group behaviour.23 Moreover, the trolling community was found
to share information mostly in the form of cognitive statements (45% of the messages
analysed), while the trolling activity itself did not occur as a form of ill-intended,
deceiving or disrupting communication among members; instead, trolling was perceived as a
casual activity alongside the personal expression of opinions and arguments on a certain
topic. One significant difference between trolling and prosocial or normal communities was
that participants in a trolling community display more individualized and less
community-focused information-exchange behaviours. Altogether, a community of trolls proved
to be more collaborative than expected, especially when hostile behaviours or attacks were
directed at a common entity.
The general mechanism behind trolling can be identified as a process of “baiting, biting
and flaming.”24 The troll or trolls make the first move by posting a provocative message as
“bait.” Such an opening action “may develop into a chain of mutually antagonistic
responses (‘flaming’) which frequently escalate in intensity to become a ‘flame war’.”25
One of the primary tools utilized by trolls is the “face attack,” which can target one’s
quality face, social identity face or relational face. Data supported only quality face and
relational face as significant targets, since social identity face (i.e. a person’s
membership of a social, ethnic, professional etc. group) was not found to be a significant
target.26
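The bait-bite-flame mechanism suggests a simple structural signal: a provocative opening post followed by a sustained run of mutually antagonistic replies between at least two participants. The sketch below flags such threads; the keyword-based is_hostile scorer and the threshold of three consecutive hostile replies are placeholder assumptions, and a real system would substitute a proper toxicity classifier.

```python
from dataclasses import dataclass

HOSTILE_WORDS = {"idiot", "liar", "traitor", "pathetic"}  # placeholder lexicon

@dataclass
class Post:
    author: str
    text: str

def is_hostile(text: str) -> bool:
    """Placeholder hostility scorer; a real system would use a toxicity model."""
    return any(w in text.lower() for w in HOSTILE_WORDS)

def looks_like_flame_war(thread: list[Post], min_exchanges: int = 3) -> bool:
    """Flag a thread whose opening post is hostile ('bait') and which is followed
    by a run of mutually hostile replies between at least two authors ('flaming')."""
    if not thread or not is_hostile(thread[0].text):
        return False
    run, authors = 0, set()
    for post in thread[1:]:
        if is_hostile(post.text):
            run += 1
            authors.add(post.author)
        else:
            run, authors = 0, set()
        if run >= min_exchanges and len(authors) >= 2:
            return True
    return False

thread = [Post("troll", "only an idiot believes this"),
          Post("victim", "you are the liar here"),
          Post("troll", "pathetic answer"),
          Post("victim", "traitor talk, as usual")]
print(looks_like_flame_war(thread))  # True
```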

State-sponsored manipulation and trolling
The Oxford Internet Institute published an extensive report on “organized social media
manipulation” and trolling, covering 28 countries since 2010, as seen in the table below.27
Their results reveal that while every authoritarian regime targets its own population with
social media campaigns, only a few engage foreign publics. Moreover, democracies lead in
organized online manipulation techniques, owing to high-profile elections that involve new
innovations. According to the report, almost every government employs “cyber troops” to
“actively engage with users by commenting on posts that are shared on social media
platforms:”
“Some cyber troops focus on positive messages that reinforce or support the government’s
position or political ideology. Israel, for example, has a strict policy of engaging in
positive interactions with individuals who hold positions that are critical of the
government (Stern-Hoffman, 2013). Negative interactions involve verbal abuse, harassment
and so-called “trolling” against social media users who express criticism of the
government. In many countries, cyber troops engage in these negative interactions with
political dissidents.” 28

Country        | Messaging and valence (social media comments) | Individual targeting | Fake accounts            | Government websites, accounts or applications | Content creation
Argentina      | +/-                                           | Evidence found       | Automated                | ..                                            | ..
Australia      | +/-                                           | ..                   | Automated                | ..                                            | ..
Azerbaijan     | +/-/n                                         | Evidence found       | Automated                | ..                                            | ..
Bahrain        | -                                             | Evidence found       | Automated, Human         | ..                                            | ..
Brazil         | +/n                                           | Evidence found       | Automated, Human, Cyborg | ..                                            | Evidence found
China          | +/-/n                                         | ..                   | Human                    | ..                                            | Evidence found
Czech Republic | n                                             | ..                   | ..                       | ..                                            | ..
Ecuador        | +/-                                           | Evidence found       | Automated, Human         | Evidence found                                | ..
Germany        | +/-                                           | Evidence found       | Automated                | Evidence found                                | Evidence found
India          | +/-                                           | ..                   | ..                       | ..                                            | Evidence found
Iran           | +/n                                           | ..                   | Automated                | ..                                            | Evidence found
Israel         | +                                             | ..                   | ..                       | Evidence found                                | Evidence found
Mexico         | +/-                                           | Evidence found       | Automated, Human, Cyborg | ..                                            | Evidence found
North Korea    | +/-                                           | ..                   | Human                    | ..                                            | ..
Poland         | -                                             | Evidence found       | Human                    | ..                                            | ..
Philippines    | +/-                                           | Evidence found       | Automated                | ..                                            | ..
Russia         | +/-/n                                         | Evidence found       | Automated, Human         | ..                                            | Evidence found
Saudi Arabia   | +/n                                           | ..                   | Automated                | ..                                            | ..
Serbia         | +/-                                           | ..                   | Human                    | ..                                            | ..
South Korea    | +/-                                           | Evidence found       | Automated, Human         | ..                                            | ..
Syria          | +                                             | Evidence found       | Automated                | ..                                            | ..
Taiwan         | +/-/n                                         | Evidence found       | Cyborg, Human            | Evidence found                                | Evidence found
Turkey         | +/-                                           | Evidence found       | Automated, Human         | Evidence found                                |
United Kingdom |                                               | Evidence found       | Human                    | Evidence found                                | Evidence found
Ukraine        | +/-                                           | ..                   | Human                    | Evidence found                                |
United States  | +/-/n                                         | ..                   | Automated, Human, Cyborg | ..                                            | Evidence found
Venezuela      | +                                             | ..                   | Automated, Human         | Evidence found                                |
Vietnam        | +                                             | ..                   | Human                    | ..                                            | Evidence found

                Table 1. Strategies, tools and techniques for social media manipulation

The design or process of state-sponsored manipulation entails (1) specific actors tasked
with engaging domestic or foreign audiences, (2) the creation of a technical ecosystem
(consisting of different platforms and messaging applications), (3) content, and (4)
specific forms of communication behaviour, all as part of an overall strategic approach.

Actors
Government
“Government-based cyber troops are public servants tasked with influencing public
opinion. These individuals are directly employed by the state as civil servants, and often
form a small part of a larger government administration. Within the government, cyber
troops can work within a government ministry, such as in Vietnam, in Hanoi Propaganda
and Education Department (Pham, 2013), or in Venezuela, in the Communication Ministry
(VOA News, 2016). In the United Kingdom, cyber troops can be found across a variety of
government ministries and functions, including the military (77th Brigade) and electronic
communications (GCHQ).”
Politicians and parties
“Political parties or candidates often use social media as part of a broader campaign
strategy. Here we are interested in political parties or candidates that use social media to

manipulate public opinion during a campaign, either by purposefully spreading fake news
or disinformation, or by trolling or targeting any support for the opposition party. For
example, in the Philippines, many of the so-called “keyboard trolls” hired to spread
propaganda for presidential candidate Duterte during the election continue to spread and
amplify messages in support of his policies now he’s in power.”
Private contractors
“In some cases, cyber troops are private contractors hired by the government. Private
contractors are usually temporary, and are assigned to help with a particular mission or
cause. In Russia, the Internet Research Agency, a private company, is known to coordinate
some of the Kremlin’s social media campaigns.”
Volunteers
“Some cyber troops are volunteer groups that actively work to spread political messages
on social media. They are not just people who believe in the message and share their
ideals on social media. Instead, volunteers are individuals who actively collaborate with
government partners to spread political ideology or pro-government messages. In many
cases, volunteer groups are made up solely of youth advocacy organizations, such as IRELI
in Azerbaijan (Geybulla, 2016) or Nashi in Russia (Elder, 2012).”
Paid citizens
“Some cyber troops are citizens who are actively recruited by the government and are paid
or remunerated in some way for their work. They are not official government employees
working in public service, nor are they employees of a company contracted to work on a
social media strategy. They are also not volunteers, because they are paid for their time
and efforts in supporting a cyber troop campaign. Normally, these paid citizens are
recruited because they hold a prominent position in society or online. In India, for
example, citizens are actively recruited by cyber troop teams in order to help propagate
political ideologies and messages (Kohlil, 2013). Since these citizens are not officially
affiliated with the government or a political party, their “independent voice” can be used
to help disseminate messages from a seemingly neutral perspective.”29

Strategy behind state-led influence operations
The overall strategy behind state-led manipulation and its motivations can be broken down
into (1) organizational behaviour, such as capacity building and content creation, and (2)
the execution of influence operations, i.e. specific patterns of trolling behaviour and
actions.

Capacity building
In organizational terms, the establishment of “cyber troops” usually entails (1) a clear
hierarchy and reporting structure; (2) content review by superiors; (3) strong coordination
across agencies or teams; (4) weak coordination across agencies or teams; and (5) liminal
teams. “In some cases, teams are highly structured with clearly assigned duties and
a reporting hierarchy, much like the management of a company or typical government
bureaucracy. Tasks are often delegated on a daily basis. In Russia and China, for example,
cyber troops are often given a list of opinions or topics that are supposed to be discussed
on a daily basis.”30 After the organizational structure is laid, cyber troops begin capacity-
building activities. “These include: (1) training staff to improve skills and abilities
associated with producing and disseminating propaganda; (2) providing rewards or
incentives for high-performing individuals; and (3) investing in research and development
projects. When it comes to training staff, governments will offer classes, tutorials or even
summer camps to help prepare cyber troops for engaging with users on social media. In
Russia, English teachers are hired to teach proper grammar for when they communicate
with Western audiences (Seddon, 2014).” Other training measures focus on “politology”,
which aims to outline the Russian perspective on current events (Chen, 2015). In
Azerbaijan, young people are provided with blogging and social media training to help
make their microblogging websites more effective at reaching desired audiences. Reward
systems are sometimes developed to encourage cyber troops to disseminate more
messages. For example, in Israel, the government provides students with scholarships for
their work on pro-Israel social media campaigns (Stern-Hoffman, 2013). It is important to
note that training and reward programs often occur together. In North Korea, for example,
young computer experts are trained by the government, and top performers are selected
to join the military university (Firn, 2013). Finally, some cyber troops in some democracies
are investing in research and development in areas such as “network effects” and how
messages can spread and amplify across social media. For example, in the United States, in
2010, DARPA funded a USD8.9 million study to see how social media could be used to
influence people’s behavior by tracking how they responded to content online (Quinn and
Ball, 2014).”31

Content creation
Content creation means two things at the same time: the establishment of the platforms or
media, such as Twitter accounts, Facebook pages, homepages, GONGOs and paramilitary groups,
and the creation of specific manipulative content, such as fake news, doctored videos and
conspiracy theories, to be spread in the aforementioned “cyber ecosystem.”
“Some countries run their own government-sponsored accounts, websites and applications
designed to spread political propaganda. These accounts and the content that comes out
of them are clearly marked as government operated. In the United Kingdom, for example,
the 77th Brigade maintains a small presence on Facebook and Twitter under its own
name.” In addition to official government accounts, many cyber troop teams run fake
accounts to mask their identity and interests. (…) This phenomenon has sometimes been
referred to as “astroturfing”, whereby the identity of a sponsor or organization is made to
appear as grassroots activism (Howard, 2003). In many cases, these fake accounts are
“bots”—or bits of code designed to interact with and mimic human users. According to
media reports, bots have been deployed by government actors in Argentina (Rueda, 2012),
Azerbaijan (Geybulla, 2016), Iran (BBC News, 2016), Mexico (O’Carrol, 2017), the
Philippines (Williams S, 2017), Russia (Duncan, 2016), Saudi Arabia (Freedom House, 2013),
South Korea (Sang-Hun, 2013), Syria (York, 2011), Turkey (Shearlaw, 2016) and Venezuela
(VOA News, 2015).”32
Despite elaborate campaign techniques, cyber troops’ communication often falls back on
traditional communication practices, such as those listed by Carly Nyst and Nicholas Monaco:
“Accusations of collusion with foreign intelligence agencies. Martha Roldós was accused
of CIA affiliation, while Azeri journalist Arzu Geybulla was called an Armenian spy. Bahraini
activist Maryam Al-Khawaja and her family were labeled as terrorists and Iranian agents by
government spokesmen, and Selin Girit was called an English agent by Turkish trolls.
Accusations of treason. Venezuelan trolls labeled businessman Lorenzo Mendoza a traitor
who was leading an economic war against the country. Government-backed bloggers in the
Philippines attempted to trend #ArrestMariaRessa on Twitter after Rappler published a
transcript of the first phone conversation between US president Donald Trump and
Philippines president Rodrigo Duterte (Posetti 2017). The campaign mirrored that
previously waged against Senator Leila de Lima, recognized by Amnesty International as a
“human rights defender under threat,” who was ultimately arrested after an online
campaign urging #ArrestLeiladeLima (Etter 2017).
Use of violent hate speech as a means of overwhelming and intimidating targets. Every
female target of government-backed harassment receives rape threats and is subjected to
sexist and misogynistic language. Turkish journalist Ceyda Karan received explicit rape
threats. Filipino journalist Maria Ressa received, on average, ninety hate messages an hour
during one attack, including a call for her to be raped repeatedly until she died.
Creation of elaborate cartoons and memes. Those used in attacks on Maryam Al-Khawaja
and Brian Dooley in Bahrain are shown in Figure 1. This is a pattern seen in nearly all cases
and across all countries.”33

State-sponsored actions and patterns
One of the most frequently used strategies involves the “individual targeting” of a person
or a group on social media to influence their behaviour, either by “positively” feeding them
or their followers carefully crafted messages and narratives founded on certain values or
beliefs, or by online harassment.34
The Institute for the Future mentions four basic types of “digital harassment” actions,
categorized according to the direct or indirect role the state plays in each:
   1. State-executed actions: cyber troops execute strategies designed by the
      government to disseminate propaganda, isolate dissenting views, and drown out or
      remove anti-government sentiment.
   2. State-directed or coordinated actions: these campaigns involve the use of
      coordination channels to disseminate signals and messaging to committed
      supporters and volunteers, and to outsource harassment campaigns to private
      actors. In Venezuela, for example, the Venezuelan Ministry of Communications and
      Information and its dependent office the Sistema Integrado Bolivariano de
      Generación de Contenido en Venezuela (SIBGECOV, the Bolivarian Integrated System
      of Content Generation in Venezuela) deployed Telegram channels as a central
      messaging service, so the Chavez en Red Telegram channel directed supporters to
      troll against Lorenzo Mendoza, CEO of Empresas Polar.
   3. State-incited or -fuelled actions: such methods rely on the manipulation of
      internet users’ psychology to ignite and sustain a campaign and on the autovirality
      of online campaigns. Governments use high-profile proxies and other government
      stand-ins to signal state support for a particular attack. For example, Breitbart
      executed such attacks against political enemies directed by former White House
      chief strategist, Steve Bannon.
   4. State-leveraged or endorsed actions: state-affiliated or state-endorsed actors engage
      in online bullying that is used by the state to legitimize further trolling actions
      based on manipulated or fake “public opinion.”35
Stratcom CoE in Riga identified five distinct types of hybrid trolls that perform “individual
targeting” actions against groups (a rough heuristic reading of these types is sketched after
the list):
-   Blame-the-US conspiracy trolls disseminate information based on conspiracy
    theories, blaming the US for creating international turmoil. Conspiracy trolls
    write long texts with the intention of presenting logical argumentation and
    unveiling the truth for readers. However, logic breaks down within these texts, and
    the end result is always the same – it is the fault of the US. Comment length is the
    first sign that this is a conspiracy troll.
-   Bikini trolls are commenters who post rather naïve, mostly anti-US comments,
    typically accompanied by a profile picture of an attractive young girl. The content
    is simple – it can contain a question and/or a suggestion – “could it be that only
    Russia is bad? The world doesn’t work like that – maybe we should look…” which is
    then followed by a “blame the US” motive. The bikini troll, despite the primitive
    message, in fact affects a large part of the internet community, as it is often not
    recognised as a troll.
-   Aggressive trolls, similarly to classic trolls, post emotion-laden, highly opinionated
    comments intended to stir up emotional responses from general users. Classic trolls
    are usually highly responsive, as they are interested in prolonging verbal conflict,
    whereas the responsiveness of this hybrid troll is very low.
-   Wikipedia trolls tend to post factual information from Wikipedia (or other
    authoritative information sources such as history blogs). The posted information is
    true per se; however, it is used in a context which leads the audience to false
    conclusions and is thus unlikely to be discredited, even by more experienced users.
-   Attachment trolls post very short messages with links to other news articles or
    videos containing value-laden information (for example, from Russian news
    platforms, TV news, eye-witness videos on YouTube, etc.).36
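The five types are defined largely by surface features (comment length, responsiveness, link-only posts, out-of-context citations, naive suggestive questions) that can be approximated from a single comment and some account metadata. The heuristic below is only a sketch of that reading; the thresholds, keyword patterns and the has_replies_ratio input are assumptions, not StratCom CoE criteria.

```python
import re

def guess_hybrid_troll_type(text: str, has_replies_ratio: float, links: list[str]) -> str:
    """Very rough mapping of a comment onto the StratCom CoE hybrid-troll types.
    Thresholds and keyword lists are illustrative assumptions."""
    words = text.split()
    if links and len(words) <= 15:
        return "attachment troll"                 # short message, mostly a link
    if "wikipedia.org" in " ".join(links):
        return "wikipedia troll"                  # factual source used out of context
    if len(words) > 200 and re.search(r"\b(us|cia|washington)\b", text.lower()):
        return "blame-the-US conspiracy troll"    # long 'logical' anti-US texts
    if len(words) <= 40 and re.search(r"maybe|could it be", text.lower()):
        return "bikini troll"                     # naive, suggestive one-liners
    if has_replies_ratio < 0.1 and text.isupper():
        return "aggressive troll"                 # emotional posts, little follow-up
    return "unclassified"

print(guess_hybrid_troll_type("Could it be that only Russia is bad? Maybe we should look at Washington...",
                              has_replies_ratio=0.05, links=[]))  # bikini troll
```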
Ultimately, these types of trolling behaviours boil down to intensively reposted
messages, repeated messages posted from different IP addresses and/or nicknames,
and republished information and links. Bots are often used to flood social media
networks with spam and fake news. They can also amplify marginal voices and ideas by
inflating the number of likes, shares and retweets they receive, creating an artificial
sense of popularity, momentum or relevance.37 However, state-sponsored mass-manipulation
techniques and strategies have evolved significantly over time.
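The first of these footprints, identical text republished by many nominally unrelated accounts, nicknames or IP addresses, can be caught with a simple grouping step. The sketch below groups normalised message texts and flags those pushed by suspiciously many distinct accounts; the record fields and the threshold of five accounts are assumptions for illustration.

```python
from collections import defaultdict

def flag_coordinated_reposts(messages: list[dict], min_accounts: int = 5) -> list[str]:
    """Group messages by normalised text and flag texts pushed by many distinct
    accounts/IPs. `messages` items are assumed to look like
    {"text": ..., "account": ..., "ip": ...}; the threshold is illustrative."""
    by_text = defaultdict(set)
    for m in messages:
        normalised = " ".join(m["text"].lower().split())
        by_text[normalised].add((m["account"], m.get("ip")))
    return [text for text, senders in by_text.items()
            if len({acc for acc, _ in senders}) >= min_accounts]

sample = [{"text": "Ukraine cut the water!", "account": f"user{i}", "ip": f"10.0.0.{i}"}
          for i in range(6)]
print(flag_coordinated_reposts(sample))  # ['ukraine cut the water!']
```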
In the beginning, for example, Russian troll activity could be characterized by poor
language skills and by flooding social media with easily detectable botnet activity. Later,
Russian methods became more refined: they used foreign languages with greater proficiency
and combined human CMC with networks of bots. A study of Russian separatist bot networks
revealed that groups of bots (see the figure below) were managed by groups of brokers who
disseminated information on Twitter related to the 2014 Crimean water crisis, accusing
Ukraine of cutting the water supply to the Crimean Peninsula.38

Figure 1. The real-person network is connected to the brokers who coordinate the dissemination
of propaganda through the bots in their respective syndicates

This strategy, summarized by Samer Al-khateeb and Nitin Agarwal, combines human and botnet
resources and utilizes “thread-jacking” (the change of topic in a “thread” of discussion
in an open forum) and “hashtag-latching” (strategically associating unrelated but popular
or trending hashtags with content in order to target a broader, or in some cases a very
specific, audience). The actual mechanism of the attack was unveiled by identifying the
network structure via the four basic types of communication relations (follows, mentions,
replies, tweets) established between the individual accounts taking part in the campaign.
The researchers also took notice of other unusual behaviours (a small detection sketch based
on these indicators follows the list below):

   !   “Many tweets are identical, i.e., different Twitter users posted the same tweets.
       Note that these are not retweets.
   !   The frequency of the tweets was unusually high, i.e., a large number of tweets
       were posted in a very short duration – a behaviour that is humanly impossible.
   !   All tweets contain ‘short’ links, pointing to the same article on a specific website.
   !   All the tweets are bracketed within a pair of hashtags, i.e., there is a beginning and
       an end hashtag for every tweet.
   !   These hashtags are not related to the tweet content. This indicates the presence of
       “misdirection” and “smoke screening” (Abokhodair et al., 2015 pp. 839–851)
       strategies. More specifically, the hashtags correspond to the names of cities, states,
       and countries of the world, completely unrelated to the content of the tweet as
       well as the webpage pointed to by the short link.
   !   Extremely precise repetitive patterns and correlations were observed, e.g., users
       with Arabic names did not provide location information and users with non-Arabic
       names provided locations in the Arab/Middle-East regions.”39
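Several of the indicators quoted above translate almost directly into rule-based checks. The
following sketch illustrates three of them (identical texts, humanly implausible posting
rates, and hashtag "bracketing"); the dictionary schema and the thresholds are assumptions
made for demonstration, not values taken from the cited research.

    from collections import Counter
    from datetime import timedelta

    def flag_indicators(tweets, burst_window=timedelta(minutes=1), burst_limit=10):
        """Crude implementations of three of the indicators listed above.

        tweets: list of dicts with keys 'account', 'time' (datetime) and 'text'
                (an assumed, illustrative schema, not a platform API).
        Returns a dict mapping each indicator name to the items it flags.
        """
        flags = {}

        # 1. Identical tweets posted more than once (note: these are not retweets).
        text_counts = Counter(t["text"] for t in tweets)
        flags["identical_texts"] = [text for text, n in text_counts.items() if n > 1]

        # 2. Humanly implausible posting rate: burst_limit tweets inside burst_window.
        by_account = {}
        for t in tweets:
            by_account.setdefault(t["account"], []).append(t["time"])
        bursting = []
        for account, times in by_account.items():
            times.sort()
            for i in range(len(times) - burst_limit + 1):
                if times[i + burst_limit - 1] - times[i] <= burst_window:
                    bursting.append(account)
                    break
        flags["bursting_accounts"] = bursting

        # 3. Tweets 'bracketed' by a leading and a trailing hashtag.
        bracketed = []
        for t in tweets:
            words = t["text"].split()
            if len(words) >= 2 and words[0].startswith("#") and words[-1].startswith("#"):
                bracketed.append(t["text"])
        flags["bracketed_tweets"] = bracketed

        return flags

Checking whether the bracketing hashtags are topically unrelated to the tweet body would
require an additional semantic-similarity step and is left out of this sketch.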
As a consequence, the evolving nature of state-sponsored foreign influence campaigns now
prevents automated systems from easily detecting trolls. Three aspects of foreign attacks
need to be addressed for successful monitoring: (1) troll, bot or combined troll-and-bot
behaviour can change very quickly from one campaign to the next; (2) their behavioural
patterns can be very close to those of regular users;40 and (3) indicators of troll
activity still depend on the communication features of specific platforms, the actions
allowed between users, and the data those platforms release through their APIs.

How trolling affects voters’ behaviour
In general, trolls or cyber troops seek out current hot political topics or other typically
divisive social issues on social platforms in order to infiltrate online communities and
amplify existing divisions. They exploit the segmentation of online communities and the
conflicts between them to infiltrate all interested parties (for example, pro-Trump and
pro-Hillary groups at the same time) and use the echo-chamber effect to amplify in-group
sentiments around these issues, then start or escalate conflict between groups already
attacking each other in a heated political debate or campaign. Research has shown that
young voters' voting behaviour can be significantly and directly influenced by the
comments, tweets and remarks made by political actors on social media.41
These types of actions represent different and increasingly complex levels of engagement.
Firstly, they can increase polarisation around important social issues, such as the
#BlackLivesMatter movement, across the different sub-groups and echo chambers of the public
by amplifying certain trending topics, hashtags and messages within those segments. In the
case of the #BlackLivesMatter movement, "Right Trolls" behaved like "bread-and-butter
MAGA Americans, only all talking about politics all day long,"42 whereas "Left Trolls" often
adopted the personae of Black Lives Matter activists, typically expressing support for
Bernie Sanders and derision for Hillary Clinton, while "clearly trying to divide the
Democratic Party and lower voter turnout." Secondly, trolling accounts inject controversial
topics or actors (e.g., gender, GMOs, race, religion, or war) that may divide a community,
in order to increase group heterogeneity and the occurrence of trolling behaviour. This
attacks the social cohesion of a political group or an electorate and prepares the ground
for further disruptive disinformation pieces, for example about Hillary Clinton or Emmanuel
Macron, to be released later.43 Thirdly, a network of trolls or bots can flood social media
networks with spam and fake news on a large scale to amplify marginal voices and ideas by
inflating the number of likes, shares and retweets they receive, creating an artificial
sense of popularity, momentum or relevance.44 Fourthly, mass-scale "infiltration" can
re-route communication to the fake accounts, webpages and other infrastructural capacity
that cyber troops created prior to launching the campaign against a specific target group;
grassroots political discourse is thus taken over by trolls, or people start spontaneously
referencing content found in the attackers' artificial ecosystems. This happened during the
Brexit debate in 2016, when trolls switched from generalised disruptive tweeting to
retweeting each other in order to amplify content produced by other troll accounts.45
Finally, the most complex strategies are designed to conduct multiple campaigns across a
range of different platforms in order to harness the "network effect" of the modern media
space. For example, in the United States, in 2010, DARPA funded a USD 8.9 million study to
see how social media could be used to influence people's behaviour by tracking how they
responded to content online.46
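The fourth pattern, troll accounts switching to retweeting one another, leaves a measurable
trace: clusters of accounts whose retweet links are reciprocated. The sketch below is a
minimal illustration of that check with networkx, assuming retweet relations are available as
(retweeter, original_author) pairs; the minimum group size is an arbitrary illustrative value.

    import networkx as nx

    def mutual_retweet_groups(retweet_edges, min_size=5):
        """Find groups of accounts that retweet one another reciprocally.

        retweet_edges: iterable of (retweeter, original_author) pairs
                       (an assumed, illustrative schema).
        Keeps only reciprocated links (u retweets v and v retweets u), the hallmark of
        accounts boosting each other, and returns connected groups of at least min_size
        such accounts.
        """
        directed = nx.DiGraph()
        directed.add_edges_from(retweet_edges)
        mutual = nx.Graph()
        mutual.add_edges_from(
            (u, v) for u, v in directed.edges() if u != v and directed.has_edge(v, u)
        )
        return [group for group in nx.connected_components(mutual) if len(group) >= min_size]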

                              Internet Research Agency
As former employees described, “the Internet Research Agency had industrialized the art
of trolling. Management was obsessed with statistics — page views, number of posts, a
blog’s place on LiveJournal’s traffic charts — and team leaders compelled hard work
through a system of bonuses and fines. (…) trolls’ schedule followed two 12-hour days in
a row, followed by two days off. Over those two shifts she had to meet a quota of five
political posts, 10 nonpolitical posts and 150 to 200 comments on other workers’ posts.
The grueling schedule wore her down.”

Researchers identified five categories of IRA-associated Twitter handles, each with
unique patterns of behaviors: Right Troll, Left Troll, News Feed, Hashtag Gamer, and
Fearmonger. "With the exception of the Fearmonger category, handles were consistent and
did not switch between categories. Right Troll (617 handles, 663,740 tweets, M =
1075.75, SD = 2949.82). These handles broadcast nativist and right-leaning populist
messages. They employ common hashtags used by similar real Twitter users, including
#tcot, #ccot, and #RedNationRising. These handles’ themes were distinct from
mainstream Republicanism. They rarely broadcast traditionally important Republican
themes, such as taxes, abortion, and regulation, but often sent divisive messages about
mainstream and moderate Republicans. Left Troll (230 handles, 405,549 tweets, M =
1763.26, SD = 2468.32). These handles sent socially liberal messages, with an
overwhelming focus on cultural identity. They discussed gender and sexual identity (e.g.,
#LGBTQ) and religious identity (e.g., #MuslimBan), but primarily focused on racial
identity (e.g., #blacklivesmatter). Many handles, including @Blacktivists and
@BlackToLive, tweeted in a way that mimicked the Black Lives Matter movement, with
posts such as @Blacktivists, May 17, 2016, “Justice is a matter of skin color in America.
#BlackTwitter”. Hashtag Gamer (110 handles, 216,895 tweets, M = 1955.31, SD =
3176.10). These handles are dedicated almost entirely to playing hashtag games, a
popular word game played on Twitter. Fearmonger (122 handles, 10,161 tweets, M =
82.79, SD = 60.06). These accounts spread news of a fabricated crisis event— that
salmonella-contaminated turkeys were produced by Koch Foods, a U.S. poultry producer,
near the 2015 Thanksgiving holiday. The tweets described the poisoning of individuals
who purchased these turkeys from Walmart.”

The different types of account were used in different ways, and their efforts were
coordinated systematically, with resources allocated differently in response to changing
political circumstances or shifting goals. For example, there was a spike of activity by
Right Troll and Left Troll accounts before the publication of John Podesta's emails by
WikiLeaks. According to the authors, this activity can be characterised as "industrialized
political warfare."

When it comes to targeting, the effect on behaviour can be direct or indirect. Cyber troops
can target the target group directly to achieve an immediate change in its behaviour or in
the discourse of ongoing discussions, or they can indirectly target the operational
target's online environment. For example, opinion leaders, including prominent bloggers,
journalists and activists, are carefully selected and targeted with messages in order to
convince them that their followers hold certain beliefs and values. The desired behavioural
change of an actor is thus the result of manipulating the communicational environment
(public opinion, debates, perceptions of the electorate) on which that actor bases its
decisions.
In Russian military theory, the latter is defined as “reflexive control”:
“a means of conveying to a partner or an opponent specially prepared information to
incline him to voluntarily make the predetermined decision desired by the initiator of the
action.”47

Trolling metrics
Several studies have tried to identify behavioural patterns of trolling activity, especially
after the alleged Russian attempts to meddle in the 2016 U.S. presidential election and
the Brexit referendum, followed by the French and German elections in 2017. While earlier
attempts mainly focused on measuring the psychological traits or group dynamics of
everyday trolling, the latest studies attempt to identify and forecast hostile foreign
influence operations based on content, contextual and metadata analysis of networks of
trolls and bots.

The TAP measurement model
It is therefore highly important to construct a basic, general model of measurement in order
to understand the pros and cons of each measurement technique applied in different research
designs, and the ability of particular qualitative or quantitative approaches to detect,
identify and forecast online trolling (or any other) behaviour at the individual or group
level. The measurement model I created is called TAP, the triad model of
traits-accounts-platforms; its elements reflect the earlier theoretical definitions of
trolling, as seen below.

                   1. Figure The TAP heuristic or measurement model of trolling behaviour
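Purely as an illustration of how the three TAP dimensions might be operationalised, the sketch
below records them as three groups of indicators attached to one observed account; the field
names are assumptions chosen for demonstration, not a published schema.

    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class TAPObservation:
        """Illustrative record for one account along the traits-accounts-platforms triad."""
        # Traits: person-level, inherent indicators (demographics, motivations, linguistic style).
        traits: Dict[str, str] = field(default_factory=dict)
        # Account: behaviour observable on the account itself (posting volume, reply ratio, timing).
        account: Dict[str, float] = field(default_factory=dict)
        # Platform: what the hosting platform allows and exposes (permitted actions, API fields).
        platform: Dict[str, bool] = field(default_factory=dict)

    # Example:
    # TAPObservation(traits={"linguistic_style": "sarcastic"},
    #                account={"posts_per_day": 40.0},
    #                platform={"exposes_follower_graph": True})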

The “traits” aspect covers all the individual qualitative or quantitative indicators used to
measure the inherent, internal components of individual trolling activity, such as
personality traits, gender, age, name, motivations, offline behaviour, offline group
membership, online linguistic styles, ideologies, etc. The account activity can be understood as the more
