Running head: CONVERSATIONAL AGENT VOTING ADVICE APPLICATIONS

                  Conversational Agent Voting Advice Applications
  The effect of tone of voice in (CA)VAAs and political sophistication on political knowledge,
                          voting intention, and (CA)VAA evaluation.

Simone van Limpt
Snr 2013738

Master’s Thesis
Communication and Information Sciences (Business Communication and Digital Media)
School of Humanities and Digital Sciences
Tilburg University, Tilburg

Supervisors: Christine Liebrecht and Naomi Kamoen
Second Reader: Rianne Conijn

July 2020

                                             Abstract

During election times, more and more citizens consult Voting Advice Applications (VAAs) to

inform themselves about political party standpoints towards relevant political issues. VAAs have

been shown to increase political knowledge and interest in politics and enable people to make a

well-informed voting choice. However, research shows at the same time that VAA users

experience comprehension difficulties when filling out a VAA and make little effort to solve

these problems by searching for additional information. The current study developed and tested a

new type of VAA: a Conversational Agent Voting Advice Application (CAVAA). By using

technology from the field of conversational agents, this study investigated whether the additional

political information provided by CAVAAs enhances voters’ political knowledge, voting

intention, and tool evaluation. Besides, the study examined the role of tone of voice in CAVAAs

and the moderating role of political sophistication. An online experiment (N = 229) was

conducted that contained a 3 (VAA type: traditional VAA, CAVAA with formal tone of voice,

or CAVAA with conversational human voice) x 2 (political sophistication: high versus low)

between-subjects design. Results showed that CAVAAs (regardless of their tone of voice) lead to

more political knowledge and a better evaluation than traditional VAAs. However, no effect was

observed for voting intention. In addition, the study found no interaction effect for political

sophistication. Theoretical and practical implications for (CA)VAA developers are

discussed.

       Keywords: Voting Advice Applications (VAAs), conversational agents, comprehension

process, political sophistication, political knowledge, voting intention


                                            Table of contents

Abstract                                                        2
1. Introduction                                                 5
2. Theoretical framework                                        8
      2.1 Voting Advice Applications                            8
      2.2 The cognitive process of answering VAA statements     10
      2.3 Semantic and pragmatic comprehension problems         11
      2.4 Comprehension problems and response behavior          12
      2.5 Conversational Agent Voting Advice Applications       13
      2.6 Tone of voice                                         15
      2.7 Political sophistication                              17
      2.8 Conceptual model                                      20
3. Method                                                       20
      3.1 Design                                                21
      3.2 Material                                              21
           3.2.1 Political statements                           21
           3.2.2 Traditional VAA                                22
           3.2.3 CAVAAs                                         22
           3.2.4 Tone of voice in CAVAAs                        24
      3.3 Pre-test                                              26
           3.3.1 Pre-test procedure                             26
      3.4 Participants                                          27
      3.5 Measurements                                          28
           3.5.1 Political sophistication                       28
           3.5.2 Political knowledge                            29
           3.5.3 (CA)VAA evaluation                             30
           3.5.4 Voting intention                               30
           3.5.5 Factor analysis                                31
      3.6 Procedure                                             32


4. Results                                                     34
       4.1 Political knowledge                                 34
       4.2 Voting intention                                    37
       4.3 (CA)VAA evaluation                                  38
       4.4 Additional analyses political sophistication        39
5. Discussion                                                  40
       5.1 Conclusion                                          40
       5.2 Theoretical implications                            40
       5.3 Limitations and suggestions for future research     43
       5.4 Practical implications                              44
References                                                     46
Appendices
       Appendix A: CAVAA statements with explanations
       Appendix B: Informed consent and material pretest
       Appendix C: Results pretest
       Appendix D: Online experiment main study
       Appendix E: Chatbot conditions main study
       Appendix F: Coding scheme factual political knowledge
       Appendix G: Assumptions main study


                                          1. Introduction

       Nowadays, citizens experience difficulties in deciding which political party to vote for.

The increasing number of (new) political parties and candidates in multiparty systems has

resulted in electoral instability (Dalton, 2002). As a consequence, it has become difficult for

voters to stay informed about political issues and to make a well-informed voting decision

(Cedroni & Garzia, 2010). Moreover, there are indications that the political interest among Dutch

citizens is declining. This can be seen from, among other things, the decreasing trend in

turnout rates of the Dutch Parliamentary Elections. Whereas in 1967, 94.4% of Dutch citizens

turned out to vote, this percentage had decreased to 81.9% by 2017 (CBS, 2012; Kiesraad, 2017).

       Voting Advice Applications (VAAs) might provide a solution for citizens with less

political interest who want to inform themselves about party positions towards relevant political

issues without reading and comparing the election programs of every party. VAAs are

accessible, user-friendly, and time-efficient online survey tools that fulfill two main functions

(Garzia, 2010; Kamoen, Holleman, Krouwel, Van de Pol, & De Vreese, 2015). First, VAAs aim

to inform voters about political parties and their viewpoints (Garzia, 2010). Second, VAAs

provide voters with personalized voting advice based on a comparison between the political parties'

and the user's opinions on a set of political statements. Indeed, research shows that VAAs increase

political knowledge and interest and enable citizens to make a more conscious and well-informed

political-party choice (e.g., Kamoen et al., 2015).

       However, the process of making a well-informed voting decision has some drawbacks.

Kamoen and Holleman (2017) found that VAA users encounter comprehension problems in circa

one in five political statements. For example, the statement: To maintain social services, the OZB

may be increased raises semantic (What does OZB mean?), as well as pragmatic (What are the


current OZB rates?) questions. In general, traditional VAAs do not offer users additional

information. Instead of consulting external sources to solve these problems, VAA users often

make assumptions about the meaning of the term, which could affect the voting advice (Kamoen

& Holleman, 2017). However, little attention has been paid to exploring solutions that make

complex political information more understandable in VAAs, or to whether such solutions

can increase political knowledge.

       Therefore, the current study develops and tests a completely new type of VAA that

provides users with additional political information: a Conversational Agent Voting Advice

Application (CAVAA). A CAVAA is a combination of a conversational agent (i.e., chatbot) and

a VAA. Just as in traditional VAAs, CAVAA users give their opinion on a set of political

statements to receive voting advice. However, to solve comprehension problems before

answering a statement, voters can ask the CAVAA for additional information, for example

about the current OZB rate. Therefore, CAVAAs have the potential to help VAA users find

the right information to enlarge their political knowledge compared to traditional VAAs. The

current study is, to the best of our knowledge, the first that applies chatbot technology in a

political context in order to investigate the effect of (CA)VAAs on political knowledge and

voting intention.

       Receiving additional information via CAVAAs could be particularly beneficial for people

who are already less informed about political issues. According to the National Voter Survey of

2017, political information should be made more accessible, especially for voters with low levels

of political sophistication. Political sophistication is a measure that combines people’s political

interest, political knowledge, and educational level (Lachat, 2008). Citizens with low levels of

political sophistication are overall less informed about politics, less interested in politics, and


vote less often than citizens with high levels of political sophistication (Lachat, 2007). Since

chatbot research shows that the threshold for asking questions to a chatbot is lower than for

asking questions to people (Følstad, Nordheim, & Bjørkli, 2018), interaction with a CAVAA could

make it easier for voters to receive political information and, consequently, to make well-informed

voting decisions.

       Since CAVAAs are new VAA tools, it is still unknown how they should respond towards

users. Previous chatbot studies in the customer service context have shown that the use of a more

engaging and personal communication style (i.e., Conversational Human Voice, CHV; Kelleher,

2009) can make a difference in how people experience and evaluate chatbots compared to a more

formal tone of voice (e.g., Araujo, 2018; Liebrecht & van der Weegen, 2019). Therefore, the

current study investigates how users evaluate a CAVAA containing a conversational human

voice compared to a formal tone of voice.

       Overall, this study will contribute to a wider understanding of the influence of political

sophistication and tone of voice in (CA)VAAs on political knowledge, voting intention, and

evaluation. The results of the study will provide more insight into the effect of chatbots in the

political domain, but could also help VAA developers to supply and present the political

information to citizens in an optimal way. In order to investigate this, the following

research question will be examined:

       RQ: What is the effect of a traditional VAA, a formal CAVAA, and a CAVAA with

CHV on political knowledge, voting intention, and (CA)VAA evaluation, and is this relationship

moderated by users’ level of political sophistication?


                                        2. Theoretical framework

2.1 Voting Advice Applications

       The Dutch political landscape is becoming increasingly complex due to an expansion of

political parties and blurred boundaries between the different ideologies of these parties (Dalton

& Wattenberg, 2002). Due to this complex political landscape, it has become difficult for voters

to make a well-reasoned voting choice (Cedroni & Garzia, 2010). In order to help people to

assess which political parties are most in line with their preferences, VAAs have been developed

(Holleman, Kamoen, Krouwel, Van de Pol, & De Vreese, 2016). Based on people’s answers

about a range of political statements, the VAA gives an overview of political parties that best

match the political preferences of the voter. VAAs have become increasingly popular over the

past decade. For example, in the week before the 2017 Dutch National Elections, Kieskompas

and Stemwijzer together gave voting advice to 6.85 million citizens (De Telegraaf, 2017).

       However, VAAs do not only aim to provide clear voting advice. In general, VAAs also

intend to increase citizens' political knowledge and political interest so that citizens can make a well-

informed voting choice. Political knowledge is an important predictor for participation in

politics. According to Westle (2006), only citizens who have at least a basic knowledge of

politics can participate in democratic processes in a meaningful way. The effects of voting aids

on political knowledge have been investigated in several previous studies. For example, Kamoen

et al. (2015), and Ladner, Fivaz, and Pianzola (2012) point out that VAA usage improves

political knowledge. Better political knowledge, in turn, affects voting choice and will lead to a

higher voter turnout during election times (Kamoen et al., 2015; Ladner & Pianzola, 2010).

Lassen (2005) also argues that a lack of political knowledge is the main reason for not voting.


       Political knowledge can be divided into people’s perceived (i.e., the feeling of knowing)

and factual (i.e., actually knowing) political knowledge. In the studies on the effects of VAAs

(e.g., Kamoen et al., 2015; Ladner et al., 2012; Ladner & Pianzola, 2010), respondents are

generally asked to point out to what extent they feel that the VAA has enhanced their political

knowledge in a post-VAA survey. These studies then show that VAA users experience a

knowledge increase because of VAA usage (i.e., perceived political knowledge). However, the

question arises whether this perceived political knowledge corresponds to their factual political

knowledge (i.e., having more concrete political knowledge than before). Schultze (2014)

therefore states that systematic consideration of the factual level of voters’ political knowledge is

needed to acquire a more detailed explanation of their voting behavior. The findings of his study

indicate a positive effect of the German VAA (the “Wahl-O-Mat”) on factual political

knowledge about party positions (Schultze, 2014). The current study focuses on perceived, as

well as factual political knowledge because previous studies have shown positive correlations

between perceived political knowledge and voting intention (Ladner & Pianzola, 2010).

Moreover, insight into people’s actual knowledge is needed to determine the exact contribution

of VAAs.

       Next to informing citizens about political issues and increasing their knowledge and

interest in politics (Cedroni & Garzia, 2010), a number of studies have examined the effects of

VAAs on voting intention and voting advice (Krouwel, Viteillo, & Wall, 2012). An important

condition for receiving voting advice is that VAA users understand the political statements from

the VAA (Kamoen & Holleman, 2017). Still, people seem to experience a sense of

incomprehension when consulting a VAA (Kamoen & Holleman, 2017). Comprehension

problems can hinder the learning process about political issues. To understand how these


comprehension problems appear, theories about cognitive processes in the field of survey

response psychology can be used.

2.2 The cognitive process of answering VAA statements

       The “Tourangeau model” (Tourangeau, Rips, & Rasinski, 2000) is used as a starting

point to theorize about comprehension problems in VAAs. The model has its origins in the

psychology of survey response literature and describes the cognitive processes that underlie the

process of answering questions in four different steps. To apply the Tourangeau model to VAAs,

the following example statement is used in this section: The Netherlands must allow the

cultivation of genetically modified crops (GMOs). In the first step, the respondent interprets the

information retrieved from the question by making a logical representation of the question (e.g.,

So apparently, GMOs are still forbidden in the Netherlands). In the second stage, respondents

have to retrieve the relevant information from their long-term memory. This stage can be

described as a sampling process that activates the most accessible beliefs. Sometimes,

respondents are able to directly activate a summary evaluation of their beliefs (e.g., A GMO is an

organism that has had its DNA modified in some way through genetic engineering.). Other

respondents, however, need an extra step (i.e., stage 3) of weighting and scaling individual

beliefs in order to form a judgment (e.g., GMOs can offer a solution for the exponentially

growing world population, so they must be allowed versus in my eyes, GMOs are not safe for

consumption). In the final stage, the respondents have to translate their judgment into an

answering option in the questionnaire (e.g., agree, neutral, disagree, or no opinion).

       To make a connection between the characteristics of a VAA statement and possible

comprehension problems, the understanding process (i.e., the first step of the Tourangeau Model)


is an important starting point. Researchers in the field of discourse studies suggest that

respondents first construct a semantic representation of the literal meaning in their heads. Then,

this semantic representation will be enriched with world knowledge, which provides a pragmatic

representation of the question (Graesser, Singer, & Trabasso, 1994; Kintsch, 1994; Krosnick,

2018). According to Tourangeau et al. (2000), comprehension problems can arise both during the

creation of the semantic representation, as well as during the construction of the pragmatic

representation.

2.3 Semantic and pragmatic comprehension problems

       Kamoen and Holleman (2017) found in their study that two-thirds of the comprehension

problems in answering political statements were semantic, meaning that they were related to the

literal meaning of words in VAA statements, for example political jargon (e.g.,

GMOs) or tax names (e.g., OZB). The other problems were related to the pragmatic

representation of the statements. Here, respondents did understand the literal meaning of all the

terms in the VAA statements, but had too little background knowledge on the issue to answer the

question (e.g., the exact tax height). Hence, when VAA users are facing issues regarding the

construction of a semantic representation, or when they make an interpretation that does not

correspond to the interpretation of political parties, the answers to these questions are an invalid

base for the voting advice (Kamoen & Holleman, 2017).

       Since understanding the question is the first step toward an optimal answer

(Tourangeau et al., 2000), it is likely that problems that appear at this stage will continue in the

following phases (Lenzner, Kaczmirek, & Galesic, 2011). For example, if a respondent does not

understand the term GMO, it is also difficult for this respondent to retrieve the relevant


knowledge and subsequently come to an informed assessment of the statement (Jabine, Straf,

Tanur, & Tourangeau, 1984). In this situation, offering information in a semantic and/or

pragmatic way that meets the demands of the users can improve the retrieval process, and as a

result, make the answers of respondents more accurate (Schober & Conrad, 1997; Schober,

Conrad, & Fricker, 2004).

2.4 Comprehension problems and response behavior

         To improve the comprehension and retrieval process (i.e., step 2 of the Tourangeau

model), VAA users may benefit from reading additional information about political statements.

However, VAAs differ in the extent to which they provide information to users. For example,

Stemwijzer gives the opportunity to view how the various political parties think about the issue

with a brief explanation of their positions. However, this type of information is neither semantic

nor pragmatic. Kieskompas, in contrast, does not provide additional political information because

they want voters to form their own opinions without being influenced by information from the

political parties (Krouwel et al., 2012). This implies that the specific need for semantic

and pragmatic information has not been fulfilled in VAAs. Therefore, respondents have to

consult external sources of information, such as Google, to receive additional

information when they do not understand a term, which costs more effort (Kamoen & Holleman,

2017).

         Furthermore, despite the possibilities that VAAs offer to obtain additional information,

users make little use of this extra information (Lenzner, 2012; Kamoen & Holleman, 2017). This

matches with what is known from traditional survey research. For example, Galesic, Tourangeau,

Couper, and Conrad (2008) showed with an eye-tracking study on survey responding that two-


thirds of the respondents did not look at the definitions, which became visible when they moved

over them with their computer mouse. Additionally, the study showed that the more effort was

needed to access a definition, the less likely participants were to consult the available

information.

       The fact that respondents spend only minimal effort in answering the statements

matches the idea of Krosnick (1991), who stated that survey participants often demonstrate

satisficing behavior. This means that people spend just enough effort to deliver an acceptable

answer that satisfies the survey investigator. Thus, rather than seeking more information

about unclear terms, survey respondents use sub-optimal response strategies that take little

effort. For example, they (1) make assumptions about the meaning of the term, and (2) opt for

the neutral or non-response answering option (Baka, Figgou, & Triga, 2012; Van Outersterp et

al., 2016; Kamoen & Holleman, 2017). This behavior is not desirable and could affect the voting

advice (Van Outersterp et al., 2016). Therefore, it is a challenge to develop VAAs that provide

political information in a low-effort way.

2.5 Conversational Agent Voting Advice Applications

       Technologies that can offer additional information in a low-threshold way are chatbots,

also known as conversational agents, chatterbots, digital assistants, or virtual agents. Chatbots

use natural language to interact with humans (Dale, 2016). Based

on input from users, chatbots provide pre-programmed answers. Chatbots have become

increasingly popular over the past decades. Nowadays, they are present in different contexts,

such as business, e-commerce, education, healthcare, and customer services, and offer help for

different purposes (Kerlyl, Hall, & Bull, 2007; Shawar & Atwell, 2007). For example, in the


field of e-commerce and customer service, chatbots can operate as online shopping assistants to

provide customers with extra information about products or services in order to find the products

or services that fit best with their needs (Bogdanovych, Simoff, Sierra, & Berger, 2005). In

addition, chatbots can serve to help customers navigate through websites, decrease the number of

clicks, and reduce the time to find the desired information (Lasek & Jessa, 2013). Due to these

dynamic characteristics of chatbots, users have to make less effort to obtain information

(Brandtzaeg & Følstad, 2017). This easy accessibility seems to correspond to the mindset of

VAA users when completing a VAA: they want to become informed about the political issues at

stake in an efficient and easy way (Kamoen & Holleman, 2017). Therefore, the current study will

explore the feasibility of chatbots for VAAs in the field of politics. We will name them

Conversational Agent Voting Advice Applications (CAVAAs).

       CAVAAs are traditional VAAs implemented in a chatbot. The results of the study

conducted by Galesic et al. (2008) indicate that users are less likely to look at additional

information when more effort is required to access the definition. Compared to traditional

VAAs, chatbots have the ability to transport data in a simple way from computer to human

without users having to search several web pages to collect information (Dahiya, 2017). Therefore, users can

easily ask their questions about the content of political statements and receive the required

information in return. It can be assumed that CAVAAs have the potential to provide complex

political information in a cost-efficient and understandable way in order to increase political

knowledge among users and to provide more personalized voting advice, compared to traditional

VAAs. Therefore, in line with Galesic et al. (2008) the following hypothesis is formulated:


    H1: After using a CAVAA, users will report more (a) perceived and (b) factual political

                           knowledge compared to a traditional VAA.

Since CAVAAs are a new phenomenon, it is important to investigate how people evaluate

CAVAAs and their intention to use CAVAAs in the future. Based on the studies of Dale (2016)

and Bogdanovych et al. (2005), CAVAAs can provide a more natural way of interacting

compared to traditional VAAs and can act as a personal assistant to help users find the right

information and enhance their political knowledge. Hence, we propose the following hypothesis:

           H2: Users will evaluate CAVAAs more positively than a traditional VAA.

2.6 Tone of voice

       Since CAVAAs are completely new tools, it is still unknown how the voting application

should respond to users in order to keep them motivated when filling out CAVAAs. It is

important that people are motivated when completing CAVAAs, so they can process the political

statements in a careful way. This could be achieved by approaching CAVAA users with a

personal and engaging communication style, also known as tone of voice. Tone of voice is

widely investigated in the domain of organizational communication and it has been shown that

using a Conversational Human Voice (CHV) can make a difference in how people experience

chatbots (e.g., Araujo, 2018; Liebrecht & van der Weegen, 2019). Kelleher (2009) defines CHV

as “an engaging and natural style of organizational communication as perceived by an

organization’s publics based on interactions between individuals in the organization and

individuals in publics” (p. 177). CHV can be operationalized by several linguistic elements,


which can be subdivided into three categories: personalization (e.g., personally addressing the

user; Hi Simone), informal language (e.g., the use of emoticons and interjections; haha :)), and

invitational rhetoric (e.g., stimulating dialogue by using humor or asking questions; Can I help

you?) (Van Noort, Willemsen, Kerkhof, & Verhoeven, 2014; Van Hooijdonk & Liebrecht,

2018).

         Empirical research in other domains than politics (e.g., public relations, computer

science, and communication studies) shows that CHV is important in creating positive

evaluations. For example, Kelleher (2009) found a positive effect of CHV in blogs on customer

satisfaction and brand attitude. Here, satisfaction is defined as the extent to which the reader of

the blog is positive about the brand and has positive expectations about the brand. Furthermore,

in the study of Schneider (2015), CHV appears to be a key factor in enhancing positive attitudes

towards the reputation of an organization. Others (Liebrecht & Van der Weegen, 2019) have

highlighted the relevance of CHV usage in human-computer interaction. They found that

chatbots significantly enhanced brand attitude in the domain of customer service. Based on these

results from other domains, it can be expected that the use of CHV in CAVAAs will also have

positive effects on people’s attitudes towards the new tools in the political context. Therefore, the

following hypothesis will be investigated:

H3: Users of CAVAAs will evaluate the tool more positively when CHV is used compared to a

                                        formal tone of voice.

         Furthermore, it is known from lexical decision-making literature that personally

addressing users is an important driver for information processing and, in turn, makes


information more understandable (Andrews, 1988). When people are able to process information

in an accessible way, this will improve the understanding of that information (Schwanenflugel et

al., 1988). Additionally, research in the field of surveys shows that people will put more effort

into understanding and answering statements when a personal communication style (e.g., using

personal pronouns) is used (Krosnick, 2000). Therefore, it could be expected that CAVAA users

will put more effort into the understanding and processing of complex political statements when

CAVAAs apply a conversational human voice.

       In short, because personal communication is an important factor for information

processing, it is expected that using CHV in CAVAAs will make political information more

understandable and will, in turn, lead to more political knowledge. This reasoning leads to the

following hypothesis:

    H4: After using a CAVAA, users will report more (a) perceived and (b) factual political

                knowledge when CHV is used compared to a formal tone of voice.

2.7 Political sophistication

       The process of translating interests and preferences into a vote involves a certain effort

from citizens: they have to collect information about political party positions, qualities of the

individual candidates, and information about the political topics themselves. Citizens vary in

their amount of cognitive resources to collect and process this multitude of complex information

to cast a meaningful vote (Luskin, 1987). Especially those with limited individual dispositions to

collect relevant information could profit the most from CAVAAs. In this study, the term


“political sophistication” is used to make a classification between different types of voters with

divergent cognitive capacities.

       However, there is an ongoing debate about the best operationalization of the concept of

political sophistication. Some measure it as political knowledge by counting correct answers to a series

of knowledge questions (McGraw, Lodge, & Stroh, 1990); others combine a number of questions

about political interest with a measure of educational level (Holleman et al., 2016; Lachat, 2008;

MacDonald, Rabinowitz, & Listhaug, 1995). Luskin (1990), however, suggests three aspects to

measure the individual development of political information, namely: the cognitive ability to

understand information (usually measured with level of education), an informative aspect

(usually measured as political knowledge), and the motivation to put effort into the collection of

this information (usually measured as political interest). This study will take all three indicators

into account to determine voters’ degree of political sophistication because previous studies have

already found positive effects of political interest, political knowledge, and level of education on

electoral participation (Söderlund, Wass, & Blais, 2011). Thus, someone with a high level of

education, a high degree of interest in politics, and much political knowledge is characterized by

a high level of political sophistication. Someone with a low level of political sophistication has

a low level of education, little interest in politics, and insufficient political knowledge.

       In the current study, we expect a positive contribution of CAVAAs compared to VAAs

on political knowledge and voting intention. Overall, political sophistication lowers the cognitive

cost of voting (Denny & Doyle, 2008). To illustrate, sophisticated voters are more interested in

politics and collect information about the political system themselves, even outside the election

times (Luskin, 1990; Söderlund et al., 2011). Therefore, it can be assumed that this group (higher

sophistication) benefits less from the additional information that is given by CAVAAs.


Furthermore, sophisticated voters generally have more political knowledge than those who stay

at home during elections. When homestayers (i.e., citizens with low political sophistication) do consult

CAVAAs to receive more information in order to enhance their political knowledge and

understand the political statements, it could make them more confident and motivated to go vote.

Therefore, it is expected that CAVAAs are especially relevant for people with low levels of

political sophistication. Lastly, educational level seems to have a positive influence on political

knowledge and electoral participation (Blais, 2000; Galego, 2010). A higher level of education

ensures a better development of voters’ cognitive capacities which increases the ability to deal

with complex political information (Armingeon & Schädel, 2015). In sum, it can be stated that

previous research clearly indicates that each of the three indicators of political sophistication by

Luskin (1990) is positively associated with political knowledge and voting intention.

       So, compared to highly sophisticated people, people with low levels of political

sophistication have less cognitive capacity, which decreases their ability to deal with political

information (Armingeon & Schädel, 2015). In addition, they do not seek political information

themselves (Söderlund et al., 2011). Therefore, it can be assumed that these so-called low

information voters will feel more inclined to use a CAVAA, as they will experience a greater

need for information to simplify their voting decision than voters with high levels of political

sophistication. Therefore, the following hypothesis is formulated:

    H5: The effect of CAVAAs versus VAAs on political knowledge and voting intention is

moderated by political sophistication, such that CAVAAs lead to more political knowledge and a

  higher voting intention among people with low levels of political sophistication than among

                        people with high levels of political sophistication.


2.8 Conceptual model

The conceptual model of the current thesis project is shown in Figure 1. The aim of the study is

to examine the effect of CAVAAs and tone of voice on political knowledge, voting intention and

evaluation compared to traditional VAAs. In addition, the moderating role of political

sophistication will be explored.

Figure 1. Conceptual model: VAA type (traditional VAA, formal CAVAA, or CAVAA with CHV) is expected to affect the outcome variables political knowledge, (CA)VAA evaluation, and voting intention (H1-H4), with users' level of political sophistication (high versus low) as moderator (H5).

                                               3. Method


3.1 Design

To test the hypotheses, an online experiment was conducted with a 3 (type of VAA: traditional

VAA, CAVAA with formal tone of voice, or CAVAA with conversational human voice) x 2

(political sophistication: high versus low) between-subjects design. Each participant was

randomly assigned to one of the three VAA conditions. The independent variable political

sophistication is a quasi-experimental variable, which was measured afterwards. Every

participant filled out an online questionnaire with items regarding political knowledge, voting

intention and (CA)VAA evaluation, after interacting with the CAVAA or VAA.

3.2 Material

       3.2.1 Political statements. Twenty different statements were derived and adapted from

the statements that were used in Kieskompas and StemWijzer during the 2017 Dutch National

Elections. Statements were only included if the topic was still relevant today. For example, the

statement “Stricter climate legislation is needed, even though it is at the expense of economic

growth” was not included, since the Dutch cabinet has already accepted the climate plan

proposal for the period 2021-2030. In addition, statements about the corona crisis were

added to the voting aid because this topic was dominating the news at the time of the study.

       Moreover, only statements that were expected to cause comprehension problems were

selected for this study. Previous studies have shown that difficult concepts (semantic) and

missing information (pragmatic) can lead to problems in completing VAAs (Kamoen &

Holleman, 2017). Therefore, the statements had to include political jargon or tax names

(semantic problems). Besides, words like “increase”, “lower”, or “abolish” were added in the

statements to make an explicit reference to the status quo (e.g., The corporate income tax should


decrease further after 2021). The additional semantic and pragmatic information provided was

based on reliable sources, such as the Dutch Ministry of Defense and Statistics Netherlands

(CBS) (see Appendix A). Furthermore, the explanations were formulated in a neutral way, which

means they did not favor either side of the political spectrum. An overview of example statements

including semantic and pragmatic explanations can be found in Appendix A (in Dutch). The

length of the explanations varied between 20 and 60 words (M = 38.95, SD = 15.06).

       3.2.2 Traditional VAA. One of the experimental conditions was the traditional VAA

with 20 political statements. Participants were informed that the VAA was especially designed

for research purposes and that political parties were not involved in the development of the

VAA. The VAA was created and distributed with Qualtrics software. We ensured that the layout

was comparable to that of a Dutch VAA (e.g., Stemwijzer or Kieskompas). For example, the

response options were derived and adapted from Stemwijzer (i.e., “agree”, “neutral”, “disagree”,

“no opinion”), and the VAA provided a voting advice based on the answers given by the

participants. However, to underline that the VAA was built for research purposes, the logo of

Tilburg University was visible in the voting aid. To generate the voting advice, the websites of

eight political parties (four left-wing and four right-wing parties) were consulted to determine the

positions of the different parties on every political statement.
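The thesis does not specify the exact matching algorithm behind the voting advice. Purely as an illustration, the sketch below shows one simple and common approach, an agreement count per party; the party names, positions, and user answers in it are hypothetical and not taken from the material.

```python
# Illustrative only: a simple agreement-count matching rule for a VAA-style
# voting advice. All party positions and answers below are hypothetical.
from collections import Counter

# Party positions on (an excerpt of) the 20 statements.
party_positions = {
    "Party A": {1: "agree", 2: "disagree", 3: "agree"},
    "Party B": {1: "disagree", 2: "disagree", 3: "neutral"},
}

def voting_advice(user_answers):
    """Rank parties by the number of statements on which they match the user.

    Statements answered with 'no opinion' are skipped, mirroring the explanation
    given to users that such statements are not counted in the advice.
    """
    scores = Counter({party: 0 for party in party_positions})
    for party, positions in party_positions.items():
        for statement, answer in user_answers.items():
            if answer == "no opinion":
                continue
            if positions.get(statement) == answer:
                scores[party] += 1
    return scores.most_common()

print(voting_advice({1: "agree", 2: "disagree", 3: "no opinion"}))
# [('Party A', 2), ('Party B', 1)]
```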

       3.2.3 CAVAAs. Based on the traditional VAA, two CAVAA versions were developed

with the software of Flow.ai. Flow.ai is a visual bot platform for creating AI chatbots for

Facebook, WhatsApp, and the web. The main difference between the traditional VAA condition

and the two CAVAA conditions is the chatbot function. In a CAVAA, users are able to chat with

a chatbot in order to receive more information about the political statements. By using this

software, chatbots can be developed and trained to recognize different formulations of their


conversational partners. Messages from the users activate a certain conversational flow within

the chatbot, which in turn sends pre-programmed messages.
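The conversational flows themselves were configured in Flow.ai's visual editor rather than in code. As an illustration of the underlying logic only, the following Python sketch maps a user message to an intent and returns a pre-programmed reply; the keyword cues and reply texts are hypothetical and do not reproduce the actual CAVAA content.

```python
# Illustrative sketch of intent-based flows: a message triggers an intent,
# and each (statement, intent) pair has a pre-programmed reply.
FALLBACK = ("Sorry, I don't know what you want to ask. You can ask me about: "
            "the meaning of the term, the current state of affairs, "
            "advantages, and disadvantages.")

# Each intent is recognized through a few example formulations (here: keyword cues).
INTENTS = {
    "meaning": ["what does", "meaning", "what is"],
    "status_quo": ["current", "state of affairs", "how high"],
    "pros_cons": ["advantages", "disadvantages", "pros", "cons"],
}

# Hypothetical pre-programmed replies for the OZB statement.
REPLIES = {
    ("ozb", "meaning"): "The OZB is the Dutch property tax levied by municipalities.",
    ("ozb", "status_quo"): "OZB rates are set per municipality and therefore differ locally.",
    ("ozb", "pros_cons"): "Raising the OZB funds social services but increases costs for homeowners.",
}

def reply(statement, message):
    """Return the pre-programmed reply whose intent matches the user message."""
    text = message.lower()
    for intent, cues in INTENTS.items():
        if any(cue in text for cue in cues):
            return REPLIES.get((statement, intent), FALLBACK)
    return FALLBACK

print(reply("ozb", "What does OZB actually mean?"))
print(reply("ozb", "Tell me a joke"))  # triggers the invitational fallback message
```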

       The CAVAA interface was developed based on previous chatbot research. For example,

it is important that a chatbot starts with a proactive greeting to get the user’s attention (Thies,

Menon, Magapu, Subramony, & O’neill, 2017). Therefore, the CAVAAs started the

conversation with: "Hello, nice that you are going to fill in this voting aid!", followed by: "Are

you ready to start?". Users were offered two buttons that said "Yes, I'm ready!" and "No, I'll

come back another time." In this manner, users were forced to actively participate in the

conversation with the CAVAA. An example of a CAVAA conversation is shown in Figure 2.

       In addition, it is important that a chatbot responds in an invitational way when a question is not

understood (Cahn, 2017). When a chatbot does not respond in an invitational way, users could

get stuck in the conversation. For this reason, the following chatbot answer was included: "Sorry,

I don’t know what you want to ask. At least, you can ask me a question about: the meaning of the

term, the current state of affairs, advantages, and disadvantages." In this way, the CAVAA was

given the opportunity to properly interpret new input, allowing the conversation to continue.

Research by Følstad et al. (2018) has shown that it is important to be transparent about the

functions and limitations of a chatbot. That is why the CAVAA indicated what users could ask

questions about.

       Lastly, before the CAVAA started with the political statements, an explanation was given

about the response options of the statements. Research about VAAs shows that VAA users most

of the time do not know the difference between the “neutral” and “no opinion” answer options

(Kamoen & Holleman, 2017). For this reason, both answer options were explained before users

started completing the statements, like: “Did you know that the answer option 'no opinion' in


VAAs ensures that the statement is not included in the calculation of the voting advice?”. After

this explanation, the first political statement was offered to the user.

         3.2.4 Tone of voice in CAVAAs. For this study, two CAVAA versions have been

developed. One CAVAA used a formal tone of voice, and the other CAVAA used a

conversational human voice. Table 1 describes the characteristics of the two tones of

voice used by the two CAVAA versions. They are divided into three categories defined by Van

Noort et al. (2014), namely: personalization, informal language, and invitational rhetoric. The

conversations with the CHV-CAVAA were more personal, informal, and engaging than those with the

CAVAA that used a formal tone of voice. To illustrate, some CHV elements were absent in the

formal CAVAA (e.g., smileys), and some words were replaced by more formal words in the

formal version (e.g., word choice). The CAVAA versions differed in tone of voice only in

the conversational text surrounding the statements, not in the statements themselves or in the additional information.

         An example of the two different conversations can be found in Figure 2 and Appendix E.

As can be seen in Figure 2, the CHV-CAVAA (left) has an avatar and a name (i.e., Voty).

Compared to the formal CAVAA (right), the CHV-CAVAA addresses users by their first

name (e.g., Hallo Simone) and makes use of emoticons (e.g., :)), informal personal pronouns (i.e., je

instead of u), and interjections (e.g., top!).

Table 1
Tone of voice characteristics (CHV and formal) translated in Dutch into CAVAAs
 Conversational Human Voice-          Formal-CAVAA                     Source
 CAVAA
 Personalization
     •    Name (Voty)                        •   Name (VoteBot)            •    Araujo (2018)


   •   Avatar                     •   No avatar                •   Araujo (2018)
   •   Start the conversation     •   Start the conversation   •   Van Hooijdonk &
       with a personal greeting       with a greeting              Liebrecht (2018);
       (Hi, name)                                                  Liebrecht & Van der
                                                                   Weegen (2019)
   •   Using personal             •   Using personal           •   Van Hooijdonk &
       pronouns (You/Je)              pronouns (You/U)             Liebrecht (2018)

Informal language
   •   Emoticons (:-))            •   No emoticons             •   Van Noort et al.
                                                                   (2014)
   •   Informal interpunction     •   No informal              •   Van Hooijdonk &
       (!)                            interpunction                Liebrecht (2018)
   •   Sound mimicking            •   No sound mimicking       •   Van Noort et al.
       (Wow)                                                       (2014)
   •   Capitals (YES)             •   No capitals              •   Van Hooijdonk &
                                                                   Liebrecht (2018)
Invitational rhetoric
   •   Stimulating dialogues      •   No stimulating           •   Van Hooijdonk &
       (I am happy to help            dialogues                    Liebrecht (2018)
       you)
   •   Humor (Lol!)               •   No humor                 •   Kelleher & Miller
                                                                   (2006)
   •   Respond to thank you       •   Thank you messages       •   Van Hooijdonk &
       messages (You are              will not be recognized       Liebrecht (2018)
       welcome)
   •   Well-wishes (Have a        •   No well-wishes           •   Van Hooijdonk &
       nice day!)                                                  Liebrecht (2018)
   •   Sympathy (Enjoy!)          •   No sympathy              •   Kelleher & Miller
                                                                   (2006)

Figure 2. Example conversations with the CHV-CAVAA (left) and the formal CAVAA (right).

3.3 Pre-test

       3.3.1 Pre-test procedure. Before the main experiment was carried out, a qualitative pre-

test of the CAVAA was conducted in the form of an interview (Appendix B). The pre-test had

three goals: to verify whether the self-developed chatbots were well programmed, to check if the

additional background information provided with each statement met the objective of making the

political statements more understandable, and to check if information was missing in the

explanations.

       Ten participants took part in the pretest. The sample consisted of 4 males and 6 females

with an average age of 34.8 years (SD = 5.36). During the pre-test, participants had a

conversation with a chatbot and answered twenty political statements. After reading each

individual statement, participants were asked if they understood the political statement and if

they had a need for additional information (Appendix B). Thereafter, participants could click on

an information button (if desired) and had to read the explanation. After reading the additional

information, several questions were asked about the clarity and understandability of the

explanations (Appendix B). This procedure was repeated for each political statement. Lastly,

participants were asked to evaluate the new voting tool. The pretest ended with a short thank you

note by the researcher. Based on the results of the pretest, several design changes were made for

the main study, see Appendix C.

3.4 Participants

       The online questionnaire was distributed via e-mail and social media by the researcher

(convenience sampling). The data were collected online between May 14 and June 1,

2020. The only requirement to participate in this study was a minimum age of 18 years (i.e.,

participants had to be entitled to vote). A total of 325 participants took part in the main

experiment but only 231 of them completed the full survey. These remaining participants were

checked on straight-lining response behavior (i.e., giving the same answer to all attitude

questions). Based on this criterion, two more participants were removed from the dataset.
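As an illustration of this screening step, the sketch below flags respondents who gave the same answer to every attitude item; the data file and column names are hypothetical, and the actual check may well have been performed differently (e.g., in SPSS).

```python
# Minimal sketch of a straight-lining check: a respondent is flagged when all
# attitude-item answers are identical. File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("completed_surveys.csv")
attitude_cols = [c for c in df.columns if c.startswith("att_")]

straight_liners = df[attitude_cols].nunique(axis=1) == 1
print(f"Removed {straight_liners.sum()} straight-lining respondents")
df = df[~straight_liners]
```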


       Of the remaining 229 participants, 78 were male (34.1%), and 150 female (65.5%). One

person answered that he/she would rather not say (0.4%). The average age was 30.43 years old

(SD = 14.51), ranging from 18 to 75. Most of the participants received an education at University

level, as they finished an undergraduate program (52.9%) or a master’s program (26.2%). The

remaining participants finished intermediate vocational education (12.7%) or high school (8.4%).

Also, the majority of participants was familiar with VAAs (89.5%) and had previous experience

with chatbots (61.6%).

       Analyses showed that participants in the three VAA-conditions were comparable for

participants’ gender (χ2 (4) = 3.51, p = .48), level of education (χ2 (12) = 5.99, p = .92), age (F(2,

226) = 3.67, p = .03), familiarity with chatbots (χ2 (2) = .88, p = .64), and previous experience

with voting advice applications (χ2 (2) = 3.03, p = .22). This implies that there are no a priori

differences between participants in the three conditions.

3.5 Measurements

       3.5.1 Political sophistication. Political sophistication is a theoretical construct that

consists of three aspects: a cognitive aspect, usually measured as educational level, an

informative aspect, usually measured as political knowledge, and a motivational aspect, usually

measured as political interest (Stiers, 2016; Rapeli, 2013; Luskin, 1990). Although previous

studies investigating political sophistication did not always focus on all three aspects, this study

does take all three aspects into account. Before the start of the experiment, VAA users indicated

to what extent they were interested in politics. Political interest was explicitly measured with three

items (e.g., I am interested in politics) on a 7-point Likert scale (1 = “completely disagree”, 7 =

“completely agree”) based on Lachat (2008) and Shani (2012). This scale showed good


reliability (α = .89, M = 4.78, SD = 1.53). Furthermore, political knowledge was measured with

seven political knowledge questions (e.g., There are 225 members in the House of

Representatives) where people could indicate if they thought the statement was “true” or “false”.

The answers to the seven factual political knowledge questions were recoded so that people

received 1 point for each correct answer (maximum score = 7). Afterwards, participants were

asked about their highest completed level of education. Then, all three variables

were combined into a new additive sophistication measure with a seven-point scale (see

Holleman et al., 2016). The new variable political sophistication was split into a new binary

variable (1 = "high level of political sophistication", 2 = "low level of political sophistication") by

means of a median split (Mdn = 16.00). Based on the median score, the participant group was

divided into two new subgroups: one with high levels of political sophistication (N = 122) and

one with low levels of political sophistication (N = 107).
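As an illustration of how such an additive score and median split can be computed, the sketch below uses hypothetical column names; the exact rescaling of the three components onto the seven-point measure follows Holleman et al. (2016) and is not reproduced here.

```python
# Minimal sketch, under assumptions: column names and the recoding of the three
# components into the additive measure are illustrative only.
import pandas as pd

df = pd.read_csv("main_study.csv")  # hypothetical data file

# Knowledge: 1 point per correct true/false answer (0-7).
knowledge = df[[f"PK_Q{i}" for i in range(1, 8)]].sum(axis=1)

# Interest: mean of the three 7-point Likert items.
interest = df[["PI1", "PI2", "PI3"]].mean(axis=1)

# Education: ordinal code for the highest completed level.
education = df["education_level"]

# Additive sophistication score and a median split into high/low groups.
df["sophistication"] = knowledge + interest + education
median = df["sophistication"].median()
df["soph_group"] = (df["sophistication"] <= median).map({False: "high", True: "low"})
print(df["soph_group"].value_counts())
```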

       Additional analyses show that the participant groups with low versus high levels of

political sophistication were comparable for gender (χ2 (2) = 4.22, p = .12), age (F(1, 227)

perceived and factual knowledge (e.g., Park, 2001). So, when people feel that they have more

knowledge, it does not necessarily mean that users actually know more than they did before. In

order to examine the exact contribution of CAVAAs to political knowledge, the current study

focuses on both types of knowledge.

       Factual political knowledge was measured using six open knowledge questions based on

the political statements presented in the (CA)VAA (e.g., What is a binding referendum? and

What is the current state of affairs regarding the retirement age?). The answers given by the

participants were checked and coded by two people based on a coding scheme (Appendix F).

Every right answer was coded as “1” (e.g., 66.4 or 66 years) and wrong answers were coded as

“0” (e.g., 67 years). To determine consistency among the raters, an interrater reliability using

was calculated. In total, 120 answers (8.7%) were compared, see Table 3 in Appendix F. The

interrater reliability of the coding was found to be κ =.85 (p
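As an illustration of how such an agreement statistic can be computed from the two raters' binary codes, the sketch below uses scikit-learn's cohen_kappa_score on hypothetical codings; it is not the actual analysis reported above.

```python
# Minimal sketch: Cohen's kappa for two raters' binary codes (1 = correct, 0 = incorrect).
from sklearn.metrics import cohen_kappa_score

# Hypothetical excerpt of the 120 double-coded answers.
rater_1 = [1, 0, 1, 1, 0, 1, 1, 0]
rater_2 = [1, 0, 1, 0, 0, 1, 1, 0]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa = {kappa:.2f}")
```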

       3.5.4 Voting intention. The next dependent variable in the current study was “intention

to vote”. In order to assess this intention, two items from Glynn, Huge, and Lunney (2009) (i.e.,

If there were elections now, I would vote and After consulting the (CA)VAA, I feel sufficiently

informed to vote) and one additional item (i.e., I plan to vote in the upcoming elections on March

17, 2021) were adapted for the current study. The items were measured on a 7-point

Likert scale ranging from 1 (“strongly disagree”) to 7 (“strongly agree”).

       3.5.5 Factor analysis. The factor structure of political interest, perceived political

knowledge, voting intention, and (CA)VAA evaluation was assessed by performing a principal

component analysis with Varimax rotation. The results of this analysis are specified in Table 2.

The analysis revealed four factors that together explained 74% of the variance. The four factors

partially matched the predetermined factor structure.

       The three items that were supposed to measure “political interest” indeed clustered well

together, and the scale showed good reliability (α = .89, M = 4.78, SD = 1.53). Regarding

perceived political knowledge, the factor analysis revealed that the three items also clustered

well together. Overall, the scale of perceived political knowledge showed good reliability (α =

.79, M = 4.28, SD = 1.35). Moreover, the five items that were supposed to measure “(CA)VAA

evaluation” also formed a coherent factor. The scale had good reliability, Cronbach’s α = .87 (M =

5.14, SD = 1.14).

       The items that were expected to measure “voting intention”, however, split across two dimensions: two items loading on “voting intention” (VI1, VI2) and one item loading on “political interest” (VI3); see the boldfaced part in Table 2. In addition, a reliability analysis of the voting intention scale showed only acceptable reliability (Cronbach’s α = .66), and one item (i.e., After consulting the (CA)VAA, I feel sufficiently informed to vote) markedly decreased this reliability. Therefore,


we decided to remove this item. The new voting intention scale consisted of two items (i.e., If

there were elections now, I would vote and I intend to vote in the upcoming elections on March

17, 2021) and showed good reliability (α = .93, M = 6.35, SD = 0.99).

Table 2
Results of the principal component analysis with Varimax rotation
                       Factor 1:            Factor 2:            Factor 3:         Factor 4:
                       (CA)VAA              Political Interest   Perceived         Voting
                       evaluation                                knowledge         intention
 PI1_interesse                              .89
 PI2_aandacht                               .81                                    .22
 PI3_volgen                                 .91
 PK1_begrijpen                                                   .82
 PK2_kennis            .27                                       .87
 PK3_motiveren                                                   .71
 VI1_stemmen1                               .31                                    .90
 VI2_stemmen2                               .26                                    .90
 VI3_geïnformeerd                           .62
 EV1_betrokken         .69                                       .26
 EV2_nuttig            .83
 EV3_leuk              .76                                       .37
 EV4_makkelijk         .80
 EV5_aanbevelen        .84                                       .31
Note. Only factor loadings > .25 are included in the table; the loadings used for interpretation are boldfaced. PI = political interest; PK = perceived political knowledge; VI = voting intention; EV = (CA)VAA evaluation.
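To make the reliability analyses reported above easier to reproduce, the sketch below computes Cronbach’s alpha directly from an item-score matrix using the standard variance-based formula. The two-item response matrix is invented for illustration and does not reproduce the values reported in this study.

    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Hypothetical responses to the two retained voting intention items (7-point scale).
    vi_items = np.array([[7, 7], [6, 7], [5, 5], [7, 6], [4, 4], [6, 6]])
    print(f"alpha = {cronbach_alpha(vi_items):.2f}")

A rotated principal component solution like the one in Table 2 could be obtained in a similar way with packages such as factor_analyzer, although the exact loadings naturally depend on the collected data.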

3.6 Procedure


       At the start of the experiment in Qualtrics (see Appendix D), participants first read an introductory text stating the aim and procedure of the study. Next, an informed consent form stated that participants would remain anonymous, that participation was on a voluntary basis, that they had the right to drop out of the experiment at any moment, and that the study was approved by the Research Ethics and Data Management Committee of Tilburg University (identification code: REDC # 2020/060). In addition, participants had to declare that they were 18 years or older and give permission for data processing and storage.

       Then, participants were randomly assigned to one of the three (CA)VAA conditions in

Qualtrics and were asked to provide some demographic information about their age, gender,

level of education, political interest, and familiarity with VAAs and chatbots. Thereafter,

participants were linked to the chatbot conversation or traditional VAA and had to give their

opinion on 20 political statements. A disclaimer told participants in the CAVAA conditions that they would temporarily leave the questionnaire and that a new screen would open.

       When participants clicked on the link, a new window popped up and the chatbot started the conversation with a greeting (see Appendix E). After the chatbot explained how to answer the political statements, the set of 20 political statements followed. At every statement, participants could ask the chatbot for extra information when they had difficulty understanding the statement; additional information could also be obtained by clicking on the semantic or pragmatic button. After completing the political statements, participants received a personal voting advice. Subsequently, participants were informed that they could return to Qualtrics for some additional questions about their experiences with the (CA)VAA.


       Back in Qualtrics, participants evaluated the (CA)VAA. They were asked how they experienced the interaction with the (CA)VAA and whether they would recommend the (CA)VAA to others. Participants also indicated their intention to vote in the upcoming elections. Next, participants’ perceived and factual political knowledge were measured with several knowledge questions. Lastly, participants could write some final remarks on the study. They were then thanked for their participation and debriefed. The debriefing consisted of a short text that explained the research goals and manipulations of the study. In addition, the debriefing highlighted that the (CA)VAA was developed for research purposes and possibly did not provide valid voting advice. In total, the study took on average 10.49 minutes (SD = 10.65) in the VAA condition and 15.12 minutes (SD = 11.98) in the CAVAA conditions.

                                            4. Results

4.1 Political knowledge

       First, perceived political knowledge was examined. To test the hypotheses, a two-way ANOVA was performed with VAA type and political sophistication as independent variables and perceived political knowledge as the dependent variable. An overview of the mean scores and standard deviations of the variables can be found in Table 5. The assumption checks (see Appendix G) showed some skewness and kurtosis; however, given the reasonable sample size, the ANOVA is considered fairly robust against this violation.
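For illustration, the sketch below shows how such a 3 x 2 between-subjects ANOVA could be specified with statsmodels. The data frame, the column names, and the toy scores are assumptions; the analysis reported here was of course run on the collected experimental data.

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # Hypothetical long-format data: one row per participant (column names are assumptions).
    data = pd.DataFrame({
        "vaa_type": ["traditional", "formal_cavaa", "chv_cavaa"] * 4,
        "sophistication": (["high"] * 3 + ["low"] * 3) * 2,
        "perceived_knowledge": [3.8, 4.6, 4.9, 3.5, 4.2, 4.7,
                                4.0, 4.4, 5.1, 3.6, 4.5, 4.8],
    })

    # Two-way ANOVA: main effects of VAA type and political sophistication plus their interaction.
    model = ols("perceived_knowledge ~ C(vaa_type) * C(sophistication)", data=data).fit()
    print(sm.stats.anova_lm(model, typ=2))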

       The ANOVA showed a significant main effect of VAA type, F(2, 223) = 7.44, p = .001, partial η² = .06. Pairwise comparisons revealed that the traditional VAA differed significantly from the formal CAVAA (Mdif = -.58, 95% CI [-.99, -.17], p = .01) and the CHV CAVAA (Mdif = -.78,
