Analyzing an Interactive Chatbot and its Impact on Academic Reference Services
Danielle Kane*
Chatbots (also known as conversational agents, artificial conversation entities, or chatterboxes) are computer applications that imitate human personality. Our University of California, Irvine (UCI) Libraries chatbot, ANTswers, is one of a few academic library chatbots in existence in the United States. The chatbot was built and put online in 2014 to augment and bridge gaps in instruction and reference services at the UCI Libraries. The chatbot helps with simple directional and/or factual questions, can serve multiple patrons at one time, works 24/7 with very little mediation, and provides consistent answers. Academic librarians are proponents of good customer service, and we extended that belief to ANTswers when the underlying code was written. We routinely track statistics and evaluate what we find, making changes to the code to improve ANTswers' responses to our patrons. Can we pinpoint why library patrons use the chatbot over another reference service based on their initial question, when this service is primarily used, and what patrons tend to ask for assistance with? In-depth data has been collected over the past four years, with over 10,000 questions asked and statistics kept such as time of day, day of week, and answer rate. All questions have been coded for what information was requested. In addition to the statistical data there are also over 7,000 transcripts. Can analyzing the language of the patron greeting, leave-taking, and type of comment/statement/question asked give us more insight into how patrons approach using the chatbot?

Background
ANTswers is a web-based application, run on a remote library server and accessed through a web graphical user
interface (GUI) page. Implemented as a beta test in the spring of 2014 after a year of development, ANTswers
utilizes the open-source Program-O (https://program-o.com/) software and Artificial Intelligence Markup Language
(AIML). There are currently 128 .aiml files, organized by subject and library service, along with general
conversation files. AIML was used because the language is flexible and can accommodate a complex organization;
also, since there is an open-source community that creates and shares AIML files, we did not need to create all
files from scratch. These shared files formed the basis of our general conversation responses after some heavy
editing for scope and coverage. ANTswers responds to simple and short questions. McNeal and Newyear state that
"while a chatbot cannot replicate the complexity of a human interaction, it can provide a cost-effective way to
answer routine questions and direct users to additional services."1 To read more about the development of
ANTswers, the chapter "The Role of Chatbots in Teaching and Learning" in E-Learning and the Academic Library:
Essays on Innovative Initiatives goes into more depth on how to create an academic library conversational agent.
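
To make the AIML structure concrete, the sketch below shows the general shape of a category (an input pattern paired with a response template) and a minimal lookup. The category, its wording, and the helper functions are invented for illustration; they are not taken from the actual ANTswers .aiml files or from Program-O.

    import xml.etree.ElementTree as ET

    # An illustrative AIML category -- invented for this sketch, not taken
    # from the actual ANTswers .aiml files.
    AIML_SNIPPET = """
    <aiml version="1.0">
      <category>
        <pattern>WHAT ARE YOUR HOURS</pattern>
        <template>Library hours are listed on the UCI Libraries website.</template>
      </category>
    </aiml>
    """

    def load_categories(aiml_text):
        """Parse AIML text into a {pattern: template} lookup table."""
        root = ET.fromstring(aiml_text)
        return {c.find("pattern").text.strip(): c.find("template").text.strip()
                for c in root.iter("category")}

    def respond(user_input, categories):
        """Normalize input the way AIML interpreters do (uppercase, strip
        punctuation) and return the matching template, if any."""
        normalized = " ".join(
            "".join(ch for ch in user_input.upper()
                    if ch.isalnum() or ch.isspace()).split())
        return categories.get(normalized, "I do not have an answer for that yet.")

    print(respond("What are your hours?", load_categories(AIML_SNIPPET)))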

* Danielle Kane is Digital Scholarship Services Emerging Technologies Librarian, University of California, Irvine,
e-mail: kaned@uci.edu.


The University of California, Irvine is a public research university located in Irvine, California. It is one of
the ten campuses in the University of California (UC) system. UCI has an FTE of over 11,000 faculty and staff
and a student population of more than 35,000. UCI is on the quarter system, with ten weeks of instruction
followed by a finals week; weeks two and three of the quarter tend to be the Libraries' busiest time for one-shot
instruction workshops. The UCI Libraries has 59 librarians and approximately 100 staff. We employ over 200
students to work at our service points in four different libraries: the Langson Library for the humanities and
social sciences, the Science Library, the Grunigen Medical Library, and the Law Library. The UCI Libraries has an
annual expenditure of over $20 million. We provide reference services at our Ask Us desk at the Langson Library,
through email, phone, and 30-minute research consultations, and through our participation in the QuestionPoint
24/7 chat co-operative. UCI library staff have answered over 12,000 reference questions in the past year.
Figure 1 shows the GUI; the area to the right of the chat log is where the first link in a response opens in a
preview window, while all other links open in a new window. ANTswers was developed to work 24/7 with very little
down time, to provide consistent answers, and to refer to other reference services when applicable. ANTswers
states in four places that it is an experimental computer program: in the initial short introduction, within the
chat window itself, under the chat window, and in the About ANTswers section. This was done to alleviate some
past issues with patrons assuming the chatbot was a live IM chat service. Since the conversational agent was
developed not to keep track of personally identifiable information such as name, IP address, or location, and
not to require authentication, we felt that shy users who might not want to ask a person a question would feel
more comfortable with a computer program. It was also developed to try to alleviate some of the simple,
repetitive questions that tend to be asked at our physical reference desk, such as where the restrooms,
printers, and copiers are.

                                                        FIGURE 1
                                             ANTswers Graphical User Interface
     A back-end system was created using MySQL that pulls transcripts from Program-O into an online database
system, where each conversation is reviewed and a data form is filled out to track usage data such as date, time,
and answer rate. The Libraries' Information Technology department maintains the server installation of Program-
O, the GUI, and the MySQL database. Updating the system to improve ANTswers' responses and the tracking of
statistics is currently handled by the original developer, who was previously a reference librarian. Transcripts
are reviewed and updates to the files are uploaded at the end of each review session. When the chatbot was first
implemented, chat transcripts were reviewed daily, which took approximately 5-6 hours per week to review and
update; as the database improved, the logs came to be reviewed 2-3 times per week, taking between 2-3 hours.
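
As a rough illustration of the kind of review table this workflow implies, here is a minimal sketch using Python's built-in sqlite3 as a stand-in; the production system uses MySQL fed from Program-O, and the table and column names below are hypothetical, not the actual schema.

    # Minimal sketch of a transcript-review table. Column names are
    # hypothetical; the production system uses MySQL fed from Program-O logs.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE transcript_review (
            transcript_id INTEGER PRIMARY KEY,
            reviewed_on   TEXT,     -- date the reviewer filled out the form
            asked_at      TEXT,     -- timestamp of the conversation
            kind          TEXT,     -- 'transcript', 'test/demo', or 'spam'
            library_q     INTEGER,  -- library-related questions asked
            library_a     INTEGER,  -- library-related questions answered
            general_q     INTEGER,  -- general-conversation questions asked
            general_a     INTEGER   -- general-conversation questions answered
        )
    """)
    conn.execute(
        "INSERT INTO transcript_review VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
        (1, "2018-10-01", "2018-09-28T13:05:00", "transcript", 3, 2, 1, 1),
    )
    conn.commit()

    # Answer rate for library-related questions across reviewed transcripts.
    rate = conn.execute(
        "SELECT 100.0 * SUM(library_a) / SUM(library_q) FROM transcript_review"
    ).fetchone()[0]
    print(f"library answer rate: {rate:.0f}%")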
     According to DeeAnn Allison, "chatbots can be built using concepts from natural language interactions
(NLI). The advantage of NLI processing is the ability to use phrasing (verbs, nouns, adjectives, etc.) from the
input to supply an answer that is more sensitive to the intent of the question."2 When a library patron asks a
question, the system ranks responses based on how closely the pattern matches the input. The response with the
closest match is the one provided. Because libraries are complex and the same language can refer to multiple
things, sometimes there is no appropriate response to the question asked. At times patrons have felt that
ANTswers' responses were "snarky"; when this happens, the answer provided wasn't the best match for the
question, but the program pulled it anyway. Continuous revision of the program files and code has led to a
decrease in "snarky" responses over time.
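
Program-O's actual scoring is more involved, but the following sketch conveys the closest-match idea: several wildcard patterns can match an input, and the most specific one wins. The patterns and responses are invented for illustration.

    # Sketch of closest-match ranking: wildcard patterns ('*') match loosely,
    # so each candidate is scored by its number of literal (non-wildcard)
    # words. An illustration of the idea, not Program-O's actual algorithm.
    import fnmatch

    PATTERNS = {
        "WHERE IS *": "Here are directions to common locations ...",
        "WHERE IS THE RESTROOM": "Restrooms are located on every floor.",
        "*": "I'm not sure I understood. Could you rephrase?",
    }

    def best_response(user_input):
        normalized = user_input.upper().rstrip("?.! ")
        candidates = []
        for pattern, template in PATTERNS.items():
            if fnmatch.fnmatch(normalized, pattern):
                literal_words = [w for w in pattern.split() if w != "*"]
                candidates.append((len(literal_words), template))
        # The pattern with the most literal words is the closest match.
        return max(candidates)[1]

    print(best_response("Where is the restroom?"))  # exact pattern wins
    print(best_response("Where is the copier?"))    # falls back to 'WHERE IS *'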
     The UCI Libraries have a traditional in-person reference service and participate in the 24/7 QuestionPoint
(QP) chat reference service. We also developed a chatbot in 2014 (an interactive FAQ) designed to assist patrons
with library-related questions. We are ultimately interested in comparing how patrons approach and use different
reference service points. Rubin, Chen, and Thorimbert indicated that the "goal in using conversational agents in
libraries is to enhance – not replace – face-to-face human contact, interactivity, and service delivery
streams."3 The UCI Libraries agree: ANTswers was never meant to replace our traditional services but to augment
the services we already provide. The questions at this point are: (1) What language structure do patrons use
when using the ANTswers chatbot? (2) Do people use this service at different hours of the day? (3) Is the
assumption accurate that this service point primarily attracts directional, printing/equipment, library
holdings, and/or library policy questions?
     Ultimately, evaluating ANTswers in comparison to traditional library services will add to the sparse
literature about the use of chatbots in libraries in the United States. Novotny believes that "evaluation must
be integrated into the library's operations, and built in the implementation of any new service."4 Since there
are so few academic library chatbots available in the United States, it is imperative to share our knowledge of
building and sustaining such a project. To that end, the ANTswers program code and data have been shared in
various venues. The original 2014 .aiml code was placed in eScholarship
(https://escholarship.org/uc/uci_libs_antswers), but since so much has changed since the chatbot went live,
updated code was shared via GitHub in December of 2017 (https://github.com/UCI-Libraries/ANTswers). Data has
also been shared via the UCI DASH data repository (https://dash.lib.uci.edu/stash/dataset/doi:10.7280/D1P075).

Methodology
The analysis of the ANTswers conversational agent was split into two parts. Prior to analysis, all transcripts
were coded as a true transcript, a test/demo, or spam. Of the 7,924 conversations collected since 2014, 2,786
(35%) were true transcripts submitted by library patrons, 1,539 (19%) were conversations conducted as
demonstrations of the system and/or tests of new code, and 3,599 (45%) were spam. In 2017, ANTswers was hit by
either our Office of Information Technology or an outside user testing the system for vulnerabilities, which
resulted in the high level of spam conversations.
     All conversations coded as transcripts were then reviewed for confidential information submitted by the
patron; this information was removed prior to the transcripts being loaded into the UAM CorpusTool 3 by their
associated user IDs. The remaining 2,786 transcripts contained a total of 10,341 individual questions and/or
statements by library patrons (all responses from the ANTswers chatbot were removed prior to analysis). In the
UAM CorpusTool 3, two layers were then created. One layer examined the conversational structure of the
patron-submitted questions and was used to track the following: opening phrase, showing interest, sentence type,
and closing phrase. The second layer was used to track patron need: what kind of services or materials patrons
were requesting.
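
The coding itself was done manually in the UAM CorpusTool; purely to make the two-layer scheme concrete, a single annotated utterance might be represented as follows. The field names mirror the description above, and the example values are invented.

    # Illustration of the two-layer scheme applied to one invented patron
    # utterance. The real annotation was done manually in the UAM CorpusTool.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class AnnotatedUtterance:
        text: str
        # Layer 1: conversational structure
        opening_phrase: Optional[str] = None   # e.g., "hi", "hello"
        showing_interest: bool = False         # e.g., asked "how are you?"
        sentence_type: str = "interrogative"   # declarative/interrogative/
                                               # exclamatory/imperative
        closing_phrase: Optional[str] = None   # e.g., "thanks", "bye"
        # Layer 2: patron need, keyed to the website-based Needs scheme
        needs: List[str] = field(default_factory=list)

    example = AnnotatedUtterance(
        text="hi, where can I borrow a laptop?",
        opening_phrase="hi",
        needs=["Services > Borrowing", "Services > Computing"],
    )
    print(example)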
     All data is de-identified upon collection, no personal information is purposely collected, and transcripts
were coded to further remove any possible identifiable information that might have been shared purposefully or
inadvertently by the patron. The data collected by ANTswers may also help other libraries create similar
chatbots and help our library modify our chatbot to better answer patron questions. Data has already been
collected for a future comparison of the chatbot with virtual reference and in-person reference at a reference
desk. Conversation analysis is a branch of sociology that studies the structure and organization of human
interaction, with a more specific focus on conversational interaction; it is the starting point for this initial
analysis of ANTswers and will be the basis for further research.

     Usage Statistics
When planning for the development of the ANTswers chatbot, it was clear that evaluation and continuous
development were going to be key to the success or failure of the conversational agent. The following statistics
were kept for every transcript except demo/test and spam transcripts (a sketch showing how such fields can be
derived from a timestamp follows the list):
         • Date
         • Hour of day
         • Quarter of year
         • Week of quarter
         • Day of week
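
As an illustration only (the real form was filled out during manual review), fields like these can be derived mechanically from a conversation timestamp; the quarter name and start date below are placeholders rather than the actual UCI academic calendar.

    # Sketch: deriving the tracked fields from a conversation timestamp.
    # The quarter name and start date are placeholders, not UCI's calendar.
    from datetime import datetime, date

    def usage_fields(ts, quarter_name, quarter_start):
        """Derive the tracked statistics fields from a timestamp."""
        return {
            "date": ts.date().isoformat(),
            "hour_of_day": ts.hour,
            "quarter_of_year": quarter_name,  # from the academic calendar
            "week_of_quarter": (ts.date() - quarter_start).days // 7 + 1,
            "day_of_week": ts.strftime("%A"),
        }

    print(usage_fields(datetime(2018, 10, 10, 13, 5), "Fall 2018",
                       quarter_start=date(2018, 10, 1)))
    # -> {'date': '2018-10-10', 'hour_of_day': 13,
    #     'quarter_of_year': 'Fall 2018', 'week_of_quarter': 2,
    #     'day_of_week': 'Wednesday'}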
     Figure 2 shows the total number of questions asked by quarter; while most quarters maintain a similar range
in the number of questions asked, summer of 2018 shows a marked decrease. This decrease was due to ANTswers
going offline for three weeks so the code could be updated to work with Library Search, the UCI Libraries' new
Primo discovery layer. Cumulative statistics show that while ANTswers is available 24/7, patrons ask the bulk of
their questions (9,554, 92%) between 8:00 AM and 12:00 AM, while questions asked between 1:00 AM and 7:00 AM
account for only 8% of the total of 10,341 questions. The hours with the highest activity are 1:00 PM with 1,119
questions (11%) and 2:00 PM with 979 questions (10%). ANTswers questions are tracked according to the week in
which they were asked. The highest number of questions is asked at the beginning of the quarter, with week 2 the
highest at 1,682 questions (16%), followed by week 3 with 1,115 (11%) and week 1 with 1,045 (10%). When
evaluating when questions are asked according to the day of the week, Wednesday is the highest with 2,142
questions (21%), followed by Tuesday with 2,095 (20%) and Monday with 1,928 questions (19%).
                                                      FIGURE 2
                                Total Number of Questions Asked by Quarter (n=10,341)

     In addition to general statistics, the total number of questions was tracked via the statistics form related
to each transcript, and questions/statements were evaluated as to whether they were library related or general
conversation. The following statistics were tracked:

    • Total number of questions asked
    • Total number of library related questions asked
    • Total number of library related questions answered
    • Total number of general conversation questions asked
    • Total number of general conversation questions answered
     Through this data, the percentage of library questions answered correctly and the percentage of general
conversation questions answered correctly can be determined. When ANTswers was introduced in the spring of 2014,
the percentage of library questions answered correctly was approximately 39%; by the summer of 2018 it had risen
to 76% (see Figure 3). The growth in the percentage of library related questions being answered correctly is
entirely due to the continuous evaluation and development of the ANTswers backend code. By fixing the code
related to questions being answered incorrectly or not at all, the conversational agent continues to improve;
soon the answer rate should go above 80%. Library related questions include the total of all directional, ready
reference, and research level questions asked of the chatbot. General conversation questions include all other
questions/statements that do not relate to the library.

                                                  FIGURE 3
                      Total Number of Questions Asked by Quarter 2014–2018 (n=10,341)
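
The answer rate itself is a simple ratio of questions answered correctly to questions asked. A sketch with invented per-quarter counts (chosen only to reproduce the 39% and 76% figures mentioned above) looks like this:

    # Sketch of the per-quarter answer-rate calculation. The counts below
    # are invented placeholders, not the real ANTswers data.
    quarterly = {
        # quarter: (library questions asked, library questions answered)
        "Spring 2014": (200, 78),
        "Summer 2018": (100, 76),
    }

    for quarter, (asked, answered) in quarterly.items():
        rate = 100 * answered / asked
        print(f"{quarter}: {rate:.0f}% of library questions answered correctly")
    # Spring 2014: 39% ... Summer 2018: 76%, matching the trend above.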

     Conversational Analysis
Evaluations of in-person reference and online IM reference appear in the library literature quite often, but
there is very little about evaluating conversational agents. Creating ANTswers and including a chatbot alongside
our traditional reference services made us more interested in the language and phrasing library patrons use with
each of the reference services. According to Houlson, McCready, and Pfahl, "qualitative analysis of chat
transcripts can offer more detail to describe and evaluate virtual reference."5 They were referring to online
chat reference, but this type of analysis can be extended to the evaluation of chatbot transcripts. Can we
analyze "how" patrons ask questions of a pre-programmed chatbot, and will it give us insights into how our
patrons approach research? Will what they ask help us to understand where they get stuck in the research
process, and are there trends in when those questions occur? Will understanding how patrons ask questions online
through IM help us to better program the chatbot to increase its answering percentage for library related
questions?
     A conversation is an interactive communication between two or more people. In the case of ANTswers, the
conversation is between one person and a pre-scripted conversational agent. Most human conversations don't
simply begin and end with a single simple and/or complex question. Typical conversations can follow a
predictable pattern with a greeting, showing interest (such as "how are you"), a reason for the conversation,
and then a closing of the conversation. Does a chatbot conversation also follow this predictable pattern? A
total of 2,786 ANTswers transcripts were evaluated for whether or not the patron used a greeting (opening
phrase), showed interest in who they were chatting with, and whether or not they closed the conversation. These
three conversational events were included in a layer scheme with the sentence type and were coded manually using
the UAM CorpusTool 3. The layer was pre-populated with known greetings, closing phrases, and types of sentences.
When a new term was presented in a transcript, that term was added to the layer scheme. Questions that "showed
interest" were marked using that term, and then a secondary analysis was conducted to determine what patrons
were asking ANTswers about.
     With in-person communication we have the benefit of verbal and non-verbal cues to help us interpret and
assign meaning. With written communication such as letters or emails, we have the space to adequately describe
our needs to the other individual(s). With an Instant Message (IM) type system of communication, the patron is
given very little space to describe their need. In addition, because of the natural language processing and
pattern matching used with most chatbots, it is very difficult for a chatbot to determine meaning, especially in
a complicated situation where the variation of one word can change what would be the best overall response.
According to Maness, the language used in "IM conversations is unique in that it is of a more spoken, informal
genre and style than most written forms of communication."6 Upon review of the ANTswers transcripts it was found
that they are written informally; thankfully, patrons did not resort to using IM or text abbreviations when
using the chatbot. Problems also arise when patrons do not stick to short questions and instead input multiple
sentences or short paragraphs. The chatbot has trouble parsing multiple sentences and providing a correct
response.
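
One common mitigation, offered here purely as a sketch and not as a description of what ANTswers actually does, is to split multi-sentence input and hand each sentence to the matcher separately:

    # Sketch of one possible mitigation: split multi-sentence input and let
    # the pattern matcher handle each sentence on its own. This illustrates
    # the idea; it is not how ANTswers necessarily handles such input.
    import re

    def split_sentences(text):
        """Naive sentence splitter on ., ?, and ! boundaries."""
        parts = re.split(r"(?<=[.?!])\s+", text.strip())
        return [p for p in parts if p]

    patron_input = "Hi! I need a book on chemistry. Also, where is the printer?"
    for sentence in split_sentences(patron_input):
        print(sentence)  # each piece would be sent to the matcher separately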

     Opening Phrase
Starting a conversation with a greeting is considered a basic function of communication, and it triggers a
positive emotion; even on your worst days, having someone say "hi" can put you in a better mood. Greetings are a
common courtesy; typically, when you are introduced to someone for the very first time, your greeting will be
the basis of that person's first impression of you. Dempsey states that "greetings are essential for achieving
recognition," and Schegloff and Sacks state that omitting a greeting before asking a question is a strategy for
not starting a conversation.7, 8 This in fact would establish the conversation as transactional rather than
relational. The expectation for ANTswers would be that it would be more transactional in nature, since the GUI
clearly states that it is a computer program. Therefore it is of interest that in the 2,786 ANTswers transcripts
an opening phrase such as hello, hey, hi, etc. was used 460 times, or approximately 17% of the time. The most
frequently used variations were "hello" and "hi," at 31% and 56% respectively. So while the GUI clearly states
this is a computer program, some library patrons still attempt to create a relationship with the computer
program they are chatting with. It could be that some patrons are simply responding to ANTswers' initial
outreach statement, which included "hi" and an introduction to ANTswers being a chatbot.

                    FIGURE 4
              Opening Phrase (n=460)

Showing Interest
While ANTswers was purposely given a personality and a wide range of interests, the fact that patrons were
interested in ANTswers' "life" was unexpected. Styled after the UCI mascot Peter the Anteater, ANTswers loves
all things UCI and anything and everything to do with ants; the chatbot's favorite meal, for example, is ants in
a white wine reduction. A total of 248 transcripts included the patron showing interest in ANTswers by asking a
variety of questions, such as whether the conversational agent is a robot or a human and what ANTswers' favorite
things are (books, movies, food, etc.). Patrons also asked if ANTswers was single and if the conversational
agent would go out with them. By asking about ANTswers, the library patron is attempting to build a greater
connection with the conversational agent. A recent study by Xu et al. on customer service chatbots found that
about 40% of user requests are emotional rather than seeking specific information.9 Of the 248 transcripts that
included questions about ANTswers, 48 (19%) were variations of asking ANTswers "how are you." Approximately 10%
(24) wanted to know what the chatbot's name was, and 9% (23) wanted to know what it does or what it was.
Thirty-three (13%) asked some variation of whether the chatbot was human, a robot, or an anteater. While patrons
would be hesitant to ask library staff their relationship status, or even if they would like to go out with
them, they had no problem asking these questions of ANTswers. The privacy protection of ANTswers not tracking
personal information does seem to support the idea that patrons feel more comfortable asking a computer program
questions that they wouldn't typically feel comfortable asking a person.

                    FIGURE 5
              Showing Interest (n=248)

Type of Questions Asked
As a start for conducting a conversation analysis on the ANTswers transcripts, an examination of the types of
questions asked by library patrons was done as part of the UAM CorpusTool 3 layer one, Conversation_Analysis.
Sentences were coded as being declarative, interrogative, exclamatory, or imperative. Questions/statements that
contained profanity, punctuation, or URLs were coded, along with repetitive questions and whether the patron was
responding to a question asked by ANTswers.

                    FIGURE 6
              Type of Question (n=10,051)
     Declarative sentences are also known as statements and tend to end with a period. These sentences are used
to state or declare something and are not used to ask questions or to give commands; they tend to lack emotion
as well. Interrogative sentences, on the other hand, are used to ask questions or to make a request and usually
end with a question mark. Exclamatory sentences are a forceful version of declarative sentences that convey
excitement or emotion; they end with exclamation marks. Imperative sentences are used to give a command, to
impart instructions, to make a request, or even as a way to offer advice.
     Certain statements or questions, such as opening and closing phrases, were not included in the four
sentence types (declarative, interrogative, exclamatory, or imperative). Since punctuation and URLs are not
sentences, they were tracked but not counted in the four types. Repetitive questions were also excluded, because
they were asked either because the patron did not read ANTswers' response or because the chatbot was unable to
provide an appropriate response. Responses to questions were also excluded, because they were a requirement of
the programming for ANTswers to provide an accurate response. The highest numbers of information requests were
interrogative at 50% and imperative at 29%. Patrons for the most part were asking questions or giving
commands/making demands.
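
Although the sentence types here were coded manually, a rough first-pass heuristic could look like the sketch below; the word lists are invented, and punctuation alone cannot reliably identify imperatives, so manual review would still be needed.

    # Rough first-pass classifier for the four sentence types. The real
    # coding was done manually; this would only be a starting point.
    QUESTION_WORDS = {"who", "what", "when", "where", "why", "how", "is",
                      "are", "do", "does", "can", "could", "will", "would"}
    IMPERATIVE_VERBS = {"find", "show", "give", "tell", "help", "search"}

    def sentence_type(text):
        stripped = text.strip()
        words = stripped.split()
        first = words[0].lower() if words else ""
        if stripped.endswith("?") or first in QUESTION_WORDS:
            return "interrogative"
        if stripped.endswith("!"):
            return "exclamatory"
        if first in IMPERATIVE_VERBS:  # bare leading verb suggests a command
            return "imperative"
        return "declarative"

    print(sentence_type("Where is the printer?"))  # interrogative
    print(sentence_type("Find books on biology"))  # imperative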

Closing Phrase
A closing phrase is used to end verbal and written communications. The types of phrases vary: where we might use
bye or good bye to signal the end of a verbal communication, we might use sincerely, best regards, cordially,
etc. to end a written communication. Since IM is a shortened version of written communication, do patrons close
their conversations with the ANTswers conversational agent? While a greeting was used in 17% of the transcripts,
the use of closing phrases was quite small, showing patrons' comfort level with simply dropping out of the
conversation. Closing words or phrases were used in 142 transcripts, or only 5% of the total number of
transcripts. The most common closing was thank you or thanks, appearing in 122 of the 142 transcripts.

                    FIGURE 7
              Closing Phrase (n=142)

Assessment of Needs
The second layer created in the UAM CorpusTool 3 was a layer to evaluate the needs of the library patron. This
layer was coded manually, and the scheme was developed by utilizing the UCI Libraries website as the structure.
Items not listed originally were added to the scheme when found in the transcripts. The Needs scheme was first
organized into categories such as About (Library), About (UCI), Find, Services, and Subject. Categories were
then narrowed further with subcategories (see Figure 8). Since more than one topic could be broached during a
transcript, a total of 3,536 requests for information were tracked. The highest numbers of requests were in the
Services category, specifically Borrowing at 699 (20%) and Computing at 496 (14%). Patrons next most often asked
about items in the Find category, such as books/eBooks at 398 (11%), and then about hours in the About (Library)
category at 285 (8%).
     Some subcategories were further refined with additional subcategories (see Figure 9). In the Borrowing
subcategory, Checkout (161, 23%) was the most requested kind of assistance, followed by questions about library
cards and IDs (91, 13%). When asking questions about computing, most patrons had questions about the campus VPN
(229, 46%). Instruction/Workshops and Research Advice overall did not receive a lot of questions; in
Instruction/Workshops the highest subcategory was Writing W39C, with 47 questions asked since 2014 (W39C is an
undergraduate writing course that librarians heavily participate in). Research Assistance in Research Advice had
46 questions, followed by requests for research guides at 29. In terms of locations, buildings were asked about
60 times, with study spaces following at 54 questions. When asking about the library, patrons were mostly
interested in policies, with 35 questions asked primarily about having food or eating in the library. Questions
about the overall size of the collection were next with 28 questions, and student employment (working at the
library) had 22 questions.


                                            FIGURE 8
                            Services and Items Requested (n = 3,536)

                                           FIGURE 9A
                            Services and Items Requested (n = 3,536)


                                                  FIGURE 9B
                                   Services and Items Requested (n = 3,536)

Conclusion
Through the analysis of ANTswers transcripts, it was found that our initial assumption that most library patrons
would ask directional and simple questions about library services, locations, and policies was correct. Very few
patrons asked what could be considered in-depth research questions. The number of transcripts that asked no
library related questions at all was fascinating; it seems a number of patrons just want someone to talk to, and
the chatbot serves that role for them. At one point we were considering limiting or removing the general
conversation files altogether to focus more on the library related programming; in hindsight, it was good that
we decided against taking that step. We found that some patrons prefer to treat the ANTswers chatbot as if it
were human and follow the normal steps of conducting a conversation: using opening statements, asking how their
conversational partner is, asking about its life, and using closing statements before leaving. Some patrons, on
the other hand, prefer to just jump in, make a demand or ask a question, and leave as soon as they have an
answer. Future changes to the underlying code need to continue to reflect these two types of user behavior.
     The information we amass through analyzing ANTswers transcripts continues to inform the UCI Libraries.
Because of the number of times patrons have asked about library hours, when it came time to update our
libraries' website we used that information to support placing the library hours in a prominent space on the
main page. Prior to the UCI Libraries' implementation of Primo, we had a library-supported Solr search (Solr is
an open-source search platform written in Java), and ANTswers data was utilized in a tagging project to update
the Solr search to better retrieve the information library patrons asked for. For example, through text-mining
the logs we found that a percentage of patrons use the term "rent" interchangeably with the term "borrow." We
were then able to tag Solr results with both terms to increase the likelihood that patrons get the right
information no matter which term they used.
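
In Solr itself this kind of mapping is normally handled by a synonym filter in the field analysis chain (for example, a synonyms.txt entry pairing the two terms); as a language-agnostic sketch of the same idea, with an invented synonym map:

    # Sketch of the rent/borrow synonym idea. In Solr this would normally
    # live in a synonym filter applied during indexing or query analysis;
    # the map here is a minimal invented example.
    SYNONYMS = {
        "rent": {"borrow"},
        "borrow": {"rent"},
    }

    def expand_query(query):
        """Expand each query token with its synonyms so either term matches."""
        expanded = []
        for token in query.lower().split():
            expanded.append(token)
            expanded.extend(sorted(SYNONYMS.get(token, ())))
        return expanded

    print(expand_query("rent a laptop"))
    # -> ['rent', 'borrow', 'a', 'laptop']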
     There is further research to be conducted on library chatbots and their place in academic library reference
services. Planned future studies include a more in-depth conversation analysis of the ANTswers transcripts, a
sentiment analysis, and an analysis of whether patrons use softeners when asking questions. Other interesting
research would be to analyze whether patrons ask inappropriate questions of the chatbot more often than in a
traditional chat reference service, or whether patrons come to use more text/IM/message board abbreviations over
time. In this paper we have looked at the types of questions/statements patrons used to gain information; would
looking at the language with which those questions/statements were started help us gain further insight?
Finally, the type of language used with ANTswers could be compared to the language used at the reference desk
and through QuestionPoint chat.

     Endnotes
1.   Michele L. McNeal and David Newyear, “Introducing Chatbots in Libraries,” Library Technology Reports 49, no. 8 (November/December 2013): 5-10, https://www.journals.ala.org/index.php/ltr/article/view/4504/5281.
2.   DeeAnn Allison, “Chatbots in the Library: Is It Time?,” Library Hi Tech 30, no. 1 (2012): 95-107, doi:10.1108/07378831211213238.
3.   Victoria L. Rubin, Yimin Chen, and Lynne Marie Thorimbert, “Artificially Intelligent Conversational Agents in Libraries,” Library Hi Tech 28, no. 4 (2010): 496-522, doi:10.1108/07378831011096196.
4.   Eric Novotny, “Evaluating Electronic Reference Services,” The Reference Librarian 35, no. 74 (2001): 103-120, doi:10.1300/J120v35n74_08.
5.   Van Houlson, Kate McCready, and Carla Steinberg Pfahl, “A Window into Our Patron’s Needs,” Internet Reference Services Quarterly 11, no. 4 (2007): 19-39, doi:10.1300/J136v11n04_02.
6.   Jack M. Maness, “A Linguistic Analysis of Chat Reference Conversations with 18-24 Year-Old College Students,” The Journal of Academic Librarianship 34, no. 1 (2008): 31-38, doi:10.1016/j.acalib.2007.11.008.
7.   Paula R. Dempsey, “Are You a Computer? Opening Exchanges in Virtual Reference Shape the Potential for Teaching,” College and Research Libraries 77, no. 4 (2016): 455-467, doi:10.5860/crl.77.4.455.
8.   Emanuel A. Schegloff and Harvey Sacks, “Opening up Closings,” Semiotica 8, no. 4 (1973): 289-327, doi:10.1515/semi.1973.8.4.289.
9.   Anbang Xu, Zhe Liu, Yufan Guo, Vibha Sinha, and Rama Akkiraju, “A New Chatbot for Customer Service on Social Media,” in Proceedings of the ACM Conference on Human Factors in Computing Systems, CHI ’17 (New York: ACM, 2017): 3506-3510, doi:10.1145/3025453.3025496.
