Program - The University of Sydney

 
Program
Wednesday
 07:30 - 08:30   Level 1 Foyer   Registration and breakfast
 08:30 - 09:00   Grand Lodge     Introduction
 09:00 - 10:00   Grand Lodge     Keynote
                                 Prof. David Williamson Shaffer (University of Wisconsin-Madison, USA)

 10:00 - 10:30   Banquet Hall    Coffee Break
 10:30 - 12:00   Parallel Sessions 1
                 Grand Lodge     Keynote Q&A and Panel Session 1A
                                  10:30 - 11:00 Session 1A1    Q&A Keynote
                                                               Prof. David Williamson Shaffer (University of Wisconsin-Madison, USA)

                                  11:00 - 12:00 Session 1A2    Panel 1: How can Learning Analytics contribute to a wider notion of student success? (Chair: Stephanie
                                                               Teasley, SoLAR President)
                                                               Panellists: Prof Pip Pattison (DVC Education, The University of Sydney, Australia), Prof Shirley Alexander
                                                               (DVC and Vice-President Education and Students, University of Technology Sydney, Australia), Prof
                                                               Timothy McKay (College of Literature, Science and the Arts, University of Michigan, USA), Prof Belinda
                                                               Tynan (DVC Education and Vice-President, RMIT University, Australia), Prof Dragan Gašević (Monash
                                                               University)

                                                               Despite the future-gazers’ hype around Learning Analytics, everything we know about technology
                                                               adoption reminds us that it is very human factors such as staff skills, work processes, and organisational
                                                               incentives that determine whether digital innovations deliver real change and improvement. This panel
                                                               will discuss the role that university leadership plays, not only in fostering Learning Analytics innovation,
                                                               but also in sustaining its impact when considering a wider conception of student success.
Doric   Evaluation & Feedback. Session 1B
         10:30 - 11:00 Session 1B1   The Half-Life of MOOC Knowledge: A Randomized Trial Evaluating Knowledge Retention and Retrieval
                                     Practice in MOOCs

                                     Full research paper
                                     Daniel Davis (Delft University of Technology, Netherlands)
                                     Rene Kizilcec (Stanford University, USA)
                                     Claudia Hauff (Delft University of Technology, Netherlands)
                                     Geert-Jan Houben (Delft University of Technology, Netherlands)

                                     Retrieval practice has been established in the learning sciences as one of the most effective strategies to
                                     facilitate robust learning in traditional classroom contexts. The cognitive theory underpinning the "testing
                                     effect" states that actively recalling information is more effective than passively revisiting materials for
                                     encoding information to long-term memory. This paper documents the design, development, deployment,
                                     and evaluation of an Adaptive Retrieval Practice System (ARPS) in a MOOC. To leverage the testing
                                     effect in promoting MOOC learners' achievement and engagement, the push-based system intelligently
                                     delivered quiz questions from prior course units to learners throughout the course. We conducted
                                     an experiment in which learners were randomized to receive ARPS in a MOOC to investigate their
                                     performance and behavior compared to a control group. We find that (i) in our MOOC setting - and in
         11:00 - 11:15 Session 1B2   [Best Short Research Paper Nomination] Graph-based Visual Topic Dependency Models: Supporting
                                     Assessment Design and Delivery at Scale

                                     Short research paper
                                     Kendra Cooper (Independent, Canada)
                                     Hassan Khosravi (The University of Queensland, Australia)

                                     Educational environments continue to rapidly evolve to address the needs of diverse, growing student
                                      populations, while embracing advances in pedagogy and technology. In this changing landscape,
                                     ensuring the consistency among the assessments for different offerings of a course (within or across
                                     terms), providing meaningful feedback about students' achievements, and tracking students' progression
                                     over time are all challenging tasks, particularly at scale. Here, a collection of visual Topic Dependency
                                     Models (TDMs) is proposed to help address these challenges. It visualises the required topics and their
                                     dependencies at a course level (e.g., CS 100) and assessment achievement data at the classroom
                                     level (e.g., students in CS 100 Term 1 2016 Section 001) both at one point in time (static) and over
                                     time (dynamic). The collection of TDMs share a common, two-weighted graph foundation. An algorithm
                                     is presented to create a TDM (static achievement for a cohort). An open-source, proof of concept
                                     implementation of the TDMs is under development; the current version is described briefly in terms of its
                                     support for visualising existing (historical, test) and synthetic data generated on demand.
11:15 - 11:30 Session 1B3    Data-driven Generation of Rubric Criteria from an Educational Programming Environment
                             Short research paper

                             Nicholas Diana (Carnegie Mellon University, USA)
                             Michael Eagle (Carnegie Mellon University, USA)
                             John Stamper (Carnegie Mellon University, USA)
                             Shuchi Grover (SRI International, USA)
                             Marie Bienkowski (SRI International, USA)
                             Satabdi Basu (SRI International, USA)

                             We demonstrate that, by using a small set of hand-graded student work, we can automatically generate
                             rubric criteria with a high degree of validity, and that a predictive model incorporating these rubric
                             criteria is more accurate than a previously reported model. We present this method as one approach
                             to addressing the often challenging problem of grading assignments in programming environments.
                             A classic solution is creating unit-tests that the student-generated program must pass, but the rigid,
                             structured nature of unit-tests is suboptimal for assessing the more open-ended assignments students
                             encounter in introductory programming environments like Alice. Furthermore, the creation of unit-tests
                             requires predicting the various ways a student might correctly solve a problem -- a challenging and time-
                             intensive process. The current study proposes an alternative, semi-automated method for generating
                             rubric criteria using low-level data from the Alice programming environment.
11:30 - 11:45 Session 1B4    Supporting Teachers' Intervention in Students' Virtual Collaboration Using a Network Based Model
                             Short research paper

                             Tiffany Herder (University of Wisconsin-Madison, USA)
                             Zachari Swiecki (University of Wisconsin-Madison, USA)
                             Simon Skov Fougt (University College Metropol, Denmark)
                             Andreas Lindenskov Tamborg (Aalborg University, Denmark)
                             Benjamin Brink Allsopp (Aalborg University, Denmark)
                             David Williamson Shaffer (University of Wisconsin-Madison, USA)
                              Morten Misfeldt (Aalborg University, Denmark)

                             This paper reports a Design-Based Research project developing a tool (the Process Tab) that supports
                             teachers’ meaningful interventions with students when they work in virtual internships. The tool uses a
                             networked approach to learning and allows insights into the discourse of groups and individuals based
                              on their written contributions in chat fora and assignments. In the paper, we present the tool and report
                              findings from an interview study with three teachers who used the tool during a 3-6 week virtual
                              internship. The interviews provide insights into the teachers' hopes, actual use, and difficulties with the
                              tool. The main insight is that even though the teachers genuinely liked the idea of the Process Tab and
                              the specific representations that it contains, their inability to teach and monitor the Process Tab at the
                              same time hindered their use of the tool. In the final part of the paper, we discuss how
                             this issue can be addressed.
11:45 - 12:00 Session 1B5   Correlating Affect and Behavior in Reasoning Mind with State Test Achievement
                                          Short research paper

                                          Victor Kostyuk (Reasoning Mind, USA)
                                          Ma. Victoria Almeda (Columbia University, USA)
                                          Ryan Baker (University of Pennsylvania, USA)

                                          Previous studies have investigated the relationship between affect, behavior, and learning in blended
                                          learning systems. These articles have found that affect and behavior are closely linked with learning
                                          outcomes. In this paper, we attempt to replicate prior work on how affective states and behaviors relate
                                          to mathematics achievement, investigating these issues within the context of 5th-grade students in South
                                          Texas using a mathematics blended learning system, Reasoning Mind. We use automatic detectors
                                          of student behavior and affect, and correlate inferred rates of each behavior and affective state with
                                          the students' end-of-year standardized assessment score. A positive correlation between engaged
                                          concentration and test scores replicates previous studies, as does a negative correlation between
                                          boredom and test scores. However, our findings differ from previous findings relating to confusion,
                                          frustration, and off-task behavior, suggesting the importance of contextual factors for the relationship
                                          between behavior, affect, and learning. Our study represents a step in understanding how broadly
                                          findings on the relationships between affect/behavior and learning generalize across different learning
                                          platforms.
Corinthian   Dashboards. Session 1C
             10:30 - 11:00. Session 1C1   [Best Full Research Paper Nomination] License to Evaluate: Preparing Learning Analytics Dashboards
                                          for Educational Practice
                                          Full research paper

                                          Ioana Jivet (Open University of the Netherlands, Netherlands)
                                          Maren Scheffel (Open University of the Netherlands, Netherlands)
                                          Marcus Specht (Open University of the Netherlands, Netherlands)
                                          Hendrik Drachsler (Goethe University Frankfurt/DIPF, Germany)

                                          Learning analytics can bridge the gap between the learning sciences and data analytics, leveraging the
                                          expertise of both fields in exploring the vast amount of data generated in online learning environments.
                                          A widespread learning analytics intervention is the learning dashboard, a visualisation tool built with the
                                          purpose of empowering teachers and learners to make informed decisions about their learning process.
                                          Several related works have investigated the field of learning dashboards, yet none have explored
                                          the theoretical foundation that should inform the design and evaluation of such interventions. In this
                                          systematic literature review, we analyse the extent to which theories and models from learning sciences
                                          have been integrated into the development of learning dashboards aimed at learners. Our analysis
                                          reveals that very few dashboards conduct evaluations that take into account the educational concepts
                                          they used as a theoretical foundation for their design, and we propose ways of incorporating research
                                          from learning sciences into learning analytics dashboard research. We find contradictory evidence that
                                          comparison with peers, a common reference frame for contextualising information on learning analytics
                                          dashboards, is perceived positively by all learners.
11:00 - 11:30. Session 1C2   Open Learner Models and Learning Analytics Dashboards: A Systematic Review
                             Full research paper

                             Robert Bodily (Brigham Young University, USA)
                             Judy Kay (The University of Sydney, Australia)
                             Vincent Aleven (Carnegie Mellon University, USA)
                             Daniel Davis (Delft University of Technology, Netherlands)
                             Ioana Jivet (Open University of the Netherlands, Netherlands)
                             Franceska Xhakaj (Carnegie Mellon University, USA)
                             Katrien Verbert (Katholieke Universiteit Leuven, Belgium)

                              This paper aims to link student-facing Learning Analytics Dashboards (LADs) to the corpus of research
                             on Open Learner Models (OLMs), as both have similar goals. We conducted a systematic review of
                             literature on OLMs and compared the results with a previously conducted review of LADs for learners in
                             terms of (i) data use and modelling, (ii) key publication venues, (iii) authors and articles, (iv) key themes,
                             and (v) system evaluation. We highlight the similarities and differences between the research on LADs
                             and OLMs. Our key contribution is a bridge between these two areas as a foundation for building upon
                             the strengths of each. We report the following key results from the review: in reports of new OLMs,
                             almost 60% are based on a single type of data; 33% use behavioral metrics; 39% support input from
                             the user; 37% have complex models; and just 6% involve multiple applications. Key associated themes
                             include intelligent tutoring systems, learning analytics, and self-regulated learning. Notably, compared
                             with LADs, OLM research is more likely to be interactive (81% of papers compared with 31% for LADs),
                             report evaluations (76% versus 59%), use assessment data (100% versus 37%), provide a comparison
                             standard for students (52% versus 38%), but less likely to use behavioral metrics, or resource use data
                             (33% against 75% for LADs). In OLM work, there was a heightened focus on learner control and access
                             to their own data.
11:30 - 11:45. Session 1C3   Multi-institutional Positioning Test Feedback Dashboard for Aspiring Students: Lessons Learnt from a
                             Case Study in Flanders
                             Short research paper

                             Tom Broos (Katholieke Universiteit Leuven, Belgium)
                             Katrien Verbert (Katholieke Universiteit Leuven, Belgium)
                             Greet Langie (Katholieke Universiteit Leuven, Belgium)
                             Carolien Van Soom (Katholieke Universiteit Leuven, Belgium)
                             Tinne De Laet (Katholieke Universiteit Leuven, Belgium)

                              Our work focuses on a multi-institutional implementation and evaluation of a Learning Analytics
                              Dashboard (LAD) at scale, providing feedback to N=337 aspiring STEM (science, technology,
                              engineering and mathematics) students participating in a region-wide positioning test before entering the
                              study program. Study advisors were closely involved in the design and evaluation of the dashboard. The
                              multi-institutional context of our case study requires careful consideration of external stakeholders and
                              data ownership and portability issues, which gives shape to the technical design of the LAD. Our
                              approach confirms students as active agents with data ownership, using an anonymous feedback code
                              to access the LAD and to enable students to share their data with institutions at their discretion. Other
                              distinguishing features of the LAD are the support for active content contribution by study advisors and
                              LaTeX typesetting of question item feedback to enhance visual recognizability. We present our lessons
                              learnt from a first iteration in production.
11:45 - 12:00. Session 1C4   A Qualitative Evaluation of a Learning Dashboard to Support Advisor-Student Dialogues
                             Short research paper

                             Martijn Millecamp (Katholieke Universiteit Leuven, Belgium)
                             Francisco Gutierrez (Katholieke Universiteit Leuven, Belgium)
                             Sven Charleer (Katholieke Universiteit Leuven, Belgium)
                             Katrien Verbert (Katholieke Universiteit Leuven, Belgium)
                             Tinne De Laet (Katholieke Universiteit Leuven, Belgium)

                             This paper presents an evaluation of a learning dashboard that supports the dialogue between a student
                             and a study advisor. The dashboard was designed, developed, and evaluated in collaboration with study
                             advisers. To ensure scalability to other contexts, the dashboard uses data that is commonly available at
                             any higher education institute. It visualizes the grades of the student, an overview of the progress through
                             the year, his/her position in comparison with peers, sliders to plan the next years and a prediction of
                              the length of the bachelor program for this student in years, based on historic data. The dashboard was
                              deployed at a large university in Europe and used in September 2017 to support 224 sessions between
                             students and study advisers. We observed twenty of these conversations, and collected feedback from
                             students with questionnaires (N=101). Results of our observations indicate that the dashboard primarily
                             triggers insights at the beginning of a conversation. The number of insights and the level of these insights
                              (factual, interpretative, and reflective) depends on the context of the conversation. Most insights were
                              triggered in conversations with students doubting whether to continue the program, indicating that our
                              dashboard is useful for supporting difficult decision-making processes.
Northcott   Retention I. Session 1D
            10:30 - 11:00. Session 1D1   Meta-Predictive Retention Risk Modeling: Risk Model Readiness Assessment at Scale with X-Ray
                                         Learning Analytics
                                         Full practitioner paper

                                         Aleksander Dietrichson (Blackboard Inc, Argentina)
                                         Diego Forteza (Blackboard Inc, Uruguay)

                                          Deploying X-Ray Learning Analytics at scale presented the challenge of delivering customized retention
                                          risk models to a host of new clients. Prior findings led the researchers to believe that it was necessary
                                          to create customized risk models for each institution, but doing so was difficult with the limited
                                          resources at their disposal. It quickly became clear that usage patterns detected in the Learning
                                         Management System (LMS) were predictive of the later success of the risk model deployments. This
                                         paper describes how a meta-predictive model to assess clients' readiness for a retention risk model
                                         deployment was developed. The application of this model avoids deployment where not appropriate. It
                                         is also shown how significance tests applied to density distributions can be used in order to automate
                                         this assessment. A case study is presented with data from two current clients to demonstrate the
                                         methodology.
            11:00 - 11:30. Session 1D2   A Generalized Classifier to Identify Online Learning Tool Disengagement at Scale
                                         Full research paper

                                         Jacqueline Feild (McGraw-Hill Education, USA)
                                         Nicholas Lewkow (McGraw-Hill Education, USA)
                                         Sean Burns (Colorado State University, USA)
                                         Karen Gebhardt (Colorado State University, USA)

                                          Student success is a major focus in higher education and success, in part, requires students to remain
                                          actively engaged in the required coursework. Identifying student disengagement at scale has been a
                                          continuing challenge for higher education due to the heterogeneity of traditional college courses. This
                                          research uses data from a widely used online learning tool to build a classifier to identify learning tool
                                          disengagement at scale. This classifier was trained and tested on 4 years of historical data representing
                                          4.5 million students in 175,000 courses, across 256 disciplines. Results show that the classifier is effective
                                          in identifying disengagement within the online learning tool against baselines, across time, and within and
                                          across disciplines. The classifier was also effective in identifying students at risk of disengaging from the
                                          online learning tool and then earning unsuccessful grades in a pilot course where the assignments in the
                                          online learning tool were worth a relatively small portion of the overall course grade. Because this online
                                          learning tool is widely used, this classifier is positioned to be a good tool for instructors and institutions
                                          to use to help identify students at risk for disengagement from coursework. Instructors and institutions
11:30 - 12:00. Session 1D3   Using the MOOC Replication Framework to Examine Course Completion
                                                            Full research paper

                                                            Juan Miguel Andres (University of Pennsylvania, USA)
                                                            Ryan Baker (University of Pennsylvania, USA)
                                                            Dragan Gašević (Monash University, Australia & The University of Edinburgh, UK)
                                                            George Siemens (University of Texas at Arlington, USA)
                                                            Scott Crossley (Georgia State University, USA)
                                                            Srećko Joksimović (University of South Australia, Australia)

                                                            Research on learner behaviors and course completion within Massive Open Online Courses (MOOCs)
                                                            has been mostly confined to single courses, making the findings difficult to generalize across different
                                                            data sets and to assess which contexts and types of courses these findings apply to. This paper reports
                                                            on the development of the MOOC Replication Framework (MORF), a framework that facilitates the
                                                            replication of previously published findings across multiple data sets and the seamless integration of
                                                            new findings as new research is conducted or new hypotheses are generated. In the proof of concept
                                                            presented here, we use MORF to attempt to replicate 15 previously published findings across 29
                                                            iterations of 17 MOOCs. The findings indicate that 12 of the 15 findings replicated significantly across the
                                                            data sets. Results contradicting previously published findings were found in two cases. MORF enables
                                                            larger-scale analysis of MOOC research questions than previously feasible, and enables researchers
                                                            around the world to conduct analyses on huge multi-MOOC data sets without having to negotiate access
                                                            to data.
12:00 - 13:00   Banquet Hall   Lunch
13:00 - 14:30   Parallel Sessions 2
                Grand Lodge     User-Centered Design I. Session 2A
                                 13:00 - 13:30. Session 2A1   The Classroom as a Dashboard: Co-designing Wearable Cognitive Augmentation for K-12 Teachers
                                                              Full research paper

                                                              Kenneth Holstein (Carnegie Mellon University, USA)
                                                              Gena Hong (Carnegie Mellon University, USA)
                                                              Mera Tegene (Carnegie Mellon University, USA)
                                                              Bruce McLaren (Carnegie Mellon University, USA)
                                                              Vincent Aleven (Carnegie Mellon University, USA)

                                                              When used in classrooms, personalized learning software allows students to work at their own
                                                              pace, while freeing up the teacher to spend more time working one-on-one with students. Yet such
                                                              personalized classrooms also pose unique challenges for teachers, who are tasked with monitoring
                                                              classes working on divergent activities, and prioritizing help-giving in the face of limited time. This paper
                                                              reports on the co-design, implementation, and evaluation of a wearable classroom orchestration tool for
                                                              K-12 teachers: mixed-reality smart glasses that augment teachers’ real-time perceptions of their students’
                                                              learning, metacognition, and behavior, while students work with personalized learning software. The
                                                              main contributions are: (1) the first exploration of the use of smart glasses to support orchestration of
                                                              personalized classrooms, yielding design findings that may inform future work on real-time orchestration
                                                              tools; (2) Replay Enactments: a new prototyping method for real-time orchestration tools; and (3) an in-
                                                              lab evaluation and classroom pilot using a prototype of teacher smart glasses (Lumilo), with early findings
                                                              suggesting that Lumilo can direct teachers’ time to students who may need it most.
13:30 - 14:00. Session 2A2   An Application of Participatory Action Research in Advising-Focused Learning Analytics
                             Full research paper

                             Stefano Fiorini (Indiana University Bloomington, USA)
                             Adrienne Sewell (Indiana University Bloomington, USA)
                             Mathew Bumbalough (Indiana University Bloomington, USA)
                             Pallavi Chauhan (Indiana University Bloomington, USA)
                             Linda Shepard (Indiana University Bloomington, USA)
                             George Rehrey (Indiana University Bloomington, USA)
                             Dennis Groth (Indiana University Bloomington, USA)

                             Advisors assist students in developing successful course pathways through the curriculum. The purpose
                             of this project is to augment advisor institutional and tacit knowledge with knowledge from predictive
                             algorithms (i.e., Matrix Factorization and Classifiers) specifically developed to identify risk. We use
                             a participatory action research approach that directly involves key members from both advising and
                             research communities in the assessment and provisioning of information from the predictive analytics.
                             The knowledge gained from predictive algorithms is evaluated using a mixed method approach. We first
                             compare the predictive evaluations with advisors’ evaluations of student performance in courses and
                             actual outcomes in those courses. We next expose and classify advisor knowledge of student risk and
                             identify ways to enhance the value of the prediction model. The results highlight the contribution that this
                             collaborative approach can give to the constructive integration of Learning Analytics in higher education
                             settings.

14:00 - 14:15. Session 2A3   [Best Short Research Paper Nomination] Co-Creation Strategies for Learning Analytics
                             Short research paper

                             Mollie Dollinger (The University of Melbourne, Australia)
                             Jason Lodge (The University of Melbourne, Australia)

                             In order to further the field of learning analytics (LA), researchers and experts may need to look beyond
                             themselves and their own perspectives and expertise to innovate LA platforms and interventions. We
                             suggest that by co-creating with the users of LA, such as educators and students, researchers and
                             experts can improve the usability and usefulness of, and draw greater understanding from, LA interventions.
                             Within this article, we discuss the current LA issues and barriers and how co-creation strategies can
                             help address many of these challenges. We further outline the considerations, both before and during
                             interventions, that support and foster a co-created strategy for learning analytics interventions.
14:15 - 14:30. Session 2A4   Considering Context and Comparing Methodological Approaches in Implementing Learning Analytics at
                             the University of Victoria
                             Short practitioner paper

                             Sarah K. Davis (University of Victoria, Canada)
                             Rebecca L. Edwards (University of Victoria, Canada)
                             Mariel Miller (University of Victoria, Canada)
                             Janni Aragon (University of Victoria, Canada)

                             One of the gaps in the field of learning analytics is the lack of clarity about how the move is made from
                             researching the data to optimizing learning (Ferguson & Clow, 2017). Thus, this practitioner report details
                             the implementation process undertaken from the data stage to the metrics stage of the learning analytics
                             cycle (Clow, 2012). Five anonymized secondary data sets consisting solely of LMS interaction data from
                             undergraduate courses at a large Canadian research university will be analyzed in the fall of
                             2017. Specifically, this study (a) provides context for the individual data sets through a survey tool taken
                             by the instructors of the course, and (b) compares machine learning techniques and statistical analyses
                             to provide information on how different approaches to analyzing the data can inform the learning process.
                             Findings from this study will inform the adoption of learning analytics at the institution and contribute to
                             the larger learning analytics community by detailing the methods compared in this report.
Doric   Discourse I: General. Session 2B
        13:00 - 13:30. Session 2B1   Profiling Students from Their Questions in a Blended Learning Environment
                                     Full research paper

                                     Fatima Harrak (LIP6 - Université Pierre et Marie Curie, France)
                                     François Bouchet (LIP6 - Université Pierre et Marie Curie, France)
                                     Vanda Luengo (LIP6 - Université Pierre et Marie Curie, France)
                                     Pierre Gillois (Université de Grenoble, France)

                                     Many approaches have been proposed to analyze learners’ questions to improve their level and help
                                     teachers in addressing them. The present study investigated questions asked by 1st year medicine/
                                     pharmacy students in a blended learning flipped classroom context. The questions (N=6457) were asked
                                     before the class on an online platform to help professors prepare their Q&A session. Our long-term
                                     objective is to help professors in categorizing those questions and potentially to provide students with
                                     feedback on the quality of their questions. To do so, first we present the manual process of categorization
                                     of students’ questions, which led to a taxonomy then used for an automatic annotation of the whole
                                     corpus. Based on this annotated corpus, we used the K-Means algorithm over four courses to identify
                                     students’ characteristics from the typology of questions they asked. The students were clustered by the
                                     proportion of questions they asked in each dimension of the taxonomy. Then, we characterized the
                                     clusters by attributes not used for clustering such as the students’ grade, the attendance, the number of
                                     questions asked and the number of votes their questions received. Across the four courses considered,
                                     two similar clusters always appeared: a cluster (A), made of students with grades lower than average,
                                     attending classes less, and asking few questions that are nevertheless particularly popular; and a
                                     cluster (D), made of students with higher grades and high attendance, asking more questions that are
                                     less popular. This work demonstrates the validity and the usefulness of our taxonomy, and shows the
                                     relevance of this classification to identify different students’ profiles.
13:30 - 14:00. Session 2B2   Recurrence Quantification Analysis as a Method for Studying Text Comprehension Dynamics
                             Full research paper

                             Aaron Likens (Arizona State University, USA)
                             Kathryn McCarthy (Arizona State University, USA)
                             Laura Allen (Mississippi State University, USA)
                             Danielle McNamara (Arizona State University, USA)

                             Self-explanations are commonly used to assess on-line reading comprehension processes. However,
                             traditional methods of analysis ignore important temporal variations in these explanations. This study
                             investigated how dynamical systems theory could be used to reveal linguistic patterns that are predictive
                             of self-explanation quality. High school students (n = 232) generated self-explanations while they read
                             a science text. Recurrence Plots were generated to show qualitative differences in students’ linguistic
                             sequences that were later quantified by indices derived by Recurrence Quantification Analysis (RQA).
                             To predict self-explanation quality, RQA indices, along with summative measures (i.e., number of words,
                             mean word length, and type-token ratio) and general reading ability, served as predictors in a series
                             of regression models. Regression analyses indicated that recurrence in students’ self-explanations
                             significantly predicted human rated self-explanation quality, even after controlling for summative
                             measures of self-explanations, individual differences, and the text that was read (R2 = 0.68). These
                             results demonstrate the utility of RQA in exposing and quantifying temporal structure in students’ self-
                             explanations. Further, they imply that dynamical systems methodology can be used to uncover important
                             processes that occur during comprehension.
14:00 - 14:15. Session 2B3   [Best Short Research Paper Nomination] Towards a Writing Analytics Framework for Adult English
                             Language Learners

                             Short research paper

                             Amna Liaqat (University of Toronto, Canada)
                             Cosmin Munteanu (University of Toronto, Canada)

                             Improving the written literacy of newcomers to English-speaking countries can lead to better education,
                             employment, or social integration opportunities. However, this remains a challenge in traditional
                             classrooms where providing frequent, timely, and personalized feedback is not always possible. Analytics
                             can scaffold the writing development of English Language Learners (ELLs) by providing such feedback.
                             To design these analytics, we conducted a field study analyzing essay samples from immigrant adult
                             ELLs (a group often overlooked in writing analytics research) and identifying their epistemic beliefs and
                             learning motivations. We identified common themes across individual learner differences and patterns of
                             errors in the writing samples. The study revealed strong associations between epistemic writing beliefs
                             and learning strategies. The results are used to develop guidelines for designing writing analytics for
                             adult ELLs, and to propose several analytics that scaffold writing development for this group.
14:15 - 14:30. Session 2B4   Epistemic Network Analysis of Students’ Longer Written Assignments as Formative/Summative
                                          Evaluation
                                          Short research paper

                                          Simon Skov Fougt (Metropolitan University College, Denmark)
                                          Amanda Siebert-Evenstone (University of Wisconsin-Madison, USA)
                                          Brendan Eagan (University of Wisconsin-Madison, USA)
                                          Sara Tabatabai (University of Wisconsin-Madison, USA)
                                          Morten Misfeldt (Aalborg University, Denmark)

                                          This paper investigates a method of developing pedagogical visualizations of student written assignments
                                          using keyword matching and Epistemic Network Analysis (ENA) on 16 student teachers’ longer written
                                          assignments on literary analysis of fictional texts. The visualizations are aimed at summative evaluation
                                          as a tool for the professor to support assessment and understanding of subject learning. We applied two
                                          sets of keywords: the first set of 8 was general; the second set, also of 8, focused on specific literary
                                          analysis concepts. Both results show that ENA can visually distinguish low-, middle- and high-performing
                                          students, although not all differences were statistically significant. Thus, our learning analytics trial provides a tool that
                                          supports understanding subject learning.
Corinthian   Dashboards, Learning Design & Video. Session 2C
             13:00 - 13:30. Session 2C1   Driving Data Storytelling from Learning Design
                                          Full research paper

                                          Vanessa Echeverria (University of Technology Sydney, Australia)
                                          Roberto Martinez-Maldonado (University of Technology Sydney, Australia)
                                          Roger Granda (Centro de Tecnologías de Información, Ecuador)
                                          Katherine Chiluiza (Escuela Superior Politécnica del Litoral, ESPOL, Ecuador)
                                          Cristina Conati (The University of British Columbia, Canada)
                                          Simon Buckingham Shum (University of Technology Sydney, Australia)

                                          Data science is now impacting the education sector, with a growing number of commercial products
                                          and research prototypes providing learning dashboards. From a human-centred computing perspective,
                                          the end-user’s interpretation of these visualisations is a critical challenge to design for, with empirical
                                          evidence already showing that ‘usable’ visualisations are not necessarily effective from a learning
                                          perspective. Since an educator’s interpretation of visualised data is essentially the construction of a
                                          narrative about student progress, we draw on the growing body of work on Data Storytelling (DS) as
                                          the inspiration for a set of enhancements that could be applied to data visualisations to improve their
                                          communicative power. We present a pilot study that explores the effectiveness of these DS elements
                                          based on educators’ responses to paper prototypes. The dual purpose is to understand the contribution
                                          of each visual element to data storytelling, and the effectiveness of the enhancements when combined.
13:30 - 14:00. Session 2C2   [Best Full Research Paper Nomination] Linking Students’ Timing of Engagement to Learning Design and
                             Academic Performance
                             Full research paper

                             Quan Nguyen (Open University UK, UK)
                             Michal Huptych (Open University UK, UK)
                             Bart Rienties (Open University UK, UK)

                             In recent years, the connection between Learning Design (LD) and Learning Analytics (LA) has been
                             emphasized by many scholars as it could enhance our interpretation of LA findings and translate them
                             to meaningful interventions. Together with numerous conceptual studies, a gradual accumulation of
                             empirical evidence has indicated a strong connection between how instructors design for learning and
                             student behaviour. Nonetheless, students’ timing of engagement and its relation to LD and academic
                             performance have received limited attention. Therefore, this study investigates to what extent students’
                             timing of engagement aligned with instructor learning design, and how engagement varied across
                             different levels of performance. The analysis was conducted over 28 weeks using trace data from 387
                             students, and replicated over two semesters in 2015 and 2016. Our findings revealed a mismatch
                             between how instructors designed for learning and how students studied in reality. In most weeks,
                             students spent less time studying the assigned materials on the VLE compared to the number of hours
                             recommended by instructors. The timing of engagement also varied, from in advance to catching up
                             patterns. High-performing students spent more time studying in advance, while low-performing students
                             spent a higher proportion of their time on catching-up activities. This study reinforced the importance of
                             pedagogical context to transform analytics into actionable insights.
14:00 - 14:30. Session 2C3   Video and Learning: A Systematic Review (2007-2017)
                             Full research paper

                             Oleksandra Poquet (University of South Australia, Australia)
                             Lisa Lim (University of South Australia, Australia)
                             Negin Mirriahi (University of South Australia, Australia)
                             Shane Dawson (University of South Australia, Australia)

                             Video materials have become an integral part of university learning and teaching practice. While
                             empirical research concerning the use of videos for educational purposes has increased, the literature
                             lacks an overview of the specific effects of videos on diverse learning outcomes. To address such a gap,
                             this paper presents preliminary results of a large-scale systematic review of peer-reviewed empirical
                             studies published from 2007-2017. The study synthesizes the trends observed through the analysis
                             of 178 papers selected from the screening of 2531 abstracts. The findings summarize the effects of
                             manipulating video presentation, content, and tasks on learning outcomes such as recall, transfer,
                             and academic achievement, among others. The study points out the gap between large-scale analysis of
                             fine-grained data on video interaction and experimental findings reliant on established psychological
                             instruments. Narrowing this gap is suggested as the future direction for the research of video-based
                             learning.
14:30 - 15:00   Banquet Hall    Coffee Break
15:00 - 16:30   Parallel Sessions 3
                Grand Lodge     Performance Prediction. Session 3A
                                 15:00 - 15:30. Session 3A1   [Best Full Research Paper Nomination] Using Embedded Formative Assessment to Predict State
                                                              Summative Test Scores
                                                              Full research paper

                                                              Stephen Fancsali (Carnegie Learning, Inc., USA)
                                                              Guoguo Zheng (University of Georgia, USA)
                                                              Yanyan Tan (University of Georgia, USA)
                                                              Steven Ritter (Carnegie Learning, Inc., USA)
                                                              Susan Berman (Carnegie Learning, Inc., USA)
                                                              April Galyardt (Carnegie Mellon University, USA)

                                                              If we wish to embed assessment for accountability within instruction, we need to better understand
                                                              the relative contribution of different types of learner data to statistical models that predict scores on
                                                              assessments used for accountability purposes. The present work scales up and extends predictive
                                                              models of math test scores from existing literature and specifies six categories of models that incorporate
                                                              information about student prior knowledge, socio-demographics, and performance within the MATHia
                                                              intelligent tutoring system. Linear regression and random forest models are learned within each category
                                                              and generalized over a sample of 23,000+ learners in Grades 6, 7, and 8 over three academic years
                                                              in a large school district in Florida. After briefly exploring hierarchical models of this data, we discuss
                                                              a variety of technical and practical applications, limitations, and open questions related to this work,
                                                               especially concerning the potential use of instructional platforms like MATHia as a replacement for
                                                              time-consuming standardized tests.
15:30 - 16:00. Session 3A2   The Influence of Students’ Cognitive and Motivational Characteristics on Students’ Use of a 4C/ID-based
                             Online Learning Environment and their Learning Gain
                             Full research paper

                             Charlotte Larmuseau (Katholieke Universiteit Leuven, Belgium)
                             Jan Elen (Katholieke Universiteit Leuven, Belgium)
                             Fien Depaepe (Katholieke Universiteit Leuven, Belgium)

                             Research has revealed that the design of online learning environments can influence students’ use and
                             performance. In this study, an online learning environment for learning French as a foreign language was
                             developed in line with the four component instructional design (4C/ID) model. While the 4C/ID-model is
                             a well-established instructional design model, little is known about (1) the factors impacting students’ use
                             of the four components, namely learning tasks, part-task practice, supportive information, and procedural
                             information, during their learning process, and (2) the way in which differences in students’ use of the 4C/
                             ID-based online learning environment impact course performance. The aim of this study is, therefore,
                             twofold. Firstly, it investigates the influence of students’ prior knowledge, task value and self-efficacy on
                             students’ use of the four different components of the 4C/ID-model. Secondly, it examines the influence
                             of students’ use of the components on their learning gain, taking into account their characteristics.
                             The sample consisted of 161 students in higher education. Results, based on structural equation
                             modelling (SEM), indicate that prior knowledge has a negative influence on students’ use of learning
                             tasks and part-task practice. Task value has a positive influence on use of learning tasks and supportive
                             information. Additionally, results indicate that use of learning tasks and procedural information,
                             controlled for students’ prior knowledge, significantly contributes to students’ learning gain. Results suggest
                             that students’ use of the four components is based on their cognitive and motivational characteristics.
                             Furthermore, results reveal the impact of students’ use of learning tasks and procedural information on
                             students’ learning gain.
16:00 - 16:30. Session 3A3   Explaining Learning Performance Using Response-Time, Self-Regulation and Satisfaction from Content:
                             an fsQCA Approach
                             Full research paper

                             Zacharoula Papamitsiou (University of Macedonia, Greece)
                             Anastasios A. Economides (University of Macedonia, Greece)
                             Ilias O. Pappas (Norwegian University of Science and Technology (NTNU), Norway)
                             Michail N. Giannakos (Norwegian University of Science and Technology (NTNU), Norway)

                             This study focuses on combining students’ response times when answering correctly or incorrectly,
                             their self-regulation, and their satisfaction with the assessment content, in order to explain high
                             or medium/low learning performance. To this end, it proposes a conceptual model in conjunction with
                             research propositions. For the evaluation of the approach, an empirical study with 452 students was
                             conducted. The fuzzy set qualitative comparative analysis (fsQCA) revealed five configurations driven
                             by the admitted factors that explain students’ high performance, as well as five additional patterns,
                             interpreting students’ medium/low performance. These findings advance our understanding of the
                             relations between actual usage and latent behavioral factors, as well as their combined effect on
                             students’ test score. Limitations and potential implications of these findings are also discussed.
Doric   Self-Regulation. Session 3B
        15:00 - 15:30. Session 3B1    [Best Practitioner Full Paper Nomination] Evaluating the Adoption of a Badge System based on Seven
                                      Principles of Effective Teaching
                                      Full practitioner paper

                                      Chi-Un Lei (The University of Hong Kong, Hong Kong)
                                      Xiangyu Hou (The University of Hong Kong, Hong Kong)
                                      Donn Gonda (The University of Hong Kong, Hong Kong)
                                      Xiao Hu (The University of Hong Kong, Hong Kong)

                                      Badge systems are useful teaching tools which can effectively capture and visualize students’ learning
                                      progress. By gamifying the learning process, the badge system serves to improve students’ intrinsic
                                      learning motivations, while adding a humanistic touch to teaching and learning. The implementation of
                                      the badge system and the evaluation of effectiveness should be guided by pedagogical principles. This
                                      paper evaluates the effectiveness of a badge system in a non-credit-bearing outreach course from a
                                      pedagogical point of view based on Chickering's “Seven Principles for Good Practice in Undergraduate
                                       Education” and the Object-Action Interface model. Furthermore, usage of the badge system is analyzed
                                      in terms of system traffic and the distribution of earned badges. Suggestions for improvements of the
                                      badge system are proposed. It is hoped that the findings in this paper will inspire teachers and e-learning
                                      technologists to make effective use of badge systems and other learning visualization tools for teaching
                                      and learning.
        15:30 - 16:00. Session 3B2    Finding Traces of Self-Regulated Learning in Activity Streams
                                      Full research paper

                                      Analia Cicchinelli (Know Center GmbH, Austria)
                                      Eduardo Veas (Know Center GmbH, Austria)
                                      Abelardo Pardo (The University of Sydney, Australia)
                                      Viktoria Pammer (Know Center GmbH, Austria)
                                      Angela Fessl (Know Center GmbH, Austria)
                                      Carla Barreiros (Know Center GmbH, Austria)
                                      Stefanie Lindstaedt (Know Center GmbH, Austria)

                                      This paper aims to identify self-regulation strategies from students’ interactions with the learning
                                      management system (LMS). We used learning analytics techniques to identify metacognitive and
                                      cognitive strategies in the data. We define three research questions that guide our studies analyzing i)
                                       self-assessments of motivation and self-regulation strategies using standard methods to draw a baseline,
                                       ii) interactions with the LMS to find traces of self-regulation in observable indicators, and iii) self-regulation
                                      behaviours over the course duration. The results show that the observable indicators can better explain
                                       self-regulatory behaviour and its influence on performance than preliminary subjective assessments.
16:00 - 16:15. Session 3B3   Investigating Learning Strategies in a Dispositional Learning Analytics Context: the Case of Worked
                             Examples
                             Short research paper

                             Dirk Tempelaar (Maastricht University, Netherlands)
                             Bart Rienties (The Open University UK, UK)
                             Quan Nguyen (The Open University UK, UK)

                             One approach of user-centered design to empower learning analytics is to listen to students’ needs
                             and learning strategies. This study aims to contribute to recent developments in empirical studies of
                             students’ learning strategies, whereby the use of trace data is combined with the use of self-report
                             data to distinguish profiles of learning strategy use [3, 4, 5]. We do so in the context of an application
                             of dispositional learning analytics in a large introductory mathematics and statistics course, based
                             on blended learning. Continuing from the outcomes of a previous study in which we found marked
                             differences in how students use worked examples as a learning strategy [6, 10], we compare different
                             profiles of learning strategies on learning approaches, learning outcomes, and learning dispositions.
16:15 - 16:30. Session 3B4   Measuring Student Self-regulated Learning in an Online Class
                             Short practitioner paper

                             Qiujie Li (University of California, Irvine, USA)
                             Rachel Baker (University of California, Irvine, USA)
                             Mark Warschauer (University of California, Irvine, USA)

                             Clickstream data has been used to measure students’ self-regulated learning (SRL) in online courses,
                             which allows for more timely and fine-grained measures as compared to traditional self-report methods.
                             However, key questions remain: to what extent can these clickstream measures provide valid inferences
                             about the constructs of SRL, and can they complement self-report measures in predicting course performance?
                             Based on the theory of SRL and a well-established self-report instrument of SRL, this study measured
                             three types of SRL behaviors—time management, effort regulation, and cognitive strategy use—
                             using both self-report surveys and clickstream data in an online course. We found both similarities
                             and discrepancies between self-report and clickstream measures. In addition, clickstream measures
                             outperformed self-report measures in predicting course performance.
Corinthian   MOOCs. Session 3C
             15:00 - 15:30. Session 3C1   [Best Full Research Paper Nomination] Discovery and Temporal Analysis of Latent Study Patterns from
                                          MOOC Interaction Sequences
                                          Full research paper

                                          Mina Shirvani Boroujeni (École polytechnique fédérale de Lausanne (EPFL), Switzerland)
                                          Pierre Dillenbourg (École polytechnique fédérale de Lausanne (EPFL), Switzerland)

                                          Capturing students' behavioral patterns through analysis of sequential interaction logs is an important
                                          task in educational data mining and could enable more effective and personalized support during the
                                          learning processes. This study aims at discovery and temporal analysis of learners' study patterns in
                                          MOOC assessment periods. We propose two different methods to achieve this goal. First, following a
                                          hypothesis-driven approach, we identify learners' study patterns based on their interaction with lectures
                                          and assignments. Through clustering of study pattern sequences, we capture different longitudinal
                                          engagement profiles among learners and describe their properties. Second, we propose a temporal
                                          clustering pipeline for unsupervised discovery of latent patterns in learners' interaction data. We model
                                          and cluster activity sequences at each time step, and perform cluster matching to enable tracking
                                          learning behaviors over time. Our proposed pipeline is general and applicable in different learning
                                          environments such as MOOC and ITS. Moreover, it allows for modeling and temporal analysis of
                                          interaction data at different levels of actions granularity and time resolution. We demonstrate the
                                          application of this method for detecting latent study patterns in a MOOC course.
15:30 - 16:00. Session 3C2   Evaluating Retrieval Practice in a MOOC: How Writing and Reading Summaries of Videos Affects
                             Student Learning
                             Full research paper

                             Tim van der Zee (Leiden University - ICLON, Netherlands)
                             Daniel Davis (Delft University of Technology, Netherlands)
                             Nadira Saab (Leiden University - ICLON, Netherlands)
                             Bas Giesbers (Erasmus University Rotterdam, Netherlands)
                             Jasper Ginn (Leiden University, Netherlands)
                             Frans Van Der Sluis (Leiden University, Netherlands)
                             Fred Paas (Erasmus University Rotterdam / University of Wollongong, Netherlands)
                             Wilfried Admiraal (Leiden University - ICLON, Netherlands)

                             Videos are often the core content in open online education, such as in Massive Open Online Courses
                             (MOOCs). Students spend most of their time in a MOOC watching educational videos. However,
                             merely watching a video is a relatively passive learning activity. To increase the educational benefits of
                             online videos, students could benefit from more actively interacting with the to-be-learned material. In this
                             paper, two studies (n = 13k) are presented that examined the educational benefits of two more active
                             learning strategies: 1) Retrieval Practice tasks which asked students to shortly summarize the content
                             of videos, and 2) Given Summary tasks in which the students were asked to read pre-written summaries
                             of videos. Writing, as well as reading summaries of videos had a positive impact on quiz grades. Both
                             interventions helped students to perform better, but there was no difference between the efficacy of these
                             interventions. These studies show how the quality of online education can be improved by adapting
                             course design to established approaches from the learning sciences.
16:00 - 16:30. Session 3C3   Reciprocal Peer Recommendation for Learning Purposes
                             Full research paper

                             Boyd Potts (The University of Queensland, Australia)
                             Hassan Khosravi (The University of Queensland, Australia)
                             Carl Reidsema (The University of Queensland, Australia)
                             Aneesha Bakharia (The University of Queensland, Australia)
                             Mark Belonogoff (The University of Queensland, Australia)
                             Melanie Fleming (The University of Queensland, Australia)

                             Larger student intakes by universities and the rise of education through Massive Open Online Courses
                             have led to less direct contact time with teaching staff for each student. One potential way of
                             addressing this contact deficit is to invite learners to engage in peer learning and peer support; however,
                             without technological support they may be unable to discover suitable peer connections that can
                             enhance their learning experience. Two different research subfields with ties to recommender systems
                             provide partial solutions to this problem. Reciprocal recommender systems provide sophisticated filtering
                             techniques that enable users to connect with one another. To date, however, the main focus of reciprocal
                             recommender systems has been on providing recommendations on online dating sites. Recommender
                             systems for technology-enhanced learning have employed and tailored exemplary recommenders
                             towards use in education, with a focus on recommending learning content rather than other users. In this
                             paper, we first discuss the importance of supporting peer learning and the role recommending reciprocal
                             peers can play in educational settings. We then introduce our open-source course-level recommendation
                             platform, which has the capacity to provide reciprocal peer recommendation. The proposed
                             reciprocal peer recommender algorithm is evaluated against key criteria such as scalability, reciprocality,
                             coverage, and quality, and shows improvement over a baseline recommender. Preliminary results indicate
                             that the system can help learners connect with peers based on their knowledge gaps and reciprocal
                             preferences, with designed flexibility to address key limitations of existing algorithms identified in the
                             literature.
16:45 – 17:30   Banquet Hall   Firehose session
17:30 – 19:00   Banquet Hall   Demos & Posters, and Reception
Thursday
 07:30 - 08:30   Level 1 Foyer   Registration and breakfast
 08:30 - 09:00   Grand Lodge     Introductions and housekeeping
 09:00 - 10:00   Grand Lodge     Keynote
                                 Prof. Cristina Conati (University of British Columbia, Vancouver, Canada)
 10:00 - 10:30   Banquet Hall    Coffee Break
 10:30 - 12:00   Parallel Sessions 4
                 Grand Lodge     Keynote Q&A and Panel. Session 4A
                                  10:30 - 11:00. Session 4A1    Q&A Keynote
                                                                Prof. Cristina Conati (University of British Columbia, Vancouver, Canada)

                                  11:00 - 12:00. Session 4A2    Panel 2: Discourse-Centric Learning Analytics (Chair: Chris Brooks, U. Michigan)
                                                                Panelists: Prof Danielle S. McNamara (Arizona State University), Dr Oleksandra Poquet (National
                                                                University of Singapore), Dr Andrew Gibson (Queensland University of Technology, Australia), Assistant
                                                                Prof Ammon Allred (The University of Toledo, USA).

                                                                This panel will explore the landscape of technology-mediated educational discourse research, touching
                                                                on the different approaches used and describing visions of the future for the area. Breaking discourse
                                                                free from the chains of linear discussion boards, the panelists will consider the opportunities
                                                                new technologies afford educators and researchers, and the methodological changes these new
                                                                learning environments demand.
Doric   Institutional Adoption. Session 4B
        10:30 - 11:00. Session 4B1      [Best Practitioner Full Paper Nomination] Implementation of a Student Learning Analytics Fellows
                                        Program
                                        Full practitioner paper

                                        George Rehrey (Indiana University Bloomington, USA)
                                        Dennis Groth (Indiana University Bloomington, USA)
                                        Stefano Fiorini (Indiana University Bloomington, USA)
                                        Carol Hostetter (Indiana University Bloomington, USA)
                                        Linda Shepard (Indiana University Bloomington, USA)

                                        Post-secondary institutions are rapidly adopting Learning Analytics as a means of enhancing student
                                        success, using a variety of implementation strategies, such as small-scale and large-scale initiatives and
                                        vended products.
                                        In this paper, we discuss the creation and evolution of our novel Student Learning Analytics Fellows
                                        (SLAF) program comprised of faculty and staff who conduct scholarly research about teaching, learning
                                        and student success. This approach directly addresses known barriers to successful implementation,
                                        largely dealing with culture management and sustainability. Specifically, we set the conditions for
                                        catalyzed institutional change by engaging faculty in evidence-based inquiry, situated with like-minded
                                        scholars and embedded within a broader community of external partners who also support this work.
                                        This approach bridges the gap between bottom-up support for faculty concerns about student learning in
                                        courses and top-down administrative initiatives of the campus, such as the strategic plan. We describe
                                        the foundations of this implementation strategy, describe the SLAF program, summarize the areas of
                                        inquiry of our participating Fellows, present initial findings from self-reports from the Fellow community,
                                        and consider future directions, including plans for evaluating the LA research and the broader impacts of
                                        this implementation strategy.
         11:00 - 11:30. Session 4B2     Scaling Nationally: Seven Lessons Learned
                                        Full practitioner paper

                                        Michael Webb (Jisc, UK)
                                        Paul Bailey (Jisc, UK)

                                        A national learning analytics service has been under development in the UK, led by a non-profit
                                        organization with universities, colleges and other post-16 education providers as members. After
                                        two years of development the project is moving to full service mode. This paper reports on seven of
                                        the key lessons learnt so far from the first twenty pathfinder organizations, along with the transition-to-
                                        service process expanding to other organizations. The lessons cover the make-up of the project team,
                                        functionality of services, the speed of change processes, the success of standards, legal complexity,
                                        the complexity of describing predictive models and the challenge of the innovation chasm. Although
                                        these lessons are from the perspective of a service provider, most should be equally applicable to the
                                        deployment of analytics solutions within a single organization.
11:30 - 12:00. Session 4B3   Rethinking Learning Analytics Adoption through Complexity Leadership Theory
                             Full research paper

                             Shane Dawson (University of South Australia, Australia)
                             Oleksandra Poquet (University of South Australia, Australia)
                             Cassandra Colvin (Charles Sturt University, Australia)
                             Tim Rogers (University of South Australia, Australia)
                             Abelardo Pardo (The University of Sydney, Australia)
                             Dragan Gašević (Monash University, Australia & The University of Edinburgh, UK)

                             Despite strong interest, learning analytics (LA) adoption at large-scale organizational levels continues
                             to be problematic. This may in part be because existing conceptual LA models fail to operationalize
                             how key dimensions of adoption interact, and thus to better inform the realities of the
                             implementation process. This paper proposes the framing of LA adoption in complexity leadership
                             theory (CLT) to study the over-arching system dynamics. The framing is empirically validated in a study
                             analysing interviews with senior managers of Australian universities (n=32). The results were coded for
                             several adoption dimensions (e.g., leadership, governance, staff development, and culture). The coded
                             data were then analysed with latent class analysis. The results identified two classes of universities that
                             either i) followed an instrumental approach to adoption (typically top-down leadership and large-scale
                             projects with a high technology focus, yet with limited staff uptake) or ii) were characterized as emergent
                             innovators (bottom-up, with a strong consultation process, but with subsequent challenges in
                             communicating and scaling up innovations). The results suggest a need to broaden the focus of research
                             on LA adoption models, moving on from small-scale course/program levels to a more holistic and complex
                             organizational level.