Research results on European and international e-learning quality, certification and benchmarking schemes and methodologies


VISCED PROJECT
Background documentation and project research results

Authors
Ilse Op de Beeck, Anthony Camilleri, Marie Bijnens

Copyright
(C) 2012, VISCED Consortium

Project Agreement Number: 511578-LLP-1-2010-1-GR-KA3-KA3MP
Project funded by the European Commission

This project has been funded with support from the European Commission. This publication reflects the
views only of the authors, and the Commission cannot be held responsible for any use which may be
made of the information contained therein.

This work is licensed under the Creative Commons Attribution-Noncommercial-Share Alike 2.0 Belgium
License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/2.0/be/

CONTENTS
Introduction
ACODE benchmarks for e-learning in universities
   What?
   Description
   Status
   eMM and ACODE
EFMD CEL
   What?
   Description
   Status
E-learning Maturity Model (eMM) Benchmarking
   What?
   Description
   Status
   eMM and ACODE
EPPROBATE
   What?
   Description
   Status
E-xcellence
   What?
   Description
   Status
iNACOL National Standards
   What?
   Description
   Status
IQAT
   What?
   Description
   Status
MIICE
   What?
   Description
   Status
NCTE e-Learning Planning
   What?
   Description
   Status
Open ECBCheck
   What?
   Description
   Status
Pick&Mix
   What?
   Description
   Status
Quality Matters
   What?
   Description
   Status
SEVAQ+
   What?
   Description
   Status
UNIQUe
   What?
   Description
   Status
References
INTRODUCTION
There is a substantial literature on success factors for e-learning, and a number of benchmarking
and quality schemes also contain relevant information on what is important in e-learning. Each scheme has
its own particular approach and focus, some more relevant than others in view of the VISCED work.

In this annex a number of the e-learning quality, certification and benchmarking schemes and
methodologies examined for VISCED are presented, in alphabetical order. The descriptions of these and
other schemes are also available on the VISCED wiki, brought together under the “Methodologies”
category.

ACODE BENCHMARKS FOR E-LEARNING IN
UNIVERSITIES

What?
     • Benchmarking tool for the use of technology in teaching and learning in order to support
       continuous quality improvement in e-learning
     • Tertiary education

Description
ACODE, the Australasian Council on Open, Distance and E-Learning, has funded the development of
benchmarks for the use of technology in learning and teaching.
The benchmarks were developed as part of an ACODE-funded project initiated in 2004. They were
developed collaboratively by representatives of a number of universities (Monash University; RMIT
University; University of Melbourne; University of Queensland; University of Southern Queensland;
University of Tasmania; Victoria University of Technology); they have been piloted in universities and
independently reviewed (by Paul Bacsich).
The purpose of the benchmarks is to support continuous quality improvement in e-learning. The
approach reflects an enterprise perspective, integrating the key issue of pedagogy with institutional
dimensions such as planning, staff development and infrastructure provision. The benchmarks have
been developed for use at the enterprise level or by the organisational areas responsible for the
provision of leadership and services in this area. Each benchmark area is discrete and can be used alone
or in combination with others. Benchmarks can be used for self-assessment purposes (in one or several
areas), or as part of a collaborative benchmarking exercise.
The benchmarks cover the following eight separate topic areas which have been internationally
reviewed:
    1. Institution policy and governance for technology supported learning and teaching

2. Planning for, and quality improvement of the integration of technologies for learning and
       teaching
    3. Information technology infrastructure to support learning and teaching
    4. Pedagogical application of information and communication technology
    5. Professional/staff development for the effective use of technologies for learning and teaching
    6. Staff support for the use of technologies for learning and teaching
    7. Student training for the effective use of technologies for learning
    8. Student support for the use of technologies for learning
Each includes a Scoping Statement, a Good Practice Statement and a summary list of general
Performance Indicators (PIs). Institutions can customise the benchmarks by replacing or supplementing
these with Local Performance Indicators (LPIs).
Each Performance Indicator then comprises Performance Measures. Each measure is rated on a
five-point scale (where level 5 indicates good practice). There are five statements that represent progress toward
good practice (as represented by an indicator), with some represented as a matrix. Service areas or units
within universities can complete a self-assessment of current practice using these indicators, noting that
it is not necessary to aspire to best practice on all.
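To make the rating structure concrete, the indicator/measure model described above can be sketched as a small data model. This is an illustrative sketch only: the indicator and measure texts below are invented placeholders, not actual ACODE benchmark wording.

```python
# Illustrative sketch of an ACODE-style self-assessment structure.
# Indicator and measure texts are invented placeholders, not ACODE wording.
from dataclasses import dataclass, field

@dataclass
class PerformanceMeasure:
    text: str
    rating: int  # rated on a five-point scale; 5 indicates good practice

@dataclass
class PerformanceIndicator:
    text: str
    measures: list = field(default_factory=list)

    def average(self) -> float:
        """Summarise a unit's self-assessment for this indicator."""
        return sum(m.rating for m in self.measures) / len(self.measures)

# A service area rates each measure during self-assessment:
indicator = PerformanceIndicator("Institution-wide e-learning policy is documented and reviewed")
indicator.measures = [
    PerformanceMeasure("Policy is documented and accessible", 4),
    PerformanceMeasure("Policy review cycle is defined", 3),
]
print(f"Indicator average: {indicator.average():.1f}")  # 3.5
```

Note that, as the text above says, it is not necessary to aspire to best practice (level 5) on every measure; the profile simply shows where a unit currently stands.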

For more details see http://www.acode.edu.au/benchmarks.php, where the benchmarks and guidelines
for their use can be downloaded.

Status
A case study on how the ACODE benchmarks were used (in 2007) by the Innovative Research
Universities of Australia (IRUA) can be found on the ACODE website.
A more recent example of the use of the ACODE benchmarks is the University of New England, Australia,
which used them to conduct a self-assessment of technology use at the university (2010).

eMM and ACODE
Within Australasia there are two major, different, but complementary benchmarking tools in use: the
ACODE benchmarks for e-learning in universities and the E-Learning Maturity Model (eMM).
The eMM is widely used in the New Zealand context and has largely been funded by government
agencies. The ACODE benchmarks have had greater use in Australia and have been focused on
institutional self-assessment.
Both these methodologies were early developments and have been used to considerable effect across
the higher education sectors in both New Zealand and Australia (Keppell et al., 2011, p. 21).
An analysis of the similarities and differences in topic coverage between the ACODE Benchmarks and
version 2.3 of the eMM was made by Stephen Marshall (2009).

EFMD CEL

What?
     • Programme accreditation for technology-enhanced learning (for programmes in management
       education)
     • Tertiary education (business schools & corporate universities)

Description
EFMD – the European Foundation for Management Development - manages the CEL programme
accreditation for teChnology-Enhanced Learning, which aims to raise the standard of ICT-based learning
management programmes worldwide. EFMD CEL aims to facilitate standard setting, benchmarking,
mutual learning, and the dissemination of good practice. It allows for different approaches and diversity
in designing and implementing such programmes. EFMD CEL is directed towards educational
management programmes incorporating ICT-based learning.
Two aspects characterise the uniqueness of EFMD CEL: (1) EFMD CEL focuses on programmes in
management education and not just on technology or learning software products. Learners’ experiences
of ICT-based programmes are given significant importance. (2) The EFMD CEL quality framework
represents a comprehensive system of relevant factors based on substantial research.
EFMD CEL is founded on a set of research-based criteria that are grouped into the following categories:
       •   Programme Strategy takes up questions like: Are the main characteristics of the programme
           transparent for all interested parties? What (added) value does the programme provide,
           especially by integrating technology-enhanced learning components?
       •   Pedagogy covers all aspects of the learning and teaching process and addresses questions such
           as: What type of learning environments does the programme consist of? What is the (added)
           value of the learning processes supported by technology?
       •   Economics involves all facets related to efficiency in the use of resources. The main question is:
           Are the resources, in terms of funds and competencies, efficiently used?
       •   Organisation deals with the question: Are the organisational measures in running the
           programme adequate to meet the programme’s underlying objectives?
       •   Technology addresses the question: Is the functionality of the technology implemented
           adequately to meet the programme’s underlying objectives?
       •   Culture is concerned with: Are the cultural factors of change and innovation considered
           adequately?
These categories form a global view on quality development within technology-enhanced programmes.
All are backed by concrete criteria, each of which is part of a coherent system.

An introductory guide on EFMD CEL and other support documentation can be found at the EFMD
website: http://www.efmd.org/index.php/accreditation-main/cel/cel-guides.

Status
EFMD helps its members with continuous quality improvement through different accreditation schemes.
EFMD accreditation involves self-assessment and peer reviews under established quality frameworks
that have been designed to deal with the very diverse approaches to management education and
development that exist around the world. CEL is one of the quality services and accreditation schemes
offered by EFMD. So far, 11 technology-enhanced learning programmes have received CEL
accreditation.

E-LEARNING MATURITY MODEL (EMM)
BENCHMARKING

What?
     • Tool for institutions by which they can assess and compare their capability to develop, deploy
       and support e-learning
     • Tertiary education

Description
The e-learning Maturity Model (eMM) was developed by Stephen Marshall at Victoria University of
Wellington, New Zealand. It provides a means by which institutions can assess and compare their
capability to sustainably develop, deploy and support e-learning. The eMM is based on the ideas of the
Capability Maturity Model and SPICE (Software Process Improvement and Capability dEtermination)
methodologies.
The underlying idea guiding the development of the eMM is that the ability of an institution to be
effective in any particular area of work depends on its capability to engage in high-quality
processes that are reproducible and can be extended and sustained as demand grows. Capability, in
the context of this model, refers to the ability of an institution to ensure that e-learning design,
development and deployment meet the needs of students, staff and the institution. Capability
includes the ability of an institution to sustain e-learning support of teaching as demand grows and staff
change.
The eMM divides the capability of institutions to sustain and deliver e-learning into five major
categories or process areas:
    1. Learning - Processes that directly impact on pedagogical aspects of e-learning
    2. Development - Processes surrounding the creation and maintenance of e-learning resources
    3. Support - Processes surrounding the oversight and management of e-learning

4. Evaluation - Processes surrounding the evaluation and quality control of e-learning through its
       entire lifecycle
    5. Organisation - Processes associated with institutional planning and management
Processes define an aspect of the overall ability of institutions to perform well in the given process area,
and thus in e-learning overall. The advantage of this approach is that it breaks down a complex area of
institutional work into related sections that can be assessed independently and presented in a
comparatively simple overview without losing the underlying detail.
Capability in each process is described by a set of practices organised by dimension. The eMM
supplements the CMM concept of maturity levels, which describe the evolution of the organisation as a
whole, with five dimensions (Delivery; Planning; Definition; Management; and Optimisation).
The key idea underlying the dimension concept is holistic capability. Rather than the eMM measuring
progressive levels, it describes the capability of a process from these five synergistic perspectives. An
organization that has developed capability on all dimensions for all processes will be more capable than
one that has not. Capability at the higher dimensions that is not supported by capability at the lower
dimensions will not deliver the desired outcomes; capability at the lower dimensions that is not
supported by capability in the higher dimensions will be ad-hoc, unsustainable and unresponsive to
changing organizational and learner needs.
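As a rough illustration, the process/dimension idea described above can be pictured as a small capability matrix: each process is assessed against all five dimensions rather than placed on a single progressive level. The process name and the four-level rating labels below are invented for illustration; they are not taken from the eMM itself.

```python
# Sketch of an eMM-style capability matrix. Each process is rated on all
# five dimensions. The process name and rating labels are illustrative
# assumptions, not actual eMM content.
DIMENSIONS = ["Delivery", "Planning", "Definition", "Management", "Optimisation"]
ORDER = ["not", "partially", "largely", "fully"]  # assumed rating scale

# capability[process][dimension] -> rating
capability = {
    "Hypothetical process: learning objectives guide design": {
        "Delivery": "fully", "Planning": "largely", "Definition": "partially",
        "Management": "partially", "Optimisation": "not",
    },
}

def weakest_dimension(process: str) -> str:
    """Flag the lowest-rated dimension for a process: capability on one
    dimension that is unsupported by the others will not be sustainable."""
    ratings = capability[process]
    return min(DIMENSIONS, key=lambda d: ORDER.index(ratings[d]))

print(weakest_dimension("Hypothetical process: learning objectives guide design"))
```

The point of the matrix view is exactly what the text argues: an institution strong on Delivery but weak on Optimisation (as in this example) has ad hoc, unsustainable capability, and the matrix makes that visible at a glance.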
The key web site is http://www.utdc.vuw.ac.nz/research/emm/. Updates appear on the eMM Blog.

Status
The eMM has evolved since its initial conception (2003). This evolution was informed by an initial
assessment of capability in the New Zealand sector (2005), extensive consultation and workshops in
New Zealand, Australia and the UK, and an extensive literature review examining a wide set of
heuristics, benchmarks and e-learning quality research (2006). As well as a significantly improved set of
processes and practices, the current version of the eMM differs most significantly in the change from
levels of process capability to dimensions.
The methodology has been and is being deployed in Australia, New Zealand, the UK and the US.

eMM and ACODE
Within Australasia there are two major, different, but complementary benchmarking tools in use: the
ACODE benchmarks for e-learning in universities and the E-Learning Maturity Model (eMM).
The eMM is widely used in the New Zealand context and has largely been funded by government
agencies. The ACODE benchmarks have had greater use in Australia and have been focused on
institutional self-assessment.
Both these methodologies were early developments and have been used to considerable effect across
the higher education sectors in both New Zealand and Australia (Keppell et al., 2011, p. 21).
An analysis of the similarities and differences in topic coverage between the ACODE Benchmarks and
version 2.3 of the eMM was made by Stephen Marshall (2009).

EPPROBATE

What?
     • International quality label for e-learning courseware
     • Tertiary education

Description
epprobate is the first international quality label for eLearning courseware. The quality label is an
initiative of three organisations: the Learning Agency Network (LANETO), the Agence Wallonne des
Télécommunications (AWT) and the e-Learning Quality Service Center (eLQSC).
epprobate's objectives are to:
       • deliver a quality label focussing on eLearning products (courseware) rather than the teaching
         and learning processes within an organization (and thus act as a complement to
         process-oriented quality labels)
       • increase the acceptance of eLearning courseware through the provision of an international
         quality label
       • facilitate a consensus-building process around the meaning of quality for eLearning courseware
         and its assessment
       • establish an international network of reviewers (pedagogical experts, domain experts,
         courseware developers) and of national partner organisations.
The key to the epprobate reviewing process is the application of the epprobate quality grid to the
examination of a piece of courseware. Reviewers assess courseware in terms of specific criteria. Criteria
are organised into four main sections of the quality grid:
       • Course design (Provision of course information, learning objectives; Constructive alignment)
       • Learning design (Learner Needs; Personalisation; Instructional strategies)
       • Media design (Media Integration; Interface; Interoperability and technological standards)
       • Content (Accuracy and values of content; Intellectual property rights; Legal compliance)
The first step in the review process is that the courseware producer (the applicant) provides a
self-assessment of the courseware in terms of the epprobate quality grid. The courseware and the
self-assessment document are then considered by a review team consisting of four reviewers, who rate the
courseware on a four point scale for each criterion (exceeds, meets, partially meets, fails to meet the
criterion), and produce a written rationale for their decision. Not all criteria are relevant to all kinds of
courseware and so the relevance and importance of each specific criterion is judged for each piece of
courseware. The review team meet to reach a consensus view, and then invite the applicant to a
meeting to answer any specific questions arising from the review. The report of the review team is then

forwarded to the international awarding body, which consists of an international group of head
reviewers together with the national partners, who make the final decision on the award of the
epprobate label. The courseware provider receives detailed feedback three times during this process:
after the meeting of the review team, during the subsequent meeting of the applicant with the review
team, and after the meeting of the international awarding body.
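As a rough sketch, the rating mechanics described above might be modelled as follows. The numeric mapping of the four-point scale and the aggregation rule are illustrative assumptions, not the actual epprobate procedure, which rests on written rationales and a consensus meeting rather than arithmetic.

```python
# Rough sketch of epprobate-style criterion rating: four reviewers each
# rate every relevant criterion on a four-point scale. The numeric
# mapping and the aggregation rule below are illustrative assumptions.
SCALE = {"exceeds": 3, "meets": 2, "partially meets": 1, "fails to meet": 0}

def combine(ratings_by_reviewer, relevant=True):
    """Combine the four reviewers' ratings for one criterion.
    Returns None for criteria judged not relevant to this courseware,
    since relevance is decided per piece of courseware."""
    if not relevant:
        return None
    scores = [SCALE[r] for r in ratings_by_reviewer]
    # Illustrative rule only: average rounded down toward the scale floor.
    return sum(scores) // len(scores)

# Example: one criterion, rated by a team of four reviewers.
ratings = ["meets", "meets", "exceeds", "partially meets"]
print(combine(ratings))  # 2, i.e. "meets"
```

In the real process the numeric summary would at most inform the review team's consensus meeting; the written rationale and the awarding body's decision carry the actual weight.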
An epprobate manual providing reviewers the necessary information to be able to carry out a review is
available at: http://wiki.international-cv.net/index.php?title=Epprobate_Manual_for_reviewers
The epprobate website is at: http://www.epprobate.com

Status
Starting in May 2011, the epprobate initiative developed its first prototypes, including the review
process and the quality grid. After ten months of development, epprobate was officially rolled out on
21 March 2012. epprobate has identified national partners in around 30 countries in Europe,
Africa, Asia and the Pacific region, as well as in North and South America.

E-XCELLENCE

What?
     • Quality benchmarking assessment instrument and label
     • Tertiary education

Description
EADTU (the European Association of Distance Teaching Universities) has been leading a process of
Quality Assurance in e-learning under the series of E-xcellence (pronounced “E-excellence”) projects
executed from 2005 to 2012 within EU programmes (see E-xcellence - creating a standard of excellence
for e-learning, E-xcellence+, E-xcellence Associates in Quality, and E-xcellence Next).
Within a consortium of experts in the field, EADTU has established a full procedural approach for
universities to assess and improve their e-learning performance. This is supported by a dedicated
website on QA in e-learning, where the manual can be found and more information on the tools, the
review process and the E-xcellence Associates Label.
With E-xcellence, universities are stimulated to improve their e-learning performance through a guided
self-assessment. This assessment can be a stand-alone exercise for the higher education institution,
leading to a first insight into fields for improvement. The approach can be extended with a review, at a
distance or on-site, by e-learning experts. This extension is formalised in the E-xcellence Associates label.
The E-xcellence Associates label is not a label for proven excellence but rather a label for
institutions/faculties that use the E-xcellence instrument for self-assessment and take measures of
improvement accordingly. The label was established to reward the efforts of universities in a continuous
process of improving their e-learning performance and to offer them a platform and networking
opportunities to meet virtually with peers and experts in the field. Universities, in turn, can present
their own fields of expertise to this community.

The E-xcellence instrument consists of 3 main elements:
    1. Manual on QA in e-learning, covering the 33 benchmarks on e-learning, the indicators related to
       these benchmarks, guidance for improvement and references to excellence level performance.
       The manual is organised into six sections covering (1) Strategic management, (2) Curriculum
       design, (3) Course design, (4) Course delivery, (5) Staff support and (6) Student support. Each
       section follows a similar format setting out benchmarks, critical factors, performance indicators,
       and assessor’s notes. The benchmarks provide a set of general quality statements covering a
       wide range of contexts in which programme designers and others work. It is intended that the
       benchmarks will be relevant to virtually all e-learning situations. These benchmarks might
       usefully form the basis for institutions' quality self assessment where the full range of criteria
       and performance indicators are not judged relevant to the institutional context (e.g. in
       situations where e-learning developments are confined to a minority of courses or to specialist
       areas of the institution's work). The critical factors and performance indicators which follow
       then focus on particular topics relevant to the benchmark statements. Not all the critical factors
       will be relevant in all situations and several will be seen to cut across more than one benchmark
       statement. Thus there is not a one-to-one relationship between the benchmarks and the critical
       factors since they are pitched at different levels of analysis. Performance indicators relating to
       the critical factors have been developed at both general and excellence levels.
    2. The assessors' notes provide a more detailed account of the issues and the approaches which
       might be taken to meet requirements in each situation.
    3. Tools: Quick Scan and Full-Assessment.

The basic tool is the quick scan, a web-based tool which offers guidance and supports decisions about
which benchmarks are of interest to an institution. The quick scan can be applied in three ways:
    1. The quick scan as a quick orientation (basic option): An institution gets a first orientation on the
       strengths of eLearning performance and fields of improvement. The output can be used for
        example as input for internal discussions. The result of the Quick Scan should be an agreed
        overview of benchmarks that fit the institution, as well as a number of benchmarks that call for
        an action line in the roadmap of improvement. Each statement has to be considered, judging
        how that aspect of e-learning is realised in the course or programme of the particular institution
        or faculty. The instrument offers the opportunity to comment on the specific issues
        by indicating: Not Adequate, Partially Adequate, Largely Adequate or Fully Adequate.
    2. The quick scan with a review at a distance (extended option): The review is based on a list of
       required documents. Reviewers give advice and recommendations in writing (at a distance). If
       the institution fulfils the conditions of integrating the benchmarks in the internal QA-system, it
       is allowed to use the E-xcellence label.
    3. The quick scan with an on-site assessment (most comprehensive option): After the quick scan, e-
       learning experts (reviewers) will carry out an on-site assessment. The institution meets the
       reviewers and receives recommendations and advice for improvement. If the conditions are
       fulfilled, the institution is allowed to use the E-xcellence label.
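As an illustration only, the four-point judgement scale described above could be tallied as follows to produce the agreed overview of benchmarks needing an action line. The benchmark names and the action threshold in this sketch are invented; the actual Quick Scan is a web-based questionnaire, not this code.

```python
# Sketch: tally Quick Scan judgements and flag benchmarks needing action.
# Benchmark names and the threshold are hypothetical.

JUDGEMENTS = ["Not Adequate", "Partially Adequate", "Largely Adequate", "Fully Adequate"]

def needs_action(judgements):
    """Return benchmarks judged below 'Largely Adequate', i.e. candidates
    for an action line in the roadmap of improvement."""
    threshold = JUDGEMENTS.index("Largely Adequate")
    return [b for b, j in judgements.items() if JUDGEMENTS.index(j) < threshold]

scan = {
    "Strategic management": "Fully Adequate",
    "Course design": "Partially Adequate",
    "Student support": "Not Adequate",
}
print(needs_action(scan))  # ['Course design', 'Student support']
```

In practice the agreed overview would also record comments per statement; the point here is only that the four levels order naturally into a pass/action split.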

Status
Through a series of E-xcellence projects and initiatives, EADTU has been leading a European
movement on quality assurance in e-learning:
- E-xcellence - creating a standard of excellence for e-learning (2005-2006), which produced a quality
benchmarking assessment instrument covering pedagogical, organisational and technical frameworks.
- E-xcellence+ (2007-2009), aiming to valorise the instrument (developed during E-xcellence) at the local,
national and European level for the higher education and adult education sectors and to broaden the
implementation and receive feedback for enhancing the instrument.
- E-xcellence Associates in Quality (launched 2010), building an e-learning benchmarking community
of Associates in Quality. The E-xcellence Associates focus on the improvement of four priority
elements of progressive higher education: Accessibility, Flexibility, Interactiveness and Personalization.
- E-xcellence Next (2011-2012), taking the third step in mainstreaming by integrating the instrument into
the regular channels for QA (further European introduction, updating and extending the E-xcellence
instrument and broadening the partnership on European and global level).
All information on the different tools and the E-xcellence Associates Label that grew out of these
projects can be found on the following website: http://www.eadtu.nl/e-xcellencelabel.
The website also lists the first universities (faculties and departments) that have qualified for the E-
xcellence label and are now Associates in Quality.
An updated version of the E-xcellence Manual will be launched in September 2012 (see
http://www.eadtu.eu/e-xcellencenext.html).

INACOL NATIONAL STANDARDS

What?
     The iNACOL National Standards for Quality Online Courses, Teaching and Programs are quality
      standards for evaluating online courses, teachers and programs with common benchmarks.
     Primary & secondary education

Description
iNACOL, the International Association for K-12 Online Learning, is a non-profit membership association
based in the United States, representing a diverse cross-section of K-12 education. Its aim is to ensure all
students have access to a world-class education and quality online learning opportunities that prepare
them for a lifetime of success.

One of the activities of iNACOL is developing national K-12 online learning quality standards. In October
2011, the National Standards for Quality Online Courses (version 2) and for Quality Online Teaching
(version 2) were released. Both sets of standards were selected on the basis of the original standards
and the results of a research review and survey of online course quality criteria. They were then
evaluated and assembled into an easy-to-use document for evaluating online courses and online
teachers against common benchmarks.
The National Standards for Quality Online Courses is a measuring tool to help policy leaders, schools,
and parents across the nation evaluate course quality and implement best practices. Quality criteria
focus on (1) Content; (2) Instructional Design; (3) Student Assessment; (4) Technology; and (5) Course
Evaluation and Support.
The National Standards for Quality Online Teaching was designed to provide teachers with a set of
criteria for effective online learning and to ensure that teachers are better able to understand
the technology, new teaching methods and digital course content in an effort to foster a personalized
online learning environment for every student.
The National Standards for Quality Online Programs, released October 2009, completes the triad of
iNACOL’s online education quality standards. It was designed to provide states, districts, online
programs, accreditation agencies and other organizations with a set of over-arching quality guidelines
for online programs in several categories: leadership, instruction, content, support services and
evaluation. Focus is on institutional standards, teaching and learning standards, support standards and
evaluation standards.

Status
Since the original standards were released, other organizations have released quality standards for
online courses. iNACOL organized a team of experts in the area of course development, instructional
design, professional development, research, education, and administration to review these new
standards and new literature around the topic and determined there was a need to refresh version one
of the iNACOL standards, which was done quite recently (October 2011).
Over the past years, iNACOL has received feedback that several organizations are using these standards
in the development and review of online courses and programs.

IQAT

What?
     Web-based tool to track and benchmark institutional data systematically across time and among
      peer institutions
     Tertiary education

Description

IQAT - pronounced "eye-cat" - is a benchmarking and quality enhancement methodology developed by
Hezel Associates, a well-known firm of e-learning consultants, in conjunction with a number of
university partners. The methodology was formally launched in June 2006.
It describes itself as "a web-based tool to track and benchmark institutional data systematically across
time and among peer institutions".
The work is being done in partnership with NUTN, the National University Telecommunications Network,
and with sponsorship from Cisco Systems. NUTN has for some time had a major interest in quality and
benchmarking, as demonstrated for example by the topics and speakers at their 2006 conference. In
particular there was a launch presentation of IQAT.

Status
The IQAT website was at http://www.iqat.org/ but no longer appears to exist. There also seem to be
no remaining traces of the IQAT methodology and tool on the Hezel Associates website.

MIICE

What?
     Tool by which schools can measure their progress when learning and teaching is being planned
      or reviewed which incorporates the use of ICT
     Primary & secondary education

Description
MIICE stands for Measurement of the Impact of ICT on Children's Education.
It is a partnership of Scottish education authorities (EAs) and teacher education institutes, led by the University of Edinburgh
(Moray House School of Education) and dedicated to discussion and action research to enhance learning
and teaching through appropriate use of ICT. The partnership had its first meeting in May 2000.
MIICE's main purpose is to put into words what most recognise as good quality in learning and teaching
incorporating the use of ICT. It is concerned with those qualities which cannot readily be assessed in
conventional ways. MIICE wants to contribute to the debate about the benefits of more widespread use of
ICT for learning and teaching. Use of ICT makes real demands - in money and time - on education
authorities, schools, teachers and children. It needs to be clearer what the benefits are that can be
anticipated.
The MIICE project grew out of observation of a set of case studies of evidently good practice in the use
of ICT to promote learning and teaching across Scotland (primary, secondary and special schools). The
16 case studies can be seen at http://sitc.education.ed.ac.uk/Case_Studies/index.htm
The MIICE partnership has developed a range of instruments to help with this focus on quality of
learning and teaching when using ICT. Prime among these has been the MIICE quality framework or
toolbox, which is a cornerstone of the activities of the partnership as a whole and of individuals using
MIICE.
The MIICE toolbox, developed in 2001, articulates the criteria by which one can measure progress in the
quality of learning and teaching in general. An alternative approach to the framework was published in
May 2009. Both the toolbox of 2001 and 2009 have the following structure:
       Outcomes - these are the broad areas of impact of ICT use; there are 13 altogether, in 3 broad
        groups. The 13 outcomes are: (1) Learner reflection; (2) Skills development; (3) Managing and
        manipulating digital information; (4) Shared planning/Organisation; (5) Investigatory learning;
        (6) Shared learning; (7) Motivation; (8) Enhancing learning outcomes; (9) Quality of outcomes;
        (10) Self-esteem/confidence; (11) Teacher use of computers as productivity tools; (12) Teacher
        facilitating the learning of ICT principles and good habits; (13) Teacher use of ICT as a rich and
        effective means of learning.

       Components - these are aspects of these broader areas; there are from 2 to 4 components in
        each outcome and 41 altogether (4 of which appear in 2 outcomes)
       Measures - these are the detailed activities about which questions might be asked when
        assessing achievement; there are from 1 to 6 measures within each component. The structure
        broadly mirrors that in “How good is our school?” (quality indicator, theme, illustration) but the
        MIICE framework is a more finely grained analysis.
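The outcome/component/measure hierarchy described above can be pictured as a simple nested structure. All names in this sketch are invented placeholders, not the actual MIICE wording, and the real toolbox is a document, not software.

```python
# Sketch of the MIICE toolbox hierarchy: outcome -> components -> measures.
# All names below are hypothetical placeholders.

toolbox = {
    "Motivation": {                            # one of the 13 outcomes
        "engagement with tasks": [             # 2-4 components per outcome
            "time spent on task when using ICT",   # 1-6 measures per component
            "willingness to return to a task",
        ],
        "attitude to learning": [
            "learner-reported enjoyment",
        ],
    },
}

def count_measures(toolbox):
    """Total number of measures across all outcomes and components."""
    return sum(len(measures)
               for components in toolbox.values()
               for measures in components.values())

print(count_measures(toolbox))  # 3
```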
The toolbox is available in various formats (full version, basic, summary…) to permit selection and
adaptation for personal professional purposes.
The MIICE website is at http://www.miice.org.uk/

Status
The MIICE quality framework or toolbox has been used since the start by a wide range of individual
professionals, schools, education authorities, universities and other agencies to help them to articulate
'progress' when learning is being planned or reviewed which incorporates the use of ICT.
A small selection of the ways in which individuals and agencies have found MIICE useful can
be found at the MIICE website. (Evidence and examples on the website indicate that the scheme was
used at least until 2010.)

NCTE E-LEARNING PLANNING

What?
     Tool to assist schools in developing their e-learning plan (where is a school currently with regard
      to e-learning?)
     Primary & secondary education

Description
The NCTE (National Centre for Technology in Education) website contains a number of useful resources
to assist schools in developing their e-Learning Plan, from the NCTE’s e-Learning Handbook and
Roadmap to case studies and video exemplars highlighting how teachers are integrating ICT in their
classrooms. For the development of the school’s e-Learning Plan, there are also templates available
which are designed to be adapted and customised to the particular needs and priority areas of a
school.
The NCTE’s e-Learning Handbook outlines the process of planning for e-Learning in a school and has
been developed in consultation with the school development planning initiatives at primary and post
primary level. It provides a step-by-step guide to the development of the school’s e-Learning Plan and
outlines the key roles and responsibilities of all involved in the development of the plan.
The e-Learning Roadmap is a planning tool designed to help a school identify where it currently is in
relation to e-Learning, and the priority areas for e-Learning Development. The Roadmap provides a
number of statements under the following headings:
       Leadership & Planning
       ICT & the Curriculum
       Professional Development
       e-Learning Culture
       ICT Infrastructure
The statements can be categorised as ‘Initial’, ‘e-Enabled’, ‘e-Confident’ or ‘e-Mature’.
The website is at: http://www.ncte.ie/elearningplan/

Status
The Department of Education and Science in Ireland has adopted a detailed information and
communications technology strategy for 2008 – 2013 which prioritises a number of key areas for
investment, furthering the integration of ICT in learning and teaching. Each school is required to
prepare and implement an e-Learning Plan as part of its Whole School Plan. It is in this context that the
handbook and other resources have been produced – to assist schools to develop concrete action plans
and strategies to integrate ICT into learning and teaching across the curriculum.

OPEN ECBCHECK

What?
     Certification and quality improvement scheme for e-learning programmes & courses in
      international capacity building
     Tertiary Education

Description

Open ECBCheck stands for Open Certification Standard for E-Learning in Capacity Building. It is a
community-based and low-cost certification and quality improvement scheme for e-Learning
programmes and institutions in international Capacity Building. The objective of Open ECBCheck is to
build a quality label for e-learning which serves the needs of the community of CBOs (Community Based
Organisations) to improve quality, strengthen recognition and enable individual and organisational
learning.
Open ECBCheck is designed for Programmes and Courses and can be used for Distance Learning
Programmes in Higher Education. It can both be used as a guideline for development and delivery of
courses and programmes and as a certification scheme.
The criteria framework can be adjusted to institutional and context-specific needs. The ECBCheck Criteria
cover a wide variety of indicators about a programme, including:
       Information about & organisation of programme
       Target Audience Orientation
       Quality of Content
       Programme/Course Design
       Media Design
       Technology
       Evaluation & Review
Three steps can be distinguished in the process: (1) Become part of an international Community for e-
Learning certification; (2) Share tools and experiences, gain access to the complete ECBCheck toolset;
and (3) Certification through self-assessment and independent peer-review.
ECBCheck draws organisations into a community of practice for quality development. Experiences,
tools and instruments are shared, and direct access to the ECBCheck toolset is gained. Peer reviewers
are also drawn from the community wherever possible. ECBCheck involves a detailed but efficient
review process, conducted through remote peer review and custom-designed software tools. The
ECBCheck toolset for self-assessment and peer-review is validated through a community of international
experts and international organisations in a participative process. The toolset is developed by
international capacity building organisations and is adaptable to the latest developments in learning
provision. It provides international benchmarks but can be calibrated to different organisations’,
countries’, and cultures’ needs.
Open ECBCheck is validated by experts from international organisations. The initiative is supported by:
GIZ (former InWENT) - United Nations University – CGIAR - World Agroforestry Centre (ICRAF) - United
Nations Environmental Program - African Virtual University – European Foundation for Quality in E-
Learning – GDLN (World Bank Institute) - Namibia Qualifications Authority – UNITAR - SPIDER/ SIDA -
IICD, Netherlands - Federal Institute of VET, Germany - European Foundation for Management
Development - Hoffmann & Reif Consultants - Tertiary and Vocational Education Commission (TVEC),
Ministry of Vocational and Technical Training - The University of the Western Cape - Kenya Institute of
Education - University of Santo Tomas, Philippines – University of the Philippines Open University - Food
and Agricultural Organisation (FAO) - Open University of Catalonia - Royal Holloway College -
Accreditation Agency for Engineering, Informatics and Natural Sciences (ASIIN) - Commonwealth of
Learning - UNESCO-UNEVOC International Centre for Technical and Vocational Education and Training.
The Open ECBCheck website is at: http://ecbcheck.efquel.org

Status
EFQUEL has recently (2012) taken over the secretariat of ECBCheck. ECBCheck is now part of the EFQUEL
quality services, a range of self-assessment and external review services offered to e-Learning and TEL
providers. From July 2012 onwards ECBCheck will be partially commercialised. Western organisations
will have to pay for the service but for Southern organisations the service will be free, sponsored by
EFQUEL and GIZ.

PICK&MIX

What?
             Methodology for benchmarking e-learning
             Tertiary education

Description
Pick&Mix is a methodology for benchmarking e-learning, based on a systematic review of other
approaches to benchmarking e-learning, looking for commonalities of approach. One of the virtues of
Pick&Mix (which gave rise to its name) is that it does not impose methodological restrictions and has
incorporated (and will continue to incorporate, in line with need) criteria from other methodologies of
quality, best practice, adoption and benchmarking.
Pick&Mix has the following features:
    1. a set of criteria split into core criteria (which each university must consider) and supplementary
       criteria (from which each university should select some to consider); in addition, an institution
       may use local criteria developed in the same style
    2. guidelines, based on HEI experience, as to the total number of criteria (core plus supplementary
       plus local) that a university should consider
    3. criteria which are a mix of ‘process’ criteria and ‘metric’ output criteria; covering student-facing
       and staff-facing issues as well as strategy, structure and IT topics
    4. criteria described (as far as possible) using concepts, structures, processes and vocabulary
       familiar to those in universities (different national and sectoral variants are possible)
    5. each criterion is scored on a 1-5 scale, with an additional level 6 to signify excellence: level
       1 is always sector minimum and level 5 is reachable sector best practice in any given time period
       (level 6 is supposed to be out of planned reach for the majority of HEIs)

    6. each score is associated with a scoring statement to describe in more detail the practices
       associated with that level in each specific HEI
    7. new criteria which can be developed to reflect changing agendas (such as plagiarism, widening
       participation, space planning) or taken from other criterion-based methodologies (ELTI, eMM,
       BENVIC, CHIRON, E-xcellence, etc) where appropriate: each such criterion can either be specific
       to an HEI (local criteria) or suggested for inclusion as a new supplementary criterion
    8. inbuilt sector knowledge and comparability, based on the use of transparent, evidenced, public
       criteria norm-referenced across the sector; this is not to downplay the role of HEIs and
       consultants in jointly investigating and assessing each criterion
    9. careful consideration given to minimise the number of core criteria so that each is clearly
       correlated with success in e-learning
    10. no inbuilt project management or engagement methodology so that Pick&Mix can be run within
        a project management methodology comfortable to the HEIs involved and of appropriate
        weight
    11. use of criteria, couched in familiar terms and clearly correlated with success, coupled with
        familiar and lightweight project management, so as to lead to a "low footprint" style of
        benchmarking suitable for a range of HEIs, and departments within institutions as well as
        institution-wide approaches augmentable with deeper studies
    12. an "open content" method of distribution where each final release plus its supporting
        documents is available under a Creative Commons license.
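As a rough illustration of the scoring scheme in features 1, 5 and 6, a criterion record might be modelled as follows. The criterion names and the validation rule are invented for this sketch; Pick&Mix itself is distributed as a spreadsheet, not software.

```python
# Hypothetical model of a Pick&Mix-style criterion: each criterion is core,
# supplementary or local, and is scored 1-5 with 6 reserved for excellence.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    kind: str      # "core", "supplementary" or "local"
    score: int     # 1-5, or 6 for excellence

    def __post_init__(self):
        if not 1 <= self.score <= 6:
            raise ValueError("score must be between 1 and 6")

criteria = [
    Criterion("e-learning strategy", "core", 4),
    Criterion("staff training", "supplementary", 2),
    Criterion("plagiarism handling", "local", 6),
]

# Core criteria must all be considered; flag any scored at sector minimum (1).
weak_core = [c.name for c in criteria if c.kind == "core" and c.score == 1]
print(weak_core)  # []
```

Each score would additionally carry a scoring statement describing the practice at that level in the specific HEI, which is omitted here for brevity.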

The current public release is the beta 3 version of Pick&Mix version 2.6 - this can be found at
http://www.matic-media.co.uk/benchmarking/PnM-2pt6-beta3-full.xlsx
A variant of this with more emphasis on displaying the critical success factors and key success factors is
at http://www.matic-media.co.uk/benchmarking/PnM-2pt6-beta3-CKSFs.xlsx

Status
The Pick&Mix methodology was developed in 2005 and has been used for benchmarking e-learning in UK
universities from 2005 to the present day. Pick&Mix has drawn on and influenced work
from the US, Australia, New Zealand and EU projects.

Lessons learnt from practice in the UK and modifications to use Pick&Mix in Wales (in Gwella, the
national initiative in Wales to enhance the e-learning capability of universities) led to adjustments.
Pick&Mix has been reoriented for more international use and now has a generic version under the new
name of ELDDA (2008).

QUALITY MATTERS

What?

     Peer review process to certify the quality of online and blended courses
     Primary, secondary & tertiary education

Description
Quality Matters (QM) is a faculty-centred, peer review process that is designed to certify the quality of
online and blended courses.
There are three primary components in the Quality Matters Program: The QM Rubric, the Peer Review
Process and QM Professional Development.
The Quality Matters Rubric is the core of QM and consists of a set of 8 general standards (Course
Overview and Introduction; Learning Objectives; Assessment and Measurement; Instructional Materials;
Learner Interaction and Engagement; Course Technology; Learner Support; and Accessibility) and 41
specific standards used to evaluate the design of online and blended courses. The Rubric is completed
with annotations that explain the application of the standards and the relationship among them, as well
as examples of good practice for all 41 standards. A scoring system and set of online tools facilitate the
evaluation by a team of reviewers.
In an effort to apply the program also to the secondary education community, the Grades 6-12 Rubric
was developed specifically tailored for middle school and high school online and blended courses. The
G6-12 rubric has been created to address the need for a set of 9 general standards (Course Overview
and Introduction; Learning Objectives; Assessment and Measurement; Resources and Materials;
Learning Activities; Course Technology; Learner Support; Accessibility; and Compliance Standards) that is
specific enough to guide the development, enhancement, and evaluation of online and blended courses
for middle and high school students.
In this respect, the Quality Matters Program collaborated with a leading provider of online courses for
students in grades 6 to 12, the Florida Virtual School (FLVS). In addition to this collaboration, the
G6-12 Rubric integrates existing national standards for K-12 online education promulgated by the
Southern Regional Education Board (SREB), the North American Council for Online Learning (iNACOL)
and the Partnership for 21st Century Skills.
For more details see the presentation and the website http://www.qmprogram.org

Status
Quality Matters is a not-for-profit subscription service that was developed by MarylandOnline with
funding from FIPSE (the US Fund for the Improvement of Postsecondary Education). QM is a leader in
quality assurance for online education and has received national
recognition for its peer-based approach and continuous improvement in online education and student
learning. Adopted by a large and broad user base, QM represents a shared understanding of quality in
online course design. QM subscribers include community and technical colleges, colleges and
universities, K-12 schools and systems, and other academic institutions.

SEVAQ+

What?
     Tool to evaluate the quality of any teaching and learning supported by technology
     Tertiary education (learning organisations: professional training centres, in-company training
      departments, universities)

Description
SEVAQ+ focuses on managing internal assessment processes within an institution. It is a combined
tool and approach for the shared evaluation of quality in technology enhanced learning. SEVAQ+ is
designed to be used by a range of learning organisations – professional training centres, in-company
training departments or universities – to evaluate the quality of any teaching and learning supported by
technology, whether it concerns totally online distance courses or blended learning. Teachers and
trainers can design questionnaires to gather feedback on what learners really think of their learning
experience. Training managers can get the full picture by designing questionnaires for the different
stakeholders involved. Learners get the chance to give their point of view and contribute to improving
the quality of learning.
SEVAQ+ follows a logical structure inspired by the EFQM quality framework, combined with the
Kirkpatrick evaluation model. To design a questionnaire, you can choose which Criteria and Sub-criteria
you wish to focus on (achievement of learning goals, efficiency of the technical support, effectiveness of
the pedagogical approaches, quality of the learning resources,…). These criteria are organised within an
overall framework of 'Resources', 'Processes' and 'Results'. The SEVAQ+ tool then proposes a series of
statements: you choose those which best reflect the reality of the context you wish to evaluate. As a
respondent, you will either be asked to answer yes or no, or to rate your level of agreement with
each statement and to say how important this aspect is. In the overview of results, the “critical areas
for improvement” are highlighted.
The SEVAQ+ website and tool can be found at http://sevaq.efquel.org/. The SEVAQ+ handbook (in
different languages) can be found at http://sevaq.efquel.org/sevaq-tool/handbook/.
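One plausible reading of how “critical areas for improvement” could be derived from the agreement and importance ratings described above is sketched below. The statement names and thresholds are invented, and the actual SEVAQ+ tool may compute this differently.

```python
# Hypothetical sketch: flag statements rated as important but with low
# agreement as "critical areas for improvement". Names and thresholds
# are invented for illustration.

responses = {
    # statement: (agreement 1-5, importance 1-5)
    "learning goals were achieved": (4, 5),
    "technical support was efficient": (2, 5),
    "learning resources were of high quality": (2, 2),
}

def critical_areas(responses, max_agreement=2, min_importance=4):
    """Statements respondents rate as important but disagree with."""
    return [s for s, (agree, imp) in responses.items()
            if agree <= max_agreement and imp >= min_importance]

print(critical_areas(responses))  # ['technical support was efficient']
```

The design intuition is that low agreement on an unimportant aspect matters less than low agreement on an important one, which is why both ratings are collected.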

Status
The tool was originally developed in a pilot Leonardo Da Vinci project (2005-2007) called SEVAQ. The
follow-up project SEVAQ+ (2009-2011) aimed to engage in wide-reaching dissemination and exploitation
of the SEVAQ tool and the concept for the Self-Evaluation of Quality in eLearning.
Currently, SEVAQ+ is part of the EFQUEL quality services, a range of self-assessment and external review
services offered to e-Learning and TEL providers. There are different account types (with different
functionalities) available, respectively adapted to the needs of individuals or organisations.

UNIQUE