iPad Scotland Evaluation Study

From the University of Hull: The Technology Enhanced Learning Research group, led by Kevin Burden (Principal Investigator) and based in the Faculty of Education, has recently completed the first national evaluation to investigate the use and impact of tablet technologies (in this case the iPad) across schools and homes in Scotland.


EDUCAUSE Learning Initiative 2011 Online Spring Focus Session: Seeking Evidence of Impact

Join us April 13 and 14 for “Seeking Evidence of Impact,” the 2011 ELI Online Spring Focus Session, where we will engage the teaching and learning community in exploring initial questions on seeking evidence. Do our innovations accomplish our desired outcomes? How do we define “impact”? How do we measure impact? Through plenary sessions and various institutional case studies, we will:

  • Examine research strategies designed to evaluate teaching and learning innovation and practice
  • Review various approaches and designs for collecting evidence
  • Explore evaluation tools and methodologies and how they can be used to effectively measure the impact of our innovations and practices
  • Discuss evidence-of-impact challenges and opportunities at various institutional levels and across various institutional types and controls (public, private, two-year, four-year)
  • Learn how to infuse instructional objectives and pedagogy into the evidence-seeking process
  • Tour institutional case studies and the research frameworks that have been used to evaluate their effectiveness and construct future improvements
  • Checklist for Evaluating Online Courses. (2006). Southern Regional Education Board.
  • Online course evaluation rubric. (2007).
  • The Online Course Evaluation Project (OCEP) criteria. Monterey Institute for Technology and Education.
  • The Online Course Evaluation Project (OCEP) overview. (2006).

From DSC:
I especially appreciated the following excerpt from the dialog between Donald Kirkpatrick (of Kirkpatrick's Four Levels*) and Jack Phillips (another well-known evaluator of training). Kirkpatrick proposes the following series of questions:

Many people see evaluation as something to be done after training. But the art is to arrange your training in such a way that success is guaranteed. We launch a series of questions to see whether we can build a chain of impact up front: How can we make sure people learn? How do we ensure people use their newly acquired skills on the job? How do we know that the application of these skills returns the desired effect?

That’s very simple. Let me give an example. At Intel, they said, “Let’s take your four levels, start with the last one, and work backwards. What results are we looking for? What behaviors are needed to accomplish those results? What knowledge, skills, and attitudes do people need in order to behave in that way? And how can we do it in such a way that they’ll react favorably?” Answering those questions in that order is at least one less headache.

Excerpt from:

Stoel, D. (2004, January). The evaluation heavyweight match. T+D, 58(1), 46-48. Retrieved February 18, 2010, from ProQuest Education Journals. (Document ID: 535013771).

* Per Wikipedia:
The four levels of Kirkpatrick’s evaluation model essentially measure:

  • Reaction of student – what they thought and felt about the training
  • Learning – the resulting increase in knowledge or capability
  • Behavior – extent of behavior and capability improvement and implementation/application
  • Results – the effects on the business or environment resulting from the trainee’s performance
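The backward-design approach described in the Intel example above (start at Results and work back to Reaction) can be sketched as a small data structure. The four level names come from the Kirkpatrick model itself; the guiding questions and the `plan_backwards` helper are illustrative assumptions, not a standard API:

```python
# A minimal sketch of Kirkpatrick-style backward planning.
# The four level names are from the model; the paired questions
# (paraphrased from the excerpt above) and this helper are illustrative.

KIRKPATRICK_LEVELS = [
    ("Reaction", "How can we ensure participants react favorably?"),
    ("Learning", "How can we make sure people learn?"),
    ("Behavior", "How do we ensure people use their new skills on the job?"),
    ("Results", "How do we know the applied skills return the desired effect?"),
]

def plan_backwards(levels):
    """Return the levels in reverse order (Results first), as in the
    backward-design process Kirkpatrick describes."""
    return list(reversed(levels))

if __name__ == "__main__":
    for name, question in plan_backwards(KIRKPATRICK_LEVELS):
        print(f"{name}: {question}")
```

Running the sketch prints the levels Results-first, mirroring the order in which the planning questions are asked before training is designed.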

The Open Learning Initiative (OLI)

“Using intelligent tutoring systems, virtual laboratories, simulations, and frequent opportunities for assessment and feedback, the Open Learning Initiative (OLI) builds courses that are intended to enact instruction – or, more precisely, to enact the kind of dynamic, flexible, and responsive instruction that fosters learning.”


© 2021 | Daniel Christian