When is Big Learning Data too Big? — from Learning TRENDS by Elliott Masie 

Excerpt from Update #822:

An interesting question arose in our conversations about Big Learning Data:

When is Big Learning Data too Big?
The question is framed around the ability of an individual or an organization to process really large amounts of data. Can a learning designer, or even a learner, “handle” really large amounts of data? When is someone (or even an organization) handicapped by the size, scope, and variety of data that is available to reflect learning patterns and outcomes? When do we want a tight summary, and when do we want to see a scattergram of many data points?

As we grow the size, volume, and variety of Big Learning Data elements, we will also need to respect the ability (or challenge) of people to process that data. A parent may hear that their kid has a B- in mathematics and want a lot more data. But the same parent does not want 1,000 data elements covering 500 sub-competencies. The goal is to find a way to present Big Learning Data to an individual in a fashion that enables them to make better sense of the process, with a “Continuum” along which they can move to get more or less data as a situational choice.
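
One way to read Masie's “Continuum” is as a detail knob over the same underlying scores. Here is a minimal Python sketch of that idea; the sub-competency data, grade cutoffs, and function names are all hypothetical assumptions made for illustration, not anything from the article:

```python
# Illustrative sketch of a "Continuum" view over sub-competency scores.
# All data, names, and grade cutoffs here are hypothetical.

from statistics import mean

def letter_grade(pct: float) -> str:
    """Map a 0-100 average to a coarse letter grade."""
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if pct >= cutoff:
            return grade
    return "F"

def summarize(scores: dict[str, list[float]], detail: int):
    """Return more or less data depending on the requested detail level.

    detail 0 -> one letter grade (the tight summary)
    detail 1 -> per-sub-competency averages
    detail 2 -> every raw data point (the scattergram end)
    """
    if detail == 0:
        return letter_grade(mean(s for pts in scores.values() for s in pts))
    if detail == 1:
        return {topic: round(mean(pts), 1) for topic, pts in scores.items()}
    return scores

# A parent starts with the summary and drills down only on demand.
math_scores = {
    "fractions": [82, 78, 85],
    "geometry": [91, 88],
    "word problems": [74, 70, 79],
}
print(summarize(math_scores, 0))  # "B"
print(summarize(math_scores, 1))  # per-topic averages
```

The point of the sketch is the situational choice: the same data backs every level, and the viewer, not the system, decides how much of it to see.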


Also related, an excerpt from Three Archetypes of the Future Post-Secondary Instructor — from evoLLLution.com by Chris Proulx

The Course Hacker
The last and perhaps most speculative role of the future online instructor is the person who digs deep into the data that will be available from next-generation learning systems to target specific learning interventions to specific students, at scale. The idea of the Course Hacker is based on the emerging role of the Growth Hacker at high-growth web businesses. Mining data from web traffic, social media, email campaigns, and the like, the Growth Hacker constantly iterates on a web product or marketing campaign to seek rapid growth in users or revenue. Adapted to online education, the Course Hacker would be a faculty member with strong technical and statistical skills who would study data about which course assets were being used and by whom, which students worked more quickly or slowly, which questions caused the most problems on a quiz, which students were the most socially active in the course, which lurked yet earned high marks, and so on. Armed with those deep insights, the Course Hacker would continually adapt course content, provide support and remedial help to targeted students, and create incentives to motivate students past critical blocks in the course.
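
As a rough illustration of the kind of mining described above, here is a short Python sketch against a hypothetical quiz-attempt log; the schema, column names, and the slowness threshold are assumptions for the example, not drawn from any real learning platform:

```python
# Illustrative sketch of "Course Hacker" mining; the log schema, column
# names, and the 1.5x threshold are assumptions made for this example.

import pandas as pd

# Hypothetical quiz-attempt log: one row per student answer.
attempts = pd.DataFrame({
    "student":  ["ana", "ana", "ben", "ben", "cho", "cho"],
    "question": ["q1", "q2", "q1", "q2", "q1", "q2"],
    "correct":  [True, False, True, False, False, False],
    "seconds":  [40, 310, 55, 290, 200, 420],
})

# Which questions cause the most problems? Rank by error rate.
error_rate = 1 - attempts.groupby("question")["correct"].mean()
print(error_rate.sort_values(ascending=False))

# Which students work unusually slowly? Flag anyone whose total time
# is well above the class median, as a candidate for targeted support.
time_per_student = attempts.groupby("student")["seconds"].sum()
slow = time_per_student[time_per_student > 1.5 * time_per_student.median()]
print(slow)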


Added later on:

What do the ethical models look like? How are these models deployed rapidly, at the speed of technology? How are these models refined over time? We distilled the group discussions into a series of topics, including: student awareness (or lack of awareness) of analytics; future algorithmic science; the future of learning analytics as defined by business practices; student and faculty access to the data; and a redefinition of failure.

The arguments put forward here often take the form of rhetorical questions; the methodological purpose of presenting them this way is to frame how ethical questioning might guide future developments.