As teaching and learning spaces, technologies, and applications continually evolve, it’s crucial to determine where we’re headed and what we hope to accomplish. EDUCAUSE, higher education’s largest technology association, is offering a variety of webinars and online sessions in the coming months exploring key topics in the future of higher ed teaching and learning:

* A primary goal of the Horizon Report is that this research will help to inform the choices institutions are making about technology to improve, support or extend teaching, learning and creative inquiry in higher education across the globe. There is no fee for participating in this webinar.

A smorgasbord of ideas to put on your organization’s radar! [Christian]

From DSC:
At the Next Generation Learning Spaces Conference, held recently in San Diego, CA, I moderated a panel discussion re: AR, VR, and MR. I started off our panel discussion with some introductory remarks meant to make sure that numerous ideas were on the radars of attendees’ organizations. Then Vinay and Carrie did a super job of addressing several topics and questions (Mary was unable to make it that day, as she got stuck in the UK due to transportation-related issues).

That said, I didn’t get a chance to finish the second part of the presentation, which I’ve listed below in both 4:3 and 16:9 formats. So I made a recording of those ideas, and I’m relaying it to you in the hopes that it can help you and your organization.

Presentations/recordings:

Audio/video recording (187 MB MP4 file)

Again, I hope you find this information helpful.

Thanks,
Daniel

Key issues in teaching and learning 2017 — from the EDUCAUSE Learning Initiative (ELI)

Excerpt:

Since 2011, ELI has surveyed the higher education teaching and learning community to identify its key issues. The community is wide in scope: we solicit input from all those participating in the support of the teaching and learning mission, including professionals from the IT organization, the center for teaching and learning, the library, and the dean’s and provost’s offices.

HarvardX rolls out new adaptive learning feature in online course — from edscoop.com by Corinne Lestch
Students in MOOC adaptive learning experiment scored nearly 20 percent better than students using more traditional learning approaches.

Excerpt:

Online courses at Harvard University are adapting on the fly to students’ needs.

Officials at the Cambridge, Massachusetts, institution announced a new adaptive learning technology that was recently rolled out in a HarvardX online course. The feature offers tailored course material that directly correlates with student performance while the student is taking the class, as well as tailored assessment algorithms.

HarvardX is an independent university initiative that was launched in parallel with edX, the online learning platform that was created by Harvard and Massachusetts Institute of Technology. Both HarvardX and edX run massive open online courses. The new feature has never before been used in a HarvardX course, and has only been deployed in a small number of edX courses, according to officials.

From DSC:
Given the growth of AI, this is certainly radar-worthy — something that’s definitely worth pulse-checking to see where opportunities exist to leverage these types of technologies. What we now know of as adaptive learning will likely take an enormous step forward in the next decade.
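
For those wondering what “adaptive” means mechanically, here is a minimal, illustrative sketch in Python of the core loop behind adaptive item selection: serve the item whose difficulty best matches the learner’s current estimated ability, then update that estimate after each response. Harvard hasn’t published its actual algorithms, so every name and number below is hypothetical.

    def update_ability(ability, correct, step=0.1):
        # Nudge the ability estimate up after a correct answer, down otherwise.
        return ability + step if correct else ability - step

    def next_item(unanswered, ability):
        # Serve the item whose difficulty is closest to the current estimate.
        return min(unanswered, key=lambda item: abs(item["difficulty"] - ability))

    items = [
        {"id": "q1", "difficulty": 0.2},
        {"id": "q2", "difficulty": 0.5},
        {"id": "q3", "difficulty": 0.8},
    ]
    ability = 0.5                      # start from a neutral prior
    item = next_item(items, ability)   # -> q2, the mid-difficulty item
    ability = update_ability(ability, correct=True)
    items.remove(item)                 # don't serve the same item twice

Real systems use far richer models (e.g., item response theory), but the feedback loop is the same.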

IBM’s assertion rings in my mind:

I’m cautiously hopeful that these types of technologies can extend beyond K-12 and help us deal with the current need to be lifelong learners, and the need to constantly reinvent ourselves — while providing us with more choice and more control over our learning. I’m hopeful that learners will be able to pursue their passions, and enlist the help of other learners and/or (human) subject matter experts as needed.

I don’t see these types of technologies replacing any teachers, professors, or trainers. That said, they should be able to do some of the heavy lifting of teaching and learning in order to help someone learn about a new topic.

Again, this is one piece of the Learning from the Living [Class] Room vision that we see developing.

From DSC:
The following article reminded me of a vision that I’ve had for the last few years…

  • How to Build a Production Studio for Online Courses — from campustechnology.com by Dian Schaffhauser
    At the College of Business at the University of Illinois, video operations don’t come in one size. Here’s how the institution is handling studio setup for MOOCs, online courses, guest speakers and more.

Though I’m a huge fan of online learning, why build a production studio meant to support online courses only? Let’s take it a step further and design a space that can address content development for online learning as well as for blended learning — which can include flipped classroom approaches.

To do so, colleges and universities need to build something akin to what the National University of Singapore has done. I would like to see institutions create facilities large enough to house multiple types of recording studios. Each facility would feature:

  • One room that has a lightboard and a mobile whiteboard in it — let the faculty member choose which surface they want to use

  • A recording booth with a powerful, large-screen iMac running ScreenFlow. The booth would also include a professional microphone, a pop filter, sound-absorbing acoustical panels, and more. Blackboard Collaborate could be used here as well…especially with the Application Sharing feature turned on and/or just showing one’s PowerPoint slides, with or without video of the faculty member…whatever they prefer.

  • Another recording booth, this one with a PC running Adobe Captivate, Camtasia Studio, Screencast-O-Matic, or similar tools. It would be outfitted the same way as the Mac booth: a professional microphone, a pop filter, sound-absorbing acoustical panels, and the option of using Blackboard Collaborate.

  • Another recording booth with an iPad loaded with apps such as Explain Everything.

  • A large recording studio similar to what’s described in the article — a room that incorporates a full-width green screen, video monitors, a tablet, a podium, several cameras, high-end mics, and more. Or, if the budget allows for it, a really high-end broadcasting/recording studio like the one Harvard Business School is using.

A piece of this facility could look and act like the Sound Lab at the Museum of Pop Culture (MoPOP).

5 Online Education Trends to Watch in 2017 — from usnews.com by Jordan Friedman
Experts predict more online programs will offer alternative credentials and degrees in specialized fields.

Excerpts:

  1. Greater emphasis on nontraditional credentials
  2. Increased use of big data to measure student performance
  3. Greater incorporation of artificial intelligence into classes
  4. Growth of nonprofit online programs
  5. Online degrees in surprising and specialized disciplines

The Future of Online Learning Is Offline: What Strava Can Teach Digital Course Designers — from edsurge.com by Amy Ahearn

Excerpt:

I became a Strava user in 2013, around the same time I became an online course designer. Quickly I found that even as I logged runs on Strava daily, I struggled to find the time to log into platforms like Coursera, Udemy or Udacity to finish courses produced by my fellow instructional designers. What was happening? Why was the fitness app so “sticky” as opposed to the online learning platforms?

As a thought experiment, I tried to recast my Strava experience in pedagogical terms. I realized that I was recording hours of deliberate practice (my early morning runs), formative assessments (the occasional speed workout on the track) and even a few summative assessments (races) on the app. Strava was motivating my consistent use by overlaying a digital grid on my existing offline activities. It let me reconnect with college teammates who could keep me motivated. It enabled me to analyze the results of my efforts and compare them to others. I didn’t have to be trapped behind a computer to benefit from this form of digital engagement—yet it was giving me personalized feedback and results. How could we apply the same practices to learning?

I’ve come to believe that one of the biggest misunderstandings about online learning is that it has to be limited to things that can be done in front of a computer screen. Instead, we need to reimagine online courses as something that can enable the interplay between offline activities and digital augmentation.

A few companies are heading that way. Edthena enables teachers to record videos of themselves teaching and then upload these to the platform to get feedback from mentors.

DIY’s JAM online courses let kids complete hands-on activities like drawing or building with LEGOs and then have them upload pictures of their work to earn badges and share their projects.

My team at +Acumen has built online courses that let teams complete projects together offline and then upload their prototypes to the NovoEd platform to receive feedback from peers. University campuses are integrating Kaltura into their LMS platforms to enable students to capture and upload videos.

We need to focus less on building multiple choice quizzes or slick lecture videos and more on finding ways to robustly capture evidence of offline learning that can be validated and critiqued at scale by peers and experts online.
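
From DSC:
Amy’s point about capturing evidence of offline learning raises a design question: what would such evidence look like as data? Here is a minimal sketch in Python. It is purely hypothetical (not the schema of NovoEd, Edthena, or any other platform); it just illustrates that an uploaded artifact plus structured peer reviews is enough to let peers and experts validate and critique offline work at scale.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class PeerReview:
        reviewer: str
        comments: str
        rubric_score: int              # e.g., 1-5 against a shared rubric

    @dataclass
    class OfflineEvidence:
        learner: str
        activity: str                  # e.g., "team prototype built offline"
        artifact_url: str              # the uploaded photo/video of the work
        reviews: List[PeerReview] = field(default_factory=list)

        def average_score(self) -> float:
            # A simple roll-up that peers and experts could produce at scale.
            return sum(r.rubric_score for r in self.reviews) / len(self.reviews)

    evidence = OfflineEvidence("pat", "prototype pitch", "https://example.com/clip.mp4")
    evidence.reviews.append(PeerReview("mentor1", "Clear problem framing.", 4))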

From DSC:
When I saw the article below, I couldn’t help but wonder…what are the teaching & learning-related ramifications when new “skills” are constantly being added to devices like Amazon’s Alexa?

What does it mean for:

  • Students / learners
  • Faculty members
  • Teachers
  • Trainers
  • Instructional Designers
  • Interaction Designers
  • User Experience Designers
  • Curriculum Developers
  • …and others?

Will the capabilities found in Alexa simply come bundled as a part of the “connected/smart TVs” of the future? Hmm….

NASA unveils a skill for Amazon’s Alexa that lets you ask questions about Mars — from geekwire.com by Kevin Lisota

Excerpt:

Amazon’s Alexa has gained many skills over the past year, such as being able to read tweets or deliver election results and fantasy football scores. Starting on Wednesday, you’ll be able to ask Alexa about Mars.

The new skill for the voice-controlled speaker comes courtesy of NASA’s Jet Propulsion Laboratory. It’s the first Alexa app from the space agency.

Tom Soderstrom, the chief technology officer at NASA’s Jet Propulsion Laboratory, was on hand at the AWS re:Invent conference in Las Vegas tonight to make the announcement.
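
From DSC:
For those curious about what’s under the hood: a custom Alexa skill is typically a small web service (often an AWS Lambda function) that receives JSON describing the user’s intent and returns JSON containing the speech to say back. Below is a minimal sketch in Python; the intent name and responses are hypothetical, not NASA’s actual code.

    def lambda_handler(event, context):
        # Alexa sends a JSON "request" describing what the user asked for.
        request = event["request"]
        if request["type"] == "LaunchRequest":
            return speak("Welcome. Ask me a question about Mars.")
        if (request["type"] == "IntentRequest"
                and request["intent"]["name"] == "MarsFactIntent"):  # hypothetical
            return speak("Mars is the fourth planet from the sun.")
        return speak("Sorry, I didn't catch that.")

    def speak(text):
        # The response envelope Alexa expects: plain-text speech, end the session.
        return {
            "version": "1.0",
            "response": {
                "outputSpeech": {"type": "PlainText", "text": text},
                "shouldEndSession": True,
            },
        }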

Also see:

What Is Alexa? What Is the Amazon Echo, and Should You Get One? — from thewirecutter.com by Grant Clauser

Amazon launches new artificial intelligence services for developers: Image recognition, text-to-speech, Alexa NLP — from geekwire.com by Taylor Soper

Excerpt (emphasis DSC):

Amazon today announced three new artificial intelligence-related toolkits for developers building apps on Amazon Web Services.

At the company’s AWS re:Invent conference in Las Vegas, Amazon showed how developers can use three new services — Amazon Lex, Amazon Polly, and Amazon Rekognition — to build artificial intelligence features into apps for platforms like Slack, Facebook Messenger, ZenDesk, and others.

The idea is to let developers utilize the machine learning algorithms and technology that Amazon has already created for its own processes and services like Alexa. Instead of developing their own AI software, AWS customers can simply use an API call or the AWS Management Console to incorporate AI features into their own apps.
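
From DSC:
To make “simply use an API call” concrete, here is a minimal sketch of calling Amazon Polly from Python with the boto3 SDK. It assumes AWS credentials are already configured; the voice, region, and file name are arbitrary choices.

    import boto3

    # Create a Polly client (assumes AWS credentials are configured).
    polly = boto3.client("polly", region_name="us-east-1")

    # One API call: turn text into lifelike speech.
    response = polly.synthesize_speech(
        Text="Hello from Amazon Polly.",
        OutputFormat="mp3",
        VoiceId="Joanna",              # one of Polly's built-in voices
    )

    # The audio comes back as a binary stream; save it as an MP3 file.
    with open("hello.mp3", "wb") as f:
        f.write(response["AudioStream"].read())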

Amazon announces three new AI services, including a text-to-voice service, Amazon Polly — by D.B. Hebbard

AWS Announces Three New Amazon AI Services
Amazon Lex, the technology that powers Amazon Alexa, enables any developer to build rich, conversational user experiences for web, mobile, and connected device apps; preview starts today

Amazon Polly transforms text into lifelike speech, enabling apps to talk with 47 lifelike voices in 24 languages

Amazon Rekognition makes it easy to add image analysis to applications, using powerful deep learning-based image and face recognition

Capital One, Motorola Solutions, SmugMug, American Heart Association, NASA, HubSpot, Redfin, Ohio Health, DuoLingo, Royal National Institute of Blind People, LingApps, GoAnimate, and Coursera are among the many customers using these Amazon AI Services

Excerpt:

SEATTLE–(BUSINESS WIRE)–Nov. 30, 2016– Today at AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), announced three Artificial Intelligence (AI) services that make it easy for any developer to build apps that can understand natural language, turn text into lifelike speech, have conversations using voice or text, analyze images, and recognize faces, objects, and scenes. Amazon Lex, Amazon Polly, and Amazon Rekognition are based on the same proven, highly scalable Amazon technology built by the thousands of deep learning and machine learning experts across the company. Amazon AI services all provide high-quality, high-accuracy AI capabilities that are scalable and cost-effective. Amazon AI services are fully managed services so there are no deep learning algorithms to build, no machine learning models to train, and no up-front commitments or infrastructure investments required. This frees developers to focus on defining and building an entirely new generation of apps that can see, hear, speak, understand, and interact with the world around them.

To learn more about Amazon Lex, Amazon Polly, or Amazon Rekognition, visit:
https://aws.amazon.com/amazon-ai

Top 200 Tools for Learning 2016: Overview — from c4lpt.co.uk by Jane Hart

Also see Jane’s:

  1. TOP 100 TOOLS FOR PERSONAL & PROFESSIONAL LEARNING (for formal/informal learning and personal productivity)
  2. TOP 100 TOOLS FOR WORKPLACE LEARNING (for training, e-learning, performance support, and social collaboration)
  3. TOP 100 TOOLS FOR EDUCATION (for use in primary and secondary (K12) schools, colleges, universities, and adult education)

Also see Jane’s “Best of Breed 2016” where she breaks things down into:

  1. Instructional tools
  2. Content development tools
  3. Social tools
  4. Personal tools

From DSC:
I have attended the Next Generation Learning Spaces Conference for the past two years. Both conferences were very solid, and they made a significant impact on our campus: they provided the knowledge, research, data, ideas, contacts, and catalyst for us to move forward with building a Sandbox Classroom on campus. This new, collaborative space allows us to experiment with different pedagogies as well as technologies. As such, we’ve been able to experiment much more with active learning-based methods of teaching and learning. We’re still in Phase I of this new space, and we’re learning new things all of the time.

For the upcoming conference in February, I will be moderating a New Directions in Learning panel on the use of augmented reality (AR), virtual reality (VR), and mixed reality (MR). Time permitting, I hope that we can also address other promising, emerging technologies that are heading our way, such as chatbots, personal assistants, artificial intelligence, the Internet of Things, tvOS, blockchain, and more.

The goal of this quickly-moving, engaging session will be to provide a smorgasbord of ideas to generate creative, innovative, and big thinking. We need to think about how these topics, trends, and technologies relate to what our next generation learning environments might look like in the near future — and put these things on our radars if they aren’t already there.

Key takeaways for the panel discussion:

  • Reflections regarding the affordances that new developments in Human-Computer Interaction (HCI) — such as AR, VR, and MR — might offer for our learning and our learning spaces (or is our concept of what constitutes a learning space about to significantly expand?)
  • An update on the state of the approaching ed tech landscape
  • Creative, new thinking: What might our next generation learning environments look like in 5-10 years?

I’m looking forward to catching up with friends, meeting new people, and to the solid learning that I know will happen at this conference. I encourage you to check out the conference and register soon to take advantage of the early bird discounts.

© 2016 Learning Ecosystems