Virtual classes shouldn’t be cringeworthy. Here are 5 tips for teaching live online — from edsurge.com by Bonni Stachowiak (Columnist)

Excerpt:

Dear Bonni: I’m wanting to learn about best practices for virtual courses that are “live” (e.g., using a platform like Zoom). It differs both from face-to-face classroom learning and traditional (asynchronous) online courses. I’d love to know about resources addressing this learning format. —Keith Johnson, director of theological development at Cru. My team facilitates and teaches graduate-level theological courses for a non-profit.

Teaching a class by live video conference is quite different from being in person with a room full of students. But there are some approaches we can draw from traditional classrooms that work quite well in a live, online environment.

Here are some recommendations for virtual teaching…

 

 

Can space activate learning? UC Irvine seeks to find out with $67M teaching facility  — from edsurge.com by Sydney Johnson

Excerpt:

When class isn’t in session, UC Irvine’s shiny new Anteater Learning Pavilion looks like any modern campus building. There are large lecture halls, hard-wired lecture capture technology, smaller classrooms, casual study spaces and brightly colored swivel chairs.

But there’s more going on in this three-level, $67-million facility, which opened its doors in September. For starters, the space is dedicated to “active learning,” a term that often refers to teaching styles that go beyond a one-way lecture format. That could range from simply giving students a chance to pause and discuss with peers, to role playing, to polling students during class, and more.

To find out what that really looks like—and more importantly, if it works—the campus is also conducting a major study over the next year to assess active learning in the new building.

 

 

 

 

The information below is from Heather Campbell at Chegg
(emphasis DSC)


 

Chegg Math Solver is an AI-driven tool to help students understand math. It is more than just a calculator – it explains the approach to solving the problem. So, students won’t just copy the answer; they will understand the approach and be able to solve similar problems as well. Most importantly, students can dig deeper into a problem and see why it’s solved that way. Chegg Math Solver.

In every subject, there are many key concepts and terms that are crucial for students to know and understand. Often it can be hard to determine what the most important concepts and terms are for a given subject, and even once you’ve identified them you still need to understand what they mean. To help you learn and understand these terms and concepts, we’ve provided thousands of definitions, written and compiled by Chegg experts. Chegg Definition.

 

 

 

 

 


From DSC:
I see this type of functionality as a piece of a next generation learning platform — a piece of the Learning from the Living [Class] Room type of vision. Great work here by Chegg!

Likely, students will also be able to take pictures of their homework, submit it online, and have that image/problem analyzed for correctness and/or to see where things went wrong.

 

 


 

 

Alexa, get me the articles (voice interfaces in academia) — from blog.libux.co by Kelly Dagan

Excerpt:

Credit to Jill O’Neill, who has written an engaging consideration of applications, discussions, and potentials for voice-user interfaces in the scholarly realm. She details a few use case scenarios: finding recent, authoritative biographies of Jane Austen; finding if your closest library has an item on the shelf now (and whether it’s worth the drive based on traffic).

Coming from an undergraduate-focused (and library) perspective, I can think of a few more:

  • asking if there are any group study rooms available at 7 pm and making a booking
  • finding out if [X] is open now (Archives, the Cafe, the Library, etc.)
  • finding three books on the Red Brigades, seeing if they are available, and saving the locations
  • grabbing five research articles on stereotype threat, to read later

 


 

 

 

From DSC:
I have often reflected on differentiation, or what some call personalized learning and/or customized learning. How does a busy teacher, instructor, professor, or trainer realistically achieve it?

It’s very difficult and time-consuming to do, for sure. It also requires a team of specialists to achieve such a holy grail of learning — as one person can’t know it all. That is, one educator doesn’t have the necessary time, skills, or knowledge to address so many different learning needs and levels!

  • Think of different cognitive capabilities — from students who have special learning needs and challenges to gifted students
  • Or learners who have different physical capabilities or restrictions
  • Or learners who have different backgrounds and/or levels of prior knowledge
  • Etc., etc., etc.

Educators and trainers have so many things on their plates that it’s very difficult to come up with _X_ lesson plans/agendas/personalized approaches, etc. On the other side of the table, how do students from a vast array of backgrounds and cognitive skill levels get the main points of a chapter or piece of text? How can they self-select the level of difficulty, and/or start at a “basics” level and work their way up to harder/more detailed levels if they can cognitively handle that detail/complexity? Conversely, how does a learner get just the boiled-down version of a piece of text?

Well… just as with the flipped classroom approach, I’d like to suggest that we flip things a bit and enlist teams of specialists at the publishers to fulfill this need. Move things to the content creation end — not so much the delivery end of things. Publishers’ teams could play a significant, hugely helpful role in providing customized learning to learners.

Some of the ways that this could happen:

Use an HTML-like language when writing a textbook, such as:

<MainPoint> The text for the main point here. </MainPoint>

<SubPoint1>The text for the subpoint 1 here.</SubPoint1>

<DetailsSubPoint1>More detailed information for subpoint 1 here.</DetailsSubPoint1>

<SubPoint2>The text for the subpoint 2 here.</SubPoint2>

<DetailsSubPoint2>More detailed information for subpoint 2 here.</DetailsSubPoint2>

<SubPoint3>The text for the subpoint 3 here.</SubPoint3>

<DetailsSubPoint3>More detailed information for subpoint 3 here.</DetailsSubPoint3>

<SummaryOfMainPoints>A list of the main points that a learner should walk away with.</SummaryOfMainPoints>

<BasicsOfMainPoints>Here is a listing of the main points, but put in alternative words and more basic ways of expressing those main points. </BasicsOfMainPoints>

<Conclusion> The text for the concluding comments here.</Conclusion>

 

<BasicsOfMainPoints> could also be called <AlternativeExplanations>.
Bottom line: this tag would put the main points forth using very straightforward terms.

Another tag could address how the topic/chapter is relevant:

<RealWorldApplication>This short paragraph should illustrate real-world examples of this particular topic. Why does this topic matter? How is it relevant?</RealWorldApplication>

 

On the students’ end, they could use an app that works with such tags to let a learner quickly see/review the different layers (a rough sketch of one possible approach follows the list below). That is:

  • Show me just the main points
  • Then add on the sub points
  • Then fill in the details
    OR
  • Just give me the basics via alternative ways of expressing these things. I won’t remember all the details. Put things in easy-to-understand wording/ideas.
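
To make this concrete, here is a minimal sketch (in Python) of how such an app might filter the layers. It assumes a chapter authored with the tags above and wrapped in a single <Chapter> element; the sample chapter text and the “layer” groupings are purely illustrative assumptions, not an existing standard or product.

    import xml.etree.ElementTree as ET

    # A tiny sample chapter authored with the tags described above (illustrative only)
    CHAPTER = """
    <Chapter>
      <MainPoint>Supply and demand determine market prices.</MainPoint>
      <SubPoint1>When supply rises and demand stays flat, prices tend to fall.</SubPoint1>
      <DetailsSubPoint1>Example: a bumper apple crop pushes grocery prices down.</DetailsSubPoint1>
      <BasicsOfMainPoints>If there is a lot of something, it usually costs less.</BasicsOfMainPoints>
    </Chapter>
    """

    # Which tags belong to which "layer" a learner can toggle on or off
    LAYERS = {
        "main":    ["MainPoint", "SummaryOfMainPoints", "Conclusion"],
        "sub":     ["SubPoint1", "SubPoint2", "SubPoint3"],
        "details": ["DetailsSubPoint1", "DetailsSubPoint2", "DetailsSubPoint3"],
        "basics":  ["BasicsOfMainPoints", "RealWorldApplication"],
    }

    def show(chapter_xml, layers):
        """Return only the text of the tags that belong to the requested layers."""
        wanted = {tag for layer in layers for tag in LAYERS[layer]}
        root = ET.fromstring(chapter_xml)
        return [el.text.strip() for el in root if el.tag in wanted and el.text]

    # "Show me just the main points," then progressively add more detail...
    print(show(CHAPTER, ["main"]))
    print(show(CHAPTER, ["main", "sub", "details"]))
    # ...or "just give me the basics."
    print(show(CHAPTER, ["basics"]))

A learner-facing app would wrap this kind of filtering in a simple interface (expand/collapse buttons, a “basics vs. details” toggle, and so on), but the underlying idea is just the selective display of tagged layers.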

 

It’s like the layers of a Microsoft HoloLens app of the human anatomy:

 

Or it’s like different layers of a chapter of a “textbook” — so a learner could quickly collapse/expand the text as needed:

 

This approach could be helpful at all kinds of learning levels. For example, it could be very helpful for law school students to obtain outlines for cases or for chapters of information. Similarly, it could be helpful for dental or medical school students to get the main points as well as detailed information.

Also, as Artificial Intelligence (AI) grows, the system could check a learner’s cloud-based learner profile to see their reading level, prior knowledge, any IEPs on file, their learning preferences (audio, video, animations, etc.), and so on, to further provide a personalized/customized learning experience.
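
Extending the little sketch above, a hypothetical learner profile could even pick sensible default layers automatically before the learner fine-tunes them. The profile fields below are made up purely for illustration; a real system would need agreed-upon profile data and serious privacy protections.

    # Purely illustrative: choose starting layers from a hypothetical learner profile
    def default_layers(profile):
        """Pick default layers based on made-up profile fields."""
        if profile.get("iep_on_file") or profile.get("reading_level", 10) < 6:
            return ["basics"]                  # start with plain-language summaries
        if profile.get("prior_knowledge") == "high":
            return ["main", "sub", "details"]  # jump straight to full depth
        return ["main", "sub"]                 # a reasonable middle ground

    print(show(CHAPTER, default_layers({"reading_level": 5, "iep_on_file": True})))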

To recap:

  • “Textbooks” continue to be created by teams of specialists, but those teams add specialists with knowledge of students with special needs as well as of gifted students. For example, a team could have experts within the field of Special Education help create one of the overlays/filters/lenses — i.e., to reword things. If the text were talking about how to hit a backhand or a forehand, the alternative text layer could be summed up to say that tennis is a sport…and that a sport is something people play. On the other end of the spectrum, the text could dive deeply into the various grips a person could use to hit a forehand or backhand.
  • This puts the power of offering differentiation at the point of content creation/development. (Differentiation could also be provided at the delivery end, but again, the time and expertise are likely not going to be there.)
  • Publishers create “overlays” or various layers that can be turned on or off by the learners
  • Learners can see whole chapters, or just the main ideas, topic sentences, and/or details, much as HTML tags structure web pages.
  • Chapters can be instantly collapsed to main ideas/outlines.

 

 

Top Trends in Active and Collaborative Learning — from thesextantgroup.com by Joe Hammett

Excerpts:

My daughter is a maker. She spends hours tinkering with sewing machines and slime recipes, building salamander habitats and the like. She hangs out with her school friends inside apps that teach math and problem solving through multi-player games. All the while, they are learning to communicate and collaborate in ways that are completely foreign to their grandparents’ generation. She is 10 years old and represents a shift in human cognitive processing brought about by the mastery of technology from a very young age. Her generation and those that come after have never known a time without technology. Personal devices have changed the shared human experience and there is no turning back.

The spaces this new human chooses to occupy must cater to their style of existence. They see every display as interactive and are growing up knowing that the entirety of human knowledge is available to them by simply asking Alexa. The 3D printer is a familiar concept and space travel for pleasure will be the norm when they have children of their own.

Current trends in active and collaborative learning are evolving alongside these young minds and when appropriately implemented, enable experiential learning and creative encounters that are changing the very nature of the learning process. Attention to the spaces that will support the educators is also paramount to this success. Lesson plans and teaching style must flip with the classroom. The learning space is just a room without the educator and their content.

 


8. Flexible and Reconfigurable
With floor space at a premium, classrooms need to be able to adapt to a multitude of uses and pedagogies. Flexible furniture will allow the individual instructor freedom to set up the space as needed for their intended activities without impacting the next person to use the room. Construction material choices are key to achieving an easily reconfigurable space. Raised floors and individually controllable lighting fixtures allow a room to go from lecture to group work with ease. Whiteboard paints and rail mounting systems make walls reconfigurable too!

Active Learning, Flipped Classroom, SCALE-UP, TEAL Classroom: whatever label you choose to place before it, classrooms and learning spaces of all sorts are changing. The occupants of these spaces demand that they are able to effectively, and comfortably, share ideas and collaborate on projects with their counterparts both in person and in the ether. A global shift is happening in the way humans share ideas. Disruptive technology, on a level not seen since the assembly line, is driving a change in the way humans interact with other humans. The future is collaborative.

 

 

 

Robots won’t replace instructors, 2 Penn State educators argue. Instead, they’ll help them be ‘more human.’ — from edsurge.com by Tina Nazerian

Excerpt:

Specifically, [AI] will help them prepare for and teach their courses through several phases—ideation, design, assessment, facilitation, reflection, and research. The two described a few prototypes they’ve built to show what that might look like.

 

Also see:

The future of education: Online, free, and with AI teachers? — from fool.com by Simon Erickson
Duolingo is using artificial intelligence to teach 300 million people a foreign language for free. Will this be the future of education?

Excerpts:

While it might not get a lot of investor attention, education is actually one of America’s largest markets.

The U.S. has 20 million undergraduates enrolled in colleges and universities right now and another 3 million enrolled in graduate programs. Those undergrads paid an average of $17,237 for tuition, room, and board at public institutions in the 2016-17 school year and $44,551 for private institutions. Graduate education varies widely by area of focus, but the average amount paid for tuition alone was $24,812 last year.

Add all of those up, and America’s students are paying more than half a trillion dollars each year for their education! And that doesn’t even include the interest amassed for student loans, the college-branded merchandise, or all the money spent on beer and coffee.

Keeping the costs down
Several companies are trying to find ways to make college more affordable and accessible.

 

But after we launched, we have so many users that nowadays, if the system wants to figure out whether it should teach plurals before adjectives or adjectives before plurals, it just runs a test with about 50,000 people. So for the next 50,000 people that sign up (it takes about six hours for 50,000 new users to come to Duolingo), it teaches plurals before adjectives to half of them, and adjectives before plurals to the other half. Then it measures which ones learn better. And so, once and for all, it can figure out that, for this particular language for example, it turns out to be better to teach plurals before adjectives.

So every week the system is improving. It’s making itself better at teaching by learning from our learners. So it’s doing that just based on huge amounts of data. And this is why it’s become so successful I think at teaching and why we have so many users.
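
As an aside, the kind of split test described above is easy to picture in code. The snippet below is only a rough sketch of the general idea, not Duolingo’s actual system; the variant names, the quiz-score metric, and the simulated numbers are all assumptions made for illustration.

    import random

    def assign_variant(user_id):
        """Deterministically split new users into two equal groups."""
        return "plurals_first" if user_id % 2 == 0 else "adjectives_first"

    def better_variant(results):
        """Given post-lesson quiz scores per variant, pick the ordering that taught better."""
        means = {variant: sum(scores) / len(scores) for variant, scores in results.items()}
        return max(means, key=means.get)

    # Each new user is routed to one ordering or the other
    print(assign_variant(101), assign_variant(102))

    # Simulated quiz scores for ~50,000 users per variant (made-up numbers)
    results = {
        "plurals_first":    [random.gauss(0.72, 0.10) for _ in range(50_000)],
        "adjectives_first": [random.gauss(0.70, 0.10) for _ in range(50_000)],
    }
    print(better_variant(results))  # -> "plurals_first" with this simulated data

A production system would also need significance testing and other guardrails, but the core loop of assign, measure, and adopt the winner is exactly what the quote describes.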

 

 

From DSC:
I see AI helping learners, instructors, teachers, and trainers. I see AI being a tool to help do some of the heavy lifting, but people still like to learn with other people…with actual human beings. That said, a next generation learning platform could be far more responsive than what today’s traditional institutions of higher education are delivering.

 

 

Virtual digital assistants in the workplace: Still nascent, but developing — from cisco.com by Pat Brans
As workers get overwhelmed with daily tasks, they want virtual digital assistants in the workplace that can alleviate some of the burden.

Excerpts:

As life gets busier, knowledge workers are struggling with information overload.

They’re looking for a way out, and that way, experts say, will eventually involve virtual digital assistants (VDAs). Increasingly, workers need to complete myriad tasks, often seemingly simultaneously. And as the pace of business continues to drive ever faster, hands-free, intelligent technology that can speed administrative tasks holds obvious appeal.

So far, scenarios in which digital assistants in the workplace enhance productivity fall into three categories: scheduling, project management, and improved interfaces to enterprise applications. “Using digital assistants to perform scheduling has clear benefits,” Beccue said.

“Scheduling meetings and managing calendars takes a long time—many early adopters are able to quantify the savings they get when the scheduling is performed by a VDA. Likewise, when VDAs are used to track project status through daily standup meetings, project managers can easily measure the time saved.”

 

Perhaps the most important change we’ll see in future generations of VDA technology for workforce productivity will be the advent of general-purpose VDAs that help users with all tasks. These VDAs will be multi-channel (providing interfaces through mobile apps, messaging, telephone, and so on) and they will be bi-modal (enlisting text and voice).

 

 

 

 

Augmented Reality In Healthcare Will Be Revolutionary — from medicalfuturist.com

Excerpts:

1) Augmented reality can save lives through showing defibrillators nearby
2) Google Glass might help new mothers struggling with breastfeeding
3) Patients can describe their symptoms better through augmented reality
4) Nurses can find veins easier with augmented reality
5) Motivating runners through zombies
6) Pharma companies can provide more innovative drug information
7) Augmented reality can assist surgeons in the OR
8) Google’s digital contact lens can transform how we look at the world

 

How is AI used in healthcare – 5 powerful real-world examples that show the latest advances — from forbes.com by Bernard Marr

Excerpts:

1) AI-assisted robotic surgery
2) Virtual nursing assistants
3) Aid clinical judgment or diagnosis
4) Workflow and administrative tasks
5) Image analysis

 

 

Summary: A Manager’s Guide to Augmented Reality — from twnkls.com by Prof. Michael Porter

Excerpt:

The full read can be found at the bottom of this page, but we have summarized the 4 key takeaways for you:

  1. AR enables a new information-delivery paradigm
  2. AR helps to visualize
  3. Instruct and guide
  4. Eight AR strategy starting questions

 

 

What’s so great about VR? Virtually everything — from virtuallyinspired.org

Excerpt:

No doubt about it. Virtual reality isn’t just for gamers and gadget geeks anymore. In fact, as the technology gets better and cheaper, VR is the wave of the future when it comes to creating a truly memorable and effective learning experience – and for good reason.

Multiple Learning Attributes. To begin with, it empowers us to create any number of safely immersive virtual learning environments that feel and respond much as they would in real life, as students engage and explore, interact with and manipulate objects within these worlds. Imagine teleporting your students to re-enact historic battles; explore outer space; or travel the inner workings of the human body. What’s more, using sophisticated controls, they can actually “practice” complex procedures like cardiac surgery, or master difficult concepts, such as the molecular properties of brain cells.

Likewise, VR gives new meaning to the term “field trip,” by enabling students to virtually experience first-hand some of the world’s great museums, natural wonders and notable landmarks. You can also embed 360-degree objects within the virtual classroom to support course content, much as Drexel University Online is doing after assembling its one-of-a-kind VRtifacts+ repository. And you can use it to live-stream events, guest lectures and campus tours, in addition to hosting virtual community spaces where learners can meet and connect in a seemingly “real” environment.

 

 

The Modern Alternative Learning Resource: Time To Drop The Ban On Phones In Schools? — from vrfocus.com by Robert Currie
Robert Currie discusses the mobile phone’s role in education, and how, thanks in part to AR and VR, it should now be considered a top tool.

 

 

Benefits of Virtual Reality in Education — from invisible.toys

 

 

 

The AVR Platform and Classroom 3.0 Showcased at EduTECH Asia 2018 — from eonreality.com

Excerpt:

At EduTECH Asia 2018 this week in Singapore, EON Reality spent two full days speaking, promoting, and demonstrating the latest updates to the AVR Platform to the thousands of education and technology professionals in attendance.

With a focus on how the AVR Platform can best be used in the education world, EON Reality’s discussion, ‘Augmented and Virtual Reality in Education: The Shift to Classroom 3.0,’ highlighted Wednesday’s offerings with a full presentation and hands-on demos of the new tools in Creator AVR. Over the course of both days, visitors filled the EON Reality booth to get their own one-on-one experience of Creator AVR, Virtual Trainer, and the ways in which AR Assist can help out in the classroom.

The AVR Platform’s three products are the fundamental tools of EON Reality’s Classroom 3.0 vision for the Immersed Flipped Classrooms of the future. With Creator AVR — a SaaS-based learning and content creation solution — leading the way, the AVR Platform empowers Classroom 3.0 by providing teachers and educators of all types with the tools needed to create Augmented and Virtual Reality learning modules.

Bringing Asian educators from all over the continent together, EON Reality’s presence at EduTECH showed just how significantly Augmented Reality and Virtual Reality can elevate the overall educational experience going forward. After two full days of demonstrations, EON Reality introduced the AVR Platform to approximately 1500 teachers, school administration officials, and other decision-makers in Asia’s education industry.

As the AVR Platform expands to educational markets around the world, EON Reality’s revolutionary spin on traditional learning branches into new cultures and nations. With local Singaporean educational institutions like Temasek Polytechnic already onboard, the EduTECH Asia 2018 conference marked the continued spread of Classroom 3.0 and the AVR Platform on both a regional and global level.

 

 

New Virtual 3D Microscope Lab Program Offered for Online Students by Oregon State University — from virtuallyinspired.org
OSU solves degree completion issue for online biology students

Excerpt:

“We had to create an alternative that gives students the foundational experience of being in a lab where they can maneuver a microscope’s settings and adjust the images just as they would in a face-to-face environment,” said Shannon Riggs, the Ecampus director of course development and training.

Multimedia developers mounted a camera on top of an actual microscope and took pictures of what was on the slides. Using 3D modeling software, the photos were interwoven to create 3D animation. Game development software then enabled students to adjust lighting, zoom, and manipulate the images, just as in a traditional laboratory. The images were programmed to create a virtual simulation.

The final product is “an interactive web application that utilizes a custom 3D microscope and incorporates animation and real-life slide photos,” according to Victor Yee, an Ecampus assistant director of course development and training.

 

Also see:

  • YouTube to Invest $20 Million in Educational Content — from campustechnology.com by Dian Schaffhauser
    Excerpt:
    YouTube, a Google company, has announced plans to invest $20 million in YouTube Learning, an initiative hinted at during the summer. The goal: “to support education-focused creators and expert organizations that create and curate high-quality learning content on the video site.” Funding will be spent on supporting video creators who want to produce education series and wooing other education video providers to the site.

 

 
