From DSC:
I have often reflected on differentiation, or what some call personalized and/or customized learning. How does a busy teacher, instructor, professor, or trainer realistically achieve this?

It’s certainly very difficult and time-consuming to do. It also requires a team of specialists to achieve such a holy grail of learning — as one person can’t know it all. That is, no single educator has the necessary time, skills, or knowledge to address so many different learning needs and levels!

  • Think of different cognitive capabilities — from students who have special learning needs and challenges to gifted students
  • Or learners who have different physical capabilities or restrictions
  • Or learners who have different backgrounds and/or levels of prior knowledge
  • Etc., etc., etc.

Educators and trainers have so many things on their plates that it’s very difficult to come up with _X_ lesson plans/agendas/personalized approaches, etc. On the other side of the table, how do students from a vast array of backgrounds and cognitive skill levels get the main points of a chapter or piece of text? How can they self-select the level of difficulty and/or start at a “basics” level and work their way up to harder/more detailed levels if they can cognitively handle that level of detail/complexity? Conversely, how do I as a learner get the boiled-down version of a piece of text?

Well… just as with the flipped classroom approach, I’d like to suggest that we flip things a bit and enlist teams of specialists at the publishers to fulfill this need. Move things to the content creation end — not so much at the delivery end of things. Publishers’ teams could play a significant, hugely helpful role in providing customized learning to learners.

Some of the ways that this could happen:

Use an HTML-like markup language when writing a textbook, such as:

<MainPoint> The text for the main point here. </MainPoint>

<SubPoint1>The text for the subpoint 1 here.</SubPoint1>

<DetailsSubPoint1>More detailed information for subpoint 1 here.</DetailsSubPoint1>

<SubPoint2>The text for the subpoint 2 here.</SubPoint2>

<DetailsSubPoint2>More detailed information for subpoint 2 here.</DetailsSubPoint2>

<SubPoint3>The text for the subpoint 3 here.</SubPoint3>

<DetailsSubPoint3>More detailed information for subpoint 3 here.</DetailsSubPoint3>

<SummaryOfMainPoints>A list of the main points that a learner should walk away with.</SummaryOfMainPoints>

<BasicsOfMainPoints>Here is a listing of the main points, but put in alternative words and more basic ways of expressing those main points. </BasicsOfMainPoints>

<Conclusion> The text for the concluding comments here.</Conclusion>

 

<BasicsOfMainPoints> could also be called <AlternativeExplanations>.
Bottom line: this tag would put things forth in very straightforward terms.

Another tag could address how this topic/chapter is relevant:

<RealWorldApplication>This short paragraph should illustrate real-world examples of this particular topic. Why does this topic matter? How is it relevant?</RealWorldApplication>

 

On the students’ end, they could use an app that works with such tags to allow a learner to quickly see/review the different layers. That is:

  • Show me just the main points
  • Then add on the sub points
  • Then fill in the details
    OR
  • Just give me the basics via alternative ways of expressing these things. I won’t remember all the details, so put things in easy-to-understand wording/ideas.
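To make the idea concrete, here is a minimal sketch of how such an app might filter a chapter by layer. The tag names come from the hypothetical markup proposed above; the sample chapter text and the layer groupings are invented for illustration, and none of this is a real publishing standard.

```python
import re

# Hypothetical layered-textbook markup. The tag names are the ones
# proposed above; the sample chapter text is invented for illustration.
CHAPTER = """
<MainPoint>Tennis is scored in points, games, and sets.</MainPoint>
<SubPoint1>A game is won by the first player to reach four points, by two.</SubPoint1>
<DetailsSubPoint1>Points run 15, 30, 40; a tie at 40 is called deuce.</DetailsSubPoint1>
<SummaryOfMainPoints>Scoring is hierarchical: points, games, sets.</SummaryOfMainPoints>
"""

# Which tags belong to which layer a learner can toggle on.
LAYERS = {
    "main": ["MainPoint", "SummaryOfMainPoints"],
    "sub": ["SubPoint1", "SubPoint2", "SubPoint3"],
    "details": ["DetailsSubPoint1", "DetailsSubPoint2", "DetailsSubPoint3"],
}

def extract(markup, layers):
    """Return the text of every tag belonging to the requested layers."""
    wanted = [tag for layer in layers for tag in LAYERS[layer]]
    out = []
    for tag in wanted:
        for match in re.findall(rf"<{tag}>(.*?)</{tag}>", markup, re.S):
            out.append(match.strip())
    return out

# "Show me just the main points":
print(extract(CHAPTER, ["main"]))
# Then add on the sub points:
print(extract(CHAPTER, ["main", "sub"]))
```

Filling in the details is just `extract(CHAPTER, ["main", "sub", "details"])`; collapsing back down is re-running with fewer layers.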

 

It’s like the layers of a Microsoft HoloLens app of the human anatomy:

 

Or it’s like different layers of a chapter of a “textbook” — so a learner could quickly collapse/expand the text as needed:

 

This approach could be helpful at all kinds of learning levels. For example, it could be very helpful for law school students to obtain outlines for cases or for chapters of information. Similarly, it could be helpful for dental or medical school students to get the main points as well as detailed information.

Also, as Artificial Intelligence (AI) grows, the system could check a learner’s cloud-based learner profile to see their reading level or prior knowledge, any IEP’s on file, their learning preferences (audio, video, animations, etc.), etc. to further provide a personalized/customized learning experience. 
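As a rough sketch of that idea, a system could map a learner profile to a default set of layers. The profile fields and the selection rule below are entirely hypothetical; a real system would draw on far richer data (IEPs on file, reading assessments, stated preferences).

```python
# A hypothetical cloud-based learner profile. The field names and the
# selection rule are invented for illustration, not from any real standard.
profile = {
    "reading_level": "basic",   # e.g., from a prior reading assessment
    "prefers": ["audio"],       # learning preferences (audio, video, ...)
    "has_iep": True,            # whether an IEP is on file
}

def choose_layers(profile):
    """Pick which textbook layers to show first, based on a learner profile."""
    if profile.get("reading_level") == "basic" or profile.get("has_iep"):
        # Start with the reworded, more basic expression of the main points.
        return ["BasicsOfMainPoints", "SummaryOfMainPoints"]
    # Otherwise start with the full hierarchy.
    return ["MainPoint", "SubPoint", "DetailsSubPoint"]

print(choose_layers(profile))  # → ['BasicsOfMainPoints', 'SummaryOfMainPoints']
```

The learner would remain free to override this default and expand or collapse layers at will; the profile only chooses a starting point.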

To recap:

  • “Textbooks” continue to be created by teams of specialists, but add specialists with knowledge of students with special needs as well as for gifted students. For example, a team could have experts within the field of Special Education to help create one of the overlays/or filters/lenses — i.e., to reword things. If the text was talking about how to hit a backhand or a forehand, the alternative text layer could be summed up to say that tennis is a sport…and that a sport is something people play. On the other end of the spectrum, the text could dive deeply into the various grips a person could use to hit a forehand or backhand.
  • This puts the power of offering differentiation at the point of content creation/development (differentiation could also be provided for at the delivery end, but again, time and expertise are likely not going to be there)
  • Publishers create “overlays” or various layers that can be turned on or off by the learners
  • Can see whole chapters or can see main ideas, topic sentences, and/or details. Like HTML tags for web pages.
  • Can instantly collapse chapters to main ideas/outlines.

 

 

Gartner: Immersive experiences among top tech trends for 2019 — from campustechnology.com by Dian Schaffhauser

Excerpt:

IT analyst firm Gartner has named its top 10 trends for 2019, and the “immersive user experience” is on the list, alongside blockchain, quantum computing and seven other drivers influencing how we interact with the world. The annual trend list covers breakout tech with broad impact and tech that could reach a tipping point in the near future.

 

 

 

Microsoft’s AI-powered Sketch2Code builds websites and apps from drawings — from alphr.com by Bobby Hellard
Released on GitHub, Microsoft’s AI-powered developer tool can shave hours off web and app building

Excerpt:

Microsoft has developed an AI-powered web design tool capable of turning sketches of websites into functional HTML code.

Called Sketch2Code, Microsoft AI’s senior product manager Tara Shankar Jana explained that the tool aims to “empower every developer and every organisation to do more with AI”. It was born out of the “intrinsic” problem of sending a picture of a wireframe or app designs from whiteboard or paper to a designer to create HTML prototypes.

 

 

From DSC:
After seeing the article entitled, “Scientists Are Turning Alexa into an Automated Lab Helper,” I began to wonder…might Alexa be a tool to periodically schedule & provide practice tests & distributed practice on content? In the future, will there be “learning bots” that a learner can employ to do such self-testing and/or distributed practice?
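The distributed-practice piece of that idea is easy to sketch: a “learning bot” could schedule practice tests at expanding intervals after the initial study session. The interval values below are one common spacing scheme chosen for illustration, not a prescription from the research.

```python
from datetime import date, timedelta

# Expanding review intervals, in days. This is one common
# distributed-practice scheme; the exact spacing is a design choice.
INTERVALS = [1, 3, 7, 14, 30]

def practice_schedule(study_date, intervals=INTERVALS):
    """Dates on which a 'learning bot' could trigger a practice test."""
    return [study_date + timedelta(days=sum(intervals[: i + 1]))
            for i in range(len(intervals))]

dates = practice_schedule(date(2018, 1, 1))
print(dates[0])  # 2018-01-02: first review one day after studying
```

A voice assistant skill could walk this schedule, prompting the learner with retrieval-practice questions on each date.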

 

 

From page 45 of the PDF available here:

 


 

 

 

Scientists Are Turning Alexa into an Automated Lab Helper — from technologyreview.com by Jamie Condliffe
Amazon’s voice-activated assistant follows a rich tradition of researchers using consumer tech in unintended ways to further their work.

Excerpt:

Alexa, what’s the next step in my titration?

Probably not the first question you ask your smart assistant in the morning, but potentially the kind of query that scientists may soon be leveling at Amazon’s AI helper. Chemical & Engineering News reports that software developer James Rhodes—whose wife, DeLacy Rhodes, is a microbiologist—has created a skill for Alexa called Helix that lends a helping hand around the laboratory.

It makes sense. While most people might ask Alexa to check the news headlines, play music, or set a timer because our hands are a mess from cooking, scientists could look up melting points, pose simple calculations, or ask for an experimental procedure to be read aloud while their hands are gloved and in use.

For now, Helix is still a proof-of-concept. But you can sign up to try an early working version, and Rhodes has plans to extend its abilities…

 

Also see:

Helix

 

 

Augmented Reality: Everything You Need to Know for 2018 — from isl.co by Josh Strupp

Excerpt:

Here’s the trade-off: what we gain in development ease-of-use (native SDKs, integration into existing workflows) and performance enhancements (load times, battery efficiency, render quality, integration with native apps), we lose in universality; naturally, each company wants you staying within its own ecosystem.

In a nutshell: new AR platforms from today’s tech giants are aimed at reducing technical headache so you can focus on creating amazing experiences… but they also want you creating more apps for their respective mobile ecosystems.

 

 


This AR App Teaches You How To Play The Piano — from vrscout.com by Steve Ip & Sydney Wuu
AR piano learning system with improvised jam sessions.

Excerpt:

Learning to play the piano is getting an immersive upgrade with new augmented reality (AR) piano training software called Music Everywhere. The HoloLens app aims to help students of all talent levels build fundamental music theory and performance skills. While traditional piano lessons can cost upwards of $100 per hour, Music Everywhere is free on the Microsoft Store and offers a cost-effective tutoring solution that provides students with immediate interaction feedback, making it differ greatly from watching a video tutorial.

Founded in 2017, Music Everywhere began at Carnegie Mellon’s ETC with Seth Glickman, Fu Yen Hsiao, and Byunghwan Lee realizing the nascent technology could be used for skills training. The app was the first augmented reality music learning platform to take first prize in Microsoft’s HoloLens Developer Contest, beating more than one thousand submissions.

 

 

 

 

Making Virtual Reality a Reality in Today’s Classrooms — from thejournal.com by Meredith Thompson

Excerpt:

The market for virtual reality applications is growing at a rapid pace, and is expected to double in the next five years (Bolkan, 2017). As the cost of equipment falls and schools have greater access to technology, there is great interest in virtual reality as an educational tool. A small but growing group of educators have started to integrate virtual reality in their classrooms, with promising results (Castaneda, Cechony & Bautista, 2017). We reached out to teachers and administrators who are currently using virtual reality in their classrooms to hear their perspectives and practical strategies for infusing this resource into their classrooms.

Teachers have creative ideas for how to incorporate immersive education in current classrooms: how to select activities, how to set up the classroom, how to get support during the activity and how to transport devices. Teachers also shared their ideas for future applications of VR, including how to deepen the learning experience and to expand the reach of these technologies to a greater population of students.

Here we share three vignettes of three different approaches: a social studies class in a suburban school district, a district-wide perspective from an urban school district and a class designed entirely around understanding and implementing VR for other classrooms. We also share how we are using these ideas to inform our own project in designing a collaborative immersive virtual reality educational game for introductory high school biology.

 

 

3 best practices from VR implementation across departments — from ecampusnews.com by Andrew Woodberry
Professors across many disciplines are embracing VR technology as an integral part of their learning tools

Excerpts:

VR is already being used for many real-world applications–hiring, training, marketing/sales, medical purposes, entertainment, and more–and is worth considering for many different university departments.

At German University in Cairo, architecture students used our platform to create tours of historical Cairo buildings, complete with educational hotspot overlays on particularly interesting features. This multimedia approach educated students without them having to travel to the buildings. It also made for a “stickier” learning experience for the students involved in creating it.

At Emporia State University, for example, the forensic science students view virtual crime scenes recorded at the Kansas Bureau of Investigation in Topeka. Forensic-science students can look for clues and learn facts via voiceover, mimicking an actual crime-scene inquiry quite impressively.

 

 

Augmented and virtual reality products to get excited about in 2018 — from gearbrain.com by Alistair Charlton
CES 2018 showed us the way forward for AR and VR this year

Excerpt:

Just as televisions and driverless cars have become part of the furniture at the CES technology show, so too have virtual and augmented reality headsets.

Although the momentum behind VR’s growth slowed in 2017 – the industry seemingly unsure if it should progress with a technology destined to remain a niche – AR is being welcomed into the spotlight with open arms.

Here are six AR and VR highlights from CES 2018.

 

 

Looking to boost AR and VR technology, University of Washington establishes center in Seattle — from edscoop.com by Emily Tate
The UW Reality Lab will focus on “core research advances” in augmented and virtual reality.

Excerpt:

The University of Washington, hoping to get ahead in the burgeoning field of augmented reality (AR) and virtual reality (VR), has launched the UW Reality Lab, a center for research, education and innovation in AR and VR.

One of the first research centers in the world built for AR and VR projects, the UW Reality Lab is also located in Seattle — a hotspot for technology companies, from behemoths like Amazon and Microsoft to startups still trying to get off the ground.

 

“We’re seeing some really compelling and high-quality AR and VR experiences being built today,” Steve Seitz, center co-lead and Allen School professor, said in the university’s statement. “But, there are still many core research advances needed to move the industry forward — tools for easily creating content, infrastructure solutions for streaming 3D video, and privacy and security safeguards — that university researchers are uniquely positioned to tackle.”

 

 

 

Augmented Reality: Is it the Future of eLearning?

Excerpt:

Why Augmented Reality is Important for eLearning
According to a report released by CCS Insight, augmented and virtual reality hardware is set to become a $4 billion market by 2018. Let’s take a look at how augmented reality can be leveraged in the online learning space:

Simulated working environments
One of the most common advantages of online learning is the ability to form an environment in which users have the freedom to experiment. Since people usually learn from their mistakes, working in a consequence-free environment makes them more likely to remember the right way to do things.

Support Gamification
As online learning management systems (LMSs) already use gamification widely, augmented reality can be directly applied. In an AR training module, employees can be rewarded for performing their routine tasks the right way, which will eventually improve performance.

Immersive Learning Environments
Using a tablet or smartphone for online training means users are constantly distracted by emails, notifications from social channels, etc. This is one of the reasons why elearning content uses interactive multimedia elements to engage students. With augmented reality, elearning courses can be supported with 360° video, which engages users and removes distractions.

Motion tracking
Motion and gesture tracking are part of the AR experience. They are commonly leveraged for choosing menu items or engaging with video game-based environments.

In the online learning domain, LMSs can use this technology to track learners’ progress and ensure that they are achieving the set targets without fail. This will boost real-time training performance and improve interactivity with instant feedback.

Simply put, with augmented reality the possibilities are endless. With the growing number of Bring Your Own Device (BYOD) workplaces, it is expected that employees and learners will be delighted to use augmented reality.

 

 

Virtual Reality And Beyond: The Future Of Music Experiences — from hypebot.com by Jen Sako

Excerpt:

The Musical Future of VR
VR technology is still in its earliest stages, but musicians are already seeing how they will be able to connect with fans and make new ones without the expense of touring. In artificial environments, bands can invite music lovers into their world.

But beyond the obvious entertainment factor, VR has the potential to become a tool for education. Music students could enter a studio space using VR gear for lessons and practice. The immediate feedback provided and game-like atmosphere may keep students more motivated and engaged. Imagine methods for teaching that include ways to slow down and loop difficult parts or bringing in the composer for lessons.

VR can also connect music lovers to the many people behind the scenes involved in producing the music they enjoy. Listeners can learn about the industry and how a song comes to life. They’ll understand why it’s important to play a part in sustaining the music business.

For this technology to become a reality in itself inside consumers’ listening and learning spaces, obstacles need addressing. The hardware is still pricey, bulky and requires a power source. Apps need creators who will need more in the way of artificial intelligence.

 

 

ARiA, The AR Conference At MIT, Is The Anti-CES — from forbes.com by Charlie Fink

Excerpt:

“The ability to combine digital information with the real world is going to disrupt every business model, transform human/machine interaction, and generate innovative use cases across every discipline and in every vertical including education, healthcare, manufacturing,” Werner continued. “I see ARiA as the TED for AR, where the best minds come together to solve real work problems and share ideas to capitalize on the huge opportunity.”

 

Broadcast news and sports now routinely lay data, graphics, and animation onto the physical world. AR has become ubiquitous in ways that have nothing to do with smart glasses. “AR is on the verge.”

 

 

2017 Augmented Reality Year in Review — from wikitude.com

 

 

 

Microsoft Education unveils new Windows 10 devices starting at $189, Office 365 tools for personalized learning, and curricula to ignite a passion for STEM — from blogs.windows.com by Yusuf Mehdi

Excerpt:

In regards to mixed reality for immersive learning:

  • Pearson – the world’s largest education company – will begin rolling out in March curriculum that will work on both HoloLens and Windows Mixed Reality immersive VR headsets. These six new applications will deliver seamless experiences across devices and further illustrate the value of immersive educational experiences.
  • We are expanding our mixed reality curriculum offerings through a new partnership with WGBH’s Bringing the Universe to America’s Classrooms project, for distribution nationally on PBS LearningMedia™. This effort brings cutting-edge Earth and Space Science content into classrooms through digital learning resources that increase student engagement with science phenomena and practices.
  • To keep up with growing demand for HoloLens in the classroom we are committed to providing affordable solutions. Starting on January 22, we are making available a limited-time academic pricing offer for HoloLens. To take advantage of the limited-time academic pricing offer, please visit, hololens.com/edupromo.

 

 

 

From DSC:
In Part I, I looked at the new, exponential pace of change that colleges, community colleges, and universities now need to deal with, observing the enormous changes that are starting to occur throughout numerous societies around the globe. If we were to plot out the rate of change, we would see that we are no longer on a slow, steady, incremental type of linear pathway; instead, we are now on an exponential trajectory (as the below graphic from sparks & honey very nicely illustrates).

 

 

How should colleges and universities deal with this new, exponential pace of change?

1) I suggest that you ensure that someone in your institution is lifting their gaze and peering out into the horizons, to see what’s coming down the pike. That person – or more ideally, persons – should also be looking around them, noticing what’s going on within the current landscapes of higher education. Regardless of how your institution tackles this task, given that we are currently moving at an incredibly fast pace, this trend analysis is very important. The results from this analysis should immediately be integrated into your strategic plan. Don’t wait 3-5 years to integrate these new findings into your plan. The new, exponential pace of change is going to reward those organizations who are nimble and responsive.

2) I recommend that you look at what programs you are offering and consider if you should be developing additional programs such as those that deal with:

  • Artificial Intelligence (Natural Language Processing, deep learning, machine learning, bots)
  • New forms of Human Computer Interaction such as Augmented Reality, Virtual Reality, and Mixed Reality
  • User Experience Design, User Interface Design, and/or Interaction Design
  • Big data, data science, working with data
  • The Internet of Things, machine-to-machine communications, sensors, beacons, etc.
  • Blockchain-based technologies/systems
  • The digital transformation of business
  • Freelancing / owning your own business / entrepreneurship (see this article for the massive changes happening now!)
  • …and more

3) If you are not already doing so, I recommend that you immediately move to offer a robust lineup of online-based programs. Why do I say this? Because:

  • Without them, your institution may pay a heavy price due to its diminishing credibility. Your enrollments could decline if learners (and their families) don’t think they will get solid jobs coming out of your institution. If the public perceives you as a dinosaur/out of touch with what the workplace requires, your enrollment/admissions groups may find meeting their quotas will get a lot harder as the years go on. You need to be sending some cars down the online/digital/virtual learning tracks. (Don’t get me wrong. We still need the liberal arts. However, even those institutions who offer liberal arts lineups will still need to have a healthy offering of online-based programs.)
  • Online-based learning methods can expand the reach of your faculty members while offering chances for individuals throughout the globe to learn from you, and you from them
  • Online-based learning programs can increase your enrollments, create new revenue streams, and develop/reach new markets
  • Online-based learning programs have been shown to produce the same – and sometimes better – learning results than those achieved in face-to-face classrooms
  • The majority of pedagogically-related innovations are occurring within the online/digital/virtual realm, and you will want to have built the prior experience, expertise, and foundations in order to leverage and benefit from them
  • Faculty take their learning/experiences from offering online-based courses back into their face-to-face courses
  • Due to the increasing price of obtaining a degree, students often need to work to help get them (at least part of the way) through school; thus, flexibility is becoming increasingly important and necessary for students
  • An increasing number of individuals within the K-12 world as well as the corporate world are learning via online-based means. This is true within higher education as well: a recent report from Digital Learning Compass states that “the number of higher education students taking at least one distance education course in 2015 now tops six million, about 30% of all enrollments.”
  • Families are looking very closely at their return on investments being made within the world of higher education. They want to see that their learners are being prepared for the ever-changing future that they will encounter. If people in the workforce often learn online, then current students should be getting practice in that area of their learning ecosystems as well.
  • As the (mostly) online-based Amazon.com is thriving and retail institutions such as Sears continue to close, people are in the process of forming more generalized expectations that could easily cross over into the realm of higher education. By the way, here’s how our local Sears building is looking these days…or what’s left of it.

 

 

 

4) I recommend that you move towards offering more opportunities for lifelong learning, as learners need to constantly add to their skillsets and knowledge base in order to remain marketable in today’s workforce. This is where adults greatly appreciate – and need – the greater flexibility offered by online-based means of learning. I’m not just talking about graduate programs or continuing studies types of programs here. Rather, I’m hoping that we can move towards providing streams of up-to-date content that learners can subscribe to at any time (and can drop their subscription to at any time). As a relevant side note here, keep your eyes on blockchain-based technologies here.

5) Consider the role of consortia and pooling resources. How might that fit into your strategic plan?

6) Consider why bootcamps continue to come onto the landscape.  What are traditional institutions of higher education missing here?

7) And lastly, if one doesn’t already exist, form a small, nimble, innovative group within your organization — what I call a TrimTab Group — to help identify what will and won’t work for your institution.

 

 

 

 

 

Augmented Reality Technology: A student creates the closest thing yet to a magic ring — from forbes.com by Kevin Murnane

Excerpt:

Nat Martin set himself the problem of designing a control mechanism that can be used unobtrusively to meld AR displays with the user’s real-world environment. His solution was a controller in the shape of a ring that can be worn on the user’s finger. He calls it Scroll. It uses the ARKit software platform and contains an Arduino circuit board, a capacitive sensor, a gyroscope, an accelerometer, and a SoftPot potentiometer. Scroll works with any AR device that supports the Unity game engine, such as Google Cardboard or Microsoft’s HoloLens.

 

Also see:

Scroll from Nat on Vimeo.

 

 


Addendum on 8/15/17:

New iOS 11 ARKit Demo Shows Off Drawing With Fingers In Augmented Reality [Video] — from redmondpie.com by Oliver Haslam |

Excerpt:

When Apple releases iOS 11 to the public next month, it will also release ARKit for the first time. The framework, designed to make bringing augmented reality to iOS a reality, first debuted during the opening keynote of WWDC 2017 when Apple announced iOS 11, and ever since then we have been seeing new concepts and demos released by developers.

Those developers have given us a glimpse of what we can expect when apps taking advantage of ARKit start to ship alongside iOS 11, and the latest of those is a demonstration in which someone’s finger is used to draw on a notepad.

 


 

 

 

How SLAM technology is redrawing augmented reality’s battle lines — from venturebeat.com by Mojtaba Tabatabaie

 

 

Excerpt (emphasis DSC):

In early June, Apple introduced its first attempt to enter AR/VR space with ARKit. What makes ARKit stand out for Apple is a technology called SLAM (Simultaneous Localization And Mapping). Every tech giant — especially Apple, Google, and Facebook — is investing heavily in SLAM technology and whichever takes best advantage of SLAM tech will likely end up on top.

SLAM is a computer vision technique that captures visual data from the physical world in the form of points, building an understanding of that world for the machine. SLAM makes it possible for machines to “have an eye and understand” what’s around them through visual input. What the machine sees with SLAM technology from a simple scene looks like the photo above, for example.

Using these points, machines can build an understanding of their surroundings. This data also helps AR developers like myself create much more interactive and realistic experiences. This understanding can be used in different scenarios like robotics, self-driving cars, AI, and of course augmented reality.

The simplest form of understanding from this technology is recognizing walls, barriers, and floors. Right now, most AR SLAM technologies like ARKit only use floor recognition and position tracking to place AR objects around you, so they don’t actually know enough about your environment to react to it correctly. More advanced SLAM technologies, like Google Tango, can create a mesh of your environment, so not only can the machine tell you where the floor is, but it can also identify walls and objects, allowing everything around you to become an element to interact with.
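To give a flavor of what “recognizing the floor” from SLAM points can mean, here is a toy sketch: given feature points as (x, y, z) tuples with y as height, guess the floor as the densest horizontal band of points. The sample points are invented, and real SLAM pipelines fit planes with far more robust methods (e.g., RANSAC); this only illustrates the concept.

```python
from collections import Counter

# Toy sketch of floor recognition from SLAM feature points. Points are
# (x, y, z) tuples with y as height; the samples below are invented.
points = [
    (0.1, 0.02, 1.0), (0.5, 0.01, 2.0), (0.9, 0.03, 1.5),  # on the floor
    (0.2, 1.20, 1.1), (0.4, 1.22, 1.4),                    # on a table top
]

def floor_height(points, bin_size=0.1):
    """Histogram point heights and return the densest band's lower edge."""
    bins = Counter(int(y // bin_size) for (_, y, _) in points)
    densest_bin = bins.most_common(1)[0][0]
    return densest_bin * bin_size

print(floor_height(points))  # → 0.0 (the three near-zero heights win)
```

The same histogram idea extended to vertical bands of x or z values is one crude way to spot walls, which hints at why richer SLAM maps (meshes of the whole environment) are such a step up.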

 

 

The company with the most complete SLAM database will likely be the winner. This database will allow these giants to have an eye on the world metaphorically, so, for example Facebook can tag and know the location of your photo by just analyzing the image or Google can place ads and virtual billboards around you by analyzing the camera feed from your smart glasses. Your self-driving car can navigate itself with nothing more than visual data.

 

 

 


© 2018 | Daniel Christian