Scientists Are Turning Alexa into an Automated Lab Helper — from technologyreview.com by Jamie Condliffe
Amazon’s voice-activated assistant follows a rich tradition of researchers using consumer tech in unintended ways to further their work.

Excerpt:

Alexa, what’s the next step in my titration?

Probably not the first question you ask your smart assistant in the morning, but potentially the kind of query that scientists may soon be leveling at Amazon’s AI helper. Chemical & Engineering News reports that software developer James Rhodes—whose wife, DeLacy Rhodes, is a microbiologist—has created a skill for Alexa called Helix that lends a helping hand around the laboratory.

It makes sense. While most people might ask Alexa to check the news headlines, play music, or set a timer while their hands are messy from cooking, scientists could look up melting points, run simple calculations, or have an experimental procedure read aloud while their hands are gloved and otherwise occupied.

For now, Helix is still a proof-of-concept. But you can sign up to try an early working version, and Rhodes has plans to extend its abilities…
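C&EN doesn’t describe Helix’s internals, but an Alexa skill is, at its core, a service that receives a JSON request and returns a JSON speech response. Below is a minimal Python sketch of what a “read my procedure” handler could look like; the intent shape, slot names, and titration steps are all hypothetical illustrations, not Helix’s actual design.

```python
# Minimal sketch of an Alexa custom-skill handler, Helix-style.
# The intent/slot names and the procedure steps are hypothetical.

PROCEDURES = {
    "titration": [
        "Rinse and fill the burette with your titrant.",
        "Add two or three drops of indicator to the analyte.",
        "Titrate slowly until the endpoint color change persists.",
    ],
}

def build_response(text):
    """Wrap text in the JSON envelope Alexa expects back from a skill."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": False,
        },
    }

def handler(event, context=None):
    """AWS Lambda-style entry point: read the slots, speak the requested step."""
    slots = event["request"]["intent"]["slots"]
    name = slots["procedure"]["value"]
    step = int(slots["step"]["value"])
    steps = PROCEDURES.get(name)
    if steps is None or not 1 <= step <= len(steps):
        return build_response(f"Sorry, I don't have step {step} for {name}.")
    return build_response(f"Step {step} of your {name}: {steps[step - 1]}")
```

Deployed as an AWS Lambda function behind a custom interaction model, a handler of roughly this shape is the usual path for a skill like this.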

 

Also see:

Helix

Augmented Reality: Everything You Need to Know for 2018 — from isl.co by Josh Strupp

Excerpt:

Here’s the trade-off: what we gain in development ease-of-use (native SDKs, integration into existing workflows) and performance enhancements (load times, battery efficiency, render quality, integration with native apps), we lose in universality; naturally, each company wants you staying within its own ecosystem.

In a nutshell: new AR platforms from today’s tech giants are aimed at reducing technical headache so you can focus on creating amazing experiences… but they also want you creating more apps for their respective mobile ecosystems.


This AR App Teaches You How To Play The Piano — from vrscout.com by Steve Ip & Sydney Wuu
AR piano learning system with improvised jam sessions.

Excerpt:

Learning to play the piano is getting an immersive upgrade with a new augmented reality (AR) piano-training app called Music Everywhere. The HoloLens app aims to help students of all skill levels build fundamental music theory and performance skills. While traditional piano lessons can cost upwards of $100 per hour, Music Everywhere is free on the Microsoft Store and offers a cost-effective tutoring solution with immediate interactive feedback, setting it apart from simply watching a video tutorial.

Founded in 2017, Music Everywhere began at Carnegie Mellon’s ETC, where Seth Glickman, Fu Yen Hsiao, and Byunghwan Lee realized the nascent technology could be used for skills training. The app was the first AR music-learning platform to take first prize in Microsoft’s HoloLens Developer Contest, beating more than 1,000 submissions.


Making Virtual Reality a Reality in Today’s Classrooms — from thejournal.com by Meredith Thompson

Excerpt:

The market for virtual reality applications is growing at a rapid pace, and is expected to double in the next five years (Bolkan, 2017). As the cost of equipment falls and schools have greater access to technology, there is great interest in virtual reality as an educational tool. A small but growing group of educators have started to integrate virtual reality in their classrooms, with promising results (Castaneda, Cechony & Bautista, 2017). We reached out to teachers and administrators who are currently using virtual reality in their classrooms to hear their perspectives and practical strategies for infusing this resource into their classrooms.

Teachers have creative ideas for how to incorporate immersive education in current classrooms: how to select activities, how to set up the classroom, how to get support during the activity and how to transport devices. Teachers also shared their ideas for future applications of VR, including how to deepen the learning experience and to expand the reach of these technologies to a greater population of students.

Here we share vignettes of three different approaches: a social studies class in a suburban school district, a district-wide perspective from an urban school district, and a class designed entirely around understanding and implementing VR for other classrooms. We also share how we are using these ideas to inform our own project designing a collaborative immersive virtual reality educational game for introductory high school biology.


3 best practices from VR implementation across departments — from ecampusnews.com by Andrew Woodberry
Professors across many disciplines are embracing VR technology as an integral part of their learning tools

Excerpts:

VR is already being used for many real-world applications–hiring, training, marketing/sales, medical purposes, entertainment, and more–and is worth considering for many different university departments.

At German University in Cairo, architecture students used our platform to create tours of historical Cairo buildings, complete with educational hotspot overlays on particularly interesting features. This multimedia approach educated students without requiring them to travel to the buildings. It also made for a “stickier” learning experience for the students involved in creating it.

At Emporia State University, forensic-science students view virtual crime scenes recorded at the Kansas Bureau of Investigation in Topeka. Students can look for clues and learn facts via voiceover, in an impressive mimicry of an actual crime-scene inquiry.


Augmented and virtual reality products to get excited about in 2018 — from gearbrain.com by Alistair Charlton
CES 2018 showed us the way forward for AR and VR this year

Excerpt:

Just as televisions and driverless cars have become part of the furniture at the CES technology show, so too have virtual and augmented reality headsets.

Although the momentum behind VR’s growth slowed in 2017 – the industry seemingly unsure if it should progress with a technology destined to remain a niche – AR is being welcomed into the spotlight with open arms.

Here are six AR and VR highlights from CES 2018.


Looking to boost AR and VR technology, University of Washington establishes center in Seattle — from edscoop.com by Emily Tate
The UW Reality Lab will focus on “core research advances” in augmented and virtual reality.

Excerpt:

The University of Washington, hoping to get ahead in the burgeoning field of augmented reality (AR) and virtual reality (VR), has launched the UW Reality Lab, a center for research, education and innovation in AR and VR.

One of the first research centers in the world built for AR and VR projects, the UW Reality Lab is also located in Seattle — a hotspot for technology companies, from behemoths like Amazon and Microsoft to startups still trying to get off the ground.

 

“We’re seeing some really compelling and high-quality AR and VR experiences being built today,” Steve Seitz, center co-lead and Allen School professor, said in the university’s statement. “But, there are still many core research advances needed to move the industry forward — tools for easily creating content, infrastructure solutions for streaming 3D video, and privacy and security safeguards — that university researchers are uniquely positioned to tackle.”


Augmented Reality: Is it the Future of eLearning?

Excerpt:

Why Augmented Reality is Important for eLearning
According to a report released by CCS Insight, augmented and virtual reality hardware is set to become a $4 billion market by 2018. Let’s take a look at how augmented reality can be leveraged in the online learning space:

Simulated working environments
One of the most commonly cited advantages of online learning is the ability to create an environment in which users have the freedom to experiment. Because people usually learn from their mistakes, working in a consequence-free environment makes them more likely to remember the right way to do things.

Support Gamification
Since online learning management systems (LMSs) already use gamification widely, augmented reality can be applied directly. In an AR training module, employees are rewarded for performing their routine tasks correctly, which eventually improves performance.

Immersive Learning Environments
Using a tablet or smartphone for online training means users are constantly distracted by email, social-media notifications, and so on. This is one reason elearning content uses interactive multimedia elements to engage students. With augmented reality, elearning courses can be supplemented with 360° video, which engages users and shuts out distractions.

Motion tracking
Motion and gesture tracking are part of the AR experience. They are commonly leveraged for choosing menu items or engaging with video game-based environments.

In the online learning domain, LMSs can use this technology to track learners’ progress and ensure they are hitting their targets, boosting real-time training performance and improving interactivity through instant feedback.

Simply put, with augmented reality the possibilities are endless. With the growing number of Bring Your Own Device (BYOD) workplaces, it is expected that employees and learners will be delighted to use augmented reality.


Virtual Reality And Beyond: The Future Of Music Experiences — from hypebot.com by Jen Sako

Excerpt:

The Musical Future of VR
VR technology is still in its earliest stages, but musicians are already seeing how they will be able to connect with existing fans and make new ones without the expense of touring. In artificial environments, bands can invite music lovers into their world.

But beyond the obvious entertainment factor, VR has the potential to become a tool for education. Music students could enter a studio space using VR gear for lessons and practice. The immediate feedback provided and game-like atmosphere may keep students more motivated and engaged. Imagine methods for teaching that include ways to slow down and loop difficult parts or bringing in the composer for lessons.

VR can also connect music lovers to the many people behind the scenes involved in producing the music they enjoy. Listeners can learn about the industry and how a song comes to life. They’ll understand why it’s important to play a part in sustaining the music business.

For this technology to become a fixture in consumers’ listening and learning spaces, several obstacles need addressing. The hardware is still pricey and bulky and requires a power source, and app creators will need more in the way of artificial intelligence.


ARiA, The AR Conference At MIT, Is The Anti-CES — from forbes.com by Charlie Fink

Excerpt:

“The ability to combine digital information with the real world is going to disrupt every business model, transform human/machine interaction, and generate innovative use cases across every discipline and in every vertical, including education, healthcare, and manufacturing,” Werner continued. “I see ARiA as the TED for AR, where the best minds come together to solve real-world problems and share ideas to capitalize on the huge opportunity.”

 

Broadcast news and sports now routinely lay data, graphics, and animation onto the physical world. AR has become ubiquitous in ways that have nothing to do with smart glasses. “AR is on the verge.”


2017 Augmented Reality Year in Review — from wikitude.com


Microsoft Education unveils new Windows 10 devices starting at $189, Office 365 tools for personalized learning, and curricula to ignite a passion for STEM — from blogs.windows.com by Yusuf Mehdi

Excerpt:

In regards to mixed reality for immersive learning:

  • Pearson – the world’s largest education company – will begin rolling out curriculum in March that will work on both HoloLens and Windows Mixed Reality immersive VR headsets. These six new applications will deliver seamless experiences across devices and further illustrate the value of immersive educational experiences.
  • We are expanding our mixed reality curriculum offerings through a new partnership with WGBH’s Bringing the Universe to America’s Classrooms project, for distribution nationally on PBS LearningMedia™. This effort brings cutting-edge Earth and Space Science content into classrooms through digital learning resources that increase student engagement with science phenomena and practices.
  • To keep up with growing demand for HoloLens in the classroom, we are committed to providing affordable solutions. Starting on January 22, we are making available a limited-time academic pricing offer for HoloLens. To take advantage of it, please visit hololens.com/edupromo.


From DSC:
In Part I, I looked at the new, exponential pace of change that colleges, community colleges and universities now need to deal with – observing the enormous changes that are starting to occur throughout numerous societies around the globe. If we were to plot out the rate of change, we would see that we are no longer on a slow, steady, incremental type of linear pathway; but, instead, we would observe that we are now on an exponential trajectory (as the below graphic from sparks & honey very nicely illustrates).


How should colleges and universities deal with this new, exponential pace of change?

1) I suggest that you ensure that someone in your institution is lifting their gaze and peering out at the horizons, to see what’s coming down the pike. That person – or, more ideally, persons – should also be looking around them, noticing what’s going on within the current landscape of higher education. Regardless of how your institution tackles this task, given that we are currently moving at an incredibly fast pace, this trend analysis is very important. The results of this analysis should be integrated into your strategic plan immediately; don’t wait 3-5 years to fold in new findings. The new, exponential pace of change is going to reward those organizations that are nimble and responsive.

2) I recommend that you look at what programs you are offering and consider if you should be developing additional programs such as those that deal with:

  • Artificial Intelligence (Natural Language Processing, deep learning, machine learning, bots)
  • New forms of Human Computer Interaction such as Augmented Reality, Virtual Reality, and Mixed Reality
  • User Experience Design, User Interface Design, and/or Interaction Design
  • Big data, data science, working with data
  • The Internet of Things, machine-to-machine communications, sensors, beacons, etc.
  • Blockchain-based technologies/systems
  • The digital transformation of business
  • Freelancing / owning your own business / entrepreneurship (see this article for the massive changes happening now!)
  • …and more

3) If you are not already doing so, I recommend that you immediately move to offer a robust lineup of online-based programs. Why do I say this? Because:

  • Without them, your institution may pay a heavy price due to its diminishing credibility. Your enrollments could decline if learners (and their families) don’t think they will get solid jobs coming out of your institution. If the public perceives you as a dinosaur, out of touch with what the workplace requires, your enrollment/admissions groups may find it much harder to meet their quotas as the years go on. You need to be sending some cars down the online/digital/virtual learning tracks. (Don’t get me wrong: we still need the liberal arts. However, even institutions that offer liberal arts lineups will still need a healthy offering of online-based programs.)
  • Online-based learning methods can expand the reach of your faculty members while offering chances for individuals throughout the globe to learn from you, and you from them
  • Online-based learning programs can increase your enrollments, create new revenue streams, and develop/reach new markets
  • Online-based learning programs have been shown to produce the same learning gains as face-to-face classrooms – and sometimes better results
  • The majority of pedagogically-related innovations are occurring within the online/digital/virtual realm, and you will want to have built the prior experience, expertise, and foundations in order to leverage and benefit from them
  • Faculty take their learning/experiences from offering online-based courses back into their face-to-face courses
  • Due to the increasing price of obtaining a degree, students often need to work to help get them (at least part of the way) through school; thus, flexibility is becoming increasingly important and necessary for students
  • An increasing number of individuals within the K-12 world as well as the corporate world are learning via online-based means. This is true within higher education as well: a recent report from Digital Learning Compass states that “the number of higher education students taking at least one distance education course in 2015 now tops six million, about 30% of all enrollments.”
  • Families are looking very closely at their return on investments being made within the world of higher education. They want to see that their learners are being prepared for the ever-changing future that they will encounter. If people in the workforce often learn online, then current students should be getting practice in that area of their learning ecosystems as well.
  • As the (mostly) online-based Amazon.com is thriving and retail institutions such as Sears continue to close, people are in the process of forming more generalized expectations that could easily cross over into the realm of higher education. By the way, here’s how our local Sears building is looking these days…or what’s left of it.


4) I recommend that you move towards offering more opportunities for lifelong learning, as learners need to constantly add to their skillsets and knowledge base in order to remain marketable in today’s workforce. This is where adults greatly appreciate – and need – the greater flexibility offered by online-based means of learning. I’m not just talking about graduate programs or continuing-studies programs here. Rather, I’m hoping we can move towards providing streams of up-to-date content that learners can subscribe to, and unsubscribe from, at any time. As a relevant side note, keep your eyes on blockchain-based technologies in this space.

5) Consider the role of consortia and pooling resources. How might that fit into your strategic plan?

6) Consider why bootcamps continue to come onto the landscape.  What are traditional institutions of higher education missing here?

7) And lastly, if one doesn’t already exist, form a small, nimble, innovative group within your organization — what I call a TrimTab Group — to help identify what will and won’t work for your institution.


Augmented Reality Technology: A student creates the closest thing yet to a magic ring — from forbes.com by Kevin Murnane

Excerpt:

Nat Martin set himself the problem of designing a control mechanism that can be used unobtrusively to meld AR displays with the user’s real-world environment. His solution was a controller in the shape of a ring, worn on the user’s finger, which he calls Scroll. It uses the ARKit software platform and contains an Arduino circuit board, a capacitive sensor, a gyroscope, an accelerometer, and a SoftPot potentiometer. Scroll works with any AR device that supports the Unity game engine, such as Google Cardboard or Microsoft’s HoloLens.

 

Also see:

Scroll from Nat on Vimeo.


Addendum on 8/15/17:

New iOS 11 ARKit Demo Shows Off Drawing With Fingers In Augmented Reality [Video] — from redmondpie.com by Oliver Haslam

Excerpt:

When Apple releases iOS 11 to the public next month, it will also release ARKit for the first time. The framework, designed to bring augmented reality to iOS, debuted during the opening keynote of WWDC 2017 when Apple announced iOS 11, and ever since then we have been seeing new concepts and demos released by developers.

Those developers have given us a glimpse of what we can expect when apps taking advantage of ARKit start to ship alongside iOS 11, and the latest of those is a demonstration in which someone’s finger is used to draw on a notepad.


How SLAM technology is redrawing augmented reality’s battle lines — from venturebeat.com by Mojtaba Tabatabaie


Excerpt (emphasis DSC):

In early June, Apple introduced its first attempt to enter the AR/VR space with ARKit. What makes ARKit stand out for Apple is a technology called SLAM (Simultaneous Localization and Mapping). Every tech giant — especially Apple, Google, and Facebook — is investing heavily in SLAM technology, and whichever takes best advantage of SLAM tech will likely end up on top.

SLAM is a computer-vision technique that captures visual data from the physical world as a set of points the machine can reason about. SLAM makes it possible for machines to “have an eye and understand” what’s around them through visual input. What a machine sees of a simple scene with SLAM looks like the photo above, for example.

Using these points, machines can build an understanding of their surroundings. This data also helps AR developers like myself create much more interactive and realistic experiences, and the same understanding applies to other scenarios such as robotics, self-driving cars, and AI, as well as augmented reality.

The simplest form of understanding this technology provides is recognizing walls, barriers, and floors. Right now most AR SLAM implementations, like ARKit, use only floor recognition and position tracking to place AR objects around you, so they don’t actually know enough about your environment to react to it correctly. More advanced SLAM systems, like Google Tango, can create a mesh of your environment, so the machine can not only tell you where the floor is but also identify walls and objects, allowing everything around you to become an element to interact with.
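To make the floor-recognition idea concrete: a SLAM system hands the app a sparse cloud of 3D feature points, and the app looks for a horizontal band where many points agree. The toy Python sketch below is my own simplification, not ARKit’s or Tango’s actual algorithm; it estimates the floor height by finding the densest height band in a synthetic point cloud.

```python
import random
from collections import Counter

# Toy illustration of floor recognition from SLAM feature points (my own
# simplification): histogram the points' heights, treat the densest band
# as the floor, then refine by averaging the points near that band.

def estimate_floor_height(points, bin_size=0.05):
    """points: iterable of (x, y, z) in meters; return estimated floor height."""
    bins = Counter(round(z / bin_size) for _, _, z in points)
    floor_bin = bins.most_common(1)[0][0]          # densest height band
    near = [z for _, _, z in points
            if abs(z - floor_bin * bin_size) <= bin_size]
    return sum(near) / len(near)                   # refine by averaging

# Synthetic scene: 300 floor points near z = 0, plus 100 points on furniture.
random.seed(1)
scene = [(random.uniform(-2, 2), random.uniform(-2, 2), random.gauss(0, 0.01))
         for _ in range(300)]
scene += [(random.uniform(-1, 1), random.uniform(-1, 1),
           random.uniform(0.4, 1.2)) for _ in range(100)]

floor_z = estimate_floor_height(scene)             # close to 0.0
```

Real systems fit full planes (and meshes) from camera and motion data, but the core intuition — dense clusters of points reveal surfaces — is the same.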


The company with the most complete SLAM database will likely be the winner. Such a database gives these giants a metaphorical eye on the world: Facebook could tag and locate your photo just by analyzing the image, Google could place ads and virtual billboards around you by analyzing the camera feed from your smart glasses, and your self-driving car could navigate itself with nothing more than visual data.


(Below emphasis via DSC)

IBM and Ricoh have partnered on a cognitive-enabled interactive whiteboard that uses IBM’s Watson intelligence and voice technologies to support voice commands, take notes and action items, and even translate into other languages.


The Intelligent Workplace Solution leverages IBM Watson and Ricoh’s interactive whiteboards to let users access features by voice. It ensures that Watson doesn’t just listen but acts as an active meeting participant, using real-time analytics to help guide discussions.

Features of the new cognitive-enabled whiteboard solution include:

  • Global voice control of meetings: Once a meeting begins, any employee, whether in-person or located remotely in another country, can easily control what’s on the screen, including advancing slides, all through simple voice commands using Watson’s Natural Language API.
  • Translation of the meeting into another language: The Intelligent Workplace Solution can translate speakers’ words into several other languages and display them on screen or in transcript.
  • Easy-to-join meetings: With the swipe of a badge the Intelligent Workplace Solution can log attendance and track key agenda items to ensure all key topics are discussed.
  • Ability to capture side discussions: During a meeting, team members can also hold side conversations that are displayed on the same whiteboard.
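To make the voice-control feature concrete: the core loop is “transcribed utterance in, intent out.” The toy matcher below is my own keyword-based illustration only — Watson’s Natural Language API does genuine NLP rather than substring matching, and the intent names are invented.

```python
# Toy intent matcher for meeting voice commands (illustration only --
# Watson's Natural Language API does real NLP, not substring matching).

INTENTS = {
    "next_slide": ("next slide", "advance", "go forward"),
    "prev_slide": ("previous slide", "go back"),
    "take_note": ("note that", "write down"),
}

def classify(utterance):
    """Return the first intent whose trigger phrase appears in the utterance."""
    text = utterance.lower()
    for intent, phrases in INTENTS.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return "unknown"
```

Even this crude version shows why voice control maps cleanly onto meetings: the action space (advance, go back, take a note) is small and unambiguous.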

From DSC:

Holy smokes!

If you combine the technologies that Ricoh and IBM are using in their new cognitive-enabled interactive whiteboard with what Bluescape is doing — providing 160 acres of digital workspace to foster collaboration, whether you are working remotely or with others in the same physical space — you have one incredibly powerful platform!

#NLP  |  #AI  |  #VoiceRecognition  |  #CognitiveComputing
#SmartClassrooms  |  #LearningSpaces  |  #Collaboration  |  #Meetings


From DSC:
Can you imagine this as a virtual reality or a mixed reality-based app!?! Very cool.

This resource is incredible on multiple levels:

  • For their interface/interaction design
  • For their insights and ideas
  • For their creativity
  • For their graphics
  • …and more!


Microsoft Accelerates HoloLens V3 Development, Sidesteps V2 — from thurrott.com by Brad Sams


Excerpt:

Back when the first version of HoloLens came out, Microsoft created a roadmap that highlighted several release points for the product. This isn’t unusual: you start with the first device; second-generation devices are typically smaller and more affordable; and with version three you introduce new technology that upgrades the experience. This is a standard path in the technology sector. Microsoft, based on my sources, is sidelining what was going to be version two of HoloLens and going straight to version three.

While some may see it as bad news that a cheaper version of HoloLens will not arrive this year, or likely next year, accelerating the technology that brings an expanded field of view in a smaller footprint means a device usable in everyday life will arrive sooner.

Microsoft is playing for the long-term with this technology to make sure they are well positioned for the next revolution in computing. By adjusting their path today for HoloLens, they are making sure that they remain the segment leader for years to come.


From DSC:
When I saw the article below, I couldn’t help but wonder…what are the teaching & learning-related ramifications when new “skills” are constantly being added to devices like Amazon’s Alexa?

What does it mean for:

  • Students / learners
  • Faculty members
  • Teachers
  • Trainers
  • Instructional Designers
  • Interaction Designers
  • User Experience Designers
  • Curriculum Developers
  • …and others?

Will the capabilities found in Alexa simply come bundled as a part of the “connected/smart TV’s” of the future? Hmm….


NASA unveils a skill for Amazon’s Alexa that lets you ask questions about Mars — from geekwire.com by Kevin Lisota

Excerpt:

Amazon’s Alexa has gained many skills over the past year, such as being able to read tweets or deliver election results and fantasy football scores. Starting on Wednesday, you’ll be able to ask Alexa about Mars.

The new skill for the voice-controlled speaker comes courtesy of NASA’s Jet Propulsion Laboratory. It’s the first Alexa app from the space agency.

Tom Soderstrom, the chief technology officer at NASA’s Jet Propulsion Laboratory, was on hand at the AWS re:Invent conference in Las Vegas tonight to make the announcement.


[Image: NASA’s Alexa Mars skill announcement, 11/29/16]



Also see:


What Is Alexa? What Is the Amazon Echo, and Should You Get One? — from thewirecutter.com by Grant Clauser


Amazon launches new artificial intelligence services for developers: Image recognition, text-to-speech, Alexa NLP — from geekwire.com by Taylor Soper

Excerpt (emphasis DSC):

Amazon today announced three new artificial intelligence-related toolkits for developers building apps on Amazon Web Services.

At the company’s AWS re:invent conference in Las Vegas, Amazon showed how developers can use three new services — Amazon Lex, Amazon Polly, Amazon Rekognition — to build artificial intelligence features into apps for platforms like Slack, Facebook Messenger, ZenDesk, and others.

The idea is to let developers utilize the machine learning algorithms and technology that Amazon has already created for its own processes and services like Alexa. Instead of developing their own AI software, AWS customers can simply use an API call or the AWS Management Console to incorporate AI features into their own apps.
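As a concrete example of that “simple API call,” here is roughly what using Amazon Polly from Python with boto3 (the AWS SDK) looks like. The `polly_request` helper is my own convenience wrapper, not part of the SDK, and actually running the commented-out call requires AWS credentials.

```python
# Sketch of the Polly "API call" path with boto3 (the AWS SDK for Python).
# polly_request is my own helper; the commented call below needs credentials.

def polly_request(text, voice="Joanna", fmt="mp3"):
    """Assemble the keyword arguments for Polly's synthesize_speech call."""
    if not text:
        raise ValueError("text must be non-empty")
    return {"Text": text, "VoiceId": voice, "OutputFormat": fmt}

# With AWS credentials configured, speech synthesis is a single call:
#   import boto3
#   polly = boto3.client("polly")
#   audio = polly.synthesize_speech(**polly_request("Hello, class!"))
#   with open("hello.mp3", "wb") as f:
#       f.write(audio["AudioStream"].read())
```

That single-call shape is exactly the pitch: no model training, no infrastructure, just a managed endpoint.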


Amazon announces three new AI services, including a text-to-voice service, Amazon Polly — by D.B. Hebbard


AWS Announces Three New Amazon AI Services
Amazon Lex, the technology that powers Amazon Alexa, enables any developer to build rich, conversational user experiences for web, mobile, and connected device apps; preview starts today

Amazon Polly transforms text into lifelike speech, enabling apps to talk with 47 lifelike voices in 24 languages

Amazon Rekognition makes it easy to add image analysis to applications, using powerful deep learning-based image and face recognition

Capital One, Motorola Solutions, SmugMug, American Heart Association, NASA, HubSpot, Redfin, Ohio Health, DuoLingo, Royal National Institute of Blind People, LingApps, GoAnimate, and Coursera are among the many customers using these Amazon AI Services

Excerpt:

SEATTLE–(BUSINESS WIRE)–Nov. 30, 2016– Today at AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), announced three Artificial Intelligence (AI) services that make it easy for any developer to build apps that can understand natural language, turn text into lifelike speech, have conversations using voice or text, analyze images, and recognize faces, objects, and scenes. Amazon Lex, Amazon Polly, and Amazon Rekognition are based on the same proven, highly scalable Amazon technology built by the thousands of deep learning and machine learning experts across the company. Amazon AI services all provide high-quality, high-accuracy AI capabilities that are scalable and cost-effective. Amazon AI services are fully managed services so there are no deep learning algorithms to build, no machine learning models to train, and no up-front commitments or infrastructure investments required. This frees developers to focus on defining and building an entirely new generation of apps that can see, hear, speak, understand, and interact with the world around them.

To learn more about Amazon Lex, Amazon Polly, or Amazon Rekognition, visit:
https://aws.amazon.com/amazon-ai


Top 200 Tools for Learning 2016: Overview — from c4lpt.co.uk by Jane Hart

Also see Jane’s:

  1. TOP 100 TOOLS FOR PERSONAL & PROFESSIONAL LEARNING (for formal/informal learning and personal productivity)
  2. TOP 100 TOOLS FOR WORKPLACE LEARNING (for training, e-learning, performance support and social collaboration)
  3. TOP 100 TOOLS FOR EDUCATION (for use in primary and secondary (K-12) schools, colleges, universities and adult education)

 

[Image: Jane Hart’s Top 200 Tools for Learning 2016 infographic]

 

Also see Jane’s “Best of Breed 2016” where she breaks things down into:

  1. Instructional tools
  2. Content development tools
  3. Social tools
  4. Personal tools


If you doubt that we are on an exponential pace of change, you need to check these articles out! [Christian]

[Image: exponential pace of change article collage, Daniel Christian, Sep 2016]

 

From DSC:
The articles listed in this PDF document demonstrate the exponential pace of technological change that many nations across the globe are currently experiencing and will likely be experiencing for the foreseeable future. As we are no longer on a linear trajectory, we need to consider what this new trajectory means for how we:

  • Educate and prepare our youth in K-12
  • Educate and prepare our young men and women studying within higher education
  • Restructure/re-envision our corporate training/L&D departments
  • Equip our freelancers and others to find work
  • Help people in the workforce remain relevant/marketable/properly skilled
  • Encourage and better enable lifelong learning
  • Attempt to keep up w/ this pace of change — legally, ethically, morally, and psychologically

 

PDF file here

 

One thought that comes to mind: when we're moving this fast, we need to be looking upward and outward toward the horizons, constantly pulse-checking the landscape. We can't be looking down, or be so buried in our current positions and tasks, that we fail to notice the changes happening around us.

From DSC:
Interactive video is a potentially very powerful medium, especially for blended and online-based courses and training materials! This interactive piece from Heineken is very well done; it even remembers how you answered and delivers an evaluation of you based on its 12-question “interview.”

But notice again that a TEAM of specialists is needed to create such a piece. Neither a faculty member, a trainer, nor an instructional designer could do something like this alone. Some of the positions I could imagine here are:

  • Script writer(s)
  • Editor(s)
  • Actors and actresses
  • Those skilled in stage lighting and sound / audio recording
  • Digital video editors
  • Programmers
  • Graphic designers
  • Web designers
  • Producers
  • Product marketers
  • …and perhaps others

This is the kind of work that I wish we saw more of in the world of online and blended courses!  Also, I appreciated their use of humor. Overall, a very engaging, fun, and informative piece!
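Under the hood, the mechanic described above (remembering each answer, then producing an evaluation at the end) is a simple state-accumulation pattern. Here is a minimal, hypothetical sketch in Python; the questions, traits, and scoring are invented for illustration and are not Heineken's actual logic:

```python
# A toy sketch of the pattern behind an interactive "interview" video:
# every answer the viewer clicks is remembered, and the closing scene
# computes an evaluation from the stored answers. (The questions and
# scoring here are hypothetical, not Heineken's actual implementation.)

QUESTIONS = {
    "q1": "A teammate misses a deadline. Do you (a) cover for them or (b) raise it openly?",
    "q2": "Do you prefer to work (a) alone or (b) in a group?",
    "q3": "Facing a hard call, do you (a) decide fast or (b) ask the team?",
}

# Map each (question, choice) pair to the trait it scores.
TRAIT_OF = {
    ("q1", "a"): "loyal",        ("q1", "b"): "direct",
    ("q2", "a"): "independent",  ("q2", "b"): "collaborative",
    ("q3", "a"): "direct",       ("q3", "b"): "collaborative",
}

def evaluate(answers):
    """Tally traits from the remembered answers and return the dominant one."""
    tally = {}
    for question, choice in answers.items():
        trait = TRAIT_OF[(question, choice)]
        tally[trait] = tally.get(trait, 0) + 1
    return max(tally, key=tally.get)

# The branching video stores each click as the viewer moves through it...
answers = {"q1": "b", "q2": "b", "q3": "b"}
# ...and the final scene reports the computed evaluation.
print(evaluate(answers))  # -> collaborative
```

The same pattern scales to a 12-question interview: the only state that must persist across scenes is the answer log, and the "evaluation" is just a function of that log.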

 

[Images: screenshots from the Heineken interactive video, Sep 2016]

LinkedIn ProFinder expands nationwide to help you hire freelancers — from blog.linkedin.com

Excerpt:

The freelance economy is on the rise. In fact, the number of freelancers on LinkedIn has grown by nearly 50% in just the past five years. As the workforce evolves, we, too, are evolving to ensure we’re creating opportunity for the expanding sector of professionals looking for independent, project-based work in place of the typical 9 to 5 profession.

Last October, we began piloting a brand new platform in support of this very endeavor and today, we’re excited to announce its nationwide availability. Introducing LinkedIn ProFinder, a LinkedIn marketplace that connects consumers and small businesses looking for professional services – think Design, Writing and Editing, Accounting, Real Estate, Career Coaching – with top quality freelance professionals best suited for the job.

Also see:

[Image: LinkedIn ProFinder (Aug 2016)]

Also see:

[Image: 40% freelancers by 2020 (Quartz, April 2013)]

[Image: From Dreams to Realities: AR/VR/MR in Education (Campus Technology, 8/16/16)]

 

From Dreams to Realities: AR/VR/MR in Education | A Q&A with Daniel Christian — from campustechnology.com by Mary Grush; I’d like to thank Jason VanHorn for his contributions to this article

Excerpt:

Grush: Is there a signpost you might point to that would indicate that there’s going to be more product development in AR/VR/MR?

Christian: There’s a significant one. Several major players — with very deep pockets — within the corporate world are investing in new forms of HCI, including Microsoft, Google, Apple, Facebook, Magic Leap, and others. In fact, according to an article on engadget.com from 6/16/16, “Magic Leap has amassed an astounding $1.39 billion in funding without shipping an actual product.” So to me, it’s just not likely that the billions of dollars being invested in a variety of R&D-related efforts are simply going to evaporate without producing any impactful, concrete products or services. There are too many extremely smart, creative people working on these projects, and they have impressive financial backing behind their research and product development efforts. So, I think we can expect an array of new choices in AR/VR/MR.

Just the other day I was talking to Jason VanHorn, an associate professor in our geology, geography, and environmental studies department. After finishing our discussion about a particular learning space and how we might implement active learning in it, we got to talking about mixed reality. He related his wonderful dreams of being able to view, manipulate, maneuver through, and interact with holographic displays of our planet Earth.

When I mentioned a video piece done by Case Western and the Cleveland Clinic that featured Microsoft’s HoloLens technology, he knew exactly what I was referring to. But this time, instead of being able to drill down through the human body to review, explore, and learn about the various systems composing our human anatomy, he wanted to be able to drill down through the various layers of the planet Earth. He also wanted to be able to use gestures to maneuver and manipulate the globe — turning the globe to just the right spot before using a gesture to drill down to a particular place.

© 2024 | Daniel Christian