From DSC: Will Amazon get into delivering education/degrees? Is it working on a next generation learning platform that could highly disrupt the world of higher education? Hmmm…time will tell.
But Amazon has a way of getting into entirely new industries. From its roots as an online bookseller, it has branched off into numerous other arenas. It has the infrastructure, talent, and deep pockets to bring about the next generation learning platform that I’ve been tracking for years. It is one of only a handful of companies that could pull off this type of endeavor.
And now, we see articles like these:
Amazon Snags a Higher Ed Superstar — from insidehighered.com by Doug Lederman Candace Thille, a pioneer in the science of learning, takes a leave from Stanford to help the ambitious retailer better train its workers, with implications that could extend far beyond the company.
Excerpt:
A major force in the higher education technology and learning space has quietly begun working with a major corporate force in — well, in almost everything else.
Candace Thille, a pioneer in learning science and open educational delivery, has taken a leave of absence from Stanford University for a position at Amazon, the massive (and getting bigger by the day) retailer.
Thille’s title, as confirmed by an Amazon spokeswoman: director of learning science and engineering. In that capacity, the spokeswoman said, Thille will work “with our Global Learning Development Team to scale and innovate workplace learning at Amazon.”
No further details were forthcoming, and Thille herself said she was “taking time away” from Stanford to work on a project she was “not really at liberty to discuss.”
Jeff Bezos’ Amazon empire—which recently dabbled in home security, opened artificial intelligence-powered grocery stores, and started planning a second headquarters (and manufactured a vicious national competition out of it)—has not been idle in 2018.
The e-commerce/retail/food/books/cloud-computing/etc company made another move this week that, while nowhere near as flashy as the above efforts, tells of curious things to come. Amazon has hired Candace Thille, a leader in learning science, cognitive science, and open education at Stanford University, to be “director of learning science and engineering.” A spokesperson told Inside Higher Ed that Thille will work “with our Global Learning Development Team to scale and innovate workplace learning at Amazon”; Thille herself said she is “not really at liberty to discuss” her new project.
What could Amazon want with a higher education expert? The company already has footholds in the learning market, running several educational resource platforms. But Thille is famous specifically for her data-driven work, conducted at Stanford and Carnegie Mellon University, on nontraditional ways of learning, teaching, and training—all of which are perfect, perhaps even necessary, for the education of employees.
From DSC: It could just be that Amazon is simply building its own corporate university and will stay focused on developing its own employees and its own corporate learning platform/offerings — and/or perhaps license its new platform to other corporations.
But from my perspective, Amazon continues to work on pieces of a powerful puzzle, one that could eventually involve providing learning experiences to lifelong learners:
Personal assistants
Voice recognition / Natural Language Processing (NLP)
The development of “skills” at an incredible pace
Personalized recommendation engines
Cloud computing and more
If Alexa were to get integrated into an AI-based platform for personalized learning — one that features up-to-date recommendation engines that can identify and personalize/point out the relevant critical needs in the workplace for learners — better look out higher ed! Better look out if such a platform could interactively deliver (and assess) the bulk of the content, essentially doing the heavy initial lifting when someone is learning about a particular topic.
Amazon will be able to deliver a cloud-based platform, with cloud-based learner profiles and blockchain-based technologies, at a greatly reduced cost. Think about it. No physical footprints to build and maintain, no lawns to mow, no heating bills to pay, no coaches making $X million a year, etc. AI-driven recommendations for digital playlists. Links to the most in-demand jobs — accompanied by job descriptions, required skills & qualifications, and courses/modules to take in order to master those jobs.
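To make that recommendation piece concrete, here is a minimal sketch of the kind of matching logic such a platform might run under the hood. Everything in it (the job data, skill names, and the recommend function) is hypothetical and for illustration only; it is not anything Amazon has announced.

```python
# Hypothetical sketch: match a learner's skills to in-demand jobs and
# recommend the modules that close the remaining skill gaps.

from dataclasses import dataclass, field

@dataclass
class Job:
    title: str
    required_skills: set
    modules: dict  # skill -> course/module that teaches it

@dataclass
class LearnerProfile:
    name: str
    skills: set = field(default_factory=set)

def recommend(learner, jobs, top_n=3):
    """Rank jobs by how closely the learner already matches them,
    and list the modules needed to cover the missing skills."""
    ranked = []
    for job in jobs:
        overlap = learner.skills & job.required_skills
        gap = job.required_skills - learner.skills
        score = len(overlap) / len(job.required_skills)
        playlist = [job.modules[s] for s in gap if s in job.modules]
        ranked.append((score, job.title, sorted(gap), playlist))
    ranked.sort(reverse=True)
    return ranked[:top_n]

jobs = [
    Job("Data Analyst", {"sql", "statistics", "visualization"},
        {"sql": "SQL Basics", "statistics": "Intro Stats",
         "visualization": "Dashboards 101"}),
    Job("Cloud Engineer", {"linux", "networking", "aws"},
        {"linux": "Linux Fundamentals", "networking": "Networking 101",
         "aws": "Cloud Practitioner Prep"}),
]

learner = LearnerProfile("Pat", {"sql", "visualization"})
for score, title, gap, playlist in recommend(learner, jobs):
    print(f"{title}: {score:.0%} match; still needs {gap} -> {playlist}")
```

A production system would obviously be far richer than this toy matching step, drawing on learner profiles, assessments, and labor-market data.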
Such a solution would still need professors, instructional designers, multimedia specialists, copyright experts, etc., but they’ll be able to deliver up-to-date content at greatly reduced costs. That’s my bet. And that’s why I now call this potential development The New Amazon.com of Higher Education.
[Microsoft — with its purchase of LinkedIn (which had previously purchased Lynda.com) — is another such potential contender.]
8 ways augmented and virtual reality are changing medicine — from israel21c.org by Abigail Klein Leichman Israeli companies are using futuristic technologies to simplify complex surgery, manage rehab, relieve pain, soothe autistic kids and much more.
1 in 5 workers will have AI as their co-worker in 2022
More job roles will change than will become totally automated, so HR needs to prepare today
…
As we increase our personal usage of chatbots (defined as software that provides an automated, yet personalized, conversation between itself and human users), employees will soon interact with them in the workplace as well. Forward-looking HR leaders are piloting chatbots now to transform HR, and, in the process, re-imagine, re-invent, and re-tool the employee experience.
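To ground that definition, here is a minimal, rule-based sketch of an HR chatbot. The intents, wording, and employee record are invented for illustration; real deployments would typically sit on an NLP service rather than simple keyword matching.

```python
# Minimal rule-based HR chatbot sketch: keyword "intents" mapped to
# canned answers, lightly personalized with the employee's details.

INTENTS = {
    "vacation": "you have {days} vacation days remaining this year.",
    "payroll": "payday is the last business day of each month.",
    "benefits": "open enrollment for benefits runs November 1-15.",
}

def reply(message, employee):
    """Return a canned, personalized response for the first matching intent."""
    text = message.lower()
    for keyword, template in INTENTS.items():
        if keyword in text:
            answer = template.format(**employee)
            return f"Hi {employee['name']}, {answer}"
    return f"Sorry {employee['name']}, I'll route that question to an HR specialist."

employee = {"name": "Jordan", "days": 12}
print(reply("How many vacation days do I have left?", employee))
print(reply("When is payroll processed?", employee))
```

Even a toy like this shows the "automated yet personalized" pattern: canned answers, lightly tailored to the individual employee.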
How does all of this impact HR in your organization? The following ten HR trends will matter most as AI enters the workplace…
…
The most visible aspect of how HR is being impacted by artificial intelligence is the change in the way companies source and recruit new hires. Most notably, IBM has created a suite of tools that use machine learning to help candidates personalize their job search experience based on the engagement they have with Watson. In addition, Watson is helping recruiters prioritize jobs more efficiently, find talent faster, and match candidates more effectively. According to Amber Grewal, Vice President, Global Talent Acquisition, “Recruiters are focusing more on identifying the most critical jobs in the business and on utilizing data to assist in talent sourcing.”
…as we enter 2018, the next journey for HR leaders will be to leverage artificial intelligence combined with human intelligence and create a more personalized employee experience.
From DSC: Although I like the possibility of using machine learning to help employees navigate their careers, I have some very real concerns when we talk about using AI for talent acquisition. At this point in time, I would much rather have an experienced human being — one with a solid background in HR — reviewing my resume to see if they believe that there’s a fit for the job and/or determine whether my skills transfer over from a different position/arena or not. I don’t think we’re there yet in terms of developing effective/comprehensive enough algorithms. It may happen, but I’m very skeptical in the meantime. I don’t want to be filtered out just because I didn’t use the right keywords enough times or I used a slightly different keyword than what the algorithm was looking for.
Also, there is definitely age discrimination occurring out in today’s workplace, especially in tech-related positions. Folks who are in tech over the age of 30-35 — don’t lose your job! (Go check out the topic of age discrimination on LinkedIn and similar sites, and you’ll find many postings on this topic — sometimes with tens of thousands of older employees adding comments/likes to a posting.) Although I doubt that any company would allow applicants or the public to see their internally-used algorithms, how difficult would it be to filter out applicants who graduated college prior to ___ (i.e., some year that gets updated on an annual basis)? Answer? Not difficult at all. In fact, that’s at the level of a Programming 101 course.
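To illustrate just how little code such a screen would take (a hypothetical example of the concern, not any vendor's actual hiring logic), consider the following:

```python
# Hypothetical illustration of the concern above: silently screening out
# applicants by graduation year takes only a few lines of code.

from datetime import date

CUTOFF_YEARS_AGO = 20  # e.g., drop anyone who graduated more than 20 years ago

def passes_age_proxy_filter(applicant, today=None):
    """Return False for applicants whose graduation year is 'too old',
    a proxy for age that the applicant never gets to see."""
    today = today or date.today()
    cutoff_year = today.year - CUTOFF_YEARS_AGO
    return applicant["grad_year"] >= cutoff_year

applicants = [
    {"name": "A", "grad_year": 1995},
    {"name": "B", "grad_year": 2012},
]

shortlist = [a for a in applicants if passes_age_proxy_filter(a)]
print([a["name"] for a in shortlist])  # applicant "A" never reaches a recruiter
```

Nothing in this sketch ever mentions age, yet it filters on a near-perfect proxy for it, which is exactly why the practice is so hard to detect from the outside.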
From DSC: “Person of Interest” comes to mind after reading this article. Person of Interest is a clever, well-done show, but still…the idea of combining surveillance w/ a super intelligent #AI is a bit unnerving.
Suncorp has revealed it is exploring image recognition and augmented reality-based enhancements for its insurance claims process, adding to the AI systems it deployed last year.
The insurer began testing IBM Watson software last June to automatically determine who is at fault in a vehicle accident.
…
“We are working on increasing our use of emerging technologies to assist with the insurance claim process, such as using image recognition to assess type and extent of damage, augmented reality that would enable an off-site claims assessor to discuss and assess damage, speech recognition, and obtaining telematic data from increasingly automated vehicles,” the company said.
Here’s the trade-off: what we gain in development ease-of-use (native SDKs, integration into existing workflows) and performance enhancements (load times, battery efficiency, render quality, integration with native apps), we lose in universality; naturally, each company wants you staying within its own ecosystem.
In a nutshell: new AR platforms from today’s tech giants are aimed at reducing technical headache so you can focus on creating amazing experiences… but they also want you creating more apps for their respective mobile ecosystems.
Learning to play the piano is getting an immersive upgrade with a new augmented reality (AR) piano training software called Music Everywhere. The HoloLens app aims to help students of all talent levels build fundamental music theory and performance skills. While traditional piano lessons can cost upwards of $100 per hour, Music Everywhere is free on the Microsoft store and offers a cost-effective tutoring solution that provides students with immediate interactive feedback, setting it apart from watching a video tutorial.
Founded in 2017, Music Everywhere began at Carnegie Mellon’s ETC with Seth Glickman, Fu Yen Hsiao, and Byunghwan Lee realizing the nascent technology could be used for skills training. The app was the first augmented reality music learning platform to take first prize in Microsoft’s HoloLens Developer Contest, beating more than one thousand submissions.
The market for virtual reality applications is growing at a rapid pace, and is expected to double in the next five years (Bolkan, 2017). As the cost of equipment falls and schools have greater access to technology, there is great interest in virtual reality as an educational tool. A small but growing group of educators have started to integrate virtual reality in their classrooms, with promising results (Castaneda, Cechony & Bautista, 2017). We reached out to teachers and administrators who are currently using virtual reality in their classrooms to hear their perspectives and practical strategies for infusing this resource into their classrooms.
Teachers have creative ideas for how to incorporate immersive education in current classrooms: how to select activities, how to set up the classroom, how to get support during the activity and how to transport devices. Teachers also shared their ideas for future applications of VR, including how to deepen the learning experience and to expand the reach of these technologies to a greater population of students.
Here we share three vignettes of three different approaches: a social studies class in a suburban school district, a district-wide perspective from an urban school district and a class designed entirely around understanding and implementing VR for other classrooms. We also share how we are using these ideas to inform our own project in designing a collaborative immersive virtual reality educational game for introductory high school biology.
VR is already being used for many real-world applications–hiring, training, marketing/sales, medical purposes, entertainment, and more–and is worth considering for many different university departments.
…
At German University in Cairo, architecture students used our platform to create tours of historical Cairo buildings, complete with educational hotspot overlays on particularly interesting features. This multimedia approach educated students without them having to travel to the buildings. It also made for a “stickier” learning experience for the students involved in creating it.
…
At Emporia State University, for example, the forensic science students view virtual crime scenes recorded at the Kansas Bureau of Investigation in Topeka. Forensic-science students can look for clues and learn facts via voiceover, mimicking an actual crime-scene inquiry quite impressively.
Just as televisions and driverless cars have become part of the furniture at the CES technology show, so too have virtual and augmented reality headsets.
Although the momentum behind VR’s growth slowed in 2017 – the industry seemingly unsure if it should progress with a technology destined to remain a niche – AR is being welcomed into the spotlight with open arms.
…
Here are six AR and VR highlights from CES 2018.
The University of Washington, hoping to get ahead in the burgeoning field of augmented reality (AR) and virtual reality (VR), has launched the UW Reality Lab, a center for research, education and innovation in AR and VR.
…
One of the first research centers in the world built for AR and VR projects, the UW Reality Lab is also located in Seattle — a hotspot for technology companies, from behemoths like Amazon and Microsoft to startups still trying to get off the ground.
“We’re seeing some really compelling and high-quality AR and VR experiences being built today,” Steve Seitz, center co-lead and Allen School professor, said in the university’s statement. “But, there are still many core research advances needed to move the industry forward — tools for easily creating content, infrastructure solutions for streaming 3D video, and privacy and security safeguards — that university researchers are uniquely positioned to tackle.”
Why Augmented Reality is Important for eLearning
According to a report released by CCS Insight, augmented and virtual reality hardware is set to become a $4 billion market by 2018. Let’s take a look at how augmented reality can be leveraged in the online learning space:
Simulated working environments
One of the most common advantages of online learning is the ability to form an environment in which users have the freedom to experiment. Since people usually learn from their mistakes, working in a consequence-free environment makes them more likely to remember the right way to do things.
Support Gamification
As online learning management systems (LMSs) use gamification widely, augmented reality can be directly applied. In an AR training module, employees are rewarded for performing their routine tasks in the right way, which will eventually improve performance.
Immersive Learning Environments
Using a tablet or smartphone for online training means users are constantly distracted by emails, notifications from social channels, etc. This is one of the reasons why elearning content uses interactive multimedia elements to engage students. With augmented reality, elearning courses can be supported with 360° video, which will engage users and remove distractions.
Motion tracking
Motion and gesture tracking are part of the AR experience. They are commonly leveraged for choosing menu items or engaging with video game-based environments.
In the online learning domain, LMSs can use this technology to track learners’ progress and ensure that they are achieving the set targets. This will boost real-time training performance and improve interactivity with instant feedback.
Simply put, with augmented reality the possibilities are endless. With the growing number of Bring Your Own Device (BYOD) workplaces, it is expected that employees and learners will be delighted to use augmented reality.
The Musical Future of VR
VR technology is still in its earliest stages, but musicians are already seeing how they will be able to connect with fans and make new ones without the expense of touring. In artificial environments, bands can invite music lovers into their world.
But beyond the obvious entertainment factor, VR has the potential to become a tool for education. Music students could enter a studio space using VR gear for lessons and practice. The immediate feedback provided and game-like atmosphere may keep students more motivated and engaged. Imagine methods for teaching that include ways to slow down and loop difficult parts or bringing in the composer for lessons.
VR can also connect music lovers to the many people behind the scenes involved in producing the music they enjoy. Listeners can learn about the industry and how a song comes to life. They’ll understand why it’s important to play a part in sustaining the music business.
For this technology to become a reality inside consumers’ listening and learning spaces, obstacles need to be addressed. The hardware is still pricey, bulky, and requires a power source. Apps need creators, and those creators will need more in the way of artificial intelligence.
“The ability to combine digital information with the real world is going to disrupt every business model, transform human/machine interaction, and generate innovative use cases across every discipline and in every vertical including education, healthcare, manufacturing,” Werner continued. “I see ARiA as the TED for AR, where the best minds come together to solve real work problems and share ideas to capitalize on the huge opportunity.”
Broadcast news and sports now routinely lay data, graphics, and animation onto the physical world. AR has become ubiquitous in ways that have nothing to do with smart glasses. “AR is on the verge.
In regards to mixed reality for immersive learning:
Pearson – the world’s largest education company – will begin rolling out curriculum in March that will work on both HoloLens and Windows Mixed Reality immersive VR headsets. These six new applications will deliver seamless experiences across devices and further illustrate the value of immersive educational experiences.
We are expanding our mixed reality curriculum offerings through a new partnership with WGBH’s Bringing the Universe to America’s Classrooms project, for distribution nationally on PBS LearningMedia™. This effort brings cutting-edge Earth and Space Science content into classrooms through digital learning resources that increase student engagement with science phenomena and practices.
To keep up with growing demand for HoloLens in the classroom, we are committed to providing affordable solutions. Starting on January 22, we are making available a limited-time academic pricing offer for HoloLens. To take advantage of the limited-time academic pricing offer, please visit hololens.com/edupromo.
Television. TV. There’s an app for that. Finally! TV — that is, live shows such as the news, specials, documentaries (and reality shows, if you must) — is now just like Candy Crush and Facebook. TV apps (e.g., DirecTV Now) are available on all devices — smartphones, tablets, laptops, Chromebooks. Accessing streams upon streams of videos is, literally, now just a tap away.
…
Plain and simple: readily accessible video can be a really valuable resource for learners and learning.
…
Not everything that needs to be learned is on video. Instruction will need to balance the use of video with the use of printed materials. That balance, of course, needs to take in cost and accessibility.
…
Now for the 800-pound gorilla in the room: Of course, that TV app could be a huge distraction in the classroom. The TV app has just piled yet another classroom management challenge onto a teacher’s back.
…
That said, it is early days for TV as an app. For example, HD (High Definition) TV demands high bandwidth — and we can experience stuttering/skipping at times. But, when 5G comes around in 2020, just two years from now, POOF, that stuttering/skipping will disappear. “5G will be as much as 1,000 times faster than 4G.” Yes, POOF!
From DSC: Learning via apps is here to stay. “TV” as apps is here to stay. But what’s being described here is but one piece of the learning ecosystem that will be built over the next 5-15 years and will likely be revolutionary in its global impact on how people learn and grow. There will be opportunities for social-based learning, project-based learning, and more — with digital video being a component of the ecosystem, though video alone is and will be insufficient to move someone completely through all of the levels of Bloom’s Taxonomy.
I will continue to track this developing learning ecosystem, but voice-driven personal assistants are already here. Algorithm-based recommendations are already here. Real-time language translation is already here. The convergence of the telephone/computer/television continues to move forward. AI-based bots will only get better in the future. Tapping into streams of up-to-date content will continue to move forward. Blockchain will likely bring us into the age of cloud-based learner profiles. And on and on it goes.
We’ll still need teachers, professors, and trainers. But this vision WILL occur. It IS where things are heading. It’s only a matter of time.
Though the Next Big Thing won’t appear for a while, we know pretty much what it will look like: a lightweight, always-on wearable that obliterates the divide between the stuff we see on screens and the stuff we see when we look up from our screens.
…
Not every company working on post-reality glasses shares an identical vision; some have differing views of how immersive it should be. But all have quietly adopted the implicit assumption that a persistent, wearable artificial reality is the next big thing. The pressure of the competition has forced them to begin releasing interim products, now.
And he added that “these glasses will offer AR, VR, and everything in between, and we’ll wear them all day and we’ll use them in every aspect of our lives.”
Concerned with the decay of African heritage sites, The Zamani Project, based at the University of Cape Town, South Africa, is seeking to immortalize historic spots in three-dimensional, virtual reality-ready models.
Professor Heinz Ruther steers the project. He ventures up and down the continent — visiting Ghana, Tanzania, Mali, Ethiopia, Kenya and elsewhere — recording in remarkable detail the structure and condition of tombs, churches and other buildings.
“I’ve seen how sites are deteriorating visibly,” Ruther told CNN.
The project’s aim is to build a database of complex, lifelike 3-D models. Presently, they’ve mapped around 16 sites including Lalibela in Ethiopia, Timbuktu in Mali and Kilwa in Tanzania.
Education has been a recurring theme throughout the many programs of the NYC Media Lab, a public-private partnership where I serve as an Executive Director. How will virtual and augmented reality change the classroom? How can teachers use immersive media to educate citizens and keep our communities vibrant? In what ways can enterprises leverage innovation to better train employees and streamline workflows?
These are just a few of the top-of-mind questions that NYC Media Lab’s consortium is thinking about as we enter the next wave of media transformation.
Researchers and professionals at work across the VR/AR community in New York City are excited for what comes next. At NYC Media Lab’s recent Exploring Future Reality conference, long-time educators including Agnieszka Roginska of New York University and Columbia University’s Steven Feiner pointed to emerging media as a way to improve multi-modal learning for students and train computer systems to understand the world around us.
NYC Media Lab merges engineering and design research happening at the city’s universities with resources and opportunities from the media and technology industry—to produce new prototypes, launch new companies and advocate for the latest thinking.
In the past year, the Lab has completed dozens of rapid prototyping projects; exhibited hundreds of demos from the corporate, university and entrepreneurship communities; helped new startups make their mark; and hosted three major events, all to explore emerging media technologies and their evolving impact.
While it’s all too easy to lose ourselves in the countless VR worlds at our fingertips, sometimes we just need to access the desktop and get things done in Windows. Thanks to a few innovative apps, this is possible without removing your headset.
The Innovation Group rounds up a preview of what’s in store at CES.
Next week, the electronics industry will descend on Las Vegas for the annual Consumer Electronics Show, the country’s top showcase for technology and innovation. Vendors will show off everything from new gadgets (drones, TVs) to the future of cities and transportation (smart robots, self-driving cars).
What can attendees expect from this year’s CES? And what trends will rise to the top this year? Below, the Innovation Group rounds up our top picks.
This year, expect to see more enhanced voice and image recognition technology, allowing seamless interactions between consumer, lifestyle and retail.
Artificial Intelligence is the talk of the town. It has evolved past merely being a buzzword in 2016, to being used in a more practical manner in 2017. As 2018 rolls out, we will gradually notice AI transitioning into a necessity. We have prepared a detailed report on what we can expect from AI in the upcoming year. So sit back, relax, and enjoy the ride through the future. (Don’t forget to wear your VR headgear!)
Here are 18 things that will happen in 2018 that are either AI driven or driving AI:
Artificial General Intelligence may gain major traction in research.
We will turn to AI-enabled solutions to solve mission-critical problems.
Machine Learning adoption in business will see rapid growth.
Safety, ethics, and transparency will become an integral part of AI application design conversations.
Mainstream adoption of AI on mobile devices
Major research on data efficient learning methods
AI personal assistants will continue to get smarter
The race to conquer the AI-optimized hardware market will heat up further
We will see closer AI integration into our everyday lives.
The cryptocurrency hype will normalize and pave the way for AI-powered Blockchain applications.
Deep learning will continue to play a significant role in AI development progress.
AI will be on both sides of the cybersecurity challenge.
Augmented reality content will be brought to smartphones.
Reinforcement learning will be applied to a large number of real-world situations.
Robotics development will be powered by deep reinforcement learning and meta-learning
Rise in immersive media experiences enabled by AI.
A large number of organizations will use Digital Twin.
Inside AI — from inside.com The year 2017 has been full of interesting news about Artificial Intelligence, so to close out the year, we’re doing two special retrospective issues covering the highlights.
Excerpt:
A Reality Check For IBM’s A.I. Ambitions. MIT Tech Review. This is a must read piece about the failures, and continued promise, of Watson. Some of the press about Watson has made IBM appear behind some of the main tech leaders, but, keep in mind that Google, Amazon, Facebook, and others don’t do the kinds of customer facing projects IBM is doing with Watson. When you look at how the tech giants are positioned, I think IBM has been vastly underestimated, given that they have the one thing few others do – large scale enterprise A.I. projects. Whether it all works today, or not, doesn’t matter. The experience and expertise they are building is a competitive advantage in a market that is very young where no other companies are doing these types of projects that will soon enough be mainstream.
The Business of Artificial Intelligence. Harvard Business Review. This cover story for the latest edition of HBR explains why artificial intelligence is the most powerful general purpose technology to come around in a long time. It also looks into some of the key ways to think about applying A.I. at work, and how to expect the next phase of this technology to play out.
The Robot Revolution Is Coming. Just Be Patient. Bloomberg. We keep hearing that robots and A.I. are about to make us super productive. But when? Sooner than we think, according to this.
There are few electronic devices with which you cannot order a Domino’s pizza. When the craving hits, you can place an order via Twitter, Slack, Facebook Messenger, SMS, your tablet, your smartwatch, your smart TV, and even your app-enabled Ford. This year, the pizza monger added another ordering tool: If your home is one of the 20 million with a voice assistant, you can place a regular order through Alexa or Google Home. Just ask for a large extra-cheese within earshot, and voila—your pizza is in the works.
Amazon’s Alexa offers more than 25,000 skills—the set of actions that serve as applications for voice technology. Yet Domino’s is one of a relatively small number of brands that has seized the opportunity to enter your home by creating a skill of its own. Now that Amazon Echoes and Google Homes are in kitchens and living rooms across the country, they open a window into user behavior that marketers previously only dreamt of. But brands’ efforts to engage consumers directly via voice have been scattershot. The list of those that have tried is sparse: some banks; a couple of fast food chains; a few beauty companies; retailers here and there. Building a marketing plan for Alexa has been a risky venture. That’s because, when it comes to our virtual assistants, no one knows what the *&^& is going on.
But if 2017 was the year that Alexa hit the mainstream, 2018 will be the year that advertisers begin to take her seriously by investing time and money in figuring out how to make use of her.
8 emerging AI jobs for IT pros — from enterprisersproject.com by Kevin Casey What IT jobs will be hot in the age of AI? Take a sneak peek at roles likely to be in demand
Excerpt:
If you’re watching the impact of artificial intelligence on the IT organization, your interest probably starts with your own job. Can robots do what you do? But more importantly, you want to skate where the puck is headed. What emerging IT roles will AI create? We talked to AI and IT career experts to get a look at some emerging roles that will be valuable in the age of AI.
This past June, Fortune Magazine asked all the CEOs of the Fortune 500 what they believed the biggest challenge facing their companies was. Their biggest concern for 2017: “The rapid pace of technological change,” said 73% of those polled, up from 64% in 2016. Cyber security came in a distant second, at 61%, even after all the mega hacks of the past year.
But apart from the Big 8 technology companies – Google, Facebook, Microsoft, Amazon, IBM, Baidu, Tencent, and Alibaba – business leaders, especially of earlier generations, may feel they don’t know enough about AI to make informed decisions.
For the first time, artificial intelligence has been used to discover two new exoplanets. One of the discoveries, made by NASA’s Kepler mission, brings the Kepler-90 solar system to a total of 8 planets – the first solar system found with the same number as our own.
Two University at Buffalo education researchers have teamed up to create an interactive classroom environment in which state-of-the-art virtual reality simulates difficult student behavior, a training method its designers compare to a “flight simulator for teachers.”
The new program, already earning endorsements from teachers and administrators in an inner-city Buffalo school, ties into State University of New York Chancellor Nancy L. Zimpher’s call for innovative teaching experiences and “immersive” clinical experiences and teacher preparation.
…
The training simulator Lamb compared to a teacher flight simulator uses an emerging computer technology known as virtual reality. Becoming more popular and accessible commercially, virtual reality immerses the subject in what Lamb calls “three-dimensional environments in such a way where that environment is continuous around them.” An important characteristic of the best virtual reality environments is a convincing and powerful representation of the imaginary setting.
TeachLive.org TLE TeachLivE™ is a mixed-reality classroom with simulated students that provides teachers the opportunity to develop their pedagogical practice in a safe environment that doesn’t place real students at risk. This lab is currently the only one in the country using a mixed reality environment to prepare or retrain pre-service and in-service teachers. The use of TLE TeachLivE™ Lab has also been instrumental in developing transition skills for students with significant disabilities, providing immediate feedback through bug-in-ear technology to pre-service teachers, developing discrete trial skills in pre-service and in-service teachers, and preparing teachers in the use of STEM-related instructional strategies.
From DSC: It will be interesting to see all the “places” we will be able to go and interact within — all from the comfort of our living rooms! Next generation simulators should be something else for teaching/learning & training-related purposes!!!
The next gen learning platform will likely offer such virtual reality-enabled learning experiences, along with voice recognition/translation services and a slew of other technologies — such as AI, blockchain*, chatbots, data mining/analytics, web-based learner profiles, an online-based marketplace supported by the work of learning-based free agents, and others — running in the background. All of these elements will work to offer us personalized, up-to-date learning experiences — helping each of us stay relevant in the marketplace as well as simply enabling us to enjoy learning about new things.
But the potentially disruptive piece of all of this is that this next generation learning platform could create an Amazon.com of what we now refer to as “higher education.” It could just as easily serve as a platform for offering learning experiences for learners in K-12 as well as the corporate learning & development space.
In 2014, The King’s College in New York became the first university in the U.S. to accept Bitcoin for tuition payments, a move that seemed more of a PR stunt than the start of some new movement. Much has changed since then, including the value of Bitcoin itself, which skyrocketed to more than $19,000 earlier this month, catapulting cryptocurrencies into the mainstream.
A handful of other universities (and even preschools) now accept Bitcoin for tuition, but that’s hardly the extent of how blockchains and tokens are weaving their way into education: Educators and edtech entrepreneurs are now testing out everything from issuing degrees on the blockchain to paying people in cryptocurrency for their teaching.
The 10 Most Exciting Digital Health Stories of 2017 — from medicalfuturist.com by Dr. Bertalan Mesko Gene-edited human embryo. Self-driving trucks. Practical quantum computers. 2017 has been an exciting year for science, technology – and digital health! It’s that time of the year again when it’s worth looking back at the past months; and list the inventions, methods and milestone events in healthcare to get a clearer picture what will shape medicine for the years to come.
Excerpt:
Medical chatbots and health assistants on the rise
Chatbots, A.I.-supported messaging apps, and voice-controlled bots are forecasted to replace simple messaging apps soon. In healthcare, they could help solve easily diagnosable health concerns or support patient management, e.g. general organizational issues. In recent months, several signs have pointed in the direction that more widespread use is forthcoming.
We’ve all seen the commercials: “Alexa, is it going to rain today?” “Hey, Google, turn up the volume.” Consumers across the globe are finding increased utility in voice command technology in their homes. But dimming lights and reciting weather forecasts aren’t the only ways these devices are being put to work.
Educators from higher ed powerhouses like Arizona State University to small charter schools like New Mexico’s Taos Academy are experimenting with Amazon Echo, Google Home or Microsoft Invoke and discovering new ways this technology can create a more efficient and creative learning environment.
The devices are being used to help students with and without disabilities gain a new sense of digital fluency, find library materials more quickly and even promote events on college campuses to foster greater social connection.
Like many technologies, the emerging presence of voice command devices in classrooms and at universities is also raising concerns about student privacy and unnatural dependence on digital tools. Yet, many educators interviewed for this report said the rise of voice command technology in education is inevitable — and welcome.
…
“One example,” he said, “is how voice dictation helped a student with dysgraphia. Putting the pencil and paper in front of him, even typing on a keyboard, created difficulties for him. So, when he’s able to speak to the device and see his words on the screen, the connection becomes that much more real to him.”
The use of voice dictation has also been beneficial for students without disabilities, Miller added. Through voice recognition technology, students at Taos Academy Charter School are able to perceive communication from a completely new medium.
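For readers curious about what "speak to the device and see the words on the screen" looks like in practice, here is a minimal dictation sketch using the SpeechRecognition package for Python. The library choice and the Google Web Speech backend are my assumptions for illustration; they are not what Taos Academy reports using.

```python
# Minimal dictation sketch: capture microphone audio and print the
# transcribed text, so a student can speak and see the words appear.
# Requires: pip install SpeechRecognition pyaudio

import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate for room noise
    print("Speak now...")
    audio = recognizer.listen(source)

try:
    text = recognizer.recognize_google(audio)  # free Web Speech API backend
    print("You said:", text)
except sr.UnknownValueError:
    print("Sorry, I couldn't understand that.")
```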
This three-part lab can be experienced all at once or separately. At the beginning of each part, Beatriz’s brain acts as an omniscient narrator, helping learners understand how changes to the brain affect daily life and interactions.
Pre- and post-assessments, along with a facilitation guide, allow learners and instructors to see progression towards outcomes that are addressed through the story and content in the three parts, including:
1) increased knowledge of Alzheimer’s disease and the brain
2) enhanced confidence to care for people with Alzheimer’s disease
3) improvement in care practice
Why a lab about Alzheimer’s Disease?
The Beatriz Lab is very important to us at Embodied Labs. It is the experience that inspired the start of our company. We believe VR is more than a way to evoke feelings of empathy; rather, it is a powerful behavior change tool. By taking the perspective of Beatriz, healthcare professionals and trainees are empowered to better care for people with Alzheimer’s disease, leading to more effective care practices and better quality of life. Through embodying Beatriz, you will gain insight into life with Alzheimer’s and be able to better connect with and care for your loved ones, patients, clients, or others in your community who live with the disease every day. In our embodied VR experience, we hope to portray both the difficult and joyful moments — the disease surely is a mix of both.
As part of the experience, you will take a 360-degree trip into Beatriz’s brain, and visit a neuron “forest” that is being affected by amyloid beta plaques and tau proteins.
From DSC: I love the work that Carrie Shaw and @embodiedLabs are doing! Thanks Carrie & Company!
As VR continues to grow and improve, the experiences will feel more real. But for now, here are the best business conference applications in virtual reality.
A sign of how Apple is supporting VR in parts of its ecosystem, Final Cut Pro X (along with Motion and Compressor), now has a complete toolset that lets you import, edit, and deliver 360° video in both monoscopic and stereoscopic formats.
Final Cut Pro X 10.4 comes with a handful of slick new features that we tested, such as advanced color grading and support for High Dynamic Range (HDR) workflows. All useful features for creators, not just VR editors, especially since Final Cut Pro is used so heavily in industries like video editing and production. But up until today, VR post-production options have been minimal, with no support from major VR headsets. We’ve had options with Adobe Premiere plus plugins, but not everyone wants to be pigeon-holed into a single software option. And Final Cut Pro X runs butter smooth on the new iMac, so there’s that.
Now with the ability to create immersive 360° films right in Final Cut Pro, an entirely new group of creators have the ability to dive into the world of 360 VR video. It’s simple and intuitive, something we expect from an Apple product. The 360 VR toolset just works.
HWAM’s first exhibition is a unique collection of Star Wars production pieces, including the very first drawings made for the film franchise and never-before-seen production art from the original trilogy by Lucasfilm alum Joe Johnston, Ralph McQuarrie, Phil Tippett, Drew Struzan, Colin Cantwell, and more.
Will virtual reality help you learn a language more quickly? Or will it simply replace your memory?
VR is the ultimate medium for delivering what is known as “experiential learning.” This education theory is based on the idea that we learn and remember things much better when doing something ourselves than by merely watching someone else do it or being told about it.
The immersive nature of VR means users remember content they interact with in virtual scenarios much more vividly than with any other medium. (According to experiments carried out by professor Ann Schlosser at the University of Washington, VR even has the capacity to prompt the development of false memories.)