From DSC: Along these lines, will faculty use their voices to control their room setups (e.g., the projection, shades, room lighting, what’s shown on the LMS, etc.)?
Or will machine-to-machine communications, the Internet of Things, sensors, mobile/cloud-based apps, and the like take care of those items automatically when a faculty member walks into the room?
From DSC:
Check out the two items below regarding the use of voice as it pertains to virtual assistants: one involves healthcare and the other involves education (Canvas).
The majority of intelligent voice assistant platforms today are built around smart speakers, such as the Amazon Echo and Google Home. But that might change soon, as several specialized devices focused on the health market are slated to be released this year.
One example is ElliQ, an elder care assistant robot from Samsung NEXT portfolio company Intuition Robotics. Powered by AI cognitive technology, it encourages an active and engaged lifestyle. Aimed at older adults aging in place, it can recognize their activity level and suggest activities, while also making it easier to connect with loved ones.
Pillo is an example of another such device. It is a robot that combines machine learning, facial recognition, video conferencing, and automation to work as a personal health assistant. It can dispense vitamins and medication, answer health and wellness questions in a conversational manner, securely sync with a smartphone and wearables, and allow users to video conference with health care professionals.
“It is much more than a smart speaker. It is HIPAA compliant and it recognizes the user; acknowledges them and delivers care plans,” said Rogers, whose company created the voice interface for the platform.
Orbita is now working with toSense’s remote monitoring necklace to track vitals and cardiac fluids as a way to help physicians monitor patients remotely. Many more seem to be on their way.
“Be prepared for several more devices like these to hit the market soon,” Rogers predicted.
From DSC:
I see the piece about Canvas and Alexa as a great example of where a piece of our future learning ecosystems is heading — in fact, it’s been a piece of my Learning from the Living [Class] Room vision for a while now. The use of voice recognition/NLP is only picking up steam; look for more of this kind of functionality in the future.
The project was commissioned by the publicly funded organisation Waterfront Toronto, which put out calls last spring for proposals to revitalise the 12-acre industrial neighbourhood of Quayside along Toronto’s waterfront.
Prime Minister Justin Trudeau flew down to announce the agreement with Sidewalk Labs, which is owned by Google’s parent company Alphabet, last October, and the project has received international attention for being one of the first smart cities designed from the ground up.
But five months later, few people have actually seen the full agreement between Sidewalk and Waterfront Toronto.
As council’s representative on Waterfront Toronto’s board, Mr Minnan-Wong is the only elected official to actually see the legal agreement in full. Not even the mayor knows what the city has signed on for.
“We got very little notice. We were essentially told ‘here’s the agreement, the prime minister’s coming to make the announcement,'” he said.
“Very little time to read, very little time to absorb.”
Now, his hands are tied – he is legally not allowed to comment on the contents of the sealed deal, but he has been vocal about his belief it should be made public.
“Do I have concerns about the content of that agreement? Yes,” he said.
“What is it that is being hidden, why does it have to be secret?”
From DSC: Google needs to be very careful here. Increasingly so these days, our trust in them (and other large tech companies) is at stake.
For academics and average workers alike, the prospect of automation provokes concern and controversy. As the American workplace continues to mechanize, some experts see harsh implications for employment, including the loss of 73 million jobs by 2030. Others maintain more optimism about the fate of the global economy, contending technological advances could grow worldwide GDP by more than $1.1 trillion in the next 10 to 15 years. Whatever we make of these predictions, there’s no question automation will shape the economic future of the nation – and the world.
But while these fiscal considerations are important, automation may positively affect an even more essential concern: human life. Every day, thousands of Americans risk injury or death simply by going to work in dangerous conditions. If robots replaced them, could hundreds of lives be saved in the years to come?
In this project, we studied how many fatal injuries could be averted if dangerous occupations were automated. To do so, we analyzed which fields are most deadly and the likelihood of their automation according to expert predictions. To see how automation could save Americans’ lives, keep reading.
There have been a lot of sci-fi stories written about artificial intelligence. But now that it’s actually becoming a reality, how is it really affecting the world? Let’s take a look at the current state of AI and some of the things it’s doing for modern society.
Creating New Technology Jobs
Using Machine Learning To Eliminate Busywork
Preventing Workplace Injuries With Automation
Reducing Human Error With Smart Algorithms
From DSC: This is clearly a pro-AI piece. Not all uses of AI are beneficial, but this article mentions several use cases where AI can make positive contributions to society.
From DSC: This article is also a pro-AI piece. But again, not all uses of AI are beneficial. We need to be aware of — and involved in — what is happening with AI.
Investing in an Automated Future — from clomedia.com by Mariel Tishma
Employers recognize that technological advances like AI and automation will require employees with new skills. Why are so few investing in the necessary learning?
2018 TECH TRENDS REPORT — from the Future Today Institute
Emerging technology trends that will influence business, government, education, media and society in the coming year.
Description:
The Future Today Institute’s 11th annual Tech Trends Report identifies 235 tantalizing advancements in emerging technologies—artificial intelligence, biotech, autonomous robots, green energy and space travel—that will begin to enter the mainstream and fundamentally disrupt business, geopolitics and everyday life around the world. Our annual report has garnered more than six million cumulative views, and this edition is our largest to date.
Helping organizations see change early and calculate the impact of new trends is why we publish our annual Emerging Tech Trends Report, which focuses on mid- to late-stage emerging technologies that are on a growth trajectory.
a calendar of events that will shape technology this year
detailed near-future scenarios for several of the technologies
a new framework to help organizations decide when to take action on trends
an interactive table of contents, which will allow you to more easily navigate the report from the bookmarks bar in your PDF reader
01 How does this trend impact our industry and all of its parts?
02 How might global events — politics, climate change, economic shifts — impact this trend, and as a result, our organization?
03 What are the second, third, fourth, and fifth-order implications of this trend as it evolves, both in our organization and our industry?
04 What are the consequences if our organization fails to take action on this trend?
05 Does this trend signal emerging disruption to our traditional business practices and cherished beliefs?
06 Does this trend indicate a future disruption to the established roles and responsibilities within our organization? If so, how do we reverse-engineer that disruption and deal with it in the present day?
07 How are the organizations in adjacent spaces addressing this trend? What can we learn from their failures and best practices?
08 How will the wants, needs and expectations of our consumers/constituents change as a result of this trend?
09 Where does this trend create potential new partners or collaborators for us?
10 How does this trend inspire us to think about the future of our organization?
From DSC:
After seeing the article entitled, “Scientists Are Turning Alexa into an Automated Lab Helper,” I began to wonder…might Alexa be a tool to periodically schedule & provide practice tests & distributed practice on content? In the future, will there be “learning bots” that a learner can employ to do such self-testing and/or distributed practice?
Scientists Are Turning Alexa into an Automated Lab Helper— from technologyreview.com by Jamie Condliffe Amazon’s voice-activated assistant follows a rich tradition of researchers using consumer tech in unintended ways to further their work.
Excerpt:
Alexa, what’s the next step in my titration?
Probably not the first question you ask your smart assistant in the morning, but potentially the kind of query that scientists may soon be leveling at Amazon’s AI helper. Chemical & Engineering News reports that software developer James Rhodes—whose wife, DeLacy Rhodes, is a microbiologist—has created a skill for Alexa called Helix that lends a helping hand around the laboratory.
It makes sense. While most people might ask Alexa to check the news headlines, play music, or set a timer because our hands are a mess from cooking, scientists could look up melting points, pose simple calculations, or ask for an experimental procedure to be read aloud while their hands are gloved and in use.
…
For now, Helix is still a proof-of-concept. But you can sign up to try an early working version, and Rhodes has plans to extend its abilities…
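Along the lines of the self-testing idea above, here is a minimal sketch of what a quiz-style “learning bot” skill handler could look like. The intent names (StartPracticeIntent, AnswerIntent), the Answer slot, and the in-memory question bank are all hypothetical, invented for illustration; only the JSON envelope follows Alexa’s documented custom-skill response format.

```python
import random

# Hypothetical question bank -- a real skill might pull these from a database
# and schedule items for distributed practice based on past performance.
QUESTIONS = [
    {"q": "What is the capital of France?", "a": "paris"},
    {"q": "What gas do plants absorb during photosynthesis?", "a": "carbon dioxide"},
]

def build_response(text, session_attrs=None, end_session=False):
    """Wrap plain text in the JSON envelope Alexa expects from a skill endpoint."""
    return {
        "version": "1.0",
        "sessionAttributes": session_attrs or {},
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def handle_intent(intent_name, slots, session_attrs):
    """Route the two intents a minimal self-testing skill would need."""
    if intent_name == "StartPracticeIntent":
        question = random.choice(QUESTIONS)
        session_attrs["answer"] = question["a"]  # remember what we asked
        return build_response(question["q"], session_attrs)
    if intent_name == "AnswerIntent":
        said = slots.get("Answer", "").lower()
        correct = said == session_attrs.get("answer")
        feedback = "Correct!" if correct else "Not quite. We'll revisit that one later."
        return build_response(feedback, session_attrs, end_session=True)
    return build_response("Say 'start practice' to begin.", session_attrs)
```

A real deployment would add persistence per learner, so the bot could bring back missed items days later, which is the heart of distributed practice.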
In the first part of this blog series, we gave simple and more elaborate definitions of artificial intelligence (AI), machine learning, and deep learning. In this second part of the series, we explain the difference between AI, machine learning, and deep learning.
You can think of artificial intelligence (AI), machine learning, and deep learning as a set of matryoshka dolls, also known as Russian nesting dolls: deep learning is a subset of machine learning, which is in turn a subset of AI.
1. Feed your chatbot with information your students don’t have. Your institute’s website can be as elaborate as it gets, but if your students can’t find a piece of information on it, it’s as good as incomplete. Say, for example, you offer certain scholarships that students can voluntarily apply for. But the information on these scholarships is tucked away on a remote page that your students don’t access in their day-to-day usage of your site.
So Amy, a new student, has no idea that there’s a scholarship that can potentially make her course 50% cheaper. She can scour your website for details when she finds the time. Or she can ask your university’s chatbot, “Where can I find information on your scholarships?”
And the chatbot can tell her, “Here’s a link to all our current scholarships.”
The best chatbots for colleges and universities tend to be programmed with even more detail, and can actually strike up a conversation by saying things like:
“Please give me the following details so I can pull out all the scholarships that apply to you.
“Which department are you in? (Please select one.)
“Which course are you enrolled in? (Please select one.)
“Which year of study are you in? (Please select one.)
“Thank you for the details! Here’s a list of all applicable scholarships. Please visit the links for detailed information and let me know if I can be of further assistance.”
(A minimal sketch of this kind of slot-filling flow appears after this list.)
2. Let it answer all the “What do I do now?” questions.
3. Turn it into a campus guide.
4. Let it take care of paperwork.
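Here is the slot-filling sketch promised in tip 1 above. The scholarship records, slot names, and prompts are invented for illustration; a production bot would sit behind an NLP layer and query a real student-information system.

```python
# Hypothetical scholarship records -- a real deployment would query the
# institution's own database instead of this toy list.
SCHOLARSHIPS = [
    {"name": "STEM Merit Award", "department": "Engineering", "year": 1},
    {"name": "Dean's Grant", "department": "Business", "year": 2},
    {"name": "First-Year Bursary", "department": "Engineering", "year": 1},
]

# The three slots the bot asks for, in order, with their prompts.
SLOTS = [
    ("department", "Which department are you in?"),
    ("course", "Which course are you enrolled in?"),
    ("year", "Which year of study are you in?"),
]

def next_prompt(filled):
    """Return the next question to ask, or None when every slot is filled."""
    for slot, prompt in SLOTS:
        if slot not in filled:
            return prompt
    return None

def matching_scholarships(filled):
    """Filter the catalog on the slots that constrain it.
    (The toy catalog isn't keyed by course, so that slot is collected but unused.)"""
    return [
        s["name"] for s in SCHOLARSHIPS
        if s["department"] == filled.get("department")
        and s["year"] == int(filled.get("year", 0))
    ]

# Example turn-by-turn run, simulating one student's answers:
filled = {}
for answer in ["Engineering", "Mechatronics BSc", "1"]:
    slot = next(s for s, _ in SLOTS if s not in filled)
    filled[slot] = answer
print(matching_scholarships(filled))  # ['STEM Merit Award', 'First-Year Bursary']
```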
From DSC: This is the sort of thing that I was trying to get at last year at the NGLS 2017 Conference:
1. Mobile-first to AI-first
A major shift in business thinking has placed Artificial Intelligence at the very heart of business strategy. 2017 saw tech giants including Google and Microsoft focus on an “AI first” strategy, leading the way for other major corporates to follow suit. Companies are demonstrating a willingness to use AI and related tools like machine learning to automate processes, reduce administrative tasks, and collect and organise data. Understanding vast amounts of information is vital in the age of mass data, and AI is proving to be a highly effective solution. Whilst AI has been vilified in the media as the enemy of jobs, many businesses have undergone a transformation in mentalities, viewing AI as enhancing rather than threatening the human workforce.
…
7. Voice-based virtual assistants become ubiquitous
The wide uptake of home-based virtual assistants like Alexa and Google Home has built confidence in conversational interfaces, familiarising consumers with a seamless way of interacting with tech. Amazon and Google have taken prime position between brand and customer, capitalising on conversational convenience. The further adoption of this technology will enhance personalised advertising and sales, creating a direct link between company and consumer.
PALO ALTO, Calif. — The medical profession has an ethic: First, do no harm.
Silicon Valley has an ethos: Build it first and ask for forgiveness later.
Now, in the wake of fake news and other troubles at tech companies, universities that helped produce some of Silicon Valley’s top technologists are hustling to bring a more medicine-like morality to computer science.
This semester, Harvard University and the Massachusetts Institute of Technology are jointly offering a new course on the ethics and regulation of artificial intelligence. The University of Texas at Austin just introduced a course titled “Ethical Foundations of Computer Science” — with the idea of eventually requiring it for all computer science majors.
And at Stanford University, the academic heart of the industry, three professors and a research fellow are developing a computer science ethics course for next year. They hope several hundred students will enroll.
The idea is to train the next generation of technologists and policymakers to consider the ramifications of innovations — like autonomous weapons or self-driving cars — before those products go on sale.
The VR / AR / MR Breakdown
This year will see growth in a variety of virtual technologies and uses. There are differences and similarities between virtual, augmented, and mixed reality technologies. The technology is constantly evolving and even the terminology around it changes quickly, so you may hear variations on these terms.
Augmented reality is what was behind the Pokémon Go craze. Players could see game characters on their devices superimposed over images of their physical surroundings. Virtual features seemed to exist in the real world.
Mixed reality combines virtual features and real-life objects. So, in this way it includes AR but it also includes environments where real features seem to exist in a virtual world.
The folks over at Recode explain mixed reality this way:
In theory, mixed reality lets the user see the real world (like AR) while also seeing believable, virtual objects (like VR). And then it anchors those virtual objects to a point in real space, making it possible to treat them as “real,” at least from the perspective of the person who can see the MR experience.
And, virtual reality uses immersive technology to seemingly place a user into a simulated lifelike environment.
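One way to picture the “anchoring” the Recode quote describes: the virtual object keeps fixed world coordinates, and the device re-projects it into the moving camera’s frame on every frame. A toy 2D sketch, with all coordinates hypothetical:

```python
import math

# Hypothetical anchor: a virtual object pinned 2 m in front of the start pose.
anchor_world = (2.0, 0.0)

def to_camera_space(point, cam_pos, cam_yaw):
    """Express a world-space point in the camera's local frame (2D for brevity)."""
    dx, dy = point[0] - cam_pos[0], point[1] - cam_pos[1]
    cos_y, sin_y = math.cos(-cam_yaw), math.sin(-cam_yaw)
    return (dx * cos_y - dy * sin_y, dx * sin_y + dy * cos_y)

# As the camera walks and turns, the anchor's camera-space position changes
# every frame, but its world position never does -- which is what makes the
# object feel "real", pinned to a point in physical space.
for cam_pos, cam_yaw in [((0, 0), 0.0), ((1, 0), 0.0), ((1, 1), math.pi / 4)]:
    print(cam_pos, cam_yaw, to_camera_space(anchor_world, cam_pos, cam_yaw))
```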
…
Where You’ll Find These New Realities
Education and research fields are at the forefront of VR and AR technologies, where an increasing number of students have access to tools. But higher education isn’t the only place you see this trend. The number of VR companies grew 250 percent between 2012 and 2017. Even the latest iPhones include augmented reality capabilities. Aside from the classroom and your pocket, here are some other places you’re likely to see VR and AR pop up in 2018.
VR/AR for Impact experiences shown this week at WEF 2018 include:
OrthoVR aims to increase the availability of well-fitting prosthetics in low-income countries by using Virtual Reality and 3D rapid prototyping tools to increase the capacity of clinical staff without reducing quality. VR allows current prosthetists and orthotists to leverage their hands-on and embodied skills within a digital environment.
The Extraordinary Honey Bee is designed to help deepen our understanding of the honey bee’s struggle and learn what is at stake for humanity due to the dying global population of the honey bee. Told from a bee’s perspective, The Extraordinary Honey Bee harnesses VR to inspire change in the next generation of honey bee conservationists.
The Blank Canvas: Hacking Nature is an episodic exploration of the frontiers of bioengineering as taught by the leading researchers within the field. Using advanced scientific visualization techniques, the Blank Canvas will demystify the cellular and molecular mechanisms that are being exploited to drive substantial leaps such as gene therapy.
LIFE (Life-saving Instruction For Emergencies) is a new mobile and VR platform developed by the University of Oxford that enables all types of health worker to manage medical emergencies. Through the use of personalized simulation training and advanced learning analytics, the LIFE platform offers the potential to dramatically extend access to life-saving knowledge in low-income countries.
Tree is a critically acclaimed virtual reality experience to immerse viewers in the tragic fate that befalls a rainforest tree. The experience brings to light the harrowing realities of deforestation, one of the largest contributors to global warming.
For the Amazonian Yawanawa, ‘medicine’ has the power to travel you in a vision to a place you have never been. Hushuhu, the first woman shaman of the Yawanawa, uses VR like medicine to open a portal to another way of knowing. AWAVENA is a collaboration between a community and an artist, melding technology and transcendent experience so that a vision can be shared, and a story told of a people ascending from the edge of extinction.
Types of Virtual Reality Technology
We can categorize virtual reality technology according to the user experience it offers:
Non-immersive
Non-immersive simulations are the least immersive implementation of virtual reality technology. In this kind of simulation, only a subset of the user’s senses is replicated, allowing for marginal awareness of the reality outside the VR simulation. A user enters 3D virtual environments through a portal or window, using the standard HD monitors typically found on conventional desktop workstations.
Semi-immersive
In this kind of simulation, users experience richer immersion: the user is partly, but not fully, involved in a virtual environment. Semi-immersive simulations are based on high-performance graphical computing, often coupled with large-screen projector systems or multiple TV projections to properly simulate the user’s visuals.
Fully immersive
Fully immersive simulations offer the most complete virtual reality experience. Head-mounted displays (HMDs) and motion-sensing devices are used to simulate all of the user’s senses. Here a user can experience a realistic virtual environment with a wide field of view, high resolutions, increased refresh rates, and high-quality visualization through the HMD.
These workers already routinely use technology such as tablets to access plans and data on site, but going from 2D to 3D at scale brings that to a whole new level. “Superimposing the digital model on the physical environment provides a clear understanding of the relations between the 3D design model and the actual work on a jobsite,” explained Olivier Pellegrin, BIM manager, GA Smart Building.
The application they are using is called Trimble Connect. It turns data into 3D holograms, which are then mapped out to scale onto the real-world environment. This gives workers an instant sense of where and how various elements will fit and exposes mistakes early on in the process.
Also see:
Trimble Connect for HoloLens is a mixed reality solution that improves building coordination by combining models from multiple stakeholders such as structural, mechanical and electrical trade partners. The solution provides for precise alignment of holographic data on a 1:1 scale on the job site, to review models in the context of the physical environment. Predefined views from Trimble Connect further simplify in-field use with quick and easy access to immersive visualizations of 3D data. Users can leverage mixed reality for training purposes and to compare plans against work completed. Advanced visualization further enables users to view assigned tasks and capture data with onsite measurement tools.

Trimble Connect for HoloLens is available now through the Microsoft Windows App Store. A free trial option is available enabling integration with HoloLens. Paid subscriptions support premium functionality allowing for precise on-site alignment and collaboration.

Trimble’s Hard Hat Solution for Microsoft HoloLens extends the benefits of HoloLens mixed reality into areas where increased safety requirements are mandated, such as construction sites, offshore facilities, and mining projects. The solution, which is ANSI-approved, integrates the HoloLens holographic computer with an industry-standard hard hat. Trimble’s Hard Hat Solution for HoloLens is expected to be available in the first quarter of 2018. To learn more, visit mixedreality.trimble.com.
From DSC:
Combining voice recognition / Natural Language Processing (NLP) with Mixed Reality should provide some excellent, powerful user experiences. Doing so could also provide some real-time understanding as well as highlight potential issues in current designs. It will be interesting to watch this space develop. If there were an issue, wouldn’t it be great to remotely ask someone to update the design and then see the updated design in real-time? (Or might there be a way to make edits via one’s voice and/or with gestures?)
I could see where these types of technologies could come in handy when designing / enhancing learning spaces.
There’s been a lot of cool stuff happening lately around Augmented Reality (AR), and since I love exploring and having fun with new technologies, I thought I would see what I could do with AR and the Web — and it turns out I was able to do quite a lot!
Most AR demos are with static objects, like showing how you can display a cool model on a table, but AR really begins to shine when you start adding in animations!
With animated AR, your models come to life, and you can then start telling a story with them.
If you’re in the market for some art in your house or apartment, Art.com will now let you use AR to put digital artwork up on your wall.
The company’s ArtView feature is one of the few augmented reality features that actually adds a lot to the app it’s put in. With the ARKit-enabled tech, the artwork is accurately sized so you can get a perfect idea of how your next purchase could fit on your wall. The feature can be used for the two million pieces of art on the site and can be customized with different framing types.
Bailenson’s newest book, Experience on Demand, builds on that earlier work while focusing more clearly — even bluntly — on what we do and don’t know about how VR affects humans.
…
“The best way to use it responsibly is to be educated about what it is capable of, and to know how to use it — as a developer or a user — responsibly,” Bailenson wrote in the book.
Among the questions raised:
“How educationally effective are field trips in VR? What are the design principles that should guide these types of experiences?”
“How many individuals are not meeting their potential because they lack the access to good instruction and learning tools?”
“When we consider that the subjects were made uncomfortable by the idea of administering fake electric shocks, what can we expect people will feel when they are engaging all sorts of fantasy violence and mayhem in virtual reality?”
“What is the effect of replacing social contact with virtual social contact over long periods of time?”
“How do we walk the line and leverage what is amazing about VR, without falling prey to the bad parts?”
From DSC: Will Amazon get into delivering education/degrees? Is it working on a next generation learning platform that could highly disrupt the world of higher education? Hmmm… time will tell.
But Amazon has a way of getting into entirely new industries. From its roots as an online bookseller, it has branched off into numerous other arenas. It has the infrastructure, talent, and the deep pockets to bring about the next generation learning platform that I’ve been tracking for years. It is only one of a handful of companies that could pull this type of endeavor off.
And now, we see articles like these:
Amazon Snags a Higher Ed Superstar — from insidehighered.com by Doug Lederman
Candace Thille, a pioneer in the science of learning, takes a leave from Stanford to help the ambitious retailer better train its workers, with implications that could extend far beyond the company.
Excerpt:
A major force in the higher education technology and learning space has quietly begun working with a major corporate force in — well, in almost everything else.
Candace Thille, a pioneer in learning science and open educational delivery, has taken a leave of absence from Stanford University for a position at Amazon, the massive (and getting bigger by the day) retailer.
Thille’s title, as confirmed by an Amazon spokeswoman: director of learning science and engineering. In that capacity, the spokeswoman said, Thille will work “with our Global Learning Development Team to scale and innovate workplace learning at Amazon.”
No further details were forthcoming, and Thille herself said she was “taking time away” from Stanford to work on a project she was “not really at liberty to discuss.”
Jeff Bezos’ Amazon empire—which recently dabbled in home security, opened artificial intelligence-powered grocery stores, and started planning a second headquarters (and manufactured a vicious national competition out of it)—has not been idle in 2018.
The e-commerce/retail/food/books/cloud-computing/etc company made another move this week that, while nowhere near as flashy as the above efforts, tells of curious things to come. Amazon has hired Candace Thille, a leader in learning science, cognitive science, and open education at Stanford University, to be “director of learning science and engineering.” A spokesperson told Inside Higher Ed that Thille will work “with our Global Learning Development Team to scale and innovate workplace learning at Amazon”; Thille herself said she is “not really at liberty to discuss” her new project.
What could Amazon want with a higher education expert? The company already has footholds in the learning market, running several educational resource platforms. But Thille is famous specifically for her data-driven work, conducted at Stanford and Carnegie Mellon University, on nontraditional ways of learning, teaching, and training—all of which are perfect, perhaps even necessary, for the education of employees.
From DSC: It could just be that Amazon is simply building its own corporate university and will stay focused on developing its own employees and its own corporate learning platform/offerings — and/or perhaps license their new platform to other corporations.
But from my perspective, Amazon continues to work on pieces of a powerful puzzle, one that could eventually involve providing learning experiences to lifelong learners:
Personal assistants
Voice recognition / Natural Language Processing (NLP)
The development of “skills” at an incredible pace
Personalized recommendation engines
Cloud computing and more
If Alexa were to get integrated into an AI-based platform for personalized learning — one that features up-to-date recommendation engines that can identify and personalize/point out the relevant critical needs in the workplace for learners — better look out, higher ed! Better look out if such a platform could interactively deliver (and assess) the bulk of the content that essentially does the heavy initial lifting of someone learning about a particular topic.
Amazon will be able to deliver a cloud-based platform, with cloud-based learner profiles and blockchain-based technologies, at a greatly reduced cost. Think about it. No physical footprints to build and maintain, no lawns to mow, no heating bills to pay, no coaches making $X million a year, etc. AI-driven recommendations for digital playlists. Links to the most in demand jobs — accompanied by job descriptions, required skills & qualifications, and courses/modules to take in order to master those jobs.
Such a solution would still need professors, instructional designers, multimedia specialists, copyright experts, etc., but they’ll be able to deliver up-to-date content at greatly reduced costs. That’s my bet. And that’s why I now call this potential development The New Amazon.com of Higher Education.
[Microsoft — with its purchase of LinkedIn (which had previously purchased Lynda.com) — is another such potential contender.]
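For a rough sense of what such a recommendation engine might do under the hood, here is a minimal content-based sketch. The skill vectors and course names are invented for illustration; a real system would learn these profiles from assessments and usage data rather than hard-coding them.

```python
from math import sqrt

# Toy skill-gap profile for one learner and a hypothetical course catalog,
# each scored over the same three skills: (data analysis, ML, communication).
learner_gap = (0.9, 0.7, 0.1)
catalog = {
    "Intro to Data Analysis":  (1.0, 0.2, 0.0),
    "Machine Learning Basics": (0.3, 1.0, 0.1),
    "Presentation Skills":     (0.0, 0.0, 1.0),
}

def cosine(a, b):
    """Cosine similarity between two skill vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Rank modules by how closely they match the learner's current skill gaps,
# producing the kind of "digital playlist" described above.
playlist = sorted(catalog, key=lambda m: cosine(learner_gap, catalog[m]), reverse=True)
print(playlist)  # ['Intro to Data Analysis', 'Machine Learning Basics', 'Presentation Skills']
```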
“Rise of the machines” — from the January 2018 edition of InAVate magazine
AI is generating lots of buzz in other verticals, but what can AV learn from those? Tim Kridel reports.
From DSC: Learning spaces are relevant as well in the discussion of AI and AV-related items.
A full-width frosted glass panel sits on the desk surface; above it, fixed in the ceiling, is a Wolfvision VZ-C12 visualiser. This means the teaching staff can write on the (wipe-clean) surface and the text appears directly on two 94-in screens behind them, using Christie short-throw 4,000-lumen laser projectors. When the lecturer is finished or has filled up the screen with text, the image can be saved on the intranet or via USB. Simply wipe with a cloth and start again. Not only is the technology inventive, but it allows the teaching staff to remain in face-to-face contact with the students at all times, instead of students having to stare at the back of the lecturer’s head whilst they write.
Television. TV. There’s an app for that. Finally! TV — that is, live shows such as the news, specials, documentaries (and reality shows, if you must) — is now just like Candy Crush and Facebook. TV apps (e.g., DirecTV Now) are available on all devices — smartphones, tablets, laptops, Chromebooks. Accessing streams upon streams of videos is, literally, now just a tap away.
…
Plain and simple: readily accessible video can be a really valuable resource for learners and learning.
…
Not everything that needs to be learned is on video. Instruction will need to balance the use of video with the use of printed materials. That balance, of course, needs to take in cost and accessibility.
…
Now for the 800 pound gorilla in the room: Of course, that TV app could be a huge distraction in the classroom. The TV app has just piled yet another classroom management challenge onto a teacher’s back.
…
That said, it is early days for TV as an app. For example, HD (High Definition) TV demands high bandwidth — and we can experience stuttering/skipping at times. But, when 5G comes around in 2020, just two years from now, POOF, that stuttering/skipping will disappear. “5G will be as much as 1,000 times faster than 4G.” Yes, POOF!
From DSC: Learning via apps is here to stay. “TV” as apps is here to stay. But what’s being described here is but one piece of the learning ecosystem that will be built over the next 5-15 years and will likely be revolutionary in its global impact on how people learn and grow. There will be opportunities for social-based learning, project-based learning, and more — with digital video being a component of the ecosystem, but one that is, and will be, insufficient on its own to move someone through all of the levels of Bloom’s Taxonomy.
I will continue to track this developing learning ecosystem, but voice-driven personal assistants are already here. Algorithm-based recommendations are already here. Real-time language translation is already here. The convergence of the telephone/computer/television continues to move forward. AI-based bots will only get better in the future. Tapping into streams of up-to-date content will continue to move forward. Blockchain will likely bring us into the age of cloud-based learner profiles. And on and on it goes.
We’ll still need teachers, professors, and trainers. But this vision WILL occur. It IS where things are heading. It’s only a matter of time.
Two University at Buffalo education researchers have teamed up to create an interactive classroom environment in which state-of-the-art virtual reality simulates difficult student behavior, a training method its designers compare to a “flight simulator for teachers.”
The new program, already earning endorsements from teachers and administrators in an inner-city Buffalo school, ties into State University of New York Chancellor Nancy L. Zimpher’s call for innovative teaching experiences and “immersive” clinical experiences and teacher preparation.
…
The training simulator Lamb compared to a teacher flight simulator uses an emerging computer technology known as virtual reality. Becoming more popular and accessible commercially, virtual reality immerses the subject in what Lamb calls “three-dimensional environments in such a way where that environment is continuous around them.” An important characteristic of the best virtual reality environments is a convincing and powerful representation of the imaginary setting.
TeachLive.org
TLE TeachLivE™ is a mixed-reality classroom with simulated students that provides teachers the opportunity to develop their pedagogical practice in a safe environment that doesn’t place real students at risk. This lab is currently the only one in the country using a mixed reality environment to prepare or retrain pre-service and in-service teachers. The use of TLE TeachLivE™ Lab has also been instrumental in developing transition skills for students with significant disabilities, providing immediate feedback through bug-in-ear technology to pre-service teachers, developing discrete trial skills in pre-service and in-service teachers, and preparing teachers in the use of STEM-related instructional strategies.
From DSC: It will be interesting to see all the “places” we will be able to go and interact within — all from the comfort of our living rooms! Next generation simulators should be something else for teaching/learning & training-related purposes!!!
The next gen learning platform will likely offer such virtual reality-enabled learning experiences, along with voice recognition/translation services and a slew of other technologies — such as AI, blockchain*, chatbots, data mining/analytics, web-based learner profiles, an online-based marketplace supported by the work of learning-based free agents, and others — running in the background. All of these elements will work to offer us personalized, up-to-date learning experiences — helping each of us stay relevant in the marketplace as well as simply enabling us to enjoy learning about new things.
But the potentially disruptive piece of all of this is that this next generation learning platform could create an Amazon.com of what we now refer to as “higher education.” It could just as easily serve as a platform for offering learning experiences for learners in K-12 as well as the corporate learning & development space.
In 2014, The King’s College in New York became the first university in the U.S. to accept Bitcoin for tuition payments, a move that seemed more of a PR stunt than the start of some new movement. Much has changed since then, including the value of Bitcoin itself, which skyrocketed to more than $19,000 earlier this month, catapulting cryptocurrencies into the mainstream.
A handful of other universities (and even preschools) now accept Bitcoin for tuition, but that’s hardly the extent of how blockchains and tokens are weaving their way into education: Educators and edtech entrepreneurs are now testing out everything from issuing degrees on the blockchain to paying people in cryptocurrency for their teaching.
Artificial Intelligence has leapt to the forefront of global discourse, garnering increased attention from practitioners, industry leaders, policymakers, and the general public. The diversity of opinions and debates gathered from news articles this year illustrates just how broadly AI is being investigated, studied, and applied. However, the field of AI is still evolving rapidly and even experts have a hard time understanding and tracking progress across the field.
Without the relevant data for reasoning about the state of AI technology, we are essentially “flying blind” in our conversations and decision-making related to AI.
Created and launched as a project of the One Hundred Year Study on AI at Stanford University (AI100), the AI Index is an open, not-for-profit project to track activity and progress in AI. It aims to facilitate an informed conversation about AI that is grounded in data. This is the inaugural annual report of the AI Index, and in this report we look at activity and progress in Artificial Intelligence through a range of perspectives. We aggregate data that exists freely on the web, contribute original data, and extract new metrics from combinations of data series.
All of the data used to generate this report will be openly available on the AI Index website at aiindex.org. Providing data, however, is just the beginning. To become truly useful, the AI Index needs support from a larger community. Ultimately, this report is a call for participation. You have the ability to provide data, analyze collected data, and make a wish list of what data you think needs to be tracked. Whether you have answers or questions to provide, we hope this report inspires you to reach out to the AI Index and become part of the effort to ground the conversation about AI.
What will that future be? When it comes to jobs, the tea leaves are indecipherable as analysts grapple with emerging technologies, new fields of work, and skills that have yet to be conceived. The only certainty is that jobs will change. Consider the conflicting predictions put forth by the analyst community:
According to the Organisation for Economic Co-operation and Development, only 5-10% of labor would be displaced by intelligent automation, and new job creation will offset losses. (Inserted comment from DSC: Hmmm. ONLY 5-10%!? What?! That’s huge! And don’t count on the majority of those people becoming experts in robotics, algorithms, big data, AI, etc.)
The World Economic Forum said in 2016 that 60% of children entering school today will work in jobs that do not yet exist.
47% of all American job functions could be automated within 20 years, according to a 2013 report from the Oxford Martin School.
In 2016, a KPMG study estimated that 100 million global knowledge workers could be affected by robotic process automation by 2025.
Despite the conflicting views, most analysts agree on one thing: big change is coming. Venture capitalist David Vandergrift has some words of advice: “Anyone not planning to retire in the next 20 years should be paying pretty close attention to what’s going on in the realm of AI. The supplanting (of jobs) will not happen overnight: the trend over the next couple of decades is going to be towards more and more automation.”
While analysts may not agree on the timing of AI’s development in the economy, many companies are already seeing its impact on key areas of talent and business strategy. AI is replacing jobs, changing traditional roles, applying pressure on knowledge workers, creating new fields of work, and raising the demand for certain skills.
The emphasis on learning is a key change from previous decades and rounds of automation. Advanced AI is, or will soon be, capable of displacing a very wide range of labor, far beyond the repetitive, low-skill functions traditionally thought to be at risk from automation. In many cases, the pressure on knowledge workers has already begun.
Regardless of industry, however, AI is a real challenge to today’s way of thinking about work, value, and talent scarcity. AI will expand and eventually force many human knowledge workers to reinvent their roles to address issues that machines cannot process. At the same time, AI will create a new demand for skills to guide its growth and development. These emerging areas of expertise will likely be technical or knowledge-intensive fields. In the near term, the competition for workers in these areas may change how companies focus their talent strategies.
2017 has turned out to be the year of voice. Amazon Alexa passed 10 million unit sales earlier in the year, and there are over 24,000 Skills in the store. With the addition of new devices like the Echo Show, Echo Plus, improved Echo Dot, and a new form factor for the Echo, there’s an option for everyone’s budget. Google is right there as well with the addition of the Google Home Mini to go along with the original Google Home. Apple’s efforts with Siri and HomePod, Samsung’s Bixby, and Microsoft’s Cortana round out the major tech firms’ efforts in this space.