2018 TECH TRENDS REPORT — from the Future Today Institute
Emerging technology trends that will influence business, government, education, media and society in the coming year.


The Future Today Institute’s 11th annual Tech Trends Report identifies 235 tantalizing advancements in emerging technologies—artificial intelligence, biotech, autonomous robots, green energy and space travel—that will begin to enter the mainstream and fundamentally disrupt business, geopolitics and everyday life around the world. Our annual report has garnered more than six million cumulative views, and this edition is our largest to date.

Helping organizations see change early and calculate the impact of new trends is why we publish our annual Emerging Tech Trends Report, which focuses on mid- to late-stage emerging technologies that are on a growth trajectory.

In this edition of the FTI Tech Trends Report, we’ve included several new features and sections:

  • a list and map of the world’s smartest cities
  • a calendar of events that will shape technology this year
  • detailed near-future scenarios for several of the technologies
  • a new framework to help organizations decide when to take action on trends
  • an interactive table of contents, which will allow you to more easily navigate the report from the bookmarks bar in your PDF reader



Ten questions to ask as your organization evaluates each trend:

01 How does this trend impact our industry and all of its parts?
02 How might global events — politics, climate change, economic shifts — impact this trend, and as a result, our organization?
03 What are the second, third, fourth, and fifth-order implications of this trend as it evolves, both in our organization and our industry?
04 What are the consequences if our organization fails to take action on this trend?
05 Does this trend signal emerging disruption to our traditional business practices and cherished beliefs?
06 Does this trend indicate a future disruption to the established roles and responsibilities within our organization? If so, how do we reverse-engineer that disruption and deal with it in the present day?
07 How are the organizations in adjacent spaces addressing this trend? What can we learn from their failures and best practices?
08 How will the wants, needs and expectations of our consumers/ constituents change as a result of this trend?
09 Where does this trend create potential new partners or collaborators for us?
10 How does this trend inspire us to think about the future of our organization?




How to Set Up a VR Pilot — from campustechnology.com by Dian Schaffhauser
As Washington & Lee University has found, there is no best approach for introducing virtual reality into your classrooms — just stages of faculty commitment.


The work at the IQ Center offers a model for how other institutions might want to approach their own VR experimentation. The secret to success, suggested IQ Center Coordinator David Pfaff, “is to not be afraid to develop your own stuff” — in other words, diving right in. But first, there’s dipping a toe.

The IQ Center is a collaborative workspace housed in the science building but providing services to “departments all over campus,” said Pfaff. The facilities include three labs: one loaded with high-performance workstations, another decked out for 3D visualization and a third packed with physical/mechanical equipment, including 3D printers, a laser cutter and a motion-capture system.




The Future of Language Learning: Augmented Reality vs Virtual Reality — from medium.com by Denis Hurley


Here, I would like to stick to the challenges and opportunities presented by augmented reality and virtual reality for language learning.

While the challenge is a significant one, I am more optimistic than most that wearable AR will be available and popular soon. We don’t yet know how Snap Spectacles will evolve, and, of course, there’s always Apple.

I suspect we will see a flurry of new VR apps from language learning startups soon, especially from Duolingo and in combination with their AI chat bots. I am curious if users will quickly abandon the isolating experiences or become dedicated users.



Bose has a plan to make AR glasses — from cnet.com by David Carnoy
Best known for its speakers and headphones, the company has created a $50 million development fund to back a new AR platform that’s all about audio.


“Unlike other augmented reality products and platforms, Bose AR doesn’t change what you see, but knows what you’re looking at — without an integrated lens or phone camera,” Bose said. “And rather than superimposing visual objects on the real world, Bose AR adds an audible layer of information and experiences, making every day better, easier, more meaningful, and more productive.”

The secret sauce seems to be the tiny, “wafer-thin” acoustics package developed for the platform. Bose said it represents the future of mobile micro-sound and features “jaw-dropping power and clarity.”

Bose adds the technology can “be built into headphones, eyewear, helmets and more and it allows simple head gestures, voice, or a tap on the wearable to control content.”


Bose is making AR glasses focused on audio, not visuals

Here are some examples Bose gave for how it might be used:

  • For travel, Bose AR could simulate historic events at landmarks as you view them — “so voices and horses are heard charging in from your left, then passing right in front of you before riding off in the direction of their original route, fading as they go.” You could hear a statue make a famous speech when you approach it. Or get told which way to turn towards your departure gate while checking in at the airport.
  • Bose AR could translate a sign you’re reading. Or tell you the word or phrase for what you’re looking at in any language. Or explain the story behind the painting you’ve just approached.
  • With gesture controls, you could choose or change your music with simple head nods indicating yes, no, or next (Bragi headphones already do this).
  • Bose AR would add useful information based on where you look, like the forecast when you look up or information about restaurants on the street you look down.



The 10 Best VR Apps for Classrooms Using Merge VR’s New Merge Cube — from edsurge.com


Google Lens arrives on iOS — from techcrunch.com by Sarah Perez


On the heels of last week’s rollout on Android, Google’s new AI-powered technology, Google Lens, is now arriving on iOS. The feature is available within the Google Photos iOS application, where it can do things like identify objects, buildings, and landmarks, and tell you more information about them, including helpful details like their phone number, address, or open hours. It can also identify things like books, paintings in museums, plants, and animals. In the case of some objects, it can also take actions.

For example, you can add an event to your calendar from a photo of a flyer or event billboard, or you can snap a photo of a business card to store the person’s phone number or address to your Contacts.


The eventual goal is to allow smartphone cameras to understand what they’re seeing across any type of photo, and then help you take action on that information if need be – whether that’s calling a business, saving contact information, or just learning about the world on the other side of the camera.
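Google hasn’t published how Lens works under the hood, but the publicly documented Cloud Vision API illustrates the same recognize-then-act pattern described above. The sketch below is a minimal, hypothetical example (not Lens itself): it OCRs a photo of an event flyer and tries to pull a date out of the recognized text. The file name, the naive date parsing, and the calendar step are assumptions for illustration.

```python
# Minimal sketch (not Google Lens itself): OCR a flyer photo with the
# Cloud Vision API, then look for a date that could become a calendar event.
# Requires the google-cloud-vision and python-dateutil packages plus Cloud
# Vision credentials; "event_flyer.jpg" is a hypothetical input file.
from google.cloud import vision
from dateutil import parser as dateparser

client = vision.ImageAnnotatorClient()

with open("event_flyer.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.text_detection(image=image)
annotations = response.text_annotations
flyer_text = annotations[0].description if annotations else ""

# Naive date extraction: try to parse each recognized line and keep the first hit.
event_date = None
for line in flyer_text.splitlines():
    try:
        event_date = dateparser.parse(line, fuzzy=True)
        break
    except (ValueError, OverflowError):
        continue

print("Recognized text:", flyer_text[:200])
print("Candidate event date:", event_date)
```

A real "add to calendar" flow would hand that candidate date to a calendar API, but the recognition step above is the part Lens-style features automate.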



15 Top Augmented Reality (AR) Apps Changing Education — from vudream.com by Steven Wesley




CNN VR App Brings News to Oculus Rift — from vrscout.com by Jonathan Nafarrete





Virtual reality technology enters a Chinese courtroom — from supchina.com by Jiayun Feng


The introduction of VR technology is part of a “courtroom evidence visualization system” developed by the local court. The system also includes a newly developed computer program that allows lawyers to present evidence with greater quality and efficiency, replacing the traditional PowerPoint slideshow.

It is reported that the system will soon be implemented in courtrooms across the city of Beijing.




Watch Waymo’s Virtual-Reality View of the World — from spectrum.ieee.org by Philip Ross

From DSC:
This is mind-blowing. Now I see why Nvidia’s products/services are so valuable.



Along these same lines, also see this clip and/or this article entitled “This is why AR and Autonomous Driving are the Future of Cars”:




The Legal Hazards of Virtual Reality and Augmented Reality Apps — from spectrum.ieee.org by Tam Harbert
Liability and intellectual property issues are just two areas developers need to know about


As virtual- and augmented-reality technologies mature, legal questions are emerging that could trip up VR and AR developers. One of the first lawyers to explore these questions is Robyn Chatwood, of the international law firm Dentons. “VR and AR are areas where the law is just not keeping up with [technology] developments,” she says. IEEE Spectrum contributing editor Tam Harbert talked with Chatwood about the legal challenges.




This VR Tool Could Make Kids A Lot Less Scared Of Medical Procedures — from fastcompany.com by Daniel Terdiman
The new app creates a personalized, explorable 3D model of a kid’s own body that makes it much easier for them to understand what’s going on inside.


A new virtual reality app that’s designed to help kids suffering from conditions like Crohn’s disease understand their maladies immerses those children in a cartoon-like virtual reality tour through their body.

Called HealthVoyager, the tool, a collaboration between Boston Children’s Hospital and the health-tech company Klick Health, is being launched today at an event featuring former First Lady Michelle Obama.

A lot of kids are confused by doctors’ intricate explanations of complex procedures like a colonoscopy, and they and their families can feel much more engaged and satisfied if they really understand what’s going on. But that has been hard to do in a way that really works and doesn’t get bogged down in meaningless jargon.



Augmented Reality in Education — from invisible.toys


Star Chart -- AR and astronomy



The state of virtual reality — from furthermore.equinox.com by Rachael Schultz
How the latest advancements are optimizing performance, recovery, and injury prevention


Virtual reality is increasingly used to enhance everything from museum exhibits to fitness classes. Elite athletes are using VR goggles to refine their skills, sports rehabilitation clinics are incorporating it into recovery regimens, and others are using it to improve focus and memory.

Here, some of the most exciting things happening with virtual reality, as well as what’s to come.



Augmented Reality takes 3-D printing to next level — from rtoz.org


Cornell researchers are taking 3-D printing and 3-D modeling to a new level by using augmented reality (AR) to allow designers to design in physical space while a robotic arm rapidly prints the work. To use the Robotic Modeling Assistant (RoMA), a designer wears an AR headset with hand controllers. As soon as a design feature is completed, the robotic arm prints the new feature.




From DSC:
How might the types of technologies behind Kazendi’s Holomeeting be used for building/enhancing learning spaces?





AR and Blockchain: A Match Made in The AR Cloud — from medium.com by Ori Inbar


In my introduction to the AR Cloud I argued that in order to reach mass adoption, AR experiences need to persist in the real world across space, time, and devices.

To achieve that, we will need a persistent realtime spatial map of the world that enables sharing and collaboration of AR Experiences among many users.

And according to AR industry insiders, it’s poised to become:

“the most important software infrastructure in computing”

aka: The AR Cloud.





From DSC:
After seeing the article entitled, “Scientists Are Turning Alexa into an Automated Lab Helper,” I began to wonder…might Alexa be a tool to periodically schedule & provide practice tests & distributed practice on content? In the future, will there be “learning bots” that a learner can employ to do such self-testing and/or distributed practice?



From page 45 of the PDF available here:


Might Alexa be a tool to periodically schedule/provide practice tests & distributed practice on content?




Scientists Are Turning Alexa into an Automated Lab Helper — from technologyreview.com by Jamie Condliffe
Amazon’s voice-activated assistant follows a rich tradition of researchers using consumer tech in unintended ways to further their work.


Alexa, what’s the next step in my titration?

Probably not the first question you ask your smart assistant in the morning, but potentially the kind of query that scientists may soon be leveling at Amazon’s AI helper. Chemical & Engineering News reports that software developer James Rhodes—whose wife, DeLacy Rhodes, is a microbiologist—has created a skill for Alexa called Helix that lends a helping hand around the laboratory.

It makes sense. While most people might ask Alexa to check the news headlines, play music, or set a timer because our hands are a mess from cooking, scientists could look up melting points, pose simple calculations, or ask for an experimental procedure to be read aloud while their hands are gloved and in use.

For now, Helix is still a proof-of-concept. But you can sign up to try an early working version, and Rhodes has plans to extend its abilities…
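The article doesn’t describe Helix’s internals, but an Alexa skill of this kind is essentially a set of intent handlers that map spoken requests to spoken responses. Below is a minimal, hypothetical sketch using the official ask-sdk-core Python library; the intent name, the hard-coded protocol steps, and the session handling are assumptions for illustration, not how Helix actually works.

```python
# Hypothetical Alexa skill sketch: read the next step of a lab protocol aloud
# each time the user asks for the next step. Uses the ask-sdk-core package;
# the intent name and the protocol steps are made up for illustration.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

PROTOCOL_STEPS = [
    "Rinse the burette with the titrant solution.",
    "Fill the burette and record the initial volume.",
    "Add titrant slowly until the indicator changes color.",
]

class NextStepHandler(AbstractRequestHandler):
    """Speak the next protocol step, tracking progress in session attributes."""

    def can_handle(self, handler_input):
        return is_intent_name("NextStepIntent")(handler_input)

    def handle(self, handler_input):
        session = handler_input.attributes_manager.session_attributes
        index = session.get("step_index", 0)
        if index < len(PROTOCOL_STEPS):
            speech = PROTOCOL_STEPS[index]
            session["step_index"] = index + 1
        else:
            speech = "You have completed all the steps."
        return (
            handler_input.response_builder
            .speak(speech)
            .ask("Say next step when you are ready to continue.")
            .response
        )

sb = SkillBuilder()
sb.add_request_handler(NextStepHandler())
# Entry point for an AWS Lambda deployment of the skill.
lambda_handler = sb.lambda_handler()
```

A production skill would also need launch and fallback handlers and an interaction model defining the sample utterances, but the handler pattern above is the core of any voice-driven lab or study assistant, including the practice-test idea raised earlier.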






Tech companies should stop pretending AI won’t destroy jobs — from technologyreview.com / MIT Technology Review by Kai-Fu Lee
No matter what anyone tells you, we’re not ready for the massive societal upheavals on the way.

Excerpt (emphasis DSC):

The rise of China as an AI superpower isn’t a big deal just for China. The competition between the US and China has sparked intense advances in AI that will be impossible to stop anywhere. The change will be massive, and not all of it good. Inequality will widen. As my Uber driver in Cambridge has already intuited, AI will displace a large number of jobs, which will cause social discontent. Consider the progress of Google DeepMind’s AlphaGo software, which beat the best human players of the board game Go in early 2016. It was subsequently bested by AlphaGo Zero, introduced in 2017, which learned by playing games against itself and within 40 days was superior to all the earlier versions. Now imagine those improvements transferring to areas like customer service, telemarketing, assembly lines, reception desks, truck driving, and other routine blue-collar and white-collar work. It will soon be obvious that half of our job tasks can be done better at almost no cost by AI and robots. This will be the fastest transition humankind has experienced, and we’re not ready for it.

And finally, there are those who deny that AI has any downside at all—which is the position taken by many of the largest AI companies. It’s unfortunate that AI experts aren’t trying to solve the problem. What’s worse, and unbelievably selfish, is that they actually refuse to acknowledge the problem exists in the first place.

These changes are coming, and we need to tell the truth and the whole truth. We need to find the jobs that AI can’t do and train people to do them. We need to reinvent education. These will be the best of times and the worst of times. If we act rationally and quickly, we can bask in what’s best rather than wallow in what’s worst.


From DSC:
If a business has a choice between hiring a human being and having the job done by a piece of software and/or by a robot, which do you think they’ll go with? My guess? It’s all about the money — whichever/whoever will be less expensive will get the job.

However, that way of thinking may cause enormous social unrest if the software and robots leave human beings in the (job search) dust. Do we, as a society, win with this way of thinking? To me, it’s capitalism gone astray. We aren’t caring enough for our fellow members of the human race, people who have to put bread and butter on their tables. People who have to support their families. People who want to make solid contributions to society and/or to pursue their vocations/callings — to have/find purpose in their lives.


Others think we’ll be saved by a universal basic income. “Take the extra money made by AI and distribute it to the people who lost their jobs,” they say. “This additional income will help people find their new path, and replace other types of social welfare.” But UBI doesn’t address people’s loss of dignity or meet their need to feel useful. It’s just a convenient way for a beneficiary of the AI revolution to sit back and do nothing.



To Fight Fatal Infections, Hospitals May Turn to Algorithms — from scientificamerican.com by John McQuaid
Machine learning could speed up diagnoses and improve accuracy


The CDI algorithm—based on a form of artificial intelligence called machine learning—is at the leading edge of a technological wave starting to hit the U.S. health care industry. After years of experimentation, machine learning’s predictive powers are well-established, and it is poised to move from labs to broad real-world applications, said Zeeshan Syed, who directs Stanford University’s Clinical Inference and Algorithms Program.

“The implications of machine learning are profound,” Syed said. “Yet it also promises to be an unpredictable, disruptive force—likely to alter the way medical decisions are made and put some people out of work.”



Lawyer-Bots Are Shaking Up Jobs — from technologyreview.com by Erin Winick


Meticulous research, deep study of case law, and intricate argument-building—lawyers have used similar methods to ply their trade for hundreds of years. But they’d better watch out, because artificial intelligence is moving in on the field.

As of 2016, there were over 1,300,000 licensed lawyers and 200,000 paralegals in the U.S. Consultancy group McKinsey estimates that 22 percent of a lawyer’s job and 35 percent of a law clerk’s job can be automated, which means that while humanity won’t be completely overtaken, major business and career adjustments aren’t far off (see “Is Technology About to Decimate White-Collar Work?”). In some cases, they’re already here.


“If I was the parent of a law student, I would be concerned a bit,” says Todd Solomon, a partner at the law firm McDermott Will & Emery, based in Chicago. “There are fewer opportunities for young lawyers to get trained, and that’s the case outside of AI already. But if you add AI onto that, there are ways that is advancement, and there are ways it is hurting us as well.”


So far, AI-powered document discovery tools have had the biggest impact on the field. By training on millions of existing documents, case files, and legal briefs, a machine-learning algorithm can learn to flag the appropriate sources a lawyer needs to craft a case, often more successfully than humans. For example, JPMorgan announced earlier this year that it is using software called Contract Intelligence, or COIN, which can in seconds perform document review tasks that took legal aides 360,000 hours.

People fresh out of law school won’t be spared the impact of automation either. Document-based grunt work is typically a key training ground for first-year associate lawyers, and AI-based products are already stepping in. CaseMine, a legal technology company based in India, builds on document discovery software with what it calls its “virtual associate,” CaseIQ. The system takes an uploaded brief and suggests changes to make it more authoritative, while providing additional documents that can strengthen a lawyer’s arguments.
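The document-discovery tools mentioned above are proprietary, but the core pattern (learn from documents reviewers have already labeled, then rank the rest by predicted relevance) can be sketched with standard open-source libraries. Below is a toy, hypothetical example using scikit-learn; real e-discovery systems train on far larger corpora with much richer features.

```python
# Toy sketch of supervised document review ("predictive coding"), not any
# vendor's actual product: learn from documents reviewers labeled relevant
# (1) or not (0), then rank unreviewed documents by predicted relevance.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled training documents from an earlier review pass.
train_docs = [
    "Notice of breach of the supply agreement dated March 2015.",
    "Team lunch menu and parking reminders for Friday.",
    "Indemnification obligations under section 7 of the master contract.",
    "Holiday party photos attached, see you there!",
]
train_labels = [1, 0, 1, 0]  # 1 = relevant to the matter, 0 = not relevant

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), stop_words="english"),
    LogisticRegression(),
)
model.fit(train_docs, train_labels)

# Score unreviewed documents; reviewers read the highest-scoring ones first.
new_docs = [
    "The vendor failed to deliver the goods specified in the agreement.",
    "Reminder: submit your expense reports by end of month.",
]
scores = model.predict_proba(new_docs)[:, 1]
for doc, score in sorted(zip(new_docs, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```

In practice, the payoff comes from iterating: reviewers correct the model’s top-ranked mistakes and the system is retrained, which is why these tools reduce, rather than eliminate, the hours of human review.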



Lessons From Artificial Intelligence Pioneers — from gartner.com by Christy Pettey

CIOs are struggling to accelerate deployment of artificial intelligence (AI). A recent Gartner survey of global CIOs found that only 4% of respondents had deployed AI. However, the survey also found that one-fifth of the CIOs are already piloting or planning to pilot AI in the short term.

Such ambition puts these leaders in a challenging position. AI efforts are already stressing staff, skills, and the readiness of in-house and third-party AI products and services. Without effective strategic plans for AI, organizations risk wasting money, falling short in performance and falling behind their business rivals.

Pursue small-scale plans likely to deliver small-scale payoffs that will offer lessons for larger implementations

“AI is just starting to become useful to organizations but many will find that AI faces the usual obstacles to progress of any unproven and unfamiliar technology,” says Whit Andrews, vice president and distinguished analyst at Gartner. “However, early AI projects offer valuable lessons and perspectives for enterprise architecture and technology innovation leaders embarking on pilots and more formal AI efforts.”

So what lessons can we learn from these early AI pioneers?



Why Artificial Intelligence Researchers Should Be More Paranoid — from wired.com by Tom Simonite


What to do about that? The report’s main recommendation is that people and companies developing AI technology discuss safety and security more actively and openly—including with policymakers. It also asks AI researchers to adopt a more paranoid mindset and consider how enemies or attackers might repurpose their technologies before releasing them.



How to Prepare College Graduates for an AI World — from wsj.com
Northeastern University President Joseph Aoun says schools need to change their focus, quickly


WSJ: What about adults who are already in the workforce?

DR. AOUN: Society has to provide ways, and higher education has to provide ways, for people to re-educate themselves, reskill themselves or upskill themselves.

That is the part that I see that higher education has not embraced. That’s where there is an enormous opportunity. We look at lifelong learning in higher education as an ancillary operation, as a second-class operation in many cases. We dabble with it, we try to make money out of it, but we don’t embrace it as part of our core mission.



Inside Amazon’s Artificial Intelligence Flywheel — from wired.com by Steven Levy
How deep learning came to power Alexa, Amazon Web Services, and nearly every other division of the company.


Amazon loves to use the word flywheel to describe how various parts of its massive business work as a single perpetual motion machine. It now has a powerful AI flywheel, where machine-learning innovations in one part of the company fuel the efforts of other teams, who in turn can build products or offer services to affect other groups, or even the company at large. Offering its machine-learning platforms to outsiders as a paid service makes the effort itself profitable—and in certain cases scoops up yet more data to level up the technology even more.





The Implications of Gartner’s Top 10 Tech Trends of 2018 for Education — from gettingsmart.com by Jim Goodell, Liz Glowa and Brandt Redd


In October, Gartner released a report with predictions about the top tech trends for business in 2018. Gartner uses the term the intelligent digital mesh to describe “the entwining of people, devices, content and services” that will create the “foundation for the next generation of digital business models and ecosystems.” These trends are classified within three categories.

  • Intelligent: How AI is seeping into virtually every technology and with a defined, well-scoped focus can allow more dynamic, flexible and potentially autonomous systems.
  • Digital: Blending the virtual and real worlds to create an immersive digitally enhanced and connected environment.
  • Mesh: The connections between an expanding set of people, business, devices, content and services to deliver digital outcomes.

What are the implications of these trends for education?
Education often falls behind the business world in realizing the potential of new technologies. There are, however, a few bright spots where the timing might be right for the tech trends in the business world to have a positive impact in education sooner rather than later.

The top 10 trends according to Gartner are analyzed below for their implications for education…

1) Artificial Intelligence Foundation
2) Intelligent Apps and Analytics
3) Intelligent Things




From DSC:
Will Amazon get into delivering education/degrees? Is it working on a next generation learning platform that could highly disrupt the world of higher education? Hmmm…time will tell.

But Amazon has a way of getting into entirely new industries. From its roots as an online bookseller, it has branched off into numerous other arenas. It has the infrastructure, talent, and the deep pockets to bring about the next generation learning platform that I’ve been tracking for years. It is only one of a handful of companies that could pull this type of endeavor off.

And now, we see articles like these:

Amazon Snags a Higher Ed Superstar — from insidehighered.com by Doug Lederman
Candace Thille, a pioneer in the science of learning, takes a leave from Stanford to help the ambitious retailer better train its workers, with implications that could extend far beyond the company.


A major force in the higher education technology and learning space has quietly begun working with a major corporate force in — well, in almost everything else.

Candace Thille, a pioneer in learning science and open educational delivery, has taken a leave of absence from Stanford University for a position at Amazon, the massive (and getting bigger by the day) retailer.

Thille’s title, as confirmed by an Amazon spokeswoman: director of learning science and engineering. In that capacity, the spokeswoman said, Thille will work “with our Global Learning Development Team to scale and innovate workplace learning at Amazon.”

No further details were forthcoming, and Thille herself said she was “taking time away” from Stanford to work on a project she was “not really at liberty to discuss.”


Amazon is quietly becoming its own university — from qz.com by Amy Wang


Jeff Bezos’ Amazon empire—which recently dabbled in home security, opened artificial intelligence-powered grocery stores, and started planning a second headquarters (and manufactured a vicious national competition out of it)—has not been idle in 2018.

The e-commerce/retail/food/books/cloud-computing/etc company made another move this week that, while nowhere near as flashy as the above efforts, tells of curious things to come. Amazon has hired Candace Thille, a leader in learning science, cognitive science, and open education at Stanford University, to be “director of learning science and engineering.” A spokesperson told Inside Higher Ed that Thille will work “with our Global Learning Development Team to scale and innovate workplace learning at Amazon”; Thille herself said she is “not really at liberty to discuss” her new project.

What could Amazon want with a higher education expert? The company already has footholds in the learning market, running several educational resource platforms. But Thille is famous specifically for her data-driven work, conducted at Stanford and Carnegie Mellon University, on nontraditional ways of learning, teaching, and training—all of which are perfect, perhaps even necessary, for the education of employees.


From DSC:
It could just be that Amazon is simply building its own corporate university and will stay focused on developing its own employees and its own corporate learning platform/offerings — and/or perhaps license its new platform to other corporations.

But from my perspective, Amazon continues to work on pieces of a powerful puzzle, one that could eventually involve providing learning experiences to lifelong learners:

  • Personal assistants
  • Voice recognition / Natural Language Processing (NLP)
  • The development of “skills” at an incredible pace
  • Personalized recommendation engines
  • Cloud computing and more

If Alexa were to get integrated into an AI-based platform for personalized learning — one that features up-to-date recommendation engines that can identify and personalize/point out the relevant critical needs in the workplace for learners — better look out higher ed! Better look out if such a platform could interactively deliver (and assess) the bulk of the content that essentially does the heavy initial lifting of someone learning about a particular topic.

Amazon will be able to deliver a cloud-based platform, with cloud-based learner profiles and blockchain-based technologies, at a greatly reduced cost. Think about it. No physical footprints to build and maintain, no lawns to mow, no heating bills to pay, no coaches making $X million a year, etc. AI-driven recommendations for digital playlists. Links to the most in-demand jobs — accompanied by job descriptions, required skills & qualifications, and courses/modules to take in order to master those jobs.

Such a solution would still need professors, instructional designers, multimedia specialists, copyright experts, etc., but they’ll be able to deliver up-to-date content at greatly reduced costs. That’s my bet. And that’s why I now call this potential development The New Amazon.com of Higher Education.

[Microsoft — with its purchase of LinkedIn (which had previously purchased Lynda.com) — is another such potential contender.]




Google launches enterprise-grade G Suite for Education — from venturebeat.com by Blair Hanley Frank


Google announced today that universities and other large educational institutions will have a new version of its G Suite productivity service tailored just for them. Called G Suite Enterprise for Education, the service will first be a roughly straight port of the tech giant’s offering for large businesses, but will later receive features that are tailored specifically for schools.

With the new offering, organizations will get features like the ability to hold video calls in Hangouts Meet with up to 50 participants, a security center for managing potential threats, and advanced mobile device management.

Google’s cloud productivity suite is already popular among schools large and small. This offering will likely make it even more appealing to IT administrators at the largest organizations, who need more advanced features.




The next era of human|machine partnerships — from delltechnologies.com by the Institute for the Future and Dell Technologies


From DSC:
Though this outlook report paints a rosier picture than I think we will actually encounter, it contains several interesting perspectives. We need to be peering out into the future to see which trends and scenarios are most likely to occur…then plan accordingly. With that in mind, I’ve captured a few of its thoughts below.


At its inception, very few people anticipated the pace at which the internet would spread across the world, or the impact it would have in remaking business and culture. And yet, as journalist Oliver Burkeman wrote in 2009, “Without most of us quite noticing when it happened, the web went from being a strange new curiosity to a background condition of everyday life.”


In Dell’s Digital Transformation Index study, with 4,000 senior decision makers across the world, 45% say they are concerned about becoming obsolete in just 3-5 years, nearly half don’t know what their industry will look like in just three years’ time, and 73% believe they need to be more ‘digital’ to succeed in the future.

With this in mind, we set out with 20 experts to explore how various social and technological drivers will influence the next decade and, specifically, how emerging technologies will recast our society and the way we conduct business by the year 2030. As a result, this outlook report concludes that, over the next decade, emerging technologies will underpin the formation of new human-machine partnerships that make the most of their respective complementary strengths. These partnerships will enhance daily activities around the coordination of resources and in-the-moment learning, which will reset expectations for work and require corporate structures to adapt to the expanding capabilities of human-machine teams.

For the purpose of this study, IFTF explored the impact that Robotics, Artificial Intelligence (AI) and Machine Learning, Virtual Reality (VR) and Augmented Reality (AR), and Cloud Computing, will have on society by 2030. These technologies, enabled by significant advances in software, will underpin the formation of new human-machine partnerships.

On-demand access to AR learning resources will reset expectations and practices around workplace training and retraining, and real-time decision-making will be bolstered by easy access to information flows. VR-enabled simulation will immerse people in alternative scenarios, increasing empathy for others and preparation for future situations. It will empower the internet of experience by blending physical and virtual worlds.


Already, the number of digital platforms that are being used to orchestrate either physical or human resources has surpassed 1,800. They are not only connecting people in need of a ride with drivers, or vacationers with a place to stay, but job searchers with work, and vulnerable populations with critical services. The popularity of the services they offer is introducing society to the capabilities of coordinating technologies and resetting expectations about the ownership of fixed assets.


Human-machine partnerships won’t spell the end of human jobs, but work will be vastly different.

The U.S. Bureau of Labor Statistics says that today’s learners will have 8 to 10 jobs by the time they are 38. Many of them will join the workforce of freelancers. Already 50 million strong, freelancers are projected to make up 50% of the workforce in the United States by 2020. Most freelancers will not be able to rely on traditional HR departments, onboarding processes, and many of the other affordances of institutional work.


By 2030, in-the-moment learning will become the modus operandi, and the ability to gain new knowledge will be valued higher than the knowledge people already have.



The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV



