Everyday Media Literacy: An Analog Guide for Your Digital Life, 1st Edition — from routledge.com by Sue Ellen Christian
Description:
In this graphic guide to media literacy, award-winning educator Sue Ellen Christian offers students an accessible, informed and lively look at how they can consume and create media intentionally and critically.
The straight-talking textbook offers timely examples and relevant activities to equip students with the skills and knowledge they need to assess all media, including news and information. Through discussion prompts, writing exercises, key terms, online links and even origami, readers are provided with a framework from which to critically consume and create media in their everyday lives. Chapters examine news literacy, online activism, digital inequality, privacy, social media and identity, global media corporations and beyond, giving readers a nuanced understanding of the key concepts and concerns at the core of media literacy.
Concise, creative and curated, this book highlights the cultural, political and economic dynamics of media in our contemporary society, and how consumers can mindfully navigate their daily media use. Everyday Media Literacy is perfect for students (and educators) of media literacy, journalism, education and media effects looking to build their understanding in an engaging way.
The legal profession is in the early stages of a fundamental transformation driven by an entirely new breed of intelligent technologies, and it is a perilous place for the profession to be.
If the needs of the law guide the ways in which the new technologies are put into use, they can greatly advance the cause of justice. If not, the result may well be profits for those who design and sell the technologies, but a legal system that is significantly less just.
…
We are entering an era of technology that goes well beyond the web. The law is seeing the emergence of systems based on analytics and cognitive computing in areas that until now have been largely immune to the impact of technology. These systems can predict, advise, argue and write, and they are entering the world of legal reasoning and decision making.
Unfortunately, while systems built on the foundation of historical data and predictive analytics are powerful, they are also prone to bias and can provide advice that is based on incomplete or imbalanced data.
…
We are not arguing against the development of such technologies. The key question is who will guide them. The transformation of the field is in its early stages. There is still opportunity to ensure that the best intentions of the law are built into these powerful new systems so that they augment and aid rather than simply replace.
From DSC: This is where we need more collaborations between those who know the law and those who know how to program, as well as other types of technologists.
Scholars around the world share their latest research findings with a decidedly low-tech ritual: printing a 48-inch by 36-inch poster densely packed with charts, graphs and blocks of text describing their research hypothesis, methods and findings. Then they stand with the poster in an exhibit hall for an hour, surrounded by rows of other researchers presenting similar posters, while hundreds of colleagues from around the world walk by trying to skim the displays.
…
Not only does the exercise deflate the morale of the scholars sharing posters; the ritual is also incredibly inefficient at communicating science, Morrison argues.
…
Morrison says he has a solution: A better design for those posters, plus a dash of tech.
To make up for all the nuance and detail lost in this approach, the template includes a QR code that viewers can scan to get to the full research paper.
From DSC: Wouldn't it be great if more journal articles did the same thing? That is, give us the key findings, conclusions (with some backbone to them), and recommendations right away! Abstracts don't go far enough, and often scholars/specialists are talking amongst themselves…not to the world. They could have a far greater reach/impact with this kind of approach.
(The QR code doesn’t make as much sense if one is already reading the full journal article…but the other items make a great deal of sense!)
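If a poster team (or a journal) wants to try this, generating the QR code itself is trivial. Here is a minimal sketch using the open-source Python qrcode package; the paper URL and output filename are placeholders.

```python
# Minimal sketch: turn a paper's URL into a poster-ready QR code image.
# Requires the open-source "qrcode" package (pip install qrcode[pil]).
import qrcode

paper_url = "https://example.org/full-paper.pdf"  # placeholder URL for the full paper

img = qrcode.make(paper_url)   # returns a PIL image of the QR code
img.save("poster_qr.png")      # drop this PNG into the poster or article layout
```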
American students have changed their majors — from bloomberg.com by Justin Fox
Health professions are in, education and the humanities are out. Here are some reasons for the shift.
The digital era has utterly changed the way readers interact with the news.
Traditional news outlets struggle to remain relevant as the media sector’s influence is refocused online.
Journalism in the U.S. faces a number of challenges that blockchain technology has the potential to address and possibly solve — if the technology actually can achieve what it promises.
2018 itself has seen journalism move into uncharted waters as the industry comes up against issues stemming from the continued digital migration of news organizations.
Because blockchain functions as a platform to facilitate peer-to-peer transactions, a few news organizations believe the technology will finally enable micropayments to be widely adopted in the U.S.
At MIT Technology Review’s EmTech conference, Fang outlined recent work across academia that applies AI to protect critical national infrastructure, reduce homelessness, and even prevent suicides.
Andrew Moore, the new chief of Google Cloud AI, co-chairs a task force on AI and national security with deep defense sector ties.
Moore leads the task force with Robert Work, the man who reportedly helped to create Project Maven.
Moore has given various talks about the role of AI in defense, once noting that it was now possible to deploy drones capable of surveilling "pretty much the whole world."
One former Googler told Business Insider that the hiring of Moore is a “punch in the face” to those employees.
The AI revolution is equally significant, and humanity must not make the same mistake again. It is imperative to address new questions about the nature of post-AI societies and the values that should underpin the design, regulation, and use of AI in these societies. This is why initiatives like the abovementioned AI4People and IEEE projects, the European Union (EU) strategy for AI, the EU Declaration of Cooperation on Artificial Intelligence, and the Partnership on Artificial Intelligence to Benefit People and Society are so important (see the supplementary materials for suggested further reading). A coordinated effort by civil society, politics, business, and academia will help to identify and pursue the best strategies to make AI a force for good and unlock its potential to foster human flourishing while respecting human dignity.
Ethical regulation of the design and use of AI is a complex but necessary task. The alternative may lead to devaluation of individual rights and social values, rejection of AI-based innovation, and ultimately a missed opportunity to use AI to improve individual wellbeing and social welfare.
Robot wars: How artificial intelligence will define the future of news — from ethicaljournalismnetwork.org by James Ball
Excerpt:
There are two paths ahead in the future of journalism, and both of them are shaped by artificial intelligence.
The first is a future in which newsrooms and their reporters are robust: Thanks to the use of artificial intelligence, high-quality reporting has been enhanced. Not only do AI scripts manage the writing of simple day-to-day articles such as companies’ quarterly earnings updates, they also monitor and track masses of data for outliers, flagging these to human reporters to investigate.
Beyond business journalism, comprehensive sports stats AIs keep key figures in the hands of sports journalists, letting them focus on the games and the stories around them. The automated future has worked.
The alternative is very different. In this world, AI reporters have replaced their human counterparts and left accountability journalism hollowed out. Facing financial pressure, news organizations embraced AI to handle much of their day-to-day reporting, first for their financial and sports sections, then bringing in more advanced scripts capable of reshaping wire copy to suit their outlet's political agenda. A few banner hires remain, but there is virtually no career path for those who would hope to replace them, and stories that can't be tackled by AI are generally missed.
That’s all great, but even if an AI is amazing, it will still fail sometimes. When the mistake is caused by a machine or an algorithm instead of a human, who is to blame?
This is not an abstract discussion. Defining both ethical and legal responsibility in the world of medical care is vital for building patients’ trust in the profession and its standards. It’s also essential in determining how to compensate individuals who fall victim to medical errors, and ensuring high-quality care. “Liability is supposed to discourage people from doing things they shouldn’t do,” says Michael Froomkin, a law professor at the University of Miami.
Alibaba looks to arm hotels, cities with its AI technology — from zdnet.com by Eileen Yu
Chinese internet giant is touting the use of artificial intelligence technology to arm drivers with real-time data on road conditions as well as robots in the hospitality sector, where they can deliver meals and laundry to guests.
Excerpt:
Alibaba A.I. Labs' general manager Chen Lijuan said the new robots aimed to "bridge the gap" between guest needs and their expected response time. Describing the robot as the next evolution towards smart hotels, Chen said it tapped AI technology to address pain points in the hospitality sector, such as improving service efficiencies.
Alibaba is hoping the robot can ease hotels’ dependence on human labour by fulfilling a range of tasks, including delivering meals and taking the laundry to guests.
Accenture has enhanced the Accenture Intelligent Patient Platform with the addition of Ella and Ethan, two interactive virtual-assistant bots that use artificial intelligence (AI) to constantly learn and make intelligent recommendations for interactions between life sciences companies, patients, health care providers (HCPs) and caregivers. Designed to help improve a patient’s health and overall experience, the bots are part of Accenture’s Salesforce Fullforce Solutions powered by Salesforce Health Cloud and Einstein AI, as well as Amazon’s Alexa.
FRANKFURT AM MAIN (AFP) – German business software giant SAP published Tuesday an ethics code to govern its research into artificial intelligence (AI), aiming to prevent the technology from infringing on people's rights, displacing workers or inheriting biases from its human designers.
Launched at the Online News Association conference in Austin, Texas, the Future Today Institute’s new industry report for the future of journalism, media and technology follows the same approach as our popular annual mega trends report, now in its 11th year with more than 7.5 million cumulative views.
Key findings:
Blockchain emerges as a significant driver of change in 2019 and beyond. The blockchain ecosystem is still maturing; however, we've now seen enough development, adoption and consolidation that it warrants its own full section. There are numerous opportunities for media and journalism organizations. For that reason, we've included an explainer, a list of companies to watch, and a cross-indexed list of trends that complement blockchain technology. We've also included detailed scenarios in this section.
Mixed Reality is entering the mainstream.
The mixed reality ecosystem has grown enough that we now see concrete opportunities on the horizon for media organizations. From immersive video to wearable technology, news and entertainment media organizations should begin mapping their strategy for new kinds of devices and platforms.
Artificial Intelligence is not a tech trend—it is the third era of computing. And it isn’t just for story generation. You will see the AI ecosystem represented in many of the trends in this report, and it is vitally important that all decision-makers and teams familiarize themselves with current and emerging AI trends.
In addition to the 108 trends identified, the report also includes several guides for journalists, including a Blockchain Primer, an AI Primer, a mixed reality explainer, hacker terms and lingo, and a guide to policy changes on the horizon.
The report also includes guidance on how everyone working within journalism and media can take action on tech trends and how to evaluate a trend’s impact on their local marketplaces.
Technology has conditioned workers to expect quick and easy experiences — from Google searches to help from voice assistants — so they can get the answers they need and get back to work. While the concept of “on-demand” learning is not new, it’s been historically tough to deliver, and though most learning and development departments have linear e-learning modules or traditional classroom experiences, today’s learners are seeking more performance-adjacent, “point-of-need” models that fit into their busy, fast-paced work environments.
Enter emerging technologies. Artificial intelligence, voice interfaces and augmented reality, when applied correctly, have the potential to radically change the nature of how we learn at work. What's more, these technologies are emerging at the consumer level, meaning HR's lift in implementing them into L&D may not be substantial. Consider the technologies we already use regularly — voice assistants like Alexa, Siri and Google Assistant may be available in 55 percent of homes by 2022, providing instant, seamless access to information we need on the spot. While asking a home assistant for the weather, the best time to leave the house to beat traffic or what movies are playing at a local theater might not seem to have much application in the workplace, this nonlinear, point-of-need interaction is already playing out across learning platforms.
As computer algorithms become more advanced, artificial intelligence (AI) has grown increasingly prominent in the workplace. Top news organizations now use AI for a variety of newsroom tasks.
…
But current AI systems are still largely dependent on humans to function correctly, and the most pressing concern is understanding how to correctly operate these systems as they continue to thrive in a variety of media-related industries.
…
So, while [Machine Learning] systems will soon become ubiquitous in many professions, they won't replace the professionals working in those fields for some time — rather, they will become an advanced tool that will aid in decision making. This is not to say that AI will never endanger human jobs. Automation will always find a way.
From DSC: While I don’t find this article to be exemplary, I post this one mainly to encourage innovative thinking about how we might use some of these technologies in our future learning ecosystems.
Complete Anatomy 2018 +Courses (iOS): Give your preschoolers a head start on their education! Okay, clearly this app is meant for more advanced learners. Compared to the average app, you’ll end up paying through the nose with in-app purchases, but it’s really a drop in the bucket compared to the student loans students will accumulate in college. Price: Free with in-app purchases ranging from $0.99 to $44.99.
SkyView (iOS & Android): If I can wax nostalgic for a bit, one of the first mobile apps that wowed me was Google's original Sky Map app. Now you can bring back that feeling with some augmented reality. With SkyView, you can point your phone to the sky and the app will tell you what constellations or other celestial bodies you are looking at. Price: $1.99, but there's a free version for iOS and Android.
JigSpace (iOS): JigSpace is an app dedicated to showing users how things work (the human body, mechanical objects, etc.). And the app recently added how-to info for those who WonderHowTo do other things as well. JigSpace can now display its content in augmented reality as well, which is a brilliant application of immersive content to education. Price: Free.
NY Times (iOS & Android): The New York Times only recently adopted augmented reality as a means for covering the news, but already we’ve had the chance to see Olympic athletes and David Bowie’s freaky costumes up close. That’s a pretty good start! Price: Free with in-app purchases ranging from $9.99 to $129.99 for subscriptions.
BBC Civilisations (iOS & Android): Developed as a companion to the show of the same name, this app ends up holding its own as an AR app experience. Users can explore digital scans of ancient artifacts, learn more about their significance, and even interact with them. Sure, Indiana Jones would say this stuff belongs in a museum, but augmented reality lets you view them in your home as well. Price: Free.
SketchAR (iOS, Android, & Windows): A rare app that works on the dominant mobile platforms and HoloLens, SketchAR helps users learn how to draw. SketchAR scans your environment for your drawing surface and anchors the content there as you draw around it. As you can imagine, the app works best on HoloLens since it keeps users' hands free to draw. Price: Free.
Sun Seeker (iOS & Android): This app displays the solar path, hour intervals, and more in augmented reality. While this becomes a unique way to teach students about the Earth’s orbit around the sun (and help refute silly flat-earthers), it can also be a useful tool for professionals. For instance, it can help photographers plan a photoshoot and see where sunlight will shine at certain times of the day. Price: $9.99.
Froggipedia (iOS): Dissecting a frog is basically a rite of passage for anyone who has graduated from primary school in the US within the past 50 years or so. Thanks to augmented reality, we can now save precious frog lives while still learning about their anatomy. The app enables users to dissect virtual frogs as if they are on the table in front of them, and without the stench of formaldehyde. Price: $3.99.
GeoGebra Augmented Reality (iOS): Who needs a graphing calculator when you can visualize equations in augmented reality? That's what GeoGebra does. The app is invaluable for visualizing graphs. Price: Free.
How to Set Up a VR Pilot — from campustechnology.com by Dian Schaffhauser
As Washington & Lee University has found, there is no best approach for introducing virtual reality into your classrooms — just stages of faculty commitment.
Excerpt:
The work at the IQ Center offers a model for how other institutions might want to approach their own VR experimentation. The secret to success, suggested IQ Center Coordinator David Pfaff, “is to not be afraid to develop your own stuff” — in other words, diving right in. But first, there’s dipping a toe.
The IQ Center is a collaborative workspace housed in the science building but providing services to “departments all over campus,” said Pfaff. The facilities include three labs: one loaded with high-performance workstations, another decked out for 3D visualization and a third packed with physical/mechanical equipment, including 3D printers, a laser cutter and a motion-capture system.
Here, I would like to stick to the challenges and opportunities presented by augmented reality and virtual reality for language learning.
…
While the challenge is a significant one, I am more optimistic than most that wearable AR will be available and popular soon. We don’t yet know how Snap Spectacles will evolve, and, of course, there’s always Apple.
…
I suspect we will see a flurry of new VR apps from language learning startups soon, especially from Duolingo and in combination with their AI chat bots. I am curious if users will quickly abandon the isolating experiences or become dedicated users.
Bose has a plan to make AR glasses — from cnet.com by David Carnoy
Best known for its speakers and headphones, the company has created a $50 million development fund to back a new AR platform that's all about audio.
Excerpts:
“Unlike other augmented reality products and platforms, Bose AR doesn’t change what you see, but knows what you’re looking at — without an integrated lens or phone camera,” Bose said. “And rather than superimposing visual objects on the real world, Bose AR adds an audible layer of information and experiences, making every day better, easier, more meaningful, and more productive.”
…
The secret sauce seems to be the tiny, “wafer-thin” acoustics package developed for the platform. Bose said it represents the future of mobile micro-sound and features “jaw-dropping power and clarity.”
Bose adds the technology can “be built into headphones, eyewear, helmets and more and it allows simple head gestures, voice, or a tap on the wearable to control content.”
Here are some examples Bose gave for how it might be used:
For travel, the Bose AR could simulate historic events at landmarks as you view them — “so voices and horses are heard charging in from your left, then passing right in front of you before riding off in the direction of their original route, fading as they go.” You could hear a statue make a famous speech when you approach it. Or get told which way to turn towards your departure gate while checking in at the airport.
Bose AR could translate a sign you’re reading. Or tell you the word or phrase for what you’re looking at in any language. Or explain the story behind the painting you’ve just approached.
With gesture controls, you could choose or change your music with simple head nods indicating yes, no, or next (Bragi headphones already do this).
Bose AR would add useful information based on where you look, like the forecast when you look up or information about restaurants on the street when you look down.
On the heels of last week’s rollout on Android, Google’s new AI-powered technology, Google Lens, is now arriving on iOS. The feature is available within the Google Photos iOS application, where it can do things like identify objects, buildings, and landmarks, and tell you more information about them, including helpful details like their phone number, address, or open hours. It can also identify things like books, paintings in museums, plants, and animals. In the case of some objects, it can also take actions.
For example, you can add an event to your calendar from a photo of a flyer or event billboard, or you can snap a photo of a business card to store the person’s phone number or address to your Contacts.
The eventual goal is to allow smartphone cameras to understand what they're seeing across any type of photo, then help you take action on that information if need be – whether that's calling a business, saving contact information, or just learning about the world on the other side of the camera.
Oklahoma State University's inaugural "Virtual + Augmented Reality Hackathon," hosted January 26-27 by the Mixed Reality Lab in the university's College of Human Sciences, gave students and the community a chance to tackle real-world problems using augmented and virtual reality tools, while offering researchers a glimpse into the ways teams work with digital media tools. Campus Technology asked Dr. Tilanka Chandrasekera, an assistant professor in the department of Design, Housing and Merchandising at Oklahoma State University, about the hackathon and how it fits into the school's broader goals.
To set up the audio feed, use the Alexa mobile app to search for “Campus Technology News” in the Alexa Skills catalog. Once you enable the skill, you can ask Alexa “What’s in the news?” or “What’s my Flash Briefing?” and she will read off the latest news briefs from Campus Technology.
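For context, a Flash Briefing skill like this is typically just a JSON feed that Alexa polls for new items. The sketch below shows roughly what serving such a feed could look like in Python with Flask; the field names follow Amazon's published Flash Briefing feed format as I understand it, and the route, titles, and text are placeholder examples rather than Campus Technology's actual feed.

```python
# Rough sketch of a Flash Briefing-style JSON feed served with Flask.
# Field names (uid, updateDate, titleText, mainText, redirectionUrl) follow
# Amazon's published Flash Briefing format; all values here are placeholders.
from datetime import datetime, timezone

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/flash-briefing")
def flash_briefing():
    return jsonify([
        {
            "uid": "urn:example:brief:1",
            "updateDate": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.0Z"),
            "titleText": "Example news brief",
            "mainText": "One or two sentences that Alexa reads aloud.",
            "redirectionUrl": "https://example.com/full-story",
        }
    ])

if __name__ == "__main__":
    app.run()
```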
Computer simulations are nothing new in the field of aviation education. But a new partnership between Western Michigan University and Microsoft is taking that one big step further. Microsoft has selected Lori Brown, an associate professor of aviation at WMU, to test out their new HoloLens, the world’s first self-contained holographic computer. The augmented reality interface will bring students a little closer to the realities of flight.
When it comes to the use of innovative technology in the classroom, this is by no means Professor Brown’s first rodeo. She has spent years researching the uses of virtual and augmented reality in aviation education.
“In the past 16 years that I’ve been teaching advanced aircraft systems, I have identified many gaps in the tools and equipment available to me as a professor. Ultimately, mixed reality bridges the gap between simulation, the aircraft and the classroom,” Brown told WMU News.
Storytelling traces its roots back to the very beginning of human experience. It’s found its way through multiple forms, from oral traditions to art, text, images, cinema, and multimedia formats on the web.
As we move into a world of immersive technologies, how will virtual and augmented reality transform storytelling? What roles will our institutions and students play as early explorers? In the traditional storytelling format, a narrative structure is presented to a listener, reader, or viewer. In virtual reality, in contrast, you’re no longer the passive witness. As Chris Milk said, “In the future, you will be the character. The story will happen to you.”
If the accepted rules of storytelling are undermined, we find ourselves with a remarkably creative opportunity no longer bound by the rectangular frame of traditional media.
We are in the earliest stages of virtual reality as an art form. The exploration and experimentation with immersive environments is so nascent that new terms have been proposed for immersive storytelling. Abigail Posner, the head of strategic planning at Google Zoo, said that it totally “shatters” the storytelling experience and refers to it as “storyliving.” At the Tribeca Film Festival, immersive stories are termed “storyscapes.”
Learning through a virtual experience
The concept of using VR as an educational tool has been gaining traction among teachers and students, who apply the medium to a wide range of activities and in a variety of subjects. Many schools start with a simple cardboard viewer such as Google Cardboard, available for less than $10 and enough to play with simple VR experiences.
A recent study by Foundry10 analyzed how students perceived the usage of VR in their education and in what subjects they saw it being the most useful. According to the report, 44% of students were interested in using VR for science education, 38% for history education, 12% for English education, 3% for math education, and 3% for art education.
Among the many advantages brought by VR, the aspect that generally comes first when discussing the new technology is the immersion made possible by entering a 360° and 3-dimensional virtual space. This immersive aspect offers a different perception of the content being viewed, which enables new possibilities in education.
Schools today seem to be getting more and more concerned with making their students “future-ready.” By bringing the revolutionary medium of VR to the classroom and letting kids experiment with it, they help prepare them for the digital world in which they will grow and later start a career.
Last but not least, the new medium also adds a considerable amount of fun to the classroom as students get excited to receive the opportunity, sometimes for the first time, to put a headset viewer on and try VR.
VR also has the potential to stimulate enthusiasm within the classroom and increase students' engagement. Several teachers have reported that they were impressed by the impact on students' motivation and, in some cases, even on their perspective toward the material being learned.
These teachers explained that when put in control of creating a piece of content and exposed to the fascinating new medium of VR, some of their students showed higher levels of participation and in some cases, even better retention of the information.
“The good old reality is no longer best practice in teaching. Worksheets and book reports do not foster imagination or inspire kids to connect with literature. I want my students to step inside the characters and the situations they face, I want them to visualize the setting and the elements of conflict in the story that lead to change.”
1 in 5 workers will have AI as their co-worker in 2022
More job roles will change than will become totally automated, so HR needs to prepare today
…
As we increase our personal usage of chatbots (defined as software which provides an automated, yet personalized, conversation between itself and human users), employees will soon interact with them in the workplace as well. Forward-looking HR leaders are piloting chatbots now to transform HR, and, in the process, re-imagine, re-invent, and re-tool the employee experience.
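To make that definition concrete, here is a toy sketch of the pattern in Python. The intents, answers, and employee name are invented for illustration; real HR chatbots layer natural language understanding and back-end integrations on top of this basic match-and-respond loop.

```python
# Toy illustration of the chatbot definition above: an automated, lightly
# personalized exchange. All intents and answers below are invented examples.
HR_ANSWERS = {
    "vacation": "you currently have 12 vacation days remaining.",
    "payroll": "payroll runs on the 15th and the last business day of each month.",
    "benefits": "open enrollment for benefits starts November 1.",
}

def hr_bot_reply(employee_name: str, message: str) -> str:
    text = message.lower()
    for keyword, answer in HR_ANSWERS.items():
        if keyword in text:
            return f"Hi {employee_name}, {answer}"
    return f"Sorry {employee_name}, I can only answer questions about vacation, payroll, or benefits right now."

print(hr_bot_reply("Priya", "How many vacation days do I have left?"))
```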
How does all of this impact HR in your organization? The following ten HR trends will matter most as AI enters the workplace…
…
The most visible aspect of how HR is being impacted by artificial intelligence is the change in the way companies source and recruit new hires. Most notably, IBM has created a suite of tools that use machine learning to help candidates personalize their job search experience based on the engagement they have with Watson. In addition, Watson is helping recruiters prioritize jobs more efficiently, find talent faster, and match candidates more effectively. According to Amber Grewal, Vice President, Global Talent Acquisition, “Recruiters are focusing more on identifying the most critical jobs in the business and on utilizing data to assist in talent sourcing.”
…as we enter 2018, the next journey for HR leaders will be to leverage artificial intelligence combined with human intelligence and create a more personalized employee experience.
From DSC: Although I like the possibility of using machine learning to help employees navigate their careers, I have some very real concerns when we talk about using AI for talent acquisition. At this point in time, I would much rather have an experienced human being — one with a solid background in HR — reviewing my resume to see if they believe that there’s a fit for the job and/or determine whether my skills transfer over from a different position/arena or not. I don’t think we’re there yet in terms of developing effective/comprehensive enough algorithms. It may happen, but I’m very skeptical in the meantime. I don’t want to be filtered out just because I didn’t use the right keywords enough times or I used a slightly different keyword than what the algorithm was looking for.
Also, there is definitely age discrimination occurring in today's workplace, especially in tech-related positions. Folks who are in tech over the age of 30-35 — don't lose your job! (Go check out the topic of age discrimination on LinkedIn and similar sites, and you'll find many postings on this topic — sometimes with tens of thousands of older employees adding comments/likes to a posting). Although I doubt that any company would allow applicants or the public to see their internally-used algorithms, how difficult would it be to filter out applicants who graduated college prior to ___ (i.e., some year that gets updated on an annual basis)? Answer? Not difficult at all. In fact, that's at the level of a Programming 101 course.
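To make that point concrete, here is a hypothetical sketch of such a filter. The applicant records and cutoff year are invented, and the point is simply how little code it takes to hide this kind of screening inside a hiring pipeline.

```python
# Hypothetical illustration only: screening an applicant pool by graduation
# year (a proxy for age). All data and the cutoff year are invented.
applicants = [
    {"name": "Applicant A", "grad_year": 1988},
    {"name": "Applicant B", "grad_year": 2015},
    {"name": "Applicant C", "grad_year": 1996},
]

CUTOFF_YEAR = 2005  # "some year that gets updated on an annual basis"

# The filtering itself is a single line -- Programming 101-level code.
kept = [a for a in applicants if a["grad_year"] >= CUTOFF_YEAR]

print(kept)  # only Applicant B survives the screen
```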
From DSC: "Person of Interest" comes to mind after reading this article. Person of Interest is a clever, well-done show, but still…the idea of combining surveillance w/ a super intelligent #AI is a bit unnerving.
Suncorp has revealed it is exploring image recognition and augmented reality-based enhancements for its insurance claims process, adding to the AI systems it deployed last year.
The insurer began testing IBM Watson software last June to automatically determine who is at fault in a vehicle accident.
…
“We are working on increasing our use of emerging technologies to assist with the insurance claim process, such as using image recognition to assess type and extent of damage, augmented reality that would enable an off-site claims assessor to discuss and assess damage, speech recognition, and obtaining telematic data from increasingly automated vehicles,” the company said.
What will that future be? When it comes to jobs, the tea leaves are indecipherable as analysts grapple with emerging technologies, new fields of work, and skills that have yet to be conceived. The only certainty is that jobs will change. Consider the conflicting predictions put forth by the analyst community:
According to the Organization for Economic Cooperation and Development, only 5-10% of labor would be displaced by intelligent automation, and new job creation will offset losses. (Inserted comment from DSC: Hmmm. ONLY 5-10%!? What?! That's huge! And don't count on the majority of those people becoming experts in robotics, algorithms, big data, AI, etc.)
The World Economic Forum said in 2016 that 60% of children entering school today will work in jobs that do not yet exist.
47% of all American job functions could be automated within 20 years, according to a 2013 report from the Oxford Martin School.
In 2016, a KPMG study estimated that 100 million global knowledge workers could be affected by robotic process automation by 2025.
Despite the conflicting views, most analysts agree on one thing: big change is coming. Venture capitalist David Vandergrift has some words of advice: "Anyone not planning to retire in the next 20 years should be paying pretty close attention to what's going on in the realm of AI. The supplanting (of jobs) will not happen overnight: the trend over the next couple of decades is going to be towards more and more automation."
While analysts may not agree on the timing of AI’s development in the economy, many companies are already seeing its impact on key areas of talent and business strategy. AI is replacing jobs, changing traditional roles, applying pressure on knowledge workers, creating new fields of work, and raising the demand for certain skills.
The emphasis on learning is a key change from previous decades and rounds of automation. Advanced AI is, or will soon be, capable of displacing a very wide range of labor, far beyond the repetitive, low-skill functions traditionally thought to be at risk from automation. In many cases, the pressure on knowledge workers has already begun.
Regardless of industry, however, AI is a real challenge to today’s way of thinking about work, value, and talent scarcity. AI will expand and eventually force many human knowledge workers to reinvent their roles to address issues that machines cannot process. At the same time, AI will create a new demand for skills to guide its growth and development. These emerging areas of expertise will likely be technical or knowledge-intensive fields. In the near term, the competition for workers in these areas may change how companies focus their talent strategies.
2018 marks the beginning of the end of smartphones in the world’s largest economies. What’s coming next are conversational interfaces with zero-UIs. This will radically change the media landscape, and now is the best time to start thinking through future scenarios.
In 2018, a critical mass of emerging technologies will converge finding advanced uses beyond initial testing and applied research. That’s a signal worth paying attention to. News organizations should devote attention to emerging trends in voice interfaces, the decentralization of content, mixed reality, new types of search, and hardware (such as CubeSats and smart cameras).
Journalists need to understand what artificial intelligence is, what it is not, and what it means for the future of news. AI research has advanced enough that it is now a core component of our work at FTI. You will see the AI ecosystem represented in many of the trends in this report, and it is vitally important that all decision-makers within news organizations familiarize themselves with the current and emerging AI landscapes. We have included an AI Primer For Journalists in our Trend Report this year to aid in that effort.
Decentralization emerged as a key theme for 2018. Among the companies and organizations FTI covers, we discovered a new emphasis on restricted peer-to-peer networks to detect harassment, share resources and connect with sources. There is also a push by some democratic governments around the world to divide internet access and to restrict certain content, effectively creating dozens of “splinternets.”
Consolidation is also a key theme for 2018. News brands, broadcast spectrum, and artificial intelligence startups will continue to be merged with and acquired by relatively few corporations. Pending legislation and policy in the U.S., E.U. and in parts of Asia could further concentrate the power among a small cadre of information and technology organizations in the year ahead.
To understand the future of news, you must pay attention to the future of many industries and research areas in the coming year. When journalists think about the future, they should broaden the usual scope to consider developments from myriad other fields also participating in the knowledge economy. Technology begets technology. We are witnessing an explosion in slow motion.
Those in the news ecosystem should factor the trends in this report into their strategic thinking for the coming year, and adjust their planning, operations and business models accordingly.
This year’s report has 159 trends.
This is mostly due to the fact that 2016 was the year that many areas of science and technology finally started to converge. As a result we're seeing a sort of slow-motion explosion; we will undoubtedly look back on the last part of this decade as a pivotal moment in our history on this planet.
…
Our 2017 Trend Report reveals strategic opportunities and challenges for your organization in the coming year. The Future Today Institute’s annual Trend Report prepares leaders and organizations for the year ahead, so that you are better positioned to see emerging technology and adjust your strategy accordingly. Use our report to identify near-future business disruption and competitive threats while simultaneously finding new collaborators and partners. Most importantly, use our report as a jumping off point for deeper strategic planning.
Augmented and virtual reality offer ways to immerse learners in experiences that can aid training in processes and procedures, provide realistic simulations to deepen empathy and build communication skills, or provide in-the-workflow support for skilled technicians performing complex procedures.
Badges and other digital credentials provide new ways to assess and validate employees’ skills and mark their eLearning achievements, even if their learning takes place informally or outside of the corporate framework.
Chatbots are proving an excellent tool for spaced learning, review of course materials, guiding new hires through onboarding, and supporting new managers with coaching and tips.
Content curation enables L&D professionals to provide information and educational materials from trusted sources that can deepen learners’ knowledge and help them build skills.
eBooks, a relative newcomer to the eLearning arena, offer rich features for portable on-demand content that learners can explore, review, and revisit as needed.
Interactive videos provide branching scenarios, quiz learners on newly introduced concepts and terms, offer prompts for small-group discussions, and do much more to engage learners.
Podcasts can turn drive time into productive time, allowing learners to enjoy a story built around eLearning content.
Smartphone apps, available wherever learners take their phones or tablets, can be designed to offer product support, info for sales personnel, up-to-date information for repair technicians, and games and drills for teaching and reviewing content; the possibilities are limited only by designers’ imagination.
Social platforms like Slack, Yammer, or Instagram facilitate collaboration, sharing of ideas, networking, and social learning. Adopting social learning platforms encourages learners to develop their skills and contribute to their communities of practice, whether inside their companies or more broadly.
xAPI turns any experience into a learning experience. Adding xAPI capability to any suitable tool or platform means you can record learner activity and progress in a learning record store (LRS) and track it.
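For readers who have not seen one, an xAPI "statement" is a small actor/verb/object record that a tool posts to the LRS. Here is a minimal sketch in Python; the statement structure follows the xAPI specification, while the LRS endpoint, credentials, and activity ID are placeholders.

```python
# Minimal sketch of sending one xAPI statement to a learning record store (LRS).
# The actor/verb/object structure follows the xAPI spec; the endpoint,
# credentials, and activity ID are placeholders.
import requests

LRS_STATEMENTS_URL = "https://lrs.example.com/xapi/statements"  # placeholder LRS

statement = {
    "actor": {"name": "Sample Learner", "mbox": "mailto:learner@example.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://example.com/activities/onboarding-101",
        "definition": {"name": {"en-US": "Onboarding 101"}},
    },
}

response = requests.post(
    LRS_STATEMENTS_URL,
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=("lrs_key", "lrs_secret"),  # placeholder credentials
)
print(response.status_code)  # a 200-range status means the LRS stored the statement
```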
How does all of this relate to eLearning? Again, Webb anticipated the question. Her response gave hope to some—and terrified others. She presented three possible future scenarios:
Everyone in the learning arena learns to recognize weak signals; they work with technologists to refine artificial intelligence to instill values. Future machines learn not only to identify correct and incorrect answers; they also learn right and wrong. Webb said that she gives this optimistic scenario a 25 percent chance of occurring.
Everyone present is inspired by her talk but they, and the rest of the learning world, do nothing. Artificial intelligence continues to develop as it has in the past, learning to identify correct answers but lacking values. Webb's prediction is that this pragmatic scenario has a 50 percent chance of occurring.
Learning and artificial intelligence continue to develop on separate tracks. Future artificial intelligence and machine learning projects incorporate real biases that affect what and how people learn and how knowledge is transferred. Webb said that she gives this catastrophic scenario a 25 percent chance of occurring.
In an attempt to end on a strong positive note, Webb said that “the future hasn’t happened yet—we think” and encouraged attendees to take action. “To build the future of learning that you want, listen to weak signals now.”