2018 TECH TRENDS REPORT — from the Future Today Institute
Emerging technology trends that will influence business, government, education, media and society in the coming year.

Description:

The Future Today Institute’s 11th annual Tech Trends Report identifies 235 tantalizing advancements in emerging technologies—artificial intelligence, biotech, autonomous robots, green energy and space travel—that will begin to enter the mainstream and fundamentally disrupt business, geopolitics and everyday life around the world. Our annual report has garnered more than six million cumulative views, and this edition is our largest to date.

Helping organizations see change early and calculate the impact of new trends is why we publish our annual Emerging Tech Trends Report, which focuses on mid- to late-stage emerging technologies that are on a growth trajectory.

In this edition of the FTI Tech Trends Report, we’ve included several new features and sections:

  • a list and map of the world’s smartest cities
  • a calendar of events that will shape technology this year
  • detailed near-future scenarios for several of the technologies
  • a new framework to help organizations decide when to take action on trends
  • an interactive table of contents, which will allow you to more easily navigate the report from the bookmarks bar in your PDF reader

01 How does this trend impact our industry and all of its parts?
02 How might global events — politics, climate change, economic shifts — impact this trend, and as a result, our organization?
03 What are the second, third, fourth, and fifth-order implications of this trend as it evolves, both in our organization and our industry?
04 What are the consequences if our organization fails to take action on this trend?
05 Does this trend signal emerging disruption to our traditional business practices and cherished beliefs?
06 Does this trend indicate a future disruption to the established roles and responsibilities within our organization? If so, how do we reverse-engineer that disruption and deal with it in the present day?
07 How are the organizations in adjacent spaces addressing this trend? What can we learn from their failures and best practices?
08 How will the wants, needs and expectations of our consumers/constituents change as a result of this trend?
09 Where does this trend create potential new partners or collaborators for us?
10 How does this trend inspire us to think about the future of our organization?

With great tech success, comes even greater responsibility — from techcrunch.com by Ron Miller

Excerpts:

As we watch major tech platforms evolve over time, it’s clear that companies like Facebook, Apple, Google and Amazon (among others) have created businesses that are having a huge impact on humanity — sometimes positive and other times not so much.

That suggests that these platforms have to understand how people are using them and when they are trying to manipulate them or use them for nefarious purposes — or the companies themselves are. We can apply that same responsibility filter to individual technologies like artificial intelligence and indeed any advanced technologies and the impact they could possibly have on society over time.

We can be sure that Twitter’s creators never imagined a world where bots would be launched to influence an election when they created the company more than a decade ago. Over time though, it becomes crystal clear that Twitter, and indeed all large platforms, can be used for a variety of motivations, and the platforms have to react when they think there are certain parties who are using their networks to manipulate parts of the populace.

But it’s up to the companies who are developing the tech to recognize the responsibility that comes with great economic success, or simply the impact that whatever they are creating could have on society.

Why the Public Overlooks and Undervalues Tech’s Power — from morningconsult.com by Joanna Piacenza
Some experts say the tech industry is rapidly nearing a day of reckoning

Excerpts:

  • 5% picked tech when asked which industry had the most power and influence, well behind the U.S. government, Wall Street and Hollywood.
  • Respondents were much more likely to say sexual harassment was a major issue in Hollywood (49%) and government (35%) than in Silicon Valley (17%).

It is difficult for Americans to escape the technology industry’s influence in everyday life. Facebook Inc. reports that more than 184 million people in the United States log on to the social network daily, or roughly 56 percent of the population. According to the Pew Research Center, nearly three-quarters (73 percent) of all Americans and 94 percent of Americans ages 18-24 use YouTube. Amazon.com Inc.’s market value is now nearly three times that of Walmart Inc.

But when asked which geographic center holds the most power and influence in America, respondents in a recent Morning Consult survey ranked the tech industry in Silicon Valley far behind politics and government in Washington, finance on Wall Street and the entertainment industry in Hollywood.

The Space Satellite Revolution Could Turn Earth into a Surveillance Nightmare — from scout.ai by Becky Ferreira
Laser communication between satellites is revolutionizing our ability to track climate change, manage resources, and respond to natural disasters. But there are downsides to putting Earth under a giant microscope.

Excerpts:

And while universal broadband has the potential to open up business and education opportunities to hundreds of thousands of people, it’s the real-time satellite feeds of earth that may have both the most immediate and widespread financial upsides — and the most frightening surveillance implications — for the average person here on earth.

Among the industries most likely to benefit from laser communications between these satellites are agriculture and forestry.

Satellite data can also be used to engage the public in humanitarian efforts. In the wake of Typhoon Haiyan, DigitalGlobe launched online crowdsourcing campaigns to map damage and help NGOs respond on the ground. And they’ve been identifying vulnerable communities in South Sudan as the nation suffers through unrest and famine.

In an age of intensifying natural disasters, combining these tactics with live satellite video feeds could mean the difference between life and death for thousands of people.

Should a company, for example, be able to use real-time video feeds to track your physical location, perhaps in order to better target advertising? Should they be able to use facial recognition and sentiment analysis algorithms to assess your reactions to those ads in real time?

While these commercially available images aren’t yet sharp enough to pick up intimate details like faces or phone screens, it’s foreseeable that regulations will be eased to accommodate even sharper images. That trend will continue to prompt privacy concerns, especially if a switch to laser-based satellite communication enables near real-time coverage at high resolutions.

A kaleidoscopic swirl of possible futures confronts us, filled with scenarios where law enforcement officials could rewind satellite footage to identify people at a crime scene, or on a more familial level, parents could remotely watch their kids — or keep tabs on each other — from space. In that world, it’s not hard to imagine privacy becoming even more of a commodity, with wealthy enclaves lobbying to be erased from visual satellite feeds, in a geospatial version of “gated communities.”

From DSC:
The pros and cons of technologies…hmmm…this article nicely captures the pluses and minuses that societies around the globe need to be aware of, struggle with, and discuss with each other. Some exciting things here, but some disturbing ones as well.

Fake videos are on the rise. As they become more realistic, seeing shouldn’t always be believing — from latimes.com by David Pierson

Excerpts:

It’s not hard to imagine a world in which social media is awash with doctored videos targeting ordinary people to exact revenge, extort or to simply troll.

In that scenario, where Twitter and Facebook are algorithmically flooded with hoaxes, no one could fully believe what they see. Truth, already diminished by Russia’s misinformation campaign and President Trump’s proclivity to label uncomplimentary journalism “fake news,” would be more subjective than ever.

The danger there is not just believing hoaxes, but also dismissing what’s real.

The consequences could be devastating for the notion of evidentiary video, long considered the paradigm of proof given the sophistication required to manipulate it.

“This goes far beyond ‘fake news’ because you are dealing with a medium, video, that we traditionally put a tremendous amount of weight on and trust in,” said David Ryan Polgar, a writer and self-described tech ethicist.

From DSC:
Though I’m typically pro-technology, this is truly disturbing. There are certainly downsides to technology as well as upsides — but it’s how we use a technology that makes the real difference.

2018 Tech Trends for Journalism & Media Report + the 2017 Tech Trends Annual Report that I missed from the Future Today Institute

2018 Tech Trends For Journalism Report — from the Future Today Institute

Key Takeaways

  • 2018 marks the beginning of the end of smartphones in the world’s largest economies. What’s coming next are conversational interfaces with zero-UIs. This will radically change the media landscape, and now is the best time to start thinking through future scenarios.
  • In 2018, a critical mass of emerging technologies will converge, finding advanced uses beyond initial testing and applied research. That’s a signal worth paying attention to. News organizations should devote attention to emerging trends in voice interfaces, the decentralization of content, mixed reality, new types of search, and hardware (such as CubeSats and smart cameras).
  • Journalists need to understand what artificial intelligence is, what it is not, and what it means for the future of news. AI research has advanced enough that it is now a core component of our work at FTI. You will see the AI ecosystem represented in many of the trends in this report, and it is vitally important that all decision-makers within news organizations familiarize themselves with the current and emerging AI landscapes. We have included an AI Primer For Journalists in our Trend Report this year to aid in that effort.
  • Decentralization emerged as a key theme for 2018. Among the companies and organizations FTI covers, we discovered a new emphasis on restricted peer-to-peer networks to detect harassment, share resources and connect with sources. There is also a push by some democratic governments around the world to divide internet access and to restrict certain content, effectively creating dozens of “splinternets.”
  • Consolidation is also a key theme for 2018. News brands, broadcast spectrum, and artificial intelligence startups will continue to be merged with and acquired by relatively few corporations. Pending legislation and policy in the U.S., E.U. and in parts of Asia could further concentrate the power among a small cadre of information and technology organizations in the year ahead.
  • To understand the future of news, you must pay attention to the future of many industries and research areas in the coming year. When journalists think about the future, they should broaden the usual scope to consider developments from myriad other fields also participating in the knowledge economy. Technology begets technology. We are witnessing an explosion in slow motion.

Those in the news ecosystem should factor the trends in this report into their strategic thinking for the coming year, and adjust their planning, operations and business models accordingly.

2017 Tech Trends Annual Report — from the Future Today Institute; this is the first time I’ve seen this solid report

Excerpts:

This year’s report has 159 trends. This is mostly due to the fact that 2016 was the year that many areas of science and technology finally started to converge. As a result we’re seeing a sort of slow-motion explosion — we will undoubtedly look back on the last part of this decade as a pivotal moment in our history on this planet.

Our 2017 Trend Report reveals strategic opportunities and challenges for your organization in the coming year. The Future Today Institute’s annual Trend Report prepares leaders and organizations for the year ahead, so that you are better positioned to see emerging technology and adjust your strategy accordingly. Use our report to identify near-future business disruption and competitive threats while simultaneously finding new collaborators and partners. Most importantly, use our report as a jumping off point for deeper strategic planning.

Also see:

Emerging eLearning Tools and Platforms Improve Results — from learningsolutionsmag.com

  • Augmented and virtual reality offer ways to immerse learners in experiences that can aid training in processes and procedures, provide realistic simulations to deepen empathy and build communication skills, or provide in-the-workflow support for skilled technicians performing complex procedures.
  • Badges and other digital credentials provide new ways to assess and validate employees’ skills and mark their eLearning achievements, even if their learning takes place informally or outside of the corporate framework.
  • Chatbots are proving an excellent tool for spaced learning, review of course materials, guiding new hires through onboarding, and supporting new managers with coaching and tips.
  • Content curation enables L&D professionals to provide information and educational materials from trusted sources that can deepen learners’ knowledge and help them build skills.
  • eBooks, a relative newcomer to the eLearning arena, offer rich features for portable on-demand content that learners can explore, review, and revisit as needed.
  • Interactive videos provide branching scenarios, quiz learners on newly introduced concepts and terms, offer prompts for small-group discussions, and do much more to engage learners.
  • Podcasts can turn drive time into productive time, allowing learners to enjoy a story built around eLearning content.
  • Smartphone apps, available wherever learners take their phones or tablets, can be designed to offer product support, info for sales personnel, up-to-date information for repair technicians, and games and drills for teaching and reviewing content; the possibilities are limited only by designers’ imagination.
  • Social platforms like Slack, Yammer, or Instagram facilitate collaboration, sharing of ideas, networking, and social learning. Adopting social learning platforms encourages learners to develop their skills and contribute to their communities of practice, whether inside their companies or more broadly.
  • xAPI turns any experience into a learning experience. Adding xAPI capability to any suitable tool or platform means you can record learner activity and progress in a learning record store (LRS) and track it.
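The last bullet above is concrete enough to sketch. An xAPI “statement” is just an actor/verb/object JSON document that a learning record store (LRS) accepts over HTTP. A minimal sketch in Python; the learner, course identifiers, and endpoint details below are illustrative placeholders, not taken from the article:

```python
import json

def build_xapi_statement(actor_email, actor_name,
                         verb_id, verb_display,
                         activity_id, activity_name):
    """Assemble a minimal xAPI statement as a dict.

    The actor/verb/object triple is the core of every statement;
    an LRS would receive it as JSON via an HTTP POST to its
    /statements endpoint.
    """
    return {
        "actor": {
            "objectType": "Agent",
            "name": actor_name,
            "mbox": "mailto:" + actor_email,
        },
        "verb": {
            "id": verb_id,
            "display": {"en-US": verb_display},
        },
        "object": {
            "objectType": "Activity",
            "id": activity_id,
            "definition": {"name": {"en-US": activity_name}},
        },
    }

# Hypothetical learner completing a hypothetical onboarding module:
stmt = build_xapi_statement(
    "learner@example.com", "Pat Learner",
    "http://adlnet.gov/expapi/verbs/completed", "completed",
    "http://example.com/courses/onboarding-101", "Onboarding 101",
)
print(json.dumps(stmt, indent=2))
```

A real deployment would POST this JSON to the LRS with the xAPI version header and credentials; the point here is only that “recording learner activity and progress” reduces to emitting small, uniform documents like this one.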

DevLearn Attendees Learn How to ‘Think Like a Futurist’ — from learningsolutionsmag.com

Excerpt:

How does all of this relate to eLearning? Again, Webb anticipated the question. Her response gave hope to some—and terrified others. She presented three possible future scenarios:

  • Everyone in the learning arena learns to recognize weak signals; they work with technologists to refine artificial intelligence to instill values. Future machines learn not only to identify correct and incorrect answers; they also learn right and wrong. Webb said that she gives this optimistic scenario a 25 percent chance of occurring.
  • Everyone present is inspired by her talk but they, and the rest of the learning world, do nothing. Artificial intelligence continues to develop as it has in the past, learning to identify correct answers but lacking values. Webb’s prediction is that this pragmatic scenario has a 50 percent chance of occurring.
  • Learning and artificial intelligence continue to develop on separate tracks. Future artificial intelligence and machine learning projects incorporate real biases that affect what and how people learn and how knowledge is transferred. Webb said that she gives this catastrophic scenario a 25 percent chance of occurring.

In an attempt to end on a strong positive note, Webb said that “the future hasn’t happened yet—we think” and encouraged attendees to take action. “To build the future of learning that you want, listen to weak signals now.”

From DSC:
Can you imagine this as a virtual reality or a mixed reality-based app!?! Very cool.

This resource is incredible on multiple levels:

  • For their interface/interaction design
  • For their insights and ideas
  • For their creativity
  • For their graphics
  • …and more!

From DSC:
We are hopefully creating the future that we want — i.e., creating the future of our dreams, not nightmares. The 14 items below show that technology is often waaay out ahead of us…and it takes time for other areas of society to catch up (such as making policies and laws, or deciding whether we should even be doing these things in the first place).

Such reflections always make me ask:

  • Who should be involved in some of these decisions?
  • Who is currently getting asked to the decision-making tables for such discussions?
  • How does the average citizen participate in such discussions?

Readers of this blog know that I’m generally pro-technology. But with the exponential pace of technological change, we need to slow things down enough to make wise decisions.

Google AI invents its own cryptographic algorithm; no one knows how it works — from arstechnica.co.uk by Sebastian Anthony
Neural networks seem good at devising crypto methods; less good at codebreaking.

Excerpt:

Google Brain has created two artificial intelligences that evolved their own cryptographic algorithm to protect their messages from a third AI, which was trying to evolve its own method to crack the AI-generated crypto. The study was a success: the first two AIs learnt how to communicate securely from scratch.
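For readers who want the mechanics behind that result: in the underlying study, eavesdropper Eve is trained to minimize her reconstruction error, while Alice and Bob are jointly trained to minimize Bob’s reconstruction error plus a penalty that grows as Eve does better than random guessing. Here is a simplified sketch of those two objectives over bit-vector messages; the loss shapes follow the study’s idea, but the per-bit normalization is my own simplification, and no actual neural networks are involved:

```python
def l1_bits(p, q):
    # Mean absolute error between two equal-length bit vectors in {0, 1}.
    return sum(abs(a - b) for a, b in zip(p, q)) / len(p)

def eve_loss(plaintext, eve_guess):
    # Eve's only goal: reconstruct the plaintext from the ciphertext.
    return l1_bits(plaintext, eve_guess)

def alice_bob_loss(plaintext, bob_guess, eve_guess):
    # Alice and Bob want Bob to reconstruct perfectly while Eve is
    # held at chance level (per-bit error of about 0.5).
    reconstruction = l1_bits(plaintext, bob_guess)
    eve_err = l1_bits(plaintext, eve_guess)
    # Zero penalty when Eve is at chance (0.5); maximal (1.0) when
    # she recovers the message exactly.
    eve_penalty = ((0.5 - eve_err) ** 2) / 0.25
    return reconstruction + eve_penalty
```

Training alternates gradient steps on these two competing losses; the reported outcome is that Alice and Bob drive their loss toward zero, i.e. Bob decrypts reliably while Eve stays near coin-flip accuracy.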

IoT growing faster than the ability to defend it — from scientificamerican.com by Larry Greenemeier
Last week’s use of connected gadgets to attack the Web is a wake-up call for the Internet of Things, which will get a whole lot bigger this holiday season

Excerpt:

With this year’s approaching holiday gift season the rapidly growing “Internet of Things” or IoT—which was exploited to help shut down parts of the Web this past Friday—is about to get a lot bigger, and fast. Christmas and Hanukkah wish lists are sure to be filled with smartwatches, fitness trackers, home-monitoring cameras and other wi-fi–connected gadgets that connect to the internet to upload photos, videos and workout details to the cloud. Unfortunately these devices are also vulnerable to viruses and other malicious software (malware) that can be used to turn them into virtual weapons without their owners’ consent or knowledge.

Last week’s distributed denial of service (DDoS) attacks—in which tens of millions of hacked devices were exploited to jam and take down internet computer servers—is an ominous sign for the Internet of Things. A DDoS is a cyber attack in which large numbers of devices are programmed to request access to the same Web site at the same time, creating data traffic bottlenecks that cut off access to the site. In this case the still-unknown attackers used malware known as “Mirai” to hack into devices whose passwords they could guess, because the owners either could not or did not change the devices’ default passwords.
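The vulnerability the article describes (owners never changing factory passwords) is easy to show in defensive form: the same dictionary-of-defaults idea Mirai exploited can be flipped into an audit of your own device inventory. A sketch in Python; the credential list and device names are illustrative, not Mirai’s actual dictionary:

```python
# A small illustrative subset of factory-default logins; the real
# Mirai dictionary held roughly sixty such pairs.
DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "password"),
    ("root", "12345"),
}

def audit_devices(devices):
    """Flag devices still using a factory-default login.

    `devices` maps a device name to its (username, password) pair;
    in a real audit these would come from your own inventory, never
    from probing other people's hardware.
    """
    return [name for name, creds in devices.items()
            if creds in DEFAULT_CREDENTIALS]

inventory = {
    "camera-frontdoor": ("admin", "admin"),       # never changed
    "thermostat-hall": ("homeowner", "x9!kQ2z"),  # changed
}
print(audit_devices(inventory))  # → ['camera-frontdoor']
```

The fix Mirai’s success implies is equally simple: any device whose credentials appear in such a list should have its password changed before it ever touches the open internet.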

How to Get Lost in Augmented Reality — from inverse.com by Tanya Basu; with thanks to Woontack Woo for this resource
There are no laws against projecting misinformation. That’s good news for pranksters, criminals, and advertisers.

Excerpt:

Augmented reality offers designers and engineers new tools and artists a new palette, but there’s a dark side to reality-plus. Because A.R. technologies will eventually allow individuals to add flourishes to the environments of others, they will also facilitate the creation of a new type of misinformation and unwanted interactions. There will be advertising (there is always advertising) and there will also be lies perpetrated with optical trickery.

Two computer scientists-turned-ethicists are seriously considering the problematic ramifications of a technology that allows for real-world pop-ups: Keith Miller at the University of Missouri-St. Louis and Bo Brinkman at Miami University in Ohio. Both men are dismissive of Pokémon Go because smartphones are actually behind the times when it comes to A.R.

“A very important question is who controls these augmentations,” Miller says. “It’s a huge responsibility to take over someone’s world — you could manipulate people. You could nudge them.”

Can we build AI without losing control over it? — from ted.com by Sam Harris

Description:

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.

Do no harm, don’t discriminate: official guidance issued on robot ethics — from theguardian.com
Robot deception, addiction and possibility of AIs exceeding their remits noted as hazards that manufacturers should consider

Excerpt:

Isaac Asimov gave us the basic rules of good robot behaviour: don’t harm humans, obey orders and protect yourself. Now the British Standards Institute has issued a more official version aimed at helping designers create ethically sound robots.

The document, BS8611 Robots and robotic devices, is written in the dry language of a health and safety manual, but the undesirable scenarios it highlights could be taken directly from fiction. Robot deception, robot addiction and the possibility of self-learning systems exceeding their remits are all noted as hazards that manufacturers should consider.

World’s first baby born with new “3 parent” technique — from newscientist.com by Jessica Hamzelou

Excerpt:

It’s a boy! A five-month-old boy is the first baby to be born using a new technique that incorporates DNA from three people, New Scientist can reveal. “This is great news and a huge deal,” says Dusko Ilic at King’s College London, who wasn’t involved in the work. “It’s revolutionary.”

The controversial technique, which allows parents with rare genetic mutations to have healthy babies, has only been legally approved in the UK. But the birth of the child, whose Jordanian parents were treated by a US-based team in Mexico, should fast-forward progress around the world, say embryologists.

Scientists Grow Full-Sized, Beating Human Hearts From Stem Cells — from popsci.com by Alexandra Ossola
It’s the closest we’ve come to growing transplantable hearts in the lab

Excerpt:

Of the 4,000 Americans waiting for heart transplants, only 2,500 will receive new hearts in the next year. Even for those lucky enough to get a transplant, the biggest risk is that their bodies will reject the new heart and launch a massive immune reaction against the foreign cells. To combat the problems of organ shortage and decrease the chance that a patient’s body will reject it, researchers have been working to create synthetic organs from patients’ own cells. Now a team of scientists from Massachusetts General Hospital and Harvard Medical School has gotten one step closer, using adult skin cells to regenerate functional human heart tissue, according to a study published recently in the journal Circulation Research.

Achieving trust through data ethics — from sloanreview.mit.edu
Success in the digital age requires a new kind of diligence in how companies gather and use data.

Excerpt:

A few months ago, Danish researchers used data-scraping software to collect the personal information of nearly 70,000 users of a major online dating site as part of a study they were conducting. The researchers then published their results on an open scientific forum. Their report included the usernames, political leanings, drug usage, and other intimate details of each account.

A firestorm ensued. Although the data gathered and subsequently released was already publicly available, many questioned whether collecting, bundling, and broadcasting the data crossed serious ethical and legal boundaries.

In today’s digital age, data is the primary form of currency. Simply put: Data equals information equals insights equals power.

Technology is advancing at an unprecedented rate — along with data creation and collection. But where should the line be drawn? Where do basic principles come into play to consider the potential harm from data’s use?

“Data Science Ethics” course — from the University of Michigan on edX.org
Learn how to think through the ethics surrounding privacy, data sharing, and algorithmic decision-making.

About this course
As patients, we care about the privacy of our medical record; but as patients, we also wish to benefit from the analysis of data in medical records. As citizens, we want a fair trial before being punished for a crime; but as citizens, we want to stop terrorists before they attack us. As decision-makers, we value the advice we get from data-driven algorithms; but as decision-makers, we also worry about unintended bias. Many data scientists learn the tools of the trade and get down to work right away, without appreciating the possible consequences of their work.

This course, focused on ethics specifically related to data science, will provide you with the framework to analyze these concerns. This framework is based on ethics, which are shared values that help differentiate right from wrong. Ethics are not law, but they are usually the basis for laws.

Everyone, including data scientists, will benefit from this course. No previous knowledge is needed.

Science, Technology, and the Future of Warfare — from mwi.usma.edu by Margaret Kosal

Excerpt:

We know that emerging innovations within cutting-edge science and technology (S&T) areas carry the potential to revolutionize governmental structures, economies, and life as we know it. Yet, others have argued that such technologies could yield doomsday scenarios and that military applications of such technologies have even greater potential than nuclear weapons to radically change the balance of power. These S&T areas include robotics and autonomous unmanned system; artificial intelligence; biotechnology, including synthetic and systems biology; the cognitive neurosciences; nanotechnology, including stealth meta-materials; additive manufacturing (aka 3D printing); and the intersection of each with information and computing technologies, i.e., cyber-everything. These concepts and the underlying strategic importance were articulated at the multi-national level in NATO’s May 2010 New Strategic Concept paper: “Less predictable is the possibility that research breakthroughs will transform the technological battlefield…. The most destructive periods of history tend to be those when the means of aggression have gained the upper hand in the art of waging war.”

Low-Cost Gene Editing Could Breed a New Form of Bioterrorism — from bigthink.com by Philip Perry

Excerpt:

2012 saw the advent of gene editing technique CRISPR-Cas9. Now, just a few short years later, gene editing is becoming accessible to more of the world than its scientific institutions. This new technique is now being used in public health projects, to undermine the ability of certain mosquitoes to transmit disease, such as the Zika virus. But that initiative has had many in the field wondering whether it could be used for the opposite purpose, with malicious intent.

Back in February, U.S. National Intelligence Director James Clapper put out a Worldwide Threat Assessment, to alert the intelligence community of the potential risks posed by gene editing. The technology, which holds incredible promise for agriculture and medicine, was added to the list of weapons of mass destruction.

It is thought that amateur terrorists, non-state actors such as ISIS, or rogue states such as North Korea, could get their hands on it, and use this technology to create a bioweapon such as the earth has never seen, causing wanton destruction and chaos without any way to mitigate it.

What would happen if gene editing fell into the wrong hands?

Robot nurses will make shortages obsolete — from thedailybeast.com by Joelle Renstrom
By 2022, one million nurse jobs will be unfilled—leaving patients with lower quality care and longer waits. But what if robots could do the job?

Excerpt:

Japan is ahead of the curve when it comes to this trend, given that its elderly population is the highest of any country. Toyohashi University of Technology has developed Terapio, a robotic medical cart that can make hospital rounds, deliver medications and other items, and retrieve records. It follows a specific individual, such as a doctor or nurse, who can use it to record and access patient data. Terapio isn’t humanoid, but it does have expressive eyes that change shape and make it seem responsive. This type of robot will likely be one of the first to be implemented in hospitals because it has fairly minimal patient contact, works with staff, and has a benign appearance.

Partnership on AI (September 2016)

Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.

GOALS

Support Best Practices
To support research and recommend best practices in areas including ethics, fairness, and inclusivity; transparency and interoperability; privacy; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology.

Create an Open Platform for Discussion and Engagement
To provide a regular, structured platform for AI researchers and key stakeholders to communicate directly and openly with each other about relevant issues.

Advance Understanding
To advance public understanding and awareness of AI and its potential benefits and potential costs; to act as a trusted and expert point of contact as questions/concerns arise from the public and others in the area of AI; and to regularly update key constituents on the current state of AI progress.

IBM Watson’s latest gig: Improving cancer treatment with genomic sequencing — from techrepublic.com by Alison DeNisco
A new partnership between IBM Watson Health and Quest Diagnostics will combine Watson’s cognitive computing with genetic tumor sequencing for more precise, individualized cancer care.

 
Addendum on 11/1/16:

An open letter to Microsoft and Google’s Partnership on AI — from wired.com by Gerd Leonhard
In a world where machines may have an IQ of 50,000, what will happen to the values and ethics that underpin privacy and free will?

Excerpt:

Dear Francesca, Eric, Mustafa, Yann, Ralf, Demis and others at IBM, Microsoft, Google, Facebook and Amazon.

The Partnership on AI to benefit people and society is a welcome change from the usual celebration of disruption and magic technological progress. I hope it will also usher in a more holistic discussion about the global ethics of the digital age. Your announcement also coincides with the launch of my book Technology vs. Humanity which dramatises this very same question: How will technology stay beneficial to society?

This open letter is my modest contribution to the unfolding of this new partnership. Data is the new oil – which now makes your companies the most powerful entities on the globe, way beyond oil companies and banks. The rise of ‘AI everywhere’ is certain to only accelerate this trend. Yet unlike the giants of the fossil-fuel era, there is little oversight on what exactly you can and will do with this new data-oil, and what rules you’ll need to follow once you have built that AI-in-the-sky. There appears to be very little public stewardship, while accepting responsibility for the consequences of your inventions is rather slow in surfacing.

If you doubt that we are on an exponential pace of change, you need to check these articles out! [Christian]

From DSC:
The articles listed in this PDF document demonstrate the exponential pace of technological change that many nations across the globe are currently experiencing and will likely be experiencing for the foreseeable future. As we are no longer on a linear trajectory, we need to consider what this new trajectory means for how we:

  • Educate and prepare our youth in K-12
  • Educate and prepare our young men and women studying within higher education
  • Restructure/re-envision our corporate training/L&D departments
  • Equip our freelancers and others to find work
  • Help people in the workforce remain relevant/marketable/properly skilled
  • Encourage and better enable lifelong learning
  • Attempt to keep up w/ this pace of change — legally, ethically, morally, and psychologically

PDF file here

One thought that comes to mind…when we’re moving this fast, we need to be looking upwards and outwards into the horizons — constantly pulse-checking the landscapes. We can’t be looking down or be so buried in our current positions/tasks that we aren’t noticing the changes that are happening around us.

From DSC:
This posting is meant to surface the need for debates/discussions, new policy decisions, and for taking the time to seriously reflect upon what type of future we want. Given the pace of technological change, we need to be constantly asking ourselves what kind of future we want and then actively creating that future — instead of just letting things happen because they can happen. (i.e., just because something can be done doesn’t mean it should be done.)

Gerd Leonhard’s work is relevant here.  In the resource immediately below, Gerd asserts:

I believe we urgently need to start debating and crafting a global Digital Ethics Treaty. This would delineate what is and is not acceptable under different circumstances and conditions, and specify who would be in charge of monitoring digressions and aberrations.

I am also including some other relevant items here that bear witness to the increasingly rapid speed at which we’re moving now.


Redefining the relationship of man and machine: here is my narrated chapter from the ‘The Future of Business’ book (video, audio and pdf) — from futuristgerd.com by Gerd Leonhard

Robot revolution: rise of ‘thinking’ machines could exacerbate inequality — from theguardian.com by Heather Stewart
Global economy will be transformed over next 20 years at risk of growing inequality, say analysts

Excerpt (emphasis DSC):

A “robot revolution” will transform the global economy over the next 20 years, cutting the costs of doing business but exacerbating social inequality, as machines take over everything from caring for the elderly to flipping burgers, according to a new study.

As well as robots performing manual jobs, such as hoovering the living room or assembling machine parts, the development of artificial intelligence means computers are increasingly able to “think”, performing analytical tasks once seen as requiring human judgment.

In a 300-page report, revealed exclusively to the Guardian, analysts from investment bank Bank of America Merrill Lynch draw on the latest research to outline the impact of what they regard as a fourth industrial revolution, after steam, mass production and electronics.

“We are facing a paradigm shift which will change the way we live and work,” the authors say. “The pace of disruptive technological innovation has gone from linear to parabolic in recent years. Penetration of robots and artificial intelligence has hit every industry sector, and has become an integral part of our daily lives.”

First genetically modified humans could exist within two years — from telegraph.co.uk by Sarah Knapton
Biotech company Editas Medicine is planning to start human trials to genetically edit genes and reverse blindness

Excerpt:

Humans who have had their DNA genetically modified could exist within two years after a private biotech company announced plans to start the first trials into a ground-breaking new technique.

Editas Medicine, which is based in the US, said it plans to become the first lab in the world to ‘genetically edit’ the DNA of patients suffering from a genetic condition – in this case the blinding disorder ‘leber congenital amaurosis’.

Gartner predicts our digital future — from gartner.com by Heather Levy
Gartner’s Top 10 Predictions herald what it means to be human in a digital world.

Excerpt:

Here’s a scene from our digital future: You sit down to dinner at a restaurant where your server was selected by a “robo-boss” based on an optimized match of personality and interaction profile, and the angle at which he presents your plate, or how quickly he smiles can be evaluated for further review.  Or, perhaps you walk into a store to try on clothes and ask the digital customer assistant embedded in the mirror to recommend an outfit in your size, in stock and on sale. Afterwards, you simply tell it to bill you from your mobile and skip the checkout line.

These scenarios describe two predictions in what will be an algorithmic and smart machine driven world where people and machines must define harmonious relationships. In his session at Gartner Symposium/ITxpo 2016 in Orlando, Daryl Plummer, vice president, distinguished analyst and Gartner Fellow, discussed how Gartner’s Top Predictions begin to separate us from the mere notion of technology adoption and draw us more deeply into issues surrounding what it means to be human in a digital world.

Univ. of Washington faculty study legal, social complexities of augmented reality — from phys.org

Excerpt:

But augmented reality will also bring challenges for law, public policy and privacy, especially pertaining to how information is collected and displayed. Issues regarding surveillance and privacy, free speech, safety, intellectual property and distraction—as well as potential discrimination—are bound to follow.

The Tech Policy Lab brings together faculty and students from the School of Law, Information School and Computer Science & Engineering Department and other campus units to think through issues of technology policy. “Augmented Reality: A Technology and Policy Primer” is the lab’s first official white paper aimed at a policy audience. The paper is based in part on research presented at the 2015 International Joint Conference on Pervasive and Ubiquitous Computing, or UbiComp conference.

Along these same lines, also see:

  • Augmented Reality: Figuring Out Where the Law Fits — from rdmag.com by Greg Watry
    Excerpt:
    With AR comes potential issues the authors divide into two categories. “The first is collection, referring to the capacity of AR to record, or at least register, the people and places around the user. Collection raises obvious issues of privacy but also less obvious issues of free speech and accountability,” the researchers write. The second issue is display, which “raises a variety of complex issues ranging from possible tort liability should the introduction or withdrawal of information lead to injury, to issues surrounding employment discrimination or racial profiling.”

    Current privacy law in the U.S. allows video and audio recording in areas that “do not attract an objectively reasonable expectation of privacy,” says Newell. Further, many uses of AR would be covered under the First Amendment right to record audio and video, especially in public spaces. However, as AR increasingly becomes more mobile, “it has the potential to record inconspicuously in a variety of private or more intimate settings, and I think these possibilities are already straining current privacy law in the U.S.,” says Newell.

Stuart Russell on Why Moral Philosophy Will Be Big Business in Tech — from kqed.org

Excerpt (emphasis DSC):

Our first Big Think comes from Stuart Russell. He’s a computer science professor at UC Berkeley and a world-renowned expert in artificial intelligence. His Big Think?

“In the future, moral philosophy will be a key industry sector,” says Russell.

Translation? In the future, the nature of human values and the process by which we make moral decisions will be big business in tech.

Life, enhanced: UW professors study legal, social complexities of an augmented reality future — from washington.edu by Peter Kelley

Excerpt:

But augmented reality will also bring challenges for law, public policy and privacy, especially pertaining to how information is collected and displayed. Issues regarding surveillance and privacy, free speech, safety, intellectual property and distraction — as well as potential discrimination — are bound to follow.

An excerpt from:

UW-AR-TechPolicyPrimer-Nov2015

THREE: CHALLENGES FOR LAW AND POLICY
AR systems change human experience and, consequently, stand to challenge certain assumptions of law and policy. The issues AR systems raise may be divided into roughly two categories. The first is collection, referring to the capacity of AR devices to record, or at least register, the people and places around the user. Collection raises obvious issues of privacy but also less obvious issues of free speech and accountability. The second rough category is display, referring to the capacity of AR to overlay information over people and places in something like real-time. Display raises a variety of complex issues ranging from possible tort liability should the introduction or withdrawal of information lead to injury, to issues surrounding employment discrimination or racial profiling. Policymakers and stakeholders interested in AR should consider what these issues mean for them. Issues related to the collection of information include…

HR tech is getting weird, and here’s why — from hrmorning.com by guest poster Julia Scavicchio

Excerpt (emphasis DSC):

Technology has progressed to the point where it’s possible for HR to learn almost everything there is to know about employees — from what they’re doing moment-to-moment at work to what they’re doing on their off hours. Guest poster Julia Scavicchio takes a long hard look at the legal and ethical implications of these new investigative tools.  

Why on Earth does HR need all this data? The answer is simple — HR is not on Earth, it’s in the cloud.

The department transcends traditional roles when data enters the picture.

Many ethical questions posed through technology easily come and go because they seem out of this world.

18 AI researchers reveal the most impressive thing they’ve ever seen — from businessinsider.com by Guia Marie Del Prado

Excerpt:

Where will these technologies take us next? Well to know that we should determine what’s the best of the best now. Tech Insider talked to 18 AI researchers, roboticists, and computer scientists to see what real-life AI impresses them the most.

“The DeepMind system starts completely from scratch, so it is essentially just waking up, seeing the screen of a video game and then it works out how to play the video game to a superhuman level, and it does that for about 30 different video games.  That’s both impressive and scary in the sense that if a human baby was born and by the evening of its first day was already beating human beings at video games, you’d be terrified.”
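
The system described in that quote is DeepMind’s deep reinforcement learning agent, which learned Atari games from raw pixels and reward signals alone. As a loose illustration of the underlying idea only (not DeepMind’s actual implementation, which used a deep neural network rather than a lookup table), here is a minimal tabular Q-learning sketch on an invented toy “game”; the environment and all parameters are made up for the example:

```python
import random

def train_q_table(n_states=5, n_actions=2, episodes=500,
                  alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy 'game': a chain of states where
    action 1 moves right (reward 1.0 on reaching the final state)
    and action 0 stays put (reward 0). Like the DeepMind agent, it
    starts from scratch and learns only from the reward signal."""
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        state = 0
        while state < n_states - 1:
            # epsilon-greedy: usually exploit, occasionally explore
            if rng.random() < epsilon:
                action = rng.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: q[state][a])
            next_state = state + 1 if action == 1 else state
            reward = 1.0 if next_state == n_states - 1 else 0.0
            # standard Q-learning update toward reward + discounted value
            q[state][action] += alpha * (
                reward + gamma * max(q[next_state]) - q[state][action])
            state = next_state
    return q

q = train_q_table()
# The learned greedy policy chooses "move right" (action 1) in every state.
policy = [max(range(2), key=lambda a: q[s][a]) for s in range(4)]
print(policy)  # → [1, 1, 1, 1]
```

The same trial-and-error loop, scaled up by replacing the table with a deep neural network and the toy states with game pixels, is what allowed the DeepMind agent to reach superhuman play across dozens of Atari titles.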

Algorithmic Economy: Powering the Machine-to-Machine Age Economic Revolution — from formtek.com by Dick Weisinger

Excerpts:

As technology advances, we are becoming increasingly dependent on algorithms for everything in our lives.  Algorithms that can solve our daily problems and tasks will do things like drive vehicles, control drone flight, and order supplies when they run low.  Algorithms are defining the future of business and even our everyday lives.

Sondergaard said that “in 2020, consumers won’t be using apps on their devices; in fact, they will have forgotten about apps. They will rely on virtual assistants in the cloud, things they trust. The post-app era is coming.  The algorithmic economy will power the next economic revolution in the machine-to-machine age. Organizations will be valued, not just on their big data, but on the algorithms that turn that data into actions that ultimately impact customers.”
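
Sondergaard’s example of algorithms that “order supplies when they run low” is, at its simplest, a reorder-point policy: compare current stock against a threshold and order back up to a target level. A minimal sketch of that pattern (the item names and thresholds are invented for illustration):

```python
def reorder(stock, reorder_point, order_up_to):
    """Return the quantity to order for each item whose stock has
    fallen to or below its reorder point (an order-up-to policy)."""
    orders = {}
    for item, qty in stock.items():
        if qty <= reorder_point[item]:
            orders[item] = order_up_to[item] - qty
    return orders

# Example: paper has dropped below its threshold, toner has not.
stock = {"paper": 2, "toner": 9}
reorder_point = {"paper": 5, "toner": 3}
order_up_to = {"paper": 20, "toner": 12}
print(reorder(stock, reorder_point, order_up_to))  # → {'paper': 18}
```

Run on a schedule or triggered by sensor readings, a loop like this is the machine-to-machine transaction Sondergaard describes: no app, no human in the loop, just an algorithm acting on data.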

Related items:

Addendum on 12/14/15:

  • Algorithms rule our lives, so who should rule them? — from qz.com by Dries Buytaert
    As technology advances and more everyday objects are driven almost entirely by software, it’s become clear that we need a better way to catch cheating software and keep people safe.

From DSC:
Many times we don’t want to hear news that could be troubling in terms of our futures. But we need to deal with these trends now or face the destabilization that Harold Jarche mentions in his posting below. 

The topics found in the following items should be discussed in courses involving economics, business, political science, psychology, futurism, engineering, religion*, robotics, marketing, the law/legal affairs and others throughout the world.  These trends are massive and have enormous ramifications for our societies in the not-too-distant future.

* When I mention religion classes here, I’m thinking of questions such as:

  • What does God have in mind for the place of work in our lives? Is it good for us? Why or why not?
  • How might these trends impact one’s vocation/calling?
  • …and I’m sure that professors who teach faith/religion-related courses can think of other questions to pursue

turmoil and transition — from jarche.com by Harold Jarche

Excerpts (emphasis DSC):

One of the greatest issues that will face Canada, and many developed countries in the next decade will be wealth distribution. While it does not currently appear to be a major problem, the disparity between rich and poor will increase. The main reason will be the emergence of a post-job economy. The ‘job’ was the way we redistributed wealth, making capitalists pay for the means of production and in return creating a middle class that could pay for mass produced goods. That period is almost over. From self-driving vehicles to algorithms replacing knowledge workers, employment is not keeping up with production. Value in the network era is accruing to the owners of the platforms, with companies such as Instagram reaching $1 billion valuations with only 13 employees.

The emerging economy of platform capitalism includes companies like Amazon, Facebook, Google, and Apple. These giants combined do not employ as many people as General Motors did.  But the money accrued by them is enormous and remains in a few hands. The rest of the labour market has to find ways to cobble together a living income. Hence we see many people willing to drive for a company like Uber in order to increase cash-flow. But drivers for Uber have no career track. The platform owners get richer, but the drivers are limited by finite time. They can only drive so many hours per day, and without benefits. At the same time, those self-driving cars are poised to replace all Uber drivers in the near future. Standardized work, like driving a vehicle, has little future in a world of nano-bio-cogno-techno progress.

For the past century, the job was the way we redistributed wealth and protected workers from the negative aspects of early capitalism. As the knowledge economy disappears, we need to re-think our concepts of work, income, employment, and most importantly education. If we do not find ways to help citizens lead productive lives, our society will face increasing destabilization. 

Also see:

Will artificial intelligence and robots take your marketing job? — from markedu.com
Technology will overtake jobs to an extent and at a rate we have not seen before. Artificial intelligence is threatening jobs even in service and knowledge intensive sectors. This begs the question: are robots threatening to take your marketing job?

Excerpt:

What exactly is a human job?
The benefits of artificial intelligence are obvious. Massive productivity gains while a new layer of personalized services from your computer – whether that is a burger robot or Dr. Watson. But artificial intelligence has a bias. Many jobs will be lost.

A few years ago a study from the University of Oxford got quite a bit of attention. The study said that 47 percent of the US labor market could be replaced by intelligent computers within the next 20 years.

The losers are a wide range of job categories within the administration, service, sales, transportation and manufacturing.

Before long we should – or must – redefine what exactly a human job is and the usefulness of it. How we as humans can best complement the extraordinary capabilities of artificial intelligence.

This development is expected to grow fast. There are different predictions about the timing, but by 2030 there will be very few tasks that only a human can solve.

© 2018 | Daniel Christian