2018 TECH TRENDS REPORT — from the Future Today Institute
Emerging technology trends that will influence business, government, education, media and society in the coming year.

Description:

The Future Today Institute’s 11th annual Tech Trends Report identifies 235 tantalizing advancements in emerging technologies—artificial intelligence, biotech, autonomous robots, green energy and space travel—that will begin to enter the mainstream and fundamentally disrupt business, geopolitics and everyday life around the world. Our annual report has garnered more than six million cumulative views, and this edition is our largest to date.

Helping organizations see change early and calculate the impact of new trends is why we publish our annual Emerging Tech Trends Report, which focuses on mid- to late-stage emerging technologies that are on a growth trajectory.

In this edition of the FTI Tech Trends Report, we’ve included several new features and sections:

  • a list and map of the world’s smartest cities
  • a calendar of events that will shape technology this year
  • detailed near-future scenarios for several of the technologies
  • a new framework to help organizations decide when to take action on trends
  • an interactive table of contents, which will allow you to more easily navigate the report from the bookmarks bar in your PDF reader

 


 

01 How does this trend impact our industry and all of its parts?
02 How might global events — politics, climate change, economic shifts — impact this trend, and as a result, our organization?
03 What are the second, third, fourth, and fifth-order implications of this trend as it evolves, both in our organization and our industry?
04 What are the consequences if our organization fails to take action on this trend?
05 Does this trend signal emerging disruption to our traditional business practices and cherished beliefs?
06 Does this trend indicate a future disruption to the established roles and responsibilities within our organization? If so, how do we reverse-engineer that disruption and deal with it in the present day?
07 How are the organizations in adjacent spaces addressing this trend? What can we learn from their failures and best practices?
08 How will the wants, needs and expectations of our consumers/constituents change as a result of this trend?
09 Where does this trend create potential new partners or collaborators for us?
10 How does this trend inspire us to think about the future of our organization?

 


 

 

With great tech success, comes even greater responsibility — from techcrunch.com by Ron Miller

Excerpts:

As we watch major tech platforms evolve over time, it’s clear that companies like Facebook, Apple, Google and Amazon (among others) have created businesses that are having a huge impact on humanity — sometimes positive and other times not so much.

That suggests that these platforms have to understand how people are using them and when they are trying to manipulate them or use them for nefarious purposes — or the companies themselves are. We can apply that same responsibility filter to individual technologies like artificial intelligence and indeed any advanced technologies and the impact they could possibly have on society over time.

We can be sure that Twitter’s creators never imagined a world where bots would be launched to influence an election when they created the company more than a decade ago. Over time though, it becomes crystal clear that Twitter, and indeed all large platforms, can be used for a variety of motivations, and the platforms have to react when they think there are certain parties who are using their networks to manipulate parts of the populace.

 

 

But it’s up to the companies who are developing the tech to recognize the responsibility that comes with great economic success, or simply with the impact that whatever they are creating could have on society.

 

 

 

 

Why the Public Overlooks and Undervalues Tech’s Power — from morningconsult.com by Joanna Piacenza
Some experts say the tech industry is rapidly nearing a day of reckoning

Excerpts:

  • 5% picked tech when asked which industry had the most power and influence, well behind the U.S. government, Wall Street and Hollywood.
  • Respondents were much more likely to say sexual harassment was a major issue in Hollywood (49%) and government (35%) than in Silicon Valley (17%).

It is difficult for Americans to escape the technology industry’s influence in everyday life. Facebook Inc. reports that more than 184 million people in the United States log on to the social network daily, or roughly 56 percent of the population. According to the Pew Research Center, nearly three-quarters (73 percent) of all Americans and 94 percent of Americans ages 18-24 use YouTube. Amazon.com Inc.’s market value is now nearly three times that of Walmart Inc.

But when asked which geographic center holds the most power and influence in America, respondents in a recent Morning Consult survey ranked the tech industry in Silicon Valley far behind politics and government in Washington, finance on Wall Street and the entertainment industry in Hollywood.

 

 

 

 

The Space Satellite Revolution Could Turn Earth into a Surveillance Nightmare — from scout.ai by Becky Ferreira
Laser communication between satellites is revolutionizing our ability to track climate change, manage resources, and respond to natural disasters. But there are downsides to putting Earth under a giant microscope.

Excerpts:

And while universal broadband has the potential to open up business and education opportunities to hundreds of thousands of people, it’s the real-time satellite feeds of earth that may have both the most immediate and widespread financial upsides — and the most frightening surveillance implications — for the average person here on earth.

Among the industries most likely to benefit from laser communications between these satellites are agriculture and forestry.

Satellite data can also be used to engage the public in humanitarian efforts. In the wake of Typhoon Haiyan, DigitalGlobe launched online crowdsourcing campaigns to map damage and help NGOs respond on the ground. And they’ve been identifying vulnerable communities in South Sudan as the nation suffers through unrest and famine.

In an age of intensifying natural disasters, combining these tactics with live satellite video feeds could mean the difference between life and death for thousands of people.

Should a company, for example, be able to use real-time video feeds to track your physical location, perhaps in order to better target advertising? Should they be able to use facial recognition and sentiment analysis algorithms to assess your reactions to those ads in real time?

While these commercially available images aren’t yet sharp enough to pick up intimate details like faces or phone screens, it’s foreseeable that regulations will be eased to accommodate even sharper images. That trend will continue to prompt privacy concerns, especially if a switch to laser-based satellite communication enables near real-time coverage at high resolutions.

A kaleidoscopic swirl of possible futures confronts us, filled with scenarios where law enforcement officials could rewind satellite footage to identify people at a crime scene, or on a more familial level, parents could remotely watch their kids — or keep tabs on each other — from space. In that world, it’s not hard to imagine privacy becoming even more of a commodity, with wealthy enclaves lobbying to be erased from visual satellite feeds, in a geospatial version of “gated communities.”

 

 

From DSC:
The pros and cons of technologies…hmmm…this article nicely captures the pluses and minuses that societies around the globe need to be aware of, struggle with, and discuss with each other. Some exciting things here, but some disturbing things here as well.

 

 

 

Fake videos are on the rise. As they become more realistic, seeing shouldn’t always be believing — from latimes.com by David Pierson

Excerpts:

It’s not hard to imagine a world in which social media is awash with doctored videos targeting ordinary people to exact revenge, extort or to simply troll.

In that scenario, where Twitter and Facebook are algorithmically flooded with hoaxes, no one could fully believe what they see. Truth, already diminished by Russia’s misinformation campaign and President Trump’s proclivity to label uncomplimentary journalism “fake news,” would be more subjective than ever.

The danger there is not just believing hoaxes, but also dismissing what’s real.

The consequences could be devastating for the notion of evidentiary video, long considered the paradigm of proof given the sophistication required to manipulate it.

“This goes far beyond ‘fake news’ because you are dealing with a medium, video, that we traditionally put a tremendous amount of weight on and trust in,” said David Ryan Polgar, a writer and self-described tech ethicist.

 

 

 

 

From DSC:
Though I’m typically pro-technology, this is truly disturbing. There are certainly downsides to technology as well as upsides — but it’s how we use a technology that can make the real difference. Again, this is truly disturbing.

 

 

2018 Tech Trends for Journalism & Media Report, plus the 2017 Tech Trends Annual Report (which I had previously missed), from the Future Today Institute

 

2018 Tech Trends For Journalism Report — from the Future Today Institute

Key Takeaways

  • 2018 marks the beginning of the end of smartphones in the world’s largest economies. What’s coming next are conversational interfaces with zero-UIs. This will radically change the media landscape, and now is the best time to start thinking through future scenarios.
  • In 2018, a critical mass of emerging technologies will converge finding advanced uses beyond initial testing and applied research. That’s a signal worth paying attention to. News organizations should devote attention to emerging trends in voice interfaces, the decentralization of content, mixed reality, new types of search, and hardware (such as CubeSats and smart cameras).
  • Journalists need to understand what artificial intelligence is, what it is not, and what it means for the future of news. AI research has advanced enough that it is now a core component of our work at FTI. You will see the AI ecosystem represented in many of the trends in this report, and it is vitally important that all decision-makers within news organizations familiarize themselves with the current and emerging AI landscapes. We have included an AI Primer For Journalists in our Trend Report this year to aid in that effort.
  • Decentralization emerged as a key theme for 2018. Among the companies and organizations FTI covers, we discovered a new emphasis on restricted peer-to-peer networks to detect harassment, share resources and connect with sources. There is also a push by some democratic governments around the world to divide internet access and to restrict certain content, effectively creating dozens of “splinternets.”
  • Consolidation is also a key theme for 2018. News brands, broadcast spectrum, and artificial intelligence startups will continue to be merged with and acquired by relatively few corporations. Pending legislation and policy in the U.S., E.U. and in parts of Asia could further concentrate the power among a small cadre of information and technology organizations in the year ahead.
  • To understand the future of news, you must pay attention to the future of many industries and research areas in the coming year. When journalists think about the future, they should broaden the usual scope to consider developments from myriad other fields also participating in the knowledge economy. Technology begets technology. We are witnessing an explosion in slow motion.

Those in the news ecosystem should factor the trends in this report into their strategic thinking for the coming year, and adjust their planning, operations and business models accordingly.

 



 

 

2017 Tech Trends Annual Report — from the Future Today Institute; this is the first time I’ve seen this solid report

Excerpts:

This year’s report has 159 trends.
This is mostly due to the fact that 2016 was the year that many areas of science and technology finally started to converge. As a result we’re seeing a sort of slow-motion explosion—we will undoubtedly look back on the last part of this decade as a pivotal moment in our history on this planet.

Our 2017 Trend Report reveals strategic opportunities and challenges for your organization in the coming year. The Future Today Institute’s annual Trend Report prepares leaders and organizations for the year ahead, so that you are better positioned to see emerging technology and adjust your strategy accordingly. Use our report to identify near-future business disruption and competitive threats while simultaneously finding new collaborators and partners. Most importantly, use our report as a jumping off point for deeper strategic planning.

 

 



 

Also see:

Emerging eLearning Tools and Platforms Improve Results — from learningsolutionsmag.com

  • Augmented and virtual reality offer ways to immerse learners in experiences that can aid training in processes and procedures, provide realistic simulations to deepen empathy and build communication skills, or provide in-the-workflow support for skilled technicians performing complex procedures.
  • Badges and other digital credentials provide new ways to assess and validate employees’ skills and mark their eLearning achievements, even if their learning takes place informally or outside of the corporate framework.
  • Chatbots are proving an excellent tool for spaced learning, review of course materials, guiding new hires through onboarding, and supporting new managers with coaching and tips.
  • Content curation enables L&D professionals to provide information and educational materials from trusted sources that can deepen learners’ knowledge and help them build skills.
  • eBooks, a relative newcomer to the eLearning arena, offer rich features for portable on-demand content that learners can explore, review, and revisit as needed.
  • Interactive videos provide branching scenarios, quiz learners on newly introduced concepts and terms, offer prompts for small-group discussions, and do much more to engage learners.
  • Podcasts can turn drive time into productive time, allowing learners to enjoy a story built around eLearning content.
  • Smartphone apps, available wherever learners take their phones or tablets, can be designed to offer product support, info for sales personnel, up-to-date information for repair technicians, and games and drills for teaching and reviewing content; the possibilities are limited only by designers’ imagination.
  • Social platforms like Slack, Yammer, or Instagram facilitate collaboration, sharing of ideas, networking, and social learning. Adopting social learning platforms encourages learners to develop their skills and contribute to their communities of practice, whether inside their companies or more broadly.
  • xAPI turns any experience into a learning experience. Adding xAPI capability to any suitable tool or platform means you can record learner activity and progress in a learning record store (LRS) and track it.
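
To make the xAPI item above concrete: an xAPI statement is just a small JSON document (an actor, a verb, and an object) that gets POSTed to a Learning Record Store’s /statements endpoint. Below is a minimal sketch in Python; the LRS endpoint URL and the Basic-auth credentials are hypothetical placeholders for illustration, not a real service.

```python
# Minimal sketch: record one learner activity in an LRS via xAPI.
# The endpoint and credentials below are hypothetical placeholders.

import requests  # third-party library: pip install requests

LRS_ENDPOINT = "https://lrs.example.com/xapi"  # placeholder LRS base URL
AUTH = ("lrs_key", "lrs_secret")               # placeholder Basic-auth credentials

statement = {
    "actor": {
        "name": "Pat Learner",
        "mbox": "mailto:pat.learner@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://example.com/courses/onboarding/module-1",
        "definition": {"name": {"en-US": "Onboarding Module 1"}},
    },
}

response = requests.post(
    f"{LRS_ENDPOINT}/statements",
    json=statement,
    auth=AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},  # version header required by the xAPI spec
)
response.raise_for_status()
print("Stored statement id(s):", response.json())  # the LRS returns a list of statement ids
```

Once statements like this are flowing into the LRS, reporting tools can query the same /statements endpoint to track progress across courses, apps, and informal learning experiences.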

 



 

DevLearn Attendees Learn How to ‘Think Like a Futurist’ — from learningsolutionsmag.com

Excerpt:

How does all of this relate to eLearning? Again, Webb anticipated the question. Her response gave hope to some—and terrified others. She presented three possible future scenarios:

  • Everyone in the learning arena learns to recognize weak signals; they work with technologists to refine artificial intelligence to instill values. Future machines learn not only to identify correct and incorrect answers; they also learn right and wrong. Webb said that she gives this optimistic scenario a 25 percent chance of occurring.
  • Everyone present is inspired by her talk but they, and the rest of the learning world, do nothing. Artificial intelligence continues to develop as it has in the past, learning to identify correct answers but lacking values. Webb’s prediction is that this pragmatic scenario has a 50 percent chance of occurring.
  • Learning and artificial intelligence continue to develop on separate tracks. Future artificial intelligence and machine learning projects incorporate real biases that affect what and how people learn and how knowledge is transferred. Webb said that she gives this catastrophic scenario a 25 percent chance of occurring.

In an attempt to end on a strong positive note, Webb said that “the future hasn’t happened yet—we think” and encouraged attendees to take action. “To build the future of learning that you want, listen to weak signals now.”

 



 

 

 

 

 

From DSC:
Can you imagine this as a virtual reality or a mixed reality-based app!?! Very cool.

This resource is incredible on multiple levels:

  • For their interface/interaction design
  • For their insights and ideas
  • For their creativity
  • For their graphics
  • …and more!

 

 

 

 

 

 

 

 

 

 

From DSC:
We are hopefully creating the future that we want — i.e., creating the future of our dreams, not nightmares. The 14 items below show that technology is often waaay out ahead of us…and it takes time for other areas of society to catch up (such as the areas that involve making policies and laws, and deciding whether we should even be doing these things in the first place).

Such reflections always make me ask:

  • Who should be involved in some of these decisions?
  • Who is currently getting asked to the decision-making tables for such discussions?
  • How does the average citizen participate in such discussions?

Readers of this blog know that I’m generally pro-technology. But with the exponential pace of technological change, we need to slow things down enough to make wise decisions.

 


 

Google AI invents its own cryptographic algorithm; no one knows how it works — from arstechnica.co.uk by Sebastian Anthony
Neural networks seem good at devising crypto methods; less good at codebreaking.

Excerpt:

Google Brain has created two artificial intelligences that evolved their own cryptographic algorithm to protect their messages from a third AI, which was trying to evolve its own method to crack the AI-generated crypto. The study was a success: the first two AIs learnt how to communicate securely from scratch.

 

 

IoT growing faster than the ability to defend it — from scientificamerican.com by Larry Greenemeier
Last week’s use of connected gadgets to attack the Web is a wake-up call for the Internet of Things, which will get a whole lot bigger this holiday season

Excerpt:

With this year’s approaching holiday gift season the rapidly growing “Internet of Things” or IoT—which was exploited to help shut down parts of the Web this past Friday—is about to get a lot bigger, and fast. Christmas and Hanukkah wish lists are sure to be filled with smartwatches, fitness trackers, home-monitoring cameras and other wi-fi–connected gadgets that connect to the internet to upload photos, videos and workout details to the cloud. Unfortunately these devices are also vulnerable to viruses and other malicious software (malware) that can be used to turn them into virtual weapons without their owners’ consent or knowledge.

Last week’s distributed denial of service (DDoS) attacks—in which tens of millions of hacked devices were exploited to jam and take down internet computer servers—is an ominous sign for the Internet of Things. A DDoS is a cyber attack in which large numbers of devices are programmed to request access to the same Web site at the same time, creating data traffic bottlenecks that cut off access to the site. In this case the still-unknown attackers used malware known as “Mirai” to hack into devices whose passwords they could guess, because the owners either could not or did not change the devices’ default passwords.
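
To make the mechanism described above concrete, here is a minimal, defensive sketch in Python: Mirai spread largely by finding devices with Telnet exposed and then trying factory-default passwords, so one basic audit is simply checking which devices on your own network still answer on the Telnet ports Mirai scanned. The subnet below is a placeholder for a typical home network; audit only networks you own or administer.

```python
# Minimal sketch: find devices on your own LAN that expose Telnet (ports 23/2323),
# the service the Mirai botnet abused by guessing factory-default passwords.
# The subnet below is a placeholder; adjust it to your network.

import socket
from ipaddress import ip_network

SUBNET = "192.168.1.0/24"   # placeholder home-network range
PORTS = [23, 2323]          # Telnet ports commonly probed by Mirai variants

def is_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for addr in ip_network(SUBNET).hosts():
        for port in PORTS:
            if is_open(str(addr), port):
                print(f"{addr}:{port} is open; check this device and change any default password")
```

Scanning a /24 this way is slow but dependency-free; a real audit would parallelize the checks or use a dedicated tool such as nmap.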

 

 

How to Get Lost in Augmented Reality — from inverse.com by Tanya Basu; with thanks to Woontack Woo for this resource
There are no laws against projecting misinformation. That’s good news for pranksters, criminals, and advertisers.

Excerpt:

Augmented reality offers designers and engineers new tools and artists a new palette, but there’s a dark side to reality-plus. Because A.R. technologies will eventually allow individuals to add flourishes to the environments of others, they will also facilitate the creation of a new type of misinformation and unwanted interactions. There will be advertising (there is always advertising) and there will also be lies perpetrated with optical trickery.

Two computer scientists-turned-ethicists are seriously considering the problematic ramifications of a technology that allows for real-world pop-ups: Keith Miller at the University of Missouri-St. Louis and Bo Brinkman at Miami University in Ohio. Both men are dismissive of Pokémon Go because smartphones are actually behind the times when it comes to A.R.

“A very important question is who controls these augmentations,” Miller says. “It’s a huge responsibility to take over someone’s world — you could manipulate people. You could nudge them.”

 

 

Can we build AI without losing control over it? — from ted.com by Sam Harris

Description:

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.

 

 

Do no harm, don’t discriminate: official guidance issued on robot ethics — from theguardian.com
Robot deception, addiction and possibility of AIs exceeding their remits noted as hazards that manufacturers should consider

Excerpt:

Isaac Asimov gave us the basic rules of good robot behaviour: don’t harm humans, obey orders and protect yourself. Now the British Standards Institute has issued a more official version aimed at helping designers create ethically sound robots.

The document, BS8611 Robots and robotic devices, is written in the dry language of a health and safety manual, but the undesirable scenarios it highlights could be taken directly from fiction. Robot deception, robot addiction and the possibility of self-learning systems exceeding their remits are all noted as hazards that manufacturers should consider.

 

 

World’s first baby born with new “3 parent” technique — from newscientist.com by Jessica Hamzelou

Excerpt:

It’s a boy! A five-month-old boy is the first baby to be born using a new technique that incorporates DNA from three people, New Scientist can reveal. “This is great news and a huge deal,” says Dusko Ilic at King’s College London, who wasn’t involved in the work. “It’s revolutionary.”

The controversial technique, which allows parents with rare genetic mutations to have healthy babies, has only been legally approved in the UK. But the birth of the child, whose Jordanian parents were treated by a US-based team in Mexico, should fast-forward progress around the world, say embryologists.

 

 

Scientists Grow Full-Sized, Beating Human Hearts From Stem Cells — from popsci.com by Alexandra Ossola
It’s the closest we’ve come to growing transplantable hearts in the lab

Excerpt:

Of the 4,000 Americans waiting for heart transplants, only 2,500 will receive new hearts in the next year. Even for those lucky enough to get a transplant, the biggest risk is that their bodies will reject the new heart and launch a massive immune reaction against the foreign cells. To combat the problems of organ shortage and decrease the chance that a patient’s body will reject it, researchers have been working to create synthetic organs from patients’ own cells. Now a team of scientists from Massachusetts General Hospital and Harvard Medical School has gotten one step closer, using adult skin cells to regenerate functional human heart tissue, according to a study published recently in the journal Circulation Research.

 

 

 

Achieving trust through data ethics — from sloanreview.mit.edu
Success in the digital age requires a new kind of diligence in how companies gather and use data.

Excerpt:

A few months ago, Danish researchers used data-scraping software to collect the personal information of nearly 70,000 users of a major online dating site as part of a study they were conducting. The researchers then published their results on an open scientific forum. Their report included the usernames, political leanings, drug usage, and other intimate details of each account.

A firestorm ensued. Although the data gathered and subsequently released was already publicly available, many questioned whether collecting, bundling, and broadcasting the data crossed serious ethical and legal boundaries.

In today’s digital age, data is the primary form of currency. Simply put: Data equals information equals insights equals power.

Technology is advancing at an unprecedented rate — along with data creation and collection. But where should the line be drawn? Where do basic principles come into play to consider the potential harm from data’s use?

 

 

“Data Science Ethics” course — from the University of Michigan on edX.org
Learn how to think through the ethics surrounding privacy, data sharing, and algorithmic decision-making.

About this course
As patients, we care about the privacy of our medical record; but as patients, we also wish to benefit from the analysis of data in medical records. As citizens, we want a fair trial before being punished for a crime; but as citizens, we want to stop terrorists before they attack us. As decision-makers, we value the advice we get from data-driven algorithms; but as decision-makers, we also worry about unintended bias. Many data scientists learn the tools of the trade and get down to work right away, without appreciating the possible consequences of their work.

This course, focused on ethics specifically related to data science, will provide you with the framework to analyze these concerns. This framework is based on ethics, which are shared values that help differentiate right from wrong. Ethics are not law, but they are usually the basis for laws.

Everyone, including data scientists, will benefit from this course. No previous knowledge is needed.

 

 

 

Science, Technology, and the Future of Warfare — from mwi.usma.edu by Margaret Kosal

Excerpt:

We know that emerging innovations within cutting-edge science and technology (S&T) areas carry the potential to revolutionize governmental structures, economies, and life as we know it. Yet, others have argued that such technologies could yield doomsday scenarios and that military applications of such technologies have even greater potential than nuclear weapons to radically change the balance of power. These S&T areas include robotics and autonomous unmanned system; artificial intelligence; biotechnology, including synthetic and systems biology; the cognitive neurosciences; nanotechnology, including stealth meta-materials; additive manufacturing (aka 3D printing); and the intersection of each with information and computing technologies, i.e., cyber-everything. These concepts and the underlying strategic importance were articulated at the multi-national level in NATO’s May 2010 New Strategic Concept paper: “Less predictable is the possibility that research breakthroughs will transform the technological battlefield…. The most destructive periods of history tend to be those when the means of aggression have gained the upper hand in the art of waging war.”

 

 

Low-Cost Gene Editing Could Breed a New Form of Bioterrorism — from bigthink.com by Philip Perry

Excerpt:

2012 saw the advent of gene editing technique CRISPR-Cas9. Now, just a few short years later, gene editing is becoming accessible to more of the world than its scientific institutions. This new technique is now being used in public health projects, to undermine the ability of certain mosquitoes to transmit disease, such as the Zika virus. But that initiative has had many in the field wondering whether it could be used for the opposite purpose, with malicious intent.

Back in February, U.S. National Intelligence Director James Clapper put out a Worldwide Threat Assessment, to alert the intelligence community of the potential risks posed by gene editing. The technology, which holds incredible promise for agriculture and medicine, was added to the list of weapons of mass destruction.

It is thought that amateur terrorists, non-state actors such as ISIS, or rogue states such as North Korea, could get their hands on it, and use this technology to create a bioweapon such as the earth has never seen, causing wanton destruction and chaos without any way to mitigate it.

 

What would happen if gene editing fell into the wrong hands?

 

 

 

Robot nurses will make shortages obsolete — from thedailybeast.com by Joelle Renstrom
By 2022, one million nurse jobs will be unfilled—leaving patients with lower quality care and longer waits. But what if robots could do the job?

Excerpt:

Japan is ahead of the curve when it comes to this trend, given that its elderly population is the highest of any country. Toyohashi University of Technology has developed Terapio, a robotic medical cart that can make hospital rounds, deliver medications and other items, and retrieve records. It follows a specific individual, such as a doctor or nurse, who can use it to record and access patient data. Terapio isn’t humanoid, but it does have expressive eyes that change shape and make it seem responsive. This type of robot will likely be one of the first to be implemented in hospitals because it has fairly minimal patient contact, works with staff, and has a benign appearance.

 

 

 


 

The Partnership on AI was established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.

 

GOALS

Support Best Practices
To support research and recommend best practices in areas including ethics, fairness, and inclusivity; transparency and interoperability; privacy; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology.

Create an Open Platform for Discussion and Engagement
To provide a regular, structured platform for AI researchers and key stakeholders to communicate directly and openly with each other about relevant issues.

Advance Understanding
To advance public understanding and awareness of AI and its potential benefits and potential costs; to act as a trusted and expert point of contact as questions and concerns arise from the public and others in the area of AI; and to regularly update key constituents on the current state of AI progress.

 

 

 

IBM Watson’s latest gig: Improving cancer treatment with genomic sequencing — from techrepublic.com by Alison DeNisco
A new partnership between IBM Watson Health and Quest Diagnostics will combine Watson’s cognitive computing with genetic tumor sequencing for more precise, individualized cancer care.

 

 



Addendum on 11/1/16:



An open letter to Microsoft and Google’s Partnership on AI — from wired.com by Gerd Leonhard
In a world where machines may have an IQ of 50,000, what will happen to the values and ethics that underpin privacy and free will?

Excerpt:

Dear Francesca, Eric, Mustafa, Yann, Ralf, Demis and others at IBM, Microsoft, Google, Facebook and Amazon.

The Partnership on AI to benefit people and society is a welcome change from the usual celebration of disruption and magic technological progress. I hope it will also usher in a more holistic discussion about the global ethics of the digital age. Your announcement also coincides with the launch of my book Technology vs. Humanity which dramatises this very same question: How will technology stay beneficial to society?

This open letter is my modest contribution to the unfolding of this new partnership. Data is the new oil – which now makes your companies the most powerful entities on the globe, way beyond oil companies and banks. The rise of ‘AI everywhere’ is certain to only accelerate this trend. Yet unlike the giants of the fossil-fuel era, there is little oversight on what exactly you can and will do with this new data-oil, and what rules you’ll need to follow once you have built that AI-in-the-sky. There appears to be very little public stewardship, while accepting responsibility for the consequences of your inventions is rather slow in surfacing.

 

 

If you doubt that we are on an exponential pace of change, you need to check these articles out! [Christian]


 

From DSC:
The articles listed in this PDF document demonstrate the exponential pace of technological change that many nations across the globe are currently experiencing and will likely be experiencing for the foreseeable future. As we are no longer on a linear trajectory, we need to consider what this new trajectory means for how we:

  • Educate and prepare our youth in K-12
  • Educate and prepare our young men and women studying within higher education
  • Restructure/re-envision our corporate training/L&D departments
  • Equip our freelancers and others to find work
  • Help people in the workforce remain relevant/marketable/properly skilled
  • Encourage and better enable lifelong learning
  • Attempt to keep up w/ this pace of change — legally, ethically, morally, and psychologically

 

PDF file here

 

One thought that comes to mind…when we’re moving this fast, we need to be looking upwards and outwards into the horizons — constantly pulse-checking the landscapes. We can’t be looking down or be so buried in our current positions/tasks that we aren’t noticing the changes that are happening around us.

 

 

 

From DSC:
This posting is meant to surface the need for debates/discussions, new policy decisions, and for taking the time to seriously reflect upon what type of future we want. Given the pace of technological change, we need to be constantly asking ourselves what kind of future we want and then to be actively creating that future — instead of just letting things happen because they can happen. (i.e., just because something can be done doesn’t mean it should be done.)

Gerd Leonhard’s work is relevant here.  In the resource immediately below, Gerd asserts:

I believe we urgently need to start debating and crafting a global Digital Ethics Treaty. This would delineate what is and is not acceptable under different circumstances and conditions, and specify who would be in charge of monitoring digressions and aberrations.

I am also including some other relevant items here that bear witness to the increasingly rapid speed at which we’re moving now.


 

Redefining the relationship of man and machine: here is my narrated chapter from the ‘The Future of Business’ book (video, audio and pdf) — from futuristgerd.com by Gerd Leonhard


 

 

Robot revolution: rise of ‘thinking’ machines could exacerbate inequality — from theguardian.com by Heather Stewart
Global economy will be transformed over next 20 years at risk of growing inequality, say analysts

Excerpt (emphasis DSC):

A “robot revolution” will transform the global economy over the next 20 years, cutting the costs of doing business but exacerbating social inequality, as machines take over everything from caring for the elderly to flipping burgers, according to a new study.

As well as robots performing manual jobs, such as hoovering the living room or assembling machine parts, the development of artificial intelligence means computers are increasingly able to “think”, performing analytical tasks once seen as requiring human judgment.

In a 300-page report, revealed exclusively to the Guardian, analysts from investment bank Bank of America Merrill Lynch draw on the latest research to outline the impact of what they regard as a fourth industrial revolution, after steam, mass production and electronics.

“We are facing a paradigm shift which will change the way we live and work,” the authors say. “The pace of disruptive technological innovation has gone from linear to parabolic in recent years. Penetration of robots and artificial intelligence has hit every industry sector, and has become an integral part of our daily lives.”

 


 

 

 

First genetically modified humans could exist within two years — from telegraph.co.uk by Sarah Knapton
Biotech company Editas Medicine is planning to start human trials to genetically edit genes and reverse blindness

Excerpt:

Humans who have had their DNA genetically modified could exist within two years after a private biotech company announced plans to start the first trials into a ground-breaking new technique.

Editas Medicine, which is based in the US, said it plans to become the first lab in the world to ‘genetically edit’ the DNA of patients suffering from a genetic condition – in this case the blinding disorder ‘Leber congenital amaurosis’.

 

 

 

Gartner predicts our digital future — from gartner.com by Heather Levy
Gartner’s Top 10 Predictions herald what it means to be human in a digital world.

Excerpt:

Here’s a scene from our digital future: You sit down to dinner at a restaurant where your server was selected by a “robo-boss” based on an optimized match of personality and interaction profile, and the angle at which he presents your plate, or how quickly he smiles can be evaluated for further review.  Or, perhaps you walk into a store to try on clothes and ask the digital customer assistant embedded in the mirror to recommend an outfit in your size, in stock and on sale. Afterwards, you simply tell it to bill you from your mobile and skip the checkout line.

These scenarios describe two predictions in what will be an algorithmic and smart machine driven world where people and machines must define harmonious relationships. In his session at Gartner Symposium/ITxpo 2016 in Orlando, Daryl Plummer, vice president, distinguished analyst and Gartner Fellow, discussed how Gartner’s Top Predictions begin to separate us from the mere notion of technology adoption and draw us more deeply into issues surrounding what it means to be human in a digital world.

 

 


 

 

Univ. of Washington faculty study legal, social complexities of augmented reality — from phys.org

Excerpt:

But augmented reality will also bring challenges for law, public policy and privacy, especially pertaining to how information is collected and displayed. Issues regarding surveillance and privacy, free speech, safety, intellectual property and distraction—as well as potential discrimination—are bound to follow.

The Tech Policy Lab brings together faculty and students from the School of Law, Information School and Computer Science & Engineering Department and other campus units to think through issues of technology policy. “Augmented Reality: A Technology and Policy Primer” is the lab’s first official white paper aimed at a policy audience. The paper is based in part on research presented at the 2015 International Joint Conference on Pervasive and Ubiquitous Computing, or UbiComp conference.

Along these same lines, also see:

  • Augmented Reality: Figuring Out Where the Law Fits — from rdmag.com by Greg Watry
    Excerpt:
    With AR comes potential issues the authors divide into two categories. “The first is collection, referring to the capacity of AR to record, or at least register, the people and places around the user. Collection raises obvious issues of privacy but also less obvious issues of free speech and accountability,” the researchers write. The second issue is display, which “raises a variety of complex issues ranging from possible tort liability should the introduction or withdrawal of information lead to injury, to issues surrounding employment discrimination or racial profiling.” Current privacy law in the U.S. allows video and audio recording in areas that “do not attract an objectively reasonable expectation of privacy,” says Newell. Further, many uses of AR would be covered under the First Amendment right to record audio and video, especially in public spaces. However, as AR increasingly becomes more mobile, “it has the potential to record inconspicuously in a variety of private or more intimate settings, and I think these possibilities are already straining current privacy law in the U.S.,” says Newell.

 

Stuart Russell on Why Moral Philosophy Will Be Big Business in Tech — from kqed.org

Excerpt (emphasis DSC):

Our first Big Think comes from Stuart Russell. He’s a computer science professor at UC Berkeley and a world-renowned expert in artificial intelligence. His Big Think?

“In the future, moral philosophy will be a key industry sector,” says Russell.

Translation? In the future, the nature of human values and the process by which we make moral decisions will be big business in tech.

 

Life, enhanced: UW professors study legal, social complexities of an augmented reality future — from washington.edu by Peter Kelley


 

An excerpt from “Augmented Reality: A Technology and Policy Primer” (University of Washington Tech Policy Lab, November 2015):

THREE: CHALLENGES FOR LAW AND POLICY
AR systems change human experience and, consequently, stand to challenge certain assumptions of law and policy. The issues AR systems raise may be divided into roughly two categories. The first is collection, referring to the capacity of AR devices to record, or at least register, the people and places around the user. Collection raises obvious issues of privacy but also less obvious issues of free speech and accountability. The second rough category is display, referring to the capacity of AR to overlay information over people and places in something like real-time. Display raises a variety of complex issues ranging from possible tort liability should the introduction or withdrawal of information lead to injury, to issues surrounding employment discrimination or racial profiling. Policymakers and stakeholders interested in AR should consider what these issues mean for them. Issues related to the collection of information include…

 

HR tech is getting weird, and here’s why — from hrmorning.com by guest poster Julia Scavicchio

Excerpt (emphasis DSC):

Technology has progressed to the point where it’s possible for HR to learn almost everything there is to know about employees — from what they’re doing moment-to-moment at work to what they’re doing on their off hours. Guest poster Julia Scavicchio takes a long hard look at the legal and ethical implications of these new investigative tools.  

Why on Earth does HR need all this data? The answer is simple — HR is not on Earth, it’s in the cloud.

The department transcends traditional roles when data enters the picture.

Many ethical questions posed through technology easily come and go because they seem out of this world.

 

 

18 AI researchers reveal the most impressive thing they’ve ever seen — from businessinsider.com by Guia Marie Del Prado

Excerpt:

Where will these technologies take us next? Well to know that we should determine what’s the best of the best now. Tech Insider talked to 18 AI researchers, roboticists, and computer scientists to see what real-life AI impresses them the most.

“The DeepMind system starts completely from scratch, so it is essentially just waking up, seeing the screen of a video game and then it works out how to play the video game to a superhuman level, and it does that for about 30 different video games.  That’s both impressive and scary in the sense that if a human baby was born and by the evening of its first day was already beating human beings at video games, you’d be terrified.”

 

 

 

Algorithmic Economy: Powering the Machine-to-Machine Age Economic Revolution — from formtek.com by Dick Weisinger

Excerpts:

As technology advances, we are becoming increasingly dependent on algorithms for everything in our lives.  Algorithms that can solve our daily problems and tasks will do things like drive vehicles, control drone flight, and order supplies when they run low.  Algorithms are defining the future of business and even our everyday lives.

Sondergaard said that “in 2020, consumers won’t be using apps on their devices; in fact, they will have forgotten about apps. They will rely on virtual assistants in the cloud, things they trust. The post-app era is coming.  The algorithmic economy will power the next economic revolution in the machine-to-machine age. Organizations will be valued, not just on their big data, but on the algorithms that turn that data into actions that ultimately impact customers.”

 

 


 

Addendums:

 

[Image: robots saying no]

 

 

Addendum on 12/14/15:

  • Algorithms rule our lives, so who should rule them? — from qz.com by Dries Buytaert
    As technology advances and more everyday objects are driven almost entirely by software, it’s become clear that we need a better way to catch cheating software and keep people safe.
 

From DSC:
Many times we don’t want to hear news that could be troubling in terms of our futures. But we need to deal with these trends now or face the destabilization that Harold Jarche mentions in his posting below. 

The topics found in the following items should be discussed in courses involving economics, business, political science, psychology, futurism, engineering, religion*, robotics, marketing, the law/legal affairs and others throughout the world.  These trends are massive and have enormous ramifications for our societies in the not-too-distant future.

* When I mention religion classes here, I’m thinking of questions such as:

  • What does God have in mind for the place of work in our lives? Is it good for us? If so, why or why not?
  • How might these trends impact one’s vocation/calling?
  • …and I’m sure that professors who teach faith/religion-related courses can think of other questions to pursue

 

turmoil and transition — from jarche.com by Harold Jarche

Excerpts (emphasis DSC):

One of the greatest issues that will face Canada, and many developed countries in the next decade will be wealth distribution. While it does not currently appear to be a major problem, the disparity between rich and poor will increase. The main reason will be the emergence of a post-job economy. The ‘job’ was the way we redistributed wealth, making capitalists pay for the means of production and in return creating a middle class that could pay for mass produced goods. That period is almost over. From self-driving vehicles to algorithms replacing knowledge workers, employment is not keeping up with production. Value in the network era is accruing to the owners of the platforms, with companies such as Instagram reaching $1 billion valuations with only 13 employees.

The emerging economy of platform capitalism includes companies like Amazon, Facebook, Google, and Apple. These giants combined do not employ as many people as General Motors did.  But the money accrued by them is enormous and remains in a few hands. The rest of the labour market has to find ways to cobble together a living income. Hence we see many people willing to drive for a company like Uber in order to increase cash-flow. But drivers for Uber have no career track. The platform owners get richer, but the drivers are limited by finite time. They can only drive so many hours per day, and without benefits. At the same time, those self-driving cars are poised to replace all Uber drivers in the near future. Standardized work, like driving a vehicle, has little future in a world of nano-bio-cogno-techno progress.

 

Value in the network era is accruing to the owners of the platforms, with companies such as Instagram reaching $1 billion valuations with only 13 employees.

 

For the past century, the job was the way we redistributed wealth and protected workers from the negative aspects of early capitalism. As the knowledge economy disappears, we need to re-think our concepts of work, income, employment, and most importantly education. If we do not find ways to help citizens lead productive lives, our society will face increasing destabilization. 

 

Also see:

Will artificial intelligence and robots take your marketing job? — from markedu.com
Technology will overtake jobs to an extent and at a rate we have not seen before. Artificial intelligence is threatening jobs even in service and knowledge intensive sectors. This begs the question: are robots threatening to take your marketing job?

Excerpt:

What exactly is a human job?
The benefits of artificial intelligence are obvious. Massive productivity gains while a new layer of personalized services from your computer – whether that is a burger robot or Dr. Watson. But artificial intelligence has a bias. Many jobs will be lost.

A few years ago a study from the University of Oxford got quite a bit of attention. The study said that 47 percent of the US labor market could be replaced by intelligent computers within the next 20 years.

The losers are a wide range of job categories within the administration, service, sales, transportation and manufacturing.

Before long we should – or must – redefine what exactly a human job is and the usefulness of it. How we as humans can best complement the extraordinary capabilities of artificial intelligence.

 

This development is expected to grow fast. There are different predictions about the timing, but by 2030 there will be very few tasks that only a human can solve.

 

 

Assignment #1:
Review the opinion/posting out at marketwatch.com entitled “Opinion: How the stock market destroyed the middle class” by Rex Nutting and make a listing of the items you believe he is right on the mark about and another listing of items you believe he is mistaken about. Then answer the following questions:

  • What data or other types of support can you find to back up your lists and perspectives?
  • What data or other types of support does he bring to the table?
  • What are the potential ramifications of this topic (on career development/livelihoods, policy, business practices, business ethics, families, society, innovation, other)?
  • If companies aren’t investing in their employees as much, what advice would you give to existing employees within the corporate world?  To your peers in your colleges and universities or to your peers within your MBA programs?
  • Has the middle class decreased in size since the early 1980s? What are some of the other factors involved here? Is this situation currently impacting families across the nation and if so, how?

Excerpt:

“The ‘buyback corporation’ is in large part responsible for a national economy characterized by income inequality, employment instability, and diminished innovative capacity,” wrote William Lazonick, an economics professor at the University of Massachusetts at Lowell in a new paper published by the Brookings Institution.

Lazonick argues that corporations — which once retained a sizable share of profits to reinvest (including investing in their workforce by paying them enough to get them to stay) — have adopted a “downsize-and-distribute” model.

It’s not just lefty academics and pundits who think buybacks are ruining America. Last week, the CEOs of America’s 500 biggest companies received a letter from Lawrence Fink, CEO of BlackRock, the largest asset manager in the world, saying exactly the same thing.

“The effects of the short-termist phenomenon are troubling both to those seeking to save for long-term goals such as retirement and for our broader economy,” Fink wrote, adding that favoring shareholders comes at the expense of investing in “innovation, skilled work forces or essential capital expenditures necessary to sustain long-term growth.”

 

Also see:

————–

Assignment #2:
Review the current information out at usdebtclock.org and answer the following questions:

  • What does the $18.2+ trillion (as of 4/24/15) U.S. National Debt affect?
  • What level of debt is acceptable for a nation?
  • What does that level of debt depend upon?
  • Which of the pieces of information below have the most impact on future interest rates?  Do we even know that or is that crystal balling it?
  • Do the C-Suites at major companies look at this information? If so, what pieces of this information do they focus in on?
  • Are there potential implications for inflation or items related to the financial stability of the banking systems throughout the globe?
  • Are there any other ramifications of this information that you can think of?
  • What might you focus in on if you were addressing the masses (i.e., all U.S. citizens)?
  • Should politicians be aware of these #’s? If so, what might their concerns be for their constituents? For their local economies?

 

[Image: screenshot of usdebtclock.org, April 24, 2015]

 

————–

 

Extra Credit Questions:
Now let’s bring it closer to home. Do you have some student loans that are contributing to the Student Loan Debt figure of $1.3+ trillion (as of 4/24/15)? Do you see such loans impacting you in the future? If so, how?

 

[Image: student loan debt figure from usdebtclock.org, April 24, 2015]

 

 

 

From DSC:
To set the stage for the following reflections…first, an excerpt from
Climate researcher claims CIA asked about weaponized weather: What could go wrong? — from computerworld.com (emphasis DSC)

We’re not talking about chemtrails, HAARP (High Frequency Active Auroral Research Program) or other weather warfare that has been featured in science fiction movies; the concerns were raised not by a conspiracy theorist, but by climate scientist, geoengineering specialist and Rutgers University Professor Alan Robock. He “called on secretive government agencies to be open about their interest in radical work that explores how to alter the world’s climate.” If emerging climate-altering technologies can effectively alter the weather, Robock is “worried about who would control such climate-altering technologies.”

 

Exactly what I’ve been reflecting on recently.

***Who*** is designing, developing, and using the powerful technologies that are coming into play these days and ***for what purposes?***

Do these individuals care about other people?  Or are they much more motivated by profit or power?

Given the increasingly potent technologies available today, we need people who care about other people. 

Let me explain where I’m coming from here…

I see technologies as tools.  For example, a pencil is a technology. On the positive side of things, it can be used to write or draw something. On the negative side of things, it could be used as a weapon to stab someone.  It depends upon the user of the pencil and what their intentions are.

Let’s look at some far more powerful — and troublesome — examples.

 



DRONES

Drones could be useful…or they could be incredibly dangerous. Again, it depends on who is developing/programming them and for what purpose(s).  Consider the posting from B.J. Murphy below (BTW, nothing positive or negative is meant by linking to this item, per se).

DARPA’s Insect and Bird Drones Are On Their Way — from proactiontranshuman.wordpress.com by B.J. Murphy


Insect drone

From DSC:
I say this is an illustrative posting because if the inventor/programmer of this sort of drone wanted to poison someone, they surely could do so. I’m not even sure whether this drone exists or not; it doesn’t matter, as we’re quickly heading that way anyway. So potentially, this kind of thing is very scary stuff.

We need people who care about other people.

Or see:
Five useful ideas from the World Cup of Drones — from  dezeen.com
The article mentions some beneficial purposes of drones, such as for search and rescue missions or for assessing water quality.  Some positive intentions, to be sure.

But again, it doesn’t take too much thought to come up with some rather frightening counter-examples.
 

 

GENE-RELATED RESEARCH

Or another example re: gene research/applications; an excerpt from:

Turning On Genes, Systematically, with CRISPR/Cas9 — from genengnews.com
Scientists based at MIT assert that they can reliably turn on any gene of their choosing in living cells.

Excerpt:

It was also suggested that large-scale screens such as the one demonstrated in the current study could help researchers discover new cancer drugs that prevent tumors from becoming resistant.

From DSC:
Sounds like there could be some excellent, useful, positive uses for this technology. But who is to say which genes should be turned on and under what circumstances? In the wrong hands, there could be some dangerous applications of such concepts as well. Again, it goes back to those involved with designing, developing, selling, and using these technologies and services.

 

ROBOTICS

Will robots be used for positive or negative applications?

The mechanized future of warfare — from theweek.com
OR
Atlas Unplugged: The six-foot-two humanoid robot that might just save your life — from zdnet.com
Summary: From the people who brought you the internet, the latest version of the Atlas robot will be used in its disaster-fighting robotic challenge.

 

[Image: Atlas Unplugged robot torso]

 

AUTONOMOUS CARS

How Uber’s autonomous cars will destroy 10 million jobs and reshape the economy by 2025 — from sanfrancisco.cbslocal.com by

Excerpt:

Autonomous cars will be commonplace by 2025 and have a near monopoly by 2030, and the sweeping change they bring will eclipse every other innovation our society has experienced. They will cause unprecedented job loss and a fundamental restructuring of our economy, solve large portions of our environmental problems, prevent tens of thousands of deaths per year, save millions of hours with increased productivity, and create entire new industries that we cannot even imagine from our current vantage point.

One can see the potential for good and for bad from the above excerpt alone.

Or Ford developing cross country automotive remote control — from spectrum.ieee.org

 

[Image: Ford remote-control driving experiment, February 2015]

Or Germany has approved the use of self-driving cars on Autobahn A9 Route — from wtvox.com

While the above items list mostly positive developments, some fear that autonomous cars could be used by terrorists. That is, could a terrorist organization modify such self-driving cars, load them with explosives, and then remotely direct them to a particular building or event and detonate them?

Again, it depends upon whether the designers and users of a system care about other people.

 

BIG DATA / AI / COGNITIVE COMPUTING

The rise of machines that learn — from infoworld.com by Eric Knorr; with thanks to Oliver Hansen for his tweet on this
A new big data analytics startup, Adatao, reminds us that we’re just at the beginning of a new phase of computing when systems become much, much smarter

Excerpt:

“Our warm and creepy future,” is how Miko refers to the first-order effect of applying machine learning to big data. In other words, through artificially intelligent analysis of whatever Internet data is available about us — including the much more detailed, personal stuff collected by mobile devices and wearables — websites and merchants of all kinds will become extraordinarily helpful. And it will give us the willies, because it will be the sort of personalized help that can come only from knowing us all too well.

 

Privacy is dead: How Twitter and Facebook are exposing you — from finance.yahoo.com

Excerpt:

They know who you are, what you like, and how you buy things. Researchers at MIT have matched up your Facebook (FB) likes, tweets, and social media activity with the products you buy. The results are a highly detailed and accurate profile of how much money you have, where you go to spend it and exactly who you are.

The study spanned three months and used the anonymous credit card data of 1.1 million people. After gathering the data, analysts would marry the findings to a person’s public online profile. By checking things like tweets and Facebook activity, researchers found out the anonymous person’s actual name 90% of the time.
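From DSC: To make the mechanics of that finding a bit more concrete, below is a toy sketch (entirely made-up data and my own simplification, not the MIT researchers’ actual method) of how just a few time-and-place points can link an “anonymous” purchase history to a named public profile.

```python
# Toy re-identification sketch: all records below are invented.
# Anonymized card transactions: card_id -> set of (place, date) points, no names attached.
transactions = {
    "card_001": {("coffee_shop_A", "2015-01-05"), ("bookstore_B", "2015-01-07"),
                 ("gym_C", "2015-01-09")},
    "card_002": {("coffee_shop_A", "2015-01-05"), ("airport_D", "2015-01-08")},
}

# Public social media activity: name -> set of (place, date) points (e.g., geotagged posts).
public_posts = {
    "Alice Smith": {("coffee_shop_A", "2015-01-05"), ("gym_C", "2015-01-09")},
    "Bob Jones":   {("airport_D", "2015-01-08"), ("coffee_shop_A", "2015-01-05")},
}

def candidates(card_points, posts):
    """Names whose public points are all contained in a card's transaction history."""
    return [name for name, points in posts.items() if points <= card_points]

for card_id, points in transactions.items():
    matches = candidates(points, public_posts)
    if len(matches) == 1:
        print(f"{card_id} is uniquely consistent with {matches[0]}")
    else:
        print(f"{card_id}: {len(matches)} possible matches")
```

With real data the matching is fuzzier and far larger in scale, but the underlying idea is the same: a handful of points that each seem harmless can, together, single out one person.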

 

iBeacon, video analysis top 2015 tech trends — from progressivegrocer.com

Excerpt:

Using digital to engage consumers will make the store a more interesting and – dare I say – fun place to shop. Such an enhanced in-store experience leads to more customer loyalty and a bigger basket at checkout. It also gives supermarkets a competitive edge over nearby stores not equipped with the latest technology.

Using video cameras in the ceilings of supermarkets to record shopper behavior is not new. But more retailers will analyze and use the resulting data this year. They will move displays around the store and perhaps deploy new traffic patterns that follow a shopper’s true path to purchase. The result will be increased sales.

Another interesting part of this video analysis that will become more important this year is facial recognition. The most sophisticated cameras are able to detect the approximate age and ethnicity of shoppers. Retailers will benefit from knowing, say, that their shopper base includes more Millennials and Hispanics than last year. Such valuable information will change product assortments.

Scientists join Elon Musk & Stephen Hawking, warn of dangerous AI — from rt.com

Excerpt:

Hundreds of leading scientists and technologists have joined Stephen Hawking and Elon Musk in warning of the potential dangers of sophisticated artificial intelligence, signing an open letter calling for research on how to avoid harming humanity.

The open letter, drafted by the Future of Life Institute and signed by hundreds of academics and technologists, calls on the artificial intelligence science community to not only invest in research into making good decisions and plans for the future, but to also thoroughly check how those advances might affect society.

 

 

SMART / CONNECTED TVs

 



Though there are many other examples, I think you get the point.

That biblical idea of loving our neighbors as ourselves…well, as you can see, that idea is as applicable, important, and relevant today as it ever was.



 

 

Addendum on 3/19/15 that gets at exactly the same thing:

  • Teaching robots to be moral — from newyorker.com by Gary Marcus
    Excerpt:
    Robots and advanced A.I. could truly transform the world for the better—helping to cure cancer, reduce hunger, slow climate change, and give all of us more leisure time. But they could also make things vastly worse, starting with the displacement of jobs and then growing into something closer to what we see in dystopian films. When we think about our future, it is vital that we try to understand how to make robots a force for good rather than evil.

 

 

Addendum on 3/20/15:

 

[Image caption] Jennifer A. Doudna, an inventor of a new genome-editing technique, in her office at the University of California, Berkeley. Dr. Doudna is the lead author of an article calling for a worldwide moratorium on the use of the new method, to give scientists, ethicists and the public time to fully understand the issues surrounding the breakthrough.
Photo credit: Elizabeth D. Herman for The New York Times

 

Does Studying Fine Art = Unemployment? Introducing LinkedIn’s Field of Study Explorer — from LinkedIn.com by Kathy Hwang

Excerpt:

[On July 28, 2014], we are pleased to announce a new product – Field of Study Explorer – designed to help students like Candice explore the wide range of careers LinkedIn members have pursued based on what they studied in school.

So let’s explore the validity of this assumption (studying fine art = unemployment) by looking at the careers of members who studied Fine & Studio Arts at universities around the world. Are they all starving artists who live in their parents’ basements?

 

 

[Image: LinkedIn.com Field of Study Explorer, July 2014]

 

 

Also see:

The New Rankings? — from insidehighered.com by Charlie Tyson

Excerpt:

Who majored in Slovak language and literature? At least 14 IBM employees, according to LinkedIn.

Late last month LinkedIn unveiled a “field of study explorer.” Enter a field of study – even one as obscure in the U.S. as Slovak – and you’ll see which companies Slovak majors on LinkedIn work for, which fields they work in and where they went to college. You can also search by college, by industry and by location. You can winnow down, if you desire, to find the employee who majored in Slovak at the Open University and worked in Britain after graduation.
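From DSC: Conceptually, the Field of Study Explorer is doing faceted filtering over member profiles: pick a field of study, then narrow by company, industry, school, or location. Here is a toy sketch of that idea with a handful of invented records; it is only my illustration, not LinkedIn’s actual data model or API.

```python
# Toy faceted filtering over invented member records (not LinkedIn data).
from collections import Counter

members = [
    {"field": "Slovak Language & Literature", "employer": "IBM",
     "industry": "IT", "school": "Open University", "location": "UK"},
    {"field": "Slovak Language & Literature", "employer": "IBM",
     "industry": "IT", "school": "Comenius University", "location": "Slovakia"},
    {"field": "Fine & Studio Arts", "employer": "Pixar",
     "industry": "Entertainment", "school": "RISD", "location": "US"},
    {"field": "Fine & Studio Arts", "employer": "Google",
     "industry": "IT", "school": "RISD", "location": "US"},
]

def explore(field_of_study, **facets):
    """Count employers for members with a given field of study, narrowed by optional facets."""
    subset = [m for m in members
              if m["field"] == field_of_study
              and all(m.get(k) == v for k, v in facets.items())]
    return Counter(m["employer"] for m in subset)

print(explore("Fine & Studio Arts"))                            # employers of fine arts majors
print(explore("Slovak Language & Literature", location="UK"))   # winnowed down by location
```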

 

 

Trends and breakthroughs likely to affect your work, your investments, and your family

Excerpts:

At the outset, let me say that futurists do not claim to predict precisely what will happen in the future. If we could know the future with certainty, it would mean that the future could not be changed. Yet this is the main purpose of studying the future: to look at what may happen if present trends continue, decide if this is desirable, and, if it’s not, work to change it.

The main goal of studying the future is to make it better. Trends, forecasts, and ideas about the future enable you to spot opportunities and threats early, and position yourself, your business, and your investments accordingly.

How you can succeed in the age of hyperchange
Look how quickly our world is transforming around us. Entire new industries and technologies unheard of 15 years ago are now regular parts of our lives. Technology, globalization, and the recent financial crisis have left many of us reeling. It’s increasingly difficult to keep up with new developments—much less to understand their implications.

And, if you think things are changing fast now, you haven’t seen anything yet.

 

In this era of accelerating change, knowledge alone is no longer the key to a prosperous life. The critical skill is foresight.

 

 

7 ways to spot tomorrow’s trends today

  1. Scan the media to identify trends
  2. Analyze and extrapolate trends
  3. Develop scenarios
  4. Ask groups of experts
  5. Use computer modeling
  6. Explore possibilities with simulations
  7. Create the vision
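As a small illustration of item 2 above (analyze and extrapolate trends), here is a sketch that fits a straight line to a few invented yearly data points and projects it forward. It is meant only to show the idea; real trend analysis weighs many signals rather than a single naive line.

```python
# Naive trend extrapolation over invented data points (illustration only).
import numpy as np

years = np.array([2010, 2011, 2012, 2013, 2014])
adoption_pct = np.array([5, 9, 15, 24, 35])            # made-up adoption figures

slope, intercept = np.polyfit(years, adoption_pct, 1)  # simple linear fit

for future_year in (2015, 2016, 2017):
    projected = slope * future_year + intercept
    print(f"{future_year}: ~{projected:.0f}% (naive linear extrapolation)")
```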

 

 

App Ed Review

 

[Image: App Ed Review, April 2014]

 

From the About Us page (emphasis DSC):

App Ed Review is a free searchable database of educational app reviews designed to support classroom teachers in finding and using apps effectively in their teaching practice. In its database, each app review includes:

  • A brief, original description of the app;
  • A classification of the app based on its purpose;
  • Three or more ideas for how the app could be used in the classroom;
  • A comprehensive app evaluation;
  • The app’s target audience;
  • Subject areas where the app can be used; and,
  • The cost of the app.
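From DSC: If one pictured each review as a structured record, it might look roughly like the sketch below. The field names are my own guesses based on the list above, not App Ed Review’s actual schema.

```python
# Hypothetical record structure for an app review (field names are guesses).
from dataclasses import dataclass
from typing import List

@dataclass
class AppReview:
    description: str            # brief, original description of the app
    purpose: str                # classification based on the app's purpose
    classroom_ideas: List[str]  # three or more ideas for classroom use
    evaluation: str             # comprehensive app evaluation
    target_audience: str        # e.g., grade levels
    subject_areas: List[str]    # subjects where the app can be used
    cost: str                   # e.g., "Free" or "$2.99"

example = AppReview(
    description="A drawing app for quick sketches and annotations.",
    purpose="Creation",
    classroom_ideas=["Storyboard a science experiment",
                     "Annotate a historical map",
                     "Sketch geometry proofs"],
    evaluation="Strong for open-ended creation; limited sharing options.",
    target_audience="Grades 3-8",
    subject_areas=["Art", "Science", "Math"],
    cost="Free",
)
print(example.purpose, example.cost)
```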

 

 

Also see the Global Education Database:

 

[Image: Global Education Database, February 2014]

 

From the About Us page:

It’s our belief that digital technologies will utterly change the way education is delivered and consumed over the next decade. We also reckon that this large-scale disruption doesn’t come with an instruction manual. And we’d like GEDB to be part of the answer to that.

It’s the pulling together of a number of different ways in which all those involved in education (teachers, parents, administrators, students) can make some sense of the huge changes going on around them. So there’s consumer reviews of technologies, a forum for advice, an aggregation of the most important EdTech news and online courses for users to equip themselves with digital skills. Backed by a growing community on social media (here, here and here for starters).

It’s a fast-track to digital literacy in the education industry.

GEDB has been pulled together by California residents Jeff Dunn, co-founder of Edudemic, and Katie Dunn, the other Edudemic co-founder, and, across the Atlantic in London, Jimmy Leach, a former habitué of digital government and media circles.

 

 

Addendum:

Favorite educational iPad apps that are also on Android — from the Learning in Hand blog by Tony Vincent

 