The top 10 emerging technologies of 2016 — from visualcapitalist.com by Jeff Desjardins

Excerpt:

  1. Nanosensors and the Internet of Nanothings is one of the most exciting areas of science today. Tiny sensors circulating in the human body or embedded in construction materials will be able to relay information and diagnostics to the outside world. This will have an impact on medicine, architecture, agriculture, and drug manufacturing.
  2. Next Generation Batteries are helping to eliminate one of the biggest obstacles with renewable energy, which is energy storage. Though not commercially available yet, this area shows great promise – and it is something we are tracking in our five-part Battery Series.
  3. The Blockchain had investment exceeding $1 billion in 2015. The blockchain ecosystem is evolving rapidly and will change the way banking, markets, contracts, and governments work.
  4. 2D Materials such as graphene will have an impact on a variety of applications, ranging from air and water filters to batteries and wearable technology.
  5. Autonomous Vehicles are here, and the potential impact is huge. While there are still a few problems to overcome, driverless cars will save lives, cut pollution, boost economies, and improve the quality of life for people.
  6. Organs-on-Chips, which are tiny models of human organs, are making it easier for scientists to test drugs and conduct medical research.
  7. Perovskite Solar Cells are making photovoltaic cells easier to manufacture and more efficient. They also allow cells to be used virtually anywhere.
  8. Open AI Ecosystem will allow for smart digital assistants in the cloud that will be able to advise us on finance, health, or even fashion.
  9. Optogenetics, or the use of light and color to record activity in the brain, could help lead to better treatment of brain disorders.
  10. Systems Metabolic Engineering will allow building-block chemicals to be produced from plants more efficiently than they can be from fossil fuels.

 

OpenAIEcosystem-July2016

 

 

 

MIT10BreakthroughTechs2016

 

10 Breakthrough Technologies 2016 — from technologyreview.com

Excerpt:

Which of today’s emerging technologies have a chance at solving a big problem and opening up new opportunities? Here are our picks. The 10 on this list all had an impressive milestone in the past year or are on the verge of one. These are technologies you need to know about right now.

 

 

 

We can do nothing to change the past, but we have enormous power to shape the future. Once we grasp that essential insight, we recognize our responsibility and capability for building our dreams of tomorrow and avoiding our nightmares.

–Edward Cornish

 


From DSC:
This posting represents Part VI in a series of such postings that illustrate how quickly things are moving (Part I, Part II, Part III, Part IV, and Part V) — and to ask:

  • How do we collectively start talking about the future that we want?
  • How do we go about creating our dreams, not our nightmares?
  • Most certainly, governments will be involved….but who else should be involved in these discussions? Shouldn’t each one of us participate in some way, shape, or form?

 

 

AIsWhiteGuyProblem-NYTimes-June2016

 

Artificial Intelligence’s White Guy Problem — from nytimes.com by Kate Crawford

Excerpt:

But this hand-wringing is a distraction from the very real problems with artificial intelligence today, which may already be exacerbating inequality in the workplace, at home and in our legal and judicial systems. Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to.

If we look at how systems can be discriminatory now, we will be much better placed to design fairer artificial intelligence. But that requires far more accountability from the tech community. Governments and public institutions can do their part as well: As they invest in predictive technologies, they need to commit to fairness and due process.

 

 

Facebook is using artificial intelligence to categorize everything you write — from futurism.com

Excerpt:

Facebook has just revealed DeepText, a deep learning AI that will analyze everything you post or type and bring you closer to relevant content or Facebook services.

 

 

March of the machines — from economist.com
What history tells us about the future of artificial intelligence—and how society should respond

Excerpt:

Experts warn that “the substitution of machinery for human labour” may “render the population redundant”. They worry that “the discovery of this mighty power” has come “before we knew how to employ it rightly”. Such fears are expressed today by those who worry that advances in artificial intelligence (AI) could destroy millions of jobs and pose a “Terminator”-style threat to humanity. But these are in fact the words of commentators discussing mechanisation and steam power two centuries ago. Back then the controversy over the dangers posed by machines was known as the “machinery question”. Now a very similar debate is under way.

After many false dawns, AI has made extraordinary progress in the past few years, thanks to a versatile technique called “deep learning”. Given enough data, large (or “deep”) neural networks, modelled on the brain’s architecture, can be trained to do all kinds of things. They power Google’s search engine, Facebook’s automatic photo tagging, Apple’s voice assistant, Amazon’s shopping recommendations and Tesla’s self-driving cars. But this rapid progress has also led to concerns about safety and job losses. Stephen Hawking, Elon Musk and others wonder whether AI could get out of control, precipitating a sci-fi conflict between people and machines. Others worry that AI will cause widespread unemployment, by automating cognitive tasks that could previously be done only by people. After 200 years, the machinery question is back. It needs to be answered.

 

As technology changes the skills needed for each profession, workers will have to adjust. That will mean making education and training flexible enough to teach new skills quickly and efficiently. It will require a greater emphasis on lifelong learning and on-the-job training, and wider use of online learning and video-game-style simulation. AI may itself help, by personalising computer-based learning and by identifying workers’ skills gaps and opportunities for retraining.

 

 

Backlash-Data-DefendantsFutures-June2016

 

In Wisconsin, a Backlash Against Using Data to Foretell Defendants’ Futures — from nytimes.com by Mitch Smith

Excerpt:

CHICAGO — When Eric L. Loomis was sentenced for eluding the police in La Crosse, Wis., the judge told him he presented a “high risk” to the community and handed down a six-year prison term.

The judge said he had arrived at his sentencing decision in part because of Mr. Loomis’s rating on the Compas assessment, a secret algorithm used in the Wisconsin justice system to calculate the likelihood that someone will commit another crime.

Compas is an algorithm developed by a private company, Northpointe Inc., that calculates the likelihood of someone committing another crime and suggests what kind of supervision a defendant should receive in prison. The results come from a survey of the defendant and information about his or her past conduct. Compas assessments are a data-driven complement to the written presentencing reports long compiled by law enforcement agencies.
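Compas itself is proprietary, and its actual inputs, weights, and scale are not public. Purely as a hypothetical sketch of how a survey-driven risk instrument of this general shape might work (the function name, weights, and thresholds below are all invented for illustration), weighted survey answers and prior conduct could be combined into a banded rating:

```python
# Hypothetical illustration only -- Compas is proprietary, and its real
# inputs, weights, and cut-offs are secret. Nothing here reflects them.
def risk_band(survey_answers, prior_offenses):
    """Combine weighted survey answers (each scored 0-3) with a count of
    prior offenses into a LOW/MEDIUM/HIGH band."""
    # Assumed uniform weights; a real instrument is statistically calibrated.
    weights = [1.0] * len(survey_answers)
    score = sum(w * a for w, a in zip(weights, survey_answers))
    score += 2.0 * prior_offenses  # assumed weight on past conduct
    if score < 5:
        return "LOW"
    elif score < 10:
        return "MEDIUM"
    return "HIGH"

print(risk_band([1, 0, 2], prior_offenses=0))  # LOW
print(risk_band([3, 3, 2], prior_offenses=3))  # HIGH
```

The controversy in the article turns precisely on the fact that, unlike this toy, the real weights and thresholds cannot be inspected by defendants or judges.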

 

 

Google Tackles Challenge of How to Build an Honest Robot — from bloomberg.com

Excerpt:

Researchers at Alphabet Inc. unit Google, along with collaborators at Stanford University, the University of California at Berkeley, and OpenAI — an artificial intelligence development company backed by Elon Musk — have some ideas about how to design robot minds that won’t lead to undesirable consequences for the people they serve. They published a technical paper Tuesday outlining their thinking.

The motivation for the research is the immense popularity of artificial intelligence, software that can learn about the world and act within it. Today’s AI systems let cars drive themselves, interpret speech spoken into phones, and devise trading strategies for the stock market. In the future, companies plan to use AI as personal assistants, first as software-based services like Apple Inc.’s Siri and the Google Assistant, and later as smart robots that can take actions for themselves.

But before giving smart machines the ability to make decisions, people need to make sure the goals of the robots are aligned with those of their human owners.

 

 

Policy paper | Data Science Ethical Framework — from gov.uk
From: Cabinet Office, Government Digital Service and The Rt Hon Matt Hancock MP
First published: 19 May 2016
Part of: Government transparency and accountability

This framework is intended to give civil servants guidance on conducting data science projects, and the confidence to innovate with data.

Detail: Data science provides huge opportunities for government. Harnessing new forms of data with increasingly powerful computer techniques increases operational efficiency, improves public services and provides insight for better policymaking. We want people in government to feel confident using data science techniques to innovate. This guidance is intended to bring together relevant laws and best practice, to give teams robust principles to work with. The publication is a first version that we are asking the public, experts, civil servants and other interested parties to help us perfect and iterate. This will include taking on evidence from a public dialogue on data science ethics. It was published on 19 May by the Minister for Cabinet Office, Matt Hancock. If you would like to help us iterate the framework, find out how to get in touch at the end of this blog.

 

 

 

WhatsNextForAI-June2016

Excerpt (emphasis DSC):

We need to update the New Deal for the 21st century and establish a trainee program for the new jobs artificial intelligence will create. We need to retrain truck drivers and office assistants to create data analysts, trip optimizers and other professionals we don’t yet know we need. It would have been impossible for an antebellum farmer to imagine his son becoming an electrician, and it’s impossible to say what new jobs AI will create. But it’s clear that drastic measures are necessary if we want to transition from an industrial society to an age of intelligent machines.

The next step in achieving human-level AI is creating intelligent—but not autonomous—machines. The AI system in your car will get you safely home, but won’t choose another destination once you’ve gone inside. From there, we’ll add basic drives, along with emotions and moral values. If we create machines that learn as well as our brains do, it’s easy to imagine them inheriting human-like qualities—and flaws.

 

 

DARPA to Build “Virtual Data Scientist” Assistants Through A.I. — from inverse.com by William Hoffman
A.I. will make up for the lack of data scientists.

Excerpt:

The Defense Advanced Research Projects Agency (DARPA) announced on Friday the launch of Data-Driven Discovery of Models (D3M), a program that aims to help non-experts bridge what it calls the “data-science expertise gap” by having artificial assistants help people with machine learning. DARPA calls it a “virtual data scientist” assistant.

This software is doubly important because there’s a lack of data scientists right now and a greater demand than ever for more data-driven solutions. DARPA says experts project 2016 deficits of 140,000 to 190,000 data scientists worldwide, and increasing shortfalls in coming years.

 

 

Robot that chooses to inflict pain sparks debate about AI systems — from interestingengineering.com by Maverick Baker

Excerpt:

A robot built by roboticist Alexander Reben of the University of California, Berkeley uses AI to decide whether or not to inflict pain.

The robot is meant to spark debate about whether an AI system can get out of control, reminiscent of the Terminator. Its design is deliberately simple, serving a single purpose: deciding whether or not to inflict pain. Reben published the work to provoke discussion about whether artificially intelligent robots could get out of hand if given the opportunity.

 

 

The NSA wants to spy on the Internet of Things. Everything from thermostats to pacemakers could be mined for intelligence data. — from engadget.com by Andrew Dalton

Excerpt:

We already know the National Security Agency is all up in our data, but the agency is reportedly looking into how it can gather even more foreign intelligence information from internet-connected devices ranging from thermostats to pacemakers. Speaking at a military technology conference in Washington D.C. on Friday, NSA deputy director Richard Ledgett said the agency is “looking at it sort of theoretically from a research point of view right now.” The Intercept reports Ledgett was quick to point out that there are easier ways to keep track of terrorists and spies than to tap into any medical devices they might have, but did confirm that it was an area of interest.

 

 

The latest tool in the NSA’s toolbox? The Internet of Things — from digitaltrends.com by Lulu Chang

Excerpt:

You may love being able to set your thermostat from your car miles before you reach your house, but be warned — the NSA probably loves it too. On Friday, the National Security Agency — you know, the federal organization known for wiretapping and listening in on U.S. citizens’ conversations — told an audience at Washington’s Newseum that it’s looking into using the Internet of Things and other connected devices to keep tabs on individuals.

 


Addendum on 6/29/16:

 

Addendums on 6/30/16

 

Addendum on 7/1/16

  • Humans are willing to trust chatbots with some of their most sensitive information — from businessinsider.com by Sam Shead
    Excerpt:
    A study has found that people are inclined to trust chatbots with sensitive information and that they are open to receiving advice from these AI services. The “Humanity in the Machine” report — published by media agency Mindshare UK on Thursday — urges brands to engage with customers through chatbots, which can be defined as artificial intelligence programmes that conduct conversations with humans through chat interfaces.

 

 

 

 

Some relatively recent additions to the education landscape include:


 

GoogleUdacity-CodingJuly2016

 

 

treehouse-2016

 

 

Teachable-June2016

 

 

FutureLeague-2016

 

 

StackSocial-July2016

 

 

Skillshare-July2016

 

 

CenterCentre-June2016

 

 

IBMCourseraGitHub-Courses-June2016

 

 

AmazonVideoDirect-June2016

 

 

Also see:

 

MillennialsPursuingOtherOptions-Selingo-May2016

 

 

Taking competency-based credentials seriously in the workforce — from campustechnology.com by John K. Waters
Companies like AT&T and Google are expanding their partnerships with online education providers, creating new educational pathways to real jobs.

Excerpt:

But in the Age of the Internet, for-profit online education providers such as Udacity and Coursera have tweaked that model by collaborating with companies to develop programs tailored to their specific needs.

Together the two companies created the Front-End Web Developer Nanodegree program, Udacity’s first branded microcredential. (“Nanodegree” is trademarked.)

“We worked with Udacity to develop curriculum based on tangible hiring and training needs,” said John Palmer, senior vice president and chief learning officer at AT&T, in an e-mail. “Our teams collaborated on determining what skills we needed now to address the needs of our business, but also what skills would be needed five to 10 years from now — not just at AT&T, but at other tech companies.”

 

 

uCertify

uCertify-june2016

 

 

Also related:

  • Students and higher ed leaders put their faith in online classes [#Infographic] — from edtechmagazine.com by Meg Conlan
    As a growing number of students enroll in nontraditional college classes, the value of online education becomes more clear.
    Excerpt:
    As cost-effective alternatives to traditional college classes, online learning programs continue to gain steam in higher ed.
    According to statistics gathered for an Online Learning Consortium infographic, 5.8 million students are now enrolled in online courses, and the majority put tremendous stock in the quality of their education: 90 percent of students say their online learning experiences are the same as or better than in-classroom options. College and university leadership agrees: The infographic states that 71 percent of academic leaders say learning outcomes for online courses are the same as or better than those of face-to-face classes.

 

OnlineLearningAlternativesGrowing-June2016

 

 

 


Related postings:


Acquisitions, mergers and reinvention (not closures) will characterize higher ed’s future — from evolllution.com; an interview with Kenneth Hartman | Past President of Drexel University Online, Drexel University

Excerpts:

We’re going to see a lot of different alternative options popping up at alternative prices with alternative delivery mechanisms offering alternative credentials in the future. I don’t think a lot of institutions will be shutting down. There will be some that close, but it’s more likely that their assets will be acquired by other, stronger institutions.

These types of programs are popping up all over the country and I think the market forces tell a story. Colleges that are able to be adaptable and flexible will be the leaders in this new higher education marketplace. Adaptability, vision and flexibility are going to be critical for schools that are not heavily-endowed. If they do not have the will to do that then I think unfortunately Christensen’s prediction will probably come true. However, I’m optimistic that when the pain gets high enough, trustees of these institutions will demand that their senior leadership provide them with the way to prevent closure.

 

What a Microsoft-owned LinkedIn means for education — from campustechnology.com by Dian Schaffhauser

Excerpt:

Ironically, he suggested, higher ed is also the most vulnerable target of LinkedIn as it continues to work on development of a competency marketplace that could one day replace four-year degrees as the baseline requirement for employment.

The vision of this competency marketplace is that employers can identify candidates who are close matches for positions based on the competencies their jobs require. Likewise, job candidates can get information from LinkedIn about what competencies a given position requires and pursue that through some form of training, whether through a class at a local college, a bootcamp, online learning or some other form of instruction.

“The signal for universities that the world is about to change is when employers begin to drop degree requirements from job descriptions,” said Craig. And by the way, he added, that’s already happening at recognizable companies such as Google, Penguin Random House, EY and PwC, which have either eliminated that requirement from entry-level job descriptions or begun masking a candidate’s degree status from hiring managers because they “think the degrees are actually false or poor or misleading signals of ultimate job performance.”

Not only does LinkedIn have by far the largest collection of candidate profiles, but it has become the leading platform for distributing microcredentials, said Craig.

 

“You can identify education and training opportunities to remediate gaps between where you are and what the job description says you need to have to qualify. So all the pieces are there,” he said. “Currently, it’s still early, but you can see where this is going. We think that is the story of the next decade in higher education.”
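The article does not describe LinkedIn’s actual matching machinery. As a minimal sketch of the idea Craig describes (the function and field names are assumptions), a competency marketplace reduces to set comparison: how much of a job’s required list a candidate already covers, and which gaps remain to be remediated through training:

```python
def competency_gap(candidate, job_requirements):
    """Compare a candidate's competencies against a job's requirements,
    both given as sets of strings. Returns (match_ratio, missing), where
    missing lists the competencies still needed to qualify."""
    required = set(job_requirements)
    missing = required - set(candidate)
    ratio = 1.0 if not required else (len(required) - len(missing)) / len(required)
    return ratio, sorted(missing)

# A front-end candidate measured against a hypothetical job posting.
ratio, missing = competency_gap(
    {"javascript", "html", "css"},
    {"javascript", "html", "css", "react"},
)
print(ratio, missing)  # 0.75 ['react']
```

The missing list is exactly what a candidate could then target through a class at a local college, a bootcamp, or an online course.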

 

 

12 promising non-traditional college pathways to attainment — from eddesignlab.org

Excerpt:

We hear a lot about reinventing college and how we might better design the journey from school to work. Some students want faster or more experiential pathways to prosperity, re-entry points after stop-outs or opportunities for lifelong learning. “Non-traditional pathways” is a phrase you’ll hear a lot if you hang around policy and design folks who are thinking about broadening “attainment of degrees” to include meaningful credentials that lead to career readiness. This broader college success definition is not a cop out—it’s a recognition that technology, access to micro-credentials, and access to modular learning generally are blurring the lines between vocational training, liberal arts exploration, and 21st century skill building because, increasingly, students are in a position to order all these off the menu.

Lumina Foundation strategists Holly Zanville and Amber Garrison Duncan are in the thick of these designs, and the Lab caught up with them recently to help us build a list of the most promising ways that institutions, students, and third parties are piecing together non-traditional paths to meaningful credentials. Here’s a take on our “Top 12,” but we welcome your tweaks, additions, and favorite examples.

 

Top-ranked coding bootcamp, Fullstack Academy, launches first alumni startup investment fund — from prnewswire.com
Will provide seed funding for its graduates to launch their own startups

Excerpt:

NEW YORK, June 15, 2016 /PRNewswire/ — Fullstack Academy, the Y Combinator-backed top coding bootcamp in the U.S.,  today announced  Fullstack Fund, a new initiative to invest in promising startups created by its graduates.  “Students who complete our software engineering program go on to work for great companies like Google and Amazon, but some have opted for the entrepreneurial startup environment,” said David Yang, CEO and co-founder of Fullstack Academy. “So we asked ourselves — how can we better support alumni with a strong entrepreneurial slant? The Fullstack Fund  will empower some of the amazing teams and products that are coming out of our school.”

 

 

 


 

Addendum on 6/27/16:

 


 

Addendum on 6/30/16:

 

Noodle-June2016

 

Uncollege-June2016

 

CodingDojo-June2016

 

And a somewhat related posting:

More than 90% of institutions offer alternative credentials — from campustechnology.com by Sri Ravipati
The same study that reported this statistic also found that millennial students prefer badging and certificates to traditional degrees.

Excerpt:

Millennial students seem to prefer badging and certificate programs to traditional bachelor’s degrees, according to a new study from University Professional and Continuing Education Association (UPCEA), Pennsylvania State University and Pearson that explored the role that alternative credentials play in higher education.

“Demographic Shifts in Educational Demand and the Rise of Alternative Credentials” includes responses from 190 institutions, including community colleges (11 percent), baccalaureate colleges (12 percent), master’s colleges or universities (27 percent) and doctorate-granting universities (50 percent). Of the 190 institutions surveyed, 61 percent were public entities. Across the board, research revealed that programs offering alternative credentialing have become widespread in higher education, with 94 percent of the institutions reporting they offer alternative credentials. Alternative credentials can take the form of digital badges, certificates and micro-credentials.

 

 

 


 

Addendum on 7/11/16:

A model for higher education where all learning counts — from marketplace.org by Amy Scott

Excerpt:

Imagine it’s 2026, and you’re one of a billion people using a new digital platform called the Ledger.

So begins a new video from the Institute for the Future and ACT Foundation, envisioning a future system that would reward any kind of learning – from taking a course, to reading a book, to completing a project at work.

“Your Ledger account tracks everything you’ve ever learned in units called Edublocks,” the video’s narrator explains. “Each Edublock represents one hour of learning in a particular subject. Anyone can grant Edublocks to anyone else.”

The Ledger would use the same technology that powers bitcoin, the virtual currency, to create a verifiable record of every learning transaction, said Jane McGonigal, director of game research and development at the Institute for the Future, a think tank in Palo Alto, California.
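The video stays at a high level, but the core mechanism it gestures at — a tamper-evident, append-only record of learning transactions — can be sketched in a few lines. The field names and helpers below are assumptions for illustration, not the Ledger’s actual design; the point is only that hash-linking each Edublock grant to the previous one, bitcoin-style, makes the history verifiable:

```python
import hashlib
import json

def grant_edublock(chain, grantor, learner, subject, hours=1):
    """Append a learning transaction to the ledger, linking it to the
    previous entry by hash so tampering with history is detectable."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"grantor": grantor, "learner": learner,
              "subject": subject, "hours": hours, "prev": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)
    return record

def verify(chain):
    """Recompute every hash and link; returns False if any entry was altered."""
    for i, rec in enumerate(chain):
        body = {k: v for k, v in rec.items() if k != "hash"}
        if rec["prev"] != (chain[i - 1]["hash"] if i else "0" * 64):
            return False
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["hash"] != expected:
            return False
    return True

ledger = []
grant_edublock(ledger, "Prof. Lee", "Ana", "statistics")
grant_edublock(ledger, "Ana", "Ben", "statistics")  # anyone can grant
print(verify(ledger))  # True
ledger[0]["hours"] = 100  # tampering with history breaks verification
print(verify(ledger))  # False
```

Recomputing the hashes catches any after-the-fact edit, which is what would let such a record be trusted without a central registrar.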

 

We can do nothing to change the past, but we have enormous power to shape the future. Once we grasp that essential insight, we recognize our responsibility and capability for building our dreams of tomorrow and avoiding our nightmares.

–Edward Cornish

 


From DSC:
This is the fifth posting in a series that highlights the need for us to consider the ethical implications of the technologies that are currently being developed.  What kind of future do we want to have?  How can we create dreams, not nightmares?

In regards to robotics, algorithms, and business, I’m hopeful that the C-suites out there will keep the state of their fellow mankind in mind when making decisions. Because if all we care about is profits, the C-suites out there will gladly pursue lowering costs, firing people, and throwing their fellow mankind right out the window…with massive repercussions to follow.  After all, we are the shareholders…let’s not shoot ourselves in the foot. Let’s aim for something higher than profits.  Businesses should have a higher calling/purpose. The futures of millions of families are at stake here. Let’s consider how we want to use robotics, algorithms, AI, etc. — for our benefit, not our downfall.

Other postings:
Part I | Part II | Part III | Part IV

 


 

ethics-mary-meeker-june2016

From page 212 of
Mary Meeker’s annual report re: Internet Trends 2016

 

 

The White House is prepping for an AI-powered future — from wired.com by April Glaser

Excerpt (emphasis DSC):

Researchers disagree on when artificial intelligence that displays something like human understanding might arrive. But the Obama administration isn’t waiting to find out. The White House says the government needs to start thinking about how to regulate and use the powerful technology while it is still dependent on humans.

“The public should have an accurate mental model of what we mean when we say artificial intelligence,” says Ryan Calo, who teaches law at University of Washington. Calo spoke last week at the first of four workshops the White House hosts this summer to examine how to address an increasingly AI-powered world.

“One thing we know for sure is that AI is making policy challenges already, such as how to make sure the technology remains safe, controllable, and predictable, even as it gets much more complex and smarter,” said Ed Felten, the deputy US chief of science and technology policy leading the White House’s summer of AI research. “Some of these issues will become more challenging over time as the technology progresses, so we’ll need to keep upping our game.”

 

 

Meet ‘Ross,’ the newly hired legal robot — from washingtonpost.com by Karen Turner

Excerpt:

One of the country’s biggest law firms has become the first to publicly announce that it has “hired” a robot lawyer to assist with bankruptcy cases. The robot, called ROSS, has been marketed as “the world’s first artificially intelligent attorney.”

ROSS has joined the ranks of law firm BakerHostetler, which employs about 50 human lawyers just in its bankruptcy practice. The AI machine, powered by IBM’s Watson technology, will serve as a legal researcher for the firm. It will be responsible for sifting through thousands of legal documents to bolster the firm’s cases. These legal researcher jobs are typically filled by fresh-out-of-school lawyers early on in their careers.

 

 

Confidential health care data divulged to Google’s DeepMind for new app — from futurism.com by Sarah Marquart

Excerpts (emphasis DSC):

Google DeepMind’s new app Streams hopes to use patient data to monitor kidney disease patients. In the process, Google gained access to confidential data on more than 1.6 million patients, and people aren’t happy.

This sounds great, but the concern lies in exactly what kind of data Google has access to. There are no separate statistics available for people with kidney conditions, so the company was given access to all data including HIV test results, details about abortions, and drug overdoses.

In response to concerns about privacy, The Royal Free Trust said the data will remain encrypted so Google staff should not be able to identify anyone.

 

 

Two questions for managers of learning machines — from sloanreview.mit.edu by Theodore Kinni

Excerpt:

The first, which Dhar takes up in a new article on TechCrunch, is how to “design intelligent learning machines that minimize undesirable behavior.” Pointing to two high-profile juvenile delinquents, Microsoft’s Tay and Google’s Lexus, he reminds us that it’s very hard to control AI machines in complex settings.

The second question, which Dhar explores in an article for HBR.org, is when and when not to allow AI machines to make decisions.

 

 

All stakeholders must engage in learning analytics debate — from campustechnology.com by David Raths

Excerpt:

An Ethics Guide for Analytics?
During the Future Trends Forum session [with Bryan Alexander and George Siemens], Susan Adams, an instructional designer and faculty development specialist at Oregon Health and Science University, asked Siemens if he knew of any good ethics guides to how universities use analytics.

Siemens responded that the best guide he has seen so far was developed by the Open University in the United Kingdom. “They have a guide about how it will be used in the learning process, driven from the lens of learning rather than data availability,” he said.

“Starting with ethics is important,” he continued. “We should recognize that if openness around algorithms and learning analytics practices is important to us, we should be starting to make that a conversation with vendors. I know of some LMS vendors where you actually buy back your data. Your students generate it, and when you want to analyze it, you have to buy it back. So we should really be asking if it is open. If so, we can correct inefficiencies. If an algorithm is closed, we don’t know how the dials are being spun behind the scenes. If we have openness around pedagogical practices and algorithms used to sort and influence our students, we at least can change them.”

 

 

From DSC:
Though I’m generally a fan of Virtual Reality (VR) and Augmented Reality (AR), we need to be careful how we implement them, or things will turn out as depicted in this piece from The Verge. We’ll need filters or some other means of opting in and out of what we want to see.

 

AR-Hell-May2016

 

 

What does ethics have to do with robots? Listen to RoboPsych Podcast discussion with roboticist/lawyer Kate Darling https://t.co/WXnKOy8UO2
— RoboPsych (@RoboPsychCom) April 25, 2016

 

 

 

Retail inventory robots could replace the need for store employees — from interestingengineering.com by Trevor English

Excerpt:

There are currently many industries in which jobs will likely be replaced by robots in the coming years, and with retail being one of the biggest industries in the world, it is no wonder that robots will slowly begin taking humans’ jobs. A robot named Tory will perform inventory tasks throughout stores and will also be able to direct customers to what they are looking for. Essentially, a customer will type a product into the robot’s interactive touch screen, and it will drive to the exact location. It will also conduct inventory using RFID scanners; overall, it will make the retail process much more efficient. Check out the video below from the German robotics company Metre Labs, which is behind the retail robot.

 

RobotsRetail-May2016

 

From DSC:
Do we really want to do this?  Some say the future will be great when the robots, algorithms, AI, etc. are doing everything for us…while we can just relax. But I believe work serves a purpose…gives us a purpose.  What are the ramifications of a society where people are no longer working?  Or is that a stupid, far-fetched question and a completely unrealistic thought?

I’m just pondering what the ramifications might be of replacing the majority of human employees with robots.  I can understand about using robotics to assist humans, but when we talk about replacing humans, we had better look at the big picture. If not, we may be taking the angst behind the Occupy Wall Street movement from years ago and multiplying it by the thousands…perhaps millions.

 

 

 

 

Automakers, consumers both must approach connected cars cautiously — from nydailynews.com by Kyle Campbell
Several automakers plan to have autonomous cars ready for the public by 2030, a development that could pose significant safety and security concerns.

Excerpt:

We’re living in the connected age. Phones can connect wirelessly to computers, watches, televisions and anything else with access to Wi-Fi or Bluetooth and money can change hands with a few taps of a screen. Digitalization allows data to flow quicker and more freely than ever before, but it also puts the personal information we entrust it with (financial information, geographic locations and other private details) at a far greater risk of ending up in the wrong hands.

Balancing the seamless convenience customers desire with the security they need is a high-wire act of the highest order, and it’s one that automakers have to master as quickly and as thoroughly as possible.

Because of this, connected cars will potentially (and probably) become targets for hackers, thieves and possibly even terrorists looking to take advantage of the fledgling technology. With a wave of connected cars (220 million by 2020, according to some estimates) ready to flood U.S. roadways, it’s on both manufacturers and consumers to be vigilant in preventing the worst-case scenarios from playing out.

 

 

 

Also, check out the 7 techs being discussed at this year’s Gigaom Change Conference:

 

GigaOMChange-2016

 

 

Scientists are just as confused about the ethics of big-data research as you — wired.com by Sarah Zhang

Excerpt:

And that shows just how untested the ethics of this new field of research is. Unlike medical research, which has been shaped by decades of clinical trials, the risks—and rewards—of analyzing big, semi-public databases are just beginning to become clear.

And the patchwork of review boards responsible for overseeing those risks are only slowly inching into the 21st century. Under the Common Rule in the US, federally funded research has to go through ethical review. Rather than one unified system though, every single university has its own institutional review board, or IRB. Most IRB members are researchers at the university, most often in the biomedical sciences. Few are professional ethicists.

 

 

 

 


Addendums on 6/3 and 6/4/16:

  • Apple supplier Foxconn replaces 60,000 humans with robots in China — from marketwatch.com
    Excerpt:
    The first wave of robots taking over human jobs is upon us. Apple Inc. supplier Foxconn Technology Co. has replaced 60,000 human workers with robots in a single factory, according to a report in the South China Morning Post, initially published over the weekend. This is part of a massive reduction in headcount across the entire Kunshan region in China’s Jiangsu province, in which many Taiwanese manufacturers base their Chinese operations.
  • There are now 260,000 robots working in U.S. factories — from marketwatch.com by Jennifer Booton (back from Feb 2016)
    Excerpt:
    There are now more than 260,000 robots working in U.S. factories. Orders and shipments for robots in North America set new records in 2015, according to industry trade group Robotic Industries Association. A total of 31,464 robots, valued at a combined $1.8 billion, were ordered from North American companies last year, marking a 14% increase in units and an 11% increase in value year-over-year.
  • Judgment Day: Google is making a ‘kill-switch’ for AI — from futurism.com
    Excerpt:
    Taking Safety Measures
    DeepMind, Google’s artificial intelligence company, catapulted itself into fame when its AlphaGo AI beat the world champion of Go, Lee Sedol. However, DeepMind is working to do a lot more than beat humans at chess, Go, and various other games. Indeed, its AI algorithms were developed for something far greater: to “solve intelligence” by creating general-purpose AI systems that can be used for a host of applications and, in essence, learn on their own. This, of course, raises some concerns. Namely, what do we do if the AI breaks…if it gets a virus…if it goes rogue? In a paper written by researchers from DeepMind, in cooperation with Oxford University’s Future of Humanity Institute, scientists note that AI systems are “unlikely to behave optimally all the time,” and that a human operator may find it necessary to “press a big red button” to prevent such a system from causing harm. In other words, we need a “kill-switch.”
  • Is the world ready for synthetic life? Scientists plan to create whole genomes — from singularityhub.com by Shelly Fan
    Excerpt:
    “You can’t possibly begin to do something like this if you don’t have a value system in place that allows you to map concepts of ethics, beauty, and aesthetics onto our own existence,” says Endy. “Given that human genome synthesis is a technology that can completely redefine the core of what now joins all of humanity together as a species, we argue that discussions of making such capacities real…should not take place without open and advance consideration of whether it is morally right to proceed,” he said.
  • This is the robot that will shepherd and keep livestock healthy — from thenextweb.com
    Excerpt:
    The Australian Centre for Field Robotics (ACFR) is no stranger to developing innovative ways of modernizing agriculture. It has previously presented technologies for robots that can measure crop yields and collect data about the quality and variability of orchards, but its latest project is far more ambitious: it’s building a machine that can autonomously run livestock farms. While the ACFR has been working on this technology since 2014, the robot – previously known as ‘Shrimp’ – is set to start a two-year trial next month. Testing will take place at several farms in New South Wales, Australia.
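The “big red button” idea from the DeepMind/FHI paper mentioned above can be pictured as an agent loop that checks a human-controlled interrupt flag before every action. This is a minimal sketch, assuming nothing about the paper’s actual formalism; all class and method names below are hypothetical:

```python
# Minimal sketch of a human "kill-switch" wrapped around an agent's
# act-observe loop. All names here are hypothetical illustrations.

class InterruptibleAgent:
    def __init__(self, policy):
        self.policy = policy          # maps an observation to an action
        self.interrupted = False      # the "big red button" flag

    def press_big_red_button(self):
        """Human operator halts the agent before it can act again."""
        self.interrupted = True

    def step(self, observation):
        # The interrupt check happens before the policy runs, so a
        # pressed button always overrides whatever the agent "wants".
        if self.interrupted:
            return "HALT"             # safe no-op instead of the policy's action
        return self.policy(observation)

agent = InterruptibleAgent(policy=lambda obs: f"act_on:{obs}")
print(agent.step("sensor_reading"))   # normal operation
agent.press_big_red_button()
print(agent.step("sensor_reading"))   # agent is stopped
```

The hard research problem the paper addresses is subtler than this sketch: ensuring a learning agent never develops an incentive to avoid or disable the button in the first place.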

 

 

 

 

 

 

TechCrunch Disrupt 2016 – 7 edtech startups that are changing the education industry — from goodcall.com by Carrie Wiley

Excerpt:

…find out how the EdTech startups we met at TechCrunch Disrupt 2016 are transforming the education landscape and how three education technology startups are already changing education as we know it.

 

 

 

Million-dollar babies — from economist.com
As Silicon Valley fights for talent, universities struggle to hold on to their stars

 

 

Excerpt:

THAT a computer program can repeatedly beat the world champion at Go, a complex board game, is a coup for the fast-moving field of artificial intelligence (AI). Another high-stakes game, however, is taking place behind the scenes, as firms compete to hire the smartest AI experts. Technology giants, including Google, Facebook, Microsoft and Baidu, are racing to expand their AI activities. Last year they spent some $8.5 billion on deals, says Quid, a data firm. That was four times more than in 2010.

In the past universities employed the world’s best AI experts. Now tech firms are plundering departments of robotics and machine learning (where computers learn from data themselves) for the highest-flying faculty and students, luring them with big salaries similar to those fetched by professional athletes.

 

 

Experts in machine learning are most in demand. Big tech firms use it in many activities, from basic tasks such as spam-filtering and better targeting of online advertisements, to futuristic endeavours such as self-driving cars or scanning images to identify disease.

 

 

Also from The Economist, see:

Excerpt:

AI is already starting to generate big financial gains for companies, which helps explain firms’ growing investment in developing AI capabilities. Machine-learning, in which computers become smarter by processing large data-sets, currently has many profitable consumer-facing applications, including image recognition in photographs, spam filtering and those that help to better target advertisements to web surfers. Many of tech firms’ most ambitious projects, including building self-driving cars and designing virtual personal assistants that can understand and execute complex tasks, also rely on artificial intelligence, especially machine-learning and robotics. This has prompted tech firms to try to hire up as much of the top talent as they can from universities, where the best AI experts research and teach. Some worry about the potential of a brain drain from academia into the private sector.

The biggest concern, however, is that one firm corners the majority of the talent in artificial intelligence, creating an intellectual monopoly of sorts.

 

Now anyone can use Google’s deep learning techniques — from futurism.com by Sarah Marquart

In Brief:

Google announced a new machine learning platform for developers. The company is also open-sourcing tools such as TensorFlow to allow the community to take its internal tools, adapt them for their own uses, and improve them.

Google has announced a new machine learning platform for developers at its NEXT Google Cloud Platform user conference. Eric Schmidt, Google’s chairman, explained that Google believes machine learning is “what’s next.”

 

GoogleNEXT16

 

 

Using artificial intelligence in the classroom — from educationdive.com by Erin McIntyre

Dive Brief:

  • After Google’s artificially intelligent (AI) computer system beat world champion “Go” player Lee Sedol of South Korea, some are wondering if man-made neural networks can be applied in educational settings to benefit learning.
  • Companies like Pearson have begun to examine the subject; the company recently released a pamphlet called Intelligence Unleashed: An argument for AI in Education that argues software may soon be able to provide instant and deeper feedback regarding student progress, eliminating traditional standardized testing.
  • Pearson also conceptualized something called a “lifelong learning companion” for students, which essentially could be seen as an interactive cloud that asked questions, provided encouragement, offered suggestions and connected learners to resources.

 

 

AI-in-Education--2016

Excerpts:

ALGORITHM
A defined list of steps for solving a problem. A computer program can be viewed as an elaborate algorithm. In AI, an algorithm is usually a small procedure that solves a recurrent problem.

MACHINE LEARNING
Computer systems that learn from data, enabling them to make increasingly better predictions.

DECISION THEORY
The mathematical study of strategies for optimal decision-making between options involving different risks or expectations of gain or loss depending on the outcome.
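The glossary’s definition of machine learning — computer systems whose predictions get better as they see more data — can be made concrete with a toy example: an online estimator whose guess at a quantity improves with each new observation. This sketch is purely illustrative and is not drawn from the report:

```python
# Toy illustration of "learning from data": an online estimator whose
# prediction of a quantity improves as more observations arrive.

def online_mean():
    """Return an update function that refines a running estimate."""
    count, mean = 0, 0.0
    def update(x):
        nonlocal count, mean
        count += 1
        mean += (x - mean) / count    # incremental average
        return mean                   # current best prediction
    return update

predict = online_mean()
data = [10, 12, 11, 13, 12, 11]       # noisy observations of a value near 11.5
for x in data:
    estimate = predict(x)
print(round(estimate, 2))
```

Real machine learning systems fit far richer models than a running mean, but the loop is the same: each new example nudges the model toward a better prediction.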

 

It can be difficult to define artificial intelligence (AI), even for experts. One reason is that what AI includes is constantly shifting. As Nick Bostrom, a leading AI expert from Oxford University, explains: “[a] lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it is not labeled AI anymore.” Instead, it is considered a computer program, or an algorithm, or an app, but not AI.

Another reason for the difficulty in defining AI is the interdisciplinary nature of the field. Anthropologists, biologists, computer scientists, linguists, philosophers, psychologists, and neuroscientists all contribute to the field of AI, and each group brings their own perspective and terminology.

For our purposes, we define AI as computer systems that have been designed to interact with the world through capabilities (for example, visual perception and speech recognition) and intelligent behaviours (for example, assessing the available information and then taking the most sensible action to achieve a stated goal) that we would think of as essentially human.

 

 

IBM’s Rometty wants you to know they’re a ‘cognitive solutions cloud platform company’ — from barrons.com by Tiernan Ray

Excerpt:

Rometty says she is the first IBM chief to ever offer a keynote at the show [CES]. Her framework for everything this evening is that the “future is cognitive,” and we’re headed to a “cognitive IoT.”

What happens when everyone becomes digital, she asks. What will differentiate people is understanding all that data. That is the “cognitive era.” “Cognitive is an era of business and an era of technology,” she says. 80% of data out there is “black, invisible,” and “that’s what’s changing,” she says.

Rometty clarifies cognitive is not synonymous with A.I. It is not about systems you program. It is about systems that learn.

Her point is that all that “vast IoT data is going to do nothing for you unless you can bring cognitive to it.”

Her big point: “IBM is no longer a hardware, software company,” but a “cognitive solutions cloud platform company.”

 

 

 

SoftBank’s Pepper robot to get even brainier with IBM’s Watson technology — from thenextweb.com by Natt Garun

Excerpt:

When it launched last year, SoftBank’s emotion-reading robot Pepper sold out in just one minute despite its limited utility. Now, Pepper’s about to get smarter thanks to a partnership with IBM to integrate the Watson cognitive system into its brain.

With Watson, developers hope to help Pepper understand human emotions more thoroughly to appropriately respond and engage with its users. IBM and SoftBank say the collaboration will also allow Pepper to gather new information from social media to learn how people interact with brands so it knows how to personally reach out to people.

 

 

 

10 promising technologies assisting the future of medicine and healthcare — by Bertalan Meskó, MD, PhD

Excerpt:

Technology will not solve the problems that healthcare faces globally today. And the human touch alone is no longer enough; therefore, a new balance is needed between using disruptive innovations and keeping the human interaction between patients and caregivers. Here are 10 technologies and trends that could enable this.

 

 

NMCHorizonReport2016

 

New Media Consortium (NMC) & Educause Learning Initiative (ELI) release the NMC Horizon Report > 2016 Higher Ed Edition — from nmc.org

Excerpt:

The New Media Consortium (NMC) and EDUCAUSE Learning Initiative (ELI) are jointly releasing the NMC Horizon Report > 2016 Higher Education Edition at the 2016 ELI Annual Meeting. This 13th edition describes annual findings from the NMC Horizon Project, an ongoing research project designed to identify and describe emerging technologies likely to have an impact on learning, teaching, and creative inquiry in higher education.

The report identifies six key trends, six significant challenges, and six important developments in educational technology across three adoption horizons spanning the next one to five years, giving campus leaders, educational technologists, and faculty a valuable guide for strategic technology planning. The report provides higher education leaders with in-depth insight into how trends and challenges are accelerating and impeding the adoption of educational technology, along with their implications for policy, leadership, and practice.

 

NMCHorizonReport2016-toc

 

 

We can do nothing to change the past, but we have enormous power to shape the future. Once we grasp that essential insight, we recognize our responsibility and capability for building our dreams of tomorrow and avoiding our nightmares.

–Edward Cornish

 


From DSC:
This posting represents Part III in a series of such postings that illustrate how quickly things are moving (Part I and Part II), and it asks:

  • How do we collectively start talking about the future that we want?
  • Then, how do we go about creating our dreams, not our nightmares?
  • Most certainly, governments will be involved….but who else should be involved?

As I mentioned in Part I, I want to again refer to Gerd Leonhard’s work, as it is relevant here. Gerd asserts:

I believe we urgently need to start debating and crafting a global Digital Ethics Treaty. This would delineate what is and is not acceptable under different circumstances and conditions, and specify who would be in charge of monitoring digressions and aberrations.

Looking at several items below, ask yourself…is this the kind of future that we want?  There are some things mentioned below that could likely prove to be very positive and helpful. However, there are also some very troubling advancements and developments as well.

The point here is that we had better start talking and discussing the pros and cons of each one of these areas — and many more I’m not addressing here — or our dreams will turn into our nightmares and we will have missed what Edward Cornish and the World Future Society are often trying to get at.

 


 

Google’s Artificial Intelligence System Masters Game of ‘Go’ — from abcnews.go.com by Alyssa Newcomb

Excerpt:

Google just mastered one of the biggest feats in artificial intelligence since IBM’s Deep Blue beat Garry Kasparov at chess in 1997.

The search giant’s AlphaGo computer program swept the European champion of Go, a complex game with trillions of possible moves, in a five-game series, according to Demis Hassabis, head of Google’s machine learning, who announced the feat in a blog post that coincided with an article in the journal Nature.

While computers can now compete at the grand master level in chess, teaching a machine to win at Go has presented a unique challenge since the game has trillions of possible moves.

Along these lines, also see:
Mastering the game of Go with deep neural networks and tree search — from deepmind.com

 

 

 

Harvard is trying to build artificial intelligence that is as fast as the human brain — from futurism.com
Harvard University and IARPA are working together to study how AI can work as efficiently and effectively as the human brain.

Excerpt:

Harvard University has been given $28M by the Intelligence Advanced Research Projects Activity (IARPA) to study why the human brain is significantly better at learning and retaining information than artificial intelligence (AI). The investment into this study could potentially help researchers develop AI that’s faster, smarter, and more like human brains.

 

 

Digital Ethics: The role of the CIO in balancing the risks and rewards of digital innovation — from mis-asia.com by Kevin Wo; with thanks to Gerd Leonhard for this posting

What is digital ethics?
In our hyper-connected world, an explosion of data is combining with pattern recognition, machine learning, smart algorithms, and other intelligent software to underpin a new level of cognitive computing. More than ever, machines are capable of imitating human thinking and decision-making across a raft of workflows, which presents exciting opportunities for companies to drive highly personalized customer experiences, as well as unprecedented productivity, efficiency, and innovation. However, along with the benefits of this increased automation comes a greater risk for ethics to be compromised and human trust to be broken.

According to Gartner, digital ethics is the system of values and principles a company may embrace when conducting digital interactions between businesses, people and things. Digital ethics sits at the nexus of what is legally required; what can be made possible by digital technology; and what is morally desirable.  

As digital ethics is not mandated by law, it is largely up to each individual organisation to set its own innovation parameters and define how its customer and employee data will be used.

 

 

New algorithm points the way towards regrowing limbs and organs — from sciencealert.com by David Nield

Excerpt:

An international team of researchers has developed a new algorithm that could one day help scientists reprogram cells to plug any kind of gap in the human body. The computer code model, called Mogrify, is designed to make the process of creating pluripotent stem cells much quicker and more straightforward than ever before.

A pluripotent stem cell is one that has the potential to become any type of specialised cell in the body: eye tissue, or a neural cell, or cells to build a heart. In theory, that would open up the potential for doctors to regrow limbs, make organs to order, and patch up the human body in all kinds of ways that aren’t currently possible.

 

 

 

The world’s first robot-run farm will harvest 30,000 heads of lettuce daily — from techinsider.io by Leanna Garfield

Excerpt (from DSC):

The Japanese lettuce production company Spread believes the farmers of the future will be robots.

So much so that Spread is creating the world’s first farm manned entirely by robots. Instead of relying on human farmers, the indoor Vegetable Factory will employ robots that can harvest 30,000 heads of lettuce every day.

Don’t expect a bunch of humanoid robots to roam the halls, however; the robots look more like conveyor belts with arms. They’ll plant seeds, water plants, and trim lettuce heads after harvest at the farm in Kyoto, Japan.

 

 

 

Drone ambulances may just be the future of emergency medical vehicles — from interestingengineering.com by Gabrielle Westfield

Excerpt:

Drones are advancing every day. They are getting larger, faster, and more efficient to control. Meanwhile, the medical field keeps facing major losses because emergency response vehicles cannot reach their destinations fast enough. Understandably so, especially in the larger cities, where it is impossible to move swiftly through traffic. Red flashing lights atop or not, sometimes the roads are just not capable of opening up. It makes total sense that the future of ambulances would be paved in the open sky rather than on unpredictable roads.


 

 

 

Phone shop will be run entirely by Pepper robots — from telegraph.co.uk

Excerpt (emphasis DSC):

Creator company SoftBank said it planned to open the pop-up mobile store employing only Pepper robots by the end of March, according to Engadget.

The four foot-tall robots will be on hand to answer questions, provide directions and guide customers in taking out phone contracts until early April. It’s currently unknown what brands of phone Pepper will be selling.

 

 

 

Wise.io introduces first intelligent auto reply functionality for customer support organizations — from consumerelectronicsnet.com
Powered by Machine Learning, Wise Auto Response Frees Up Agent Time, Boosting Productivity, Accelerating Response Time and Improving the Customer Experience

Excerpt:

BERKELEY, CA — (Marketwired) — 01/27/16 — Wise.io, which delivers machine learning applications to help enterprises provide a better customer experience, today announced the availability of Wise Auto Response, the first intelligent auto reply functionality for customer support organizations. Using machine learning to understand the intent of an incoming ticket and determine the best available response, Wise Auto Response automatically selects and applies the appropriate reply to address the customer issue without ever involving an agent. By helping customer service teams answer common questions faster, Wise Auto Response removes a high percentage of tickets from the queue, freeing up agents’ time to focus on more complex tickets and drive higher levels of customer satisfaction.
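The pipeline the announcement describes (classify an incoming ticket’s intent, apply the best canned reply, and defer to a human agent when confidence is low) can be sketched with a simple keyword scorer. This is an illustration only; Wise.io’s actual models and APIs are not described in the excerpt, and every keyword and reply below is invented:

```python
# Illustrative sketch of intent-based auto-reply: score an incoming
# ticket against keyword profiles and return a canned response, or
# None to route the ticket to a human agent. All data is invented.

INTENTS = {
    "password_reset": {"password", "reset", "login", "locked"},
    "billing":        {"invoice", "charge", "refund", "billing"},
}

REPLIES = {
    "password_reset": "You can reset your password at the account page.",
    "billing":        "Our billing team will review your charge shortly.",
}

def auto_response(ticket_text, threshold=1):
    words = set(ticket_text.lower().split())
    scores = {intent: len(words & kws) for intent, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    if scores[best] < threshold:
        return None                   # low confidence: escalate to a human
    return REPLIES[best]

print(auto_response("I am locked out and need a password reset"))
print(auto_response("refund this extra charge on my invoice"))
print(auto_response("my parrot escaped"))   # falls through to an agent
```

A production system would use a learned classifier rather than keyword overlap, but the shape is the same: predict intent, act only above a confidence threshold, and keep humans in the loop for the rest.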

 

 

Video game for treating ADHD looks to 2017 debut — from educationnews.org

Excerpt:

Akili Interactive Labs out of Boston has created a video game that they hope will help treat children diagnosed with attention-deficit hyperactivity disorder by teaching them to focus in a distracting environment.

The game, Project: EVO, is meant to be prescribed to children with ADHD as a medical treatment. And after the company gained $30.5 million in funding, investors appear to believe in it. The company plans to use the funding to run clinical trials, with the aim of gaining approval from the US Food and Drug Administration so it can launch the game in late 2017.

Players will enter a virtual world filled with colorful distractions and be required to focus on specific tasks such as choosing certain objects while avoiding others.  The game looks to train the portion of the brain designed to manage and prioritize all the information taken in at one time.

 

Addendum on 1/29/16:

 

 

 

 

Report from Davos: 5 million jobs to be lost by 2020 because of tech advances — from siliconbeat.com by Levi Sumagaysay

Excerpt (emphasis DSC):

A new report predicts a loss of 5 million jobs in the next five years because of technological advances, but don’t blame it all on the robots.

The other culprits: artificial intelligence, 3-D printers and advances in genetics, biotech and more.

The World Economic Forum, which is holding its annual meeting in Davos this week, in its report details the effects of modern technology on the labor market, for better or for worse.  It says “the fourth industrial revolution” will be “more comprehensive and all-encompassing than anything we have ever seen.”

The report actually estimates a loss of 7 million jobs in 15 economies that today have 1.86 billion workers, or about 65 percent of the world’s workforce, but it also expects 2 million new jobs to be created.

 

From DSC:
If this turns out to be true, how should this affect our curricula?  What should we be emphasizing and seeking to build within our students?

 

 
Paging Dr. Robot: The coming AI health care boom — from fastcompany.com by Sean Captain
Use of artificial intelligence in health care to grow tenfold in 5 years, say analysts—for everything from cancer diagnosis to diet tips.

Excerpt:

More than six billion dollars: That’s how much health care providers and consumers will be spending every year on artificial intelligence tools by 2021—a tenfold increase from today—according to a new report from research firm Frost & Sullivan. (Specifically, it will be a growth from $633.8 million in 2014 to $6,662.2 million in 2021.)

Computer-aided diagnosis can weigh more factors than a doctor could on their own, such as reviewing all of a patient’s history in an instant and weighing risk factors such as age, previous diseases, and residence (if it’s in a heavily polluted area) to come up with a short list of possible diagnoses, even a percent confidence rating that it’s disease X or syndrome Y. Much of this involves processing what’s called “unstructured data,” such as notes from previous exams, scan images, or photos. Taking a first pass on x-rays and other radiology scans is one of the big applications for AI that Frost & Sullivan expects.
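The kind of computer-aided diagnosis described, weighing factors such as age, previous diseases, and residence to produce a ranked short list with a percent confidence, amounts to a scoring model at its simplest. A minimal sketch follows, with the conditions, risk factors, and weights invented purely for illustration (this is not clinical data or any vendor’s method):

```python
# Illustrative sketch of computer-aided diagnosis as weighted scoring:
# combine a patient's risk factors into per-condition scores, then
# normalize to a "percent confidence". All numbers are made up.

RISK_WEIGHTS = {
    "condition_A": {"age_over_60": 2.0, "smoker": 3.0, "polluted_area": 1.0},
    "condition_B": {"age_over_60": 1.0, "prior_disease": 3.0},
}

def shortlist(patient_factors):
    """Rank candidate conditions for a set of observed risk factors."""
    scores = {
        cond: sum(w for f, w in weights.items() if f in patient_factors)
        for cond, weights in RISK_WEIGHTS.items()
    }
    total = sum(scores.values()) or 1.0   # avoid division by zero
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(cond, round(100 * s / total, 1)) for cond, s in ranked]

patient = {"age_over_60", "smoker"}
print(shortlist(patient))   # condition_A ranks first for this patient
```

Real systems replace the hand-set weights with models trained on large volumes of structured and unstructured patient data, but the output shape — a ranked differential with confidence scores — is the same.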

 

Babylon, the U.K. digital doctor app, scores $25M to develop AI-driven health advice — from techcrunch.com by Steve O’Hear

Excerpt:

Hot on the heels of PushDoctor’s $8.2 million Series A, another U.K. startup playing in the digital health app space has picked up funding. Babylon Health, which like PushDoctor, lets you have video consultations with a doctor (and a lot more), has raised a $25 million Series A round led by Investment AB Kinnevik, the Swedish listed investment fund.

 

 

Under Armour and IBM to transform personal health and fitness, powered by IBM Watson — from ibm.com
New Cognitive Coaching System Will Apply Machine Learning to the World’s Largest Digital Health and Fitness Community

 

 

IBM Watson bets $1 billion on healthcare with Merge acquisition — from techrepublic.com by Conner Forrest
[Back in August 2015] IBM ponied up $1 billion for medical imaging company Merge Healthcare. Here’s what it means for the future of IBM’s cognitive computing system.

 

The emergence of precision algorithms in healthcare — from Gartner

Summary:

Recent announcements that several medical institutions intend to publish extensive portfolios of advanced algorithms via an open marketplace serve as an early indicator that interest in sharing clinical algorithms is increasing. We explore the impact of this trend and offer recommendations to HDOs.

 

 

Somewhat related postings:

 

Automation potential and wages for US jobs — from McKinsey Global Institute
McKinsey analyzed the detailed work activities for 750+ occupations in the US to estimate the percentage of time that could be automated by adapting currently demonstrated technology.
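McKinsey’s approach, estimating the share of work time that could be automated by breaking an occupation into its activities, is essentially a time-weighted average. A small sketch makes that concrete; the activity names and hours below are invented for the example, not taken from the report:

```python
# Sketch of an occupation's automation potential as the time-weighted
# share of its activities that current technology could perform.
# Activity breakdown and numbers are illustrative only.

def automation_potential(activities):
    """activities: list of (hours_per_week, automatable: bool) pairs."""
    total = sum(h for h, _ in activities)
    automatable = sum(h for h, a in activities if a)
    return 100 * automatable / total

retail_clerk = [
    (15, True),    # stocking and inventory checks
    (10, True),    # data collection at the register
    (10, False),   # advising customers
    (5,  False),   # handling complaints
]
print(f"{automation_potential(retail_clerk):.1f}% of time automatable")
```

Framing automation at the activity level rather than the job level is what lets the report conclude that many jobs will be redefined rather than eliminated outright.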

 

AutomationPotential-McKinsey-Jan2016

 

 

Also see:

  • Four fundamentals of workplace automation — from mckinsey.com by Michael Chui, James Manyika, and Mehdi Miremadi
    As the automation of physical and knowledge work advances, many jobs will be redefined rather than eliminated—at least in the short term.


 

 

Special Report: 2016 Top Tech to Watch
Spectrum’s annual special report for the technologies to watch this year

 

IEEESpectrum-TechsToWatch2016

 

 
© 2025 | Daniel Christian