Reflections on “Inside Amazon’s artificial intelligence flywheel” [Levy]

Inside Amazon’s artificial intelligence flywheel — from wired.com by Steven Levy
How deep learning came to power Alexa, Amazon Web Services, and nearly every other division of the company.

Excerpt (emphasis DSC):

Amazon loves to use the word flywheel to describe how various parts of its massive business work as a single perpetual motion machine. It now has a powerful AI flywheel, where machine-learning innovations in one part of the company fuel the efforts of other teams, who in turn can build products or offer services to affect other groups, or even the company at large. Offering its machine-learning platforms to outsiders as a paid service makes the effort itself profitable—and in certain cases scoops up yet more data to level up the technology even more.

It took a lot of six-pagers to transform Amazon from a deep-learning wannabe into a formidable power. The results of this transformation can be seen throughout the company—including in a recommendations system that now runs on a totally new machine-learning infrastructure. Amazon is smarter in suggesting what you should read next, what items you should add to your shopping list, and what movie you might want to watch tonight. And this year Thirumalai started a new job, heading Amazon search, where he intends to use deep learning in every aspect of the service.

“If you asked me seven or eight years ago how big a force Amazon was in AI, I would have said, ‘They aren’t,’” says Pedro Domingos, a top computer science professor at the University of Washington. “But they have really come on aggressively. Now they are becoming a force.”

Maybe the force.

 

 

From DSC:
When will we begin to see more mainstream recommendation engines for learning-based materials? With the demand for people to reinvent themselves, such a next generation learning platform can’t come soon enough!

  • Turning over control to learners to create/enhance their own web-based learner profiles, and allowing them to decide who can access those profiles.
  • AI-based recommendation engines to help people identify curated, effective digital playlists for what they want to learn about (see the rough sketch after this list).
  • Voice-driven interfaces.
  • Matching employees to employers.
  • Matching one’s learning preferences (not styles) with the content being presented as one piece of a personalized learning experience.
  • From cradle to grave. Lifelong learning.
  • Multimedia-based, interactive content.
  • Asynchronously and synchronously connecting with others learning about the same content.
  • Online-based tutoring/assistance; remote assistance.
  • Reinvent. Staying relevant. Surviving.
  • Competency-based learning.
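
To make the recommendation-engine idea a bit more concrete, here is a minimal, hypothetical sketch in Python of content-based matching between a learner profile and a catalog of digital playlists. The profile fields, the playlist metadata, and the scoring rule are all illustrative assumptions on my part, not a description of any existing platform.

```python
# A minimal, illustrative sketch of content-based matching between a
# learner profile and a catalog of learning playlists. All data and
# field names here are hypothetical.

def tokens(text):
    """Lowercase a string and split it into a set of word tokens."""
    return set(text.lower().split())

def score(profile, playlist):
    """Score a playlist by overlap with the learner's goals plus a bonus
    when its format matches the learner's preferred formats."""
    goal_overlap = len(tokens(profile["goals"]) & tokens(playlist["topics"]))
    format_match = 1 if playlist["format"] in profile["preferred_formats"] else 0
    return goal_overlap + format_match

def recommend(profile, catalog, top_n=3):
    """Return the top-N playlists for this learner profile."""
    ranked = sorted(catalog, key=lambda p: score(profile, p), reverse=True)
    return ranked[:top_n]

if __name__ == "__main__":
    learner = {
        "goals": "machine learning for career reinvention",
        "preferred_formats": {"video", "interactive"},
    }
    catalog = [
        {"title": "Intro to Machine Learning", "topics": "machine learning basics", "format": "video"},
        {"title": "Woodworking 101", "topics": "carpentry hand tools", "format": "video"},
        {"title": "Career Reinvention Bootcamp", "topics": "career reinvention skills", "format": "interactive"},
    ]
    for playlist in recommend(learner, catalog, top_n=2):
        print(playlist["title"])
```

A real next-generation learning platform would of course need far richer signals (competencies, assessments, voice input, employer demand), but the core matching loop could look something like the above.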

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV


We’re about to embark on a period in American history where career reinvention will be critical, perhaps more so than it’s ever been before. In the next decade, as many as 50 million American workers—a third of the total—will need to change careers, according to McKinsey Global Institute. Automation, in the form of AI (artificial intelligence) and RPA (robotic process automation), is the primary driver. McKinsey observes: “There are few precedents in which societies have successfully retrained such large numbers of people.”

Bill Triant and Ryan Craig

 

 

 

Also relevant/see:

Online education’s expansion continues in higher ed with a focus on tech skills — from educationdive.com by James Paterson

Dive Brief:

  • Online learning continues to expand in higher ed with the addition of several online master’s degrees and a new for-profit college that offers a hybrid of vocational training and liberal arts curriculum online.
  • Inside Higher Ed reported the nonprofit learning provider edX is offering nine master’s degrees through five U.S. universities — the Georgia Institute of Technology, the University of Texas at Austin, Indiana University, Arizona State University and the University of California, San Diego. The programs include cybersecurity, data science, analytics, computer science and marketing, and they cost from around $10,000 to $22,000. Most offer stackable certificates, helping students who change their educational trajectory.
  • Former Harvard University Dean of Social Science Stephen Kosslyn, meanwhile, will open Foundry College in January. The for-profit, two-year program targets adult learners who want to upskill, and it includes training in soft skills such as critical thinking and problem solving. Students will pay about $1,000 per course, though the college is waiving tuition for its first cohort.


In the 2030 and beyond world, employers will no longer be a separate entity from the education establishment. Pressures from both the supply and demand side are so large that employers and learners will end up, by default, co-designing new learning experiences, where all learning counts.

 

OBJECTIVES FOR CONVENINGS

  • Identify the skills everyone will need to navigate the changing relationship between machine intelligence and people over the next 10-12 years.
  • Develop implications for work, workers, students, working learners, employers, and policymakers.
  • Identify a preliminary set of actions that need to be taken now to best prepare for the changing work + learn ecosystem.

Three key questions guided the discussions:

  1. What are the LEAST and MOST essential skills needed for the future?
  2. Where and how will tomorrow’s workers and learners acquire the skills they really need?
  3. Who is accountable for making sure individuals can thrive in this new economy?

This report summarizes the experts’ views on what skills will likely be needed to navigate the work + learn ecosystem over the next 10–15 years—and their suggested steps for better serving the nation’s future needs.

 

In a new world of work, driven especially by AI, institutionally-sanctioned curricula could give way to AI-personalized learning. This would drastically change the nature of existing social contracts between employers and employees, teachers and students, and governments and citizens. Traditional social contracts would need to be renegotiated or revamped entirely. In the process, institutional assessment and evaluation could well shift from top-down to new bottom-up tools and processes for developing capacities, valuing skills, and managing performance through new kinds of reputation or accomplishment scores.

 

In October 2017, Chris Wanstrath, CEO of GitHub, the foremost code-sharing and social networking resource for programmers today, made a bold statement: “The future of coding is no coding at all.” He believes that the writing of code will be automated in the near future, leaving humans to focus on “higher-level strategy and design of software.” Many of the experts at the convenings agreed. Even creating the AI systems of tomorrow, they asserted, will likely require less human coding than is needed today, with graphic interfaces turning AI programming into a drag-and-drop operation.

Digital fluency does not mean knowing coding languages. Experts at both convenings contended that effectively “befriending the machine” will be less about teaching people to code and more about being able to empathize with AIs and machines, understanding how they “see the world” and “think” and “make decisions.” Machines will create languages to talk to one another.

Here’s a list of many skills the experts do not expect to see much of—if at all—in the future:

  • Coding. Systems will be self-programming.
  • Building AI systems. Graphic interfaces will turn AI programming into drag-and-drop operations.
  • Calendaring, scheduling, and organizing. There won’t be a need for email triage.
  • Planning and even decision-making. AI assistants will pick this up.
  • Creating more personalized curricula. Learners may design more of their own personalized learning adventure.
  • Writing and reviewing resumes. Digital portfolios, personal branding, and performance reputation will replace resumes.
  • Language translation and localization. This will happen in real time using translator apps.
  • Legal research and writing. Many of our legal systems will be automated.
  • Validation skills. Machines will check people’s work to validate their skills.
  • Driving. Driverless vehicles will replace the need to learn how to drive.

Here’s a list of the most essential skills needed for the future:

  • Quantitative and algorithmic thinking.  
  • Managing reputation.  
  • Storytelling and interpretive skills.  
  • First principles thinking.  
  • Communicating with machines as machines.  
  • Augmenting high-skilled physical tasks with AI.
  • Optimization and debugging frame of mind.
  • Creativity and growth mindset.
  • Adaptability.
  • Emotional intelligence.
  • Truth seeking.
  • Cybersecurity.

 

The rise of machine intelligence is just one of the many powerful social, technological, economic, environmental, and political forces that are rapidly and disruptively changing the way everyone will work and learn in the future. Because this largely tech-driven force is so interconnected with other drivers of change, it is nearly impossible to understand the impact of intelligent agents on how we will work and learn without also imagining the ways in which these new tools will reshape how we live.

 

 

 

MIT plans $1B computing college, AI research effort — from educationdive.com by James Paterson

Dive Brief (emphasis DSC):

  • The Massachusetts Institute of Technology is creating a College of Computing with the help of a $350 million gift from billionaire investor Stephen A. Schwarzman, who is the CEO and co-founder of the private equity firm Blackstone, in a move the university said is its “most significant reshaping” since 1950.
  • Featuring 50 new faculty positions and a new headquarters building, the $1 billion interdisciplinary initiative will bring together computer science, artificial intelligence (AI), data science and related programs across the institution. MIT will establish a new deanship for the college.
  • The new college…will explore and promote AI’s use in non-technology disciplines with a focus on ethical considerations, which are a growing concern as the technology becomes embedded in many fields.

 

Also see:

Alexa Sessions You Won’t Want to Miss at AWS re:Invent 2018 — from developer.amazon.com

Excerpts — with an eye towards where this might be leading in terms of learning spaces:

Alexa and AWS IoT — Voice is a natural interface to interact not just with the world around us, but also with physical assets and things, such as connected home devices, including lights, thermostats, or TVs. Learn how you can connect and control devices in your home using the AWS IoT platform and Alexa Skills Kit.

Connect Any Device to Alexa and Control Any Feature with the Updated Smart Home Skill API — Learn about the latest update to the Smart Home Skill API, featuring new capability interfaces you can use as building blocks to connect any device to Alexa, including those that fall outside of the traditional smart home categories of lighting, locks, thermostats, sensors, cameras, and audio/video gear. Start learning about how you can create a smarter home with Alexa.

Workshop: Build an Alexa Skill with Multiple Models — Learn how to build an Alexa skill that utilizes multiple interaction models and combines functionality into a single skill. Build an Alexa smart home skill from scratch that implements both custom interactions and smart home functionality within a single skill.
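
For readers curious what the “multiple models” workshop is getting at, below is a rough, hypothetical Python sketch of a single AWS Lambda entry point that serves both a custom interaction model and the Smart Home Skill API, routing on the shape of the incoming request (Smart Home requests arrive with a top-level directive object, while custom-model requests arrive with a request object). The intent name, the spoken responses, and the simplified Smart Home reply are illustrative assumptions, not Amazon’s reference implementation.

```python
# Illustrative sketch of a single AWS Lambda entry point serving both a
# custom interaction model and the Smart Home Skill API. Intent names,
# endpoint details, and response text are hypothetical.

def lambda_handler(event, context):
    if "directive" in event:                      # Smart Home Skill API request
        return handle_smart_home(event["directive"])
    request_type = event["request"]["type"]       # Custom interaction model
    if request_type == "LaunchRequest":
        return speak("Welcome. Ask me to dim the lights or start a lesson.")
    if request_type == "IntentRequest":
        intent = event["request"]["intent"]["name"]
        if intent == "StartLessonIntent":         # hypothetical custom intent
            return speak("Starting today's lesson.")
    return speak("Sorry, I didn't catch that.")

def handle_smart_home(directive):
    """Acknowledge a Smart Home directive (e.g., TurnOn for a light) by
    echoing its header with the generic Alexa.Response name."""
    header = dict(directive["header"], name="Response", namespace="Alexa")
    return {"event": {"header": header,
                      "endpoint": directive.get("endpoint", {}),
                      "payload": {}}}

def speak(text):
    """Build a minimal custom-skill response envelope."""
    return {"version": "1.0",
            "response": {"outputSpeech": {"type": "PlainText", "text": text},
                         "shouldEndSession": True}}
```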

 

What will be important in the learn and work ecosystem in 2030? How do we prepare? — from evolllution.com by Holly Zanville | Senior Advisor for Credentialing and Workforce Development, Lumina Foundation

Excerpt:

These seven suggested actions—common to all scenarios—especially resonated with Lumina:

  1. Focus on learning: All learners will need a range of competencies and skills, most critically: learning how to learn; having a foundation in math, science, IT and cross-disciplines; and developing the behaviors of grit, empathy and effective communication.
  2. Prepare all “systems”: Schools will continue to be important places to teach competencies and skills. Parents will be important teachers for children. Workplaces will also be important places for learning, and many learners will need instruction on how to work effectively as part of human/machine teams.
  3. Integrate education and work: Education systems will need to be integrated with work in an education/work ecosystem. To enable movement within the ecosystem, credentials will be useful, but only if they are transparent and portable. The competencies and skills that stand behind credentials will need to be identifiable, using a common language to enable (a) credential providers to educate/train for an integrated education/work system; (b) employers to hire people and upgrade their skills; and (c) governments (federal/state/local) to incentivize and regulate programs and policies that support the education/work system.
  4. Assess learning: Assessing competencies and skills acquired in multiple settings and modes (including artificial reality and virtual reality tools), will be essential. AI will enable powerful new assessment tools to collect and analyze data about what humans know and can do.
  5. Build fair, moral AI: There will be a high priority on ensuring that AI has built-in checks and balances that reflect moral values and honor different cultural perspectives.
  6. Prepare for human/machine futures: Machines will join humans in homes, schools and workplaces. Machines will likely be viewed as citizens with rights. Humans must prepare for side-by-side “relationships” with machines, especially in situations in which machines will be managing aspects of education, work and life formerly managed by humans. Major questions will also arise about the ownership of AI structures—what ownership looks like, and who profits from ubiquitous AI structures.
  7. Build networks for readiness/innovation: Open and innovative partnerships will be needed for whatever future scenarios emerge. In a data-rich world, we won’t solve problems alone; networks, partnerships and communities will be key.

 

 

Also see:

 

 

An open letter to Microsoft and Google’s Partnership on AI — from wired.com by Gerd Leonhard
In a world where machines may have an IQ of 50,000, what will happen to the values and ethics that underpin privacy and free will?

Excerpt:

This open letter is my modest contribution to the unfolding of this new partnership. Data is the new oil – which now makes your companies the most powerful entities on the globe, way beyond oil companies and banks. The rise of ‘AI everywhere’ is certain to only accelerate this trend. Yet unlike the giants of the fossil-fuel era, there is little oversight on what exactly you can and will do with this new data-oil, and what rules you’ll need to follow once you have built that AI-in-the-sky. There appears to be very little public stewardship, while accepting responsibility for the consequences of your inventions is rather slow in surfacing.

 

In a world where machines may have an IQ of 50,000 and the Internet of Things may encompass 500 billion devices, what will happen with those important social contracts, values and ethics that underpin crucial issues such as privacy, anonymity and free will?

 

 

My book identifies what I call the “Megashifts”. They are changing society at warp speed, and your organisations are in the eye of the storm: digitization, mobilisation and screenification, automation, intelligisation, disintermediation, virtualisation and robotisation, to name the most prominent. Megashifts are not simply trends or paradigm shifts, they are complete game changers transforming multiple domains simultaneously.

 

 

If the question is no longer about if technology can do something, but why…who decides this?

Gerd Leonhard

 

 

From DSC:
Though this letter was written back in October 2016, the messages, reflections, and questions that Gerd puts on the table are still very much relevant today. The leaders of these powerful companies have enormous power — power to do good or to do evil. Power to help or power to hurt. Power to be a positive force for societies around the globe and to help create dreams, or power to create dystopian societies and a future filled with nightmares. The state of the human heart is key here — though many will hate me for saying that. But it’s true. At the end of the day, we need to care very much about — and be keenly aware of — the character and values of the leaders of these powerful companies.

 

 

Also relevant/see:

Spray-on antennas will revolutionize the Internet of Things — from networkworld.com by Patrick Nelson
Researchers at Drexel University have developed a method to spray on antennas that outperform traditional metal antennas, opening the door to faster and easier IoT deployments.

From DSC:
Again, it’s not too hard to imagine how technologies in this arena could be used for good or for ill.

 

 
 

This is how the Future Today Institute researches, models & maps the future & develops strategies

 


 

Also see what the Institute for the Future does in this regard

Foresight Tools
IFTF has pioneered tools and methods for building foresight ever since its founding days. Co-founder Olaf Helmer was the inventor of the Delphi Method, and early projects developed cross-impact analysis and scenario tools. Today, IFTF is methodologically agnostic, with a brimming toolkit that includes the following favorites…

 

 

From DSC:
How might higher education use this foresight workflow? How might we better develop a future-oriented mindset?

From my perspective, I think that we need to be pulse-checking a variety of landscapes, looking for those early signals. We need to be thinking about what should be on our radars. Then we need to develop some potential scenarios, along with strategies to deal with them if they occur. Graphically speaking, here’s an excerpted slide from my introductory piece for an NGLS 2017 panel that we did.

 

 

 

This resource regarding their foresight workflow was mentioned in a recent FTI e-newsletter, which also highlighted this important item:

  • Climate change: a megatrend that impacts us all
    Excerpt:
    Earlier this week, the United Nations’ scientific panel on climate change issued a dire report [PDF]. To say the report is concerning would be a dramatic understatement. Models built by the scientists show that at our current rate, the atmosphere will warm as much as 1.5 degrees Celsius, leading to a dystopian future of food shortages, wildfires, extreme winters, a mass die-off of coral reefs and more –– as soon as 2040. That’s just 20 years away from now.

 

But China also decided to ban the import of foreign plastic waste –– which includes trash from around the U.S. and Europe. The U.S. alone could wind up with an extra 37 million metric tons of plastic waste, and we don’t have a plan for what to do with it all.

 

Immediate Futures Scenarios: Year 2019

  • Optimistic: Climate change is depoliticized. Leaders in the U.S., Brazil and elsewhere decide to be the heroes, and invest resources into developing solutions to our climate problem. We understand that fixing our futures isn’t only about foregoing plastic straws, but about systemic change. Not all solutions require regulation. Businesses and everyday people are incentivized to shift behavior. Smart people spend the next two decades collaborating on plausible solutions.
  • Pragmatic: Climate change continues to be debated, while extreme weather events cause damage to our power grid, wreak havoc on travel, cause school cancellations, and devastate our farms. The U.S. fails to work on realistic scenarios and strategies to combat the growing problem of climate change. More countries elect far-right leaders, who shun global environmental accords and agreements. By 2029, it’s clear that we’ve waited too long, and that we’re running out of time to adapt.
  • Catastrophic: A chorus of voices calling climate change a hoax grows ever louder in some of the world’s largest economies, whose leaders choose immediate political gain over longer-term consequences. China builds an environmental coalition of 100 countries within the decade, developing green infrastructure while accumulating debt service. Beijing sets global standards on emissions––and it locks the U.S. out of trading with coalition members. Trash piles up in the U.S., which didn’t plan ahead for waste management. By 2040, our population centers have moved inland and further north, our farms are decimated, our lives are miserable.

Watchlist: United Nations’ Intergovernmental Panel on Climate Change; European Geosciences Union; National Oceanic and Atmospheric Administration (NOAA); NASA; Department of Energy; Department of Homeland Security; House Armed Services Sub-committee on Emerging Threats and Capabilities; Environmental Justice Foundation; Columbia University’s Earth Institute; University of North Carolina at Wilmington; Potsdam Institute for Climate Impact Research; National Center for Atmospheric Research.

 

3 trends shaping the future world of work — from hrtechnologist.com by Becky Frankiewicz, President of Manpower Group North America

Excerpt:

In a world of constant change, continuity has given way to adaptability. It’s no secret the world of work has changed. Yet today it’s changing faster than ever before.

The impact of technology means new skills and new roles are emerging as fast as others become extinct.

My career path is a case in point. When I entered high school, I intended to follow a linear career path similar to generations before me. Pick a discipline, get a degree, commit to it, retire. Now in my fourth career, that’s not how it worked out, and I’m glad. In fact, the only true constant I’ve had is constant learning. Because success in the future won’t be defined by performance, but by potential and the ability to learn, apply and adapt.

 

From Jobs for Life to Skills for Life
Each day we see firsthand technology’s impact on jobs. 65% of the jobs my three daughters will do don’t even exist yet. Employability is less about what you already know and more about your capacity to learn. It requires a new mindset for us to develop a workforce with the right skillsets, and for individuals seeking to advance their careers. We need to be ready to help upskill and reskill people for new jobs and new roles. 

 

 

 

How AI could help solve some of society’s toughest problems — from technologyreview.com by Charlotte Jee
Machine learning and game theory help Carnegie Mellon assistant professor Fei Fang predict attacks and protect people.

Excerpt:

Fei Fang has saved lives. But she isn’t a lifeguard, medical doctor, or superhero. She’s an assistant professor at Carnegie Mellon University, specializing in artificial intelligence for societal challenges.

At MIT Technology Review’s EmTech conference on Wednesday, Fang outlined recent work across academia that applies AI to protect critical national infrastructure, reduce homelessness, and even prevent suicides.

 

 

How AI can be a force for good — from science.sciencemag.org by Mariarosaria Taddeo & Luciano Floridi

Excerpts:

Invisibility and Influence
AI supports services, platforms, and devices that are ubiquitous and used on a daily basis. In 2017, the International Federation of Robotics suggested that by 2020, more than 1.7 million new AI-powered robots will be installed in factories worldwide. In the same year, the company Juniper Networks issued a report estimating that, by 2022, 55% of households worldwide will have a voice assistant, like Amazon Alexa.

As it matures and disseminates, AI blends into our lives, experiences, and environments and becomes an invisible facilitator that mediates our interactions in a convenient, barely noticeable way. While creating new opportunities, this invisible integration of AI into our environments poses further ethical issues. Some are domain-dependent. For example, trust and transparency are crucial when embedding AI solutions in homes, schools, or hospitals, whereas equality, fairness, and the protection of creativity and rights of employees are essential in the integration of AI in the workplace. But the integration of AI also poses another fundamental risk: the erosion of human self-determination due to the invisibility and influencing power of AI.

To deal with the risks posed by AI, it is imperative to identify the right set of fundamental ethical principles to inform the design, regulation, and use of AI and leverage it to benefit as well as respect individuals and societies. It is not an easy task, as ethical principles may vary depending on cultural contexts and the domain of analysis. This is a problem that the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems tackles with the aim of advancing public debate on the values and principles that should underpin ethical uses of AI.

 

 

Who’s to blame when a machine botches your surgery? — from qz.com by Robert Hart

Excerpt:

That’s all great, but even if an AI is amazing, it will still fail sometimes. When the mistake is caused by a machine or an algorithm instead of a human, who is to blame?

This is not an abstract discussion. Defining both ethical and legal responsibility in the world of medical care is vital for building patients’ trust in the profession and its standards. It’s also essential in determining how to compensate individuals who fall victim to medical errors, and ensuring high-quality care. “Liability is supposed to discourage people from doing things they shouldn’t do,” says Michael Froomkin, a law professor at the University of Miami.

 

 

Google Cloud’s new AI chief is on a task force for AI military uses and believes we could monitor ‘pretty much the whole world’ with drones — from businessinsider.in by Greg Sandoval

Excerpt:

“We could afford if we wanted to, and if we needed, to be surveilling pretty much the whole world with autonomous drones of various kinds,” Moore said. “I’m not saying we’d want to do that, but there’s not a technology gap there where I think it’s actually too difficult to do. This is now practical.”

Google’s decision to hire Moore was greeted with displeasure by at least one former Googler who objected to Project Maven.

“It’s worrisome to note after the widespread internal dissent against Maven that Google would hire Andrew Moore,” said one former Google employee. “Googlers want less alignment with the military-industrial complex, not more. This hire is like a punch in the face to the over 4,000 Googlers who signed the Cancel Maven letter.”

 

 

Organizations Are Gearing Up for More Ethical and Responsible Use of Artificial Intelligence, Finds Study — from businesswire.com
Ninety-two percent of AI leaders train their technologists in ethics; 74 percent evaluate AI outcomes weekly, says report from SAS, Accenture Applied Intelligence, Intel, and Forbes Insights

Excerpt:

AI oversight is not optional

Despite popular messages suggesting AI operates independently of human intervention, the research shows that AI leaders recognize that oversight is not optional for these technologies. Nearly three-quarters (74 percent) of AI leaders reported careful oversight with at least weekly review or evaluation of outcomes (less successful AI adopters: 33 percent). Additionally, 43 percent of AI leaders shared that their organization has a process for augmenting or overriding results deemed questionable during review (less successful AI adopters: 28 percent).

 

 

 

Do robots have rights? Here’s what 10 people and 1 robot have to say — from createdigital.org.au
When it comes to the future of technology, nothing is straightforward, and that includes the array of ethical issues that engineers encounter through their work with robots and AI.

 

 

 

Microsoft's conference room of the future

 

From DSC:
Microsoft’s conference room of the future “listens” to the conversations of the team and provides a transcript of the meeting. It is also using “artificial intelligence tools to then act on what meeting participants say. If someone says ‘I’ll follow up with you next week,’ then they’ll get a notification in Microsoft Teams, Microsoft’s Slack competitor, to actually act on that promise.”

This made me wonder about our learning spaces in the future. Will an #AI-based device/cloud-based software app — in real time — be able to “listen” to the discussion in a classroom and present helpful resources in the smart classroom of the future (e.g., websites, online-based databases, journal articles, and more)?

Will this be a feature of a next generation learning platform as well (i.e., addressing the online-based learning realm)? Will this be a piece of an intelligent tutor or an intelligent system?
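As a very rough sketch of what such a pipeline might look like, the Python snippet below takes an already-transcribed bit of classroom discussion, pulls out candidate keywords, and turns them into queries against a resource index. The speech-to-text step is assumed to have already happened, and the stop-word list and the search_library function are hypothetical placeholders, not real services.

```python
# Illustrative sketch: turn a classroom transcript snippet into queries
# for supporting resources. The resource index and search function are
# hypothetical placeholders.
from collections import Counter

STOP_WORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "we", "that", "it", "how"}

def keywords(transcript, top_n=3):
    """Return the most frequent non-stop-word terms in the transcript."""
    words = [w.strip(".,?!").lower() for w in transcript.split()]
    counts = Counter(w for w in words if w and w not in STOP_WORDS)
    return [word for word, _ in counts.most_common(top_n)]

def search_library(term):
    """Placeholder for a call to a library database, journal index, or
    web search API; here it just fabricates a result label."""
    return f"Suggested reading on '{term}'"

def suggest_resources(transcript):
    """Map the transcript's top keywords to suggested resources."""
    return [search_library(term) for term in keywords(transcript)]

if __name__ == "__main__":
    discussion = ("Today we compared photosynthesis in C3 and C4 plants "
                  "and how photosynthesis responds to light intensity.")
    for suggestion in suggest_resources(discussion):
        print(suggestion)
```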

Hmmm…time will tell.


Also see this article at Forbes.com entitled “There’s Nothing Artificial About How AI Is Changing The Workplace.”

Here is an excerpt:

The New Meeting Scribe: Artificial Intelligence

As I write this, AI has already begun to make video meetings even better. You no longer have to spend time entering codes or clicking buttons to launch a meeting. Instead, with voice-based AI, video conference users can start, join or end a meeting by simply speaking a command (think about how you interact with Alexa).

Voice-to-text transcription, another artificial intelligence feature offered by Otter Voice Meeting Notes (from AISense, a Zoom partner), Voicefox and others, can take notes during video meetings, leaving you and your team free to concentrate on what’s being said or shown. AI-based voice-to-text transcription can identify each speaker in the meeting and save you time by letting you skim the transcript, search and analyze it for certain meeting segments or words, then jump to those mentions in the script. Over 65% of respondents from the Zoom survey said they think AI will save them at least one hour a week of busy work, with many claiming it will save them one to five hours a week.
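
To make the “act on what participants say” idea concrete, here is a small, hypothetical Python sketch that scans a speaker-labeled transcript for follow-up commitments and prints reminder stubs. The phrase patterns and the output format are my own illustrative assumptions; they are not how Otter, Voicefox, or Microsoft Teams actually work.

```python
# Illustrative sketch: scan a speaker-labeled meeting transcript for
# follow-up commitments. Patterns and output format are hypothetical.
import re

COMMITMENT_PATTERNS = [
    r"i(?:'| wi)ll follow up",
    r"i(?:'| wi)ll send",
    r"i(?:'| wi)ll get back to you",
]

def find_action_items(transcript_lines):
    """Yield (speaker, sentence) pairs that look like commitments."""
    for line in transcript_lines:
        speaker, _, text = line.partition(":")
        for pattern in COMMITMENT_PATTERNS:
            if re.search(pattern, text, flags=re.IGNORECASE):
                yield speaker.strip(), text.strip()
                break

if __name__ == "__main__":
    meeting = [
        "Alice: I'll follow up with you next week about the budget.",
        "Bob: Sounds good, thanks.",
        "Alice: Can someone review the draft?",
        "Bob: I will send my comments by Friday.",
    ]
    for speaker, sentence in find_action_items(meeting):
        print(f"Reminder for {speaker}: {sentence}")
```

Even a simple pattern-matching pass like this hints at why the productivity claims in the survey above are plausible; a production system would, of course, lean on far more robust language understanding.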

 

 
