State of AI — from stateof.ai

Excerpt:

In this report, we set out to capture a snapshot of the exponential progress in AI with a focus on developments in the past 12 months. Consider this report as a compilation of the most interesting things we’ve seen that seeks to trigger informed conversation about the state of AI and its implications for the future.

We consider the following key dimensions in our report:

  • Research: Technology breakthroughs and their capabilities.
  • Talent: Supply, demand and concentration of talent working in the field.
  • Industry: Large platforms, financings and areas of application for AI-driven innovation today and tomorrow.
  • Politics: Public opinion of AI, economic implications and the emerging geopolitics of AI.

 

definitions of terms involved in AI

 

hard to say how AI is impacting jobs yet -- but here are two perspectives

 

 

There’s nothing artificial about how AI is changing the workplace — from forbes.com by Eric Yuan

Excerpt:

As I write this, AI has already begun to make video meetings even better. You no longer have to spend time entering codes or clicking buttons to launch a meeting. Instead, with voice-based AI, video conference users can start, join or end a meeting by simply speaking a command (think about how you interact with Alexa).

Voice-to-text transcription, another artificial intelligence feature offered by Otter Voice Meeting Notes (from AISense, a Zoom partner), Voicefox and others, can take notes during video meetings, leaving you and your team free to concentrate on what’s being said or shown. AI-based voice-to-text transcription can identify each speaker in the meeting and save you time by letting you skim the transcript, search and analyze it for certain meeting segments or words, then jump to those mentions in the script. Over 65% of respondents from the Zoom survey said they think AI will save them at least one hour a week of busy work, with many claiming it will save them one to five hours a week.
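The skim-and-search workflow described above can be sketched in a few lines. This is a minimal illustration over timestamped transcript segments, not Otter's or Voicefox's actual API; the data structure and function names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start_seconds: float   # offset into the recording
    speaker: str           # identified via speaker diarization
    text: str              # transcribed speech

def find_mentions(transcript, keyword):
    """Return (timestamp, speaker) pairs for segments containing the keyword,
    so a reader could jump straight to those moments in the recording."""
    needle = keyword.lower()
    return [(seg.start_seconds, seg.speaker)
            for seg in transcript if needle in seg.text.lower()]

transcript = [
    Segment(12.0, "Alice", "Let's review the budget first."),
    Segment(95.5, "Bob", "The budget looks tight for Q3."),
    Segment(140.2, "Alice", "Moving on to hiring."),
]
print(find_mentions(transcript, "budget"))  # [(12.0, 'Alice'), (95.5, 'Bob')]
```

The AI work is in producing the segments (transcription plus speaker identification); once you have them, "jump to every mention" is ordinary search.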

AI can now ‘listen’ to machines to tell if they’re breaking down — by Rebecca Campbell

Excerpt:

Sound is everywhere, even when you can’t hear it.

It is this noiseless sound, though, that says a lot about how machines function.

Helsinki-based Noiseless Acoustics and Amsterdam-based OneWatt are relying on artificial intelligence (AI) to better understand the sound patterns of troubled machines. Through AI they are enabling faster and easier problem detection.

 

Making sound visible even when it can’t be heard. With the aid of non-invasive sensors, machine learning algorithms, and predictive maintenance solutions, failing components can be recognized at an early stage before they become a major issue.
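At its simplest, this kind of acoustic monitoring compares a machine's current sound signature against a healthy baseline. The sketch below uses a bare RMS-energy threshold as a stand-in for the trained models Noiseless Acoustics and OneWatt actually use; all numbers are invented:

```python
import math

def rms(samples):
    """Root-mean-square energy of an audio frame."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def is_anomalous(frame, baseline_rms, tolerance=0.5):
    """Flag a frame whose energy deviates from the healthy baseline by more
    than `tolerance` (as a fraction of the baseline). Real products replace
    this crude threshold with models trained on labeled failure recordings."""
    return abs(rms(frame) - baseline_rms) / baseline_rms > tolerance

healthy = [0.1, -0.1, 0.12, -0.09, 0.11, -0.1]
failing = [0.4, -0.5, 0.45, -0.38, 0.5, -0.42]
baseline = rms(healthy)
print(is_anomalous(healthy, baseline))  # False
print(is_anomalous(failing, baseline))  # True
```

The "noiseless sound" point from the article carries over directly: the sensor feed need not be audible to humans for its energy profile to shift when a component starts to fail.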

Chinese university uses facial recognition for campus entry — from cr80news.com by Andrew Hudson

Excerpt:

A number of higher education institutions in China have deployed biometric solutions for access and payments in recent months, and adding to the list is Peking University. The university has now installed facial recognition readers at perimeter access gates to control access to its Beijing campus.

As reported by the South China Morning Post, anyone attempting to enter through the southwestern gate of the university will no longer have to provide a student ID card. Starting this month, students will present their faces to a camera as part of a trial run of the system ahead of full-scale deployment.

From DSC:
I’m not sure I like this one at all — and the direction that this is going in. 

Will We Use Big Data to Solve Big Problems? Why Emerging Technology is at a Crossroads — from blog.hubspot.com by Justin Lee

Excerpt:

How can we get smarter about machine learning?
As I said earlier, we’ve reached an important crossroads. Will we use new technologies to improve life for everyone, or to fuel the agendas of powerful people and organizations?

I certainly hope it’s the former. Few of us will run for president or lead a social media empire, but we can all help to move the needle.

Consume information with a critical eye.
Most people won’t stop using Facebook, Google, or social media platforms, so proceed with a healthy dose of skepticism. Remember that the internet can never be objective. Ask questions and come to your own conclusions.

Get your headlines from professional journalists.
Seek credible outlets for news about local, national and world events. I rely on the New York Times and the Wall Street Journal. You can pick your own sources, but don’t trust that the “article” your Aunt Marge just posted on Facebook is legit.

Computers that never forget a face — from Future Today Institute

Excerpts:

In August, U.S. Customs and Border Protection will roll out new technology that will scan the faces of drivers as they enter and leave the United States. For years, accomplishing that kind of surveillance through a car windshield has been difficult. But technology is quickly advancing. This system, activated by ambient light sensors, range finders and remote speedometers, uses smart cameras and AI-powered facial recognition technology to compare images in government files with people behind the wheel.

Biometric borders are just the beginning. Faceprints are quickly becoming our new fingerprints, and this technology is marching forward with haste. Faceprints are now so advanced that machine learning algorithms can recognize your unique musculatures and bone structures, capillary systems, and expressions using thousands of data points. All the features that make up a unique face are being scanned, captured and analyzed to accurately verify identities. New hairstyle? Plastic surgery? They don’t interfere with the technology’s accuracy.

Why you should care. Faceprints are already being used across China for secure payments. Soon, they will be used to customize and personalize your digital experiences. Our Future Today Institute modeling shows myriad near-future applications, including the ability to unlock your smart TV with your face. Retailers will use your face to personalize your in-store shopping experience. Auto manufacturers will start using faceprints to detect if drivers are under the influence of drugs or alcohol and prevent them from driving. It’s plausible that cars will soon detect if a driver is distracted and take the wheel using an auto-pilot feature. On a diet but live with others? Stash junk food in a drawer and program the lock to restrict your access. Faceprints will soon create opportunities for a wide range of sectors, including military, law enforcement, retail, manufacturing and security. But as with all technology, faceprints could lead to the loss of privacy and widespread surveillance.

It’s possible for both risk and opportunity to coexist. The point here is not alarmist hand-wringing, or pointless calls for cease-and-desist demands on the development and use of faceprint technology. Instead, it’s to acknowledge an important emerging trend––faceprints––and to think about the associated risks and opportunities for you and your organization well in advance. Approach biometric borders and faceprints with your (biometrically unique) eyes wide open.
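The "thousands of data points" the excerpt mentions are typically distilled into a numeric embedding, and two faces are matched by comparing embeddings. A minimal sketch with toy four-dimensional vectors (real embeddings from a deep network run to hundreds of dimensions, and the acceptance threshold is tuned per system):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify(probe, enrolled, threshold=0.9):
    """Accept if the probe embedding is close enough to the enrolled one.
    A new hairstyle barely moves the embedding, which is why such changes
    don't defeat the technology."""
    return cosine_similarity(probe, enrolled) >= threshold

enrolled = [0.12, 0.85, -0.33, 0.41]
same_person = [0.10, 0.83, -0.30, 0.44]   # slight pose/lighting change
stranger = [-0.52, 0.11, 0.74, -0.2]
print(verify(same_person, enrolled))  # True
print(verify(stranger, enrolled))     # False
```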

Near-Futures Scenarios (2018 – 2028):

Optimistic: Faceprints make us safer, and they bring us back to physical offices and stores.

Pragmatic: As faceprint adoption grows, legal challenges mount. 
In April, a U.S. federal judge ruled that Facebook must confront a class-action lawsuit alleging its faceprint technology violates Illinois state privacy laws. Last year, a U.S. federal judge allowed a class-action suit against Shutterfly to go forward, claiming the company violated the Illinois Biometric Information Privacy Act, which requires companies to obtain written releases before collecting biometric data, including faces. Companies and device manufacturers, early developers but late to analyzing legal outcomes, are challenged to balance consumer privacy with new security benefits.

Catastrophic: Faceprints are used for widespread surveillance and authoritarian control.

How AI is helping sports teams scout star players — from nbcnews.com by Edd Gent
Professional baseball, basketball and hockey are among the sports now using AI to supplement traditional coaching and scouting.

Preparing students for the workplace of the future — from educationdive.com by Shalina Chatlani

Excerpt:

The workplace of the future will be marked by unprecedentedly advanced technologies, as well as a focus on incorporating artificial intelligence to drive higher levels of production with fewer resources. Employers and education stakeholders, noting the reality of this trend, are turning a reflective eye toward current students and questioning whether they will be workforce ready in the years to come.

This has become a significant concern for higher education executives, who find their business models could be disrupted as they fail to meet workforce demands. A 2018 Gallup-Northeastern University survey shows that of 3,297 U.S. citizens interviewed, only 22% with a bachelor’s degree said their education left them “well” or “very well prepared” to use AI in their jobs.

In his book “Robot-Proof: Higher Education in the Age of Artificial Intelligence,” Northeastern University President Joseph Aoun argued that for higher education to adapt to advanced technologies, it has to focus on life-long learning, which he says prepares students for the future by fostering purposeful integration of technical literacies, such as coding and data literacy, with human literacies, such as creativity, ethics, cultural agility and entrepreneurship.

“When students combine these literacies with experiential components, they integrate their knowledge with real life settings, leading to deep learning,” Aoun told Forbes.

 

 

Amazon’s A.I. camera could help people with memory loss recognize old friends and family — from cnbc.com by Christina Farr

  • Amazon’s DeepLens is a smart camera that can recognize objects in front of it.
  • One software engineer, Sachin Solkhan, is trying to figure out how to use it to help people with memory loss.
  • Users would carry the camera to help them recognize people they know.

 

 

Microsoft acquired an AI startup that helps it take on Google Duplex — from qz.com by Dave Gershgorn

Excerpt:

We’re going to talk to our technology, and everyone else’s too. Google proved that earlier this month with a demonstration of artificial intelligence that can hop on the phone to book a restaurant reservation or appointment at the hair salon.

Now it’s just a matter of who can build that technology fastest. To reach that goal, Microsoft has acquired conversational AI startup Semantic Machines for an undisclosed amount. Founded in 2014, the startup’s goal was to build AI that can converse with humans through speech or text, with the ability to be trained to converse on any language or subject.

 

 

Researchers developed an AI to detect DeepFakes — from thenextweb.com by Tristan Greene

Excerpt:

A team of researchers from the State University of New York (SUNY) recently developed a method for detecting whether the people in a video are AI-generated. It looks like DeepFakes could meet its match.

What it means: Fear over whether computers will soon be able to generate videos that are indistinguishable from real footage may be much ado about nothing, at least with the currently available methods.

The SUNY team observed that the training method for creating AI that makes fake videos involves feeding it images – not video. This means that certain human physiological quirks – like breathing and blinking – don’t show up in computer-generated videos. So they decided to build an AI that uses computer vision to detect blinking in fake videos.
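A common building block for this kind of blink detection is the eye aspect ratio (EAR), computed from eye landmarks, which collapses when the eyelid closes. Below is a sketch, assuming six landmarks per eye from some facial landmark detector; the coordinates and threshold are illustrative, not the SUNY team's exact method:

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, as produced by a facial
    landmark detector. The ratio drops sharply when the eyelid closes."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, threshold=0.2):
    """Count closed-then-open transitions across a clip. A clip of plausible
    length with zero blinks is a hint the footage may be synthetic."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < threshold and not closed:
            closed = True
        elif ear >= threshold and closed:
            blinks += 1
            closed = False
    return blinks

print(count_blinks([0.3, 0.31, 0.12, 0.1, 0.29, 0.3]))   # 1
print(count_blinks([0.3, 0.31, 0.30, 0.29, 0.3, 0.31]))  # 0
```

The second sequence, with no dip in EAR, is the tell-tale pattern the article describes: faces generated from still images never close their eyes.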

 

 

Bringing It Down To Earth: Four Ways Pragmatic AI Is Being Used Today — from forbes.com by Carlos Melendez

Excerpt:

Without even knowing it, we are interacting with pragmatic AI day in and day out. It is used in the automated chatbots that answer our calls and questions and the customer service rep that texts with us on a retail site, providing a better and faster customer experience.

Below are four key categories of pragmatic AI and ways they are being applied today.

1. Speech Recognition And Natural Language Processing (NLP)
2. Predictive Analytics
3. Image Recognition And Computer Vision
4. Self-Driving Cars And Robots

 

 

Billable Hour ‘Makes No Sense’ in an AI World — from biglawbusiness.com by Helen Gunnarsson

Excerpt:

Artificial intelligence (AI) is transforming the practice of law, and “data is the new oil” of the legal industry, panelist Dennis Garcia said at a recent American Bar Association conference. Garcia is an assistant general counsel for Microsoft in Chicago. Robert Ambrogi, a Massachusetts lawyer and blogger who focuses on media, technology, and employment law, moderated the program.

“The next generation of lawyers is going to have to understand how AI works” as part of the duty of competence, panelist Anthony E. Davis told the audience. Davis is a partner with Hinshaw & Culbertson LLP in New York.

Davis said AI will result in dramatic changes in law firms’ hiring and billing, among other things. The hourly billing model, he said, “makes no sense in a universe where what clients want is judgment.” Law firms should begin to concern themselves not with the degrees or law schools attended by candidates for employment but with whether they are “capable of developing judgment, have good emotional intelligence, and have a technology background so they can be useful” for long enough to make hiring them worthwhile, he said.

 

 

Deep Learning Tool Tops Dermatologists in Melanoma Detection — from healthitanalytics.com
A deep learning tool achieved greater accuracy than dermatologists when detecting melanoma in dermoscopic images.

 

 

Apple’s plans to bring AI to your phone — from wired.com by Tom Simonite

Excerpt:

HomeCourt is built on tools announced by Federighi last summer, when he launched Apple’s bid to become a preferred playground for AI-curious developers. Known as Core ML, those tools help developers who’ve trained machine learning algorithms deploy them on Apple’s mobile devices and PCs.

At Apple’s Worldwide Developer Conference on Monday, Federighi revealed the next phase of his plan to enliven the app store with AI. It’s a tool called Create ML that’s something like a set of training wheels for building machine learning models in the first place. In a demo, training an image-recognition algorithm to distinguish different flavors of ice cream was as easy as dragging and dropping a folder containing a few dozen images and waiting a few seconds. In a session for developers, Apple engineers suggested Create ML could teach software to detect whether online comments are happy or angry, or predict the quality of wine from characteristics such as acidity and sugar content. Developers can use Create ML now but can’t ship apps using the technology until Apple’s latest operating systems arrive later this year.
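Create ML itself is Swift-only, but the wine-quality example can be illustrated with a rough Python analog: a tiny nearest-neighbor regressor over (acidity, sugar) features. The data, feature values, and quality scores below are invented for illustration:

```python
def knn_predict(train, query, k=3):
    """train: list of (features, target) pairs. Predict the mean target of
    the k nearest neighbors by squared Euclidean distance."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda row: sqdist(row[0], query))[:k]
    return sum(target for _, target in nearest) / k

# toy (acidity, residual sugar) -> quality score
wines = [
    ((0.30, 2.0), 6.0), ((0.32, 1.8), 6.5), ((0.28, 2.2), 6.0),
    ((0.70, 8.0), 4.0), ((0.65, 7.5), 4.5), ((0.72, 8.2), 4.0),
]
print(knn_predict(wines, (0.31, 2.1)))  # 6.166... (low-sugar cluster)
print(knn_predict(wines, (0.68, 7.9)))  # 4.166... (lower-rated cluster)
```

The drag-and-drop appeal of Create ML is that it hides exactly this kind of machinery: point it at labeled examples, and it fits and evaluates a model for you.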

Google’s robot assistant now makes eerily lifelike phone calls for you — from theguardian.com by Olivia Solon
Google Duplex contacts hair salon and restaurant in demo, adding ‘er’ and ‘mmm-hmm’ so listeners think it’s human

Excerpt:

Google’s virtual assistant can now make phone calls on your behalf to schedule appointments, make reservations in restaurants and get holiday hours.

The robotic assistant uses a very natural speech pattern that includes hesitations and affirmations such as “er” and “mmm-hmm” so that it is extremely difficult to distinguish from an actual human phone call.

The unsettling feature, which will be available to the public later this year, is enabled by a technology called Google Duplex, which can carry out “real world” tasks on the phone, without the other person realising they are talking to a machine. The assistant refers to the person’s calendar to find a suitable time slot and then notifies the user when an appointment is scheduled.

 

 

Google employees quit over the company’s military AI project — from thenextweb.com by Tristan Greene

Excerpt:

About a dozen Google employees reportedly left the company over its insistence on developing AI for the US military through a program called Project Maven. Meanwhile 4,000 others signed a petition demanding the company stop.

It looks like there’s some internal confusion over whether the company’s “Don’t Be Evil” motto covers making machine learning systems to aid warfare.

The link between big tech and defense work — from wired.com by Nitasha Tiku

Excerpt:

For months, a growing faction of Google employees has tried to force the company to drop out of a controversial military program called Project Maven. More than 4,000 employees, including dozens of senior engineers, have signed a petition asking Google to cancel the contract. Last week, Gizmodo reported that a dozen employees resigned over the project. “There are a bunch more waiting for job offers (like me) before we do so,” one engineer says. On Friday, employees communicating through an internal mailing list discussed refusing to interview job candidates in order to slow the project’s progress.

Other tech giants have recently secured high-profile contracts to build technology for defense, military, and intelligence agencies. In March, Amazon expanded its newly launched “Secret Region” cloud services supporting top-secret work for the Department of Defense. The same week that news broke of the Google resignations, Bloomberg reported that Microsoft locked down a deal with intelligence agencies. But there’s little sign of the same kind of rebellion among Amazon and Microsoft workers.

 

 

Amazon urged not to sell facial recognition tool to police — from wpxi.com by Gene Johnson

Excerpt:

SEATTLE (AP) – The American Civil Liberties Union and other privacy advocates are asking Amazon to stop marketing a powerful facial recognition tool to police, saying law enforcement agencies could use the technology to “easily build a system to automate the identification and tracking of anyone.”

The tool, called Rekognition, is already being used by at least one agency – the Washington County Sheriff’s Office in Oregon – to check photographs of unidentified suspects against a database of mug shots from the county jail, which is a common use of such technology around the country.

 

 

From DSC:
Google’s C-Suite — as well as the C-Suites at Microsoft, Amazon, and other companies — needs to be very careful these days, as they could end up losing the support/patronage of a lot of people — including more of their own employees. It’s not an easy task to know how best to build and use technologies in order to make the world a better place…to create a dream vs. a nightmare for our future. But just because we can build something, doesn’t mean we should.

 

 

The Complete Guide to Conversational Commerce | Everything you need to know. — from chatbotsmagazine.com by Matt Schlicht

Excerpt:

What is conversational commerce? Why is it such a big opportunity? How does it work? What does the future look like? How can I get started? These are the questions I’m going to answer for you right now.

The guide covers:

  • An introduction to conversational commerce.
  • Why conversational commerce is such a big opportunity.
  • Complete breakdown of how conversational commerce works.
  • Extensive examples of conversational commerce using chatbots and voicebots.
  • How artificial intelligence impacts conversational commerce.
  • What the future of conversational commerce will look like.

 

Definition: Conversational commerce is an automated technology, powered by rules and sometimes artificial intelligence, that enables online shoppers and brands to interact with one another via chat and voice interfaces.
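The "powered by rules and sometimes artificial intelligence" part of that definition can be made concrete with a toy rules-only bot. Every keyword and canned response here is invented; production bots layer trained intent models on top of rules like these:

```python
# Each rule maps required keywords to a canned response.
RULES = {
    ("order", "status"): "Your order shipped yesterday and arrives Tuesday.",
    ("return",): "You can start a return from the Orders page.",
    ("hours",): "We're open 9am-6pm, Monday through Saturday.",
}

def reply(message):
    """Return the response of the first rule whose keywords all appear in
    the message, or hand off to a human when nothing matches."""
    words = message.lower()
    for keywords, answer in RULES.items():
        if all(k in words for k in keywords):
            return answer
    return "Sorry, let me connect you with a human agent."

print(reply("What's the status of my order?"))  # order-status rule fires
print(reply("Do you sell gift cards?"))         # falls through to a human
```

The gap between this sketch and a real conversational-commerce platform is the NLP: understanding paraphrases, tracking context across turns, and completing transactions rather than just answering.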

Notes from the AI frontier: Applications and value of deep learning — from mckinsey.com by Michael Chui, James Manyika, Mehdi Miremadi, Nicolaus Henke, Rita Chung, Pieter Nel, and Sankalp Malhotra

Excerpt:

Artificial intelligence (AI) stands out as a transformational technology of our digital age—and its practical application throughout the economy is growing apace. For this briefing, Notes from the AI frontier: Insights from hundreds of use cases (PDF–446KB), we mapped both traditional analytics and newer “deep learning” techniques and the problems they can solve to more than 400 specific use cases in companies and organizations. Drawing on McKinsey Global Institute research and the applied experience with AI of McKinsey Analytics, we assess both the practical applications and the economic potential of advanced AI techniques across industries and business functions. Our findings highlight the substantial potential of applying deep learning techniques to use cases across the economy, but we also see some continuing limitations and obstacles—along with future opportunities as the technologies continue their advance. Ultimately, the value of AI is not to be found in the models themselves, but in companies’ abilities to harness them.

It is important to highlight that, even as we see economic potential in the use of AI techniques, the use of data must always take into account concerns including data security, privacy, and potential issues of bias.

  1. Mapping AI techniques to problem types
  2. Insights from use cases
  3. Sizing the potential value of AI
  4. The road to impact and value

AI for Good — from re-work.co by Ali Shah, Head of Emerging Technology and Strategic Direction – BBC

Algorithms are making the same mistakes assessing credit scores that humans did a century ago — from qz.com by Rachel O’Dwyer

Microsoft’s meeting room of the future is wild — from theverge.com by Tom Warren
Transcription, translation, and identification

Excerpts:

Microsoft just demonstrated a meeting room of the future at the company’s Build developer conference.

It all starts with a 360-degree camera and microphone array that can detect anyone in a meeting room, greet them, and even transcribe exactly what they say in a meeting regardless of language.

Microsoft takes the meeting room scenario even further, though. The company is using its artificial intelligence tools to then act on what meeting participants say.

 

 

From DSC:
Whoa! Many things to think about here. Consider the possibilities for global/blended/online-based learning (including MOOCs) with technologies associated with translation, transcription, and identification.

 

 

Creating continuous, frictionless learning with new technologies — from clomedia.com by Karen Hebert-Maccaro
Point-of-need and on-the-job learning experiences are about to get a lot more creative.

Excerpt:

Technology has conditioned workers to expect quick and easy experiences — from Google searches to help from voice assistants — so they can get the answers they need and get back to work. While the concept of “on-demand” learning is not new, it’s been historically tough to deliver, and though most learning and development departments have linear e-learning modules or traditional classroom experiences, today’s learners are seeking more performance-adjacent, “point-of-need” models that fit into their busy, fast-paced work environments.

Enter emerging technologies. Artificial intelligence, voice interfaces and augmented reality, when applied correctly, have the potential to radically change the nature of how we learn at work. What’s more, these technologies are emerging at a consumer-level, meaning HR’s lift in implementing them into L&D may not be substantial. Consider the technologies we already use regularly — voice assistants like Alexa, Siri and Google Assistant may be available in 55 percent of homes by 2022, providing instant, seamless access to information we need on the spot. While asking a home assistant for the weather, the best time to leave the house to beat traffic or what movies are playing at a local theater might not seem to have much application in the workplace, this nonlinear, point-of-need interaction is already playing out across learning platforms.

 

Artificial intelligence, voice interfaces and augmented reality, when applied correctly, have the potential to radically change the nature of how we learn at work.

 

 

The rise of newsroom smart machines: Optimizing workflow with artificial intelligence — from mediablog.prnewswire.com by Julian Dossett

Excerpts:

As computer algorithms become more advanced, artificial intelligence (AI) increasingly has grown prominent in the workplace.  Top news organizations now use AI for a variety of newsroom tasks.

But current AI systems largely are still dependent on humans to function correctly, and the most pressing concern is understanding how to correctly operate these systems as they continue to thrive in a variety of media-related industries.

So, while [Machine Learning] systems soon will become ubiquitous in many professions, they won’t replace the professionals working in those fields for some time — rather, they will become an advanced tool that will aid in decision making. This is not to say that AI will never endanger human jobs. Automation always will find a way.

 

 

 
AI and Chatbots in Education: What Does the Future Hold? — from chatbotsmagazine.com by Robin Singh

From DSC:
While I don’t find this article to be exemplary, I post it mainly to encourage innovative thinking about how we might use some of these technologies in our future learning ecosystems.

Home Voice Control — See Josh Micro in Action!

 

 

From DSC:
Along these lines, will faculty use their voices to control their room setups (i.e., the projection, shades, room lighting, what’s shown on the LMS, etc.)?

Or will machine-to-machine communications, the Internet of Things, sensors, mobile/cloud-based apps, and the like take care of those items automatically when a faculty member walks into the room?

From DSC:
Check out the two items below regarding the use of voice with virtual assistants: one involves healthcare and the other involves education (Canvas).


1) Using Alexa to go get information from Canvas:

“Alexa, ask Canvas…”

Example questions as a student:

  • What grades am I getting in my courses?
  • What am I missing?

Example question as a teacher:

  • How many submissions do I need to grade?

See the section on asking Alexa questions…roughly between http://www.youtube.com/watch?v=e-30ixK63zE&t=38m18s and http://www.youtube.com/watch?v=e-30ixK63zE&t=46m42s

2) Why voice assistants are gaining traction in healthcare — from samsungnext.com by Pragati Verma

Excerpt (emphasis DSC):

The majority of intelligent voice assistant platforms today are built around smart speakers, such as the Amazon Echo and Google Home. But that might change soon, as several specialized devices focused on the health market are slated to be released this year.

One example is ElliQ, an elder care assistant robot from Samsung NEXT portfolio company Intuition Robotics. Powered by AI cognitive technology, it encourages an active and engaged lifestyle. Aimed at older adults aging in place, it can recognize their activity level and suggest activities, while also making it easier to connect with loved ones.

Pillo is an example of another such device. It is a robot that combines machine learning, facial recognition, video conferencing, and automation to work as a personal health assistant. It can dispense vitamins and medication, answer health and wellness questions in a conversational manner, securely sync with a smartphone and wearables, and allow users to video conference with health care professionals.

“It is much more than a smart speaker. It is HIPAA compliant and it recognizes the user; acknowledges them and delivers care plans,” said Rogers, whose company created the voice interface for the platform.

Orbita is now working with toSense’s remote monitoring necklace to track vitals and cardiac fluids as a way to help physicians monitor patients remotely. Many more seem to be on their way.

“Be prepared for several more devices like these to hit the market soon,” Rogers predicted.

 

 


From DSC:

I see the piece about Canvas and Alexa as a great example of where our future learning ecosystems are heading — in fact, it’s been a piece of my Learning from the Living [Class] Room vision for a while now. The use of voice recognition/NLP is only picking up steam; look for more of this kind of functionality in the future.

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

AWS unveils ‘Transcribe’ and ‘Translate’ machine learning services — from business-standard.com

Excerpts:

  • Amazon “Transcribe” provides grammatically correct transcriptions of audio files to allow audio data to be analyzed, indexed and searched.
  • Amazon “Translate” provides natural sounding language translation in both real-time and batch scenarios.
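For a sense of what calling these services looks like from code, here is a hedged sketch of the request shapes via boto3, as I understand that API; the bucket name, job name, and text are placeholders, and the actual network calls are left commented out:

```python
# Request parameters for the two services, built as plain dicts so the
# shape is visible; the commented lines show where the boto3 calls go.
transcribe_params = {
    "TranscriptionJobName": "meeting-2018-06-01",           # hypothetical
    "Media": {"MediaFileUri": "s3://my-bucket/meeting.mp3"},  # hypothetical
    "MediaFormat": "mp3",
    "LanguageCode": "en-US",
}
translate_params = {
    "Text": "The meeting is at noon.",
    "SourceLanguageCode": "en",
    "TargetLanguageCode": "es",
}
# import boto3
# boto3.client("transcribe").start_transcription_job(**transcribe_params)
# result = boto3.client("translate").translate_text(**translate_params)
```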

 

 

Google’s ‘secret’ smart city on Toronto’s waterfront sparks row — from bbc.com by Robin Levinson-King BBC News, Toronto

Excerpt:

The project was commissioned by the publicly funded organisation Waterfront Toronto, who put out calls last spring for proposals to revitalise the 12-acre industrial neighbourhood of Quayside along Toronto’s waterfront.

Prime Minister Justin Trudeau flew down to announce the agreement with Sidewalk Labs, which is owned by Google’s parent company Alphabet, last October, and the project has received international attention for being one of the first smart-cities designed from the ground up.

But five months later, few people have actually seen the full agreement between Sidewalk and Waterfront Toronto.

As council’s representative on Waterfront Toronto’s board, Mr Minnan-Wong is the only elected official to actually see the legal agreement in full. Not even the mayor knows what the city has signed on for.

“We got very little notice. We were essentially told ‘here’s the agreement, the prime minister’s coming to make the announcement,'” he said.

“Very little time to read, very little time to absorb.”

Now, his hands are tied – he is legally not allowed to comment on the contents of the sealed deal, but he has been vocal about his belief it should be made public.

“Do I have concerns about the content of that agreement? Yes,” he said.

“What is it that is being hidden, why does it have to be secret?”

From DSC:
Google needs to be very careful here. These days, our trust in it (and in other large tech companies) is increasingly at stake.

 

 

Addendum on 4/16/18 with thanks to Uros Kovacevic for this resource:
Human lives saved by robotic replacements — from injuryclaimcoach.com

Excerpt:

For academics and average workers alike, the prospect of automation provokes concern and controversy. As the American workplace continues to mechanize, some experts see harsh implications for employment, including the loss of 73 million jobs by 2030. Others maintain more optimism about the fate of the global economy, contending technological advances could grow worldwide GDP by more than $1.1 trillion in the next 10 to 15 years. Whatever we make of these predictions, there’s no question automation will shape the economic future of the nation – and the world.

But while these fiscal considerations are important, automation may positively affect an even more essential concern: human life. Every day, thousands of Americans risk injury or death simply by going to work in dangerous conditions. If robots replaced them, could hundreds of lives be saved in the years to come?

In this project, we studied how many fatal injuries could be averted if dangerous occupations were automated. To do so, we analyzed which fields are most deadly and the likelihood of their automation according to expert predictions. To see how automation could save Americans’ lives, keep reading.

Also related to this item:
How AI is improving the landscape of work  — from forbes.com by Laurence Bradford

Excerpts:

There have been a lot of sci-fi stories written about artificial intelligence. But now that it’s actually becoming a reality, how is it really affecting the world? Let’s take a look at the current state of AI and some of the things it’s doing for modern society.

  • Creating New Technology Jobs
  • Using Machine Learning To Eliminate Busywork
  • Preventing Workplace Injuries With Automation
  • Reducing Human Error With Smart Algorithms

From DSC:
This is clearly a pro-AI piece. Not all uses of AI are beneficial, but this article mentions several use cases where AI can make positive contributions to society.

It’s About Augmented Intelligence, not Artificial Intelligence — from informationweek.com
The adoption of AI applications isn’t about replacing workers but helping workers do their jobs better.

From DSC:
This article is also a pro-AI piece. But again, not all uses of AI are beneficial. We need to be aware of — and involved in — what is happening with AI.


Investing in an Automated Future — from clomedia.com by Mariel Tishma
Employers recognize that technological advances like AI and automation will require employees with new skills. Why are so few investing in the necessary learning?


2018 TECH TRENDS REPORT — from the Future Today Institute
Emerging technology trends that will influence business, government, education, media and society in the coming year.

Description:

The Future Today Institute’s 11th annual Tech Trends Report identifies 235 tantalizing advancements in emerging technologies—artificial intelligence, biotech, autonomous robots, green energy and space travel—that will begin to enter the mainstream and fundamentally disrupt business, geopolitics and everyday life around the world. Our annual report has garnered more than six million cumulative views, and this edition is our largest to date.

Helping organizations see change early and calculate the impact of new trends is why we publish our annual Emerging Tech Trends Report, which focuses on mid- to late-stage emerging technologies that are on a growth trajectory.

In this edition of the FTI Tech Trends Report, we’ve included several new features and sections:

  • a list and map of the world’s smartest cities
  • a calendar of events that will shape technology this year
  • detailed near-future scenarios for several of the technologies
  • a new framework to help organizations decide when to take action on trends
  • an interactive table of contents, which will allow you to more easily navigate the report from the bookmarks bar in your PDF reader


01 How does this trend impact our industry and all of its parts?
02 How might global events — politics, climate change, economic shifts — impact this trend and, as a result, our organization?
03 What are the second, third, fourth, and fifth-order implications of this trend as it evolves, both in our organization and our industry?
04 What are the consequences if our organization fails to take action on this trend?
05 Does this trend signal emerging disruption to our traditional business practices and cherished beliefs?
06 Does this trend indicate a future disruption to the established roles and responsibilities within our organization? If so, how do we reverse-engineer that disruption and deal with it in the present day?
07 How are the organizations in adjacent spaces addressing this trend? What can we learn from their failures and best practices?
08 How will the wants, needs and expectations of our consumers/constituents change as a result of this trend?
09 Where does this trend create potential new partners or collaborators for us?
10 How does this trend inspire us to think about the future of our organization?


From DSC:
After seeing the article entitled “Scientists Are Turning Alexa into an Automated Lab Helper,” I began to wonder: might Alexa be a tool to periodically schedule and provide practice tests and distributed practice on content? In the future, will there be “learning bots” that a learner can employ for such self-testing and/or distributed practice?
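The scheduling side of such a learning bot is well understood: spaced ("distributed") practice can be driven by something as simple as a Leitner system, where items answered correctly move to boxes reviewed at longer intervals and missed items return to daily review. The sketch below is a minimal illustration under those assumptions; the intervals and function names are hypothetical, not part of any real Alexa skill API.

```python
# A minimal Leitner-style scheduler a "learning bot" might use to decide
# when to re-quiz a learner on an item. Intervals are illustrative choices.

from datetime import date, timedelta

REVIEW_INTERVALS_DAYS = [1, 2, 4, 8, 16]  # box index -> days until next review

def next_review(box: int, answered_correctly: bool, today: date):
    """Return the item's new box and the date it should be practiced again."""
    if answered_correctly:
        box = min(box + 1, len(REVIEW_INTERVALS_DAYS) - 1)  # promote
    else:
        box = 0  # missed items start over with daily review
    return box, today + timedelta(days=REVIEW_INTERVALS_DAYS[box])

# A correct answer on a new item moves it to box 1, due two days out.
box, due = next_review(0, True, date(2018, 4, 16))
print(box, due)
```

A voice assistant would only need to layer question delivery and answer checking on top of a scheduler like this; the pedagogy (retrieval practice at expanding intervals) is the part that does the work.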

From page 45 of the PDF available here:


Might Alexa be a tool to periodically schedule/provide practice tests & distributed practice on content?


© 2018 | Daniel Christian