Computers that never forget a face — from Future Today Institute

Excerpts:

In August, U.S. Customs and Border Protection will roll out new technology that will scan the faces of drivers as they enter and leave the United States. For years, accomplishing that kind of surveillance through a car windshield has been difficult. But technology is quickly advancing. This system, activated by ambient light sensors, range finders and remote speedometers, uses smart cameras and AI-powered facial recognition technology to compare images in government files with people behind the wheel.

Biometric borders are just the beginning. Faceprints are quickly becoming our new fingerprints, and this technology is marching forward with haste. Faceprints are now so advanced that machine learning algorithms can recognize your unique musculatures and bone structures, capillary systems, and expressions using thousands of data points. All the features that make up a unique face are being scanned, captured and analyzed to accurately verify identities. New hairstyle? Plastic surgery? They don’t interfere with the technology’s accuracy.

Why you should care. Faceprints are already being used across China for secure payments. Soon, they will be used to customize and personalize your digital experiences. Our Future Today Institute modeling shows myriad near-future applications, including the ability to unlock your smart TV with your face. Retailers will use your face to personalize your in-store shopping experience. Auto manufacturers will start using faceprints to detect if drivers are under the influence of drugs or alcohol and prevent them from driving. It’s plausible that cars will soon detect if a driver is distracted and take the wheel using an auto-pilot feature. On a diet but live with others? Stash junk food in a drawer and program the lock to restrict your access. Faceprints will soon create opportunities for a wide range of sectors, including military, law enforcement, retail, manufacturing and security. But as with all technology, faceprints could lead to the loss of privacy and widespread surveillance.

It’s possible for both risk and opportunity to coexist. The point here is not alarmist hand-wringing, or pointless cease-and-desist demands on the development and use of faceprint technology. Instead, it’s to acknowledge an important emerging trend––faceprints––and to think about the associated risks and opportunities for you and your organization well in advance. Approach biometric borders and faceprints with your (biometrically unique) eyes wide open.

Near-Futures Scenarios (2018 – 2028):

Optimistic: Faceprints make us safer, and they bring us back to physical offices and stores.

Pragmatic: As faceprint adoption grows, legal challenges mount. 
In April, a U.S. federal judge ruled that Facebook must confront a class-action lawsuit that alleges its faceprint technology violates Illinois state privacy laws. Last year, a U.S. federal judge allowed a class-action suit to go forward against Shutterfly, claiming the company violated the Illinois Biometric Information Privacy Act, which requires companies to obtain written releases before collecting biometric data, including faces. Companies and device manufacturers, early developers but late to analyzing legal outcomes, are challenged to balance consumer privacy with new security benefits.

Catastrophic: Faceprints are used for widespread surveillance and authoritarian control.


How AI is helping sports teams scout star players — from nbcnews.com by Edd Gent
Professional baseball, basketball and hockey are among the sports now using AI to supplement traditional coaching and scouting.


Preparing students for workplace of the future  — from educationdive.com by Shalina Chatlani

Excerpt:

The workplace of the future will be marked by unprecedentedly advanced technologies, as well as a focus on incorporating artificial intelligence to drive higher levels of production with fewer resources. Employers and education stakeholders, noting the reality of this trend, are turning a reflective eye toward current students and questioning whether they will be workforce ready in the years to come.

This has become a significant concern for higher education executives, who find their business models could be disrupted as they fail to meet workforce demands. A 2018 Gallup-Northeastern University survey shows that of 3,297 U.S. citizens interviewed, only 22% with a bachelor’s degree said their education left them “well” or “very well prepared” to use AI in their jobs.

In his book “Robot-Proof: Higher Education in the Age of Artificial Intelligence,” Northeastern University President Joseph Aoun argued that for higher education to adapt to advanced technologies, it has to focus on lifelong learning, which he says prepares students for the future by fostering purposeful integration of technical literacies, such as coding and data literacy, with human literacies, such as creativity, ethics, cultural agility and entrepreneurship.

“When students combine these literacies with experiential components, they integrate their knowledge with real life settings, leading to deep learning,” Aoun told Forbes.


Amazon’s A.I. camera could help people with memory loss recognize old friends and family — from cnbc.com by Christina Farr

  • Amazon’s DeepLens is a smart camera that can recognize objects in front of it.
  • One software engineer, Sachin Solkhan, is trying to figure out how to use it to help people with memory loss.
  • Users would carry the camera to help them recognize people they know.


Microsoft acquired an AI startup that helps it take on Google Duplex — from qz.com by Dave Gershgorn

Excerpt:

We’re going to talk to our technology, and everyone else’s too. Google proved that earlier this month with a demonstration of artificial intelligence that can hop on the phone to book a restaurant reservation or appointment at the hair salon.

Now it’s just a matter of who can build that technology fastest. To reach that goal, Microsoft has acquired conversational AI startup Semantic Machines for an undisclosed amount. Founded in 2014, the startup’s goal was to build AI that can converse with humans through speech or text, with the ability to be trained to converse in any language or on any subject.


Researchers developed an AI to detect DeepFakes — from thenextweb.com by Tristan Greene

Excerpt:

A team of researchers from the State University of New York (SUNY) recently developed a method for detecting whether the people in a video are AI-generated. It looks like DeepFakes could meet its match.

What it means: Fear over whether computers will soon be able to generate videos that are indistinguishable from real footage may be much ado about nothing, at least with the currently available methods.

The SUNY team observed that the training method for creating AI that makes fake videos involves feeding it images – not video. This means that certain human physiological quirks – like breathing and blinking – don’t show up in computer-generated videos. So they decided to build an AI that uses computer vision to detect blinking in fake videos.
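The blink cue lends itself to a short sketch. To be clear, this is not the SUNY team's code, just a hypothetical, dependency-free illustration: assume a face-landmark detector has already produced one eye-aspect-ratio (EAR) value per video frame (a standard computer-vision measure that drops sharply when the eye closes), and flag clips whose blink rate is implausibly low for a real human.

```python
# Hypothetical sketch of blink-based DeepFake screening. Input is a list of
# per-frame eye-aspect-ratio (EAR) values from some upstream landmark
# detector; all thresholds below are illustrative, not from the paper.

def count_blinks(ear_series, closed_thresh=0.2, min_closed_frames=2):
    """Count blinks: runs of >= min_closed_frames frames with EAR below
    closed_thresh."""
    blinks = 0
    run = 0
    for ear in ear_series:
        if ear < closed_thresh:
            run += 1
        else:
            if run >= min_closed_frames:
                blinks += 1
            run = 0
    if run >= min_closed_frames:  # blink that runs to the end of the clip
        blinks += 1
    return blinks

def looks_synthetic(ear_series, fps=30, min_blinks_per_minute=2):
    """Flag a clip whose blink rate is implausibly low for a live human."""
    minutes = len(ear_series) / fps / 60
    if minutes == 0:
        return False
    return count_blinks(ear_series) / minutes < min_blinks_per_minute
```

In practice the detection problem is harder (newer generators can be trained to blink), but the sketch captures the observation: an artifact absent from the training images is absent from the generated video.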


Bringing It Down To Earth: Four Ways Pragmatic AI Is Being Used Today — from forbes.com by Carlos Melendez

Excerpt:

Without even knowing it, we are interacting with pragmatic AI day in and day out. It powers the automated chatbots that answer our calls and questions and the customer service reps that text with us on retail sites, providing a better and faster customer experience.

Below are four key categories of pragmatic AI and ways they are being applied today.

1. Speech Recognition And Natural Language Processing (NLP)
2. Predictive Analytics
3. Image Recognition And Computer Vision
4. Self-Driving Cars And Robots


Billable Hour ‘Makes No Sense’ in an AI World — from biglawbusiness.com by Helen Gunnarsson

Excerpt:

Artificial intelligence (AI) is transforming the practice of law, and “data is the new oil” of the legal industry, panelist Dennis Garcia said at a recent American Bar Association conference. Garcia is an assistant general counsel for Microsoft in Chicago. Robert Ambrogi, a Massachusetts lawyer and blogger who focuses on media, technology, and employment law, moderated the program. “The next generation of lawyers is going to have to understand how AI works” as part of the duty of competence, panelist Anthony E. Davis told the audience. Davis is a partner with Hinshaw & Culbertson LLP in New York.

Davis said AI will result in dramatic changes in law firms’ hiring and billing, among other things. The hourly billing model, he said, “makes no sense in a universe where what clients want is judgment.” Law firms should begin to concern themselves not with the degrees or law schools attended by candidates for employment but with whether they are “capable of developing judgment, have good emotional intelligence, and have a technology background so they can be useful” for long enough to make hiring them worthwhile, he said.


Deep Learning Tool Tops Dermatologists in Melanoma Detection — from healthitanalytics.com
A deep learning tool achieved greater accuracy than dermatologists when detecting melanoma in dermoscopic images.


Apple’s plans to bring AI to your phone — from wired.com by Tom Simonite

Excerpt:

HomeCourt is built on tools announced by Federighi last summer, when he launched Apple’s bid to become a preferred playground for AI-curious developers. Known as Core ML, those tools help developers who’ve trained machine learning algorithms deploy them on Apple’s mobile devices and PCs.

At Apple’s Worldwide Developer Conference on Monday, Federighi revealed the next phase of his plan to enliven the app store with AI. It’s a tool called Create ML that’s something like a set of training wheels for building machine learning models in the first place. In a demo, training an image-recognition algorithm to distinguish different flavors of ice cream was as easy as dragging and dropping a folder containing a few dozen images and waiting a few seconds. In a session for developers, Apple engineers suggested Create ML could teach software to detect whether online comments are happy or angry, or predict the quality of wine from characteristics such as acidity and sugar content. Developers can use Create ML now but can’t ship apps using the technology until Apple’s latest operating systems arrive later this year.
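Create ML itself is Swift and Xcode tooling, but the wine example gives a feel for how little such a model involves. Here is a hypothetical, dependency-free Python analogy: a one-nearest-neighbor "quality" predictor over invented (acidity, sugar) data. It stands in for the idea, not for Apple's API, and every number and label below is made up.

```python
# A toy analogy to the Create ML wine demo described above: predict a
# quality label from (acidity, residual sugar) features with a
# 1-nearest-neighbor rule. Training data is invented for illustration.
import math

TRAINING_DATA = [
    # ((acidity, residual sugar), quality label) -- hypothetical values
    ((0.30, 2.0), "good"),
    ((0.35, 1.8), "good"),
    ((0.70, 9.0), "poor"),
    ((0.65, 8.5), "poor"),
]

def predict_quality(acidity, sugar):
    """Return the label of the closest training example (Euclidean)."""
    def dist(features):
        a, s = features
        return math.hypot(a - acidity, s - sugar)
    _, label = min(TRAINING_DATA, key=lambda pair: dist(pair[0]))
    return label
```

The point of the demo is the workflow, not the math: the training logic is a few lines, and tools like Create ML hide even that behind drag-and-drop.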


Skill shift: Automation and the future of the workforce — from mckinsey.com by Jacques Bughin, Eric Hazan, Susan Lund, Peter Dahlström, Anna Wiesinger, and Amresh Subramaniam
Demand for technological, social and emotional, and higher cognitive skills will rise by 2030. How will workers and organizations adapt?

Excerpt:

Skill shifts have accompanied the introduction of new technologies in the workplace since at least the Industrial Revolution, but adoption of automation and artificial intelligence (AI) will mark an acceleration over the shifts of even the recent past. The need for some skills, such as technological as well as social and emotional skills, will rise, even as the demand for others, including physical and manual skills, will fall. These changes will require workers everywhere to deepen their existing skill sets or acquire new ones. Companies, too, will need to rethink how work is organized within their organizations.

This briefing, part of our ongoing research on the impact of technology on the economy, business, and society, quantifies time spent on 25 core workplace skills today and in the future for five European countries—France, Germany, Italy, Spain, and the United Kingdom—and the United States and examines the implications of those shifts.

Topics include:
How will demand for workforce skills change with automation?
Shifting skill requirements in five sectors
How will organizations adapt?
Building the workforce of the future


Google’s robot assistant now makes eerily lifelike phone calls for you — from theguardian.com by Olivia Solon
Google Duplex contacts hair salon and restaurant in demo, adding ‘er’ and ‘mmm-hmm’ so listeners think it’s human

Excerpt:

Google’s virtual assistant can now make phone calls on your behalf to schedule appointments, make reservations in restaurants and get holiday hours.

The robotic assistant uses a very natural speech pattern that includes hesitations and affirmations such as “er” and “mmm-hmm” so that it is extremely difficult to distinguish from an actual human phone call.

The unsettling feature, which will be available to the public later this year, is enabled by a technology called Google Duplex, which can carry out “real world” tasks on the phone, without the other person realising they are talking to a machine. The assistant refers to the person’s calendar to find a suitable time slot and then notifies the user when an appointment is scheduled.
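To make the hesitation trick concrete (and to be clear, this is a toy illustration, not how Duplex actually works), a sketch might splice a filler token into the synthesized sentence before it is spoken:

```python
# Toy illustration of disfluency insertion: drop an "er"/"um" style token
# into a sentence so synthesized speech sounds less robotic. Purely
# hypothetical; Duplex's actual approach is not public in this detail.
import random

HESITATIONS = ["er", "mmm-hmm", "um"]

def add_hesitation(sentence, rng=None):
    """Insert one hesitation token at a random word boundary."""
    rng = rng or random.Random(0)
    words = sentence.split()
    if len(words) < 2:
        return sentence
    pos = rng.randrange(1, len(words))      # never before the first word
    token = rng.choice(HESITATIONS) + ","
    return " ".join(words[:pos] + [token] + words[pos:])
```

A real system would place fillers at prosodically plausible points rather than uniformly at random, but the effect on the listener is the same: the pause patterns read as human.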


Google employees quit over the company’s military AI project — from thenextweb.com by Tristan Greene

Excerpt:

About a dozen Google employees reportedly left the company over its insistence on developing AI for the US military through a program called Project Maven. Meanwhile 4,000 others signed a petition demanding the company stop.

It looks like there’s some internal confusion over whether the company’s “Don’t Be Evil” motto covers making machine learning systems to aid warfare.


The link between big tech and defense work — from wired.com by Nitasha Tiku

Excerpt:

FOR MONTHS, A growing faction of Google employees has tried to force the company to drop out of a controversial military program called Project Maven. More than 4,000 employees, including dozens of senior engineers, have signed a petition asking Google to cancel the contract. Last week, Gizmodo reported that a dozen employees resigned over the project. “There are a bunch more waiting for job offers (like me) before we do so,” one engineer says. On Friday, employees communicating through an internal mailing list discussed refusing to interview job candidates in order to slow the project’s progress.

Other tech giants have recently secured high-profile contracts to build technology for defense, military, and intelligence agencies. In March, Amazon expanded its newly launched “Secret Region” cloud services supporting top-secret work for the Department of Defense. The same week that news broke of the Google resignations, Bloomberg reported that Microsoft locked down a deal with intelligence agencies. But there’s little sign of the same kind of rebellion among Amazon and Microsoft workers.


Amazon urged not to sell facial recognition tool to police — from wpxi.com by Gene Johnson

Excerpt:

SEATTLE (AP) – The American Civil Liberties Union and other privacy advocates are asking Amazon to stop marketing a powerful facial recognition tool to police, saying law enforcement agencies could use the technology to “easily build a system to automate the identification and tracking of anyone.”

The tool, called Rekognition, is already being used by at least one agency – the Washington County Sheriff’s Office in Oregon – to check photographs of unidentified suspects against a database of mug shots from the county jail, which is a common use of such technology around the country.


From DSC:
Google’s C-Suite — as well as the C-Suites at Microsoft, Amazon, and other companies — needs to be very careful these days, as they could end up losing the support/patronage of a lot of people — including more of their own employees. It’s not an easy task to know how best to build and use technologies in order to make the world a better place…to create a dream vs. a nightmare for our future. But just because we can build something, doesn’t mean we should.


The Complete Guide to Conversational Commerce | Everything you need to know. — from chatbotsmagazine.com by Matt Schlicht

Excerpt:

What is conversational commerce? Why is it such a big opportunity? How does it work? What does the future look like? How can I get started? These are the questions I’m going to answer for you right now.

The guide covers:

  • An introduction to conversational commerce.
  • Why conversational commerce is such a big opportunity.
  • Complete breakdown of how conversational commerce works.
  • Extensive examples of conversational commerce using chatbots and voicebots.
  • How artificial intelligence impacts conversational commerce.
  • What the future of conversational commerce will look like.

 

Definition: Conversational commerce is an automated technology, powered by rules and sometimes artificial intelligence, that enables online shoppers and brands to interact with one another via chat and voice interfaces.
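The "powered by rules" half of that definition can be sketched in a few lines. The intents and canned replies below are invented for illustration; a production bot would layer NLP, dialog state, and a real product catalog on top.

```python
# Minimal rule-based conversational-commerce sketch: route a shopper's
# message through a keyword-matched intent table. All intents and replies
# here are hypothetical.
import re

INTENTS = [
    # (trigger keywords, canned reply)
    ({"track", "order", "shipping"},
     "Your order status: please share your order number."),
    ({"return", "refund"},
     "I can start a return. Which item would you like to send back?"),
    ({"hours", "open"},
     "We're open 9am-9pm, seven days a week."),
]

FALLBACK = "Sorry, I didn't catch that. A human agent will be right with you."

def reply(message):
    """Return the first intent whose keywords appear in the message."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    for keywords, canned in INTENTS:
        if words & keywords:
            return canned
    return FALLBACK
```

The fallback line is the important design choice: rule-based commerce bots hand off to a human the moment the rules run out, which is also where the "sometimes artificial intelligence" part of the definition comes in.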


Notes from the AI frontier: Applications and value of deep learning — from mckinsey.com by Michael Chui, James Manyika, Mehdi Miremadi, Nicolaus Henke, Rita Chung, Pieter Nel, and Sankalp Malhotra

Excerpt:

Artificial intelligence (AI) stands out as a transformational technology of our digital age—and its practical application throughout the economy is growing apace. For this briefing, Notes from the AI frontier: Insights from hundreds of use cases (PDF–446KB), we mapped both traditional analytics and newer “deep learning” techniques and the problems they can solve to more than 400 specific use cases in companies and organizations. Drawing on McKinsey Global Institute research and the applied experience with AI of McKinsey Analytics, we assess both the practical applications and the economic potential of advanced AI techniques across industries and business functions. Our findings highlight the substantial potential of applying deep learning techniques to use cases across the economy, but we also see some continuing limitations and obstacles—along with future opportunities as the technologies continue their advance. Ultimately, the value of AI is not to be found in the models themselves, but in companies’ abilities to harness them.

It is important to highlight that, even as we see economic potential in the use of AI techniques, the use of data must always take into account concerns including data security, privacy, and potential issues of bias.

  1. Mapping AI techniques to problem types
  2. Insights from use cases
  3. Sizing the potential value of AI
  4. The road to impact and value


AI for Good — from re-work.co by Ali Shah, Head of Emerging Technology and Strategic Direction – BBC


Algorithms are making the same mistakes assessing credit scores that humans did a century ago — from qz.com by Rachel O’Dwyer


Microsoft’s meeting room of the future is wild — from theverge.com by Tom Warren
Transcription, translation, and identification

Excerpts:

Microsoft just demonstrated a meeting room of the future at the company’s Build developer conference.

It all starts with a 360-degree camera and microphone array that can detect anyone in a meeting room, greet them, and even transcribe exactly what they say in a meeting regardless of language.

Microsoft takes the meeting room scenario even further, though. The company is using its artificial intelligence tools to then act on what meeting participants say.


From DSC:
Whoa! Many things to think about here. Consider the possibilities for global/blended/online-based learning (including MOOCs) with technologies associated with translation, transcription, and identification.


Educause Releases 2018 Horizon Report Preview — from campustechnology.com by Rhea Kelly

Excerpt:

After acquiring the rights to the New Media Consortium’s Horizon project earlier this year, Educause has now published a preview of the 2018 Higher Education Edition of the Horizon Report — research that was in progress at the time of NMC’s sudden dissolution. The report covers the key technology trends, challenges and developments expected to impact higher ed in the short-, mid- and long-term future.


Amazon CEO Jeff Bezos’ personal AV tech controls the future of AV and doesn’t even realize it — from ravepubs.com by Gary Kayye

Excerpt:

But it dawned on me today that the future of our very own AV market may not be in the hands of any new product, new technology or even an AV company at all. In fact, there’s likely one AV guy (or girl) out there, today, that controls the future of AV for all of us, but doesn’t even know it, yet.

I’m talking about Jeff Bezos’ personal AV technician. Yes, that Jeff Bezos — the one who started Amazon.

Follow my logic.

The Amazon Alexa is AMAZING. Probably the most amazing thing since Apple’s iPhone. And, maybe even more so. The iPhone was revolutionary as it was a handheld phone, an email client, notes taker, voice recorder, calendar, to-do list, wrist watch and flashlight — all in one. It replaced like 10 things I was using every single day. And I didn’t even mention the camera!

Alexa seamlessly and simply connects to nearly everything you want to connect it to. And, it’s updated weekly — yes, weekly — with behind-the-scenes Friday-afternoon firmware and software upgrades. So, just when you think Alexa doesn’t do something you want it to do, she can — you just have to wait until an upcoming Friday — as someone will add that functionality. And, at any time, you can add Alexa SKILLS to yours and have third-party control of your Lutron lighting system, your shades and blinds, your HVAC, your TV, your DVR, your CableTV box, your SONOS, your home security system, your cameras and even your washer and dryer (yes, I have that functionality — even though I can’t find a use for it yet). It can even call people, play any radio station in the world, play movie previews, play Jeopardy!, play Sirius/XM radio — I mean, it can do nearly anything. It’s squarely aimed at the average consumer or home application — all to simplify your life.

But it could EASILY be upgraded to control everything. I mean everything. Projectors, digital signage networks, AV-over-IP systems, scalers, switchers, audio systems, commercial-grade lighting systems, rooms, buildings, etc. — you get the idea.


From DSC:
By the way, I wouldn’t be so sure that Bezos doesn’t realize this; there’s very little that gets by that guy.


Also see:

Crestron to Ship AirBoard Whiteboarding Solution — from ravepubs.com by Sara Abrons

Excerpt:

Crestron will soon be shipping the Crestron AirBoard PoE electronic whiteboard technology. Crestron AirBoard enables viewing of electronic whiteboard content on any display device, thereby solving the problem of meeting participants — remote participants, especially — not being able to see the whiteboard unless they’re seated with a direct line of sight.

With Crestron AirBoard, annotations can be saved and then posted, emailed, or texted to either a central web page (education applications) or to invited participants (corporate applications). Meeting participants simply choose “whiteboard” as a source on the in-room Crestron TSW or Crestron Mercury touch screen to start the session. When “end meeting” is selected, the user is prompted to save and send the file.


Home Voice Control — See Josh Micro in Action!


From DSC:
Along these lines, will faculty use their voices to control their room setups (i.e., the projection, shades, room lighting, what’s shown on the LMS, etc.)?

Or will machine-to-machine communications, the Internet of Things, sensors, mobile/cloud-based apps, and the like take care of those items automatically when a faculty member walks into the room?


An AI Bot for the Teacher — with thanks to Karthik Reddy for this resource

Artificial intelligence is the stuff of science fiction – if you are old enough, you will remember the Terminator movies from a good few years ago, in which mankind was systematically wiped out by computers.

The truth is that AI, though not quite at Terminator level yet, is already a fact and something that most of us have encountered already. If you have ever used the virtual assistant on your phone or the Ask Google feature, you have used AI.

Some companies are using it as part of their sales and marketing strategies. An interesting example is Lowe’s Home Improvement, which, instead of chatbots, uses actual robots in its physical stores. These robots are capable of helping customers locate products that they’re interested in, taking a lot of the guesswork out of the entire shopping experience.

Of course, there are a lot of different potential applications for AI that are very interesting. Imagine an AI teaching assistant, for example: it could help grade papers, fact-check and assist with lesson planning, all to make our harassed teachers’ lives a little easier.

Chatbots could be programmed as tutors to help kids better understand core topics if they are struggling with them, ensuring that they don’t hold the rest of the class up. And, for kids who have a real affinity with the subject, help them learn more about what they are interested in.

It could also help enhance long-distance training. Imagine if your students could get instant answers to basic questions through a simple chatbot. Sure, if they were still not getting it, they would come to you – the chatbot cannot replace a real, live teacher, after all. But it could save you a lot of time and frustration.

Here, of course, we have only skimmed the surface of what artificial intelligence is capable of. Why not look through this infographic to see how different brands have been using this tech, and see what possible applications of it we might expect.

 

Brands that use AI to enhance marketing (infographic) 2018
From 16best.net with thanks to Karthik Reddy for this resource


The Law Firm Disrupted: Walmart Won’t Pay You to Cut and Paste — from law.com by Roy Strom
The world’s largest retailer, locked in a battle over the future of its business, has developed a tool to help make its many outside lawyers more efficient.

Excerpt (emphasis DSC):

Earlier this week, Walmart Inc. announced it would be rolling out 500 more giant vending machines in its stores to deliver online orders in seconds. The tool is designed to compete with online delivery services from Amazon.com Inc.

The world’s largest retailer also announced this week a tool that will compete (in some sense) with its outside counsel. Walmart has licensed a product from LegalMation that automatically drafts responses and initial discovery requests for employment and slip-and-fall suits filed in California. By this fall, the product should cover those cases in all 50 states.

LegalMation says it takes under two minutes to drag and drop a PDF of a suit into its product and receive a response to that case, in addition to a set of targeted requests for documents, form interrogatories and special interrogatories. That work has traditionally been handled by junior lawyers at Walmart’s outside firms, and LegalMation claims it can take them up to 10 hours to do. The savings on preparing an answer to these complaints is as much as 80 percent, LegalMation said.
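As a toy illustration of the idea (not LegalMation's actual method or templates; every keyword, template, and request below is invented), a drafting tool of this kind might first classify the complaint, then fill in boilerplate for that case type:

```python
# Hypothetical sketch of automated answer drafting: classify a complaint
# as "employment" or "slip-and-fall" by keyword overlap, then emit a
# template answer and canned discovery requests. Purely illustrative.
import re

CASE_TYPES = {
    "employment": {"terminated", "discrimination", "wages", "retaliation"},
    "slip-and-fall": {"fell", "slipped", "premises", "wet"},
}

ANSWER_TEMPLATE = "Defendant denies the allegations in this {case_type} action."

DISCOVERY = {
    "employment": ["Personnel file of plaintiff", "Wage and hour records"],
    "slip-and-fall": ["Incident report", "Premises maintenance logs"],
}

def draft_response(complaint_text):
    """Pick the case type with the most keyword hits and fill templates."""
    words = set(re.findall(r"[a-z-]+", complaint_text.lower()))
    scores = {ct: len(words & kws) for ct, kws in CASE_TYPES.items()}
    case_type = max(scores, key=scores.get)
    return {
        "answer": ANSWER_TEMPLATE.format(case_type=case_type),
        "discovery_requests": DISCOVERY[case_type],
    }
```

The real product reportedly uses far heavier machinery (the article mentions IBM Watson plus a custom neural network for parsing legalese), but the division of labor is the same: the software drafts, the lawyer reviews.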

“You’re still reviewing the outcome and reviewing the affirmative defenses,” said LegalMation co-founder Thomas Suh, a longtime legal technology advocate. “You’re eliminating the brainless cutting and pasting.”

 

About six months after the Harvard program, Lee and Suh had drilled down on where to apply AI, and they teamed up with IBM’s Watson to build their product. They also had to develop their own neural network that they said is the “secret sauce” to LegalMation’s ability to parse legalese.  “We would not be able to do this without an AI engine like Watson, and likewise I don’t think a product like this would be doable without our neural network,” Lee said.


Also see:

Automation in the Legal Industry: How Will It Affect Recent Law School Grads? — from nationaljurist.com by Martin Pritikin

Excerpt:

A 2017 study by McKinsey Global Institute found that roughly half of all work activities globally have the potential to be automated by technology. A follow-on study (also from McKinsey in 2017) concluded that up to one-third of work activities could be displaced by 2030. What, if any, impacts do these eye-popping findings have on the future on the legal profession, especially for recent law school graduates embarking on their careers?

Recently, it was announced that ROSS, a legal research artificial intelligence platform powered in part by IBM’s Watson technology, was unveiling a new product, EVA, which will not only find applicable cases, but quality check case citations and history. As usual, this latest development has gotten people worried that human lawyers—and, in particular, recent law grads who have traditionally been tasked with legal research—may be on a path to extinction.

Obviously, no one can predict the future with certainty. But if history is any guide, these new technological developments will shift the type of work new lawyers are expected to do, but won’t necessarily eliminate it.

We may not be facing a future without lawyers. But it is going to be a future that requires lawyers to learn how to utilize technology effectively to serve their clients—something we should all welcome, not fear.

 

College of Law Announces the Launch of the Nation’s First Live Online J.D. Program — from law.syr.edu

Excerpt:

The American Bar Association has granted the Syracuse University College of Law a variance to offer a fully interactive online juris doctor program. The online J.D. program will be the first in the nation to combine real-time and self-paced online classes, on-campus residential classes, and experiential learning opportunities.


The online J.D. was subject to intense scrutiny and review by legal education experts before the College was granted the variance. Students in the online program will be taught by College of Law faculty, will be held to the same high admission and academic standards as students in the College’s residential program, and will take all courses required by its residential J.D. program.


Per Catie Chase from BestColleges.com:

As you know, online education is rapidly expanding. At BestColleges.com we believe it’s important to evaluate the latest trends in distance education and measure the impact to both students and academic institutions. This is an industry that evolves quickly and these results offer relevant, current insights we are excited to learn from and share with the online learning community.

To keep up with these trends, we surveyed 1,800 online students and university administrators and published two reports based on our findings:

  • 2018 Online Education Trends Report – Synthesizing all of the data we gathered in our study, this academic report provides a holistic look at the current state of online education and offers predictions for where it’s headed.
  • The Student’s Guide to Online Education – Most students we spoke with wished they’d known more about online education and how to choose a quality online program prior to enrolling. We built this guide as a launching point for prospective students to gain that knowledge and make informed decisions on their education.

 

In an effort to develop a broader understanding of how common perceptions of online education are changing, we added several questions for both students and school administrators to the study this year. A majority of students (79%) felt that online learning is either “better than” or “equal to” on-campus learning. They felt their employers (61%), future employers (61%), and the general public (58%) also had a similarly positive perception of online learning.


From DSC:
It is highly likely that in the very near future, the question won’t even be asked anymore what employers think of online-based learning and whether they will hire someone that’s taken a significant portion of their coursework online. They won’t have a choice. This is especially true if and when more advanced technologies and capabilities get further baked into online-based learning — i.e., truly personalized/customized learning (which most faculty members — including myself — and teachers can’t deliver), virtual reality, artificial intelligence, chatbots, personal digital assistants, Natural Language Processing (NLP), and more. 

The better question could become:

To what extent will campus-based learning be impacted when truly personalized/customized learning is offered via online-based means?

My guess?  There will continue to be a significant number of people who want to learn in a physical campus-based setting — and that’s great! But online learning will grow even more (a lot more) if truly personalized learning occurs via online-based means.


99% of administrators reported that demand for online education has increased or stayed the same over the past few years. Almost 40% of respondents plan to increase their online program budgets in the next year.


This year, 34% of schools reported that their online students are younger than in previous years, falling into the “traditional” college age range of 18-25, and even younger as high school students take college courses before graduating. Several schools noted that recent high school graduates are entering the workforce while also pursuing a college education.


From DSC:
Check out the two items below regarding the use of voice with virtual assistants: one involves education (Canvas) and the other involves healthcare.


1) Using Alexa to get information from Canvas:

“Alexa Ask Canvas…”

Example questions as a student:

  • What grades am I getting in my courses?
  • What am I missing?

Example question as a teacher:

  • How many submissions do I need to grade?

See the section on asking Alexa questions — roughly between http://www.youtube.com/watch?v=e-30ixK63zE&t=38m18s and http://www.youtube.com/watch?v=e-30ixK63zE&t=46m42s
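
For readers curious about the plumbing behind a question like “What am I missing?”, here is a rough Python sketch of the two halves of such a skill: pulling data from Canvas’s REST API and turning it into a spoken reply. This is a minimal illustration, not Instructure’s actual implementation — the institution domain and token are placeholders, and the missing-submissions endpoint and its response shape should be checked against Canvas’s API documentation.

```python
import json
from urllib.request import Request, urlopen

# Hypothetical institution domain -- substitute your school's Canvas URL.
CANVAS_BASE = "https://canvas.example.edu"

def fetch_missing_submissions(token):
    """Ask Canvas's REST API for the current user's missing assignments."""
    req = Request(
        f"{CANVAS_BASE}/api/v1/users/self/missing_submissions",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urlopen(req) as resp:
        return json.load(resp)  # a list of assignment objects

def speech_for_missing(assignments):
    """Build the sentence a voice assistant would speak back."""
    if not assignments:
        return "You have no missing assignments. Nice work!"
    # Name at most three assignments, then summarize the rest.
    names = ", ".join(a["name"] for a in assignments[:3])
    extra = len(assignments) - 3
    tail = f", and {extra} more" if extra > 0 else ""
    return f"You are missing {len(assignments)} assignments, including {names}{tail}."
```

In a real Alexa skill, an intent handler would call these two functions and hand the resulting string to the speech response; the formatting function is kept separate from the network call so it can be tested on its own.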


2) Why voice assistants are gaining traction in healthcare — from samsungnext.com by Pragati Verma

Excerpt (emphasis DSC):

The majority of intelligent voice assistant platforms today are built around smart speakers, such as the Amazon Echo and Google Home. But that might change soon, as several specialized devices focused on the health market are slated to be released this year.

One example is ElliQ, an elder care assistant robot from Samsung NEXT portfolio company Intuition Robotics. Powered by AI cognitive technology, it encourages an active and engaged lifestyle. Aimed at older adults aging in place, it can recognize their activity level and suggest activities, while also making it easier to connect with loved ones.

Pillo is an example of another such device. It is a robot that combines machine learning, facial recognition, video conferencing, and automation to work as a personal health assistant. It can dispense vitamins and medication, answer health and wellness questions in a conversational manner, securely sync with a smartphone and wearables, and allow users to video conference with health care professionals.

“It is much more than a smart speaker. It is HIPAA compliant and it recognizes the user; acknowledges them and delivers care plans,” said Rogers, whose company created the voice interface for the platform.

Orbita is now working with toSense’s remote monitoring necklace to track vitals and cardiac fluids as a way to help physicians monitor patients remotely. Many more seem to be on their way.

“Be prepared for several more devices like these to hit the market soon,” Rogers predicted.


From DSC:

I see the piece about Canvas and Alexa as a great example of where our future learning ecosystems are heading — in fact, this kind of functionality has been part of my Learning from the Living [Class] Room vision for a while now. The use of voice recognition/NLP is only picking up steam; look for more of it in the future.

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

AWS unveils ‘Transcribe’ and ‘Translate’ machine learning services — from business-standard.com

Excerpts:

  • Amazon “Transcribe” provides grammatically correct transcriptions of audio files to allow audio data to be analyzed, indexed and searched.
  • Amazon “Translate” provides natural sounding language translation in both real-time and batch scenarios.
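
As a rough illustration of how a developer might call the “Translate” service, here is a minimal Python (boto3) sketch. It is not an official example: the per-request size limit, the chunk-on-sentence strategy, and the language codes are assumptions to verify against AWS’s documentation.

```python
def chunk_text(text, limit=10_000):
    """Split text into pieces under `limit` UTF-8 bytes, breaking between
    sentences, so each piece fits in one translate_text request.
    (A single sentence longer than the limit is left as one chunk.)"""
    chunks, current = [], ""
    for sentence in text.split(". "):
        piece = sentence if not current else ". " + sentence
        if current and len((current + piece).encode("utf-8")) > limit:
            chunks.append(current)
            current = sentence
        else:
            current += piece
    if current:
        chunks.append(current)
    return chunks

def translate_long_text(text, source="en", target="es"):
    """Translate arbitrarily long text chunk by chunk with Amazon Translate."""
    import boto3  # requires AWS credentials to be configured
    client = boto3.client("translate")
    pieces = []
    for chunk in chunk_text(text):
        resp = client.translate_text(
            Text=chunk,
            SourceLanguageCode=source,
            TargetLanguageCode=target,
        )
        pieces.append(resp["TranslatedText"])
    return " ".join(pieces)
```

The chunking helper is deliberately separate from the API call so the size-limit logic can be tested without AWS credentials.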


Google’s ‘secret’ smart city on Toronto’s waterfront sparks row — from bbc.com by Robin Levinson-King BBC News, Toronto

Excerpt:

The project was commissioned by the publicly funded organisation Waterfront Toronto, which put out calls last spring for proposals to revitalise the 12-acre industrial neighbourhood of Quayside along Toronto’s waterfront.

Prime Minister Justin Trudeau flew down to announce the agreement with Sidewalk Labs, which is owned by Google’s parent company Alphabet, last October, and the project has received international attention for being one of the first smart-cities designed from the ground up.

But five months later, few people have actually seen the full agreement between Sidewalk and Waterfront Toronto.

As council’s representative on Waterfront Toronto’s board, Mr Minnan-Wong is the only elected official to actually see the legal agreement in full. Not even the mayor knows what the city has signed on for.

“We got very little notice. We were essentially told ‘here’s the agreement, the prime minister’s coming to make the announcement,'” he said.

“Very little time to read, very little time to absorb.”

Now, his hands are tied – he is legally not allowed to comment on the contents of the sealed deal, but he has been vocal about his belief it should be made public.

“Do I have concerns about the content of that agreement? Yes,” he said.

“What is it that is being hidden, why does it have to be secret?”

From DSC:
Google needs to be very careful here. Increasingly these days, our trust in Google (and in other large tech companies) is at stake.


Addendum on 4/16/18 with thanks to Uros Kovacevic for this resource:
Human lives saved by robotic replacements — from injuryclaimcoach.com

Excerpt:

For academics and average workers alike, the prospect of automation provokes concern and controversy. As the American workplace continues to mechanize, some experts see harsh implications for employment, including the loss of 73 million jobs by 2030. Others maintain more optimism about the fate of the global economy, contending technological advances could grow worldwide GDP by more than $1.1 trillion in the next 10 to 15 years. Whatever we make of these predictions, there’s no question automation will shape the economic future of the nation – and the world.

But while these fiscal considerations are important, automation may positively affect an even more essential concern: human life. Every day, thousands of Americans risk injury or death simply by going to work in dangerous conditions. If robots replaced them, could hundreds of lives be saved in the years to come?

In this project, we studied how many fatal injuries could be averted if dangerous occupations were automated. To do so, we analyzed which fields are most deadly and the likelihood of their automation according to expert predictions. To see how automation could save Americans’ lives, keep reading.

Also related to this item:
How AI is improving the landscape of work — from forbes.com by Laurence Bradford

Excerpts:

There have been a lot of sci-fi stories written about artificial intelligence. But now that it’s actually becoming a reality, how is it really affecting the world? Let’s take a look at the current state of AI and some of the things it’s doing for modern society.

  • Creating New Technology Jobs
  • Using Machine Learning To Eliminate Busywork
  • Preventing Workplace Injuries With Automation
  • Reducing Human Error With Smart Algorithms

From DSC:
This is clearly a pro-AI piece. Not all uses of AI are beneficial, but this article mentions several use cases where AI can make positive contributions to society.


It’s About Augmented Intelligence, not Artificial Intelligence — from informationweek.com
The adoption of AI applications isn’t about replacing workers but helping workers do their jobs better.


From DSC:
This article is also a pro-AI piece. But again, not all uses of AI are beneficial. We need to be aware of — and involved in — what is happening with AI.


Investing in an Automated Future — from clomedia.com by Mariel Tishma
Employers recognize that technological advances like AI and automation will require employees with new skills. Why are so few investing in the necessary learning?


AI Helps Figure Out Where Students Go Wrong on Math Problems — from steamuniverse.com by Joshua Bolkan

Excerpt:

Researchers at Cornell University are working on software that will help math teachers understand what their students were thinking that led them to finding incorrect answers.

Erik Andersen, assistant professor of computer science at Cornell, said that teachers spend a lot of time grading math homework because grading is more complicated than just marking an answer as right or wrong.

“What the teachers are spending a lot of time doing is assigning partial credit and working individually to figure out what students are doing wrong,” Andersen said in a prepared statement. “We envision a future in which educators spend less time trying to reconstruct what their students are thinking and more time working directly with their students.”



2018 TECH TRENDS REPORT — from the Future Today Institute
Emerging technology trends that will influence business, government, education, media and society in the coming year.

Description:

The Future Today Institute’s 11th annual Tech Trends Report identifies 235 tantalizing advancements in emerging technologies—artificial intelligence, biotech, autonomous robots, green energy and space travel—that will begin to enter the mainstream and fundamentally disrupt business, geopolitics and everyday life around the world. Our annual report has garnered more than six million cumulative views, and this edition is our largest to date.

Helping organizations see change early and calculate the impact of new trends is why we publish our annual Emerging Tech Trends Report, which focuses on mid- to late-stage emerging technologies that are on a growth trajectory.

In this edition of the FTI Tech Trends Report, we’ve included several new features and sections:

  • a list and map of the world’s smartest cities
  • a calendar of events that will shape technology this year
  • detailed near-future scenarios for several of the technologies
  • a new framework to help organizations decide when to take action on trends
  • an interactive table of contents, which will allow you to more easily navigate the report from the bookmarks bar in your PDF reader

Ten questions the report suggests asking about each trend:

01 How does this trend impact our industry and all of its parts?
02 How might global events — politics, climate change, economic shifts — impact this trend and, as a result, our organization?
03 What are the second, third, fourth, and fifth-order implications of this trend as it evolves, both in our organization and our industry?
04 What are the consequences if our organization fails to take action on this trend?
05 Does this trend signal emerging disruption to our traditional business practices and cherished beliefs?
06 Does this trend indicate a future disruption to the established roles and responsibilities within our organization? If so, how do we reverse-engineer that disruption and deal with it in the present day?
07 How are the organizations in adjacent spaces addressing this trend? What can we learn from their failures and best practices?
08 How will the wants, needs and expectations of our consumers/constituents change as a result of this trend?
09 Where does this trend create potential new partners or collaborators for us?
10 How does this trend inspire us to think about the future of our organization?

© 2024 | Daniel Christian