AWS unveils ‘Transcribe’ and ‘Translate’ machine learning services — from business-standard.com

Excerpts:

  • Amazon “Transcribe” provides grammatically correct transcriptions of audio files to allow audio data to be analyzed, indexed and searched.
  • Amazon “Translate” provides natural sounding language translation in both real-time and batch scenarios.
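If transcripts like these carry word-level timestamps (as speech-to-text services such as Transcribe can return), the audio becomes searchable. A minimal sketch in Python, using a simplified transcript format rather than Amazon's actual JSON schema:

```python
from collections import defaultdict

def build_index(items):
    """Map each spoken word to the times (in seconds) it occurs.

    `items` mimics, in simplified form, the word-level results a
    speech-to-text service can return.
    """
    index = defaultdict(list)
    for item in items:
        index[item["word"].lower()].append(item["start"])
    return index

# Invented example data standing in for a real transcription result.
transcript = [
    {"word": "machine", "start": 1.2},
    {"word": "learning", "start": 1.6},
    {"word": "machine", "start": 7.4},
]

index = build_index(transcript)
print(index["machine"])  # every timestamp where "machine" was spoken
```

An index like this is what would let a listener jump straight to every moment a term was spoken in a recorded talk.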

Google’s ‘secret’ smart city on Toronto’s waterfront sparks row — from bbc.com by Robin Levinson-King BBC News, Toronto

Excerpt:

The project was commissioned by the publicly funded organisation Waterfront Toronto, which put out calls last spring for proposals to revitalise the 12-acre industrial neighbourhood of Quayside along Toronto’s waterfront.

Prime Minister Justin Trudeau flew down to announce the agreement with Sidewalk Labs, which is owned by Google’s parent company Alphabet, last October, and the project has received international attention for being one of the first smart cities designed from the ground up.

But five months later, few people have actually seen the full agreement between Sidewalk and Waterfront Toronto.

As council’s representative on Waterfront Toronto’s board, Mr Minnan-Wong is the only elected official to actually see the legal agreement in full. Not even the mayor knows what the city has signed on for.

“We got very little notice. We were essentially told ‘here’s the agreement, the prime minister’s coming to make the announcement,’” he said.

“Very little time to read, very little time to absorb.”

Now, his hands are tied – he is legally not allowed to comment on the contents of the sealed deal, but he has been vocal about his belief it should be made public.

“Do I have concerns about the content of that agreement? Yes,” he said.

“What is it that is being hidden, why does it have to be secret?”

From DSC:
Google needs to be very careful here. Increasingly these days, our trust in it (and in other large tech companies) is at stake.

Addendum on 4/16/18 with thanks to Uros Kovacevic for this resource:
Human lives saved by robotic replacements — from injuryclaimcoach.com

Excerpt:

For academics and average workers alike, the prospect of automation provokes concern and controversy. As the American workplace continues to mechanize, some experts see harsh implications for employment, including the loss of 73 million jobs by 2030. Others maintain more optimism about the fate of the global economy, contending technological advances could grow worldwide GDP by more than $1.1 trillion in the next 10 to 15 years. Whatever we make of these predictions, there’s no question automation will shape the economic future of the nation – and the world.

But while these fiscal considerations are important, automation may positively affect an even more essential concern: human life. Every day, thousands of Americans risk injury or death simply by going to work in dangerous conditions. If robots replaced them, could hundreds of lives be saved in the years to come?

In this project, we studied how many fatal injuries could be averted if dangerous occupations were automated. To do so, we analyzed which fields are most deadly and the likelihood of their automation according to expert predictions. To see how automation could save Americans’ lives, keep reading.

Also related to this item:
How AI is improving the landscape of work — from forbes.com by Laurence Bradford

Excerpts:

There have been a lot of sci-fi stories written about artificial intelligence. But now that it’s actually becoming a reality, how is it really affecting the world? Let’s take a look at the current state of AI and some of the things it’s doing for modern society.

  • Creating New Technology Jobs
  • Using Machine Learning To Eliminate Busywork
  • Preventing Workplace Injuries With Automation
  • Reducing Human Error With Smart Algorithms

From DSC:
This is clearly a pro-AI piece. Not all uses of AI are beneficial, but this article mentions several use cases where AI can make positive contributions to society.

It’s About Augmented Intelligence, not Artificial Intelligence — from informationweek.com
The adoption of AI applications isn’t about replacing workers but helping workers do their jobs better.

 

From DSC:
This article is also a pro-AI piece. But again, not all uses of AI are beneficial. We need to be aware of — and involved in — what is happening with AI.

Investing in an Automated Future — from clomedia.com by Mariel Tishma
Employers recognize that technological advances like AI and automation will require employees with new skills. Why are so few investing in the necessary learning?

2018 TECH TRENDS REPORT — from the Future Today Institute
Emerging technology trends that will influence business, government, education, media and society in the coming year.

Description:

The Future Today Institute’s 11th annual Tech Trends Report identifies 235 tantalizing advancements in emerging technologies—artificial intelligence, biotech, autonomous robots, green energy and space travel—that will begin to enter the mainstream and fundamentally disrupt business, geopolitics and everyday life around the world. Our annual report has garnered more than six million cumulative views, and this edition is our largest to date.

Helping organizations see change early and calculate the impact of new trends is why we publish our annual Emerging Tech Trends Report, which focuses on mid- to late-stage emerging technologies that are on a growth trajectory.

In this edition of the FTI Tech Trends Report, we’ve included several new features and sections:

  • a list and map of the world’s smartest cities
  • a calendar of events that will shape technology this year
  • detailed near-future scenarios for several of the technologies
  • a new framework to help organizations decide when to take action on trends
  • an interactive table of contents, which will allow you to more easily navigate the report from the bookmarks bar in your PDF reader

01 How does this trend impact our industry and all of its parts?
02 How might global events — politics, climate change, economic shifts — impact this trend, and as a result, our organization?
03 What are the second, third, fourth, and fifth-order implications of this trend as it evolves, both in our organization and our industry?
04 What are the consequences if our organization fails to take action on this trend?
05 Does this trend signal emerging disruption to our traditional business practices and cherished beliefs?
06 Does this trend indicate a future disruption to the established roles and responsibilities within our organization? If so, how do we reverse-engineer that disruption and deal with it in the present day?
07 How are the organizations in adjacent spaces addressing this trend? What can we learn from their failures and best practices?
08 How will the wants, needs and expectations of our consumers/constituents change as a result of this trend?
09 Where does this trend create potential new partners or collaborators for us?
10 How does this trend inspire us to think about the future of our organization?

From DSC:
Why aren’t we further along with lecture recording within K-12 classrooms?

That is, I as a parent — or much better yet, our kids themselves who are still in K-12 — should be able to go online and access whatever talks/lectures/presentations were given on a particular day. When our daughter is sick and misses several days, wouldn’t it be great for her to be able to go out and see what she missed? Even if we had the time and/or the energy to do so (which we don’t), my wife and I can’t present this content to her very well. We would likely explain things differently — and perhaps incorrectly — thus, potentially muddying the waters and causing more confusion for our daughter.

There should be entry-level recording studios — such as the One Button Studio from Penn State University — in each K-12 school for teachers to record their presentations. At the end of each day, the teacher could put a checkbox next to what he/she was able to cover that day. (No rushing intended here — education is enough of a runaway train oftentimes!) That material would then be made visible/available on that day as links on an online calendar. Administrators should pay teachers extra money in the summer to record these presentations.
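The checkbox-to-calendar workflow described above could be modeled with a very small data structure. A hypothetical sketch (the class name, topics, and URLs are all invented for illustration):

```python
from collections import defaultdict
from datetime import date

class ClassCalendar:
    """Hypothetical day-by-day index of recorded lessons."""

    def __init__(self):
        self._by_day = defaultdict(list)

    def publish(self, day, topic, url):
        """Teacher checks a topic off as covered; the recording link
        becomes visible under that day on the online calendar."""
        self._by_day[day].append({"topic": topic, "url": url})

    def lessons_on(self, day):
        """What a student who was absent on `day` would see."""
        return self._by_day[day]

cal = ClassCalendar()
cal.publish(date(2018, 4, 16), "Fractions, part 2", "https://example.org/rec/123")
print(cal.lessons_on(date(2018, 4, 16)))
```

A student who missed April 16 could then pull up exactly that day's recordings, rather than relying on a parent's reconstruction of the lesson.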

Also, students could use these studios to practice their presentation and communication skills. The process is quick and easy.

I’d like to see an option — ideally via a brief voice-driven Q&A at the start of each session — that asks the person where to save the finished recording: to a thumb drive, to a previously assigned storage area in the cloud, or to both destinations.

Providing automatically generated closed captioning would be a great feature here as well, especially for English as a Second Language (ESL) students.


From DSC:
After seeing the article entitled, “Scientists Are Turning Alexa into an Automated Lab Helper,” I began to wonder…might Alexa be a tool to periodically schedule & provide practice tests & distributed practice on content? In the future, will there be “learning bots” that a learner can employ to do such self-testing and/or distributed practice?
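One well-studied way such a "learning bot" could schedule distributed practice is a Leitner-style system: items answered correctly are reviewed at widening intervals, while missed items come back quickly. A minimal sketch (the interval values are illustrative, not prescriptive):

```python
from datetime import date, timedelta

# Illustrative review intervals, in days, for Leitner boxes 0..3.
INTERVALS = [1, 3, 7, 14]

def next_review(box, answered_correctly, today):
    """Return (new_box, next_review_date) after one practice attempt."""
    if answered_correctly:
        box = min(box + 1, len(INTERVALS) - 1)  # promote, widen the gap
    else:
        box = 0  # missed items come back the next day
    return box, today + timedelta(days=INTERVALS[box])

today = date(2018, 4, 16)
print(next_review(1, True, today))   # promoted to box 2 -> reviewed in 7 days
print(next_review(1, False, today))  # demoted to box 0 -> reviewed tomorrow
```

A voice assistant skill would only need to wrap logic like this: quiz the learner, record right/wrong, and queue the item for its next review date.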

From page 45 of the PDF available here:

 

Might Alexa be a tool to periodically schedule/provide practice tests & distributed practice on content?

Scientists Are Turning Alexa into an Automated Lab Helper — from technologyreview.com by Jamie Condliffe
Amazon’s voice-activated assistant follows a rich tradition of researchers using consumer tech in unintended ways to further their work.

Excerpt:

Alexa, what’s the next step in my titration?

Probably not the first question you ask your smart assistant in the morning, but potentially the kind of query that scientists may soon be leveling at Amazon’s AI helper. Chemical & Engineering News reports that software developer James Rhodes—whose wife, DeLacy Rhodes, is a microbiologist—has created a skill for Alexa called Helix that lends a helping hand around the laboratory.

It makes sense. While most people might ask Alexa to check the news headlines, play music, or set a timer because our hands are a mess from cooking, scientists could look up melting points, pose simple calculations, or ask for an experimental procedure to be read aloud while their hands are gloved and in use.

For now, Helix is still a proof-of-concept. But you can sign up to try an early working version, and Rhodes has plans to extend its abilities…

 

Also see:

Helix

What is Artificial Intelligence, Machine Learning and Deep Learning — from geospatialworld.net by Meenal Dhande


What is the difference between AI, machine learning and deep learning? — from geospatialworld.net by Meenal Dhande

Excerpt:

In the first part of this blog series, we gave simple, elaborated definitions of artificial intelligence (AI), machine learning, and deep learning. In this second part of the series, we explain the difference between AI, machine learning, and deep learning.

You can think of artificial intelligence (AI), machine learning and deep learning as a set of matryoshka dolls, also known as Russian nesting dolls. Deep learning is a subset of machine learning, which is a subset of AI.
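The nesting-doll relationship can even be expressed literally with class inheritance; a toy illustration:

```python
class AI:
    """Any technique that makes machines act intelligently."""

class MachineLearning(AI):
    """AI that improves from data rather than hand-written rules."""

class DeepLearning(MachineLearning):
    """Machine learning built on many-layered neural networks."""

# Every deep learning system is machine learning, and every machine
# learning system is AI -- but not the other way around.
print(issubclass(DeepLearning, MachineLearning))  # True
print(issubclass(MachineLearning, AI))            # True
print(issubclass(AI, DeepLearning))               # False
```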

Chatbot for College Students: 4 Chatbot Tips Perfect for College Students — from chatbotsmagazine.com by Zevik Farkash

Excerpts:

1. Feed your chatbot with information your students don’t have.
Your institute’s website can be as elaborate as it gets, but if your students can’t find a piece of information on it, it’s as good as incomplete. Say, for example, you offer certain scholarships that students can voluntarily apply for. But the information on these scholarships is tucked away on a remote page that your students don’t access in their day-to-day usage of your site.

So Amy, a new student, has no idea that there’s a scholarship that can potentially make her course 50% cheaper. She can scour your website for details when she finds the time. Or she can ask your university’s chatbot, “Where can I find information on your scholarships?”

And the chatbot can tell her, “Here’s a link to all our current scholarships.”

The best chatbots for colleges and universities tend to be programmed with even more detail, and can actually strike up a conversation by saying things like:

“Please give me the following details so I can pull out all the scholarships that apply to you.
“Which department are you in? (Please select one.)
“Which course are you enrolled in? (Please select one.)
“Which year of study are you in? (Please select one.)
“Thank you for the details! Here’s a list of all applicable scholarships. Please visit the links for detailed information and let me know if I can be of further assistance.”

2. Let it answer all the “What do I do now?” questions.

3. Turn it into a campus guide.

4. Let it take care of paperwork.
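At its simplest, the scholarship-lookup idea in tip 1 reduces to matching a student's question against a curated FAQ. A toy sketch (real campus chatbots typically use NLP platforms rather than exact keyword matching; the keywords and answers below are invented):

```python
# Toy FAQ bot: match a student's question against known keywords.
FAQ = {
    "scholarship": "Here's a link to all our current scholarships.",
    "library": "The library is open 8am-10pm on weekdays.",
    "parking": "Student parking permits are issued at the campus office.",
}

FALLBACK = "Sorry, I don't know that yet. A staff member will follow up."

def reply(question):
    q = question.lower()
    for keyword, answer in FAQ.items():
        if keyword in q:
            return answer
    return FALLBACK

print(reply("Where can I find information on your scholarships?"))
print(reply("Is the cafeteria open?"))  # falls through to the fallback
```

The value is entirely in what you feed it: the bot surfaces pages (like that remote scholarships page) that students would otherwise never find.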

 

From DSC:
This is the sort of thing that I was trying to get at last year at the NGLS 2017 Conference.

18 Disruptive Technology Trends For 2018 — from disruptionhub.com by Rob Prevett

Excerpts:

1. Mobile-first to AI-first
A major shift in business thinking has placed Artificial Intelligence at the very heart of business strategy. 2017 saw tech giants including Google and Microsoft focus on an “AI first” strategy, leading the way for other major corporates to follow suit. Companies are demonstrating a willingness to use AI and related tools like machine learning to automate processes, reduce administrative tasks, and collect and organise data. Understanding vast amounts of information is vital in the age of mass data, and AI is proving to be a highly effective solution. Whilst AI has been vilified in the media as the enemy of jobs, many businesses have undergone a transformation in mentalities, viewing AI as enhancing rather than threatening the human workforce.

7. Voice based virtual assistants become ubiquitous
The wide uptake of home-based and virtual assistants like Alexa and Google Home has built confidence in conversational interfaces, familiarising consumers with a seamless way of interacting with tech. Amazon and Google have taken prime position between brand and customer, capitalising on conversational convenience. The further adoption of this technology will enhance personalised advertising and sales, creating a direct link between company and consumer.

5 Innovative Uses for Machine Learning — from entrepreneur.com
They’ll be coming into your life — at least your business life — sooner than you think.

Philosophers are building ethical algorithms to help control self-driving cars – from qz.com by Olivia Goldhill


Tech’s Ethical ‘Dark Side’: Harvard, Stanford and Others Want to Address It — from nytimes.com by Natasha Singer

Excerpt:

PALO ALTO, Calif. — The medical profession has an ethic: First, do no harm.

Silicon Valley has an ethos: Build it first and ask for forgiveness later.

Now, in the wake of fake news and other troubles at tech companies, universities that helped produce some of Silicon Valley’s top technologists are hustling to bring a more medicine-like morality to computer science.

This semester, Harvard University and the Massachusetts Institute of Technology are jointly offering a new course on the ethics and regulation of artificial intelligence. The University of Texas at Austin just introduced a course titled “Ethical Foundations of Computer Science” — with the idea of eventually requiring it for all computer science majors.

And at Stanford University, the academic heart of the industry, three professors and a research fellow are developing a computer science ethics course for next year. They hope several hundred students will enroll.

The idea is to train the next generation of technologists and policymakers to consider the ramifications of innovations — like autonomous weapons or self-driving cars — before those products go on sale.

The Future of Design, Part II — from 99u.adobe.com by Madeleine Morley; with thanks to Keesa V. Johnson for posting this on Twitter
For the second straight year, we asked 10 creatives to predict what is coming up in the world of design and how they will prepare for it. This year’s installment includes designing for voice-controlled tech, holograms, and the rise of the hybrid designer.

 

Design is always changing, and wider changes are often spearheaded by design itself. Now with tech and the creative industry increasingly aligning, we’re on the precipice of a truly momentous period in the history of design, something unprecedented that is difficult to predict and prepare for.

 

Excerpts:

With quickly evolving tools, tumultuous shifts in the economy, the relentless growth of the gig and freelance lifestyle, and global networks, the working landscape for young designers is a tremendously uncertain one. There’s no model to follow: The known and well-trodden career path of previous generations is overgrown.

It’s an uncertain time for design, but in its difficulty and complexity, it is an inspiring and crucial one: Those with the skills will help decide the way that innovations in tech not only look but function, too, and influence our daily lives.

Although we can’t predict the future, we can speak to those with experience who think about what’s in store. We asked each participant to give us their advice: What does their future of design look like? What will it do to the very idea of design? And how can we prepare for it?

 

Design will be for ears and not eyes.

 

We’re always getting our heads around designing for the latest technology, methodology, application, media, or format. It’s a fascinating time to be a designer. There will always be space for experts, for those who specialize in the things they are really, really good at, but for others there is the need to diversify.

 

We won’t tell stories; we’ll live them.

You can now build Amazon Music playlists using voice commands on Alexa devices — from theverge.com by Natt Garun

Excerpt:

Amazon today announced that Amazon Music listeners can now build playlists using voice commands via Alexa. For example, if they’re streaming music from an app or listening to the radio on an Alexa-enabled device, they can use voice commands to add the current song to a playlist, or start a new playlist from scratch.

 

From DSC:
I wonder how long it will be before we will be able to create and share learning-based playlists for accessing digitally-based resources…? Perhaps AI will be used to offer a set of playlists on any given topic…?

With the exponential pace of change that we’re starting to experience — plus the half-lives of information shrinking — such features could come in handy.
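Structurally, a shareable learning playlist is just an ordered list of resources with metadata, and the shrinking half-life of information suggests ranking items by freshness. A hypothetical sketch using simple exponential decay (the titles, dates, and one-year half-life are illustrative):

```python
from datetime import date

def freshness(published, today, half_life_days=365):
    """Exponential decay: after one half-life the score drops to 0.5."""
    age_days = (today - published).days
    return 0.5 ** (age_days / half_life_days)

# A tiny, invented learning playlist on a single topic.
playlist = [
    {"title": "Intro to machine learning", "published": date(2016, 4, 1)},
    {"title": "Newer deep learning survey", "published": date(2018, 1, 1)},
]

today = date(2018, 4, 16)
# Present the freshest resources first.
ranked = sorted(playlist, key=lambda r: freshness(r["published"], today), reverse=True)
for item in ranked:
    print(item["title"])
```

An AI-assembled playlist could weigh a score like this against relevance, so stale resources gradually sink rather than being hand-pruned.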

Where You’ll Find Virtual Reality Technology in 2018 — from avisystems.com by Alec Kasper-Olson

Excerpt:

The VR / AR / MR Breakdown
This year will see growth in a variety of virtual technologies and uses. There are differences and similarities between virtual, augmented, and mixed reality technologies. The technology is constantly evolving and even the terminology around it changes quickly, so you may hear variations on these terms.

Augmented reality is what was behind the Pokémon Go craze. Players could see game characters on their devices superimposed over images of their physical surroundings. Virtual features seemed to exist in the real world.

Mixed reality combines virtual features and real-life objects. So, in this way it includes AR but it also includes environments where real features seem to exist in a virtual world.

The folks over at Recode explain mixed reality this way:

In theory, mixed reality lets the user see the real world (like AR) while also seeing believable, virtual objects (like VR). And then it anchors those virtual objects to a point in real space, making it possible to treat them as “real,” at least from the perspective of the person who can see the MR experience.

And, virtual reality uses immersive technology to seemingly place a user into a simulated lifelike environment.

Where You’ll Find These New Realities
Education and research fields are at the forefront of VR and AR technologies, where an increasing number of students have access to tools. But higher education isn’t the only place you see this trend. The number of VR companies grew 250 percent between 2012 and 2017. Even the latest iPhones include augmented reality capabilities. Aside from the classroom and your pocket, here are some other places you’re likely to see VR and AR pop up in 2018.

Top AR apps that make learning fun — from bmsinnolabs.wordpress.com

Excerpt:

Here is a list of a few amazing Augmented Reality mobile apps for children:

  • Jigspace
  • Elements 4D
  • Arloon Plants
  • Math alive
  • PlanetAR Animals
  • FETCH! Lunch Rush
  • Quiver
  • Zoo Burst
  • PlanetAR Alphabets & Numbers

Here are a few VR input devices:

  • Controller Wands
  • Joysticks
  • Force Balls/Tracking Balls
  • Data Gloves
  • On-Device Control Buttons
  • Motion Platforms (Virtuix Omni)
  • Trackpads
  • Treadmills
  • Motion Trackers/Bodysuits

HTC VIVE and World Economic Forum Partner For The Future Of The “VR/AR For Impact” Initiative — from blog.vive.com by Matthew Gepp

Excerpt:

VR/AR for Impact experiences shown this week at WEF 2018 include:

  • OrthoVR aims to increase the availability of well-fitting prosthetics in low-income countries by using Virtual Reality and 3D rapid prototyping tools to increase the capacity of clinical staff without reducing quality. VR allows current prosthetists and orthosists to leverage their hands-on and embodied skills within a digital environment.
  • The Extraordinary Honey Bee is designed to help deepen our understanding of the honey bee’s struggle and learn what is at stake for humanity due to the dying global population of the honey bee. Told from a bee’s perspective, The Extraordinary Honey Bee harnesses VR to inspire change in the next generation of honey bee conservationists.
  • The Blank Canvas: Hacking Nature is an episodic exploration of the frontiers of bioengineering as taught by the leading researchers within the field. Using advanced scientific visualization techniques, the Blank Canvas will demystify the cellular and molecular mechanisms that are being exploited to drive substantial leaps such as gene therapy.
  • LIFE (Life-saving Instruction For Emergencies) is a new mobile and VR platform developed by the University of Oxford that enables all types of health worker to manage medical emergencies. Through the use of personalized simulation training and advanced learning analytics, the LIFE platform offers the potential to dramatically extend access to life-saving knowledge in low-income countries.
  • Tree is a critically acclaimed virtual reality experience to immerse viewers in the tragic fate that befalls a rainforest tree. The experience brings to light the harrowing realities of deforestation, one of the largest contributors to global warming.
  • For the Amazonian Yawanawa, ‘medicine’ has the power to travel you in a vision to a place you have never been. Hushuhu, the first woman shaman of the Yawanawa, uses VR like medicine to open a portal to another way of knowing. AWAVENA is a collaboration between a community and an artist, melding technology and transcendent experience so that a vision can be shared, and a story told of a people ascending from the edge of extinction.

Everything You Need To Know About Virtual Reality Technology — from yeppar.com

Excerpt:

Types of Virtual Reality Technology
We can categorize virtual reality technologies according to the user experience they provide:

Non-Immersive
Non-immersive simulations are the least immersive implementation of virtual reality technology.
In this kind of simulation, only a subset of the user’s senses is replicated, allowing marginal awareness of the reality outside the VR simulation. Users enter 3D virtual environments through a portal or window, using the standard HD monitors typically found on conventional desktop workstations.

Semi-Immersive
In this kind of simulation, users experience richer immersion: the user is partly, but not fully, immersed in a virtual environment. Semi-immersive simulations are based on high-performance graphical computing, often coupled with large-screen projector systems or multiple TV projections to properly simulate the user’s visuals.

Fully Immersive
Fully immersive simulations offer the most complete virtual reality experience: head-mounted displays (HMDs) and motion-sensing devices are used to simulate all of the user’s senses. The user experiences a realistic virtual environment with a wide field of view, high resolution, increased refresh rates, and high-quality visualization through the HMD.

This Is What A Mixed Reality Hard Hat Looks Like — from vrscout.com by  Alice Bonasio
A Microsoft-endorsed hard hat solution lets construction workers use holograms on site.

Excerpt:

These workers already routinely use technology such as tablets to access plans and data on site, but going from 2D to 3D at scale brings that to a whole new level. “Superimposing the digital model on the physical environment provides a clear understanding of the relations between the 3D design model and the actual work on a jobsite,” explained Olivier Pellegrin, BIM manager, GA Smart Building.

The application they are using is called Trimble Connect. It turns data into 3D holograms, which are then mapped out to scale onto the real-world environment. This gives workers an instant sense of where and how various elements will fit and exposes mistakes early on in the process.

 

Also see:

Trimble Connect for HoloLens is a mixed reality solution that improves building coordination by combining models from multiple stakeholders such as structural, mechanical and electrical trade partners. The solution provides for precise alignment of holographic data on a 1:1 scale on the job site, to review models in the context of the physical environment. Predefined views from Trimble Connect further simplify in-field use with quick and easy access to immersive visualizations of 3D data. Users can leverage mixed reality for training purposes and to compare plans against work completed. Advanced visualization further enables users to view assigned tasks and capture data with onsite measurement tools.

Trimble Connect for HoloLens is available now through the Microsoft Windows App Store. A free trial option is available enabling integration with HoloLens. Paid subscriptions support premium functionality allowing for precise on-site alignment and collaboration.

Trimble’s Hard Hat Solution for Microsoft HoloLens extends the benefits of HoloLens mixed reality into areas where increased safety requirements are mandated, such as construction sites, offshore facilities, and mining projects. The solution, which is ANSI-approved, integrates the HoloLens holographic computer with an industry-standard hard hat. Trimble’s Hard Hat Solution for HoloLens is expected to be available in the first quarter of 2018. To learn more, visit mixedreality.trimble.com.

 

From DSC:
Combining voice recognition / Natural Language Processing (NLP) with Mixed Reality should provide some excellent, powerful user experiences. Doing so could also provide some real-time understanding as well as highlight potential issues in current designs. It will be interesting to watch this space develop. If there were an issue, wouldn’t it be great to remotely ask someone to update the design and then see the updated design in real-time? (Or might there be a way to make edits via one’s voice and/or with gestures?)

I could see where these types of technologies could come in handy when designing / enhancing learning spaces.

Web-Powered Augmented Reality: a Hands-On Tutorial — from medium.com by Uri Shaked
A Guided Journey Into the Magical Worlds of ARCore, A-Frame, 3D Programming, and More!

Excerpt:

There’s been a lot of cool stuff happening lately around Augmented Reality (AR), and since I love exploring and having fun with new technologies, I thought I would see what I could do with AR and the Web — and it turns out I was able to do quite a lot!

Most AR demos are with static objects, like showing how you can display a cool model on a table, but AR really begins to shine when you start adding in animations!

With animated AR, your models come to life, and you can then start telling a story with them.

Art.com adds augmented reality art-viewing to its iOS app — from techcrunch.com by Lucas Matney

Excerpt:

If you’re in the market for some art in your house or apartment, Art.com will now let you use AR to put digital artwork up on your wall.

The company’s ArtView feature is one of the few augmented reality features that actually adds a lot to the app it’s put in. With the ARKit-enabled tech, the artwork is accurately sized so you can get a perfect idea of how your next purchase could fit on your wall. The feature can be used for the two million pieces of art on the site and can be customized with different framing types.

Experience on Demand is a must-read VR book — from venturebeat.com by Ian Hamilton

Excerpts:

Bailenson’s newest book, Experience on Demand, builds on that earlier work while focusing more clearly — even bluntly — on what we do and don’t know about how VR affects humans.

“The best way to use it responsibly is to be educated about what it is capable of, and to know how to use it — as a developer or a user — responsibly,” Bailenson wrote in the book.

Among the questions raised:

  • “How educationally effective are field trips in VR? What are the design principles that should guide these types of experiences?”
  • “How many individuals are not meeting their potential because they lack access to good instruction and learning tools?”
  • “When we consider that the subjects were made uncomfortable by the idea of administering fake electric shocks, what can we expect people will feel when they are engaging all sorts of fantasy violence and mayhem in virtual reality?”
  • “What is the effect of replacing social contact with virtual social contact over long periods of time?”
  • “How do we walk the line and leverage what is amazing about VR, without falling prey to the bad parts?”

From DSC:
Will Amazon get into delivering education/degrees? Is it working on a next-generation learning platform that could highly disrupt the world of higher education? Hmmm…time will tell.

But Amazon has a way of getting into entirely new industries. From its roots as an online bookseller, it has branched off into numerous other arenas. It has the infrastructure, talent, and the deep pockets to bring about the next generation learning platform that I’ve been tracking for years. It is only one of a handful of companies that could pull this type of endeavor off.

And now, we see articles like these:


Amazon Snags a Higher Ed Superstar — from insidehighered.com by Doug Lederman
Candace Thille, a pioneer in the science of learning, takes a leave from Stanford to help the ambitious retailer better train its workers, with implications that could extend far beyond the company.

Excerpt:

A major force in the higher education technology and learning space has quietly begun working with a major corporate force in — well, in almost everything else.

Candace Thille, a pioneer in learning science and open educational delivery, has taken a leave of absence from Stanford University for a position at Amazon, the massive (and getting bigger by the day) retailer.

Thille’s title, as confirmed by an Amazon spokeswoman: director of learning science and engineering. In that capacity, the spokeswoman said, Thille will work “with our Global Learning Development Team to scale and innovate workplace learning at Amazon.”

No further details were forthcoming, and Thille herself said she was “taking time away” from Stanford to work on a project she was “not really at liberty to discuss.”

 

Amazon is quietly becoming its own university — from qz.com by Amy Wang

Excerpt:

Jeff Bezos’ Amazon empire—which recently dabbled in home security, opened artificial intelligence-powered grocery stores, and started planning a second headquarters (and manufactured a vicious national competition out of it)—has not been idle in 2018.

The e-commerce/retail/food/books/cloud-computing/etc company made another move this week that, while nowhere near as flashy as the above efforts, tells of curious things to come. Amazon has hired Candace Thille, a leader in learning science, cognitive science, and open education at Stanford University, to be “director of learning science and engineering.” A spokesperson told Inside Higher Ed that Thille will work “with our Global Learning Development Team to scale and innovate workplace learning at Amazon”; Thille herself said she is “not really at liberty to discuss” her new project.

What could Amazon want with a higher education expert? The company already has footholds in the learning market, running several educational resource platforms. But Thille is famous specifically for her data-driven work, conducted at Stanford and Carnegie Mellon University, on nontraditional ways of learning, teaching, and training—all of which are perfect, perhaps even necessary, for the education of employees.

 


From DSC:
It could just be that Amazon is simply building its own corporate university and will stay focused on developing its own employees and its own corporate learning platform/offerings — and/or perhaps license its new platform to other corporations.

But from my perspective, Amazon continues to work on pieces of a powerful puzzle, one that could eventually involve providing learning experiences to lifelong learners:

  • Personal assistants
  • Voice recognition / Natural Language Processing (NLP)
  • The development of “skills” at an incredible pace
  • Personalized recommendation engines
  • Cloud computing and more

If Alexa were to get integrated into an AI-based platform for personalized learning — one that features up-to-date recommendation engines that can identify and point out the critical needs in the workplace for learners — higher ed had better look out! Especially if such a platform could interactively deliver (and assess) the bulk of the content, doing the heavy initial lifting of helping someone learn a particular topic.

Amazon will be able to deliver a cloud-based platform, with cloud-based learner profiles and blockchain-based technologies, at a greatly reduced cost. Think about it. No physical footprints to build and maintain, no lawns to mow, no heating bills to pay, no coaches making $X million a year, etc.  AI-driven recommendations for digital playlists. Links to the most in demand jobs — accompanied by job descriptions, required skills & qualifications, and courses/modules to take in order to master those jobs.
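To make the recommendation piece concrete: at its simplest, matching a learner profile against in-demand job descriptions is a set-overlap problem. The sketch below is purely illustrative — the job data, profile fields, and scoring are my own assumptions, not anything Amazon has announced; a real system would pull live labor-market feeds and use far richer models.

```python
# Hypothetical sketch: rank in-demand jobs for a learner by skill overlap,
# then surface the missing skills a digital playlist should target.

def rank_jobs(learner_skills, jobs):
    """Return (job title, match score, missing skills), best match first."""
    results = []
    for title, required in jobs.items():
        required = set(required)
        have = required & set(learner_skills)
        score = len(have) / len(required)   # fraction of requirements already met
        missing = sorted(required - have)   # gaps that courses/modules could close
        results.append((title, round(score, 2), missing))
    return sorted(results, key=lambda r: r[1], reverse=True)

# Invented example data -- for illustration only.
jobs = {
    "Data Analyst": ["sql", "statistics", "excel"],
    "ML Engineer": ["python", "statistics", "linear algebra", "sql"],
}
learner = ["python", "sql", "excel"]

for title, score, missing in rank_jobs(learner, jobs):
    print(title, score, missing)
```

Even this toy version shows the shape of the idea: the platform knows what you have, knows what the market wants, and recommends the modules in between.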

Such a solution would still need professors, instructional designers, multimedia specialists, copyright experts, etc., but they’ll be able to deliver up-to-date content at greatly reduced costs. That’s my bet. And that’s why I now call this potential development The New Amazon.com of Higher Education.

[Microsoft — with its purchase of LinkedIn (which had previously purchased Lynda.com) — is another such potential contender.]


AI plus human intelligence is the future of work — from forbes.com by Jeanne Meister

Excerpts:

  • 1 in 5 workers will have AI as their co-worker in 2022
  • More job roles will change than will become totally automated, so HR needs to prepare today


As we increase our personal usage of chatbots (defined as software which provides an automated, yet personalized, conversation between itself and human users), employees will soon interact with them in the workplace as well. Forward-looking HR leaders are piloting chatbots now to transform HR, and, in the process, re-imagine, re-invent, and re-tool the employee experience.

How does all of this impact HR in your organization? The following ten HR trends will matter most as AI enters the workplace…

The most visible aspect of how HR is being impacted by artificial intelligence is the change in the way companies source and recruit new hires. Most notably, IBM has created a suite of tools that use machine learning to help candidates personalize their job search experience based on the engagement they have with Watson. In addition, Watson is helping recruiters prioritize jobs more efficiently, find talent faster, and match candidates more effectively. According to Amber Grewal, Vice President, Global Talent Acquisition, “Recruiters are focusing more on identifying the most critical jobs in the business and on utilizing data to assist in talent sourcing.”

 

…as we enter 2018, the next journey for HR leaders will be to leverage artificial intelligence combined with human intelligence and create a more personalized employee experience.

 

 

From DSC:
Although I like the possibility of using machine learning to help employees navigate their careers, I have some very real concerns when we talk about using AI for talent acquisition. At this point in time, I would much rather have an experienced human being — one with a solid background in HR — reviewing my resume to see if they believe that there’s a fit for the job and/or determine whether my skills transfer over from a different position/arena or not. I don’t think we’re there yet in terms of developing effective/comprehensive enough algorithms. It may happen, but I’m very skeptical in the meantime. I don’t want to be filtered out just because I didn’t use the right keywords enough times or I used a slightly different keyword than what the algorithm was looking for.

Also, there is definitely age discrimination occurring out in today’s workplace, especially in tech-related positions. Folks in tech over the age of 30-35 — don’t lose your job! (Go check out the topic of age discrimination on LinkedIn and similar sites, and you’ll find many postings on this topic — sometimes with tens of thousands of older employees adding comments/likes to a posting.) Although I doubt that any company would allow applicants or the public to see their internally-used algorithms, how difficult would it be to filter out applicants who graduated college prior to ___ (i.e., some year that gets updated on an annual basis)? Answer? Not difficult at all. In fact, that’s at the level of a Programming 101 course.
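To underline just how little code such a filter would take, here is a hypothetical illustration of the problem — invented names and an invented cutoff, not any company’s actual screening logic:

```python
# Hypothetical illustration: an age-proxy filter is trivially easy to write,
# which is exactly why it is worth worrying about. All data here is invented.

CUTOFF_YEAR = 2005  # an arbitrary cutoff a company could quietly update each year

def passes_screen(applicant):
    """Silently drop anyone whose graduation year predates the cutoff."""
    return applicant["grad_year"] >= CUTOFF_YEAR

applicants = [
    {"name": "A", "grad_year": 1998},
    {"name": "B", "grad_year": 2015},
]
screened = [a["name"] for a in applicants if passes_screen(a)]
print(screened)  # the older graduate never reaches a human reviewer
```

Two lines of logic, and an entire cohort of applicants disappears before any human sees a resume — which is the point: the barrier to this kind of filtering is policy and ethics, not technology.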

 

 

 

Artificial intelligence is going to supercharge surveillance – from theverge.com by James Vincent
What happens when digital eyes get the brains to match?

From DSC:
“Person of Interest” comes to mind after reading this article. Person of Interest is a clever, well-done show, but still…the idea of combining surveillance w/ a superintelligent AI is a bit unnerving.


Artificial intelligence | 2018 AI predictions — from thomsonreuters.com

Excerpts:

  • AI brings a new set of rules to knowledge work
  • Newsrooms embrace AI
  • Lawyers assess the risks of not using AI
  • Deep learning goes mainstream
  • Smart cars demand even smarter humans
  • Accountants audit forward
  • Wealth managers look to AI to compete and grow


Chatbots and Virtual Assistants in L&D: 4 Use Cases to Pilot in 2018 —  from bottomlineperformance.com by Steven Boller

Excerpt:

  1. Use a virtual assistant like Amazon Alexa or Google Assistant to answer spoken questions from on-the-go learners.
  2. Answer common learner questions in a chat window or via SMS.
  3. Customize a learning path based on learners’ demographic information.
  4. Use a chatbot to assess learner knowledge.
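To give a concrete sense of use case 4 above, a knowledge-check chatbot can start out as little more than a scripted question loop that grades free-text answers. The sketch below is minimal and the questions/accepted answers are invented; production tools would add NLP-based answer matching, branching, and LMS integration.

```python
# Minimal sketch of use case 4: a chatbot that quizzes a learner and reports a score.
# Questions and accepted answers are made up for illustration.

QUIZ = [
    ("What does 'NLP' stand for?", {"natural language processing"}),
    ("Name one virtual assistant mentioned in this post.", {"alexa", "google assistant"}),
]

def grade(responses):
    """Score a list of free-text answers against the accepted-answer sets."""
    correct = sum(
        1 for (_, accepted), reply in zip(QUIZ, responses)
        if reply.strip().lower() in accepted
    )
    return correct, len(QUIZ)

correct, total = grade(["Natural Language Processing", "Siri"])
print(f"{correct}/{total}")
```

The pedagogical value comes less from the grading than from what the bot does next — re-asking missed items later, or routing the learner to the right module.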


Suncorp looks to augmented reality for insurance claims — from itnews.com.au by Ry Crozier with thanks to Woontack Woo for this resource

Excerpts:

Suncorp has revealed it is exploring image recognition and augmented reality-based enhancements for its insurance claims process, adding to the AI systems it deployed last year.

The insurer began testing IBM Watson software last June to automatically determine who is at fault in a vehicle accident.

“We are working on increasing our use of emerging technologies to assist with the insurance claim process, such as using image recognition to assess type and extent of damage, augmented reality that would enable an off-site claims assessor to discuss and assess damage, speech recognition, and obtaining telematic data from increasingly automated vehicles,” the company said.


6 important AI technologies to look out for in 2018 — from itproportal.com by  Olga Egorsheva
Will businesses and individuals finally make AI a part of their daily lives?


Microsoft Education unveils new Windows 10 devices starting at $189, Office 365 tools for personalized learning, and curricula to ignite a passion for STEM — from blogs.windows.com by Yusuf Mehdi

Excerpt:

This week at Bett, we’ll show new Windows 10 and Windows 10 S devices from Lenovo and JP, starting at just $189, providing more options for schools who don’t want to compromise on Chromebooks. We’ll add new capabilities to our free Office 365 for Education software, enabling any student to write a paper using only their voice and making it easier to access Teams via mobile devices. And we’re making STEM learning fun with a new Chemistry update to Minecraft: Education Edition and new mixed reality and video curricula from partners like BBC Worldwide Learning, LEGO® Education, PBS, NASA, and Pearson.


Starting in February, we will introduce dictation in Office 365
to help students write more easily by using their voice.

 

 

In regards to mixed reality for immersive learning:

  • Pearson – the world’s largest education company – will begin rolling out in March curriculum that will work on both HoloLens and Windows Mixed Reality immersive VR headsets. These six new applications will deliver seamless experiences across devices and further illustrate the value of immersive educational experiences.
  • We are expanding our mixed reality curriculum offerings through a new partnership with WGBH’s Bringing the Universe to America’s Classrooms project, for distribution nationally on PBS LearningMedia™. This effort brings cutting-edge Earth and Space Science content into classrooms through digital learning resources that increase student engagement with science phenomena and practices.
  • To keep up with growing demand for HoloLens in the classroom we are committed to providing affordable solutions. Starting on January 22, we are making available a limited-time academic pricing offer for HoloLens. To take advantage of the limited-time academic pricing offer, please visit, hololens.com/edupromo.


“Rise of the machines” — from January 2018 edition of InAVate magazine
AI is generating lots of buzz in other verticals, but what can AV learn from those? Tim Kridel reports.

 

 


From DSC:
Learning spaces are relevant as well in the discussion of AI and AV-related items.


 

Also in their January 2018 edition, see
an incredibly detailed project at the London Business School.

Excerpt:

A full-width frosted glass panel sits on the desk surface; above it, fixed in the ceiling, is a Wolfvision VZ-C12 visualiser. This means the teaching staff can write on the (wipe-clean) surface and the text appears directly on two 94-in screens behind them, using Christie short-throw laser 4,000-lumen projectors. When the lecturer is finished or has filled up the screen with text, the image can be saved on the intranet or via USB. Simply wipe with a cloth and start again. Not only is the technology inventive, but it allows the teaching staff to remain in face-to-face contact with the students at all times, instead of students having to stare at the back of the lecturer’s head whilst they write.

 


 

Also relevant, see:


TV is (finally) an app: The goods, the bads and the uglies for learning — from thejournal.com by Cathie Norris, Elliot Soloway

Excerpts:

Television. TV. There’s an app for that. Finally! TV — that is, live shows such as the news, specials, documentaries (and reality shows, if you must) — is now just like Candy Crush and Facebook. TV apps (e.g., DirecTV Now) are available on all devices — smartphones, tablets, laptops, Chromebooks. Accessing streams upon streams of videos is, literally, now just a tap away.

Plain and simple: readily accessible video can be a really valuable resource for learners and learning.

Not everything that needs to be learned is on video. Instruction will need to balance the use of video with the use of printed materials. That balance, of course, needs to take in cost and accessibility.

Now for the 800-pound gorilla in the room: Of course, that TV app could be a huge distraction in the classroom. The TV app has just piled yet another classroom management challenge onto a teacher’s back.

That said, it is early days for TV as an app. For example, HD (High Definition) TV demands high bandwidth — and we can experience stuttering/skipping at times. But, when 5G comes around in 2020, just two years from now, POOF, that stuttering/skipping will disappear. “5G will be as much as 1,000 times faster than 4G.”  Yes, POOF!

 

From DSC:
Learning via apps is here to stay. “TV” as apps is here to stay. But what’s being described here is but one piece of the learning ecosystem that will be built over the next 5-15 years, one that will likely be revolutionary in its global impact on how people learn and grow. There will be opportunities for social-based learning, project-based learning, and more — with digital video being one component of the ecosystem, though video alone is, and will be, insufficient to move someone through all of the levels of Bloom’s Taxonomy.

I will continue to track this developing learning ecosystem, but voice-driven personal assistants are already here. Algorithm-based recommendations are already here. Real-time language translation is already here.  The convergence of the telephone/computer/television continues to move forward.  AI-based bots will only get better in the future. Tapping into streams of up-to-date content will continue to move forward. Blockchain will likely bring us into the age of cloud-based learner profiles. And on and on it goes.

We’ll still need teachers, professors, and trainers. But this vision WILL occur. It IS where things are heading. It’s only a matter of time.

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV


CES 2018 preview — from jwtintelligence.com

Excerpt:

The Innovation Group rounds up a preview of what’s in store at CES.

Next week, the electronics industry will descend on Las Vegas for the annual Consumer Electronics Show, the country’s top showcase for technology and innovation. Vendors will show off everything from new gadgets (drones, TVs) to the future of cities and transportation (smart robots, self-driving cars).

What can attendees expect from this year’s CES? And what trends will rise to the top this year? Below, the Innovation Group rounds up our top picks.

 

 

This year, expect to see more enhanced voice and image recognition technology, allowing seamless interactions between consumer, lifestyle and retail.

 

 

 
© 2024 | Daniel Christian