Three AI and machine learning predictions for 2019 — from forbes.com by Daniel Newman

Excerpt:

What could we potentially see next year? New and innovative uses for machine learning? Further evolution of human and machine interaction? The rise of AI assistants? Let’s dig deeper into AI and machine learning predictions for the coming months.

 

2019 will be a year of development for the AI assistant, showing us just how powerful and useful these tools are. It will be in more places than your home and your pocket, too. Companies such as Kia and Hyundai are planning to include AI assistants in their vehicles starting in 2019. Sign me up for a new car! I’m sure that Google, Apple, and Amazon will continue to make advancements to their AI assistants, making our lives even easier.

 

 

DeepMind AI matches health experts at spotting eye diseases — from engadget.com by Nick Summers

Excerpt:

DeepMind has successfully developed a system that can analyze retinal scans and spot symptoms of sight-threatening eye diseases. Today, the AI division — owned by Google’s parent company Alphabet — published “early results” of a research project with the UK’s Moorfields Eye Hospital. They show that the company’s algorithms can quickly examine optical coherence tomography (OCT) scans and make diagnoses with the same accuracy as human clinicians. In addition, the system can show its workings, allowing eye care professionals to scrutinize the final assessment.

 

 

Microsoft and Amazon launch Alexa-Cortana public preview for Echo speakers and Windows 10 PCs — from venturebeat.com by Khari Johnson

Excerpt:

Microsoft and Amazon will bring Alexa and Cortana to all Echo speakers and Windows 10 users in the U.S. [on 8/15/18]. As part of a partnership between the Seattle-area tech giants, you can say “Hey Cortana, open Alexa” to Windows 10 PCs and “Alexa, open Cortana” to a range of Echo smart speakers.

The public preview, which brings the most popular AI assistant on PCs together with the smart speaker with the largest U.S. market share, will be available to most people today but will be rolled out to all users in the country over the course of the next week, a Microsoft spokesperson told VentureBeat in an email.

Each of the assistants brings unique features to the table. Cortana, for example, can schedule a meeting with Outlook, create location-based reminders, or draw on LinkedIn to tell you about people in your next meeting. And Alexa has more than 40,000 voice apps or skills made to tackle a broad range of use cases.

 

 

What Alexa can and cannot do on a PC — from venturebeat.com by Khari Johnson

Excerpt:

Whatever happened to the days of Alexa just being known as a black cylindrical speaker? Since the introduction of the first Echo in fall 2014, Amazon’s AI assistant has been embedded in a number of places, including car infotainment systems, Alexa smartphone apps, wireless headphones, Echo Show and Fire tablets, Fire TV Cube for TV control, the Echo Look with an AI-powered fashion assistant, and, in recent weeks, personal computers.

Select computers from HP, Acer, and others now make Alexa available to work seamlessly alongside Microsoft’s Cortana well ahead of the Alexa-Cortana partnership for Echo speakers and Windows 10 devices, a project that still has no launch date.

 

 

2018 NMC Horizon Report: The trends, challenges, and developments likely to influence ed tech

2018 NMC Horizon Report — from library.educause.edu

Excerpt:

What is on the five-year horizon for higher education institutions? Which trends and technology developments will drive educational change? What are the critical challenges and how can we strategize solutions? These questions regarding technology adoption and educational change steered the discussions of 71 experts to produce the NMC Horizon Report: 2018 Higher Education Edition brought to you by EDUCAUSE. This Horizon Report series charts the five-year impact of innovative practices and technologies for higher education across the globe. With more than 16 years of research and publications, the Horizon Project can be regarded as one of education’s longest-running explorations of emerging technology trends and uptake.

Six key trends, six significant challenges, and six developments in educational technology profiled in this higher education report are likely to impact teaching, learning, and creative inquiry in higher education. The three sections of this report constitute a reference and technology planning guide for educators, higher education leaders, administrators, policymakers, and technologists.

 

2018 NMC Horizon Report -- a glance at the trends, challenges, and developments likely to influence ed tech -- visual graphic

 

Also see:

 

 

10 steps to achieving active learning on a budget — from campustechnology.com by Dian Schaffhauser
Active learning often means revamping classrooms to enable more collaborative, hands-on student work — and that can be costly. Here’s how to achieve the pedagogical change without the high expense.

Excerpt:

Active learning is a great way to increase student excitement and participation, facilitate different kinds of learning activities, help people develop skills in small group work, promote discussion, boost attendance and give an outlet for technology usage that stays on track. It also requires remaking classrooms to enable that hands-on, collaborative student work — and that can often mean a six-figure price tag. But at Saint Anselm College in Manchester, NH, a $12,000 experiment proved successful enough that the institution now sports two permanent active learning classrooms as well as a brand-new active learning lab. Here are the 10 steps this school with just under 2,000 students followed on its road to active learning victory.

 


 

 

Also see:

 

 

 


 

Augmented and virtual reality mean business: Everything you need to know — from zdnet by Greg Nichols
An executive guide to the technology and market drivers behind the hype in AR, VR, and MR.

Excerpt:

Overhyped by some, drastically underestimated by others, few emerging technologies have generated as much digital ink as virtual reality (VR), augmented reality (AR), and mixed reality (MR). Still lumbering through the novelty phase and roller coaster-like hype cycles, the technologies are only just beginning to show signs of real-world usefulness with a new generation of hardware and software applications aimed at the enterprise and at end users like you. On the line is what could grow to be a $108 billion AR/VR industry as soon as 2021. Here’s what you need to know.

 

The reason is that VR environments by nature demand a user’s full attention, which makes the technology poorly suited to real-life social interaction outside a digital world. AR, on the other hand, has the potential to act as an on-call co-pilot to everyday life, seamlessly integrating into daily real-world interactions. This will become increasingly true with the development of the AR Cloud.

The AR Cloud
Described by some as the world’s digital twin, the AR Cloud is essentially a digital copy of the real world that can be accessed by any user at any time.

For example, it won’t be long before whatever device I have on me at a given time (a smartphone or wearable, for example) will be equipped to tell me all I need to know about a building just by training a camera at it (GPS is operating as a poor-man’s AR Cloud at the moment).

What the internet is for textual information, the AR Cloud will be for the visible world. Whether it will be open source or controlled by a company like Google is a hotly contested issue.

 

Augmented reality will have a bigger impact on the market and our daily lives than virtual reality — and by a long shot. That’s the consensus of just about every informed commentator on the subject.

 

 

 

Mixed reality will transform learning (and Magic Leap joins act one) — from edsurge.com by Maya Georgieva

Excerpt:

Despite all the hype in recent years about the potential for virtual reality in education, an emerging technology known as mixed reality has far greater promise in and beyond the classroom.

Unlike experiences in virtual reality, mixed reality interacts with the real world that surrounds us. Digital objects become part of the real world. They’re not just digital overlays, but interact with us and the surrounding environment.

If all that sounds like science fiction, a much-hyped device promises some of those features later this year. The device is by a company called Magic Leap, and it uses a pair of goggles to project what the company calls a “lightfield” in front of the user’s face to make it look like digital elements are part of the real world. The expectation is that Magic Leap will bring digital objects in a much more vivid, dynamic and fluid way compared to other mixed-reality devices such as Microsoft’s Hololens.

 


 

Now think about all the other things you wished you had learned this way and imagine a dynamic digital display that transforms your environment and even your living room or classroom into an immersive learning lab. It is learning within a highly dynamic and visual context infused with spatial audio cues reacting to your gaze, gestures, gait, voice and even your heartbeat, all referenced with your geo-location in the world. Unlike what happens with VR, where our brain is tricked into believing the world and the objects in it are real, MR recognizes and builds a map of your actual environment.

 

 

 

Also see:

virtualiteach.com
Exploring The Potential for the Vive Focus in Education

 


 

 

 

Digital Twins Doing Real World Work — from stambol.com

Excerpt:

On the big screen it’s become commonplace to see a 3D rendering or holographic projection of an industrial floor plan or a mechanical schematic. Casual viewers might take for granted that the technology is science fiction and many years away from reality. But today we’re going to outline where these sophisticated virtual replicas – Digital Twins – are found in the real world, here and now. Essentially, we’re talking about a responsive simulated duplicate of a physical object or system. When we first wrote about Digital Twin technology, we mainly covered industrial applications and urban infrastructure like transit and sewers. However, the full scope of their presence is much broader, so now we’re going to break it up into categories.

 


 

Digital twin — from Wikipedia

Digital twin refers to a digital replica of physical assets (physical twin), processes and systems that can be used for various purposes.[1] The digital representation provides both the elements and the dynamics of how an Internet of Things device operates and lives throughout its life cycle.[2]

Digital twins integrate artificial intelligence, machine learning and software analytics with data to create living digital simulation models that update and change as their physical counterparts change. A digital twin continuously learns and updates itself from multiple sources to represent its near real-time status, working condition or position. This learning system learns from itself, using sensor data that conveys various aspects of its operating condition; from human experts, such as engineers with deep and relevant industry domain knowledge; from other similar machines; from other similar fleets of machines; and from the larger systems and environment of which it may be a part. A digital twin also integrates historical data from past machine usage to factor into its digital model.

In various industrial sectors, twins are being used to optimize the operation and maintenance of physical assets, systems and manufacturing processes.[3] They are a formative technology for the Industrial Internet of Things, where physical objects can live and interact with other machines and people virtually.[4]
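To make the Wikipedia description above concrete, here is a rough sketch of the core idea — a software object that continuously updates its state from a stream of sensor readings. Every name and threshold in it is illustrative only; real digital-twin platforms use far richer physics and learned models:

```python
from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    """A toy digital twin of a pump: mirrors incoming sensor readings and
    keeps a smoothed estimate of its near real-time condition.
    All names and thresholds are illustrative, not from any real system."""
    vibration_limit: float = 5.0   # mm/s, assumed alarm threshold
    smoothing: float = 0.2         # weight given to each new reading
    vibration: float = 0.0         # smoothed vibration level
    history: list = field(default_factory=list)

    def ingest(self, reading: float) -> None:
        """Update the twin's state from one new sensor reading."""
        self.history.append(reading)
        # Exponentially weighted moving average: the twin tracks its
        # near real-time condition from the stream, as the excerpt describes.
        self.vibration = (1 - self.smoothing) * self.vibration + self.smoothing * reading

    def needs_maintenance(self) -> bool:
        return self.vibration > self.vibration_limit

twin = PumpTwin()
for r in [1.0, 1.2, 1.1, 6.0, 7.5, 8.0, 9.0, 8.5, 9.2, 9.8]:
    twin.ingest(r)
print(round(twin.vibration, 2), twin.needs_maintenance())  # → 7.02 True
```

The twin also retains its full reading history, echoing the excerpt’s point that historical usage data feeds back into the digital model.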

 

 

Disney to debut its first VR short next month — from techcrunch.com by Sarah Wells

Excerpt:

Walt Disney Animation Studio is set to debut its first VR short film, Cycles, this August in Vancouver, the Association for Computing Machinery announced today. The plan is for it to be a headliner at the ACM’s computer graphics conference (SIGGRAPH), joining other forms of VR, AR and MR entertainment in the conference’s designated Immersive Pavilion.

This film is a first for both Disney and its director, Jeff Gipson, who joined the animation team in 2013 to work as a lighting artist on films like Frozen, Zootopia and Moana. The objective of this film, Gipson said in the statement released by ACM, is to inspire a deep emotional connection with the story.

“We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story,” said Gipson.

 

 

 

 

 

Inside China’s Dystopian Dreams: A.I., Shame and Lots of Cameras — from nytimes.com by Paul Mozur

Excerpts:

ZHENGZHOU, China — In the Chinese city of Zhengzhou, a police officer wearing facial recognition glasses spotted a heroin smuggler at a train station.

In Qingdao, a city famous for its German colonial heritage, cameras powered by artificial intelligence helped the police snatch two dozen criminal suspects in the midst of a big annual beer festival.

In Wuhu, a fugitive murder suspect was identified by a camera as he bought food from a street vendor.

With millions of cameras and billions of lines of code, China is building a high-tech authoritarian future. Beijing is embracing technologies like facial recognition and artificial intelligence to identify and track 1.4 billion people. It wants to assemble a vast and unprecedented national surveillance system, with crucial help from its thriving technology industry.

 

In some cities, cameras scan train stations for China’s most wanted. Billboard-size displays show the faces of jaywalkers and list the names of people who don’t pay their debts. Facial recognition scanners guard the entrances to housing complexes. Already, China has an estimated 200 million surveillance cameras — four times as many as the United States.

Such efforts supplement other systems that track internet use and communications, hotel stays, train and plane trips and even car travel in some places.

 

 


 

From DSC:
A veeeeery slippery slope here. The usage of this technology starts out as looking for criminals, but then what’s next? Jail time for people who disagree w/ a government official’s perspective on something? Persecution for people seen coming out of a certain place of worship?  

Very troubling stuff here….

 

 

 

State of AI — from stateof.ai

Excerpt:

In this report, we set out to capture a snapshot of the exponential progress in AI with a focus on developments in the past 12 months. Consider this report as a compilation of the most interesting things we’ve seen that seeks to trigger informed conversation about the state of AI and its implication for the future.

We consider the following key dimensions in our report:

  • Research: Technology breakthroughs and their capabilities.
  • Talent: Supply, demand and concentration of talent working in the field.
  • Industry: Large platforms, financings and areas of application for AI-driven innovation today and tomorrow.
  • Politics: Public opinion of AI, economic implications and the emerging geopolitics of AI.

 

definitions of terms involved in AI


 

hard to say how AI is impacting jobs yet -- but here are 2 perspectives

 

 

There’s nothing artificial about how AI is changing the workplace — from forbes.com by Eric Yuan

Excerpt:

As I write this, AI has already begun to make video meetings even better. You no longer have to spend time entering codes or clicking buttons to launch a meeting. Instead, with voice-based AI, video conference users can start, join or end a meeting by simply speaking a command (think about how you interact with Alexa).

Voice-to-text transcription, another artificial intelligence feature offered by Otter Voice Meeting Notes (from AISense, a Zoom partner), Voicefox and others, can take notes during video meetings, leaving you and your team free to concentrate on what’s being said or shown. AI-based voice-to-text transcription can identify each speaker in the meeting and save you time by letting you skim the transcript, search and analyze it for certain meeting segments or words, then jump to those mentions in the script. Over 65% of respondents from the Zoom survey said they think AI will save them at least one hour a week of busy work, with many claiming it will save them one to five hours a week.

 

 

 

AI can now ‘listen’ to machines to tell if they’re breaking down — by Rebecca Campbell

Excerpt:

Sound is everywhere, even when you can’t hear it.

It is this noiseless sound, though, that says a lot about how machines function.

Helsinki-based Noiseless Acoustics and Amsterdam-based OneWatt are relying on artificial intelligence (AI) to better understand the sound patterns of troubled machines. Through AI they are enabling faster and easier problem detection.

 

Making sound visible even when it can’t be heard. With the aid of non-invasive sensors, machine learning algorithms, and predictive maintenance solutions, failing components can be recognized at an early stage before they become a major issue.
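The underlying idea — learn what a healthy machine sounds like, then flag readings that stray from that baseline — can be sketched very simply. This toy z-score detector is a stand-in of my own, not the actual method used by Noiseless Acoustics or OneWatt, which rely on learned spectral models:

```python
import statistics

def sound_anomalies(samples, baseline_n=20, z_threshold=3.0):
    """Flag indices where a reading deviates strongly from the machine's
    'healthy' baseline (the first baseline_n samples)."""
    baseline = samples[:baseline_n]
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    # Anything more than z_threshold standard deviations from the
    # healthy mean is treated as a sign of trouble.
    return [i for i, x in enumerate(samples[baseline_n:], start=baseline_n)
            if abs(x - mu) > z_threshold * sigma]

# 20 'healthy' acoustic-amplitude readings, then a normal one and a spike.
healthy = [1.0, 1.04, 0.97, 1.02, 0.99, 1.01, 0.98, 1.03, 1.0, 0.96,
           1.02, 0.99, 1.01, 1.0, 0.98, 1.03, 0.97, 1.02, 1.0, 0.99]
print(sound_anomalies(healthy + [1.01, 2.4]))  # → [21], the 2.4 spike
```

The appeal of this family of techniques is that the detector needs no labeled failure data up front — only a recording of the machine running normally.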

 

 

 

Chinese university uses facial recognition for campus entry — from cr80news.com by Andrew Hudson

Excerpt:

A number of higher education institutions in China have deployed biometric solutions for access and payments in recent months, and adding to the list is Peking University. The university has now installed facial recognition readers at perimeter access gates to control access to its Beijing campus.

As reported by the South China Morning Post, anyone attempting to enter through the southwestern gate of the university will no longer have to provide a student ID card. Starting this month, students will present their faces to a camera as part of a trial run of the system ahead of full-scale deployment.

From DSC:
I’m not sure I like this one at all — and the direction that this is going in. 

 

 

 

Will We Use Big Data to Solve Big Problems? Why Emerging Technology is at a Crossroads — from blog.hubspot.com by Justin Lee

Excerpt:

How can we get smarter about machine learning?
As I said earlier, we’ve reached an important crossroads. Will we use new technologies to improve life for everyone, or to fuel the agendas of powerful people and organizations?

I certainly hope it’s the former. Few of us will run for president or lead a social media empire, but we can all help to move the needle.

Consume information with a critical eye.
Most people won’t stop using Facebook, Google, or social media platforms, so proceed with a healthy dose of skepticism. Remember that the internet can never be objective. Ask questions and come to your own conclusions.

Get your headlines from professional journalists.
Seek credible outlets for news about local, national and world events. I rely on the New York Times and the Wall Street Journal. You can pick your own sources, but don’t trust that the “article” your Aunt Marge just posted on Facebook is legit.

 

 

 

 

 

 

Below are some excerpted slides from Mary Meeker’s presentation…


Also see:

  • 20 important takeaways for learning world from Mary Meeker’s brilliant tech trends – from donaldclarkplanb.blogspot.com by Donald Clark
    Excerpt:
Mary Meeker’s slide deck has a reputation of being the Delphic Oracle of tech. But, at 294 slides it’s a lot to take in. Don’t worry, I’ve been through them all. It has tons of economic stuff that is of marginal interest to education and training, but there’s plenty to get our teeth into. We’re not immune to tech trends, indeed we tend to follow in lock-step, just a bit later than everyone else. Among the data are lots of fascinating insights that point the way forward in terms of what we’re likely to be doing over the next decade. So here’s a really quick, top-end summary for folk in the learning game.

 

“Educational content usage online is ramping fast” with over 1 billion daily educational videos watched. There is evidence that use of the Internet for informal and formal learning is taking off.

 

 

 

 

 

 

10 Big Takeaways From Mary Meeker’s Widely-Read Internet Report — from fortune.com by  Leena Rao

 

 

 

 

The scary amount that college will cost in the future — from cnbc.com by Annie Nova

Excerpt:

Think college is expensive now? Then new parents will probably want to take a seat for this news.

In 2036, just 18 years from now, four years at a private university will be around $303,000, up from $167,000 today.

To get a degree at a public university you’ll need about $184,000, compared with $101,000 now.

These forecasts were provided by Wealthfront, an automated investment platform that offers college saving options. It uses Department of Education data on the current cost of schools along with expected annual inflation to come up with its projections.
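It’s worth asking what annual growth rate Wealthfront’s figures imply. A quick back-of-the-envelope check (my own arithmetic, not Wealthfront’s model) solves the compound-growth formula for the rate:

```python
def implied_annual_rate(cost_now, cost_future, years):
    """Annual growth rate implied by a future-cost projection:
    cost_future = cost_now * (1 + r) ** years, solved for r."""
    return (cost_future / cost_now) ** (1 / years) - 1

# Wealthfront's figures from the excerpt: four-year cost today vs. in 2036.
private = implied_annual_rate(167_000, 303_000, 18)
public = implied_annual_rate(101_000, 184_000, 18)
print(f"private: {private:.1%}, public: {public:.1%}")  # → private: 3.4%, public: 3.4%
```

Both projections work out to roughly 3.4% annual cost growth — modest-sounding each year, but enough to nearly double the sticker price over 18 years of compounding.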

 

Excerpted graphic:

 

From DSC:
We had better be at the end of the line of thinking that says these tuition hikes can continue. It’s not ok. More and more people will be shut out by this kind of societal gatekeeper. The ever-increasing cost of obtaining a degree has become a matter of social justice for me. Other solutions are needed. The 800-pound gorilla of debt that’s already being loaded onto more and more of our graduates will impact them for years…even for decades in many cases.

It’s my hope that a variety of technologies will make learning more affordable, yet still provide a high quality of education. In fact, I’m hopeful that the personalization/customization of learning will take some major steps forward in the very near future. We will still need and want solid teachers, professors, and trainers, but I’m hopeful that those folks will be aided by the heavy lifting that will be done by some powerful tools/technologies that will be aimed at helping people learn and grow…providing lifelong learners with more choice, more control.

I love the physical campus as much as anyone, and I hope that all students can have that experience if they want it. But I’ve seen and worked with the high costs of building and maintaining physical spaces — maintaining our learning spaces, dorms, libraries, gyms, etc. is very expensive.

I see streams of content becoming more prevalent in the future — especially for lifelong learners who need to reinvent themselves in order to stay marketable. We will be able to subscribe and unsubscribe to curated streams of content that we want to learn more about. For example, today, that could involve RSS feeds and Feedly (to aggregate those feeds). I see us using micro-learning to help us encode information and then practice recalling it (i.e., spaced practice), to help us stop or lessen the forgetting curves we all experience, to help us sort information into things we know and things that we need more assistance on (while providing links to resources that will help us obtain better mastery of the subject(s)).
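The spaced-practice idea mentioned above has a well-known algorithmic core: each successful recall stretches the next review interval, pushing back the forgetting curve. Here is a deliberately simplified Leitner/SM-2-flavored sketch (the ease factor of 2.5 is the conventional SM-2 starting value; everything else is my simplification, not the full algorithm):

```python
import math

def next_interval(previous_interval_days, recalled, ease=2.5):
    """Return the number of days until the next review of an item.
    A successful recall stretches the interval; a miss resets it to 1 day."""
    if not recalled:
        return 1  # start over: review again tomorrow
    return max(1, math.ceil(previous_interval_days * ease))

# A learner's review history for one item: three hits, a miss, a hit.
interval = 1
schedule = []
for recalled in [True, True, True, False, True]:
    interval = next_interval(interval, recalled)
    schedule.append(interval)
print(schedule)  # → [3, 8, 20, 1, 3]
```

A micro-learning tool built on this would quiz you just before each interval expires — exactly the “stop or lessen the forgetting curves” behavior described above — while routing the items you miss back into short, frequent review.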

 

 

Educause Releases 2018 Horizon Report Preview — from campustechnology.com by Rhea Kelly

Excerpt:

After acquiring the rights to the New Media Consortium’s Horizon project earlier this year, Educause has now published a preview of the 2018 Higher Education Edition of the Horizon Report — research that was in progress at the time of NMC’s sudden dissolution. The report covers the key technology trends, challenges and developments expected to impact higher ed in the short-, mid- and long-term future.

 

Also see:

 

 

 

From DSC regarding Virtual Reality-based apps:
If one can remotely select/change their seat at a game or change seats/views at a concert…how soon before we can do this with learning-related spaces/scenes/lectures/seminars/Active Learning Classrooms (ALCs)/stage productions (drama) and more?

Talk about getting someone’s attention and engaging them!

 

 

Excerpt:

(MAY 2, 2018) MelodyVR, the world’s first dedicated virtual reality music platform that enables fans to experience music performances in a revolutionary new way, is now available.

The revolutionary MelodyVR app offers music fans an incredible selection of immersive performances from today’s biggest artists. Fans are transported all over the world to sold-out stadium shows, far-flung festivals and exclusive VIP sessions, and experience the music they love.

What MelodyVR delivers is a unique and world-class set of original experiences, created with multiple vantage points, to give fans complete control over what they see and where they stand at a performance. By selecting different Jump Spots, MelodyVR users can choose to be in the front row, deep in the crowd, or up-close-and-personal with the band on stage.

 

See their How it Works page.

 

 

With standalone VR headsets like the Oculus Go now available at an extremely accessible price point ($199), the already vibrant VR market is set to grow exponentially over the coming years. Current market forecasts suggest over 350 million users by 2021 and last year saw $3 billion invested in virtual and alternative reality.

 

 

 

 


© 2018 | Daniel Christian