Artificial intelligence will transform universities. Here’s how. — from weforum.org by Mark Dodgson & David Gann

Excerpt:

The most innovative AI breakthroughs, and the companies that promote them – such as DeepMind, Magic Pony, Ayasdi, Wolfram Alpha and Improbable – have their origins in universities. Now AI will transform universities.

We believe AI is a new scientific infrastructure for research and learning that universities will need to embrace and lead; otherwise they will become increasingly irrelevant and eventually redundant.

Through their own brilliant discoveries, universities have sown the seeds of their own disruption. How they respond to this AI revolution will profoundly reshape science, innovation, education – and society itself.

As AI gets more powerful, it will not only combine knowledge and data as instructed, but will search for combinations autonomously. It can also assist collaboration between universities and external parties, such as between medical research and clinical practice in the health sector.

The implications of AI for university research extend beyond science and technology.

When it comes to AI in teaching and learning, many of the more routine (and least rewarding) academic tasks for lecturers, such as grading assignments, can be automated. Chatbots, intelligent agents using natural language, are being developed by universities such as the Technical University of Berlin; these will answer questions from students to help them plan their course of study.
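The course-planning chatbots mentioned above typically start from something much simpler than full natural-language understanding. As a hedged illustration only (the FAQ entries, keywords, and answers below are invented, not TU Berlin's actual system), here is a minimal keyword-overlap question answerer:

```python
# Minimal keyword-overlap FAQ bot (invented example data).
import re

FAQ = {
    ("deadline", "register", "registration"): "Course registration closes two weeks after the semester starts.",
    ("prerequisite", "prerequisites", "requirements"): "Check the module handbook for each course's prerequisites.",
    ("credits", "ects", "workload"): "A full-time semester is typically 30 ECTS credits.",
}

def answer(question: str) -> str:
    """Return the FAQ reply whose keywords best overlap the question's words."""
    words = set(re.findall(r"[a-z]+", question.lower()))
    best_reply, best_overlap = None, 0
    for keywords, reply in FAQ.items():
        overlap = len(words & set(keywords))
        if overlap > best_overlap:
            best_reply, best_overlap = reply, overlap
    return best_reply or "Sorry, I don't know that one. Please contact the study advisory office."

print(answer("What is the registration deadline?"))
```

A production chatbot would replace the keyword overlap with a trained intent classifier, but the request-match-respond loop is the same shape.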

Virtual assistants can tutor and guide more personalized learning. As part of its Open Learning Initiative (OLI), Carnegie Mellon University has been working on AI-based cognitive tutors for a number of years. It found that its OLI statistics course, run with minimal instructor contact, resulted in comparable learning outcomes for students with fewer hours of study. In one course at the Georgia Institute of Technology, students could not tell the difference between feedback from a human being and a bot.

Also see:

Digital audio assistants in teaching and learning — from blog.blackboard.com by Szymon Machajewski

Excerpts:

I built an Amazon Alexa skill called Introduction to Computing Flashcards. Using the skill in the Amazon Alexa app, students listen to Alexa and then answer questions. Alexa helps students prepare for an exam by speaking definitions and waiting for their identification. In addition to quizzing the student, Alexa keeps track of correct answers. If a student answers five questions correctly, Alexa shares a game code, which is worth class experience points in the course’s My Game gamification app.

Certainly, exam preparation apps are one way to use digital assistants in education. As developing and publishing Amazon Alexa skills becomes easier, faculty will be able to produce such skills as easily as they now create PowerPoint presentations. Given the basic code available through Amazon’s tutorials, it takes about 20 minutes to create a new exam preparation app; a basic voice-experience Alexa skill can take as little as five minutes to complete.
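The quiz-and-reward flow Machajewski describes (speak a definition, check the answer, count correct responses, release a game code after five) can be sketched independently of the Alexa plumbing. The flashcard terms, threshold, and game-code value below are invented placeholders; a real skill would wrap this logic in Alexa's intent handlers:

```python
# Toy sketch of flashcard-quiz session logic: quiz definitions, count
# correct answers, release a game code once the threshold is reached.
import random

FLASHCARDS = {
    "a named storage location in a program": "variable",
    "a sequence of instructions that can be reused by name": "function",
    "a data structure holding an ordered collection of items": "list",
}

class FlashcardSession:
    def __init__(self, game_code="GAME-1234", needed=5):
        self.correct = 0
        self.needed = needed
        self.game_code = game_code
        self.current = None  # the definition currently being quizzed

    def next_question(self) -> str:
        self.current = random.choice(list(FLASHCARDS))
        return f"What term means: {self.current}?"

    def check_answer(self, spoken: str) -> str:
        if spoken.strip().lower() == FLASHCARDS[self.current]:
            self.correct += 1
            if self.correct >= self.needed:
                return f"Correct! You earned a game code: {self.game_code}"
            return f"Correct! {self.correct} of {self.needed}."
        return f"Not quite. The answer was {FLASHCARDS[self.current]}."
```

In an actual skill, `next_question` and `check_answer` would be called from the skill's launch and answer-intent handlers, with the tally stored in session attributes.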

Universities can publish their campus news through the Alexa Flash Briefing. This type of a skill can publish news, success stories, and other events associated with the campus.

If you are a faculty member, how can you develop your first Amazon Alexa skill? You can use any of the tutorials already available. You can also participate in an Amazon Alexa classroom training provided by Alexa Dev Days. It is possible that schools or maker spaces near you offer in-person developer sessions. You can use meetup.com to track these opportunities.

From DSC:
I’ve been thinking about Applicant Tracking Systems (ATSs) for a while now, but the article below made me revisit my reflections on them. (By the way, my thoughts below are not meant to be a slam on Google. I like Google and I use their tools daily.) I’ve included a few items below; I had seen other articles and vendors’ products focused specifically on ATSs, but I couldn’t locate them all.

How Google’s AI-Powered Job Search Will Impact Companies And Job Seekers — from forbes.com by Forbes Coaches Council

Excerpt:

In mid-June, Google announced the implementation of an AI-powered search function aimed at connecting job seekers with jobs by sorting through posted recruitment information. The system allows users to search for basic phrases, such as “jobs near me,” or perform searches for industry-specific keywords. The search results can include reviews from Glassdoor or other companies, along with the details of what skills the hiring company is looking to acquire.

As this is a relatively new development, what the system will mean is still an open question. To help, members from the Forbes Coaches Council offer their analysis on how the search system will impact candidates or companies. Here’s what they said…

5. Expect competition to increase.
Google jumping into the job search market may make it easier than ever to apply for a role online. For companies, this could further tax the already-strained ATS system and, unless fixed, could mean many more resumes falling into that “black hole.” For candidates, competition might be steeper than ever, which means networking will be even more important to job search success. – Virginia Franco

10. Understanding keywords and trending topics will be essential.
Since Google’s AI is based on crowd-gathered metrics, understanding keywords and trending topics is essential for both employers and candidates. Standing out from the crowd or getting relevant results will be determined by how well you speak the expected language of the AI. Optimizing for the search engine’s results pages will make or break your search for a job or candidate. – Maurice Evans, IGROWyourBiz, Inc

Also see:

In Unilever’s radical hiring experiment, resumes are out, algorithms are in — from foxbusiness.com by Kelsey Gee 

Excerpt:

Before then, 21-year-old Ms. Jaffer had filled out a job application, played a set of online games and submitted videos of herself responding to questions about how she’d tackle challenges of the job. The reason she found herself in front of a hiring manager? A series of algorithms recommended her.

The Future of HR: Is it Dying? — from hrtechnologist.com by Rhucha Kulkarni

Excerpt (emphasis DSC):

The debate is on as to whether man or machine will win the race, as they are pitted against each other in every walk of life. Experts are already worried about the social disruption that is inevitable as artificial intelligence (AI)-led robots take over the jobs of human beings, leaving them without livelihoods. The same is believed to be happening to the HR profession, says a report by CareerBuilder. HR jobs are under threat, like all other jobs out there, as we can expect certain roles in talent acquisition, talent management, and mainstream business to be automated over the next 10 years. To delve deeper into this imminent problem, CareerBuilder carried out a study of 719 HR professionals in the private sector, looking specifically at the rate of adoption of emerging technologies in HR and how HR professionals perceive them.

The change is happening for real, though different companies are adopting technologies at varied paces. Most companies are turning to the new-age technologies to help carry out talent acquisition and management tasks that are time-consuming and labor-intensive.

From DSC:
Are you aware that if you apply for a job at many organizations nowadays, your resume stands a significant chance of never making it in front of a human reviewer’s eyes? Were you aware that an Applicant Tracking System (ATS) will likely siphon off and filter out your resume unless it contains exactly the right keywords, mentioned the optimal number of times?

And were you aware that many advisors assert you should use a one-page resume (two pages at most)? Well, assuming you have to edit heavily to get down to one or two pages, how does that editing help you get past the ATSs out there? When you significantly reduce your resume’s length, you cut out many of the words the ATS may be scanning for. (Advisors often recommend creating a Wordle from the job description to guess the likely keywords. But you still don’t know which exact keywords the ATS will look for in your specific application, or how many times to use them; numerous words can appear at similar sizes in the resulting Wordle graphic. So is that one-to-two-page resume helping you or hurting you, when you can only submit one resume per position?)
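The keyword guesswork described above can at least be made concrete. The sketch below, with an invented posting and stopword list, mimics what a Wordle visualizes: rank a job description's words by frequency, then count how often a resume uses the top candidates. A real ATS's matching rules remain opaque; this only illustrates the mechanics:

```python
# Frequency-based keyword guessing (what a Wordle visualizes) plus a
# resume hit count. The posting text and stopword list are invented.
import re
from collections import Counter

STOPWORDS = {"the", "and", "a", "to", "of", "in", "for", "with",
             "our", "you", "will", "seeking", "required"}

def top_keywords(job_description, n=5):
    """Rank the posting's non-stopword terms by frequency."""
    words = [w for w in re.findall(r"[a-z]+", job_description.lower())
             if w not in STOPWORDS and len(w) > 2]
    return [w for w, _ in Counter(words).most_common(n)]

def keyword_hits(resume, keywords):
    """Count how often each candidate keyword appears in the resume."""
    counts = Counter(re.findall(r"[a-z]+", resume.lower()))
    return {k: counts[k] for k in keywords}

posting = ("Seeking a Python developer. Python experience required. "
           "Python, SQL, and testing skills.")
print(top_keywords(posting))
print(keyword_hits("I use Python and SQL daily.", top_keywords(posting)))
```

Notice the dilemma the sketch makes visible: trimming a resume to one page removes words, and every removed word is a potential zero in that hit-count dictionary.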

Vendors are hailing these ATSs as major productivity boosters for HR departments…and that might be true in some cases. But my question is: at what cost?

At this point in time, I still believe that humans are better than software and algorithms at making judgment calls. Perhaps I’m giving hiring managers too much credit, but I’d rather have a human being make the call. I want a pair of human eyes to scan my resume, not a (potentially) narrowly defined algorithm. A human being might see transferable skills better than a piece of code can.

Just so you know: in light of these keyword-based means of passing through the first layer of filtering, people are now playing games with their resumes and are often stretching the truth — if not outright lying:

85 Percent of Job Applicants Lie on Resumes. Here’s How to Spot a Dishonest Candidate — from inc.com by J.T. O’Donnell
A new study shows huge increase in lies on job applications.

Excerpt (emphasis DSC):

Employer Applicant Tracking Systems Expect an Exact Match
Most companies use some form of applicant tracking system (ATS) to take in résumés, sort through them, and narrow down the applicant pool. With the average job posting getting more than 100 applicants, recruiters don’t want to go bleary-eyed sorting through them. Instead, they let the ATS do the dirty work by telling it to pass along only the résumés that match their specific requirements for things like college degrees, years of experience, and salary expectations. The result? Job seekers have gotten wise to the finicky nature of the technology and are lying on their résumés and applications in hopes of making the cut.
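The "exact match" screening the excerpt describes can be pictured as a few hard filters over structured fields. The field names and thresholds below are invented for illustration; real ATS products vary widely:

```python
# Sketch of hard-requirement ATS screening: only applicants matching the
# degree, experience, and salary criteria are passed along. Field names
# and thresholds are invented examples.
def passes_screen(applicant: dict, requirements: dict) -> bool:
    return (
        applicant.get("degree") in requirements["accepted_degrees"]
        and applicant.get("years_experience", 0) >= requirements["min_years"]
        and applicant.get("salary_expectation", 0) <= requirements["max_salary"]
    )

reqs = {"accepted_degrees": {"BS", "MS"}, "min_years": 3, "max_salary": 90000}
applicants = [
    {"name": "A", "degree": "BS", "years_experience": 5, "salary_expectation": 85000},
    {"name": "B", "degree": "BA", "years_experience": 10, "salary_expectation": 70000},
]
shortlist = [a["name"] for a in applicants if passes_screen(a, reqs)]
print(shortlist)  # B is filtered out on degree alone, despite 10 years of experience
```

The second applicant illustrates the "black hole": a single non-matching field eliminates a candidate a human might have judged a strong transferable fit, and it is exactly this rigidity that tempts applicants to bend the truth.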

I don’t see this as being very helpful. But perhaps that’s because I don’t like playing games with people and/or with other organizations. I’m not a game player. What you see is what you get. I’ll be honest and transparent about what I can — and can’t — deliver.

But students, you should know that these systems are in place. Those of us in higher education should know about them as well, since many of us are being negatively impacted by the current landscape within higher education.

Augmented Reality Technology: A student creates the closest thing yet to a magic ring — from forbes.com by Kevin Murnane

Excerpt:

Nat Martin set himself the problem of designing a control mechanism that can be used unobtrusively to meld AR displays with the user’s real-world environment. His solution was a controller in the shape of a ring worn on the user’s finger. He calls it Scroll. It uses the ARKit software platform and contains an Arduino circuit board, a capacitive sensor, a gyroscope, an accelerometer, and a SoftPot potentiometer. Scroll works with any AR device that supports the Unity game engine, such as Google Cardboard or Microsoft’s HoloLens.

Also see:

Scroll from Nat on Vimeo.

Addendum on 8/15/17:

New iOS 11 ARKit Demo Shows Off Drawing With Fingers In Augmented Reality [Video] — from redmondpie.com by Oliver Haslam

Excerpt:

When Apple releases iOS 11 to the public next month, it will also release ARKit for the first time. The framework, designed to make bringing augmented reality to iOS a reality, debuted during the opening keynote of WWDC 2017 when Apple announced iOS 11, and ever since then we have been seeing new concepts and demos released by developers.

Those developers have given us a glimpse of what we can expect when apps taking advantage of ARKit start to ship alongside iOS 11, and the latest of those is a demonstration in which someone’s finger is used to draw on a notepad.

How SLAM technology is redrawing augmented reality’s battle lines — from venturebeat.com by Mojtaba Tabatabaie

Excerpt (emphasis DSC):

In early June, Apple introduced its first attempt to enter the AR/VR space with ARKit. What makes ARKit stand out for Apple is a technology called SLAM (Simultaneous Localization And Mapping). Every tech giant — especially Apple, Google, and Facebook — is investing heavily in SLAM technology, and whichever takes best advantage of SLAM tech will likely end up on top.

SLAM is a computer vision technique that captures visual data from the physical world in the shape of points, building an understanding for the machine. SLAM makes it possible for machines to “have an eye and understand” what’s around them through visual input. What the machine sees with SLAM technology from a simple scene looks like the photo above, for example.

Using these points, machines can build an understanding of their surroundings. This data also helps AR developers like myself create much more interactive and realistic experiences. This understanding can be used in different scenarios, such as robotics, self-driving cars, AI, and of course augmented reality.

The simplest form of understanding from this technology is recognizing walls, barriers, and floors. Right now most AR SLAM technologies, like ARKit, only use floor recognition and position tracking to place AR objects around you, so they don’t actually know what’s going on in your environment well enough to react to it correctly. More advanced SLAM technologies, like Google Tango, can create a mesh of your environment, so the machine can not only tell you where the floor is but also identify walls and objects, allowing everything around you to become an element to interact with.
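The floor recognition described above can be illustrated with a toy computation. Given the sparse cloud of 3D points a SLAM system extracts from camera frames, the floor can be estimated as the dominant low horizontal layer. The synthetic point data and simple height-binning below are only a sketch of the idea; systems like ARKit fit planes far more robustly:

```python
# Estimate the floor from a synthetic SLAM-style point cloud by binning
# point heights (y) and taking the lowest well-populated bin.
from collections import Counter

def estimate_floor_height(points, bin_size=0.1, min_points=3):
    """Return the height of the lowest bin holding at least min_points points."""
    bins = Counter(round(y / bin_size) for (_x, y, _z) in points)
    populated = [b for b, count in bins.items() if count >= min_points]
    return min(populated) * bin_size if populated else None

# Many points near y=0.0 (the floor), plus two on a table top at y=0.7.
cloud = [(x * 0.3, 0.02, z * 0.3) for x in range(4) for z in range(3)]
cloud += [(1.0, 0.7, 1.0), (1.1, 0.7, 0.9)]
print(estimate_floor_height(cloud))  # the sparse table does not qualify as a floor
```

The same binning idea, extended from heights to full plane fitting, is how floor-only systems place virtual objects; mesh-building systems like Tango go further and reconstruct vertical surfaces too.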

The company with the most complete SLAM database will likely be the winner. This database will allow these giants to have, metaphorically, an eye on the world: Facebook could tag and know the location of your photo just by analyzing the image, or Google could place ads and virtual billboards around you by analyzing the camera feed from your smart glasses. Your self-driving car could navigate itself with nothing more than visual data.

2017 Ed Tech Trends: The Halfway Point — from campustechnology.com by Rhea Kelly
Four higher ed IT leaders weigh in on the current state of education technology and what’s ahead.

This article includes perspectives shared by the following four IT leaders:

  • Susan Aldridge, Senior Vice President for Online Learning, Drexel University (PA); President, Drexel University Online
  • Daniel Christian, Adjunct Faculty Member, Calvin College
  • Marci Powell, CEO/President, Marci Powell & Associates; Chair Emerita and Past President, United States Distance Learning Association
  • Phil Ventimiglia, Chief Innovation Officer, Georgia State University

Also see:

Making the future work for everyone — from blog.google by Jacquelline Fuller

Excerpt:

Help ensure training is as effective and as wide-reaching as possible.
Millions are spent each year on work skills and technical training programs, but there isn’t much visibility into how these programs compare, or if the skills being taught truly match what will be needed in the future. So some of our funding will go into research to better understand which trainings will be most effective in getting the most people the jobs of the future. Our grantee Social Finance is looking at which youth training programs most effectively use contributions from trainees, governments and future employers to give people the best chance of success.

Helping prepare for the future of work

Excerpt (emphasis DSC):

The way we work is changing. As new technologies continue to unfold in the workplace, more than a third of jobs are likely to require skills that are uncommon in today’s workforce. Workers are increasingly working independently. Demographic changes and shifts in labor participation in developed countries will mean future generations will find new ways to sustain economic growth. These changes create opportunities to think about how work can continue to be a source of not just income, but purpose and meaning for individuals and communities.

Technology can help seize these opportunities. We recently launched Google for Jobs, which is designed to help better connect people to jobs, and today we’re announcing Google.org’s $50 million commitment to help people prepare for the changing nature of work. We’ll support nonprofits who are taking innovative approaches to tackling this challenge in three ways: (1) training people with the skills they need, (2) connecting job-seekers with positions that match their skills and talents, and (3) supporting workers in low-wage employment. We’ll start by focusing on the US, Canada, Europe, and Australia, and hope to expand to other countries over time.

Campus Technology 2017: Virtual Reality Is More Than a New Medium — from edtechmagazine.com by Amy Burroughs
Experts weigh in on the future of VR in higher education.

Excerpts:

“It’s actually getting pretty exciting,” Georgieva said, noting that legacy companies and startups alike have projects in the works that will soon be on the market. Look for standalone, wireless VR headsets later this year from Facebook and Google.

“I think it’s going to be a universal device,” he said. “Eventually, we’ll end up with some kind of glasses where we can just dial in the level of immersion that we want.”

— Per Emory Craig, at the Campus Technology 2017 conference


“Doing VR for the sake of VR makes no sense whatsoever,” Craig said. “Ask when does it make sense to do this in VR? Does a sense of presence help this, or is it better suited to traditional media?”

Virtual Reality: The User Experience of Story — from blogs.adobe.com

Excerpt:

Solving the content problems in VR requires new skills that are only just starting to be developed and understood, skills that are quite different from traditional storytelling. VR is a nascent medium. One part story, one part experience. And while many of the concepts from film and theater can be used, storytelling through VR is not like making a movie or a play.

In VR, the user has to be guided through an experience of a story, which means many of the challenges in telling a VR story are closer to UX design than anything from film or theater.

Take the issue of frameless scenes. In a VR experience, there are no borders, and no guarantees where a user will look. Scenes must be designed to attract user attention, in order to guide them through the experience of a story.

Sound design, staging cues, lighting effects, and movement can all be used to draw a user’s attention.

However, it’s a fine balance between attraction and distraction.

“In VR, it’s easy to overwhelm the user. If you see a flashing light and in the background, you hear a sharp siren, and then something moves, you’ve given the user too many things to understand,” says Di Dang, User Experience Lead at POP, Seattle. “Be intentional and deliberate about how you grab audience attention.”

VR is a storytelling superpower. No other medium has quite the same potential to create empathy and drive human connection. Because viewers are, for all intents and purposes, living the experience, they walk away with that history coded into their memory banks — easily accessible for future responses.

Google’s latest VR experiment is teaching people how to make coffee — from techradar.com by Parker Wilhelm
All in a quest to see how effective learning in virtual reality is

Excerpt:

Teaching with a simulation is no new concept, but Google’s Daydream Labs wants to see exactly how useful virtual reality can be for teaching people practical skills.

In a recent experiment, Google ran a simulation of an interactive espresso machine in VR. From there, it had a group of people try their virtual hand at brewing a cup of java before being tasked with making the real thing.

Addendum on 7/26/17:

Google’s AI Guru Says That Great Artificial Intelligence Must Build on Neuroscience — from technologyreview.com by Jamie Condliffe
Inquisitiveness and imagination will be hard to create any other way.

Excerpt:

Demis Hassabis knows a thing or two about artificial intelligence: he founded the London-based AI startup DeepMind, which was purchased by Google for $650 million back in 2014. Since then, his company has wiped the floor with humans at the complex game of Go and begun taking steps toward crafting more general AIs.

But now he’s come out and said that he believes the only way for artificial intelligence to realize its true potential is with a dose of inspiration from human intellect.

Currently, most AI systems are based on layers of mathematics that are only loosely inspired by the way the human brain works. But different types of machine learning, such as speech recognition or identifying objects in an image, require different mathematical structures, and the resulting algorithms are only able to perform very specific tasks.

Building AI that can perform general tasks, rather than niche ones, is a long-held desire in the world of machine learning. But the truth is that expanding those specialized algorithms to something more versatile remains an incredibly difficult problem, in part because human traits like inquisitiveness, imagination, and memory don’t exist or are only in their infancy in the world of AI.


First, they say, better understanding of how the brain works will allow us to create new structures and algorithms for electronic intelligence. 

From DSC:
Glory to God! I find it very interesting to see how people and organizations — via very significant costs/investments — keep trying to mimic the most amazing thing — the human mind. Turns out, that’s not so easy:

But the truth is that expanding those specialized algorithms to something more versatile remains an incredibly difficult problem…

Therefore, some scripture comes to my own mind here:

Psalm 139:14 New International Version (NIV)

14 I praise you because I am fearfully and wonderfully made;
    your works are wonderful,
    I know that full well.

Job 12:13 (NIV)

13 “To God belong wisdom and power;
    counsel and understanding are his.

Psalm 104:24 (NIV)

24 How many are your works, Lord!
    In wisdom you made them all;
    the earth is full of your creatures.

Revelation 4:11 (NIV)

11 “You are worthy, our Lord and God,
    to receive glory and honor and power,
for you created all things,
    and by your will they were created
    and have their being.”

Yes, the LORD designed the human mind by His unfathomable and deep wisdom and understanding.

Glory to God!

Thanks be to God!

The Business of Artificial Intelligence — from hbr.org by Erik Brynjolfsson & Andrew McAfee

Excerpts (emphasis DSC):

The most important general-purpose technology of our era is artificial intelligence, particularly machine learning (ML) — that is, the machine’s ability to keep improving its performance without humans having to explain exactly how to accomplish all the tasks it’s given. Within just the past few years machine learning has become far more effective and widely available. We can now build systems that learn how to perform tasks on their own.

Why is this such a big deal? Two reasons. First, we humans know more than we can tell: We can’t explain exactly how we’re able to do a lot of things — from recognizing a face to making a smart move in the ancient Asian strategy game of Go. Prior to ML, this inability to articulate our own knowledge meant that we couldn’t automate many tasks. Now we can.

Second, ML systems are often excellent learners. They can achieve superhuman performance in a wide range of activities, including detecting fraud and diagnosing disease. Excellent digital learners are being deployed across the economy, and their impact will be profound.

In the sphere of business, AI is poised to have a transformational impact, on the scale of earlier general-purpose technologies. Although it is already in use in thousands of companies around the world, most big opportunities have not yet been tapped. The effects of AI will be magnified in the coming decade, as manufacturing, retailing, transportation, finance, health care, law, advertising, insurance, entertainment, education, and virtually every other industry transform their core processes and business models to take advantage of machine learning. The bottleneck now is in management, implementation, and business imagination.

The machine learns from examples, rather than being explicitly programmed for a particular outcome.
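That pull-quote is worth making concrete. A nearest-neighbor classifier is about the smallest possible example of learning from examples: no rule separating the classes is written anywhere in the code, so the labeled examples themselves effectively are the program. The data points and labels below are invented:

```python
# Nearest-neighbor classification: behavior comes from labeled examples,
# not from an explicitly programmed decision rule.
import math

def predict(examples, point):
    """Classify a point by the label of its closest training example."""
    return min(examples, key=lambda ex: math.dist(ex[0], point))[1]

training = [
    ((1.0, 1.0), "low risk"), ((1.2, 0.8), "low risk"),
    ((4.0, 4.2), "high risk"), ((3.8, 4.5), "high risk"),
]
print(predict(training, (1.1, 0.9)))   # near the first cluster
print(predict(training, (4.1, 4.0)))   # near the second cluster
```

Change the training examples and the classifier's behavior changes with them, with no code edits; that is the shift Brynjolfsson and McAfee are pointing at, scaled down to a few lines.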

Let’s start by exploring what AI is already doing and how quickly it is improving. The biggest advances have been in two broad areas: perception and cognition. …For instance, Aptonomy and Sanbot, makers respectively of drones and robots, are using improved vision systems to automate much of the work of security guards. 

Machine learning is driving changes at three levels: tasks and occupations, business processes, and business models. 

You may have noticed that Facebook and other apps now recognize many of your friends’ faces in posted photos and prompt you to tag them with their names.

Google is turning Street View imagery into pro-level landscape photographs using artificial intelligence — from businessinsider.com by Edoardo Maggio

Excerpt:

A new experiment from Google is turning imagery from the company’s Street View service into impressive digital photographs using nothing but artificial intelligence (AI).

Google is using machine learning algorithms to train a deep neural network to roam around places such as Canada’s and California’s national parks, look for potentially suitable landscape images, and then work on them with special post-processing techniques.

The idea is to “mimic the workflow of a professional photographer,” and to do so Google is relying on so-called generative adversarial networks (GAN), which essentially pit two neural networks against one another.

See also:

Using Deep Learning to Create Professional-Level Photographs — from research.googleblog.com by Hui Fang, Software Engineer, Machine Perception

© 2017 | Daniel Christian