From DSC:
After seeing the article entitled, “Scientists Are Turning Alexa into an Automated Lab Helper,” I began to wonder…might Alexa become a tool to periodically schedule and provide practice tests and distributed practice on content? In the future, will there be “learning bots” that a learner can employ to do such self-testing and/or distributed practice?
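
To make the idea concrete, the scheduling piece of such a “learning bot” could be as simple as an expanding-interval rule from the distributed-practice literature. Here is a rough sketch in Python — the doubling rule and all names are illustrative assumptions, not any product’s actual algorithm:

```python
from datetime import date, timedelta

# Hypothetical sketch: a "learning bot" schedules distributed practice
# by doubling the review interval after each successful self-test
# (a simplified expanding-interval rule).

def next_review(last_review: date, interval_days: int, passed: bool):
    """Return (next_date, new_interval) for one practice item."""
    if passed:
        new_interval = interval_days * 2   # space reviews further apart
    else:
        new_interval = 1                   # missed it: review tomorrow
    return last_review + timedelta(days=new_interval), new_interval

# Example: a learner keeps passing the same practice question.
today = date(2018, 3, 1)
interval = 1
schedule = []
for _ in range(4):
    today, interval = next_review(today, interval, passed=True)
    schedule.append((today.isoformat(), interval))

print(schedule)  # intervals grow 2, 4, 8, 16 days apart
```

A voice assistant would only need to sit on top of a loop like this: read the next due item aloud, grade the answer, and reschedule.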



From page 45 of the PDF available here:


Might Alexa be a tool to periodically schedule/provide practice tests & distributed practice on content?




Scientists Are Turning Alexa into an Automated Lab Helper — by Jamie Condliffe
Amazon’s voice-activated assistant follows a rich tradition of researchers using consumer tech in unintended ways to further their work.


Alexa, what’s the next step in my titration?

Probably not the first question you ask your smart assistant in the morning, but potentially the kind of query that scientists may soon be leveling at Amazon’s AI helper. Chemical & Engineering News reports that software developer James Rhodes—whose wife, DeLacy Rhodes, is a microbiologist—has created a skill for Alexa called Helix that lends a helping hand around the laboratory.

It makes sense. While most people might ask Alexa to check the news headlines, play music, or set a timer because our hands are a mess from cooking, scientists could look up melting points, pose simple calculations, or ask for an experimental procedure to be read aloud while their hands are gloved and in use.

For now, Helix is still a proof-of-concept. But you can sign up to try an early working version, and Rhodes has plans to extend its abilities…
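
Helix’s implementation isn’t public, but custom Alexa skills generally boil down to an endpoint (often an AWS Lambda function) that maps the intent in Alexa’s request JSON to a spoken response. A bare-bones sketch of that shape — the melting-point table and the intent/slot names (“LookupMeltingPoint”, “Compound”) are invented for illustration:

```python
# Minimal sketch of an Alexa-style intent handler in the shape of an
# AWS Lambda function. The data table and the intent/slot names are
# hypothetical; a real skill defines them in its interaction model.

MELTING_POINTS_C = {
    "sodium chloride": 801,
    "glucose": 146,
}

def speak(text):
    """Wrap text in the Alexa response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }

def lambda_handler(event, context=None):
    request = event["request"]
    if request["type"] == "IntentRequest" and \
            request["intent"]["name"] == "LookupMeltingPoint":
        compound = request["intent"]["slots"]["Compound"]["value"].lower()
        if compound in MELTING_POINTS_C:
            return speak(f"{compound} melts at {MELTING_POINTS_C[compound]} "
                         "degrees Celsius.")
        return speak(f"Sorry, I don't have data for {compound}.")
    return speak("Welcome to the lab helper. Ask me for a melting point.")

# Example request, shaped like what Alexa POSTs to the skill's endpoint:
event = {"request": {"type": "IntentRequest",
                     "intent": {"name": "LookupMeltingPoint",
                                "slots": {"Compound": {"value": "Glucose"}}}}}
print(lambda_handler(event)["response"]["outputSpeech"]["text"])
```

The appeal for gloved-hands lab work is that everything above the data table is boilerplate; the skill author mostly supplies lookups and procedures.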


Also see:




What is Artificial Intelligence, Machine Learning and Deep Learning — by Meenal Dhande







What is the difference between AI, machine learning and deep learning? — by Meenal Dhande


In the first part of this blog series, we gave you simple, elaborated definitions of artificial intelligence (AI), machine learning, and deep learning. This is the second part of the series; here we explain the difference between AI, machine learning, and deep learning.

You can think of artificial intelligence (AI), machine learning, and deep learning as a set of matryoshka dolls, also known as Russian nesting dolls. Deep learning is a subset of machine learning, which is a subset of AI.
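
The nesting can be made concrete with a toy example: a hand-coded rule is AI but not machine learning, while learning that same rule’s threshold from data is machine learning (deep learning would stack many learned layers). All data below is invented:

```python
# Toy illustration of the nesting: rule-based AI vs. machine learning.
# A hand-coded rule is AI but not ML; learning the same rule's
# threshold from labeled examples is ML.

def spam_rule(num_links):                 # AI: threshold chosen by a human
    return num_links > 3

def learn_threshold(examples):            # ML: threshold chosen from data
    """Pick the integer threshold that classifies the examples best."""
    best_t, best_hits = 0, -1
    for t in range(0, 11):
        hits = sum((links > t) == is_spam for links, is_spam in examples)
        if hits > best_hits:
            best_t, best_hits = t, hits
    return best_t

data = [(0, False), (1, False), (2, False), (5, True), (7, True), (9, True)]
print(learn_threshold(data))  # a learned threshold that separates the examples
```

Deep learning replaces the single hand-picked feature (link count) with many layers of learned features, but the subset relationship stays the same.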






Chatbot for College Students: 4 Chatbot Tips Perfect for College Students — by Zevik Farkash


1. Feed your chatbot with information your students don’t have.
Your institution’s website can be as elaborate as it gets, but if your students can’t find a piece of information on it, it’s as good as incomplete. Say, for example, you offer certain scholarships that students can voluntarily apply for, but the information on these scholarships is tucked away on a remote page that your students don’t access in their day-to-day use of your site.

So Amy, a new student, has no idea that there’s a scholarship that can potentially make her course 50% cheaper. She can scour your website for details when she finds the time. Or she can ask your university’s chatbot, “Where can I find information on your scholarships?”

And the chatbot can tell her, “Here’s a link to all our current scholarships.”

The best chatbots for colleges and universities tend to be programmed with even more detail, and can actually strike up a conversation by saying things like:

“Please give me the following details so I can pull out all the scholarships that apply to you.
“Which department are you in? (Please select one.)
“Which course are you enrolled in? (Please select one.)
“Which year of study are you in? (Please select one.)
“Thank you for the details! Here’s a list of all applicable scholarships. Please visit the links for detailed information and let me know if I can be of further assistance.”
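
Behind a dialogue like the one above, the logic can be as simple as collecting a few structured answers (“slots”) and filtering a scholarship table with them. A toy sketch with invented data, using only two of the three slots for brevity:

```python
# Toy sketch of the slot-filling flow above: collect structured answers,
# then filter a scholarship table. All scholarship data is invented.

SCHOLARSHIPS = [
    {"name": "STEM Merit Award", "department": "Engineering", "years": {1, 2}},
    {"name": "Dean's Grant", "department": "Engineering", "years": {3, 4}},
    {"name": "Arts Bursary", "department": "Fine Arts", "years": {1, 2, 3, 4}},
]

def applicable_scholarships(department, year):
    """Return scholarships matching the student's filled slots."""
    return [s["name"] for s in SCHOLARSHIPS
            if s["department"] == department and year in s["years"]]

print(applicable_scholarships("Engineering", 1))
```

The conversational polish sits on top; the “pull out all the scholarships that apply to you” step is just this filter.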

2. Let it answer all the “What do I do now?” questions.

3. Turn it into a campus guide.

4. Let it take care of paperwork.


From DSC:
This is the sort of thing that I was trying to get at last year at the NGLS 2017 Conference:





18 Disruptive Technology Trends For 2018 — by Rob Prevett


1. Mobile-first to AI-first
A major shift in business thinking has placed Artificial Intelligence at the very heart of business strategy. 2017 saw tech giants including Google and Microsoft focus on an “AI-first” strategy, leading the way for other major corporations to follow suit. Companies are demonstrating a willingness to use AI and related tools like machine learning to automate processes, reduce administrative tasks, and collect and organise data. Understanding vast amounts of information is vital in the age of mass data, and AI is proving to be a highly effective solution. Whilst AI has been vilified in the media as the enemy of jobs, many businesses have undergone a transformation in mentality, viewing AI as enhancing rather than threatening the human workforce.

7. Voice based virtual assistants become ubiquitous
The wide uptake of home-based virtual assistants like Alexa and Google Home has built confidence in conversational interfaces, familiarising consumers with a seamless way of interacting with tech. Amazon and Google have taken prime position between brand and customer, capitalising on conversational convenience. The further adoption of this technology will enhance personalised advertising and sales, creating a direct link between company and consumer.



5 Innovative Uses for Machine Learning
They’ll be coming into your life — at least your business life — sooner than you think.



Philosophers are building ethical algorithms to help control self-driving cars — by Olivia Goldhill



Tech’s Ethical ‘Dark Side’: Harvard, Stanford and Others Want to Address It — by Natasha Singer


PALO ALTO, Calif. — The medical profession has an ethic: First, do no harm.

Silicon Valley has an ethos: Build it first and ask for forgiveness later.

Now, in the wake of fake news and other troubles at tech companies, universities that helped produce some of Silicon Valley’s top technologists are hustling to bring a more medicine-like morality to computer science.

This semester, Harvard University and the Massachusetts Institute of Technology are jointly offering a new course on the ethics and regulation of artificial intelligence. The University of Texas at Austin just introduced a course titled “Ethical Foundations of Computer Science” — with the idea of eventually requiring it for all computer science majors.

And at Stanford University, the academic heart of the industry, three professors and a research fellow are developing a computer science ethics course for next year. They hope several hundred students will enroll.

The idea is to train the next generation of technologists and policymakers to consider the ramifications of innovations — like autonomous weapons or self-driving cars — before those products go on sale.





Where You’ll Find Virtual Reality Technology in 2018 — by Alec Kasper-Olson


The VR / AR / MR Breakdown
This year will see growth in a variety of virtual technologies and uses. There are differences and similarities between virtual, augmented, and mixed reality technologies. The technology is constantly evolving and even the terminology around it changes quickly, so you may hear variations on these terms.

Augmented reality is what was behind the Pokémon Go craze. Players could see game characters on their devices superimposed over images of their physical surroundings. Virtual features seemed to exist in the real world.

Mixed reality combines virtual features and real-life objects. So, in this way it includes AR but it also includes environments where real features seem to exist in a virtual world.

The folks over at Recode explain mixed reality this way:

In theory, mixed reality lets the user see the real world (like AR) while also seeing believable, virtual objects (like VR). And then it anchors those virtual objects to a point in real space, making it possible to treat them as “real,” at least from the perspective of the person who can see the MR experience.

And, virtual reality uses immersive technology to seemingly place a user into a simulated lifelike environment.

Where You’ll Find These New Realities
Education and research fields are at the forefront of VR and AR technologies, where an increasing number of students have access to tools. But higher education isn’t the only place you see this trend. The number of VR companies grew 250 percent between 2012 and 2017. Even the latest iPhones include augmented reality capabilities. Aside from the classroom and your pocket, here are some other places you’re likely to see VR and AR pop up in 2018.




Top AR apps that make learning fun


Here is a list of a few amazing Augmented Reality mobile apps for children:

  • Jigspace
  • Elements 4D
  • Arloon Plants
  • Math alive
  • PlanetAR Animals
  • FETCH! Lunch Rush
  • Quiver
  • Zoo Burst
  • PlanetAR Alphabets & Numbers

Here are a few of the VR input devices:

  • Controller Wands
  • Joysticks
  • Force Balls/Tracking Balls
  • Data Gloves
  • On-Device Control Buttons
  • Motion Platforms (Virtuix Omni)
  • Trackpads
  • Treadmills
  • Motion Trackers/Bodysuits




HTC VIVE and World Economic Forum Partner For The Future Of The “VR/AR For Impact” Initiative — by Matthew Gepp


VR/AR for Impact experiences shown this week at WEF 2018 include:

  • OrthoVR aims to increase the availability of well-fitting prosthetics in low-income countries by using Virtual Reality and 3D rapid prototyping tools to increase the capacity of clinical staff without reducing quality. VR allows current prosthetists and orthosists to leverage their hands-on and embodied skills within a digital environment.
  • The Extraordinary Honey Bee is designed to help deepen our understanding of the honey bee’s struggle and learn what is at stake for humanity due to the dying global population of the honey bee. Told from a bee’s perspective, The Extraordinary Honey Bee harnesses VR to inspire change in the next generation of honey bee conservationists.
  • The Blank Canvas: Hacking Nature is an episodic exploration of the frontiers of bioengineering as taught by the leading researchers within the field. Using advanced scientific visualization techniques, the Blank Canvas will demystify the cellular and molecular mechanisms that are being exploited to drive substantial leaps such as gene therapy.
  • LIFE (Life-saving Instruction For Emergencies) is a new mobile and VR platform developed by the University of Oxford that enables all types of health worker to manage medical emergencies. Through the use of personalized simulation training and advanced learning analytics, the LIFE platform offers the potential to dramatically extend access to life-saving knowledge in low-income countries.
  • Tree is a critically acclaimed virtual reality experience to immerse viewers in the tragic fate that befalls a rainforest tree. The experience brings to light the harrowing realities of deforestation, one of the largest contributors to global warming.
  • For the Amazonian Yawanawa, ‘medicine’ has the power to travel you in a vision to a place you have never been. Hushuhu, the first woman shaman of the Yawanawa, uses VR like medicine to open a portal to another way of knowing. AWAVENA is a collaboration between a community and an artist, melding technology and transcendent experience so that a vision can be shared, and a story told of a people ascending from the edge of extinction.




Everything You Need To Know About Virtual Reality Technology


Types of Virtual Reality Technology
We can categorize virtual reality technology according to the degree of immersion each type offers the user.

Non-immersive
Non-immersive simulations are the least immersive implementation of virtual reality technology. In this kind of simulation, only a subset of the user’s senses is replicated, allowing for continued awareness of the reality outside the VR simulation. A user enters 3D virtual environments through a portal or window, utilizing the standard HD monitors typically found on conventional desktop workstations.

Semi-immersive
In this kind of simulation, users experience a richer immersion, being partly but not fully involved in the virtual environment. Semi-immersive simulations are based on high-performance graphical computing, often coupled with large-screen projector systems or multiple TV projections to properly simulate the user’s visuals.

Fully immersive
Fully immersive simulations offer the most complete virtual reality experience: head-mounted displays and motion-sensing devices are used to stimulate all of the user’s senses. Users can experience a realistic virtual environment with a wide field of view, high resolution, increased refresh rates, and high-quality visualization through the HMD.







This Is What A Mixed Reality Hard Hat Looks Like — by Alice Bonasio
A Microsoft-endorsed hard hat solution lets construction workers use holograms on site.


These workers already routinely use technology such as tablets to access plans and data on site, but going from 2D to 3D at scale brings that to a whole new level. “Superimposing the digital model on the physical environment provides a clear understanding of the relations between the 3D design model and the actual work on a jobsite,” explained Olivier Pellegrin, BIM manager, GA Smart Building.

The application they are using is called Trimble Connect. It turns data into 3D holograms, which are then mapped out to scale onto the real-world environment. This gives workers an instant sense of where and how various elements will fit and exposes mistakes early on in the process.


Also see:

Trimble Connect for HoloLens is a mixed reality solution that improves building coordination by combining models from multiple stakeholders such as structural, mechanical and electrical trade partners. The solution provides for precise alignment of holographic data on a 1:1 scale on the job site, to review models in the context of the physical environment. Predefined views from Trimble Connect further simplify in-field use with quick and easy access to immersive visualizations of 3D data. Users can leverage mixed reality for training purposes and to compare plans against work completed. Advanced visualization further enables users to view assigned tasks and capture data with onsite measurement tools.

Trimble Connect for HoloLens is available now through the Microsoft Windows App Store. A free trial option is available enabling integration with HoloLens. Paid subscriptions support premium functionality allowing for precise on-site alignment and collaboration.

Trimble’s Hard Hat Solution for Microsoft HoloLens extends the benefits of HoloLens mixed reality into areas where increased safety requirements are mandated, such as construction sites, offshore facilities, and mining projects. The solution, which is ANSI-approved, integrates the HoloLens holographic computer with an industry-standard hard hat. Trimble’s Hard Hat Solution for HoloLens is expected to be available in the first quarter of 2018.


From DSC:
Combining voice recognition / Natural Language Processing (NLP) with Mixed Reality should provide some excellent, powerful user experiences. Doing so could also provide some real-time understanding as well as highlight potential issues in current designs. It will be interesting to watch this space develop. If there were an issue, wouldn’t it be great to remotely ask someone to update the design and then see the updated design in real-time? (Or might there be a way to make edits via one’s voice and/or with gestures?)

I could see where these types of technologies could come in handy when designing / enhancing learning spaces.




Web-Powered Augmented Reality: a Hands-On Tutorial — by Uri Shaked
A Guided Journey Into the Magical Worlds of ARCore, A-Frame, 3D Programming, and More!


There’s been a lot of cool stuff happening lately around Augmented Reality (AR), and since I love exploring and having fun with new technologies, I thought I would see what I could do with AR and the Web — and it turns out I was able to do quite a lot!

Most AR demos are with static objects, like showing how you can display a cool model on a table, but AR really begins to shine when you start adding in animations!

With animated AR, your models come to life, and you can then start telling a story with them.

Art.com adds augmented reality art-viewing to its iOS app — by Lucas Matney


If you’re in the market for some art in your house or apartment, Art.com will now let you use AR to put digital artwork up on your wall.

The company’s ArtView feature is one of the few augmented reality features that actually add a lot to the app they’re put in. With the ARKit-enabled tech, the artwork is accurately sized so you can get a perfect idea of how your next purchase could fit on your wall. The feature can be used for the two million pieces of art on the site and can be customized with different framing types.





Experience on Demand is a must-read VR book — by Ian Hamilton


Bailenson’s newest book, Experience on Demand, builds on that earlier work while focusing more clearly — even bluntly — on what we do and don’t know about how VR affects humans.

“The best way to use it responsibly is to be educated about what it is capable of, and to know how to use it — as a developer or a user — responsibly,” Bailenson wrote in the book.

Among the questions raised:

  • “How educationally effective are field trips in VR? What are the design principles that should guide these types of experiences?”
  • “How many individuals are not meeting their potential because they lack access to good instruction and learning tools?”
  • “When we consider that the subjects were made uncomfortable by the idea of administering fake electric shocks, what can we expect people will feel when they are engaging all sorts of fantasy violence and mayhem in virtual reality?”
  • “What is the effect of replacing social contact with virtual social contact over long periods of time?”
  • “How do we walk the line and leverage what is amazing about VR, without falling prey to the bad parts?”





From DSC:
Will Amazon get into delivering education/degrees? Is it working on a next generation learning platform that could highly disrupt the world of higher education? Hmmm…time will tell.

But Amazon has a way of getting into entirely new industries. From its roots as an online bookseller, it has branched off into numerous other arenas. It has the infrastructure, talent, and the deep pockets to bring about the next generation learning platform that I’ve been tracking for years. It is only one of a handful of companies that could pull this type of endeavor off.

And now, we see articles like these:

Amazon Snags a Higher Ed Superstar — by Doug Lederman
Candace Thille, a pioneer in the science of learning, takes a leave from Stanford to help the ambitious retailer better train its workers, with implications that could extend far beyond the company.


A major force in the higher education technology and learning space has quietly begun working with a major corporate force in — well, in almost everything else.

Candace Thille, a pioneer in learning science and open educational delivery, has taken a leave of absence from Stanford University for a position at Amazon, the massive (and getting bigger by the day) retailer.

Thille’s title, as confirmed by an Amazon spokeswoman: director of learning science and engineering. In that capacity, the spokeswoman said, Thille will work “with our Global Learning Development Team to scale and innovate workplace learning at Amazon.”

No further details were forthcoming, and Thille herself said she was “taking time away” from Stanford to work on a project she was “not really at liberty to discuss.”


Amazon is quietly becoming its own university — by Amy Wang


Jeff Bezos’ Amazon empire—which recently dabbled in home security, opened artificial intelligence-powered grocery stores, and started planning a second headquarters (and manufactured a vicious national competition out of it)—has not been idle in 2018.

The e-commerce/retail/food/books/cloud-computing/etc company made another move this week that, while nowhere near as flashy as the above efforts, tells of curious things to come. Amazon has hired Candace Thille, a leader in learning science, cognitive science, and open education at Stanford University, to be “director of learning science and engineering.” A spokesperson told Inside Higher Ed that Thille will work “with our Global Learning Development Team to scale and innovate workplace learning at Amazon”; Thille herself said she is “not really at liberty to discuss” her new project.

What could Amazon want with a higher education expert? The company already has footholds in the learning market, running several educational resource platforms. But Thille is famous specifically for her data-driven work, conducted at Stanford and Carnegie Mellon University, on nontraditional ways of learning, teaching, and training—all of which are perfect, perhaps even necessary, for the education of employees.


From DSC:
It could just be that Amazon is simply building its own corporate university and will stay focused on developing its own employees and its own corporate learning platform/offerings — and/or perhaps license their new platform to other corporations.

But from my perspective, Amazon continues to work on pieces of a powerful puzzle, one that could eventually involve providing learning experiences to lifelong learners:

  • Personal assistants
  • Voice recognition / Natural Language Processing (NLP)
  • The development of “skills” at an incredible pace
  • Personalized recommendation engines
  • Cloud computing and more

If Alexa were to get integrated into an AI-based platform for personalized learning — one that features up-to-date recommendation engines that can identify and point out each learner’s critical needs in the workplace — then higher ed had better look out! Especially if such a platform could interactively deliver (and assess) the bulk of the content that does the heavy initial lifting when someone is learning about a particular topic.

Amazon will be able to deliver a cloud-based platform, with cloud-based learner profiles and blockchain-based technologies, at a greatly reduced cost. Think about it. No physical footprints to build and maintain, no lawns to mow, no heating bills to pay, no coaches making $X million a year, etc.  AI-driven recommendations for digital playlists. Links to the most in demand jobs — accompanied by job descriptions, required skills & qualifications, and courses/modules to take in order to master those jobs.
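
One way to picture the recommendation-engine piece described above: diff a learner profile’s skills against a job posting’s requirements, then queue up the modules that cover the gap. A deliberately naive sketch with invented data (real engines would rank by much richer signals):

```python
# Deliberately simple sketch of gap-based recommendation: diff a
# learner's skills against a job's requirements, then map each missing
# skill to a course module. All names and data are invented.

MODULE_FOR_SKILL = {
    "sql": "Intro to Databases",
    "statistics": "Statistics Fundamentals",
    "python": "Programming with Python",
}

def recommend_modules(learner_skills, job_requirements):
    """Return modules covering skills the job needs but the learner lacks."""
    gap = sorted(set(job_requirements) - set(learner_skills))
    return [MODULE_FOR_SKILL[s] for s in gap if s in MODULE_FOR_SKILL]

learner = {"python"}
job = {"python", "sql", "statistics"}
print(recommend_modules(learner, job))
# -> ['Intro to Databases', 'Statistics Fundamentals']
```

A cloud-based learner profile is, at minimum, the `learner_skills` set kept current; the job descriptions supply the other side of the diff.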

Such a solution would still need professors, instructional designers, multimedia specialists, copyright experts, etc., but they’ll be able to deliver up-to-date content at greatly reduced costs. That’s my bet. And that’s why I now call this potential development The New of Higher Education.

[Microsoft — with its purchase of LinkedIn (which had previously purchased Lynda.com) — is another such potential contender.]




“Rise of the machines” — from January 2018 edition of InAVate magazine
AI is generating lots of buzz in other verticals, but what can AV learn from those? Tim Kridel reports.



From DSC:
Learning spaces are relevant as well in the discussion of AI and AV-related items.


Also in their January 2018 edition, see an incredibly detailed project at the London Business School.


A full-width frosted glass panel sits on the desk surface; above it, fixed in the ceiling, is a Wolfvision VZ-C12 visualiser. This means the teaching staff can write on the (wipe-clean) surface and the text appears directly on two 94-in screens behind them, using Christie 4,000-lumen short-throw laser projectors. When the lecturer is finished or has filled up the screen with text, the image can be saved on the intranet or via USB. Simply wipe with a cloth and start again. Not only is the technology inventive, but it allows the teaching staff to remain in face-to-face contact with the students at all times, instead of students having to stare at the back of the lecturer’s head whilst they write.



Also relevant, see:




TV is (finally) an app: The goods, the bads and the uglies for learning — by Cathie Norris, Elliot Soloway


Television. TV. There’s an app for that. Finally! TV — that is, live shows such as the news, specials, documentaries (and reality shows, if you must) — is now just like Candy Crush and Facebook. TV apps (e.g., DirecTV Now) are available on all devices — smartphones, tablets, laptops, Chromebooks. Accessing streams upon streams of videos is, literally, now just a tap away.

Plain and simple: readily accessible video can be a really valuable resource for learners and learning.

Not everything that needs to be learned is on video. Instruction will need to balance the use of video with the use of printed materials. That balance, of course, needs to take into account cost and accessibility.

Now for the 800 pound gorilla in the room: Of course, that TV app could be a huge distraction in the classroom. The TV app has just piled yet another classroom management challenge onto a teacher’s back.

That said, it is early days for TV as an app. For example, HD (High Definition) TV demands high bandwidth — and we can experience stuttering/skipping at times. But, when 5G comes around in 2020, just two years from now, POOF, that stuttering/skipping will disappear. “5G will be as much as 1,000 times faster than 4G.”  Yes, POOF!


From DSC:
Learning via apps is here to stay. “TV” as apps is here to stay. But what’s being described here is but one piece of the learning ecosystem that will be built over the next 5-15 years and will likely be revolutionary in its global impact on how people learn and grow. There will be opportunities for social-based learning, project-based learning, and more — with digital video being a component of the ecosystem, though video alone is, and will be, insufficient to move someone through all of the levels of Bloom’s Taxonomy.

I will continue to track this developing learning ecosystem, but voice-driven personal assistants are already here. Algorithm-based recommendations are already here. Real-time language translation is already here.  The convergence of the telephone/computer/television continues to move forward.  AI-based bots will only get better in the future. Tapping into streams of up-to-date content will continue to move forward. Blockchain will likely bring us into the age of cloud-based learner profiles. And on and on it goes.

We’ll still need teachers, professors, and trainers. But this vision WILL occur. It IS where things are heading. It’s only a matter of time.


The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV






DC: The next generation learning platform will likely offer us virtual reality-enabled learning experiences such as this “flight simulator for teachers.”

Virtual reality simulates classroom environment for aspiring teachers — by Charles Anzalone, University at Buffalo

Excerpt (emphasis DSC):

Two University at Buffalo education researchers have teamed up to create an interactive classroom environment in which state-of-the-art virtual reality simulates difficult student behavior, a training method its designers compare to a “flight simulator for teachers.”

The new program, already earning endorsements from teachers and administrators in an inner-city Buffalo school, ties into State University of New York Chancellor Nancy L. Zimpher’s call for innovative teaching experiences and “immersive” clinical experiences and teacher preparation.

The training simulator Lamb compared to a teacher flight simulator uses an emerging computer technology known as virtual reality. Becoming more popular and accessible commercially, virtual reality immerses the subject in what Lamb calls “three-dimensional environments in such a way where that environment is continuous around them.” An important characteristic of the best virtual reality environments is a convincing and powerful representation of the imaginary setting.


Also related/see:


    TLE TeachLivE™ is a mixed-reality classroom with simulated students that provides teachers the opportunity to develop their pedagogical practice in a safe environment that doesn’t place real students at risk.  This lab is currently the only one in the country using a mixed reality environment to prepare or retrain pre-service and in-service teachers. The use of TLE TeachLivE™ Lab has also been instrumental in developing transition skills for students with significant disabilities, providing immediate feedback through bug-in-ear technology to pre-service teachers, developing discrete trial skills in pre-service and in-service teachers, and preparing teachers in the use of STEM-related instructional strategies.






This start-up uses virtual reality to get your kids excited about learning chemistry — by Lora Kolodny and Erin Black

  • MEL Science raised $2.2 million in venture funding to bring virtual reality chemistry lessons to schools in the U.S.
  • Eighty-two percent of science teachers surveyed in the U.S. believe virtual reality content can help their students master their subjects.


This start-up uses virtual reality to get your kids excited about learning chemistry — video from CNBC.



From DSC:
It will be interesting to see all the “places” we will be able to go and interact within — all from the comfort of our living rooms! Next generation simulators should be something else for teaching/learning & training-related purposes!!!

The next gen learning platform will likely offer such virtual reality-enabled learning experiences, along with voice recognition/translation services and a slew of other technologies — such as AI, blockchain*, chatbots, data mining/analytics, web-based learner profiles, an online-based marketplace supported by the work of learning-based free agents, and others — running in the background. All of these elements will work to offer us personalized, up-to-date learning experiences — helping each of us stay relevant in the marketplace as well as simply enabling us to enjoy learning about new things.

But the potentially disruptive piece of all of this is that this next generation learning platform could create an alternative to what we now refer to as “higher education.” It could just as easily serve as a platform for offering learning experiences for learners in K-12 as well as the corporate learning & development space.


I’m tracking these developments at:





*  Also see:

Blockchain, Bitcoin and the Tokenization of Learning — by Sydney Johnson


In 2014, The King’s College in New York became the first university in the U.S. to accept Bitcoin for tuition payments, a move that seemed more of a PR stunt than the start of some new movement. Much has changed since then, including the value of Bitcoin itself, which skyrocketed to more than $19,000 earlier this month, catapulting cryptocurrencies into the mainstream.

A handful of other universities (and even preschools) now accept Bitcoin for tuition, but that’s hardly the extent of how blockchains and tokens are weaving their way into education: Educators and edtech entrepreneurs are now testing out everything from issuing degrees on the blockchain to paying people in cryptocurrency for their teaching.










Artificial Intelligence has leapt to the forefront of global discourse, garnering increased attention from practitioners, industry leaders, policymakers, and the general public. The diversity of opinions and debates gathered from news articles this year illustrates just how broadly AI is being investigated, studied, and applied. However, the field of AI is still evolving rapidly and even experts have a hard time understanding and tracking progress across the field.

Without the relevant data for reasoning about the state of AI technology, we are essentially “flying blind” in our conversations and decision-making related to AI.

Created and launched as a project of the One Hundred Year Study on AI at Stanford University (AI100), the AI Index is an open, not-for-profit project to track activity and progress in AI. It aims to facilitate an informed conversation about AI that is grounded in data. This is the inaugural annual report of the AI Index, and in this report we look at activity and progress in Artificial Intelligence through a range of perspectives. We aggregate data that exists freely on the web, contribute original data, and extract new metrics from combinations of data series.

All of the data used to generate this report will be openly available on the AI Index website. Providing data, however, is just the beginning. To become truly useful, the AI Index needs support from a larger community. Ultimately, this report is a call for participation. You have the ability to provide data, analyze collected data, and make a wish list of what data you think needs to be tracked. Whether you have answers or questions to provide, we hope this report inspires you to reach out to the AI Index and become part of the effort to ground the conversation about AI.




AI: Embracing the promises and realities — from the Allegis Group


What will that future be? When it comes to jobs, the tea leaves are indecipherable as analysts grapple with emerging technologies, new fields of work, and skills that have yet to be conceived. The only certainty is that jobs will change. Consider the conflicting predictions put forth by the analyst community:

  • According to the Organisation for Economic Co-operation and Development (OECD), only 5-10% of labor would be displaced by intelligent automation, and new job creation will offset losses.  (Inserted comment from DSC: Hmmm. ONLY 5-10%!? What?! That’s huge! And don’t count on the majority of those people becoming experts in robotics, algorithms, big data, AI, etc.)
  • The World Economic Forum said in 2016 that 60% of children entering school today will work in jobs that do not yet exist.
  • 47% of all American job functions could be automated within 20 years, according to a 2013 report from the Oxford Martin School.
  • In 2016, a KPMG study estimated that 100 million global knowledge workers could be affected by robotic process automation by 2025.

Despite the conflicting views, most analysts agree on one thing: big change is coming. Venture capitalist David Vandergrift has some words of advice: “Anyone not planning to retire in the next 20 years should be paying pretty close attention to what’s going on in the realm of AI. The supplanting (of jobs) will not happen overnight: the trend over the next couple of decades is going to be towards more and more automation.”

While analysts may not agree on the timing of AI’s development in the economy, many companies are already seeing its impact on key areas of talent and business strategy. AI is replacing jobs, changing traditional roles, applying pressure on knowledge workers, creating new fields of work, and raising the demand for certain skills.


The emphasis on learning is a key change from previous decades and rounds of automation. Advanced AI is, or will soon be, capable of displacing a very wide range of labor, far beyond the repetitive, low-skill functions traditionally thought to be at risk from automation. In many cases, the pressure on knowledge workers has already begun.

Regardless of industry, however, AI is a real challenge to today’s way of thinking about work, value, and talent scarcity. AI will expand and eventually force many human knowledge workers to reinvent their roles to address issues that machines cannot process. At the same time, AI will create a new demand for skills to guide its growth and development. These emerging areas of expertise will likely be technical or knowledge-intensive fields. In the near term, the competition for workers in these areas may change how companies focus their talent strategies.


© 2017 | Daniel Christian