From DSC:
I haven’t tried this app myself, but I was intrigued by the concept behind it. Machine learning and deep learning are very much at play here: the app can recognize a particular problem and then present potential assistance and answers. “Take a photo. Get instant help.”

Description
Take a PHOTO of your homework question and get explanations, videos, and step-by-step help instantly. Supports Math, Science, History, English, Econ and more. Completely free, NO in-app purchases. “This app is a lifesaver”

  • Fast – Take a photo, get instant results, no waiting
  • Explainers – Teaches you exactly what you need to learn
  • Videos – The best YouTube videos for your question
  • Powerful – Better than Google for homework help
  • Free – Free to use and always will be

~~ How it works ~~
Socratic is a homework app that combines cutting-edge Artificial Intelligence (AI) with amazing learning content to make learning on your phone easy.

Take a picture of a homework question and our AI instantly figures out which concepts you need to learn in order to answer it, and shows you simple, high-quality content designed to make learning easy.

Socratic’s AI combines cutting-edge computer vision technologies, which read questions from images, with machine learning classifiers built using millions of sample homework questions, to accurately predict which concepts will help you solve your question.
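
Conceptually, that pipeline is OCR followed by text classification. Here is a minimal sketch, assuming pytesseract/Pillow for the vision step and a TF-IDF plus logistic-regression classifier as a stand-in for Socratic’s production models (which the company says were built from millions of sample questions):

```python
# A minimal sketch of the "read the photo, then classify the question"
# pipeline described above. The model and training data here are toy
# placeholders, not Socratic's actual system.
from PIL import Image
import pytesseract
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set; real training data would be far larger.
questions = [
    "Solve for x: 2x + 3 = 11",
    "What is the derivative of x squared?",
    "Explain the causes of the French Revolution",
    "Balance the equation: H2 + O2 -> H2O",
]
concepts = ["algebra", "calculus", "history", "chemistry"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(questions, concepts)

def predict_concept(photo_path):
    """OCR a homework question from a photo, then predict its concept."""
    text = pytesseract.image_to_string(Image.open(photo_path))
    return classifier.predict([text])[0]
```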

Socratic’s team of educators is creating highly-visual, jargon-free content to teach every important high school curriculum concept, and is curating the best online videos from sources like Khan Academy, Crash Course, and others.

Together, these pieces make the Socratic app a huge improvement in how students learn on the Internet.

From DSC:
Again, this is the type of service that I could see the New Amazon.com of Higher Education featuring in their courses and/or microlearning-based offerings.

Also see:

  • Q&A: Artificial Intelligence Expert Shares His Vision of the Future of Education — from edtechmagazine.com by Amy Burroughs
    Artificial intelligence expert Joseph Qualls believes AI can solve some of the biggest challenges facing higher education — and the change is already underway.

    Excerpts:

    EDTECH: What AI applications might we see in higher education?

    QUALLS: You are going to see a massive change in education from K–12 to the university. The thought of having large universities and large faculties teaching students is probably going to go away — not in the short-term, but in the long-term. You will have a student interact with an AI system that will understand him or her and provide an educational path for that particular student. Once you have a personalized education system, education will become much faster and more enriching. You may have a student who can do calculus in the sixth grade because AI realized he had a mathematical sense. That personalized education is going to change everything.

Making sure the machines don’t take over — from raconteur.net by Mark Frary
Preparing economic players for the impact of artificial intelligence is a work in progress which requires careful handling

From DSC:
This short article presents a balanced approach, as it relays both the advantages and disadvantages of AI in our world.

Perhaps one of higher education’s new tasks will be to determine which jobs are most likely to survive the next 5-10+ years and to help people get up to speed in those areas. The liberal arts are very important here, as they lay a solid foundation that one can use to adapt to changing conditions and move into multiple areas. If the C-suite only sees the savings to the bottom line — and to *&^# with humanity (that’s their problem, not mine!) — then our society could be in trouble.

Also see:

The Dark Secret at the Heart of AI — from technologyreview.com by Will Knight
No one really knows how the most advanced algorithms do what they do. That could be a problem.

Excerpt:

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.

“Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time.
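
One family of techniques for making such black boxes more understandable is perturbation-based explanation: remove or alter pieces of the input and measure how the model’s confidence shifts. Below is a minimal sketch using a toy scikit-learn text classifier as a stand-in for any model that exposes prediction probabilities; real tools such as LIME refine this same idea.

```python
# Toy demonstration of perturbation-based explanation: train a tiny text
# classifier, then score each word in an input by how much the model's
# confidence in its prediction drops when that word is removed.
# (Hypothetical data and model, for illustration only.)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "great product, works well",
    "terrible, it broke in a day",
    "love it, excellent quality",
    "awful experience, waste of money",
]
labels = [1, 0, 1, 0]  # 1 = positive review, 0 = negative review

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def word_importance(sentence):
    """Rank words by the drop in predicted-class probability when removed."""
    words = sentence.split()
    predicted = model.predict([sentence])[0]
    base = model.predict_proba([sentence])[0][predicted]
    scores = []
    for i, word in enumerate(words):
        reduced = " ".join(words[:i] + words[i + 1:])
        drop = base - model.predict_proba([reduced])[0][predicted]
        scores.append((word, drop))
    return sorted(scores, key=lambda pair: -pair[1])

print(word_importance("excellent quality but awful battery"))
```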

From DSC:
First of all, let me say again that I’m not suggesting that we replace professors with artificial intelligence, algorithms, and such.

However, given a variety of trends, we need to greatly lower the price of obtaining a degree, and these types of technologies will help us do just that — while at the same time significantly increasing the productivity of each professor and/or team of specialists offering an online-based course (something institutions of higher education are currently attempting to do…big time). Not only will these types of technologies find their place in the higher education landscape, but I predict that they will usher in a “New Amazon.com of Higher Education” — a new organization that will cause major disruption for traditional institutions of higher education. AI-powered MOOCs will find their place on the higher ed landscape; just how big they become remains to be seen, but this area of the landscape should be on our radars from here on out.

This type of development again points to the need for team-based approaches; such approaches will likely dominate the future.

California State University East Bay partners with Cognii to offer artificial intelligence powered online learning — from prnewswire.com
Cognii’s Virtual Learning Assistant technology will provide intelligent tutoring and assessments to students in a chatbot-style conversation

Excerpt:

HAYWARD, Calif., April 14, 2017 /PRNewswire/ — Cal State East Bay, a top-tier public university, and Cognii Inc., a leading provider of artificial intelligence-based educational technologies, today announced a partnership. Cognii will work with Cal State East Bay to develop a new learning and assessment experience, powered by Cognii’s Virtual Learning Assistant technology.

Winner of the 2016 EdTech Innovation of the Year Award from Mass Technology Leadership Council for its unique use of conversational AI and Natural Language Processing technologies in education, Cognii VLA provides automatic grading of students’ open-response answers along with qualitative feedback that guides them towards conceptual mastery. Compared to multiple-choice tests, open-response questions are considered pedagogically superior for measuring students’ critical thinking and problem-solving skills, which are essential for 21st-century jobs.

Students at Cal State East Bay will use the Cognii-powered interactive tutorials starting in summer as part of the online transfer orientation course. The interactive questions and tutorials will be developed collaboratively by the Cognii team and the eLearning specialists from the university’s office of the Online Campus. Students will interact with the questions in a chatbot-style natural language conversation during the formative assessment stage. As students practice the tutorials, Cognii will generate rich learning analytics and proficiency measurements for the course leaders.
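
Cognii’s NLP technology is proprietary, but as a rough illustration of how automatic grading of open-response answers can work in principle, here is a sketch that scores a student’s answer by its textual similarity to a reference answer and attaches templated feedback. The similarity measure and the 0.7/0.4 thresholds are arbitrary assumptions, not Cognii’s method.

```python
# Illustrative open-response scoring (NOT Cognii's proprietary method):
# compare the student's answer to a reference answer with TF-IDF cosine
# similarity, then return templated qualitative feedback.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def grade_open_response(student_answer, reference_answer):
    """Return a 0-1 similarity score and brief qualitative feedback."""
    vectors = TfidfVectorizer().fit_transform([reference_answer, student_answer])
    score = float(cosine_similarity(vectors[0], vectors[1])[0][0])
    if score > 0.7:  # arbitrary threshold for illustration
        feedback = "Good: your answer covers the key concepts."
    elif score > 0.4:
        feedback = "Partial: you mention some key ideas; review what is missing."
    else:
        feedback = "Keep going: revisit the core concept and try answering again."
    return score, feedback

score, feedback = grade_open_response(
    "Photosynthesis lets plants convert sunlight into chemical energy",
    "Plants use photosynthesis to turn light energy into chemical energy",
)
print(f"{score:.2f} - {feedback}")
```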

From DSC:
The recent pieces below made me once again reflect on the massive changes that are quickly approaching — and in some cases are already here — for a variety of nations throughout the world.

They caused me to reflect on:

  • What might the ramifications be for higher education of these changes that are just starting to take place in the workplace due to artificial intelligence (e.g., the increasing use of algorithms, machine learning, and deep learning), automation, and robotics?
  • The need for people to reinvent themselves quickly throughout their careers (if we can still call them careers)
  • How should we, as a nation, prepare for these massive changes so that there isn’t civil unrest due to soaring inequality and unemployment?

As found in the April 9th, 2017 edition of our local newspaper here:

When even our local newspaper is picking up on this trend, you know it is real and has some significance to it.

Then, as I was listening to the radio a day or two after seeing the above article, I heard a related piece on NPR: a journalist is traveling across the country, trying to identify “robot-safe” jobs. Here’s the feature on this from MarketPlace.org

What changes do institutions of traditional higher education immediately need to begin planning for? Initiating?

What changes should we plan for, and begin to initiate, in the way(s) that we accredit new programs?

Keywords/ideas that come to my mind:

  • Change — to society, to people, to higher ed, to the workplace
  • Pace of technological change — no longer linear, but exponential
  • Career development
  • Staying relevant — as institutions, as individuals in the workplace
  • Reinventing ourselves over time — and having to do so quickly
  • Adapting, being nimble, willing to innovate — as institutions, as individuals
  • Game-changing environment
  • Lifelong learning — higher ed needs to put more emphasis on microlearning, heutagogy, and delivering constant, up-to-date streams of content and learning experiences. This could happen via the addition of smaller learning hubs, some of them makeshift hubs at locations these institutions don’t even own…like your local Starbucks.
  • If we don’t get this right, there could be major civil unrest as inequality and unemployment soar
  • Traditional institutions of higher education have not been nearly as responsive to change as they have needed to be; this opens the door to alternatives. There’s a limited (and closing) window of time left to become more nimble and responsive before these alternatives majorly disrupt the current world of higher education.

Addendum from the corporate world (emphasis DSC):

From The Impact 2017 Conference:

The Role of HR in the Future of Work – A Town Hall

  • Josh Bersin, Principal and Founder, Bersin by Deloitte, Deloitte Consulting LLP
  • Nicola Vogel, Global Senior HR Director, Danfoss
  • Frank Møllerop, Chief Executive Officer, Questback
  • David Mallon, Head of Research, Bersin by Deloitte, Deloitte Consulting LLP

Massive changes spurred by new technologies such as artificial intelligence, mobile platforms, sensors and social collaboration have revolutionized the way we live, work and communicate – and the pace is only accelerating. Robots and cognitive technologies are making steady advances, particularly in jobs and tasks that follow set, standardized rules and logic. This reinforces a critical challenge for business and HR leaders—namely, the need to design, source, and manage the future of work.

In this Town Hall, we will discuss the role HR can play in leading the digital transformation that is shaping the future of work in organizations worldwide. We will explore the changes we see taking place in three areas:

  • Digital workforce: How can organizations drive new management practices, a culture of innovation and sharing, and a set of talent practices that facilitate a new network-based organization?
  • Digital workplace: How can organizations design a working environment that enables productivity; uses modern communication tools (such as Slack, Workplace by Facebook, Microsoft Teams, and many others); and promotes engagement, wellness, and a sense of purpose?
  • Digital HR: How can organizations change the HR function itself to operate in a digital way, use digital tools and apps to deliver solutions, and continuously experiment and innovate?

The Hidden Costs of Active Learning — by Thomas Mennella
Flipped and active learning truly are a better way for students to learn, but they also may be a fast track to instructor burnout.

Excerpt:

The time has come for us to have a discussion about the hidden cost of active learning in higher education. Soon, gone will be the days of instructors arriving to a lecture hall, delivering a 75-minute speech and leaving. Gone will be the days of midterms and finals being the sole forms of assessing student learning. For me, these days have already passed, and good riddance. These are largely ineffective teaching and learning strategies. Today’s college classroom is becoming dynamic, active and student-centered. Additionally, the learning never stops because the dialogue between student and instructor persists endlessly over the internet. Trust me when I say that this can be exhausting. With constant ‘touch-points,’ ‘personalized learning opportunities’ and the like, the notion of a college instructor having 12 contact hours per week that even remotely total 12 hours is beyond unreasonable.

We need to reevaluate how we measure, assign and compensate faculty teaching loads within an active learning framework. We need to recognize that instructors teaching in these innovative ways are doing more, and spending more hours, than their more traditional colleagues. And we must accept that a failure to recognize and remedy these ‘new normals’ risks burning out a generation of dedicated and passionate instructors. Flipped learning works and active learning works, but they’re very challenging ways to teach. I still say I will never teach another way again … I’m just not sure for how much longer that can be.

From DSC:
The above article prompted me to revisit the question of how we might move towards using more team-based approaches. Thomas Mennella seems to be doing an incredible job — but grading 344 assignments each week, or 3,784 assignments this semester, is most definitely a recipe for burnout.

Then, as I pondered this situation, an article came to mind that discusses Thomas Frey’s prediction that the largest internet-based company of 2030 will be focused on education.

I wondered…who will be the Amazon.com of the future of education? 

Such an organization will likely utilize a team-based approach to create and deliver excellent learning experiences — and will also likely leverage the power of artificial intelligence/machine learning/deep learning as a piece of their strategy.

Tech giants grapple with the ethical concerns raised by the AI boom — from technologyreview.com by Tom Simonite
As machines take over more decisions from humans, new questions about fairness, ethics, and morality arise.

Excerpt:

With great power comes great responsibility—and artificial-intelligence technology is getting much more powerful. Companies in the vanguard of developing and deploying machine learning and AI are now starting to talk openly about ethical challenges raised by their increasingly smart creations.

“We’re here at an inflection point for AI,” said Eric Horvitz, managing director of Microsoft Research, at MIT Technology Review’s EmTech conference this week. “We have an ethical imperative to harness AI to protect and preserve over time.”

Horvitz spoke alongside researchers from IBM and Google pondering similar issues. One shared concern was that recent advances are leading companies to put software in positions with very direct control over humans—for example in health care.

The Enterprise Gets Smart
Companies are starting to leverage artificial intelligence and machine learning technologies to bolster customer experience, improve security and optimize operations.

Excerpt:

Assembling the right talent is another critical component of an AI initiative. While existing enterprise software platforms that add AI capabilities will make the technology accessible to mainstream business users, there will be a need to ramp up expertise in areas like data science, analytics and even nontraditional IT competencies, says Guarini.

“As we start to see the land grab for talent, there are some real gaps in emerging roles, and those that haven’t been as critical in the past,” Guarini says, citing the need for people with expertise in disciplines like philosophy and linguistics, for example. “CIOs need to get in front of what they need in terms of capabilities and, in some cases, identify potential partners.”

Asilomar AI Principles

These principles were developed in conjunction with the 2017 Asilomar conference (videos here), through the process described here.

Artificial intelligence has already provided beneficial tools that are used every day by people around the world. Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead.

Research Issues

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Ethics and Values

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer-term Issues

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

Excerpts:
Creating human-level AI: Will it happen, and if so, when and how? What key remaining obstacles can be identified? How can we make future AI systems more robust than today’s, so that they do what we want without crashing, malfunctioning or getting hacked?

  • Talks:
    • Demis Hassabis (DeepMind)
    • Ray Kurzweil (Google) (video)
    • Yann LeCun (Facebook/NYU) (pdf) (video)
  • Panel with Anca Dragan (Berkeley), Demis Hassabis (DeepMind), Guru Banavar (IBM), Oren Etzioni (Allen Institute), Tom Gruber (Apple), Jürgen Schmidhuber (Swiss AI Lab), Yann LeCun (Facebook/NYU), Yoshua Bengio (Montreal) (video)
  • Superintelligence: Science or fiction? If human level general AI is developed, then what are likely outcomes? What can we do now to maximize the probability of a positive outcome? (video)
    • Talks:
      • Shane Legg (DeepMind)
      • Nick Bostrom (Oxford) (pdf) (video)
      • Jaan Tallinn (CSER/FLI) (pdf) (video)
    • Panel with Bart Selman (Cornell), David Chalmers (NYU), Elon Musk (Tesla, SpaceX), Jaan Tallinn (CSER/FLI), Nick Bostrom (FHI), Ray Kurzweil (Google), Stuart Russell (Berkeley), Sam Harris, Demis Hassabis (DeepMind): If we succeed in building human-level AGI, then what are likely outcomes? What would we like to happen?
    • Panel with Dario Amodei (OpenAI), Nate Soares (MIRI), Shane Legg (DeepMind), Richard Mallah (FLI), Stefano Ermon (Stanford), Viktoriya Krakovna (DeepMind/FLI): Technical research agenda: What can we do now to maximize the chances of a good outcome? (video)
  • Law, policy & ethics: How can we update legal systems, international treaties and algorithms to be more fair, ethical and efficient and to keep pace with AI?
    • Talks:
      • Matt Scherer (pdf) (video)
      • Heather Roff-Perkins (Oxford)
    • Panel with Martin Rees (CSER/Cambridge), Heather Roff-Perkins, Jason Matheny (IARPA), Steve Goose (HRW), Irakli Beridze (UNICRI), Rao Kambhampati (AAAI, ASU), Anthony Romero (ACLU): Policy & Governance (video)
    • Panel with Kate Crawford (Microsoft/MIT), Matt Scherer, Ryan Calo (U. Washington), Kent Walker (Google), Sam Altman (OpenAI): AI & Law (video)
    • Panel with Kay Firth-Butterfield (IEEE, Austin-AI), Wendell Wallach (Yale), Francesca Rossi (IBM/Padova), Huw Price (Cambridge, CFI), Margaret Boden (Sussex): AI & Ethics (video)

Galvanize will teach students how to use IBM Watson APIs with new machine learning course — from techcrunch.com by John Mannes

Excerpt:

As part of IBM’s annual InterConnect conference in Las Vegas, the company is announcing a new machine learning course in partnership with workspace and education provider Galvanize to familiarize students with IBM’s suite of Watson APIs. These APIs simplify the process of building tools that rely on language, speech and vision analysis.

Going by the admittedly clunky name IBM Cognitive Course, the class will spend four weeks teaching the basics of machine learning and Watson’s capabilities. Students will be able to take the class directly within IBM’s Bluemix cloud platform.
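
For a sense of what building against these Watson APIs looked like, here is a minimal sketch of calling the Natural Language Understanding service from Python. The endpoint URL, version date, and username/password credentials reflect the 2017-era Bluemix service and are assumptions on my part; check IBM’s current documentation before relying on them.

```python
# A minimal sketch of calling Watson Natural Language Understanding as it
# existed on Bluemix circa 2017. The endpoint, version string, and
# username/password auth are era-appropriate assumptions and may have
# changed since.
import requests

# Placeholder credentials from the Bluemix service-credentials page.
WATSON_AUTH = ("your-service-username", "your-service-password")
NLU_URL = ("https://gateway.watsonplatform.net/"
           "natural-language-understanding/api/v1/analyze")

response = requests.post(
    NLU_URL,
    params={"version": "2017-02-27"},
    auth=WATSON_AUTH,
    json={
        "text": "IBM and Galvanize are teaching students to build with Watson.",
        # Request keyword extraction and sentiment analysis.
        "features": {"keywords": {}, "sentiment": {}},
    },
)
print(response.json())
```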
