New Google Earth has exciting features for teachers — from thejournal.com by Richard Chang

Excerpt:

Google has recently released a brand new version of Google Earth for both Chrome and Android. This new version has come with a slew of nifty features teachers can use for educational purposes with students in class. Following is a quick overview of the most fascinating features…

Making sure the machines don’t take over — from raconteur.net by Mark Frary
Preparing economic players for the impact of artificial intelligence is a work in progress which requires careful handling

 

From DSC:
This short article presents a balanced view, relaying both the advantages and disadvantages of AI in our world.

Perhaps it will be one of higher education’s new tasks — to determine which jobs are likely to survive the next 5-10+ years and to help people get up to speed in those areas. The liberal arts are very important here, as they lay a solid foundation that one can use to adapt to changing conditions and move into multiple fields. If the C-suite only sees the savings to the bottom line — and to *&^# with humanity (that’s their problem, not mine!) — then our society could be in trouble.

 

Also see:

The Dark Secret at the Heart of AI — from technologyreview.com by Will Knight
No one really knows how the most advanced algorithms do what they do. That could be a problem.

Excerpt:

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.

 

“Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

 


This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time.

From DSC:
The recent pieces below made me once again reflect on the massive changes that are quickly approaching — and in some cases are already here — for a variety of nations throughout the world.

They caused me to reflect on:

  • What the potential ramifications for higher education might be of the changes just starting to take place in the workplace due to artificial intelligence (i.e., the increasing use of algorithms, machine learning, and deep learning), automation, and robotics
  • The need for people to reinvent themselves quickly throughout their careers (if we can still call them careers)
  • How we, as a nation, should prepare for these massive changes so that soaring inequality and unemployment don’t lead to civil unrest

As found in the April 9th, 2017 edition of our local newspaper:

When even our local newspaper is picking up on this trend, you know it is real and carries some significance.

 

Then, as I was listening to the radio a day or two after seeing the above article, I heard another related piece on NPR. NPR is having a journalist travel across the country, trying to identify “robot-safe” jobs. Here’s the feature on this from MarketPlace.org

 

 

What changes do institutions of traditional higher education
immediately need to begin planning for? Initiating?

What changes should be planned for and initiated
in the way(s) that we accredit new programs?

 

 

Keywords/ideas that come to my mind:

  • Change — to society, to people, to higher ed, to the workplace
  • Pace of technological change — no longer linear, but exponential
  • Career development
  • Staying relevant — as institutions, as individuals in the workplace
  • Reinventing ourselves over time — and having to do so quickly
  • Adapting, being nimble, willing to innovate — as institutions, as individuals
  • Game-changing environment
  • Lifelong learning — higher ed needs to put more emphasis on microlearning, heutagogy, and delivering constant/up-to-date streams of content and learning experiences. This could happen via the addition/use of smaller learning hubs — some of them makeshift hubs operating out of locations that these institutions don’t even own…like your local Starbucks.
  • If we don’t get this right, there could be major civil unrest as inequality and unemployment soar
  • Traditional institutions of higher education have not been nearly as responsive to change as they have needed to be; this opens the door to alternatives. There’s a limited (and closing) window of time left to become more nimble and responsive before these alternatives majorly disrupt the current world of higher education.

Addendum from the corporate world (emphasis DSC):



 

From The Impact 2017 Conference:

The Role of HR in the Future of Work – A Town Hall

  • Josh Bersin, Principal and Founder, Bersin by Deloitte, Deloitte Consulting LLP
  • Nicola Vogel, Global Senior HR Director, Danfoss
  • Frank Møllerop, Chief Executive Officer, Questback
  • David Mallon, Head of Research, Bersin by Deloitte, Deloitte Consulting LLP

Massive changes spurred by new technologies such as artificial intelligence, mobile platforms, sensors and social collaboration have revolutionized the way we live, work and communicate – and the pace is only accelerating. Robots and cognitive technologies are making steady advances, particularly in jobs and tasks that follow set, standardized rules and logic. This reinforces a critical challenge for business and HR leaders—namely, the need to design, source, and manage the future of work.

In this Town Hall, we will discuss the role HR can play in leading the digital transformation that is shaping the future of work in organizations worldwide. We will explore the changes we see taking place in three areas:

  • Digital workforce: How can organizations drive new management practices, a culture of innovation and sharing, and a set of talent practices that facilitate a new network-based organization?
  • Digital workplace: How can organizations design a working environment that enables productivity; uses modern communication tools (such as Slack, Workplace by Facebook, Microsoft Teams, and many others); and promotes engagement, wellness, and a sense of purpose?
  • Digital HR: How can organizations change the HR function itself to operate in a digital way, use digital tools and apps to deliver solutions, and continuously experiment and innovate?
 

Federal Reserve Bank of New York: Press Briefing on Household Debt, with Focus on Student Debt — with thanks to Mr. Bryan Alexander for his post on this

Excerpts:

Student Debt Overview
  • Student debt was $1.3 trillion at the end of 2016, an increase of about 170% from 2006.
  • Aggregate student debt is increasing because:
    • More students are taking out loans
    • Loans are for larger amounts
    • Repayment rates have slowed down
  • About 5% of the borrowers have more than $100,000 debt in 2016, but they account for about 30% of the total debt.
  • Recent graduates with student loans leave school with about $34,000, up nearly 70% from 10 years ago.

  • While the total level of household debt has nearly returned to the 2008 peak, debt types and borrower profiles have changed.
    • Debt growth is now driven by non-housing sectors, and debt is held by older, more creditworthy borrowers.
  • Student debt has expanded significantly because of higher levels of borrowing and slower rates of repayment.
  • Student debt defaults peaked with the 2011 cohort and have improved somewhat since. However, payment progress has declined.
  • College attendance is associated with significantly higher homeownership rates regardless of debt status. Yet, student debt appears to dampen homeownership rates among those with the same level of education.
  • College attendance appears to mitigate the impact of economic background on homeownership rates.
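
As a quick back-of-the-envelope check, the quoted figures imply a few additional numbers worth noting. The short Python sketch below simply re-derives them from the bullets above; it is purely illustrative, and the Fed’s own slides remain the authoritative source.

```python
# Illustrative arithmetic only -- re-deriving figures implied by the quoted bullets.

total_2016 = 1.3e12        # aggregate student debt at the end of 2016 (USD)
growth_since_2006 = 1.70   # "an increase of about 170% from 2006"

implied_2006_total = total_2016 / (1 + growth_since_2006)
print(f"Implied 2006 aggregate student debt: ~${implied_2006_total / 1e12:.2f} trillion")

# "About 5% of the borrowers have more than $100,000 ... about 30% of the total debt"
share_of_borrowers = 0.05
share_of_debt = 0.30
print(f"Those borrowers carry roughly {share_of_debt / share_of_borrowers:.0f}x the average balance")

# "Recent graduates ... leave school with about $34,000, up nearly 70% from 10 years ago"
avg_balance_2016 = 34_000
implied_avg_2006 = avg_balance_2016 / 1.70
print(f"Implied average balance at graduation ten years earlier: ~${implied_avg_2006:,.0f}")
```

Running this yields roughly $0.48 trillion of aggregate student debt in 2006, balances among the largest borrowers about 6x the overall average, and an implied average of about $20,000 at graduation a decade earlier.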

Also see:

Retailers cut tens of thousands of jobs. Again. — from money.cnn.com by Paul R. La Monica
The dramatic reshaping of the American retail industry has, unfortunately, led to massive job losses in the sector.

Excerpt (emphasis DSC):

The federal government said Friday that retailers shed nearly 30,000 jobs in March. That follows a more than 30,000 decline in the number of retail jobs in the previous month.

So-called general merchandise stores are hurting the most.

That part of the sector, which includes struggling companies like Macy’s, Sears, and J.C. Penney, lost 35,000 jobs last month. Nearly 90,000 jobs have been eliminated since last October.

“There is no question that the Amazon effect is overwhelming,” said Scott Clemons, chief investment strategist of private banking for BBH. “There has been a shift in the way we buy things as opposed to a shift in the amount of money spent.”

To that end, Amazon just announced plans to hire 30,000 part-time workers.

 

From DSC:
One of the reasons that I’m posting this item is for those who say disruption isn’t real…that it’s only a buzzword…

A second reason that I’m posting this item is that those of us working within higher education should take note of the changes in the world of retail and learn the lesson now, before the “Next Amazon.com of Higher Education*” comes on the scene. Though this organization has yet to materialize, the pieces of its foundation are beginning to come together — such as the ingredients, trends, and developments that I’ve been tracking in my “Learning from the Living [Class] Room” vision.

This new organization will be highly disruptive to institutions of traditional higher education.

If you were in an influential position at Macy’s, Sears, or J.C. Penney today, and you could travel back in time…what would you do?

We in higher education have the luxury of learning from what’s been happening in the retail business. Let’s be sure to learn our lesson.

 



 

* Effective today, what I used to call the “Forthcoming Walmart of Education” — which has already been occurring to some degree with things such as MOOCs and collaborations/partnerships like the one between Georgia Institute of Technology, Udacity, and AT&T — I now call the “Next Amazon.com of Higher Education.”

Cost. Convenience. Selection. Offering a service on-demand (i.e., being quick, responsive, and available 24×7). <– These are all powerful forces.

 



 

P.S. Some will say you can’t possibly compare the worlds of retail and higher education — and that may be true as of 2017. However, if:

  • the costs of higher education keep going up and we continue to turn a deaf ear to the struggling families/students/adult learners/etc. out there
  • alternatives to traditional higher education continue to come on the landscape
  • the Federal Government continues to be more open to financially supporting such alternatives
  • technologies such as artificial intelligence, machine learning, deep learning continue to get better and more powerful — to the point that they can effectively deliver a personalized education (one that is likely to be fully online and that utilizes a team of specialists to create and deliver the learning experiences)
  • people lose their jobs to artificial intelligence, robotics, and automation and need to quickly reinvent themselves

…then I can assure you that people will find other ways to make ends meet. The Next Amazon.com of Higher Education will be just what they are looking for.

The 82 Hottest EdTech Tools of 2017 According to Education Experts — from tutora.co.uk by Giorgio Cassella

Excerpt:

If you work in education, you’ll know there’s a HUGE array of applications, services, products and tools created to serve a multitude of functions in education.

Tools for teaching and learning, parent-teacher communication apps, lesson planning software, home-tutoring websites, revision blogs, SEN education information, professional development qualifications and more.

There are so many companies creating new products for education, though, that it can be difficult to keep up – especially with the massive volumes of planning and marking teachers have to do, never mind finding the time to actually teach!

So how do you know which ones are the best?

Well, as a team of people passionate about education and learning, we decided to do a bit of research to help you out.

We’ve asked some of the best and brightest in education for their opinions on the hottest EdTech of 2017. These guys are the real deal – experts in education, teaching and new tech from all over the world from England to India, to New York and San Francisco.

They’ve given us a list of 82 amazing, tried and tested tools…


From DSC:
The tools I mentioned that Giorgio included in his excellent article were:

  • AdmitHub – Free, Expert College Admissions Advice
  • Labster – Empowering the Next Generation of Scientists to Change the World
  • Unimersiv – Virtual Reality Educational Experiences
  • Lifeliqe – Interactive 3D Models to Augment Classroom Learning

 
From DSC:
It seems to me that we are right on the precipice of major changes — throughout the globe — that are being introduced by the growing use and presence of automation, robotics, and artificial intelligence/machine learning/deep learning, as well as other emerging technologies. And it’s not just the existence of these technologies; it’s also that the pace of their adoption continues to increase.

These things made me wonder…what are the ramifications of the graphs below — and this new trajectory/pace of change that we’re on — for how we accredit new programs within higher education?

For me, it speaks to the need for those of us working within higher education to be more responsive, and we need to increase our efforts to provide more lifelong learning opportunities. People are going to need to reinvent themselves over and over again. For higher education to be of the utmost service to people, accrediting a program must become far less costly and far less time-consuming.

Somewhat relevant addendums:


 

A quote from “Response: What Teaching in the Year 2047 Might Look Like”:

To end the metaphor, what I am simply trying to say is that schools cannot afford to evolve at ¼ of the pace the world is around it and not face the possibility of becoming dangerously irrelevant. So, to answer the question – do I think the classrooms of 2040 look like the classrooms of today? Yes, I think they look more like them than they do not. Unfortunately, in my opinion, that is not the way to best serve our kids in our ever-changing world. Let me be clear, great teaching and instruction has not fundamentally changed in the past 2000 years and will not in the next 30. The context of learning and doing our best to meet the needs of the society we are preparing kids for is how and why schools must be revolutionized, not simply evolve at their own pace.

 


An excerpt from “The global forces inspiring a new narrative of progress” (from mckinsey.com by Ezra Greenberg, Martin Hirt, and Sven Smit; emphasis DSC):

The next three tensions highlight accelerating industry disruption. Digitization, machine learning, and the life sciences are advancing and combining with one another to redefine what companies do and where industry boundaries lie. We’re not just being invaded by a few technologies, in other words, but rather are experiencing a combinatorial technology explosion. Customers are reaping some of the rewards, and our notions of value delivery are changing. In the words of Alibaba’s Jack Ma, B2C is becoming “C2B,” as customers enjoy “free” goods and services, personalization, and variety. And the terms of competition are changing: as interconnected networks of partners, platforms, customers, and suppliers become more important, we are experiencing a business ecosystem revolution.

 

38% of American Jobs Could be Replaced by Robots, According to PwC Report — from bigthink.com by David Ryan Polgar

Excerpt:

Nearly 4 out of 10 American jobs may be replaced through automation by the early 2030s, according to a new report by PricewaterhouseCoopers (PwC). In the report, the United States was viewed as the country most likely to lose jobs through automation–ahead of the UK, Germany, and Japan. This is probably not what the current administration had in mind with an “America First” policy.

Perfect marriage between universities and K12 public schools — from huffingtonpost.com by Dr. Rod Berger

 

Excerpt:

I sat down with Dr. Jeanice Kerr Swift at this year’s AASA conference in New Orleans to learn about the unique advantage of running a public school district that resides alongside one of our nation’s most prominent universities. The University of Michigan provides the district of Ann Arbor with rich partnerships that lift the learning experiences of the children in the community. Kerr Swift is delighted to have the enthusiasm of not only the University but the business community in reaching out to the students of Ann Arbor.

The implementation of real world projects matches University of Michigan scientists with teachers and students to enrich school learning environments. One example is the Woven Wind program that provides real life wind turbine applications. Students learn, and teachers have their classes bolstered by the input of advanced experimentation. Project Lead the Way is another example that is providing modules for classroom learning.

According to Jeanice Kerr Swift, technology should support and strengthen learning, not stand in the place of person-to-person engagement. Devices are there to serve and enhance, not replace teacher-student collaboration and critical thinking. Kerr Swift believes there is a balance of the “Cs” to consider: collaboration, connection, and community. If all the “Cs” are listening and working together, then a school district can thrive.

Jeanice Kerr Swift certainly makes the balance look easy and enviable in Ann Arbor, Michigan.

The Enterprise Gets Smart
Companies are starting to leverage artificial intelligence and machine learning technologies to bolster customer experience, improve security and optimize operations.

Excerpt:

Assembling the right talent is another critical component of an AI initiative. While existing enterprise software platforms that add AI capabilities will make the technology accessible to mainstream business users, there will be a need to ramp up expertise in areas like data science, analytics and even nontraditional IT competencies, says Guarini.

“As we start to see the land grab for talent, there are some real gaps in emerging roles, and those that haven’t been as critical in the past,” Guarini says, citing the need for people with expertise in disciplines like philosophy and linguistics, for example. “CIOs need to get in front of what they need in terms of capabilities and, in some cases, identify potential partners.”

Asilomar AI Principles

These principles were developed in conjunction with the 2017 Asilomar conference (videos here), through the process described here.

 

Artificial intelligence has already provided beneficial tools that are used every day by people around the world. Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead.

Research Issues

 

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Ethics and Values

 

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer-term Issues

 

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

Excerpts:
Creating human-level AI: Will it happen, and if so, when and how? What key remaining obstacles can be identified? How can we make future AI systems more robust than today’s, so that they do what we want without crashing, malfunctioning or getting hacked?

  • Talks:
    • Demis Hassabis (DeepMind)
    • Ray Kurzweil (Google) (video)
    • Yann LeCun (Facebook/NYU) (pdf) (video)
  • Panel with Anca Dragan (Berkeley), Demis Hassabis (DeepMind), Guru Banavar (IBM), Oren Etzioni (Allen Institute), Tom Gruber (Apple), Jürgen Schmidhuber (Swiss AI Lab), Yann LeCun (Facebook/NYU), Yoshua Bengio (Montreal) (video)
  • Superintelligence: Science or fiction? If human level general AI is developed, then what are likely outcomes? What can we do now to maximize the probability of a positive outcome? (video)
    • Talks:
      • Shane Legg (DeepMind)
      • Nick Bostrom (Oxford) (pdf) (video)
      • Jaan Tallinn (CSER/FLI) (pdf) (video)
    • Panel with Bart Selman (Cornell), David Chalmers (NYU), Elon Musk (Tesla, SpaceX), Jaan Tallinn (CSER/FLI), Nick Bostrom (FHI), Ray Kurzweil (Google), Stuart Russell (Berkeley), Sam Harris, Demis Hassabis (DeepMind): If we succeed in building human-level AGI, then what are likely outcomes? What would we like to happen?
    • Panel with Dario Amodei (OpenAI), Nate Soares (MIRI), Shane Legg (DeepMind), Richard Mallah (FLI), Stefano Ermon (Stanford), Viktoriya Krakovna (DeepMind/FLI): Technical research agenda: What can we do now to maximize the chances of a good outcome? (video)
  • Law, policy & ethics: How can we update legal systems, international treaties and algorithms to be more fair, ethical and efficient and to keep pace with AI?
    • Talks:
      • Matt Scherer (pdf) (video)
      • Heather Roff-Perkins (Oxford)
    • Panel with Martin Rees (CSER/Cambridge), Heather Roff-Perkins, Jason Matheny (IARPA), Steve Goose (HRW), Irakli Beridze (UNICRI), Rao Kambhampati (AAAI, ASU), Anthony Romero (ACLU): Policy & Governance (video)
    • Panel with Kate Crawford (Microsoft/MIT), Matt Scherer, Ryan Calo (U. Washington), Kent Walker (Google), Sam Altman (OpenAI): AI & Law (video)
    • Panel with Kay Firth-Butterfield (IEEE, Austin-AI), Wendell Wallach (Yale), Francesca Rossi (IBM/Padova), Huw Price (Cambridge, CFI), Margaret Boden (Sussex): AI & Ethics (video)
