The world is changing. Here’s how companies must adapt. — from weforum.org by Joe Kaeser, President and Chief Executive Officer, Siemens AG

Excerpts (emphasis DSC):

Although we have only seen the beginning, one thing is already clear: the Fourth Industrial Revolution is the greatest transformation human civilization has ever known. As far-reaching as the previous industrial revolutions were, they never set free such enormous transformative power.

The Fourth Industrial Revolution is transforming practically every human activity...its scope, speed and reach are unprecedented.

Enormous power entails enormous risk. (Insert from DSC: This is what I was trying to get at here.) Yes, the stakes are high.

 

“And make no mistake about it: we are now writing the code that will shape our collective future.” CEO of Siemens AG

 

 

Contrary to Milton Friedman’s maxim, the business of business should not just be business. Shareholder value alone should not be the yardstick. Instead, we should make stakeholder value, or better yet, social value, the benchmark for a company’s performance.

Today, stakeholders…rightfully expect companies to assume greater social responsibility, for example, by protecting the climate, fighting for social justice, aiding refugees, and training and educating workers. The business of business should be to create value for society.

This seamless integration of the virtual and the physical worlds in so-called cyber-physical systems – that is the giant leap we see today. It eclipses everything that has happened in industry so far. As in previous industrial revolutions but on a much larger scale, the Fourth Industrial Revolution will eliminate millions of jobs and create millions of new jobs.

 

“…because the Fourth Industrial Revolution runs on knowledge, we need a concurrent revolution in training and education.

If the workforce doesn’t keep up with advances in knowledge throughout their lives, how will the millions of new jobs be filled?” 

Joe Kaeser, President and Chief Executive Officer, Siemens AG

 

 


From DSC:
At least three critically important things jump out at me here:

  1. We are quickly approaching a time when people will need to be able to reinvent themselves quickly and cost-effectively, especially those with families and who are working in their (still existing) jobs. (Or have we already entered this period of time…?)
  2. There is a need to help people identify which jobs are safe to reinvent themselves for — at least for the next 5-10 years.
  3. Citizens across the globe — and their relevant legislatures, governments, and law schools — need to help close the gap between emerging technologies and whether those technologies should even be rolled out, and if so, how and with which features.

 


 

What freedoms and rights should individuals have in the digital age?

Joe Kaeser, President and Chief Executive Officer, Siemens AG

 

 

Facial recognition has to be regulated to protect the public, says AI report — from technologyreview.com by Will Knight
The research institute AI Now has identified facial recognition as a key challenge for society and policymakers—but is it too late?

Excerpt (emphasis DSC):

Artificial intelligence has made major strides in the past few years, but those rapid advances are now raising some big ethical conundrums.

Chief among them is the way machine learning can identify people’s faces in photos and video footage with great accuracy. This might let you unlock your phone with a smile, but it also means that governments and big corporations have been given a powerful new surveillance tool.

A new report from the AI Now Institute (large PDF), an influential research institute based in New York, has just identified facial recognition as a key challenge for society and policymakers.

 

Also see:

EXECUTIVE SUMMARY
At the core of the cascading scandals around AI in 2018 are questions of accountability: who is responsible when AI systems harm us? How do we understand these harms, and how do we remedy them? Where are the points of intervention, and what additional research and regulation is needed to ensure those interventions are effective? Currently there are few answers to these questions, and the frameworks presently governing AI are not capable of ensuring accountability. As the pervasiveness, complexity, and scale of these systems grow, the lack of meaningful accountability and oversight – including basic safeguards of responsibility, liability, and due process – is an increasingly urgent concern.

Building on our 2016 and 2017 reports, the AI Now 2018 Report contends with this central
problem and addresses the following key issues:

  1. The growing accountability gap in AI, which favors those who create and deploy these
    technologies at the expense of those most affected
  2. The use of AI to maximize and amplify surveillance, especially in conjunction with facial
    and affect recognition, increasing the potential for centralized control and oppression
  3. Increasing government use of automated decision systems that directly impact individuals and communities without established accountability structures
  4. Unregulated and unmonitored forms of AI experimentation on human populations
  5. The limits of technological solutions to problems of fairness, bias, and discrimination

Within each topic, we identify emerging challenges and new research, and provide recommendations regarding AI development, deployment, and regulation. We offer practical pathways informed by research so that policymakers, the public, and technologists can better understand and mitigate risks. Given that the AI Now Institute’s location and regional expertise is concentrated in the U.S., this report will focus primarily on the U.S. context, which is also where several of the world’s largest AI companies are based.

 

 

From DSC:
As I said in this posting, we need to be aware of the emerging technologies around us. Just because we can, doesn’t mean we should. People need to be aware of — and involved with — which emerging technologies get rolled out (or not) and/or which features are beneficial to roll out (or not).

One of the things that’s beginning to alarm me these days is how the United States has turned over the keys to the Maserati — i.e., an expensive, powerful machine — to youth who lack the life experiences to know how to handle such power and, often, the proper respect for it. Many of these youthful members of our society don’t own the responsibility for the positive and negative influences and impacts that such powerful technologies can have (and the more senior execs have not taken enough responsibility either)!

If you owned the car below, would you turn the keys of this ~$137,000+ car over to your 16-25 year old? Yet that’s what America has been doing for years. And, in some areas, we’re now paying the price.

 


 

The corporate world continues to discard the hard-earned experience that age brings…as it shoves older people out of the workforce. (I hesitate to use the word wisdom…but in some cases, that’s also relevant/involved here.) Then we, as a society, sit back and wonder how we got to this place.

Even technologists and programmers in their 20’s and 30’s are beginning to step back and ask…WHY did we develop this application or that feature? Was it — is it — good for society? Is it beneficial? Or should it be tabled or revised into something else?

Below is but one example — though I don’t mean to pick on Microsoft, as they likely have more older workers than the Facebooks, Googles, or Amazons of the world. I fully realize that all of these companies have some older employees. But the youth-oriented culture in America today has almost become an obsession — and not just in the tech world. Turn on the TV, check out the new releases on Netflix, go see a movie in a theater, listen to the radio, cast but a glance at the magazines in the checkout lines, etc., and you’ll instantly know what I mean.

In the workplace, there appears to be a bias against older employees as being less innovative or tech-savvy — a perception that is often completely incorrect. Go check out LinkedIn for items re: age discrimination…it’s a very real thing. And many of us over the age of 30 know this to be true if we’ve lost a job in the last decade or two and have tried to get a job that involves technology.

 

Microsoft argues facial-recognition tech could violate your rights — from finance.yahoo.com by Rob Pegoraro

Excerpt (emphasis DSC):

On Thursday, the American Civil Liberties Union provided a good reason for us to think carefully about the evolution of facial-recognition technology. In a study, the group used Amazon’s (AMZN) Rekognition service to compare portraits of members of Congress to 25,000 arrest mugshots. The result: 28 members were mistakenly matched with 28 suspects.

The ACLU isn’t the only group raising the alarm about the technology. Earlier this month, Microsoft (MSFT) president Brad Smith posted an unusual plea on the company’s blog asking that the development of facial-recognition systems not be left up to tech companies.

Saying that the tech “raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression,” Smith called for “a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.”

But we may not get new laws anytime soon.

 

just because we can does not mean we should

 

Just because we can…

 


 

 

Forecast 5.0 – The Future of Learning: Navigating the Future of Learning  — from knowledgeworks.org by Katherine Prince, Jason Swanson, and Katie King
Discover how current trends could impact learning ten years from now and consider ways to shape a future where all students can thrive.

 

 

 

AI Now Report 2018 | December 2018  — from ainowinstitute.org

Meredith Whittaker, AI Now Institute, New York University, Google Open Research
Kate Crawford, AI Now Institute, New York University, Microsoft Research
Roel Dobbe, AI Now Institute, New York University
Genevieve Fried, AI Now Institute, New York University
Elizabeth Kaziunas, AI Now Institute, New York University
Varoon Mathur, AI Now Institute, New York University
Sarah Myers West, AI Now Institute, New York University
Rashida Richardson, AI Now Institute, New York University
Jason Schultz, AI Now Institute, New York University School of Law
Oscar Schwartz, AI Now Institute, New York University

With research assistance from Alex Campolo and Gretchen Krueger (AI Now Institute, New York University)

Excerpt (emphasis DSC):

Building on our 2016 and 2017 reports, the AI Now 2018 Report contends with this central problem, and provides 10 practical recommendations that can help create accountability frameworks capable of governing these powerful technologies.

  1. Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain.
  2. Facial recognition and affect recognition need stringent regulation to protect the public interest.
  3. The AI industry urgently needs new approaches to governance. As this report demonstrates, internal governance structures at most technology companies are failing to ensure accountability for AI systems.
  4. AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector.
  5. Technology companies should provide protections for conscientious objectors, employee organizing, and ethical whistleblowers.
  6. Consumer protection agencies should apply “truth-in-advertising” laws to AI products and services.
  7. Technology companies must go beyond the “pipeline model” and commit to addressing the practices of exclusion and discrimination in their workplaces.
  8. Fairness, accountability, and transparency in AI require a detailed account of the “full stack supply chain.”
  9. More funding and support are needed for litigation, labor organizing, and community participation on AI accountability issues.
  10. University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.

 

Also see:

After a Year of Tech Scandals, Our 10 Recommendations for AI — from medium.com by the AI Now Institute
Let’s begin with better regulation, protecting workers, and applying “truth in advertising” rules to AI

 

Also see:

Excerpt:

As we discussed, this technology brings important and even exciting societal benefits but also the potential for abuse. We noted the need for broader study and discussion of these issues. In the ensuing months, we’ve been pursuing these issues further, talking with technologists, companies, civil society groups, academics and public officials around the world. We’ve learned more and tested new ideas. Based on this work, we believe it’s important to move beyond study and discussion. The time for action has arrived.

We believe it’s important for governments in 2019 to start adopting laws to regulate this technology. The facial recognition genie, so to speak, is just emerging from the bottle. Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up.

In particular, we don’t believe that the world will be best served by a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success. We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition. And a solid floor requires that we ensure that this technology, and the organizations that develop and use it, are governed by the rule of law.

 

From DSC:
This is a major heads up to the American Bar Association (ABA), law schools, governments, legislatures around the country, the courts, the corporate world, as well as for colleges, universities, and community colleges. The pace of emerging technologies is much faster than society’s ability to deal with them! 

The ABA and law schools need to majorly pick up their pace — for the benefit of all within our society.

 

 

 
 

10 predictions for tech in 2019 — from enterprisersproject.com by Carla Rudder
IT leaders look at the road ahead and predict what’s next for containers, security, blockchain, and more

Excerpts:

We asked IT leaders and tech experts what they see on the horizon for the future of technology. We intentionally left the question open-ended, and as a result, the answers represent a broad range of what IT professionals may expect to face in the new year. Let’s dig in…

3. Security becomes must-have developer skill.
Developers who have job interviews next year will see a new question added to the usual list.

5. Ethics take center stage with tech talent
Robert Reeves, CTO and co-founder, Datical: “More companies (prompted by their employees) will become increasingly concerned about the ethics of their technology. Microsoft is raising concerns about the dangers of facial recognition technology; Google employees are very concerned about their AI products being used by the Department of Defense. The economy is good for tech right now and the job market is becoming tighter. Thus, I expect those companies to take their employees’ concerns very seriously. Of course, all bets are off when (not if) we dip into a recession. But, for 2019, be prepared for more employees of tech giants to raise ethical concerns and for those concerns to be taken seriously and addressed.”

7. Customers expect instant satisfaction
“All customers will be the customer of ‘now,’ with expectations of immediate and personalized service; single-click approval for loans, sales quotes on the spot, and deliveries in hours instead of days. The window of opportunity for customer satisfaction will keep closing and technology will evolve to keep pace. Real-time analytics will become faster and smarter as data that is external to the organization, such as social, news and weather, will be included for more insights. The move to the cloud will accelerate with the growing adoption of open-source vendors.”

 

From DSC:
Regarding #7 above…as the years progress, how do you suppose this type of environment where people expect instant satisfaction and personalized service will impact education/training?

 

 

 

The Insecurity of Things: a Brief History of US IoT Cybersecurity Legislation (Part 2) — from dzone.com by Cate Lawrence
Check out these national attempts to legislate IoT security.

Excerpt:

There’s been a number of efforts over the last few years to legislate or provide a legal response to matters of cybersecurity. Part 1 of this article takes a look at recent efforts by California. This article examines the national attempts to legislate these poorly secured connected devices.

 

Global IoT technology market to reach $318 billion by 2023, says GlobalData — from which-50.com

Excerpt:

The global market for Internet of Things (IoT) technology, which consists of software, services, connectivity, and devices, reached $130 billion in 2018, and is projected to reach $318 billion by 2023, at a compound annual growth rate (CAGR) of 20 per cent, according to GlobalData.

GlobalData forecasts show that solutions for government, utilities and manufacturing dominate the market, with a total of 58 per cent of the opportunity in 2018 and a slightly smaller 55 per cent of the market in 2023, as others such as travel and leisure and retail grow their respective shares. Energy and transportation are other major verticals, with a combined 15 per cent of the market in both 2018 and 2023.
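From DSC: a quick sanity check on those headline numbers, as a minimal Python sketch. The dollar figures come straight from the excerpt above; the arithmetic is just the standard compound-annual-growth-rate formula.

```python
# Sanity-check the GlobalData forecast: $130B (2018) -> $318B (2023).
# CAGR = (end / start) ** (1 / years) - 1

start, end, years = 130e9, 318e9, 5

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # comes out near 20%, matching the article

# Conversely, growing $130B at exactly 20% per year for 5 years:
projected = start * 1.20 ** years
print(f"$130B at 20% for 5 years: ${projected / 1e9:.0f}B")  # about $323B
```

So the quoted 20 per cent CAGR and the $318 billion figure are consistent with each other, give or take rounding.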

 

Also see:

  • As digital technology pervades the utility industry so too does the risk of cyber attacks — from which-50.com by Joseph Brookes
    Excerpt:
    Smart meters and IoT have the potential to optimise performance and maintenance of the billions of dollars worth of infrastructure in Australian utilities. But each new device creates a potential access point to systems that are not designed with cyber security in mind and, in some cases, are already exposed.
 

Elon Musk receives FCC approval to launch over 7,500 satellites into space — from digitaltrends.com by Kelly Hodgkins

A new satellite-based network would cover the entire globe -- is that a good thing?

Excerpt (emphasis DSC):

The FCC this week unanimously approved SpaceX’s ambitious plan to launch 7,518 satellites into low-Earth orbit. These satellites, along with 4,425 previously approved satellites, will serve as the backbone for the company’s proposed Starlink broadband network. As it does with most of its projects, SpaceX is thinking big with its global broadband network. The company is expected to spend more than $10 billion to build and launch a constellation of satellites that will provide high-speed internet coverage to just about every corner of the planet.

 

To put this deployment in perspective, there are only 1,886 active satellites presently in orbit. These new SpaceX satellites will increase the number of active satellites six-fold in less than a decade.

 

 

New simulation shows how Elon Musk’s internet satellite network might work — from digitaltrends.com by Luke Dormehl

Excerpt:

From Tesla to Hyperloop to plans to colonize Mars, it’s fair to say that Elon Musk thinks big. Among his many visionary ideas is the dream of building a space internet. Called Starlink, Musk’s ambition is to create a network for conveying a significant portion of internet traffic via thousands of satellites Musk hopes to have in orbit by the mid-2020s. But just how feasible is such a plan? And how do you avoid them crashing into one another?

 



 

From DSC:
Is this even the FCC’s call to make?

On the one hand, such a network could be globally helpful, positive, and full of pros. But on the other hand, I wonder…what are the potential drawbacks of this proposal? Will nations across the globe launch their own networks — each consisting of thousands of satellites?

While I love Elon’s big thinking, the nations need to weigh in on this one.

 

 


LinkedIn Learning Opens Its Platform (Slightly) — from edsurge by Jeff Young

Excerpt (emphasis DSC):

A few years ago, in a move toward professional learning, LinkedIn bought Lynda.com for $1.5 billion, adding the well-known library of video-based courses to its professional social network. Today LinkedIn officials announced that they plan to open up their platform to let in educational videos from other providers as well—but with a catch or two.

The plan, announced Friday, is to let companies or colleges who already subscribe to LinkedIn Learning add content from a select group of other providers. The company or college will still have to subscribe to those other services separately, so it’s essentially an integration—but it does mark a change in approach.

For LinkedIn, the goal is to become the front door for employees as they look for micro-courses for professional development.

 

LinkedIn also announced another service for its LinkedIn Learning platform called Q&A, which will give subscribers the ability to pose a question they have about the video lessons they’re taking. The question will first be sent to bots, but if that doesn’t yield an answer the query will be sent on to other learners, and in some cases the instructor who created the videos.

 

 

Also see:

LinkedIn becomes a serious open learning experience platform — from clomedia.com by Josh Bersin
LinkedIn is becoming a dominant learning solution with some pretty interesting competitive advantages, according to one learning analyst.

Excerpt:

LinkedIn has become quite a juggernaut in the corporate learning market. Last time I checked the company had more than 17 million users, 14,000 corporate customers, more than 3,000 courses and was growing at high double-digit rates. And all this in only about two years.

And the company just threw down the gauntlet; it’s now announcing it has completely opened up its learning platform to external content partners. This is the company’s formal announcement that LinkedIn Learning is not just an amazing array of content, it is a corporate learning platform. The company wants to become a single place for all organizational learning content.

 

LinkedIn now offers skills-based learning recommendations to any user through its machine learning algorithms. 

 

 



Is there demand for staying relevant? For learning new skills? For reinventing oneself?

Well…let’s see.

 

 

 

 

 

 



From DSC:
So…look out, higher ed and traditional forms of accreditation — your window of opportunity may be starting to close. Alternatives to traditional higher ed continue to appear on the scene and gain momentum. LinkedIn — and/or similar organizations in the future — along with blockchain- and big-data-backed efforts may gain traction and start taking away some major market share. If employers get solid performance from their employees who have gone this route…higher ed had better look out. 

Microsoft/LinkedIn/Lynda.com is nicely positioned to be a major player that can offer society a next-generation learning platform at an incredible price — offering up-to-date microlearning along with new forms of credentialing. It’s what I’ve been calling the Amazon.com of higher ed (previously the Walmart of Education) for ~10 years. It will take place in a strategy/platform similar to this one.

 



Also, this is what a gorilla on the back looks like:

 


 



Also see:

  • Meet the 83-Year-Old App Developer Who Says Edtech Should Better Support Seniors — from edsurge.com by Sydney Johnson
    Excerpt (emphasis DSC):
    Now at age 83, Wakamiya beams with excitement when she recounts her journey, which has been featured in news outlets and even at Apple’s developer conference last year. But through learning how to code, she believes that experience offers an even more important lesson to today’s education and technology companies: don’t forget about senior citizens. Today’s education technology products overwhelmingly target young people. And while there’s a growing industry around serving adult learners in higher education, companies largely neglect to consider the needs of the elderly.

 

 
 

Introducing several new ideas to provide personalized, customized learning experiences for all kinds of learners! [Christian]

From DSC:
I have often reflected on differentiation or what some call personalized learning and/or customized learning. How does a busy teacher, instructor, professor, or trainer achieve this, realistically?

It’s very difficult and time-consuming to do for sure. But it also requires a team of specialists to achieve such a holy grail of learning — as one person can’t know it all. That is, one educator doesn’t have the necessary time, skills, or knowledge to address so many different learning needs and levels!

  • Think of different cognitive capabilities — from students that have special learning needs and challenges to gifted students
  • Or learners that have different physical capabilities or restrictions
  • Or learners that have different backgrounds and/or levels of prior knowledge
  • Etc., etc., etc.

Educators and trainers have so many things on their plates that it’s very difficult to come up with _X_ lesson plans/agendas/personalized approaches, etc. On the other side of the table, how do students from a vast array of backgrounds and cognitive skill levels get the main points of a chapter or piece of text? How can they self-select the level of difficulty and/or start at a “basics” level and work their way up to harder/more detailed levels if they can cognitively handle that level of detail/complexity? Conversely, how do I as a learner get the boiled-down version of a piece of text?

Well… just as with the flipped classroom approach, I’d like to suggest that we flip things a bit and enlist teams of specialists at the publishers to fulfill this need. Move things to the content creation end — not so much at the delivery end of things. Publishers’ teams could play a significant, hugely helpful role in providing customized learning to learners.

Some of the ways that this could happen:

Use an HTML like language when writing a textbook, such as:

<MainPoint> The text for the main point here. </MainPoint>

<SubPoint1>The text for the subpoint 1 here.</SubPoint1>

<DetailsSubPoint1>More detailed information for subpoint 1 here.</DetailsSubPoint1>

<SubPoint2>The text for the subpoint 2 here.</SubPoint2>

<DetailsSubPoint2>More detailed information for subpoint 2 here.</DetailsSubPoint2>

<SubPoint3>The text for the subpoint 3 here.</SubPoint3>

<DetailsSubPoint3>More detailed information for subpoint 3 here.</DetailsSubPoint3>

<SummaryOfMainPoints>A list of the main points that a learner should walk away with.</SummaryOfMainPoints>

<BasicsOfMainPoints>Here is a listing of the main points, but put in alternative words and more basic ways of expressing those main points. </BasicsOfMainPoints>

<Conclusion> The text for the concluding comments here.</Conclusion>

 

<BasicsOfMainPoints> could be called <AlternativeExplanations>
Bottom line: This tag would be to put things forth using very straightforward terms.

Another tag would be to address how this topic/chapter is relevant:
<RealWorldApplication>This short paragraph should illustrate real-world examples of this particular topic. Why does this topic matter? How is it relevant?</RealWorldApplication>

 

On the students’ end, they could use an app that works with such tags to allow a learner to quickly see/review the different layers. That is:

  • Show me just the main points
  • Then add on the sub points
  • Then fill in the details
    OR
  • Just give me the basics via alternative ways of expressing these things. I won’t remember all the details. Put things using easy-to-understand wording/ideas.
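From DSC: to make the idea above concrete, here is a minimal Python sketch of how a reader app might filter such tagged content by layer. The tag names come from the proposal above; the sample chapter, the parsing approach, and the layer groupings are illustrative assumptions on my part, not a real publishing format.

```python
import re

# A miniature "tagged chapter" using the tag vocabulary proposed above.
chapter = """
<MainPoint>Tennis strokes fall into two families.</MainPoint>
<SubPoint1>The forehand is hit with the palm leading.</SubPoint1>
<DetailsSubPoint1>Common grips include eastern and semi-western.</DetailsSubPoint1>
<SubPoint2>The backhand is hit with the knuckles leading.</SubPoint2>
<DetailsSubPoint2>It can be one-handed or two-handed.</DetailsSubPoint2>
<BasicsOfMainPoints>Tennis is a sport; a sport is something people play.</BasicsOfMainPoints>
"""

# Which tags are visible at each layer of detail (an assumed grouping).
LAYERS = {
    "main":    ["MainPoint"],
    "sub":     ["MainPoint", "SubPoint1", "SubPoint2", "SubPoint3"],
    "details": ["MainPoint", "SubPoint1", "DetailsSubPoint1",
                "SubPoint2", "DetailsSubPoint2",
                "SubPoint3", "DetailsSubPoint3"],
    "basics":  ["BasicsOfMainPoints"],
}

def render(text, layer):
    """Return only the passages whose tags are visible at the given layer."""
    wanted = LAYERS[layer]
    # Find every <Tag>...</Tag> pair, in document order.
    pairs = re.findall(r"<(\w+)>(.*?)</\1>", text, flags=re.DOTALL)
    return [body.strip() for tag, body in pairs if tag in wanted]

print(render(chapter, "main"))    # just the main point
print(render(chapter, "basics"))  # the reworded, simplest version
```

A learner could then tap between "main," "sub," "details," and "basics" views of the same chapter, which is exactly the collapse/expand behavior described above.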

 

It’s like the layers of a Microsoft HoloLens app of the human anatomy:

 

Or it’s like different layers of a chapter of a “textbook” — so a learner could quickly collapse/expand the text as needed:

 

This approach could be helpful at all kinds of learning levels. For example, it could be very helpful for law school students to obtain outlines for cases or for chapters of information. Similarly, it could be helpful for dental or medical school students to get the main points as well as detailed information.

Also, as Artificial Intelligence (AI) grows, the system could check a learner’s cloud-based learner profile to see their reading level or prior knowledge, any IEPs on file, their learning preferences (audio, video, animations, etc.), etc., to further provide a personalized/customized learning experience.
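From DSC: a rough sketch of how such a profile check might pick a sensible default layer for a learner. The profile fields and thresholds below are hypothetical illustrations; a real cloud-based learner profile (and the AI sitting on top of it) would be far richer.

```python
# Map a (hypothetical) learner profile to a default content layer.
# Field names and thresholds are illustrative assumptions, not a real schema.

def pick_layer(profile):
    """Choose a default layer of detail for this learner."""
    if profile.get("needs_simplified_text"):      # e.g., flagged in an IEP
        return "basics"
    if profile.get("prior_knowledge", 0) >= 0.7:  # on an assumed 0..1 scale
        return "details"                          # ready for full depth
    return "sub"                                  # main points plus subpoints

novice = {"prior_knowledge": 0.2}
expert = {"prior_knowledge": 0.9}
iep    = {"needs_simplified_text": True}

print(pick_layer(novice))  # sub
print(pick_layer(expert))  # details
print(pick_layer(iep))     # basics
```

The learner would still be free to override the default and move up or down the layers; the profile check just saves them the first few taps.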

To recap:

  • “Textbooks” continue to be created by teams of specialists, but add specialists with knowledge of students with special needs as well as of gifted students. For example, a team could have experts within the field of Special Education to help create one of the overlays/filters/lenses — i.e., to reword things. If the text was talking about how to hit a backhand or a forehand, the alternative text layer could be summed up to say that tennis is a sport…and that a sport is something people play. On the other end of the spectrum, the text could dive deeply into the various grips a person could use to hit a forehand or backhand.
  • This puts the power of offering differentiation at the point of content creation/development (differentiation could also be provided for at the delivery end, but again, time and expertise are likely not going to be there)
  • Publishers create “overlays” or various layers that can be turned on or off by the learners
  • Can see whole chapters or can see main ideas, topic sentences, and/or details. Like HTML tags for web pages.
  • Can instantly collapse chapters to main ideas/outlines.

 

 

The global companies that failed to adapt to change. — from trainingmag.com by Professor M.S. Rao, Ph.D.

Excerpt:

Eastman Kodak, a leader for many years, filed for bankruptcy in 2012. Blockbuster Video became defunct in 2013. Similarly, Borders — one of the largest book retailers in the U.S. — went out of business in 2011. Why did these companies, which once had great brands, ultimately fail? It is because they failed to adapt to change. Additionally, they failed to unlearn and relearn.

Former GE CEO Jack Welch once remarked, “If the rate of change on the outside exceeds the rate of change on the inside, the end is near.” Thus, accept change before the change is thrust on you.

Leaders must adopt tools and techniques to adapt to change. Here is a blueprint to embrace change effectively:

  • Keep the vision right and straight, and articulate it effectively.
  • Create an organizational culture conducive to bringing about change.
  • Communicate clearly about the need to change.
  • Enlighten people about the implications of the status quo.
  • Show them benefits once the change is implemented.
  • Coordinate all stakeholders effectively.
  • Remove the roadblocks by allaying their apprehensions.
  • Show them small gains to ensure that entire change takes place smoothly without any resistance.

 

From DSC:
Though I’m not on board with all of the perspectives in that article, traditional institutions of higher education likely have something to learn from the failures of these companies…while there’s still time to change and to innovate.

 

 

Robots won’t replace instructors, 2 Penn State educators argue. Instead, they’ll help them be ‘more human.’ — from edsurge.com by Tina Nazerian

Excerpt:

Specifically, it [AI] will help them prepare for and teach their courses through several phases—ideation, design, assessment, facilitation, reflection and research. The two described a few prototypes they’ve built to show what that might look like.

 

Also see:

The future of education: Online, free, and with AI teachers? — from fool.com by Simon Erickson
Duolingo is using artificial intelligence to teach 300 million people a foreign language for free. Will this be the future of education?

Excerpts:

While it might not get a lot of investor attention, education is actually one of America’s largest markets.

The U.S. has 20 million undergraduates enrolled in colleges and universities right now and another 3 million enrolled in graduate programs. Those undergrads paid an average of $17,237 for tuition, room, and board at public institutions in the 2016-17 school year and $44,551 for private institutions. Graduate education varies widely by area of focus, but the average amount paid for tuition alone was $24,812 last year.

Add all of those up, and America’s students are paying more than half a trillion dollars each year for their education! And that doesn’t even include the interest amassed for student loans, the college-branded merchandise, or all the money spent on beer and coffee.
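The “more than half a trillion dollars” claim can be roughly checked with the enrollment and price figures quoted above. The public/private split used below is an assumption for illustration; the article does not state one.

```python
# Back-of-envelope check of the "half a trillion dollars" claim, using the
# figures quoted above. The ~73% public / 27% private undergraduate split
# is an assumed value for illustration only.

undergrads = 20_000_000
grads = 3_000_000
public_cost = 17_237     # avg tuition, room, and board (public, 2016-17)
private_cost = 44_551    # avg tuition, room, and board (private, 2016-17)
grad_tuition = 24_812    # avg graduate tuition alone

public_share = 0.73      # assumed enrollment split
undergrad_total = undergrads * (public_share * public_cost +
                                (1 - public_share) * private_cost)
grad_total = grads * grad_tuition

total = undergrad_total + grad_total
print(f"${total / 1e12:.2f} trillion")
```

Even with most students at the lower public rate, undergraduate spending alone lands in the hundreds of billions, and adding graduate tuition pushes the total past the half-trillion mark.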

Keeping the costs down
Several companies are trying to find ways to make college more affordable and accessible.

 

But after we launched, we had so many users that nowadays, if the system wants to figure out whether it should teach plurals before adjectives or adjectives before plurals, it just runs a test with about 50,000 people. For the next 50,000 people who sign up (it takes about six hours for 50,000 new users to come to Duolingo), it teaches half of them plurals before adjectives and the other half adjectives before plurals. Then it measures which group learns better, and it can figure out once and for all that, for this particular language, it should teach plurals before adjectives, for example.

So every week the system is improving. It’s making itself better at teaching by learning from our learners, based on huge amounts of data. And this, I think, is why it’s become so successful at teaching and why we have so many users.
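The curriculum experiment described above is a classic A/B test: split new signups into two arms, teach each arm a different lesson ordering, and compare a learning-outcome metric. The sketch below simulates that loop; the function names, the 50/50 assignment rule, and the quiz-score model are all hypothetical, since Duolingo’s actual pipeline is not public.

```python
# A sketch of the curriculum A/B test described above: split ~50,000 new
# users into two arms (plurals-first vs. adjectives-first) and compare
# mean learning outcomes. The scoring model is simulated for illustration.

import random
from statistics import mean

def assign_arm(user_id: int) -> str:
    # Deterministic 50/50 split, like hashing new signups into two cohorts.
    return "plurals_first" if user_id % 2 == 0 else "adjectives_first"

def simulate_quiz_score(arm: str, rng: random.Random) -> float:
    # Stand-in for a real outcome metric (e.g. later quiz accuracy);
    # the effect size here is assumed, purely to make the demo concrete.
    base = 0.72 if arm == "plurals_first" else 0.68
    return min(1.0, max(0.0, rng.gauss(base, 0.1)))

rng = random.Random(42)
scores = {"plurals_first": [], "adjectives_first": []}
for user_id in range(50_000):
    arm = assign_arm(user_id)
    scores[arm].append(simulate_quiz_score(arm, rng))

winner = max(scores, key=lambda a: mean(scores[a]))
print(winner)
```

With 25,000 users per arm, even a small difference in mean outcome is detectable, which is why the system can settle the ordering question “once and for all” after a single six-hour cohort.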

 

 

From DSC:
I see AI helping learners, instructors, teachers, and trainers. I see AI being a tool to help do some of the heavy lifting, but people still like to learn with other people…with actual human beings. That said, a next generation learning platform could be far more responsive than what today’s traditional institutions of higher education are delivering.

 

 

New Virtual 3D Microscope Lab Program Offered for Online Students by Oregon State University — from virtuallyinspired.org
OSU solves degree completion issue for online biology students

Excerpt:

“We had to create an alternative that gives students the foundational experience of being in a lab where they can maneuver a microscope’s settings and adjust the images just as they would in a face-to-face environment,” said Shannon Riggs, the Ecampus director of course development and training.

Multimedia developers mounted a camera on top of an actual microscope and took pictures of what was on the slides. Using 3D modeling software, the photos were interwoven to create 3D animation. Game development software enabled students to adjust lighting and zoom and to manipulate the images, just as in a traditional laboratory. The images were programmed to create a virtual simulation.

The final product is “an interactive web application that utilizes a custom 3D microscope and incorporates animation and real-life slide photos,” according to Victor Yee, an Ecampus assistant director of course development and training.

 

Also see:

  • YouTube to Invest $20 Million in Educational Content — from campustechnology.com by Dian Schaffhauser
    Excerpt:
    YouTube, a Google company, has announced plans to invest $20 million in YouTube Learning, an initiative hinted at during the summer. The goal: “to support education-focused creators and expert organizations that create and curate high-quality learning content on the video site.” Funding will be spent on supporting video creators who want to produce education series and wooing other education video providers to the site.

 

 
© 2025 | Daniel Christian