The five most important new jobs in AI, according to KPMG — from qz.com by Cassie Werber

Excerpt:

Perhaps as a counter to the panic that artificial intelligence will destroy jobs, consulting firm KPMG published a list (on 1/8/19) of what it predicts will soon become the five most sought-after AI roles. The predictions are based on the company’s own projects and those on which it advises. They are:

  • AI Architect – Responsible for working out where AI can help a business, measuring performance and—crucially— “sustaining the AI model over time.” Lack of architects “is a big reason why companies cannot successfully sustain AI initiatives,” KPMG notes.
  • AI Product Manager – Liaises between teams, making sure ideas can be implemented, especially at scale. Works closely with architects, and with human resources departments to make sure humans and machines can all work effectively.
  • Data Scientist – Manages the huge amounts of available data and designs algorithms to make it meaningful.
  • AI Technology Software Engineer – “One of the biggest problems facing businesses is getting AI from pilot phase to scalable deployment,” KPMG writes. Software engineers need to be able both to build scalable technology and understand how AI actually works.
  • AI Ethicist – AI presents a host of ethical challenges which will continue to unfold as the technology develops. Creating guidelines and ensuring they’re upheld will increasingly become a full-time job.

 

While it’s all very well to list the jobs people should be training and hiring for, it’s another matter to actually create a pipeline of people ready to enter those roles. Brad Fisher, KPMG’s US lead on data and analytics and the lead author of the predictions, tells Quartz there aren’t enough people getting ready for these roles.

 

Fisher has a steer for those who are eyeing AI jobs but have yet to choose an academic path: business process skills can be “trained,” he said, but “there is no substitute for the deep technical skillsets, such as mathematics, econometrics, or computer science, which would prepare someone to be a data scientist or a big-data software engineer.”

 

From DSC:
I don’t think institutions of higher education (as well as several other types of institutions in our society) are recognizing that the pace of technological change has accelerated, and that those changes have significant ramifications for society. And if these institutions have picked up on it, you can hardly tell. We simply aren’t used to this pace of change.

Technologies change quickly. People change slowly. And, by the way, that is not a comment on how old someone is…change is hard at almost any age.

Presentation Translator for PowerPoint — from Microsoft (emphasis below from DSC:)

Presentation Translator breaks down the language barrier by allowing users to offer live, subtitled presentations straight from PowerPoint. As you speak, the add-in, powered by the Microsoft Translator live feature, allows you to display subtitles directly on your PowerPoint presentation in any one of more than 60 supported text languages. This feature can also be used for audiences who are deaf or hard of hearing.

 

Additionally, up to 100 audience members in the room can follow along with the presentation in their own language, including the speaker’s language, on their phone, tablet or computer.

 

From DSC:
Up to 100 audience members in the room can follow along with the presentation in their own language! Wow!

Are you thinking what I’m thinking?! If this could also address learners and/or employees outside the room as well, this could be an incredibly powerful piece of a next generation, global learning platform! 

Automatic translation with subtitles — per the learner’s or employee’s primary language setting as established in their cloud-based learner profile. Though this posting is not about blockchain, the idea of a cloud-based learner profile reminds me of the following graphic I created in January 2017.
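A minimal sketch of that profile-driven subtitle idea, assuming a cloud-based learner profile that stores a primary-language setting. Every name here (the profile fields, the supported-language set, the function) is hypothetical and invented purely for illustration; no such API exists in this form:

```python
# Hypothetical sketch: route live captions per viewer based on a
# cloud-based learner profile's primary-language setting.
# All names and fields below are invented for illustration.
from dataclasses import dataclass

@dataclass
class LearnerProfile:
    learner_id: str
    primary_language: str  # e.g. "es" for Spanish, per the profile setting

def subtitle_language(profile, supported_languages, fallback="en"):
    """Choose the subtitle track for a viewer: the profile's primary
    language when the translation service supports it, else a fallback."""
    if profile.primary_language in supported_languages:
        return profile.primary_language
    return fallback

# The speaker presents once; each in-room or remote viewer gets captions
# keyed off their own stored preference.
supported = {"en", "es", "fr", "de", "zh", "ar"}
print(subtitle_language(LearnerProfile("u1", "es"), supported))  # es
print(subtitle_language(LearnerProfile("u2", "sw"), supported))  # en
```

The design point is simply that the language choice lives with the learner, not with the presenter, which is what would let the same stream serve viewers inside and outside the room.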

A couple of relevant quotes here:

A number of players and factors are changing the field. Georgia Institute of Technology calls it “at-scale” learning; others call it the “mega-university” — whatever you call it, this is the advent of the very large, 100,000-plus-student-scale online provider. Coursera, edX, Udacity and FutureLearn (U.K.) are among the largest providers. But individual universities such as Southern New Hampshire, Arizona State and Georgia Tech are approaching the “at-scale” mark as well. One could say that’s evidence of success in online learning. And without question it is.

But, with highly reputable programs at this scale and tuition rates at half or below the going rate for regional and state universities, the impact is rippling through higher ed. Georgia Tech’s top 10-ranked computer science master’s with a total expense of less than $10,000 has drawn more than 10,000 qualified majors. That has an impact on the enrollment at scores of online computer science master’s programs offered elsewhere. The overall online enrollment is up, but it is disproportionately centered in affordable scaled programs, draining students from the more expensive, smaller programs at individual universities. The dominoes fall as more and more high-quality at-scale programs proliferate.

— Ray Schroeder

 

 

Education goes omnichannel. In today’s connected world, consumers expect to have anything they want available at their fingertips, and education is no different. Workers expect to be able to learn on-demand, getting the skills and knowledge they need in that moment, to be able to apply it as soon as possible. Moving fluidly between working and learning, without having to take time off to go to – or back to – school will become non-negotiable.

Anant Agarwal

 

From DSC:
Is there major change/disruption ahead? Could be…for many, it can’t come soon enough.

 

 

Cut the curriculum — from willrichardson.com by Will Richardson

Excerpt:

Here’s an idea: A Minimal Viable Curriculum (MVC). That’s what Christian Talbot over at Basecamp is proposing, and I have to say, I love the idea.

He writes: “What if we were to design MVCs: Minimum Viable Curricula centered on just enough content to empower learners to examine questions or pursue challenges with rigor? Then, as learners go deeper into a question or challenge, they update their MVC…which is pretty much how learning happens in the real world.”

The key there to me is that THEY update their MVC. That resonates so deeply; it feels like that’s what I’m doing with my learning each day as I read about and work with school leaders who are thinking deeply about change.

 

When we pursue questions that matter to us, rigor is baked in.

 

From DSC:
I love the idea of giving students — as they can handle it — more choice, more control. So anytime around 8th-12th grade, I say we turn much more control over to the students, and let them make more choices on what they want to learn about. We should at least try some experiments along these lines.

 

 

As everyone in the workforce is now required to be a lifelong learner, our quality of life goes much higher if we actually enjoy learning. As I think about it, I have often heard an adult (especially middle age and older) say something like, “I hated school, but now, I love to learn.”

Plus, I can easily imagine greater engagement with the materials that students choose for themselves, as well as increased attention spans and higher motivation levels.

Also, here’s a major shout out to Will Richardson, Bruce Dixon, Missy Emler and Lyn Hilt for the work they are doing at ModernLearners.com.

 

Check out the work over at Modern Learners dot com

 

 

Google, Facebook, and the Legal Mess Over Face Scanning — finance.yahoo.com by Jeff John Roberts

Excerpt:

When must companies receive permission to use biometric data like your fingerprints or your face? The question is a hot topic in Illinois where a controversial law has ensnared tech giants Facebook and Google, potentially exposing them to billions of dollars in liability over their facial recognition tools.

The lack of specific guidance from the Supreme Court has since produced ongoing confusion over what type of privacy violations can let people seek financial damages.

 

Also see:

 

 

From DSC:
The legal and legislative areas need to close the gap between emerging technologies and the law.

What questions should we be asking about the skillsets that our current and future legislative representatives need? Do we need some of our representatives to be highly knowledgeable, technically speaking? 

What programs and other types of resources should we be offering our representatives to get up to speed on emerging technologies? Which blogs, websites, journals, e-newsletters, listservs, and/or other communication vehicles and/or resources should they have access to?

Along these lines, what about our judges? Can we offer them some of these resources as well? 

What changes do our law schools need to make to address this?

 

 

 

 

Top six AI and automation trends for 2019 — from forbes.com by Daniel Newman

Excerpt:

If your company hasn’t yet created a plan for AI and automation throughout your enterprise, you have some work to do. Experts believe AI will add nearly $16 trillion to the global economy by 2030, and 20% of companies surveyed are already planning to incorporate AI throughout their companies next year. As 2018 winds down, now is the time to take a look at some trends and predictions for AI and automation that I believe will dominate the headlines in 2019—and to think about how you may incorporate them into your own company.

 

Also see — and an insert here from DSC:

Kai-Fu has a rosier picture than I do regarding how humanity will be impacted by AI. One simply needs to check out today’s news to see that humans have a very hard time creating unity, thinking about why businesses exist in the first place, and being kind to one another…

How AI can save our humanity

The world is changing. Here’s how companies must adapt. — from weforum.org by Joe Kaeser, President and Chief Executive Officer, Siemens AG

Excerpts (emphasis DSC):

Although we have only seen the beginning, one thing is already clear: the Fourth Industrial Revolution is the greatest transformation human civilization has ever known. As far-reaching as the previous industrial revolutions were, they never set free such enormous transformative power.

The Fourth Industrial Revolution is transforming practically every human activity...its scope, speed and reach are unprecedented.

Enormous power (Insert from DSC: What I was trying to get at here) entails enormous risk. Yes, the stakes are high. 

 

“And make no mistake about it: we are now writing the code that will shape our collective future.” CEO of Siemens AG

 

 

Contrary to Milton Friedman’s maxim, the business of business should not just be business. Shareholder value alone should not be the yardstick. Instead, we should make stakeholder value, or better yet, social value, the benchmark for a company’s performance.

Today, stakeholders…rightfully expect companies to assume greater social responsibility, for example, by protecting the climate, fighting for social justice, aiding refugees, and training and educating workers. The business of business should be to create value for society.

This seamless integration of the virtual and the physical worlds in so-called cyber-physical systems – that is the giant leap we see today. It eclipses everything that has happened in industry so far. As in previous industrial revolutions but on a much larger scale, the Fourth Industrial Revolution will eliminate millions of jobs and create millions of new jobs.

 

“…because the Fourth Industrial Revolution runs on knowledge, we need a concurrent revolution in training and education.

If the workforce doesn’t keep up with advances in knowledge throughout their lives, how will the millions of new jobs be filled?” 

Joe Kaeser, President and Chief Executive Officer, Siemens AG

 

 


From DSC:
At least three critically important things jump out at me here:

  1. We are quickly approaching a time when people will need to be able to reinvent themselves quickly and cost-effectively, especially those with families who are still working in their (still existing) jobs. (Or have we already entered this period of time…?)
  2. There is a need to help people identify which jobs are safe to move into — at least for the next 5-10 years.
  3. Citizens across the globe — and their relevant legislatures, governments, and law schools — need to help close the gap between emerging technologies and the law: whether those technologies should even be rolled out, and if so, how and with which features.

 


 

What freedoms and rights should individuals have in the digital age?

Joe Kaeser, President and Chief Executive Officer, Siemens AG

 

 

Facial recognition has to be regulated to protect the public, says AI report — from technologyreview.com by Will Knight
The research institute AI Now has identified facial recognition as a key challenge for society and policymakers—but is it too late?

Excerpt (emphasis DSC):

Artificial intelligence has made major strides in the past few years, but those rapid advances are now raising some big ethical conundrums.

Chief among them is the way machine learning can identify people’s faces in photos and video footage with great accuracy. This might let you unlock your phone with a smile, but it also means that governments and big corporations have been given a powerful new surveillance tool.

A new report from the AI Now Institute (large PDF), an influential research institute based in New York, has just identified facial recognition as a key challenge for society and policymakers.

 

Also see:

EXECUTIVE SUMMARY
At the core of the cascading scandals around AI in 2018 are questions of accountability: who is responsible when AI systems harm us? How do we understand these harms, and how do we remedy them? Where are the points of intervention, and what additional research and regulation is needed to ensure those interventions are effective? Currently there are few answers to these questions, and the frameworks presently governing AI are not capable of ensuring accountability. As the pervasiveness, complexity, and scale of these systems grow, the lack of meaningful accountability and oversight – including basic safeguards of responsibility, liability, and due process – is an increasingly urgent concern.

Building on our 2016 and 2017 reports, the AI Now 2018 Report contends with this central problem and addresses the following key issues:

  1. The growing accountability gap in AI, which favors those who create and deploy these technologies at the expense of those most affected
  2. The use of AI to maximize and amplify surveillance, especially in conjunction with facial and affect recognition, increasing the potential for centralized control and oppression
  3. Increasing government use of automated decision systems that directly impact individuals and communities without established accountability structures
  4. Unregulated and unmonitored forms of AI experimentation on human populations
  5. The limits of technological solutions to problems of fairness, bias, and discrimination

Within each topic, we identify emerging challenges and new research, and provide recommendations regarding AI development, deployment, and regulation. We offer practical pathways informed by research so that policymakers, the public, and technologists can better understand and mitigate risks. Given that the AI Now Institute’s location and regional expertise is concentrated in the U.S., this report will focus primarily on the U.S. context, which is also where several of the world’s largest AI companies are based.

 

 

From DSC:
As I said in this posting, we need to be aware of the emerging technologies around us. Just because we can, doesn’t mean we should. People need to be aware of — and involved with — which emerging technologies get rolled out (or not) and/or which features are beneficial to roll out (or not).

One of the things that’s beginning to alarm me these days is how the United States has turned over the keys to the Maserati — i.e., think an expensive, powerful thing — to youth who lack the life experiences to know how to handle such power and, often, the proper respect for such power. Many of these youthful members of our society don’t own the responsibility for the positive and negative influences and impacts that such powerful technologies can have (and the more senior execs have not taken enough responsibility either)!

If you owned the car below, would you turn the keys of this ~$137,000+ car over to your 16-25 year old? Yet that’s what America has been doing for years. And, in some areas, we’re now paying the price.

 

If you owned this $137,000+ car, would you turn the keys of it over to your 16-25 year old?!

 

The corporate world continues to discard the hard-earned experience that age brings…as they shove older people out of the workforce. (I hesitate to use the word wisdom…but in some cases, that’s also relevant/involved here.) Then we, as a society, sit back and wonder how did we get to this place?

Even technologists and programmers in their 20’s and 30’s are beginning to step back and ask…WHY did we develop this application or that feature? Was it — is it — good for society? Is it beneficial? Or should it be tabled or revised into something else?

Below is but one example — though I don’t mean to pick on Microsoft, as they likely have more older workers than the Facebooks, Googles, or Amazons of the world. I fully realize that all of these companies have some older employees. But the youth-oriented culture in America today has almost become an obsession — and not just in the tech world. Turn on the TV, check out the new releases on Netflix, go see a movie in a theater, listen to the radio, cast but a glance at the magazines in the check out lines, etc. and you’ll instantly know what I mean.

In the workplace, there appears to be a bias against older employees as being less innovative or tech-savvy — such a perspective is often completely incorrect. Go check out LinkedIn for items re: age discrimination…it’s a very real thing. But many of us over the age of 30 know this to be true if we’ve lost a job in the last decade or two and have tried to get a job that involves technology.

 

Microsoft argues facial-recognition tech could violate your rights — from finance.yahoo.com by Rob Pegoraro

Excerpt (emphasis DSC):

On Thursday, the American Civil Liberties Union provided a good reason for us to think carefully about the evolution of facial-recognition technology. In a study, the group used Amazon’s (AMZN) Rekognition service to compare portraits of members of Congress to 25,000 arrest mugshots. The result: 28 members were mistakenly matched with 28 suspects.

The ACLU isn’t the only group raising the alarm about the technology. Earlier this month, Microsoft (MSFT) president Brad Smith posted an unusual plea on the company’s blog asking that the development of facial-recognition systems not be left up to tech companies.

Saying that the tech “raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression,” Smith called for “a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.”

But we may not get new laws anytime soon.
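A quick back-of-the-envelope on the ACLU test described above. One number is an assumption on my part: the excerpt doesn’t state it, but Congress has 535 voting members, which puts the false-match count in perspective:

```python
# Rough math on the ACLU's Rekognition test: 28 members of Congress were
# falsely matched against a database of 25,000 arrest mugshots.
members_of_congress = 535  # assumption: full House + Senate (not in the excerpt)
false_matches = 28
rate = false_matches / members_of_congress
print(f"{rate:.1%}")  # 5.2%
```

Roughly one in twenty legislators was misidentified as a mugshot match, which is the kind of error rate that makes the calls for regulation quoted above easier to understand.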

 

just because we can does not mean we should

 

Just because we can…

 

 

On one hand XR-related technologies
show some promise and possibilities…

 

The AR Cloud will infuse meaning into every object in the real world — from venturebeat.com by Amir Bozorgzadeh

Excerpt:

Indeed, if you haven’t yet heard of the “AR Cloud”, it’s time to take serious notice. The term was coined by Ori Inbar, an AR entrepreneur and investor who founded AWE. It is, in his words, “a persistent 3D digital copy of the real world to enable sharing of AR experiences across multiple users and devices.”

 

Augmented reality invades the conference room — from zdnet.com by Ross Rubin
Spatial extends the core functionality of video and screen sharing apps to a new frontier.

 

 

The 5 most innovative augmented reality products of 2018 — from next.reality.news by Adario Strange

 

 

Augmented, virtual reality major opens at Shenandoah U. next fall — from edscoop.com by Betsy Foresman

Excerpt:

“It’s not about how virtual reality functions. It’s about, ‘How does history function in virtual reality? How does biology function in virtual reality? How does psychology function with these new tools?’” he said.

The school hopes to prepare students for careers in a field with a market size projected to grow to $209.2 billion by 2022, according to Statista. Still at its advent, Whelan compared VR technology to the introduction of the personal computer.

 

VR is leading us into the next generation of sports media — from venturebeat.com by Mateusz Przepiorkowski

 

 

Accredited surgery instruction now available in VR — from zdnet.com by Greg Nichols
The medical establishment has embraced VR training as a cost-effective, immersive alternative to classroom time.

 

Toyota is using Microsoft’s HoloLens to build cars faster — from cnn.com by Rachel Metz

From DSC:
But even in that posting the message is mixed…some pros, some cons. Some things are going well for XR-related technologies…but other things are not going very well.

 

 

…but on the other hand,
some things don’t look so good…

 

Is the Current Generation of VR Already Dead? — from medium.com by Andreas Goeldi

Excerpt:

Four years later, things are starting to look decidedly bleak. Yes, there are about 5 million Gear VR units and 3 million Sony Playstation VR headsets in market, plus probably a few hundred thousand higher-end Oculus and HTC Vive systems. Yes, VR is still being demonstrated at countless conferences and events, and big corporations that want to seem innovative love to invest in a VR app or two. Yes, Facebook just cracked an important low-end price point with its $200 Oculus Go headset, theoretically making VR affordable for mainstream consumers. Plus, there’s even more hype about Augmented Reality, which in a way could be a gateway drug to VR.

But it’s hard to ignore a growing feeling that VR is not developing as the industry hoped it would. So is that it again, we’ve seen this movie before, let’s all wrap it up and wait for the next wave of VR to come along about five years from now?

There are a few signs that are really worrying…

 

 

From DSC:
My take is that it’s too early to tell. We need to give things more time.

From DSC:
Not too long ago, I really enjoyed watching a program on PBS regarding America’s 100 most-loved books, entitled, “The Great American Read.”

 

Watch “The Grand Finale”

 

While that’s not the show I’m talking about, it got me to thinking of one similar to it — something educational, yet entertaining. But also, something more.

The program that came to my mind would be a program that’s focused on significant topics and issues within American society — offered up in a debate/presentation style format. 

For example, you could have different individuals, groups, or organizations discuss the pros and cons of an issue or topic. The show would provide contact information for helpful resources, groups, organizations, legislators, etc.  These contacts would be for learning more about a subject or getting involved with finding a solution for that problem.

For example, how about this for a potential topic: Grades or no grades?
  • What are the pros and cons of using an A-F grading system?
  • What are the benefits and issues/drawbacks with using grades? 
  • How are we truly using grades? Do we use them to rank and compare individuals, schools, school systems, communities? Do we use them to “weed people out” of a program?
  • With our current systems, what “product” do we get? Do we produce game-players or people who enjoy learning? (Apologies for some of my bias showing up here! But my son has become a major game-player and, likely, so did I at his age.)
  • How do grades jibe with Individualized Education Programs (IEPs)? On one hand…how do you keep someone moving forward, staying positive, and trying to keep learning/school enjoyable; yet on the other hand, how do you have those grades mean something to those who obtain data to rank school systems, communities, colleges, programs, etc.?
  • How do grades impact one’s desire to learn throughout one’s lifetime?

Such debates could be watched by students and then they could have their own debates on subjects that they propose.

Or the show could have journalists debate college or high school teams. The format could sometimes involve professors and deans debating against researchers. Or practitioners/teachers debating against researchers/cognitive psychologists. 

Such a show could be entertaining, yet highly applicable and educational. We would probably all learn something. And perhaps have our eyes opened up to a new perspective on an issue.

Or better yet, we might actually resolve some more issues and then move on to address other ones!

 

 

 

From DSC:
When a professor walks into the room, the mobile device that the professor is carrying notifies the system to automatically establish his or her preferred settings for the room — and/or voice recognition allows a voice-based interface to adjust the room’s settings:

  • The lights dim to 50%
  • The projector comes on
  • The screen comes down
  • The audio is turned up to his/her liking
  • The LMS is logged into with his/her login info and launches the class that he/she is teaching at that time of day
  • The temperature is checked and adjusted if too high or low
  • Etc.
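The steps above could be sketched as a simple presence-triggered preset system. To be clear, this is a hypothetical illustration only: every device ID, preset field, and function name below is invented, and a real deployment would sit behind actual building-automation and LMS APIs:

```python
# Hypothetical sketch: when a recognized device (or voice) is detected in
# the room, merge that professor's saved preferences into the room's
# current state. All names and fields are invented for illustration.

PRESETS = {
    "prof-smith-phone": {
        "lights_pct": 50,         # dim the lights to 50%
        "projector": "on",        # power up the projector
        "screen": "down",         # lower the screen
        "audio_pct": 70,          # preferred audio level
        "temp_f": 68,             # target temperature
        "lms_course": "BIO-101",  # course to launch after LMS sign-in
    },
}

def apply_preset(device_id, room_state):
    """Return the room state with the device owner's preset applied;
    an unknown device leaves the room untouched."""
    preset = PRESETS.get(device_id)
    if preset is None:
        return dict(room_state)
    return {**room_state, **preset}

room = {"lights_pct": 100, "projector": "off", "screen": "up",
        "audio_pct": 40, "temp_f": 72, "lms_course": None}
room = apply_preset("prof-smith-phone", room)
print(room["lights_pct"], room["projector"])  # 50 on
```

The design choice worth noting is that the preset travels with the person (via their device or voiceprint), not with the room, so any classroom the professor walks into can configure itself the same way.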
 

Merry Christmas!

 

From DSC:
How long before voice drives most appliances, thermostats, etc.?

Hisense is bringing Android and AI smarts to its 2019 TV range — from techradar.com by Stephen Lambrechts
Some big announcements planned for CES 2019

Excerpt (emphasis DSC):

Hisense has announced that it will unveil the next evolution of its VIDAA smart TV platform at CES 2019 next month, promising to take full advantage of artificial intelligence with version 3.0.

Each television in Hisense’s 2019 ULED TV lineup will boast the updated VIDAA 3.0 AI platform, with Amazon Alexa functionality fully integrated into the devices, meaning you won’t need an Echo device to use Alexa voice control features.

 

 

 

Google Glass wasn’t a failure. It raised crucial concerns. — from wired.com by Rose Eveleth

Excerpts:

So when Google ultimately retired Glass, it was in reaction to an important act of line drawing. It was an admission of defeat not by design, but by culture.

These kinds of skirmishes on the front lines of surveillance might seem inconsequential — but they can not only change the behavior of tech giants like Google, they can also change how we’re protected under the law. Each time we invite another device into our lives, we open up a legal conversation over how that device’s capabilities change our right to privacy. To understand why, we have to get wonky for a bit, but it’s worth it, I promise.

 

But where many people see Google Glass as a cautionary tale about tech adoption failure, I see a wild success. Not for Google of course, but for the rest of us. Google Glass is a story about human beings setting boundaries and pushing back against surveillance…

 

In the United States, the laws that dictate when you can and cannot record someone have several layers. But most of these laws were written when smartphones and digital home assistants weren’t even a glimmer in Google’s eye. As a result, they are mostly concerned with issues of government surveillance, not individuals surveilling each other or companies surveilling their customers. Which means that as cameras and microphones creep further into our everyday lives, there are more and more legal gray zones.

 

From DSC:
We need to be aware of the emerging technologies around us. Just because we can, doesn’t mean we should. People need to be aware of — and involved with — which emerging technologies get rolled out (or not) and/or which features are beneficial to roll out (or not).

One of the things that’s beginning to alarm me these days is how the United States has turned over the keys to the Maserati — i.e., think an expensive, powerful thing — to youth who lack the life experiences to know how to handle such power and, often, the proper respect for such power. Many of these youthful members of our society don’t own the responsibility for the positive and negative influences and impacts that such powerful technologies can have.

If you owned the car below, would you turn the keys of this ~$137,000+ car over to your 16-25 year old? Yet that’s what America has been doing for years. And, in some areas, we’re now paying the price.

 

If you owned this $137,000+ car, would you turn the keys of it over to your 16-25 year old?!

 

The corporate world continues to discard the hard-earned experience that age brings…as they shove older people out of the workforce. (I hesitate to use the word wisdom…but in some cases, that’s also relevant/involved here.) Then we, as a society, sit back and wonder how did we get to this place?

Even technologists and programmers in their 20’s and 30’s are beginning to step back and ask…WHY did we develop this application or that feature? Was it — is it — good for society? Is it beneficial? Or should it be tabled or revised into something else?

Below is but one example — though I don’t mean to pick on Microsoft, as they likely have more older workers than the Facebooks, Googles, or Amazons of the world. I fully realize that all of these companies have some older employees. But the youth-oriented culture in American today has almost become an obsession — and not just in the tech world. Turn on the TV, check out the new releases on Netflix, go see a movie in a theater, listen to the radio, cast but a glance at the magazines in the check out lines, etc. and you’ll instantly know what I mean.

In the workplace, there appears to be a bias against older employees as being less innovative or tech-savvy — such a perspective is often completely incorrect. Go check out LinkedIn for items re: age discrimination…it’s a very real thing. But many of us over the age of 30 know this to be true if we’ve lost a job in the last decade or two and have tried to get a job that involves technology.

Microsoft argues facial-recognition tech could violate your rights — from finance.yahoo.com by Rob Pegoraro

Excerpt (emphasis DSC):

On Thursday, the American Civil Liberties Union provided a good reason for us to think carefully about the evolution of facial-recognition technology. In a study, the group used Amazon’s (AMZN) Rekognition service to compare portraits of members of Congress to 25,000 arrest mugshots. The result: 28 members were mistakenly matched with 28 suspects.

The ACLU isn’t the only group raising the alarm about the technology. Earlier this month, Microsoft (MSFT) president Brad Smith posted an unusual plea on the company’s blog asking that the development of facial-recognition systems not be left up to tech companies.

Saying that the tech “raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression,” Smith called for “a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.”

But we may not get new laws anytime soon.

 

just because we can does not mean we should

 

Just because we can…

 


 

Addendum on 12/27/18 — also related/see:

‘We’ve hit an inflection point’: Big Tech failed big-time in 2018 — from finance.yahoo.com by JP Mangalindan

Excerpt (emphasis DSC):

2018 will be remembered as the year the public’s big soft-hearted love affair with Big Tech came to a screeching halt.

For years, lawmakers and the public let massive companies like Facebook, Google, and Amazon run largely unchecked. Billions of people handed them their data — photos, locations, and other status-rich updates — with little scrutiny or question. Then came revelations around several high-profile data breaches from Facebook: a back-to-back series of rude awakenings that taught casual web-surfing, smartphone-toting citizens that uploading their data into the digital ether could have consequences. Google reignited the conversation around sexual harassment, spurring thousands of employees to walk out, while Facebook reminded some corners of the U.S. that racial bias, even in supposedly egalitarian Silicon Valley, remained alive and well. And Amazon courted well over 200 U.S. cities in its gaudy and protracted search for a second headquarters.

“I think 2018 was the year that people really called tech companies on the carpet about the way that they’ve been behaving conducting their business,” explained Susan Etlinger, an analyst at the San Francisco-based Altimeter Group. “We’ve hit an inflection point where people no longer feel comfortable with the ways businesses are conducting themselves. At the same time, we’re also at a point, historically, where there’s just so much more willingness to call out businesses and institutions on bigotry, racism, sexism and other kinds of bias.”

 

The public’s love affair with Facebook hit its first major rough patch in 2016, when Russian trolls attempted to meddle with the U.S. presidential election using the social media platform. But it was the Cambridge Analytica controversy that may go down in internet history as the start of a series of back-to-back, bruising controversies for the social network, which, for years, served as the Silicon Valley poster child of the nouveau American Dream.

 

 

From DSC:
First of all, an excerpt from an email from RetrievalPractice.org:

Last week, we talked about an activity we call Flash Forward. Simply ask your students these questions:

“Now that you’ve taken this class, what is one thing you want to remember 10 years from now (and why)?”

“How will you remember that one thing? What will you do to make sure you don’t forget?”

Second of all, the topic of remembering something 10 years from now (from some current learning) made me think about obtaining a long-term return on investment (ROI) from that learning.

In the online-based course that I’ve been teaching for a while now, I’m all about helping the students in my classes obtain long-term benefits from taking the class. Grades aren’t the key. The learning is the key!

The class is entitled, “Foundations of Information Technology” and I want them to be using the tools, technologies, services, and concepts (that we learned about) loooooong after they graduate from college! We work on things like RSS feeds, Twitter, LinkedIn, WordPress, building their network, building their personal brand, HTML/web design, Microsoft Excel, the Internet of Things and much more. I want them to be practicing those things, leveraging those tools, pulse-checking their surroundings, networking with others, serving others with their gifts, and building on the foundations that they put into place waaaay back in 201__.

 

 

 

Teachers urged to instill assignments with ‘choice and relevancy’ — from thejournal.com by Dian Schaffhauser

Excerpt (emphasis DSC):

A new report from the Education Trust looks at the role of two “powerful levers” — choice and relevancy — in motivating and engaging students. This national nonprofit works on issues that disproportionately affect students from low-income families and students of color. In its new paper, the organization offered guidance to help educators bring relevancy to their assignments and give students “authentic choices.”

 

As they concluded, “When teachers consistently offer assignments that include choice in content, product or process, students will find the learning ownership needed to stay engaged and achieve at high levels.”

 

 

From DSC:
At this stage in life, my wife and I have one of our kids in high school…our son, who is now in his junior year. He is a very intelligent young man who wants to go into acting/drama/the theater. But he is a game player: he knows how the game is played, and he plays it (most of the time). He can’t wait for the next phase of life, because he views so much of his current education as being forced down his throat. Many of the topics his courses cover are things he doesn’t care about at all. He would much rather study topics that HE wants to learn about.

He shared two recent examples with me:

  1. In his acting class, his acting teacher shared some of his own personal background and how he came to be where he is today; every eye and ear was open, my son said…it was completely quiet in the room.
  2. In his econ class, the teacher shared the value of time in investing and gave some examples of how investments grow over time; again, every eye and ear was open, according to my son.

These were items of extreme relevance to those classes/audiences. And the students were paying attention, big time.
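The econ teacher’s point about time and compounding is easy to make concrete. Here is a minimal sketch — the dollar amount, rate, and time horizons below are hypothetical, not from his class:

```python
# Hypothetical compound-growth illustration: the same one-time $1,000
# investment at 7% per year, left untouched for different lengths of time.
principal = 1_000.00
annual_rate = 0.07

for years in (10, 20, 40):
    # Standard compound-interest formula: FV = P * (1 + r) ** n
    future_value = principal * (1 + annual_rate) ** years
    print(f"After {years} years: ${future_value:,.2f}")
```

Run it and the gap between 20 and 40 years dwarfs the gap between 10 and 20 — exactly the kind of concrete, relevant example that keeps every eye and ear open.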

Also, when given more choice, students are apt to be much more engaged in their learning — even, perhaps, developing more of an actual enjoyment of learning.

Funny (but not really) how often we hear of adults who later went on to love learning…but who hated school.

 

 

 

 
© 2025 | Daniel Christian