My post-pandemic learning list — from chieflearningofficer.com by Elliott Masie
This is the time to extend our skills as learning professionals through the power of learning. 

Excerpt:

Curation on a personal level. I want to create new ways to curate awesome information and knowledge that I encounter every day. I want a “Later” button on my mouse or a gesture feature on my phone to capture and re-present indicated content to me at a later time. My curiosity as a learner is demanding a better way to tag or selectively highlight content, conversations and resources effortlessly and at any time throughout the day.
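To make the "Later" idea concrete, here is a minimal sketch of what such a personal capture tool could look like, assuming a simple local JSON store; every name here (the file, the functions, the fields) is a hypothetical placeholder rather than an existing product or API.

```python
# Minimal sketch of a personal "Later" capture tool (all names hypothetical).
# Items are tagged and saved to a local JSON file, then re-presented after a delay.
import json
import time
from pathlib import Path

STORE = Path("later_items.json")  # hypothetical local store


def _load() -> list[dict]:
    return json.loads(STORE.read_text()) if STORE.exists() else []


def save_for_later(url: str, note: str = "", tags: list[str] | None = None,
                   resurface_after_hours: float = 24.0) -> None:
    """Capture a piece of content now and schedule it to resurface later."""
    items = _load()
    items.append({
        "url": url,
        "note": note,
        "tags": tags or [],
        "resurface_at": time.time() + resurface_after_hours * 3600,
    })
    STORE.write_text(json.dumps(items, indent=2))


def items_due_now() -> list[dict]:
    """Return captured items whose resurface time has arrived."""
    now = time.time()
    return [item for item in _load() if item["resurface_at"] <= now]


if __name__ == "__main__":
    save_for_later("https://example.com/article", note="revisit for course design",
                   tags=["curation", "learning"], resurface_after_hours=0)
    for item in items_due_now():
        print(item["url"], item["tags"])
```

A phone gesture or mouse button would simply call something like `save_for_later()` with the current page; the interesting design questions are how items are tagged and when they are re-presented.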

Adding arts to learning for impact. We have hosted 33 one-hour Empathy Concerts since April 2020, combining Broadway performers and learning experts for powerful blends of content and music relevant to the workplace. I am excited to expand models for incorporating music, songs and theater into our learning efforts. Arts expand the emotional impact of cognitive mastery.

Also see:

The reverse culture shock of returning to the office — from chieflearningofficer.com by Camille Preston
Understanding reverse culture shock and its effects may be the best way to prepare for post-pandemic work and life.

Excerpt:

While leaving home to return to work will be remarkably different than coming back from an overseas tour, there are similarities. Understanding reverse culture shock and its effects may also be the best way to prepare for post-pandemic work and life.


16 BigLaw firms have no Black partners, including firm ranked No. 1 for diversity — from abajournal.com by Debra Cassens Weiss

Excerpt:

Many law firms that ranked relatively well on the American Lawyer’s 2021 Diversity Scorecard have no Black partners, including an immigration law firm ranked No. 1 for diversity.

The number of Black partners increased only slightly last year, rising from 2.1% to 2.2%, Law.com reports.

Law firms with no Black partners that ranked well on the 2021 Diversity Scorecard are: Berry Appleman & Leiden, ranked No. 1 on the diversity scorecard; Wood Smith Henning & Berman, ranked No. 6 on the diversity scorecard; Kobre & Kim, ranked No. 8 on the diversity scorecard; Irell & Manella, ranked No. 9 on the diversity scorecard; and Knobbe Martens, ranked 15th on the scorecard.

Law firms surveyed for the scorecard are among the nation’s 200 top grossing firms (the Am Law 200) and the nation’s 250 largest law firms (the National Law Journal 250).

 

Microsoft President Warns of Orwell’s 1984 ‘Coming to Pass’ in 2024 — from interestingengineering.com by Chris Young
Microsoft’s Brad Smith warned we may be caught up in a losing race with artificial intelligence.

Excerpt (emphasis DSC):

The surveillance-state dystopia portrayed in George Orwell’s 1984 could “come to pass in 2024” if governments don’t do enough to protect the public against artificial intelligence (AI), Microsoft president Brad Smith warned in an interview for the BBC’s investigative documentary series Panorama.

During the interview, Smith warned of China’s increasing AI prowess and the fact that we may be caught up in a losing race with the technology itself.

“If we don’t enact the laws that will protect the public in the future, we are going to find the technology racing ahead, and it’s going to be very difficult to catch up,” Smith stated.

From DSC:
This is a major heads-up to everyone in the legal/legislative realm, especially the American Bar Association (ABA) and the bar associations across the country! The ABA needs to realize it has to up its game and keep pace with the incredibly fast-moving twenty-first century. If that doesn't occur, we and future generations will pay the price. Two thoughts come to mind regarding the ABA and the law schools out there:

Step 1: Permanently allow 100% online-based JD programs, starting now.

Step 2: Encourage massive new program development within all law schools to help future lawyers, judges, legislative reps, and others build expertise in emerging technologies and their ramifications.

Google’s plan to make search more sentient — from vox.com by Rebecca Heilweil
Google announces new search features every year, but this time feels different.

Excerpt:

At the keynote speech of its I/O developer conference on Tuesday, Google revealed a suite of ways the company is moving forward with artificial intelligence. These advancements show Google increasingly trying to build AI-powered tools that seem more sentient and that are better at perceiving how humans actually communicate and think. They seem powerful, too.

Two of the biggest AI announcements from Google involve natural language processing and search.

Google also revealed a number of AI-powered improvements to its Maps platform that are designed to yield more helpful results and directions.

Google’s plans to bring AI to education make its dominance in classrooms more alarming — from fastcompany.com by Ben Williamson
The tech giant has expressed an ambition to transform education with artificial intelligence, raising fresh ethical questions.

Struggling to Get a Job? Artificial Intelligence Could Be the Reason Why — from newsweek.com by Lydia Veljanovski; with thanks to Sam DeBrule for the resource

Excerpt:

Except that isn’t always the case. In many instances, instead of your application being tossed aside by an HR professional, it is actually artificial intelligence that is the barrier to entry. While this isn’t a problem in itself—AI can reduce workloads by rapidly filtering applicants—the issue is that within these systems lies the possibility of bias.

It is illegal in the U.S. for employers to discriminate against a job applicant because of their race, color, sex, religion, disability, national origin, age (40 or older) or genetic information. However, these AI hiring tools are often inadvertently doing just that, and there are no federal laws in the U.S. to stop this from happening.
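To illustrate the kind of check an auditor might run on such a screening tool, here is a small sketch of a disparate-impact calculation (the "four-fifths" rule of thumb); the data and group labels are invented for illustration, and a real audit would involve far more than this single ratio.

```python
# Illustrative sketch (not from the article): a simple disparate-impact check
# of the kind an auditor might run on an automated screening tool's outcomes.
from collections import defaultdict


def selection_rates(records: list[dict]) -> dict[str, float]:
    """Selection rate (share advanced to interview) per demographic group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        selected[r["group"]] += int(r["advanced"])
    return {g: selected[g] / totals[g] for g in totals}


def disparate_impact_ratios(records: list[dict]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate.
    Under the common 'four-fifths' rule of thumb, ratios below 0.8 warrant review."""
    rates = selection_rates(records)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}


# Hypothetical screening outcomes
applicants = [
    {"group": "A", "advanced": True}, {"group": "A", "advanced": True},
    {"group": "A", "advanced": False}, {"group": "B", "advanced": True},
    {"group": "B", "advanced": False}, {"group": "B", "advanced": False},
]
print(disparate_impact_ratios(applicants))  # {'A': 1.0, 'B': 0.5}
```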

These Indian edtech companies are shaping the future of AI & robotics — from analyticsinsight.net by Apoorva Komarraju May 25, 2021

Excerpt:

As edtech companies have taken a lead by digitizing education for the modern era, they have taken the stance to set up Atal Tinkering Labs in schools along with other services necessary for the budding ‘kidpreneurs’. With the availability of these services, students can experience 21st-century technologies like IoT, 3D printing, AI, and Robotics.

Researchers develop machine-learning model that accurately predicts diabetes, study says — from ctvnews.ca by Christy Somos

Excerpt:

TORONTO — Canadian researchers have developed a machine-learning model that accurately predicts diabetes in a population using routinely collected health data, a new study says.

The study, published in the JAMA Network Open journal, tested new machine-learning technology on routinely collected health data that examined the entire population of Ontario. The study was run by the ICES not-for-profit data research institute.

Using linked administrative health data from Ontario from 2006 to 2016, researchers created a validated algorithm by training the model on information taken from nearly 1.7 million patients.
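As a rough illustration of the general approach (not the study's actual model, features, or data), the sketch below trains a gradient-boosting classifier on synthetic tabular "health record" features and checks its discrimination on held-out patients.

```python
# Sketch of the general approach only: the features, outcome, and model here
# are synthetic stand-ins, not the study's actual data or algorithm.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000  # synthetic stand-in for linked administrative records

# Hypothetical routinely collected features: age, BMI, an HbA1c-like lab value,
# and physician visits per year
X = np.column_stack([
    rng.normal(55, 15, n),
    rng.normal(27, 5, n),
    rng.normal(5.6, 0.7, n),
    rng.poisson(4, n),
])
# Synthetic outcome loosely tied to the features, for illustration only
risk = 0.03 * (X[:, 0] - 55) + 0.1 * (X[:, 1] - 27) + 1.5 * (X[:, 2] - 5.6)
y = (risk + rng.normal(0, 1, n) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
model = GradientBoostingClassifier().fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]
print(f"Held-out AUROC: {roc_auc_score(y_test, probs):.3f}")
```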

Project Guideline: Enabling Those with Low Vision to Run Independently — from ai.googleblog.com by Xuan Yang; with thanks to Sam DeBrule for the resource

Excerpt:

For the 285 million people around the world living with blindness or low vision, exercising independently can be challenging. Earlier this year, we announced Project Guideline, an early-stage research project, developed in partnership with Guiding Eyes for the Blind, that uses machine learning to guide runners through a variety of environments that have been marked with a painted line. Using only a phone running Guideline technology and a pair of headphones, Guiding Eyes for the Blind CEO Thomas Panek was able to run independently for the first time in decades and complete an unassisted 5K in New York City’s Central Park.
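Project Guideline itself relies on an on-device machine-learning model; purely for intuition, the sketch below uses a simplified classical computer-vision stand-in to estimate how far a painted line sits from the center of a camera frame, the kind of signal that could drive left/right audio cues.

```python
# Simplified stand-in for intuition only (Project Guideline uses an on-device
# ML model, not this approach): find a painted line in a frame and report how
# far it has drifted from the image center.
import cv2
import numpy as np


def line_offset(frame_bgr: np.ndarray) -> float | None:
    """Return the line's horizontal offset from image center in [-1, 1],
    or None if no line is detected. Negative means the line is to the left."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Hypothetical HSV range for a yellow painted guideline
    mask = cv2.inRange(hsv, (20, 80, 80), (35, 255, 255))
    # Focus on the lower part of the frame, closest to the runner
    lower = mask[int(mask.shape[0] * 0.6):, :]
    m = cv2.moments(lower)
    if m["m00"] == 0:
        return None
    cx = m["m10"] / m["m00"]
    center = lower.shape[1] / 2
    return (cx - center) / center


# Example: a synthetic frame with a yellow vertical stripe right of center
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[:, 400:420] = (0, 255, 255)  # BGR yellow
print(line_offset(frame))  # positive value: the line is to the runner's right
```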

Deepfake Maps Could Really Mess With Your Sense of the World — from wired.com by Will Knight
Researchers applied AI techniques to make portions of Seattle look more like Beijing. Such imagery could mislead governments or spread misinformation online.

In a paper published last month, researchers altered satellite images to show buildings in Seattle where there are none.

 

21 jobs of the future: A guide to getting — and staying — employed over the next 10 years — from cognizant.com and the Center for the Future of Work

Excerpt:

WHAT THE NEXT 10 YEARS WILL BRING: NEW JOBS
In this report, we propose 21 new jobs that will emerge over the next 10 years and will become cornerstones of the future of work. In producing this report, we imagined hundreds of jobs that could emerge within the major macroeconomic, political, demographic, societal, cultural, business and technology trends observable today, e.g., growing populations, aging populations, populism, environmentalism, migration, automation, arbitrage, quantum physics, AI, biotechnology, space exploration, cybersecurity, virtual reality.

Among the jobs we considered, some seemed further out on the horizon and are not covered here: carbon farmers, 3-D printing engineers, avatar designers, cryptocurrency arbitrageurs, drone jockeys, human organ developers, teachers of English as a foreign language for robots, robot spa owners, algae farmers, autonomous fleet valets, Snapchat addiction therapists, urban vertical farmers and Hyperloop construction managers. These are jobs that younger generations may do in the further off future.

A chart of the 21 proposed jobs, with tech-centricity on the vertical axis and the time horizon on the horizontal axis.

Also see:

Here are the top 10 jobs of the future — from bigthink.com by Robert Brown
Say hello to your new colleague, the Workplace Environment Architect.

Excerpt:

6. Algorithm Bias Auditor – “All online, all the time” lifestyles for work and leisure accelerated the competitive advantage derived from algorithms by digital firms everywhere. But from Brussels to Washington, given the increasing statutory scrutiny on data, it’s a near certainty that when it comes to how they’re built, verification through audits will help ensure the future workforce is also the fair workforce.

 

Reimagining Online Culture: Project-Based Learning, Inclusion, and Reach in Online Education — from er.educause.edu by Christian Schneider
The pandemic created a unique opportunity for educators to rethink their approach to online learning and explore how this educational environment can expand access while increasing and building on diversity.

Excerpt:

The move to online education during the pandemic has been one of the greatest experiments ever conducted. It was initially met with reluctance and fatigue, but once we moved beyond the attempts to replicate what we do in real life, it brought to light important innovations.

We cut out constraints, categorizations, and biases while concentrating on our faces, voices, and work, and we extended the reach of geographical, cultural, and social access.

During the pandemic, however, when most students were in their home countries, they seemed to be more comfortable as their authentic selves, working on projects that related to their local environments.

Teaching online can not only make education available to more people around the globe but also open a space where students can share “a piece of themselves,” where different perspectives can interact, where we can learn from each other and our local environments and opportunities. This creates an enormous opportunity for equity and inclusion.

 

What if we could create these kinds of calendars and/or apps for faculty and staff as well as for students? — idea from Daniel Christian. They could be developed in analog/physical formats or as digital formats and apps. In the digital realm, one could receive a daily notification (a minimal sketch of that idea appears after the lists below).

For faculty/staff:

  • Teaching and learning tips; pedagogies (flipped learning, active learning, etc.); ideas that have worked well for others
  • Creative experiments to try (such as digital storytelling, or working with an emerging technology such as AR, MR, or VR)
  • Tips & tricks re: tools within the learning ecosystem of one’s organization
  • How to make digital content that’s accessible
  • Items re: bias, diversity, equity & inclusion
  • Dates to be aware of (for processes on one’s LMS/CMS as an example)
  • Notes of encouragement and/or humor
  • Links to key resources
  • Other

[The Corporate Training / L&D world could do this as well.] 

An example of what a front cover of a physical flip calendar could look like

An example of what a page might contain within a physical flip calendar

A calendar page that says Memory is the residue of thought.

Example calendar page that states when courses will be published on an LMS

For students:

  • Studying tips
  • How to take courses online
  • How people learn
  • Resources, books, people to follow on Twitter, blogs and RSS feeds, etc.
  • Pictures of judges, legislative bodies, law offices, corporate HQs, other
  • Notes of encouragement
  • Ethics
  • Professionalism
  • Other
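For the digital version of this idea, a minimal sketch might look like the following; the tip text, delivery channel, and function names are all hypothetical placeholders.

```python
# Minimal sketch of a daily-tip rotation (all content and names hypothetical):
# surface one tip per day, e.g. via email, an LMS announcement, or a push service.
import datetime

FACULTY_TIPS = [
    "Try a flipped-learning segment: short video before class, practice during class.",
    "Check that this week's documents pass an accessibility checker.",
    "Memory is the residue of thought -- design activities that make students think about the content.",
    "Reminder: course shells for next term open on the LMS this Friday.",
]


def tip_for(date: datetime.date, tips: list[str]) -> str:
    """Deterministically pick one tip per calendar day, cycling through the pool."""
    return tips[date.toordinal() % len(tips)]


def send_daily_tip(today: datetime.date | None = None) -> None:
    today = today or datetime.date.today()
    tip = tip_for(today, FACULTY_TIPS)
    # Stand-in for a real delivery channel (email, LMS API, push notification)
    print(f"[{today.isoformat()}] Today's teaching tip: {tip}")


send_daily_tip()
```

The same rotation could be pointed at a student-facing pool of study tips, resources, and notes of encouragement.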
 

This is an abstract picture of a person's head made of connections peering sideways -- it links to Artificial intelligence and the future of national security from ASU

Artificial intelligence and the future of national security — from news.asu.edu

Excerpt:

Artificial intelligence is a “world-altering” technology that represents “the most powerful tools in generations for expanding knowledge, increasing prosperity and enriching the human experience” and will be a source of enormous power for the companies and countries that harness them, according to the recently released Final Report of the National Security Commission on Artificial Intelligence.

This is not hyperbole or a fantastical version of AI’s potential impact. This is the assessment of a group of leading technologists and national security professionals charged with offering recommendations to Congress on how to ensure American leadership in AI for national security and defense. Concerningly, the group concluded that the U.S. is not currently prepared to defend American interests or compete in the era of AI.

Also see:

EU Set to Ban Surveillance, Start Fines Under New AI Rules — from bloomberg.com by Natalia Drozdiak

Excerpt:

The European Union is poised to ban artificial intelligence systems used for mass surveillance or for ranking social behavior, while companies developing AI could face fines as high as 4% of global revenue if they fail to comply with new rules governing the software applications.

Also see:

Wrongfully arrested man sues Detroit police over false facial recognition match — from washingtonpost.com by Drew Harwell
The case could fuel criticism of police investigators’ use of a controversial technology that has been shown to perform worse on people of color

Excerpts:

A Michigan man has sued Detroit police after he was wrongfully arrested and falsely identified as a shoplifting suspect by the department’s facial recognition software in one of the first lawsuits of its kind to call into question the controversial technology’s risk of throwing innocent people in jail.

Robert Williams, a 43-year-old father in the Detroit suburb of Farmington Hills, was arrested last year on charges he’d taken watches from a Shinola store after police investigators used a facial recognition search of the store’s surveillance-camera footage that identified him as the thief.

Prosecutors dropped the case less than two weeks later, arguing that officers had relied on insufficient evidence. Police Chief James Craig later apologized for what he called “shoddy” investigative work. Williams, who said he had been driving home from work when the 2018 theft had occurred, was interrogated by detectives and held in custody for 30 hours before his release.

Williams’s attorneys did not make him available for comment Tuesday. But Williams wrote in The Washington Post last year that the episode had left him deeply shaken, in part because his young daughters had watched him get handcuffed in his driveway and put into a police car after returning home from work.

“How does one explain to two little girls that a computer got it wrong, but the police listened to it anyway?” he wrote. “As any other black man would be, I had to consider what could happen if I asked too many questions or displayed my anger openly — even though I knew I had done nothing wrong.”


Even as colleges pledge to improve, share of engineering and math graduates who are Black declines — from hechingerreport.org by Melba Newsome; with thanks to Ryan Craig for this resource
The supply of Black scientists, engineers and mathematicians is flat or falling even as demand goes up

Excerpt:

Black enrollment in STEM fields — science, technology, engineering and math — is among the issues that urgently demand attention, said Cato Laurencin, CEO of the Connecticut Institute for Clinical and Translational Science. “We need to move from talking about the issue of Blacks in STEM and systemic racism to making concrete changes,” Laurencin said.

The proportion of bachelor’s degrees in science awarded to Black graduates remained flat at about 9 percent from 2001 to 2016, according to the most recent available figures from the National Science Foundation; in engineering, it declined from 5 percent to 4 percent; and in math, it dropped from 7 percent to 4 percent.


Improving Digital Inclusion & Accessibility for Those With Learning Disabilities — from inclusionhub.com by Meredith Kreisa
Learning disabilities must be taken into account during the digital design process to ensure digital inclusion and accessibility for the community. This comprehensive guide outlines common learning disabilities, associated difficulties, accessibility barriers and best practices, and more.

“Learning shouldn’t be something only those without disabilities get to do,” explains Seren Davies, a full stack software engineer and accessibility advocate who is dyslexic. “It should be for everyone. By thinking about digital accessibility, we are making sure that everyone who wants to learn can.”

“Learning disability” is a broad term used to describe several specific diagnoses. Dyslexia, dyscalculia, dysgraphia, nonverbal learning disorder, and oral/written language disorder and specific reading comprehension deficit are among the most prevalent.

An image of a barrier being torn down -- revealing a human mind behind it. This signifies the need to tear down any existing barriers that might hinder someone's learning experience.

 

Pro:

AI-powered chatbots automate IT help at Dartmouth — from edscoop.com by Ryan Johnston

Excerpt:

To prevent a backlog of IT requests and consultations during the coronavirus pandemic, Dartmouth College has started relying on AI-powered chatbots to act as an online service desk for students and faculty alike, the school said Wednesday.

Since last fall, the Hanover, New Hampshire, university’s roughly 6,600 students and 900 faculty have been able to consult with “Dart” — the virtual assistant’s name — to ask IT or service-related questions related to the school’s technology. More than 70% of the time, their question is resolved by the chatbot, said Muddu Sudhakar, the co-founder and CEO of Aisera, the company behind the software.

Con:

The Foundations of AI Are Riddled With Errors — from wired.com by Will Knight
The labels attached to images used to train machine-vision systems are often wrong. That could mean bad decisions by self-driving cars and medical algorithms.

Excerpt:

“What this work is telling the world is that you need to clean the errors out,” says Curtis Northcutt, a PhD student at MIT who led the new work. “Otherwise the models that you think are the best for your real-world business problem could actually be wrong.”
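As a simplified stand-in for the label-error-detection techniques used in this line of work, the sketch below scores each example with out-of-fold predicted probabilities and flags the examples whose given label the model most strongly disputes; it injects synthetic label errors into a toy dataset just to show the mechanics.

```python
# Simplified stand-in (not the authors' actual method): use out-of-fold
# predicted probabilities to flag examples whose given label the model
# confidently disagrees with, as candidate label errors to review.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = load_digits(return_X_y=True)
y_noisy = y.copy()
flip = np.random.default_rng(0).choice(len(y), size=50, replace=False)
y_noisy[flip] = (y_noisy[flip] + 1) % 10  # inject some wrong labels

# Out-of-fold probabilities, so each example is scored by a model that never saw it
probs = cross_val_predict(
    LogisticRegression(max_iter=2000), X, y_noisy, cv=5, method="predict_proba"
)

# Flag examples where the given label receives little probability mass
given_label_prob = probs[np.arange(len(y_noisy)), y_noisy]
suspects = np.argsort(given_label_prob)[:50]
recovered = np.isin(suspects, flip).sum()
print(f"{recovered} of the 50 injected label errors are among the 50 most suspect examples")
```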


DC: Yet another reason for Universal Design for Learning’s multiple means of presentation/media:

Encourage faculty to presume students are under-connected. Asynchronous, low-bandwidth approaches help give students more flexibility in accessing course content in the face of connectivity challenges.

— as excerpted from campustechnology.com’s article entitled, “4 Ways Institutions Can Meet Students’ Connectivity and Technology Needs”


If equity is a priority, UDL is a must — from cultofpedagogy.com by Katie Novak

Once you identify the firm goal, ask yourself, “Based on the variability in my class, what barriers may prevent learners from working toward that goal and how can I eliminate those barriers through design?”

Excerpt:

When we design the same learning pathways for all learners, we might tell ourselves we are being fair, but in fact, single pathways are exclusionary.  Beverly Daniel Tatum, author of the critically acclaimed book, Why Are All The Black Kids Sitting Together in the Cafeteria? And Other Conversations About Race, challenges us to focus on impact over intentions. It may not be our intent to exclude our learners, but the reality is that many students do not have opportunities to learn at high levels or to access curriculum and instruction that is accessible, engaging, culturally sustaining, and linguistically appropriate.

Luckily, there is a framework that rejects these one-size-fits-all solutions and empowers educators to proactively design learning experiences so all students can increase their brainpower and accelerate and own their learning. The framework is Universal Design for Learning (UDL).

UDL is a framework for designing learning experiences so students have options for how they learn, what materials they use, and how they demonstrate their learning. 

From DSC:
I put together this graphic as I’m working on a Module (for Canvas) to address the topic of accessibility:

An image of a barrier being torn down -- revealing a human mind behind it. This signifies the need to tear down any existing barriers that might hinder someone's learning experience.
By Daniel Christian March 2021
 

Two items from facultyfocus.com by Jenae Cohn, PhD, and Courtney Plotts, PhD:

  1. How to Structure Your Online Class for Inclusion, Part 1 
  2. How to Structure Your Online Class for Inclusion: Two Principles for Fostering Engagement, Part 2 

Excerpt:

A strong sense of community begins with faculty designing and planning for the sense of community in the course. In order to build a strong sense of community within an online course, instructors should start by identifying the type of community they want to create. In other words, what is the common thread that runs through an online course: Inquiry, information giving, information gathering, and/or active listening?

Although not intuitive to all instructors, this question surrounding the idea of a sense of community is imperative for creating cohesion and a sense of belonging to a learning environment. Here are some ways that instructors might start to think about what community might mean for their class context:

 

From DSC:
The items below are from Sam DeBrule’s Machine Learnings e-Newsletter.


By clicking this image, you will go to Sam DeBrule's Machine Learning e-Newsletter -- which deals with all topics regarding Artificial Intelligence

#Awesome

“Sonoma County is adding artificial intelligence to its wildfire-fighting arsenal. The county has entered into an agreement with the South Korean firm Alchera to outfit its network of fire-spotting cameras with software that detects wildfire activity and then alerts authorities. The technology sifts through past and current images of terrain and searches for certain changes, such as flames burning in darkness, or a smoky haze obscuring a tree-lined hillside, according to Chris Godley, the county’s director of emergency management…The software will use feedback from humans to refine its algorithm and will eventually be able to detect fires on its own — or at least that’s what county officials hope.” – Alex Wigglesworth, Los Angeles Times
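The county's system relies on Alchera's proprietary models, so the following is only a naive illustration of the underlying idea: compare a camera's current frame with a recent reference frame and escalate to a human reviewer when a large region of the scene changes (for example, flames appearing in a dark image).

```python
# Naive illustration only (the actual system uses Alchera's proprietary ML):
# compare a camera's current frame against a recent reference frame and flag
# large changes that might merit a human look.
import cv2
import numpy as np


def change_score(reference_bgr: np.ndarray, current_bgr: np.ndarray) -> float:
    """Fraction of pixels that changed substantially between the two frames."""
    ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY)
    cur = cv2.cvtColor(current_bgr, cv2.COLOR_BGR2GRAY)
    ref = cv2.GaussianBlur(ref, (21, 21), 0)  # suppress sensor noise and small motion
    cur = cv2.GaussianBlur(cur, (21, 21), 0)
    diff = cv2.absdiff(ref, cur)
    changed = diff > 25  # hypothetical per-pixel threshold
    return float(changed.mean())


def maybe_alert(reference_bgr, current_bgr, threshold: float = 0.05) -> bool:
    """Escalate to a human reviewer if enough of the scene has changed."""
    return change_score(reference_bgr, current_bgr) >= threshold


# Example with synthetic frames: a dark scene, then a bright warm region appears
night = np.zeros((240, 320, 3), dtype=np.uint8)
flare = night.copy()
flare[80:160, 100:220] = (40, 120, 255)  # bright orange patch ("flames in darkness")
print(maybe_alert(night, flare))  # True: the changed area exceeds the threshold
```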

#Not Awesome

Hacked Surveillance Camera Firm Shows Staggering Scale of Facial Recognition — from
A hacked customer list shows that facial recognition company Verkada is deployed in tens of thousands of schools, bars, stores, jails, and other businesses around the country.

Excerpt:

Hackers have broken into Verkada, a popular surveillance and facial recognition camera company, and managed to access live feeds of thousands of cameras across the world, as well as siphon a Verkada customer list. The breach shows the astonishing reach of facial recognition-enabled cameras in ordinary workplaces, bars, parking lots, schools, stores, and more.

The staggering list includes K-12 schools, seemingly private residences marked as “condos,” shopping malls, credit unions, multiple universities across America and Canada, pharmaceutical companies, marketing agencies, pubs and bars, breweries, a Salvation Army center, churches, the Professional Golfers Association, museums, a newspaper’s office, airports, and more.

 

How Facebook got addicted to spreading misinformation — from technologyreview.com by Karen Hao

Excerpt (emphasis DSC):

By the time thousands of rioters stormed the US Capitol in January, organized in part on Facebook and fueled by the lies about a stolen election that had fanned out across the platform, it was clear from my conversations that the Responsible AI team had failed to make headway against misinformation and hate speech because it had never made those problems its main focus. More important, I realized, if it tried to, it would be set up for failure.

The reason is simple. Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth. Quiñonero’s AI expertise supercharged that growth. His team got pigeonholed into targeting AI bias, as I learned in my reporting, because preventing such bias helps the company avoid proposed regulation that might, if passed, hamper that growth. Facebook leadership has also repeatedly weakened or halted many initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.

In other words, the Responsible AI team’s work—whatever its merits on the specific problem of tackling AI bias—is essentially irrelevant to fixing the bigger problems of misinformation, extremism, and political polarization. And it’s all of us who pay the price.

Artificial Intelligence In 2021: Five Trends You May (or May Not) Expect — from forbes.com by Nisha Talagala

5 trends for AI in 2021

 