The Research Is In: 2019 Education Research Highlights — edutopia.org by Youki Terada
Does doodling boost learning? Do attendance awards work? Do boys and girls process math the same way? Here’s a look at the big questions that researchers tackled this year.

Excerpt:

Every year brings new insights—and cautionary tales—about what works in education. 2019 is no different, as we learned that doodling may do more harm than good when it comes to remembering information. Attendance awards don’t work and can actually increase absences. And while we’ve known that school discipline tends to disproportionately harm students of color, a new study reveals a key reason why: Compared with their peers, black students tend to receive fewer warnings for misbehavior before being punished.

CUT THE ARTS AT YOUR OWN RISK, RESEARCHERS WARN
As arts programs continue to face the budget ax, a handful of new studies suggest that’s a grave mistake. The arts provide cognitive, academic, behavioral, and social benefits that go far beyond simply learning how to play music or perform scenes in a play.

In a major new study from Rice University involving 10,000 students in third through eighth grades, researchers determined that expanding a school’s arts programs improved writing scores, increased the students’ compassion for others, and reduced disciplinary infractions. The benefits of such programs may be especially pronounced for students who come from low-income families, according to a 10-year study of 30,000 students released in 2019.

Unexpectedly, another recent study found that artistic commitment—think of a budding violinist or passionate young thespian—can boost executive function skills like focus and working memory, linking the arts to a set of overlooked skills that are highly correlated to success in both academics and life.

Failing to identify and support students with learning disabilities early can have dire, long-term consequences. In a comprehensive 2019 analysis, researchers highlighted the need to provide interventions that align with critical phases of early brain development. In one startling example, reading interventions for children with learning disabilities were found to be twice as effective if delivered by the second grade instead of third grade.

 

AI hiring could mean robot discrimination will head to courts — from news.bloomberglaw.com by Chris Opfer

  • Algorithm vendors, employers grappling with liability issues
  • EEOC already looking at artificial intelligence cases

Excerpt:

As companies turn to artificial intelligence for help making hiring and promotion decisions, contract negotiations between employers and vendors selling algorithms are being dominated by an untested legal question: Who’s liable when a robot discriminates?

The predictive strength of any algorithm is based at least in part on the information it is fed by human sources. That comes with concerns the technology could perpetuate existing biases, whether it is against people applying for jobs, home loans, or unemployment insurance.
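To make that concern concrete, here is a minimal sketch with synthetic data (every name and number below is hypothetical, not drawn from any real vendor’s system): a model trained on historically biased hiring decisions faithfully reproduces the disparity it was fed, even when the two groups are equally skilled.

```python
# Toy illustration with synthetic data: a model trained on biased
# historical hiring decisions learns to reproduce that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two applicant groups; skill is distributed identically in both.
group = rng.integers(0, 2, n)    # 0 or 1 (a protected attribute)
skill = rng.normal(0, 1, n)      # same distribution for both groups

# Historical labels: past recruiters favored group 0, independent of skill.
hired = (skill + 0.8 * (group == 0) + rng.normal(0, 0.5, n)) > 0.5

# Whether the protected attribute is an explicit feature or leaks in
# through proxies (zip code, school, word choice), the model learns it.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"predicted hire rate for group {g}: {rate:.2f}")
# Group 0 gets a much higher predicted hire rate at equal skill: the
# algorithm faithfully perpetuates the pattern in its training data.
```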

From DSC:
Are law schools and their faculty/students keeping up with these kinds of issues? Are lawyers, judges, attorneys general, and others informed about these emerging technologies?

 

Welcome to the future! The future of work is… — from gettingsmart.com

Excerpt:

The future of work is here, and with it, new challenges — so what does this mean for teaching and learning? It means more contribution and young people learning how to make a difference. In our exploration of the #futureofwork, sponsored by eduInnovation and powered by Getting Smart, we dive into what’s happening, what’s coming and how schools might prepare.

 

A face-scanning algorithm increasingly decides whether you deserve the job — from washingtonpost.com by Drew Harwell
HireVue claims it uses artificial intelligence to decide who’s best for a job. Outside experts call it ‘profoundly disturbing.’

Excerpt:

An artificial intelligence hiring system has become a powerful gatekeeper for some of America’s most prominent employers, reshaping how companies assess their workforce — and how prospective employees prove their worth.

Designed by the recruiting-technology firm HireVue, the system uses candidates’ computer or cellphone cameras to analyze their facial movements, word choice and speaking voice before ranking them against other applicants based on an automatically generated “employability” score.
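HireVue’s actual features and weights are proprietary, so the following is a purely hypothetical sketch of the general shape of such a ranking system: extract per-candidate scores, combine them with weights, and sort. Note how mechanical the gatekeeping step becomes once the weights are fixed.

```python
# Purely hypothetical sketch of an automated "employability" ranking.
# The feature names and weights are invented; HireVue's are proprietary.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    facial_movement: float   # hypothetical feature, scaled 0..1
    word_choice: float       # hypothetical feature, scaled 0..1
    speaking_voice: float    # hypothetical feature, scaled 0..1

# In a real system these weights would be learned from past "successful
# hire" data, which is exactly where historical bias can creep in.
WEIGHTS = {"facial_movement": 0.40, "word_choice": 0.35, "speaking_voice": 0.25}

def employability(c: Candidate) -> float:
    """Combine feature scores into a single score used to rank applicants."""
    return (WEIGHTS["facial_movement"] * c.facial_movement
            + WEIGHTS["word_choice"] * c.word_choice
            + WEIGHTS["speaking_voice"] * c.speaking_voice)

applicants = [
    Candidate("A", 0.7, 0.9, 0.6),
    Candidate("B", 0.9, 0.5, 0.8),
    Candidate("C", 0.4, 0.8, 0.9),
]

# The "gatekeeping" step: rank candidates by score, highest first.
for c in sorted(applicants, key=employability, reverse=True):
    print(f"{c.name}: {employability(c):.2f}")
```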

 

The system, they argue, will assume a critical role in helping decide a person’s career. But they doubt it even knows what it’s looking for: Just what does the perfect employee look and sound like, anyway?

“It’s a profoundly disturbing development that we have proprietary technology that claims to differentiate between a productive worker and a worker who isn’t fit, based on their facial movements, their tone of voice, their mannerisms,” said Meredith Whittaker, a co-founder of the AI Now Institute, a research center in New York.

 

From DSC:
If you haven’t been screened out by an Applicant Tracking System’s algorithm recently, then you haven’t been looking for a job in the last few years. If that’s the case:

  • Then you might not be very interested in this posting.
  • You will be very surprised in the future when you do need to search for a new job.

Because the truth is, it’s very difficult to get a human being to even look at your resume, let alone to meet you in person. The article above should disturb you even more: I don’t think that programmers have captured everything inside an experienced HR professional’s mind.

 

Also see:

  • In case after case, courts reshape the rules around AI — from muckrock.com
    AI Now Institute recommends improvements and highlights key AI litigation
    Excerpt:
    When undercover officers with the Jacksonville Sheriff’s Office bought crack cocaine from someone in 2015, they couldn’t actually identify the seller. Less than a year later, though, Willie Allen Lynch was sentenced to 8 years in prison, picked out by a facial recognition system. He’s still fighting in court over how the technology was used, and his case and others like it could ultimately shape the use of algorithms going forward, according to a new report.
 

Everyday Media Literacy — from routledge.com by Sue Ellen Christian
An Analog Guide for Your Digital Life, 1st Edition

Description:

In this graphic guide to media literacy, award-winning educator Sue Ellen Christian offers students an accessible, informed and lively look at how they can consume and create media intentionally and critically.

The straight-talking textbook offers timely examples and relevant activities to equip students with the skills and knowledge they need to assess all media, including news and information. Through discussion prompts, writing exercises, key terms, online links and even origami, readers are provided with a framework from which to critically consume and create media in their everyday lives. Chapters examine news literacy, online activism, digital inequality, privacy, social media and identity, global media corporations and beyond, giving readers a nuanced understanding of the key concepts and concerns at the core of media literacy.

Concise, creative and curated, this book highlights the cultural, political and economic dynamics of media in our contemporary society, and how consumers can mindfully navigate their daily media use. Everyday Media Literacy is perfect for students (and educators) of media literacy, journalism, education and media effects looking to build their understanding in an engaging way.

 

YouTube’s algorithm hacked a human vulnerability, setting a dangerous precedent — from which-50.com by Andrew Birmingham

Excerpt (emphasis DSC):

Even as YouTube’s recommendation algorithm was rolled out with great fanfare, the fuse was already burning. A project of Google Brain and designed to optimise engagement, it did something unforeseen — and potentially dangerous.

Today, we are all living with the consequences.

As Zeynep Tufekci, an associate professor at the University of North Carolina, explained to attendees of Hitachi Vantara’s Next 2019 conference in Las Vegas this week, “What the developers did not understand at the time is that YouTube’s algorithm had discovered a human vulnerability. And it was using this [vulnerability] at scale to increase YouTube’s engagement time — without a single engineer thinking, ‘is this what we should be doing?’”

 

The vulnerability — a natural human tendency to engage with edgier ideas — led to YouTube’s users being exposed to increasingly extreme content, irrespective of their preferred areas of interest.

“What they had done was use machine learning to increase watch time. But what the machine learning system had done was to discover a human vulnerability. And that human vulnerability is that things that are slightly edgier are more attractive and more interesting.”
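As a toy illustration of the mechanism Tufekci describes (a simple bandit simulation, not YouTube’s actual system; the watch-time numbers are invented), a recommender rewarded only for watch time concentrates on whatever content earns the most of it:

```python
# Toy bandit simulation (not YouTube's actual system): a recommender
# rewarded only for watch time drifts toward the content that maximizes it.
import random

random.seed(42)

# Arms = content buckets ordered by "edginess". Per the article's claim,
# assume slightly edgier content yields slightly longer watch times.
mean_watch_minutes = [2.0, 2.5, 3.1, 3.8, 4.6]   # invented values

counts = [0] * len(mean_watch_minutes)
value = [0.0] * len(mean_watch_minutes)          # running avg watch time

def recommend(epsilon=0.1):
    if random.random() < epsilon:                # occasionally explore
        return random.randrange(len(value))
    return max(range(len(value)), key=lambda a: value[a])   # else exploit

for _ in range(10_000):
    arm = recommend()
    watch = random.gauss(mean_watch_minutes[arm], 1.0)  # simulated viewer
    counts[arm] += 1
    value[arm] += (watch - value[arm]) / counts[arm]    # update estimate

print("share of recommendations per edginess bucket:")
for arm, c in enumerate(counts):
    print(f"  bucket {arm}: {c / sum(counts):.1%}")
# The edgiest bucket ends up dominating. No engineer told it to do that;
# the objective function did.
```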

 

From DSC:
Just because we can…

 

AI is in danger of becoming too male — new research — from singularityhub.com by Juan Mateos-Garcia and Joysy John

Excerpts (emphasis DSC):

But current AI systems are far from perfect. They tend to reflect the biases of the data used to train them and to break down when they face unexpected situations.

So do we really want to turn these bias-prone, brittle technologies into the foundation stones of tomorrow’s economy?

One way to minimize AI risks is to increase the diversity of the teams involved in their development. As research on collective decision-making and creativity suggests, groups that are more cognitively diverse tend to make better decisions. Unfortunately, this is a far cry from the situation in the community currently developing AI systems. And a lack of gender diversity is one important (although not the only) dimension of this.

A review published by the AI Now Institute earlier this year showed that less than 20 percent of the researchers applying to prestigious AI conferences are women, and that only a quarter of undergraduates studying AI at Stanford and the University of California at Berkeley are female.

 


From DSC:
My niece just left a very lucrative programming job and managerial role at Microsoft after working there for several years. As a single woman, she got tired of fighting the culture there. 

It was again a reminder to me that there are significant ramifications to the cultures of the big tech companies…especially given the power of these emerging technologies and the growing influence they are having on our culture.


Addendum on 8/20/19:

  • Google’s Hate Speech Detection A.I. Has a Racial Bias Problem — from fortune.com by Jonathan Vanian
    Excerpt:
    A Google-created tool that uses artificial intelligence to police hate speech in online comments on sites like the New York Times has become racially biased, according to a new study. The tool, developed by Google and a subsidiary of its parent company, often classified comments written in the African-American vernacular as toxic, researchers from the University of Washington, Carnegie Mellon, and the Allen Institute for Artificial Intelligence said in a paper presented in early August at the Association for Computational Linguistics conference in Florence, Italy.
  • On the positive side of things:
    Number of Female Students, Students of Color Tackling Computer Science AP on the Rise — from thejournal.com
 

Take a tour of Google Earth with speakers of 50 different indigenous languages — from fastcompany.com by Melissa Locker

Excerpt:

After the United Nations declared 2019 the International Year of Indigenous Languages, Google decided to help draw attention to the indigenous languages spoken around the globe and perhaps help preserve some of the endangered ones too. To that end, the company recently launched its first audio-driven collection, a new Google Earth tour complete with audio recordings from more than 50 indigenous language speakers from around the world.

 

State Attempts to Nix Public School’s Facial Recognition Plans — from futurism.com by Kristin Houser
But it might not have the authority to actually stop an upcoming trial.

Excerpt (emphasis DSC):

Chaos Reigns
New York’s Lockport City School District (CSD) was all set to become the first public school district in the U.S. to test facial recognition on its students and staff. But just two days after the school district’s superintendent announced the project’s June 3 start date, the New York State Education Department (NYSED) attempted to put a stop to the trial, citing concerns for students’ privacy. Still, it’s not clear whether the department has the authority to actually put the project on hold — *****the latest sign that the U.S. is in desperate need of clear-cut facial recognition legislation.*****

 

We Built an ‘Unbelievable’ (but Legal) Facial Recognition Machine — from nytimes.com by Sahil Chinoy

“The future of human flourishing depends upon facial recognition technology being banned,” wrote Woodrow Hartzog, a professor of law and computer science at Northeastern, and Evan Selinger, a professor of philosophy at the Rochester Institute of Technology, last year. “Otherwise, people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited.” Facial recognition is categorically different from other forms of surveillance, Mr. Hartzog said, and uniquely dangerous. Faces are hard to hide and can be observed from far away, unlike a fingerprint. Name and face databases of law-abiding citizens, like driver’s license records, already exist. And for the most part, facial recognition surveillance can be set up using cameras already on the streets. — Sahil Chinoy; per a weekly e-newsletter from Sam DeBrule at Machine Learnings in Berkeley, CA

Excerpt:

Most people pass through some type of public space in their daily routine — sidewalks, roads, train stations. Thousands walk through Bryant Park every day. But we generally think that a detailed log of our location, and a list of the people we’re with, is private. Facial recognition, applied to the web of cameras that already exists in most cities, is a threat to that privacy.

To demonstrate how easy it is to track people without their knowledge, we collected public images of people who worked near Bryant Park (available on their employers’ websites, for the most part) and ran one day of footage through Amazon’s commercial facial recognition service. Our system detected 2,750 faces from a nine-hour period (not necessarily unique people, since a person could be captured in multiple frames). It returned several possible identifications, including one frame matched to a head shot of Richard Madonna, a professor at the SUNY College of Optometry, with an 89 percent similarity score. The total cost: about $60.
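The Times did not publish its code, but the pipeline the article describes maps roughly onto Amazon Rekognition’s public API as sketched below. The collection name, file paths, and head-shot list are placeholders, not the Times’ actual setup.

```python
# Sketch of the pipeline described in the article, using Amazon
# Rekognition via boto3. Collection ID, file paths, and the frame list
# are placeholders; this is an illustration, not the Times' code.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

COLLECTION = "bryant-park-demo"   # hypothetical collection name
rekognition.create_collection(CollectionId=COLLECTION)

# 1. Index the publicly available head shots (e.g., from employer sites).
for person_id, path in [("richard_madonna", "headshots/madonna.jpg")]:
    with open(path, "rb") as f:
        rekognition.index_faces(
            CollectionId=COLLECTION,
            Image={"Bytes": f.read()},
            ExternalImageId=person_id,
        )

# 2. For each video frame, search the collection for matching faces.
with open("frames/frame_0001.jpg", "rb") as f:
    resp = rekognition.search_faces_by_image(
        CollectionId=COLLECTION,
        Image={"Bytes": f.read()},
        FaceMatchThreshold=80,   # only report matches above 80% similarity
        MaxFaces=5,
    )

for match in resp["FaceMatches"]:
    print(match["Face"]["ExternalImageId"],
          f'{match["Similarity"]:.0f}% similarity')
```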

 

From DSC:
What do you think about this emerging technology and its potential impact on our society — and on other societies like China? Again I ask…what kind of future do we want?

As for me, my face is against the use of facial recognition technology in the United States — as I don’t trust where this could lead.

This wild, wild west situation continues to develop. For example, note how AI and facial recognition get their foot in the door via technologies installed years ago:

The cameras in Bryant Park were installed more than a decade ago so that people could see whether the lawn was open for sunbathing, for example, or check how busy the ice skating rink was in the winter. They are not intended to be a security device, according to the corporation that runs the park.

So Amazon’s use of facial recognition is but another foot in the door. 

This needs to be stopped. Now.

 

Facial recognition technology is a menace disguised as a gift. It’s an irresistible tool for oppression that’s perfectly suited for governments to display unprecedented authoritarian control and an all-out privacy-eviscerating machine.

We should keep this Trojan horse outside of the city. (source)

 

AI’s white guy problem isn’t going away — from technologyreview.com by Karen Hao
A new report says current initiatives to fix the field’s diversity crisis are too narrow and shallow to be effective.

Excerpt:

The numbers tell the tale of the AI industry’s dire lack of diversity. Women account for only 18% of authors at leading AI conferences, 20% of AI professorships, and 15% and 10% of research staff at Facebook and Google, respectively. Racial diversity is even worse: black workers represent only 2.5% of Google’s entire workforce and 4% of Facebook’s and Microsoft’s. No data is available for transgender people and other gender minorities—but it’s unlikely the trend is being bucked there either.

This is deeply troubling when the influence of the industry has dramatically grown to affect everything from hiring and housing to criminal justice and the military. Along the way, the technology has automated the biases of its creators to alarming effect: devaluing women’s résumés, perpetuating employment and housing discrimination, and enshrining racist policing practices and prison convictions.

 

Along these lines, also see:

‘Disastrous’ lack of diversity in AI industry perpetuates bias, study finds — from theguardian.com by Kari Paul
Report says an overwhelmingly white and male field has reached ‘a moment of reckoning’ over discriminatory systems

Excerpt:

Lack of diversity in the artificial intelligence field has reached “a moment of reckoning”, according to new findings published by a New York University research center. A “diversity disaster” has contributed to flawed systems that perpetuate gender and racial biases, found the survey, published by the AI Now Institute, of more than 150 studies and reports.

The AI field, which is overwhelmingly white and male, is at risk of replicating or perpetuating historical biases and power imbalances, the report said. Examples cited include image recognition services making offensive classifications of minorities, chatbots adopting hate speech, and Amazon technology failing to recognize users with darker skin colors. The biases of systems built by the AI industry can be largely attributed to the lack of diversity within the field itself, the report said.

 

Addendum on 4/20/19:

Amazon is now making its delivery drivers take selfies — from theverge.com by Shannon Liao
It will then use facial recognition to double-check
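The “double-check” presumably amounts to a one-to-one face comparison: the new selfie against a reference photo on file. With a commercial API such as Rekognition’s CompareFaces, that is a single call; the sketch below is hypothetical, since Amazon has not said which service the delivery app actually uses.

```python
# Hypothetical sketch of a one-to-one "selfie vs. photo on file" check
# using Amazon Rekognition's CompareFaces API. File names are placeholders;
# Amazon hasn't disclosed how the delivery app's check is implemented.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("driver_photo_on_file.jpg", "rb") as ref, \
     open("todays_selfie.jpg", "rb") as probe:
    resp = rekognition.compare_faces(
        SourceImage={"Bytes": ref.read()},
        TargetImage={"Bytes": probe.read()},
        SimilarityThreshold=90,   # only return matches above 90%
    )

# Any entry in FaceMatches means the faces matched above the threshold.
verified = bool(resp["FaceMatches"])
print("identity verified" if verified else "no match above threshold")
```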

From DSC:
I don’t like this piece re: Amazon’s use of facial recognition at all. An organization like Amazon asserts that it needs facial recognition to deliver services to its customers, and then, the next thing we know, facial recognition gets its foot in the door…sneaks into society’s house through the back way. By then, it’s much harder to get rid of. We end up with what’s currently happening in China. I don’t want to pay for anything with my face. Ever. As Mark Zuckerberg has demonstrated time and again, I don’t trust humankind to handle this kind of power. Plus, the surveillance states being developed by several governments are a chilling thing indeed. China is already using this technology to identify and track Muslims.

China using AI to track Muslims

Can you think of some “groups” that people might be in that could be banned from receiving goods and services? I can. 

The appalling lack of privacy that’s going on in several societies throughout the globe has got to be stopped. 

 

Through the legal looking glass — from lodlaw.com by Lawyers On Demand (LOD) & Jordan Furlong

Excerpts (emphasis DSC):

But here’s the thing: Even though most lawyers’ career paths have twisted and turned and looped back in unexpected directions, the landscape over which they’ve zig-zagged these past few decades has been pretty smooth, sedate and predictable. The course of these lawyers’ careers might not have been foreseeable, but for the most part, the course of the legal profession was, and that made the twists and turns easier to navigate.

Today’s lawyers, or anyone who enters the legal profession in the coming years, probably won’t be as fortunate. The fundamental landscape of the law is being remade as we speak, and the next two decades in particular will feature upheavals and disruptions at a pace and on a scale we’ve not seen before — following and matching similar tribulations in the wider world. This report is meant to advise you of the likeliest (but by no means certain) nature and direction of the fault lines along which the legal career landscape will fracture and remake itself in the coming years. Our hope is to help you anticipate these developments and adjust your own career plans in response, on the fly if necessary.

So, before you proceed any further into this report — before you draw closer to answering the question, “Will I still want to be a lawyer tomorrow?” — you need to think about why you’re a lawyer today.

Starting within the next five years or so, we should begin to see more lawyers drawn towards fulfilling the profession’s vocational or societal role, rather than choosing to pursue a private-sector commercial path. This will happen because:

  • generational change will bring new attitudes to the profession,
  • technological advances will reduce private legal work opportunities, and
  • a series of public crises will drive more lawyers by necessity towards societal roles.


It seems likely enough, in fact, that we’re leaving the era in which law was predominantly viewed as a safe, prestigious, private career, and entering one in which law is just as often considered a challenging, self-sacrificial, public career. More lawyers will find themselves grouped with teachers, police officers, and social workers — positions that pay decently but not spectacularly, that play a difficult but critical role in the civic order. We could call this the rising career path of the civic lawyer.

But if your primary or even sole motivation for entering the law is to become a wealthy member of the financial and political elite, then we suggest you should start looking for alternatives now. These types of careers will be fewer and farther between, and we suspect they will be increasingly at odds with the emerging spirit and character of the profession.

A prediction (which they admit can be a fool’s errand):
Amazon buys LegalZoom in the US as part of its entry into the global services sector, offering discounted legal services to Prime members. Regulators’ challenges will fail, signalling the beginning of the end of lawyer control of the legal market.

 

Five Principles for Thinking Like a Futurist — from er.educause.edu by Marina Gorbis

Excerpt:

In 2018 we celebrated the fifty-year anniversary of the founding of the Institute for the Future (IFTF). No other futures organization has survived for this long; we’ve actually survived our own forecasts! In these five decades we learned a lot, and we still believe—even more strongly than before—that systematic thinking about the future is absolutely essential for helping people make better choices today, whether you are an individual or a member of an educational institution or government organization. We view short-termism as the greatest threat not only to organizations but to society as a whole.

In my twenty years at the Institute, I’ve developed five core principles for futures thinking:

  • Forget about predictions.
  • Focus on signals.*
  • Look back to see forward.
  • Uncover patterns.*
  • Create a community.

 

* From DSC:
I have a follow-up thought regarding those bullet points about signals and patterns. With today’s exponential pace of technological change, I have asserted for several years now that our students — and all of us, really — need to be skilled in pulse-checking the relevant landscapes around us. That’s why I’m a big fan of regularly tapping into — and contributing towards — streams of content: subscribing to RSS feeds, following organizations and/or individuals on Twitter, connecting with people on LinkedIn, and so on. Doing so will help us identify the trends, patterns, and signals that Marina talks about in her article.
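For the RSS piece of that habit, a few lines of Python are enough to pull the newest items from a set of feeds into a single stream to scan each morning. This is a minimal sketch using the third-party feedparser library; the feed URLs are examples only.

```python
# Minimal "pulse-checking" helper: gather the newest items from several
# RSS feeds into one stream. Requires: pip install feedparser.
# The feed URLs below are examples; substitute your own sources.
import feedparser

FEEDS = [
    "https://www.edutopia.org/rss.xml",       # example URL
    "https://www.gettingsmart.com/feed/",     # example URL
]

def latest(feeds, per_feed=5):
    """Return (published, title, link) tuples for recent entries."""
    items = []
    for url in feeds:
        parsed = feedparser.parse(url)
        for entry in parsed.entries[:per_feed]:
            items.append((entry.get("published", ""),
                          entry.get("title", ""),
                          entry.get("link", "")))
    return items

for published, title, link in latest(FEEDS):
    print(f"{published}  {title}\n    {link}")
```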

It reminds me of the following graphic from January 2017:
