From DSC:
My brother-in-law sent me the link to the video below. It's a very solid interview about racism and some possible solutions to it. It offers some important lessons for us.

A heads up here: There's some adult language in this piece — from the interviewer, not the interviewee (i.e., you know…several of those swear words that I've been trying since second grade to get rid of in my vocabulary! Sorry to report that I haven't enjoyed much success in that area. Thanks for your patience, LORD…the work/process continues).

While I have several pages' worth of notes (because that's just how I best process information and stay focused), I'll comment on just a couple of things:

* A 10-year-old boy has rocks thrown at him by adults and kids and rightfully asks, "Why are they doing this to me when they don't even *know* me?!" That burning question led Mr. Daryl Davis on a decades-long search for the answer to that excellent question.

* Years later, Daryl surmised that this phenomenon was/is at play: unchecked ignorance leads to fear –> unchecked fear leads to hatred –> unchecked hatred leads to destruction. One of the best ways to stop this is education and exposure to the truth — which we can get by being with and talking with each other. How true.
One of the best things my parents ever did was move us from a predominantly white neighborhood and school setting to a far more diverse one. Prior to the move, we used to hear (and likely believed) that "There are all kinds of guns and knives at this junior high school and at this high school. Violence abounds there." After moving and getting exposure to the people and learning environments at those schools, we realized that the fear was a lie…a lie born of ignorance. The truth/reality was different from the lie/ignorance.
* Mr. Daryl Davis is an instrument of peace. He is:
  • Highly articulate
  • A multi-talented gentleman
  • A deep thinker
  • …and an eloquent communicator.

I thanked my brother-in-law for the link to the interview.


Also see:

Healing Racial Trauma: The Road to Resilience — from christianbook.com by Sheila Wise Rowe

Product Description
As a child, Sheila Wise Rowe was bused across town to a majority white school, where she experienced the racist lie that one group is superior to all others. This lie continues to be perpetuated today by the action or inaction of the government, media, viral videos, churches, and within families of origin. In contrast, Scripture declares that we are all fearfully and wonderfully made.

Rowe, a professional counselor, exposes the symptoms of racial trauma to lead readers to a place of freedom from the past and new life for the future. In each chapter, she includes an interview with a person of color to explore how we experience and resolve racial trauma. With Rowe as a reliable guide who has both been on the journey and shown others the way forward, you will find a safe pathway to resilience.

 

 

From DSC:
As some of you may know, I’m now working for the WMU-Thomas M. Cooley Law School. My faith gets involved here, but I believe that the LORD wanted me to get involved with:

  • Using technology to increase access to justice (#A2J)
  • Contributing to leveraging the science of learning for the long-term benefit of our students, faculty, and staff
  • Raising awareness regarding the potential pros and cons of today’s emerging technologies
  • Increasing the understanding that the legal realm has a looooong way to go to get (even somewhat) caught up with the impacts that such emerging technologies can/might have on us
  • Contributing and collaborating with others to help develop a positive future, not a negative one.

Along these lines…regarding what's been happening with law schools over the last few years, I wanted to share a couple of things:

1) An article from The Chronicle of Higher Education by Benjamin Barton:

The Law School Crash

 

2) A response from our President and Dean, James McGrath: Repositioning a Law School for the New Normal

 

From DSC:
I also wanted to personally say that I arrived at WMU-Cooley Law School in 2018, and have been learning a lot there (which I love about my job!).  Cooley employees are very warm, welcoming, experienced, knowledgeable, and professional. Everyone there is mission-driven. My boss, Chris Church, is multi-talented and excellent. Cooley has a great administrative/management team as well.

There have been many exciting, new things happening there. That said, it will take time before we see the results of these changes. Perseverance and innovation will be key ingredients in crafting a modern legal education — especially in an industry that is only now beginning to offer online-based courses at the Juris Doctor (J.D.) level, roughly 20 years after online learning began occurring within undergraduate higher education.

My point in posting this is to say that we should ALL care about what’s happening within the legal realm!  We are all impacted by it, whether we realize it or not. We are all in this together and no one is an island — not as individuals, and not as organizations.

We need:

  • Far more diversity within the legal field
  • More technical expertise within the legal realm — not only among lawyers, but among legislators, senators, representatives, judges, and others
  • Greater use of teams of specialists within the legal field
  • More courses regarding emerging technologies — not only for legal practices themselves but also for society at large
  • Far more vigilance in crafting a positive world to hand down to our kids and grandkids — a dream, not a nightmare. Just because we can, doesn't mean we should.

Still not convinced that you should care? Here are some things on the CURRENT landscapes:

  • You go to drop something off at your neighbor's house. They have a camera that gets activated. What facial-recognition database are you now in? Did you give your consent to that? No, you didn't.
  • Because you posted your photo on Facebook, YouTube, Venmo and/or on millions of other websites, your face could be in ClearView AI’s database. Did you give your consent to that occurring? No, you didn’t.
  • You’re at the airport and facial recognition is used instead of a passport. Whose database was that from and what gets shared? Did you give your consent to that occurring? Probably not, and it’s not easy to opt-out either.
  • Numerous types of drones, delivery bots, and more are already coming onto the scene. What will the sidewalks, streets, and skies look like — and sound like — in your neighborhood in the near future? Is that how you want it? Did you give your consent to that happening? No, you didn’t.
  • …and on and on it goes.

Addendum — speaking of islands!

Palantir CEO: Silicon Valley can’t be on ‘Palo Alto island’ — Big Tech must play by the rules — from cnbc.com by Jessica Bursztynsky

Excerpt:

Palantir Technologies co-founder and CEO Alex Karp said Thursday the core problem in Silicon Valley is the attitude among tech executives that they want to be separate from United States regulation.

“You cannot create an island called Palo Alto Island,” said Karp, who suggested tech leaders would rather govern themselves. “What Silicon Valley really wants is the canton of Palo Alto. We have the United States of America, not the ‘United States of Canton,’ one of which is Palo Alto. That must change.”

“Consumer tech companies, not Apple, but the other ones, have basically decided we’re living on an island and the island is so far removed from what’s called the United States in every way, culturally, linguistically and in normative ways,” Karp added.

 

 

From DSC:
I’ll say it again, just because we can, doesn’t mean we should.

From the article below, we can see another unintended consequence developing on society's landscapes. I really wish the 20- and 30-somethings being hired by the big tech companies — especially at Amazon, Facebook, Google, Apple, and Microsoft — who are developing these things would ask themselves:

  • “Just because we can develop this system/software/application/etc., SHOULD we be developing it?”
  • What might the negative consequences be? 
  • Do the positive contributions outweigh the negative impacts…or not?

To college professors and teachers:
Please pass these thoughts on to your students now, so that this internal questioning/conversation begins to take place in K-16.


Report: Colleges Must Teach ‘Algorithm Literacy’ to Help Students Navigate Internet — from edsurge.com by Rebecca Koenig

Excerpt (emphasis DSC):

If the Ancient Mariner were sailing on the internet’s open seas, he might conclude there’s information everywhere, but nary a drop to drink.

That’s how many college students feel, anyway. A new report published this week about undergraduates’ impressions of internet algorithms reveals students are skeptical of and unnerved by tools that track their digital travels and serve them personalized content like advertisements and social media posts.

And some students feel like they’ve largely been left to navigate the internet’s murky waters alone, without adequate guidance from teachers and professors.

Researchers set out to learn “how aware students are about their information being manipulated, gathered and interacted with,” said Alison Head, founder and director of Project Information Literacy, in an interview with EdSurge. “Where does that awareness drop off?”

They found that many students not only have personal concerns about how algorithms compromise their own data privacy but also recognize the broader, possibly negative implications of tools that segment and customize search results and news feeds.

 

From DSC:
Very disturbing that citizens had no say in this. Legislators, senators, representatives, lawyers, law schools, politicians, engineers, programmers, professors, teachers, and more…please reflect upon our current situation here. How can we help create the kind of future that we can hand down to our kids and rest well at night…knowing we did all that we could to provide a dream — and not a nightmare — for them?


The Secretive Company That Might End Privacy as We Know It — from nytimes.com by Kashmir Hill
A little-known start-up helps law enforcement match photos of unknown people to their online images — and “might lead to a dystopian future or something,” a backer says.

His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.
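From DSC: For readers curious how such a system works at a high level, here is a rough, generic sketch — hypothetical data and URLs, not Clearview's actual code. The common pattern is to convert each face into a numeric "embedding" vector and then search for the stored vectors closest to a query face:

```python
# Generic sketch of a face-search pipeline (hypothetical data, not any
# real vendor's code): faces become embedding vectors, and a search
# returns the source URLs of the most similar stored vectors.
import math

def cosine_similarity(a, b):
    # Similarity of two embedding vectors, in [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical "scraped" gallery: embedding vectors keyed by source URL.
gallery = {
    "site-a.example/photo1": [0.9, 0.1, 0.3],
    "site-b.example/photo7": [0.2, 0.8, 0.5],
}

def search(query_embedding, gallery, top_k=1):
    # Rank every stored face by similarity to the query face.
    ranked = sorted(gallery,
                    key=lambda url: cosine_similarity(query_embedding, gallery[url]),
                    reverse=True)
    return ranked[:top_k]

print(search([0.88, 0.12, 0.28], gallery))  # → ['site-a.example/photo1']
```

The privacy issue follows directly from the design: once a face is in the gallery, every future query can link it back to its source URL — with no consent step anywhere in the loop.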

 

Excerpts:

“But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year…”

Clearview’s app carries extra risks because law enforcement agencies are uploading sensitive photos to the servers of a company whose ability to protect its data is untested.

 

Indian police are using facial recognition to identify protesters in Delhi — from fastcompany.com by Kristin Toussaint

Excerpt:

At Modi’s rally on December 22, Delhi police used Automated Facial Recognition System (AFRS) software—which officials there acquired in 2018 as a tool to find and identify missing children—to screen the crowd for faces that match a database of people who have attended other protests around the city, and who officials said could be disruptive.

According to the Indian Express, Delhi police have long filmed these protest events, and the department announced Monday that officials fed that footage through AFRS. Sources told the Indian news outlet that once “identifiable faces” are extracted from that footage, a dataset will point out and retain “habitual protesters” and “rowdy elements.” That dataset was put to use at Modi’s rally to keep away “miscreants who could raise slogans or banners.”

 

From DSC:
Here in the United States…are we paying attention to today's emerging technologies and collaboratively working to create a future dream — versus a future nightmare!?! A vendor or organization might propose a beneficial reason to use their product or technology — and it might even meet the hype at times — but then come other unintended uses and consequences of that technology. For example, in the article above, what started out as a technology that was supposed to be used to find/identify missing children (a benefit) was later used to identify protesters (an unintended consequence — and, I might add, a nightmare in terms of such an expanded scope of use)!

Along these lines, the youth of today have every right to voice their opinions and to have a role in developing — or torpedoing — emerging techs. What we build and put into place now will impact their lives big time!

 

7 Artificial Intelligence Trends to Watch in 2020 — from interestingengineering.com by Christopher McFadden

Excerpts:

Per this article, the following trends were listed:

  1. Computer Graphics will greatly benefit from AI
  2. Deepfakes will only get better, er, worse
  3. Predictive text should get better and better
  4. Ethics will become more important as time goes by
  5. Quantum computing will supercharge AI
  6. Facial recognition will appear in more places
  7. AI will help in the optimization of production pipelines

Also, this article listed several more trends:

According to sources like The Next Web, some of the main AI trends for 2020 include:

  • The use of AI to make healthcare more accurate and less costly
  • Greater attention paid to explainability and trust
  • AI becoming less data-hungry
  • Improved accuracy and efficiency of neural networks
  • Automated AI development
  • Expanded use of AI in manufacturing
  • Geopolitical implications for the uses of AI

Artificial Intelligence offers great potential and great risks for humans in the future. While still in its infancy, it is already being employed in some interesting ways.

According to sources like Forbes, some of the next “big things” in technology include, but are not limited to:

  • Blockchain
  • Blockchain As A Service
  • AI-Led Automation
  • Machine Learning
  • Enterprise Content Management
  • AI For The Back Office
  • Quantum Computing AI Applications
  • Mainstreamed IoT

Also see:

Artificial intelligence predictions for 2020: 16 experts have their say — from verdict.co.uk by Ellen Daniel

Excerpts:

  • Organisations will build in processes and policies to prevent and address potential biases in AI
  • Deepfakes will become a serious threat to corporations
  • Candidate (and employee) care in the world of artificial intelligence
  • AI will augment humans, not replace them
  • Greater demand for AI understanding
  • Ramp up in autonomous vehicles
  • To fully take advantage of AI technologies, you’ll need to retrain your entire organisation
  • Voice technologies will infiltrate the office
  • IT will run itself while data acquires its own DNA
  • The ethics of AI
  • Health data and AI
  • AI to become an intrinsic part of robotic process automation (RPA)
  • BERT will open up a whole new world of deep learning use cases

The hottest trend in the industry right now is in Natural Language Processing (NLP). Over the past year, a new method called BERT (Bidirectional Encoder Representations from Transformers) has been developed for designing neural networks that work with text. Now, we suddenly have models that will understand the semantic meaning of what’s in text, going beyond the basics. This creates a lot more opportunity for deep learning to be used more widely.
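From DSC: here's a toy illustration of the "bidirectional" idea — this is invented dictionary-lookup code, not how BERT actually works. A left-to-right model only sees words *before* the target, so it can miss a disambiguating clue that appears after the word; a model that reads in both directions catches it:

```python
# Toy illustration (NOT real BERT): why bidirectional context matters
# when disambiguating a word like "bank".

def left_context_sense(tokens, i):
    # Unidirectional: only words to the LEFT of position i are visible.
    clues = {"deposited": "finance", "fished": "nature"}
    for t in tokens[:i]:
        if t in clues:
            return clues[t]
    return "unknown"

def bidirectional_sense(tokens, i):
    # Bidirectional: words on BOTH sides of position i are visible.
    clues = {"deposited": "finance", "money": "finance",
             "fished": "nature", "river": "nature"}
    for t in tokens[:i] + tokens[i + 1:]:
        if t in clues:
            return clues[t]
    return "unknown"

sent = "the bank that held my money".split()
i = sent.index("bank")
print(left_context_sense(sent, i))   # "unknown" — no clue before "bank"
print(bidirectional_sense(sent, i))  # "finance" — "money" after it resolves the sense
```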

 

 

Art-filled journeys into the future — methods of futures education for children in lower stage comprehensive school — from kultus.fi by Ilpo Rybatzki and Otto Tähkäpää

Art-filled futures education

 

See this PDF file which contains the following excerpt:

In art, futures literacy plays a significant role. Art has the ability to point elsewhere; to fool and mess around with things and shake up conventions without needing to achieve measurable benefits (Varto, 2008). Art ensures a solid background for imagining alternative worlds. It is important to support a permissive atmosphere that supports experimentation! From the perspective of art pedagogy, activities focus on the idea of art experience as meeting place (Pääjoki, 2004) where people can see themselves in a new light beside another person’s thoughts and imagination. Strengthening futures literacy means supporting transformative learning that aims for change. Through this type of learning, we can question norms, roles, identities and the concept of what is ‘normal’ (Lehtonen et al., 2018).

When discussing the future, we are always discussing values: what kind of future is desirable for any one person? Artistic activity can produce materials through which human meanings can be communicated from one person to another and questions about values in life can be discussed (Varto, 2008; Valkeapää, 2012). Encounters create opportunities for dialogue and enriching one's perspectives. Important aspects include creating safe settings, the individual expression of the participants, the courage to open up and throw oneself into the centre of an experience, as well as the courage to question or even completely let go of presumptions. In the age of the environmental crisis, art has a critical role in all of society. We cannot solve difficult problems using the same kind of thinking that created the problems in the first place.

 

Don’t trust AI until we build systems that earn trust — from economist.com
Progress in artificial intelligence belies a lack of transparency that is vital for its adoption, says Gary Marcus, coauthor of “Rebooting AI”

Excerpts:

Mr Marcus argues that it would be foolish of society to put too much stock in today’s AI techniques since they are so prone to failures and lack the transparency that researchers need to understand how algorithms reached their conclusions.

As part of The Economist’s Open Future initiative, we asked Mr Marcus about why AI can’t do more, how to regulate it and what teenagers should study to remain relevant in the workplace of the future.

Trustworthy AI has to start with good engineering practices, mandated by laws and industry standards, both of which are currently largely absent. Too much of AI thus far has consisted of short-term solutions, code that gets a system to work immediately, without a critical layer of engineering guarantees that are often taken for granted in other fields. The kinds of stress tests that are standard in the development of an automobile (such as crash tests and climate challenges), for example, are rarely seen in AI. AI could learn a lot from how other engineers do business.

The assumption in AI has generally been that if it works often enough to be useful, then that’s good enough, but that casual attitude is not appropriate when the stakes are high. It’s fine if autotagging people in photos turns out to be only 90 percent reliable—if it is just about personal photos that people are posting to Instagram—but it better be much more reliable when the police start using it to find suspects in surveillance photos.
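From DSC: Mr. Marcus's point about stakes can be made concrete with a quick back-of-the-envelope calculation — the numbers below are hypothetical. An error rate that is tolerable for tagging a personal photo album produces a flood of false hits when run against a large surveillance gallery:

```python
# Hypothetical numbers: expected count of false hits when one query face
# is compared against every face in a gallery.
def expected_false_hits(specificity, gallery_size):
    # specificity = probability a NON-matching face is correctly rejected
    return (1 - specificity) * gallery_size

# A 200-photo personal album vs. a 1,000,000-face surveillance gallery,
# at the same per-comparison error rate:
print(expected_false_hits(0.999, 200))        # about 0.2 false hits — tolerable
print(expected_false_hits(0.999, 1_000_000))  # about 1,000 false hits — not
```

Same error rate, wildly different consequences — which is exactly why "works often enough to be useful" is not good enough when the stakes are high.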

 

2019 AI report tracks profound growth — from ide.mit.edu by Paula Klein

Excerpt:

Until now “we’ve been sorely lacking good data about basic questions like ‘How is the technology advancing’ and ‘What is the economic impact of AI?’ ” Brynjolfsson said. The new index, which tracks three times as many data sets as last year’s report, goes a long way toward providing answers.

  1. Education
  • At the graduate level, AI has rapidly become the most popular specialization among computer science PhD students in North America. In 2018, over 21% of graduating Computer Science PhDs specialized in Artificial Intelligence/Machine Learning.
  • Industry is the largest consumer of AI talent. In 2018, over 60% of AI PhD graduates went to industry, up from 20% in 2004.
  • In the U.S., AI faculty leaving academia for industry continues to accelerate, with over 40 departures in 2018, up from 15 in 2012 and none in 2004.

 


 

Greta Thunberg is the youngest TIME Person of the Year ever. Here’s how she made history — from time.com

Excerpt:

The politics of climate action are as entrenched and complex as the phenomenon itself, and Thunberg has no magic solution. But she has succeeded in creating a global attitudinal shift, transforming millions of vague, middle-of-the-night anxieties into a worldwide movement calling for urgent change. She has offered a moral clarion call to those who are willing to act, and hurled shame on those who are not. She has persuaded leaders, from mayors to Presidents, to make commitments where they had previously fumbled: after she spoke to Parliament and demonstrated with the British environmental group Extinction Rebellion, the U.K. passed a law requiring that the country eliminate its carbon footprint. She has focused the world’s attention on environmental injustices that young indigenous activists have been protesting for years. Because of her, hundreds of thousands of teenage “Gretas,” from Lebanon to Liberia, have skipped school to lead their peers in climate strikes around the world.

 

Young people! You CAN and will make a big impact/difference!

 

Artificial Intelligence has a gender problem — why it matters for everyone — from nbcnews.com by Halley Bondy
To fight the rise of bias in AI, more representation is critical in the computing workforce, where only 26 percent of workers are women, 3 percent are African-American women, and 2 percent are Latinx.

Excerpt:

More women and minorities must work in tech, or else they risk being left behind in every industry.

This grim future was painted by Artificial Intelligence (AI) equality experts who spoke at a conference Thursday hosted by LivePerson, an AI company that connects brands and consumers.

In that future, if AI goes unchecked, workplaces will be completely homogenous, hiring only white, nondisabled men.

Guest speaker Cathy O'Neil, who authored "Weapons of Math Destruction," explained how hiring bias works with AI: company algorithms are created by (mostly white male) data scientists, and they are based on the company's historic wins. If a CEO is specifically looking for hires who won't leave the company after a year, for example, he might turn to AI to look for candidates based on his company's retention rates. Chances are, most of his company's historic wins only include white men, said O'Neil.
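From DSC: Ms. O'Neil's point can be sketched in a few lines of entirely invented code — a model scored on "historic wins" simply learns the demographics of past hires, not anything about actual job performance:

```python
# Toy sketch (invented data): a scorer trained on past "successful" hires
# reproduces the bias baked into that history.
from collections import Counter

past_hires = ["white_male"] * 90 + ["woman"] * 8 + ["minority"] * 2

def naive_model_score(candidate_group, history):
    # Score = how often this group appears among past "wins".
    counts = Counter(history)
    return counts[candidate_group] / len(history)

print(naive_model_score("white_male", past_hires))  # 0.9  — favored
print(naive_model_score("minority", past_hires))    # 0.02 — screened out
```

Nothing in the code mentions race or gender as a goal — the bias rides in entirely on the training data.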

 

The future of law and computational technologies: Two sides of the same coin — from law.mit.edu by Daniel Linna
Law and computation are often thought of as being two distinct fields. Increasingly, that is not the case. Dan Linna explores the ways a computational approach could help address some of the biggest challenges facing the legal industry.

Excerpt:

The rapid advancement of artificial intelligence (“AI”) introduces opportunities to improve legal processes and facilitate social progress. At the same time, AI presents an original set of inherent risks and potential harms. From a Law and Computational Technologies perspective, these circumstances can be broadly separated into two categories. First, we can consider the ethics, regulations, and laws that apply to technology. Second, we can consider the use of technology to improve the delivery of legal services, justice systems, and the law itself. Each category presents an unprecedented opportunity to use significant technological advancements to preserve and expand the rule of law.

For basic legal needs, access to legal services might come in the form of smartphones or other devices that are capable of providing users with an inventory of their legal rights and obligations, as well as providing insights and solutions to common legal problems. Better yet, AI and pattern matching technologies can help catalyze the development of proactive approaches to identify potential legal problems and prevent them from arising, or at least mitigate their risk.
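From DSC: as a rough sketch of the kind of pattern matching Mr. Linna describes — the issue categories and keywords below are hypothetical, not a real system — even a simple keyword matcher hints at how software could flag common legal issues in a plain-language description of a situation:

```python
# Toy issue-spotter (hypothetical rules): flag likely legal issues in a
# plain-language description by matching keyword patterns.
ISSUE_PATTERNS = {
    "landlord-tenant": ["eviction", "security deposit", "lease"],
    "employment": ["fired", "overtime", "wages"],
    "consumer-debt": ["debt collector", "garnish", "repossess"],
}

def spot_issues(description):
    text = description.lower()
    return sorted(issue for issue, keywords in ISSUE_PATTERNS.items()
                  if any(k in text for k in keywords))

print(spot_issues("My landlord kept my security deposit after I was fired."))
# → ['employment', 'landlord-tenant']
```

A real access-to-justice tool would need far more than keywords, but the shape is the same: detect a potential problem early, then point the person toward their rights and options.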

We risk squandering abundant opportunities to improve society with computational technologies if we fail to proactively create frameworks to embed ethics, regulation, and law into our processes by design and default.

To move forward, technologists and lawyers must radically expand current notions of interdisciplinary collaboration. Lawyers must learn about technology, and technologists must learn about the law.

 

 

Considering AI in hiring? As its use grows, so do the legal implications for employers. — from forbes.com by Alonzo Martinez; with thanks to Paul Czarapata for his posting on Twitter on this

Excerpt:

As employers grapple with a widespread labor shortage, more are turning to artificial intelligence tools in their search for qualified candidates.

Hiring managers are using increasingly sophisticated AI solutions to streamline large parts of the hiring process. The tools scrape online job boards and evaluate applications to identify the best fits. They can even stage entire online interviews and scan everything from word choice to facial expressions before recommending the most qualified prospects.

But as the use of AI in hiring grows, so do the legal issues surrounding it. Critics are raising alarms that these platforms could lead to discriminatory hiring practices. State and federal lawmakers are passing or debating new laws to regulate them. And that means organizations that implement these AI solutions must not only stay abreast of new laws, but also look at their hiring practices to ensure they don’t run into legal trouble when they deploy them.

 

Amazon’s Ring planned neighborhood “watch lists” built on facial recognition — from theintercept.com by Sam Biddle

Excerpts (emphasis DSC):

Ring, Amazon’s crime-fighting surveillance camera division, has crafted plans to use facial recognition software and its ever-expanding network of home security cameras to create AI-enabled neighborhood “watch lists,” according to internal documents reviewed by The Intercept.

Previous reporting by The Intercept and The Information revealed that Ring has at times struggled to make facial recognition work, instead relying on remote workers from Ring’s Ukraine office to manually “tag” people and objects found in customer video feeds.

Legal scholars have long criticized the use of governmental watch lists in the United States for their potential to ensnare innocent people without due process. “When corporations create them,” said Tajsar, “the dangers are even more stark.” As difficult as it can be to obtain answers on the how and why behind a federal blacklist, American tech firms can work with even greater opacity: “Corporations often operate in an environment free from even the most basic regulation, without any transparency, with little oversight into how their products are built and used, and with no regulated mechanism to correct errors,” Tajsar said.

 

From DSC:
Those working or teaching within the legal realm — this one’s for you. But it’s also for the leadership of the C-Suites in our corporate world — as well as for all of those programmers, freelancers, engineers, and/or other employees working on AI within the corporate world.

By the way, and not to get all political here…but who’s to say what happens with our data when it’s being reviewed in Ukraine…?

 

Also see:

  • Opinion: AI for good is often bad — from wired.com by Mark Latonero
    Trying to solve poverty, crime, and disease with (often biased) technology doesn’t address their root causes.
 
© 2024 | Daniel Christian