Report: There’s More to Come for AI in Ed — from thejournal.com by Dian Schaffhauser

Excerpts:

The group came up with dozens of “opportunities” for AI in education, from extending what teachers can do to better understanding human learning:

  • Using virtual instructors to free up “personalization time” for classroom teachers;
  • Offloading the “cognitive load” of teaching;
  • Providing “job aids” for teachers;
  • Identifying the links between courses, credentials, degrees and skills;
  • “Revolutionizing” testing and assessment;
  • Creating new kinds of “systems of support”;
  • Helping with development of “teaching expertise”; and
  • Better understanding human learning through “modeling and building interfaces” in AI.

But contributors also offered just as many barriers to success:

  • Differences in the way teachers teach would require “different job aids”;
  • Teachers would fear losing their jobs;
  • Data privacy concerns;
  • Bias worries;
  • Dealing with unrealistic expectations and fears about AI pushed in “popular culture”;
  • Lack of diversity in gender, ethnicity and culture in AI projects; and
  • Smart use of data would require more teacher training.
 

From DSC:
The good…

London A.I. lab claims breakthrough that could accelerate drug discovery — from nytimes.com
Researchers at DeepMind say they have solved “the protein folding problem,” a task that has bedeviled scientists for more than 50 years

This long-sought breakthrough could accelerate the ability to understand diseases, develop new medicines and unlock mysteries of the human body.

…and the not so good…

 

From DSC:
Who needs to be discussing/debating “The Social Dilemma” movie? Whether one agrees with the perspectives put forth therein or not, the discussion boards out there should be lighting up in the undergraduate areas of Computer Science (especially Programming), Engineering, Business, Economics, Mathematics, Statistics, Philosophy, Religion, Political Science, Sociology, and perhaps other disciplines as well. 

To those starting out in the relevant careers here… just because we can doesn’t mean we should. Ask yourself not whether something CAN be developed, but *whether it SHOULD be developed* and what the potential implications of a technology/invention/etc. might be. I’m not aiming to take a position here. Rather, I’m trying to prompt some serious reflection among those developing our new, emerging technologies and our new products/services out there.


 

 

Facial Recognition Start-Up Mounts a First Amendment Defense — from nytimes.com by Kashmir Hill
Clearview AI has hired Floyd Abrams, a top lawyer, to help fight claims that selling its data to law enforcement agencies violates privacy laws.

Excerpts:

Litigation against the start-up “has the potential of leading to a major decision about the interrelationship between privacy claims and First Amendment defenses in the 21st century,” Mr. Abrams said in a phone interview. He said the underlying legal questions could one day reach the Supreme Court.

Clearview AI has scraped billions of photos from the internet, including from platforms like LinkedIn and Instagram, and sells access to the resulting database to law enforcement agencies. When an officer uploads a photo or a video image containing a person’s face, the app tries to match the likeness and provides other photos of that person that can be found online.
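
A technical aside: Clearview hasn’t published how its matching works, but the underlying technique — ranking stored face “encodings” by their distance to the encoding of a probe photo — is well known. Below is a minimal sketch using the open-source face_recognition library; the file names are hypothetical stand-ins, and a real system would query an indexed store of billions of encodings rather than a Python list.

```python
# A sketch of generic face matching, NOT Clearview's proprietary pipeline.
# Requires the open-source face_recognition package; file names are made up.
import face_recognition

# Build a tiny "database" of face encodings from previously gathered photos.
known_photos = ["photo_a.jpg", "photo_b.jpg", "photo_c.jpg"]
database = []
for path in known_photos:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # skip photos where no face was detected
        database.append((path, encodings[0]))

# Encode the uploaded "probe" face, then rank stored photos by similarity.
probe_image = face_recognition.load_image_file("uploaded_face.jpg")
probe = face_recognition.face_encodings(probe_image)[0]

distances = face_recognition.face_distance([enc for _, enc in database], probe)
for (path, _), dist in sorted(zip(database, distances), key=lambda pair: pair[1]):
    print(f"{path}: distance {dist:.3f}")  # lower distance = closer match
```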

From DSC:
Many, if not all, of us are now required to be lifelong learners in order to stay marketable. I was struck by that when I read the following excerpt from the above article:

“I’m learning the language,” Mr. Abrams said. “I’ve never used the words ‘facial biometric algorithms’ until this phone call.”

 
 

IBM, Amazon, and Microsoft abandon law enforcement face recognition market — from which-50.com by Andrew Birmingham

Excerpt:

Three global tech giants — IBM, Amazon, and Microsoft — have all announced that they will no longer sell their face recognition technology to police in the USA, though each announcement comes with its own nuance.

The new policy comes in the midst of ongoing national demonstrations in the US about police brutality and more generally the subject of racial inequality in the country under the umbrella of the Black Lives Matter movement.

From DSC:
While I didn’t read the fine print (so I don’t know all of the “nuances” they are referring to), I see this as good news indeed! Well done to whoever at those companies paused and thought…

 

…just because we can…



…doesn’t mean we should.

 


Addendum on 6/18/20:

  • Why Microsoft and Amazon are calling on Congress to regulate facial recognition tech — from finance.yahoo.com by Daniel Howley

    Excerpt:
    The technology, which can be used to identify suspects in things like surveillance footage, has faced widespread criticism after studies found it can be biased against women and people of color. And according to at least one expert, there needs to be some form of regulation put in place if these technologies are going to be used by law enforcement agencies.

    “If these technologies were to be deployed, I think you cannot do it in the absence of legislation,” Siddharth Garg, assistant professor of computer science and engineering at NYU Tandon School of Engineering, told Yahoo Finance.
 

From DSC:
As the technologies get more powerful, so do the ramifications/consequences. And here’s the kicker: just as we’re seeing in trying to deal with systemic racism in our country, like it or not, it all comes back to the state of the hearts and minds out there. I’m not trying to be preachy or to pretend that I’m better than anyone. I’m a sinner. I know that full well. And I view all of us as equals trying to survive and to make our way in this tough-to-live-in world. But the unfortunate truth is that technologies are tools, and how the tools are used depends upon one’s heart.

Proverbs 4:23 New International Version (NIV) — from biblegateway.com

23 Above all else, guard your heart,
    for everything you do flows from it.

 

This startup is using AI to give workers a “productivity score” — from technologyreview.com by Will Douglas Heaven
Enaible is one of a number of new firms that are giving employers tools to help keep tabs on their employees—but critics fear this kind of surveillance undermines trust.

Excerpt:

In the last few months, millions of people around the world stopped going into offices and started doing their jobs from home. These workers may be out of sight of managers, but they are not out of mind. The upheaval has been accompanied by a reported spike in the use of surveillance software that lets employers track what their employees are doing and how long they spend doing it.

Companies have asked remote workers to install a whole range of such tools. Hubstaff is software that records users’ keyboard strokes, mouse movements, and the websites that they visit. Time Doctor goes further, taking videos of users’ screens. It can also take a picture via webcam every 10 minutes to check that employees are at their computer. And Isaak, a tool made by UK firm Status Today, monitors interactions between employees to identify who collaborates more, combining this data with information from personnel files to identify individuals who are “change-makers.”
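
A technical aside: what’s striking is how little engineering the “picture every 10 minutes” behavior described above actually requires. The sketch below is a hedged illustration of a periodic screen capture, not any vendor’s actual code; it assumes the third-party mss package (pip install mss).

```python
# Illustration only: save a timestamped screenshot of the primary monitor
# every 10 minutes, mimicking the cadence described in the article.
import time
from datetime import datetime

import mss  # third-party, cross-platform screen-capture package

CAPTURE_INTERVAL_SECONDS = 10 * 60  # the "every 10 minutes" cadence

with mss.mss() as screen:
    while True:
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        screen.shot(output=f"screen-{stamp}.png")  # monitor 1 by default
        time.sleep(CAPTURE_INTERVAL_SECONDS)
```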

Machine-learning algorithms also encode hidden bias in the data they are trained on. Such bias is even harder to expose when it’s buried inside an automated system. If these algorithms are used to assess an employee’s performance, it can be hard to appeal an unfair review or dismissal. 
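
One common way auditors surface this kind of hidden bias is to compare a model’s positive-outcome rates across demographic groups. Here is a minimal sketch with made-up numbers; the 0.80 threshold is the “four-fifths rule” long used in U.S. employment-discrimination analysis.

```python
# Toy disparate-impact check on hypothetical model outputs (1 = favorable
# outcome, e.g., a "high productivity score"; 0 = unfavorable).
def selection_rate(outcomes):
    """Fraction of a group that received the favorable outcome."""
    return sum(outcomes) / len(outcomes)

scores_group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75.0% favorable
scores_group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% favorable

ratio = selection_rate(scores_group_b) / selection_rate(scores_group_a)
print(f"Disparate-impact ratio: {ratio:.2f}")
# Ratios below 0.80 are commonly flagged as adverse impact worth
# investigating -- one signal, not proof, that a system is biased.
```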

 

 

From DSC:
I can’t help but reflect on how slippery the slope is when we start talking about using drones — especially as sponsored and used by governments, including our government here in the U.S. Consider the following from the Future Today Institute.

The Future Today Institute -- discussing the slippery slope of using drones

Excerpt:

Eyes in the sky
As nationwide racial justice protests continue, some journalists and protestors have noticed a new addition to the armed police officers and National Guard troops: a drone flying a hexagon-shaped route 20,000 feet above the streets in Minneapolis. The drone, flown by U.S. Customs and Border Protection, is called a Predator, and is a piece of military technology used for identifying and targeting terrorists overseas. Lately, it’s become a more common sight domestically.

Last month, a number of New Yorkers witnessed a drone floating above them, barking orders to follow social distancing guidelines. The mysterious drone wasn’t official police equipment, but rather a privately owned device piloted by a man named Xavier Arthur in Queens, who was frustrated that people weren’t following stay-at-home orders. He claimed to represent the “Anti-Covid-19 Volunteer Drone Task Force.”

It’s not an isolated incident. During the outbreak, drones have been used extensively to monitor residents and encourage them to stay indoors, to inspect traffic stops and hospitals, and to spray cities with disinfectants. In Paris and Mumbai, they’re patrolling for social-distancing violators. In China, a video clip went viral, showing a drone breaking up a mahjong game — residents had defied local orders to stay indoors. Drones with infrared cameras also allegedly flew overhead and checked for people with fevers.

Advanced drones can pinpoint certain behaviors in crowds from high altitudes, recognize and automatically follow targets, and communicate with each other or to command centers on the ground with remarkable precision and low latency. The pandemic and protests are playing to the strengths of an emerging real-time aerial surveillance ecosystem.
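
To see how commoditized the “automatically follow targets” building block has become, consider the toy sketch below, which uses OpenCV’s off-the-shelf CSRT tracker. It assumes the opencv-contrib-python package and a hypothetical video file; real drone systems pair such trackers with detectors, gimbals, and flight control.

```python
# Toy single-target tracking on a video file -- an illustration of the
# technique, not drone software. Requires opencv-contrib-python; on some
# OpenCV builds the tracker lives under cv2.legacy instead.
import cv2

video = cv2.VideoCapture("aerial_clip.mp4")  # hypothetical clip
ok, frame = video.read()

# A real system would get the target from a detector; here a human draws
# the initial bounding box by hand.
box = cv2.selectROI("select target", frame)
tracker = cv2.TrackerCSRT_create()
tracker.init(frame, box)

while True:
    ok, frame = video.read()
    if not ok:
        break
    found, box = tracker.update(frame)
    if found:
        x, y, w, h = (int(v) for v in box)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # press Esc to quit
        break

video.release()
cv2.destroyAllWindows()
```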

3 Things You Should Know

  1. The Flying Internet of Things is taking off.
  2. New drones can self-destruct.
  3. Shareable drones may drive growth in the industry.
 
 

From DSC:
As the ripples move outward from this time of the Coronavirus, we need to be very careful with #EmergingTechnologies. For example, where might the use of (police dept) drones equipped with #AI #FacialRecognition flying overhead take us? What if you’re of the “wrong religion” in a country? Hmm…

 

Social Distancing Enforcement Drones Arrive in the U.S. — from nymag.com by Adam Raymond

Excerpts:

In late January, a viral video from China showed people who’d wandered outside in the early days of the coronavirus outbreak getting scolded by a disembodied voice from a drone flying overhead. Last month, similar campaigns began in France, where locals flouting travel restrictions were gently reminded to “respectez les distances de sécurité s’il vous plaît” (“please respect the safety distances”).

Now, self-righteous flying robots have made their way to the U.S., with at least two American police departments deploying drones to tell people to disperse, go home, and stay there.

“These drones will be around the City with an automated message from the Mayor telling you to STOP gathering, disperse and go home,” the police department wrote on Facebook. “Summonses HAVE AND WILL CONTINUE to be issued to those found in violation. Fines are up to $1000. You have been advised.”

 

 

Per this TechCrunch article:

The world is vulnerable to a new type of trolling as people turn to Zoom video calls to feel connected amidst quarantines. Jerks are using Zoom’s screensharing feature to blast other viewers with the most awful videos from across the internet, from violence to shocking pornography.

 

My thanks to a friend for causing me to further reflect on this article: “Can computers ever replace the classroom?” [Beard]


From DSC:
I’d like to thank Mr. Eric Osterberg — a fraternity brother and friend of mine — for sending me the following article. I wrote back to him. After thanking Eric for the article, I said:

Such an article makes me reflect on things — which is always good for me, as it helps me see my blind spots and/or think about the good and the bad of things. Technologies are becoming more powerful and integrated into our lives — for better at times, and for worse at others.

I’m wondering how the legal realm can assist and/or help create a positive future for societies throughout the globe…any thoughts?


Can computers ever replace the classroom? — from theguardian.com by Alex Beard
With 850 million children worldwide shut out of schools, tech evangelists claim now is the time for AI education. But as the technology’s power grows, so too do the dangers that come with it. 

Excerpts:

But it’s in China, where President Xi Jinping has called for the nation to lead the world in AI innovation by 2030, that the fastest progress is being made. In 2018 alone, Li told me, 60 new AI companies entered China’s private education market. Squirrel AI is part of this new generation of education start-ups. The company has already enrolled 2 million student users, opened 2,600 learning centres in 700 cities across China, and raised $150m from investors.

The supposed AI education revolution is not here yet, and it is likely that the majority of projects will collapse under the weight of their own hype.

The point, in short, is that AI doesn’t have to match the general intelligence of humans to be useful – or indeed powerful. This is both the promise of AI, and the danger it poses.

It was a reminder that Squirrel AI’s platform, like those of its competitors worldwide, doesn’t have to be better than the best human teachers – to improve people’s lives, it just needs to be good enough, at the right price, to supplement what we’ve got. The problem is that it is hard to see technology companies stopping there. For better and worse, their ambitions are bigger. “We could make a lot of geniuses,” Li told me.
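
For a sense of how simple the machinery inside many adaptive tutors can be, here is a minimal sketch of Bayesian Knowledge Tracing, a standard technique in this space. The parameters are illustrative; Squirrel AI’s actual engine is proprietary.

```python
# Bayesian Knowledge Tracing (BKT): update the estimated probability that
# a student has mastered a skill after each right/wrong answer.
P_INIT = 0.20    # prior probability the skill is already mastered
P_LEARN = 0.15   # probability of learning the skill on each practice step
P_SLIP = 0.10    # probability of answering wrong despite mastery
P_GUESS = 0.25   # probability of answering right without mastery

def update(p_known, correct):
    """Posterior P(mastered) after one observed answer, plus learning."""
    if correct:
        evidence = p_known * (1 - P_SLIP) + (1 - p_known) * P_GUESS
        posterior = p_known * (1 - P_SLIP) / evidence
    else:
        evidence = p_known * P_SLIP + (1 - p_known) * (1 - P_GUESS)
        posterior = p_known * P_SLIP / evidence
    return posterior + (1 - posterior) * P_LEARN

p = P_INIT
for answer in [True, False, True, True]:  # a made-up response sequence
    p = update(p, answer)
    print(f"P(skill mastered) = {p:.2f}")
```

A tutor built on this simply picks the next practice item based on p — exactly the kind of “good enough, at the right price” usefulness the article describes.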

 

Future Today Institute's 2020 tech trends report

Key takeaways of this report:

  • Welcome to the Synthetic Decade.
  • You’ll soon have augmented hearing and sight.
  • A.I.-as-a-Service and Data-as-a-Service will reshape business.
  • China has created a new world order.
  • Home and office automation is nearing the mainstream.
  • Everyone alive today is being scored.
  • We’ve traded FOMO for abject fear.
  • It’s the end of forgetting.
  • Our new trust economy is being formed.

 

Facial recognition startup Clearview AI says its full client list was stolen — from engadget.com by Igor Bonifacic

Excerpt:

You might expect a high-profile (and controversial) facial recognition startup like Clearview AI would have its data locked down, but it turns out it’s just as vulnerable as almost any other company to malicious individuals. In a notification obtained by The Daily Beast, the company says a recent vulnerability allowed someone to gain “unauthorized access” to a list of all of its customers. Clearview works with approximately 600 law enforcement agencies across North America, including the Chicago Police Department.
