How I use Minecraft to help kids with autism — from ted.com by Stuart Duncan; with thanks to Dr. Kate Christian for this resource

Description:

The internet can be an ugly place, but you won’t find bullies or trolls on Stuart Duncan’s Minecraft server, AutCraft. Designed for children with autism and their families, AutCraft creates a safe online environment for play and self-expression for kids who sometimes behave a bit differently than their peers (and who might be singled out elsewhere). Learn more about one of the best places on the internet with this heartwarming talk.

 

Below are two excerpted snapshots from Stuart’s presentation:

Stuart Duncan speaking at TEDxYorkU

These are the words autistic students used to describe their experience with Stuart's Minecraft server

 

As seen/accessible from this page.

A brief insert from DSC:
Another futurist, Thomas Frey, has some thoughts along this same line.

A top futurist predicts the largest internet company of 2030 will be an online school

#Canada #education #future #trends #careerdevelopment #change #paceofchange #automation #robotics #AI #learnhowtolearn #unlearn #learningecosystems #lifelonglearning #endofroutine #experientiallearning

 

Machines are for answers. Humans are for questions. — Kevin Kelly

 



Isaiah 1:17

Learn to do right; seek justice.
    Defend the oppressed.
Take up the cause of the fatherless;
    plead the case of the widow.

From DSC:
This verse especially caught my eye, as we have severe access-to-justice issues here in the United States.

 
 

A Move for ‘Algorithmic Reparation’ Calls for Racial Justice in AI — from wired.com by Khari Johnson
Researchers are encouraging those who work in AI to explicitly consider racism, gender, and other structural inequalities.

Excerpt:

FORMS OF AUTOMATION such as artificial intelligence increasingly inform decisions about who gets hired, is arrested, or receives health care. Examples from around the world articulate that the technology can be used to exclude, control, or oppress people and reinforce historic systems of inequality that predate AI.

“Algorithms are animated by data, data comes from people, people make up society, and society is unequal,” the paper reads. “Algorithms thus arc towards existing patterns of power and privilege, marginalization, and disadvantage.”

 

The biggest tech trends of 2022, according to over 40 experts — from fastcompany.com by Mark Sullivan
Startup founders, Big Tech execs, VCs, and tech scholars offer their predictions on how Web3, the metaverse, and other emerging ideas will shape the next year.

We asked startup founders, Big Tech execs, VCs, scholars, and other experts to speculate on the coming year within their field of interest. Altogether, we collected more than 40 predictions about 2022. Together, they offer a smart composite look at the things we’re likely to be talking about by this time next year.

 

From DSC:
As I looked at the article below, I couldn’t help but wonder…what is the role of the American Bar Association (ABA) in this type of situation? How can the ABA help the United States deal with the impact/place of emerging technologies?


Clearview AI will get a US patent for its facial recognition tech — from engadget.com by J. Fingas
Critics are worried the company is patenting invasive tech.

Excerpt:

Clearview AI is about to get formal acknowledgment for its controversial facial recognition technology. Politico reports Clearview has received a US Patent and Trademark Office “notice of allowance” indicating officials will approve a filing for its system, which scans faces across public internet data to find people from government lists and security camera footage. The company just has to pay administrative fees to secure the patent.

In a Politico interview, Clearview founder Hoan Ton-That claimed this was the first facial recognition patent involving “large-scale internet data.” The firm sells its tool to government clients (including law enforcement) hoping to accelerate searches.

As you might imagine, there’s a concern the USPTO is effectively blessing Clearview’s technology and giving the company a chance to grow despite widespread objections to its technology’s very existence. 


 

From DSC:
From my perspective, both of the items below are highly related to each other:

Let’s Teach Computer Science Majors to Be Good Citizens. The Whole World Depends on It. — from edsurge.com by Anne-Marie Núñez, Matthew J. Mayhew, Musbah Shaheen and Laura S. Dahl

Excerpt:

Change may need to start earlier in the workforce development pipeline. Undergraduate education offers a key opportunity for recruiting students from historically underrepresented racial and ethnic, gender, and disability groups into computing. Yet even broadened participation in college computer science courses may not shift the tech workforce and block bias from seeping into tech tools if students aren’t taught that diversity and ethics are essential to their field of study and future careers.

Computer Science Majors Lack Citizenship Preparation
Unfortunately, those lessons seem to be missing from many computer science programs.

…and an excerpt from Why AI can’t really filter out “hate news” — with thanks to Sam DeBrule for this resource (emphasis DSC):

The incomprehensibility and unexplainability of huge algorithms
Michael Egnor: What terrifies me about artificial intelligence — and I don’t think one can overstate this danger — is that artificial intelligence has two properties that make it particularly deadly in human civilization. One is concealment. Even though every single purpose in artificial intelligence is human, it’s concealed. We don’t really understand it. We don’t understand Google’s algorithms.

There may even be a situation where Google doesn’t understand Google’s algorithms. But all of it comes from the people who run Google. So the concealment is very dangerous. We don’t know what these programs are doing to our culture. And it may be that no one knows, but they are doing things.

Note: Roman Yampolskiy has written about the incomprehensibility and unexplainability of AI: “Human beings are finite in our abilities. For example, our short term memory is about 7 units on average. In contrast, an AI can remember billions of items and AI capacity to do so is growing exponentially. While never infinite in a true mathematical sense, machine capabilities can be considered such in comparison with ours. This is true for memory, compute speed, and communication abilities.” So we have built-in bias and incomprehensibility at the same time.

From DSC:
That part about concealment reminds me that our society depends upon the state of the hearts of the tech leaders. We don’t like to admit that, but it’s true. The legal realm is too far behind to stop the Wild West of technological change. It is trying to catch up, but it’s coming onto the race track with no cars…just pedestrians, walking or running as fast as they can…all the while, the technological cars are whizzing by.

The pace has changed significantly and quickly

 

The net effect of all of this is that we are more dependent than we care to admit upon the ethics, morals, and care for fellow humankind (or lack thereof) of the C-suites out there (especially Facebook/Meta Platforms, Google, Microsoft, Amazon, and Apple). Are they producing products and services that aim to help our societies move forward, or are they just trying to make some more bucks? Who, or what, is being served?

The software engineers and software architects are involved here big time as well. “Just because we can doesn’t mean we should.” But that perspective is sometimes in short supply.

 

Over 60,000 Fake Applications Submitted in Student Aid Scheme, California Says — from nytimes.com by Vimal Patel
It was unclear how much money, if any, was disbursed to the suspicious students. The federal Education Department said it was investigating the suspected fraud.

Excerpt:

According to Mr. Perry, fraud of this nature is easier to pull off at community colleges than at four-year institutions, because the two-year institutions don’t have admissions committees vetting applicants. And while colleges have had some fully virtual components for many years, the pandemic — which forced many colleges to operate entirely online — has provided the conditions for such schemes to flourish. “Somebody trying to perpetuate this would think this was a more likely time to try to get away with this,” Mr. Perry said.

He added that the next step for federal investigators should be to determine how widespread this conduct is and whether colleges elsewhere should be on the lookout.

 

Timnit Gebru Says Artificial Intelligence Needs to Slow Down — from wired.com by Max Levy
The AI researcher, who left Google last year, says the incentives around AI research are all wrong.

Excerpt:

ARTIFICIAL INTELLIGENCE RESEARCHERS are facing a problem of accountability: How do you try to ensure decisions are responsible when the decision maker is not a responsible person, but rather an algorithm? Right now, only a handful of people and organizations have the power—and resources—to automate decision-making.

Since leaving Google, Gebru has been developing an independent research institute to show a new model for responsible and ethical AI research. The institute aims to answer similar questions as her Ethical AI team, without the fraught incentives of private, federal, or academic research—and without ties to corporations or the Department of Defense.

“Our goal is not to make Google more money; it’s not to help the Defense Department figure out how to kill more people more efficiently,” she said.

From DSC:
What does our society need to do to respond to this exponential pace of technological change? And where is the legal realm here?

Speaking of the pace of change…the following quote from The Future Direction And Vision For AI (from marktechpost.com by Imtiaz Adam) speaks to massive changes in this decade as well:

The next generation will feature 5G alongside AI and will lead to a new generation of Tech superstars in addition to some of the existing ones.

In future the variety, volume and velocity of data is likely to substantially increase as we move to the era of 5G and devices at the Edge of the network. The author argues that our experience of development with AI and the arrival of 3G followed by 4G networks will be dramatically overshadowed with the arrival of AI meets 5G and the IoT leading to the rise of the AIoT where the Edge of the network will become key for product and service innovation and business growth.


Artificial Intelligence: Should You Teach It To Your Employees? — from forbes.com by Tom Taulli

Excerpt:

“If more people are AI literate and can start to participate and contribute to the process, more problems–both big and small–across the organization can be tackled,” said David Sweenor, who is the Senior Director of Product Marketing at Alteryx. “We call this the ‘Democratization of AI and Analytics.’ A team of 100, 1,000, or 5,000 working on different problems in their areas of expertise certainly will have a bigger impact than if left in the hands of a few.”

New Artificial Intelligence Tool Accelerates Discovery of Truly New Materials — from scitechdaily.com
The new artificial intelligence tool has already led to the discovery of four new materials.

Excerpt:

Researchers at the University of Liverpool have created a collaborative artificial intelligence tool that reduces the time and effort required to discover truly new materials.

AI development must be guided by ethics, human wellbeing and responsible innovation — from healthcareitnews.com by Bill Siwicki
An expert in emerging technology from the IEEE Standards Association describes the elements that must be considered as artificial intelligence proliferates across healthcare.

 

In the US, the AI Industry Risks Becoming Winner-Take-Most — from wired.com by Khari Johnson
A new study illustrates just how geographically concentrated AI activity has become.

Excerpt:

A NEW STUDY warns that the American AI industry is highly concentrated in the San Francisco Bay Area and that this could prove to be a weakness in the long run. The Bay leads all other regions of the country in AI research and investment activity, accounting for about one-quarter of AI conference papers, patents, and companies in the US. Bay Area metro areas see levels of AI activity four times higher than other top cities for AI development.

“When you have a high percentage of all AI activity in Bay Area metros, you may be overconcentrating, losing diversity, and getting groupthink in the algorithmic economy. It locks in a winner-take-most dimension to this sector, and that’s where we hope that federal policy will begin to invest in new and different AI clusters in new and different places to provide a balance or counter,” Mark Muro, policy director at the Brookings Institution and the study’s coauthor, told WIRED.


“Algorithms are opinions embedded in code.” — Cathy O’Neil

 
© 2024 | Daniel Christian