A group of workers is shown paving a new highway in this image.

From DSC:
What are the cognitive “highways” within our minds?

I’ve been thinking a lot about highways recently. Not because it’s construction season here in Michigan (USA) quite yet, but because I’ve been reflecting upon how many of us build cognitive highways within our minds. The highways that I’m referring to are our well-trodden routes of thinking…the ones we quickly default to. Such pathways get built over time, as we form our habits and our ways of thinking about things. Sometimes these routes get built without our even recognizing that new construction zones are already in place.

Those involved with cognitive psychology will connect instantly with what I’m saying here. Those who have studied memory, retrieval practice, how people learn, etc. will know what I’m referring to. 

But this time the topic didn’t arise from teaching and learning. Instead, I got to thinking about it due to some recent faith-based conversations. These conversations revolved around such questions as:

  • What makes our old selves different from our new selves? (2 Corinthians 5:17)
  • What does it mean to be transformed by the “renewing of our minds?” (Romans 12:2)
  • When a Christian says, “Keep your eyes on Christ” — what does that really mean and look like (practically speaking)?

For me, at least part of the answer to those questions has to do with what’s occupying my thought life. I don’t know what it means to keep my eyes on Christ, as I can’t see Him. But I do understand what it means to keep my thoughts on what Christ said and did…or on the kinds of things that Philippians 4:8 suggests we think about. No wonder we often hear the encouragement to be in the Word…I think new cognitive highways get created in our minds as we read the Bible. That is, we begin to look at things differently. We take on different perspectives.

The ramifications of this idea are huge:

  • We can’t replace an old highway by ourselves. It takes others to help us out…to teach us new ways of thinking.
  • We sometimes have to unlearn some things. It took time to learn our original perspective on those things, and it will likely be a process for new learning to occur and replace the former way of thinking about those topics.
  • This idea relates to addictions as well. It takes time for addicts to build up their habits/cravings…and it takes time to replace those habits/cravings with more positive ones. Neither the person nor their family, partner/significant other, or friends should expect instant change. Change takes time, and therefore patience and grace are required. The same goes for the teachers/faculty members, coaches, principals, pastors, police officers, judges, etc. that a person may interact with over time. (Hmmm…come to think of it, other relationships are involved here at times as well. Certainly, God knows that He needs to be patient with us — often, He has no choice. Our spouses know this about us, and we know it about them too.)
  • Christians who struggle with addictions and go to the hospital…er, the church rather…also take time to change their thoughts, habits, and perspectives. Just as the rebuilding of a physical highway takes time, so it takes time to build new highways (patterns of thinking and responding) in our minds. The former/old highways may still be around for a while yet, but the new ones are being built and getting stronger every day.
  • Sometimes we need to re-route certain thoughts. Or I suppose another way to think about this is to use the metaphor of “changing the tapes” being played in our minds. Like old cassette tapes, we need to reject some tapes/messages and insert some new ones.

What are the cognitive highways within your own mind? And how can you be patient with the people in your own life whom you want to see change?

Anyway, thanks for reading this posting. May you and yours be blessed on this day. Have a great week and weekend!


Addendum on 3/31/22…also relevant, see:

I Analyzed 13 TED Talks on Improving Your Memory — Here’s the Quintessence — from learntrepreneurs.com by Eva Keiffenheim
How you can make the most out of your brain.

Excerpt:

In her talk, brain researcher and professor Lara Boyd explains what science currently knows about neuroplasticity. In essence, your brain can change in three ways.

Change 1 — Increase chemical signalling
Your brain works by sending chemical signals from cell to cell, between so-called neurons. This transfer triggers actions and reactions. To support learning, your brain can increase the concentration of these signals between your neurons. Chemical signalling is related to your short-term memory.

Change 2 — Alter the physical structure
During learning, the connections between neurons change. In the first type of change, your brain’s structure stayed the same; here, your brain’s physical structure itself changes, which takes more time. That’s why altering the physical structure influences your long-term memory.

For example, research shows that London taxi cab drivers who actually have to memorize a map of London to get their taxicab license have larger brain regions devoted to spatial or mapping memories.

Change 3 — Alter brain function
This one is crucial (and will also be mentioned in the following talks). When you use a brain region, it becomes more and more accessible. Whenever you access a specific memory, it becomes easier and easier to use again.

But Boyd’s talk doesn’t stop there. She further explores what limits or facilitates neuroplasticity. She researches how people can recover from brain damage such as a stroke, and she has developed therapies that prime or prepare the brain to learn, including stimulation, exercise, and robotics.

Her research is also helpful for healthy brains. Here are the two most important lessons:

The primary driver of change in your brain is your behaviour.

There is no one-size-fits-all approach to learning.

China Is About to Regulate AI—and the World Is Watching — from wired.com by Jennifer Conrad
Sweeping rules will cover algorithms that set prices, control search results, recommend videos, and filter content.

Excerpt:

On March 1, China will outlaw this kind of algorithmic discrimination as part of what may be the world’s most ambitious effort to regulate artificial intelligence. Under the rules, companies will be prohibited from using personal information to offer users different prices for a product or service.

The sweeping rules cover algorithms that set prices, control search results, recommend videos, and filter content. They will impose new curbs on major ride-hailing, ecommerce, streaming, and social media companies.
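
From DSC:
To make concrete what’s being outlawed, here is a minimal, hypothetical Python sketch of personalized pricing. Every profile field, multiplier, and number below is invented for illustration; it is not drawn from any real company’s system.

```python
# Hypothetical illustration of algorithmic price discrimination, i.e.,
# using personal information to quote different users different prices.
# All profile fields, multipliers, and values are invented for this sketch.

BASE_PRICE = 20.00  # list price for the same ride/product

def personalized_price(profile: dict) -> float:
    """Quote a price that varies with personal data -- the practice
    the new Chinese rules would prohibit."""
    price = BASE_PRICE
    # Charge loyal users more, betting they won't comparison-shop.
    if profile.get("rides_last_month", 0) > 20:
        price *= 1.15
    # Infer willingness to pay from the device used to book.
    if profile.get("device") == "latest_flagship_phone":
        price *= 1.10
    # Offer new users a discount to hook them on the service.
    if profile.get("rides_last_month", 0) == 0:
        price *= 0.80
    return round(price, 2)

frequent_user = {"rides_last_month": 30, "device": "latest_flagship_phone"}
new_user = {"rides_last_month": 0, "device": "old_budget_phone"}
print(personalized_price(frequent_user))  # 25.3 -- same service, higher price
print(personalized_price(new_user))       # 16.0
```

Quoting those two users different prices for the same service, purely because of their personal data, is exactly the behavior the regulation targets.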

 

How I use Minecraft to help kids with autism — from ted.com by Stuart Duncan; with thanks to Dr. Kate Christian for this resource

Description:

The internet can be an ugly place, but you won’t find bullies or trolls on Stuart Duncan’s Minecraft server, AutCraft. Designed for children with autism and their families, AutCraft creates a safe online environment for play and self-expression for kids who sometimes behave a bit differently than their peers (and who might be singled out elsewhere). Learn more about one of the best places on the internet with this heartwarming talk.

 

Below are two excerpted snapshots from Stuart’s presentation:

Stuart Duncan speaking at TEDX York U

These are the words autistic students used to describe their experience with Stuart's Minecraft server

 

Timnit Gebru Says Artificial Intelligence Needs to Slow Down — from wired.com by Max Levy
The AI researcher, who left Google last year, says the incentives around AI research are all wrong.

Excerpt:

ARTIFICIAL INTELLIGENCE RESEARCHERS are facing a problem of accountability: How do you try to ensure decisions are responsible when the decision maker is not a responsible person, but rather an algorithm? Right now, only a handful of people and organizations have the power—and resources—to automate decision-making.

Since leaving Google, Gebru has been developing an independent research institute to show a new model for responsible and ethical AI research. The institute aims to answer similar questions as her Ethical AI team, without fraught incentives of private, federal, or academic research—and without ties to corporations or the Department of Defense.

“Our goal is not to make Google more money; it’s not to help the Defense Department figure out how to kill more people more efficiently,” she said.

From DSC:
What does our society need to do to respond to this exponential pace of technological change? And where is the legal realm here?

Speaking of the pace of change…the following quote from The Future Direction And Vision For AI (from marktechpost.com by Imtiaz Adam) speaks to massive changes in this decade as well:

The next generation will feature 5G alongside AI and will lead to a new generation of Tech superstars in addition to some of the existing ones.

In future the variety, volume and velocity of data is likely to substantially increase as we move to the era of 5G and devices at the Edge of the network. The author argues that our experience of development with AI and the arrival of 3G followed by 4G networks will be dramatically overshadowed with the arrival of AI meets 5G and the IoT leading to the rise of the AIoT where the Edge of the network will become key for product and service innovation and business growth.

Also related/see:

Artificial Intelligence: Should You Teach It To Your Employees? — from forbes.com by Tom Taulli

Excerpt:

“If more people are AI literate and can start to participate and contribute to the process, more problems–both big and small–across the organization can be tackled,” said David Sweenor, who is the Senior Director of Product Marketing at Alteryx. “We call this the ‘Democratization of AI and Analytics.’ A team of 100, 1,000, or 5,000 working on different problems in their areas of expertise certainly will have a bigger impact than if left in the hands of a few.”

New Artificial Intelligence Tool Accelerates Discovery of Truly New Materials — from scitechdaily.com
The new artificial intelligence tool has already led to the discovery of four new materials.

Excerpt:

Researchers at the University of Liverpool have created a collaborative artificial intelligence tool that reduces the time and effort required to discover truly new materials.

AI development must be guided by ethics, human wellbeing and responsible innovation — from healthcareitnews.com by Bill Siwicki
An expert in emerging technology from the IEEE Standards Association describes the elements that must be considered as artificial intelligence proliferates across healthcare.

 

In the US, the AI Industry Risks Becoming Winner-Take-Most — from wired.com by Khari Johnson
A new study illustrates just how geographically concentrated AI activity has become.

Excerpt:

A NEW STUDY warns that the American AI industry is highly concentrated in the San Francisco Bay Area and that this could prove to be a weakness in the long run. The Bay leads all other regions of the country in AI research and investment activity, accounting for about one-quarter of AI conference papers, patents, and companies in the US. Bay Area metro areas see levels of AI activity four times higher than other top cities for AI development.

“When you have a high percentage of all AI activity in Bay Area metros, you may be overconcentrating, losing diversity, and getting groupthink in the algorithmic economy. It locks in a winner-take-most dimension to this sector, and that’s where we hope that federal policy will begin to invest in new and different AI clusters in new and different places to provide a balance or counter,” Mark Muro, policy director at the Brookings Institution and the study’s coauthor, told WIRED.

Also relevant/see:

 

“Algorithms are opinions embedded in code.” (Cathy O’Neil)
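
To make that quote concrete, here is a tiny, hypothetical Python sketch of a résumé-screening score. Every weight and cutoff in it is an invented human judgment, which is the point: change any coefficient and the “algorithm” reaches different conclusions about the same person.

```python
# A toy resume-screening score. The weights and cutoff are not objective
# facts; they are opinions that someone chose to encode. All numbers here
# are invented for illustration.

def applicant_score(years_experience: float, gpa: float,
                    employment_gap_years: float) -> float:
    return (
        2.0 * years_experience        # opinion: experience matters this much
        + 1.0 * gpa                   # opinion: grades matter this much
        - 3.0 * employment_gap_years  # opinion: gaps should be penalized
    )

HIRE_CUTOFF = 10.0  # opinion: where to draw the line

def advance_to_interview(years_exp: float, gpa: float, gap: float) -> bool:
    return applicant_score(years_exp, gpa, gap) >= HIRE_CUTOFF

# The same applicant passes or fails depending on which opinions got encoded.
print(advance_to_interview(4.0, 3.5, 0.5))  # True  (score = 10.0)
print(advance_to_interview(4.0, 3.5, 1.0))  # False (score = 8.5)
```

Penalizing employment gaps, for example, quietly disadvantages caregivers; the bias lives in a single coefficient that most people will never see.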

 

The Fight to Define When AI Is ‘High Risk’ — from wired.com by Khari Johnson
Everyone from tech companies to churches wants a say in how the EU regulates AI that could harm people.

Excerpt:

The AI Act is one of the first major policy initiatives worldwide focused on protecting people from harmful AI. If enacted, it will classify AI systems according to risk, more strictly regulate AI that’s deemed high risk to humans, and ban some forms of AI entirely, including real-time facial recognition in some instances. In the meantime, corporations and interest groups are publicly lobbying lawmakers to amend the proposal according to their interests.

 

Many Americans aren’t aware they’re being tracked with facial recognition while shopping — from techradar.com by Anthony Spadafora
You’re not just on camera, you’re also being tracked

Excerpt:

Despite consumer opposition to facial recognition, the technology is currently being used in retail stores throughout the US according to new research from Piplsay.

While San Francisco banned the police from using facial recognition back in 2019 and the EU called for a five year ban on the technology last year, several major retailers in the US including Lowe’s, Albertsons and Macy’s have been using it for both fraud and theft detection.

From DSC:
I’m not sure how prevalent this practice is…and that’s precisely the point. We don’t know what all of those cameras are actually doing in our stores, gas stations, supermarkets, etc. I put this in the categories of policy, law schools, legal, government, and others, as the legislative and legal realms need to scramble to catch up to this Wild Wild West.

Along these lines, I was watching a portion of 60 Minutes last night, where they were doing a piece on autonomous trucks (reportedly set to hit the roads without a person on board sometime later this year). When asked about oversight, there was some…but not much.

Readers of this blog will know that I have often wondered…”How does society weigh in on these things?”

Along these same lines, also see:

  • The NYPD Had a Secret Fund for Surveillance Tools — from wired.com by Sidney Fussell
    Documents reveal that police bought facial-recognition software, vans equipped with x-ray machines, and “stingray” cell site simulators—with no public oversight.
 

The Future of Social Media: Re-Humanisation and Regulation — by Gerd Leonhard

How could social media become ‘human’ again? How can we stop the disinformation, dehumanisation and dataism that have resulted from social media’s algorithmic obsessions? I foresee that the EXTERNALITIES, i.e. the consequences of the unmitigated growth of exponential digital technologies, will become just as big as the consequences of climate change. In fact, today, the social media industry already has quite a few parallels to the oil, gas and coal business: while private companies make huge profits from extracting the ‘oil’ (i.e. user data), the external damage is left to society and governments to fix. This needs to change! In this keynote I make some precise suggestions as to how that could happen.

Some snapshots/excerpts:

The future of social media -- a video by Gerd Leonhard in the summer of 2021


From DSC:
Gerd brings up some solid points here. His presentation and perspectives are not only worth checking out; they’re also worth some serious reflection.

What kind of future do we want?

And for you professors, teachers, instructional designers, trainers, and presenters out there, check out *how* he delivers the content. It’s well done and very engaging.


 

Let’s Teach Computer Science Majors to Be Good Citizens. The Whole World Depends on It. — from edsurge.com by Anne-Marie Núñez, Matthew J. Mayhew, Musbah Shaheen and Laura S. Dahl

Excerpt:

To mitigate the perpetuation of these and related inequities, observers have called for increased diversification of the technology workforce. However, as books like “Brotopia” by Emily Chang and “Race after Technology” by Ruha Benjamin indicate, the culture of tech companies can be misogynistic and racist and therefore unwelcoming to many people. Google’s firing of a well-regarded Black scientist for her research on algorithmic bias in December 2020 suggests that there may be limited capacity within the industry to challenge this culture.

Change may need to start earlier in the workforce development pipeline. Undergraduate education offers a key opportunity for recruiting students from historically underrepresented racial and ethnic, gender, and disability groups into computing. Yet even broadened participation in college computer science courses may not shift the tech workforce and block bias from seeping into tech tools if students aren’t taught that diversity and ethics are essential to their field of study and future careers.

Also mentioned/see:

  • Teaching Responsible Computing Playbook
    The ultimate goal of Teaching Responsible Computing is to educate a new wave of students who bring holistic thinking to the design of technology products. To do this, it is critical for departments to work together across computing, humanistic studies, and more, and collaborate across institutions. This Playbook offers the lessons learned from the process of adapting and enhancing curricula to include responsible computing in a broad set of institutions and help others get started doing the same in their curricula. 
 

This is an abstract picture of a person's head made of connections peering sideways -- it links to Artificial intelligence and the future of national security from ASU

Artificial intelligence and the future of national security — from news.asu.edu

Excerpt:

Artificial intelligence is a “world-altering” technology that represents “the most powerful tools in generations for expanding knowledge, increasing prosperity and enriching the human experience” and will be a source of enormous power for the companies and countries that harness them, according to the recently released Final Report of the National Security Commission on Artificial Intelligence.

This is not hyperbole or a fantastical version of AI’s potential impact. This is the assessment of a group of leading technologists and national security professionals charged with offering recommendations to Congress on how to ensure American leadership in AI for national security and defense. Concerningly, the group concluded that the U.S. is not currently prepared to defend American interests or compete in the era of AI.

Also see:

EU Set to Ban Surveillance, Start Fines Under New AI Rules — from bloomberg.com by Natalia Drozdiak

Excerpt:

The European Union is poised to ban artificial intelligence systems used for mass surveillance or for ranking social behavior, while companies developing AI could face fines as high as 4% of global revenue if they fail to comply with new rules governing the software applications.

Also see:

Wrongfully arrested man sues Detroit police over false facial recognition match — from washingtonpost.com by Drew Harwell
The case could fuel criticism of police investigators’ use of a controversial technology that has been shown to perform worse on people of color

Excerpts:

A Michigan man has sued Detroit police after he was wrongfully arrested and falsely identified as a shoplifting suspect by the department’s facial recognition software in one of the first lawsuits of its kind to call into question the controversial technology’s risk of throwing innocent people in jail.

Robert Williams, a 43-year-old father in the Detroit suburb of Farmington Hills, was arrested last year on charges he’d taken watches from a Shinola store after police investigators used a facial recognition search of the store’s surveillance-camera footage that identified him as the thief.

Prosecutors dropped the case less than two weeks later, arguing that officers had relied on insufficient evidence. Police Chief James Craig later apologized for what he called “shoddy” investigative work. Williams, who said he had been driving home from work when the 2018 theft had occurred, was interrogated by detectives and held in custody for 30 hours before his release.

Williams’s attorneys did not make him available for comment Tuesday. But Williams wrote in The Washington Post last year that the episode had left him deeply shaken, in part because his young daughters had watched him get handcuffed in his driveway and put into a police car after returning home from work.

“How does one explain to two little girls that a computer got it wrong, but the police listened to it anyway?” he wrote. “As any other black man would be, I had to consider what could happen if I asked too many questions or displayed my anger openly — even though I knew I had done nothing wrong.”

Addendum on 4/20/21:

 

How a Discriminatory Algorithm Wrongly Accused Thousands of Families of Fraud — from vice.com by Gabriel Geiger; with thanks to Sam DeBrule for this resource
Dutch tax authorities used algorithms to automate an austere and punitive war on low-level fraud—the results were catastrophic.

Excerpt:

Last month, Prime Minister of the Netherlands Mark Rutte—along with his entire cabinet—resigned after a year and a half of investigations revealed that since 2013, 26,000 innocent families were wrongly accused of social benefits fraud partially due to a discriminatory algorithm.

Forced to pay back money they didn’t owe, many families were driven to financial ruin, and some were torn apart. Others were left with lasting mental health issues; people of color were disproportionately the victims.

On a more positive note, Sam DeBrule (in his Machine Learnings e-newsletter) also notes the following article:

Can artificial intelligence combat wildfires? Sonoma County tests new technology — from latimes.com by Alex Wigglesworth

 

From DSC:
The items below are from Sam DeBrule’s Machine Learnings e-Newsletter.


By clicking this image, you will go to Sam DeBrule's Machine Learning e-Newsletter -- which deals with all topics regarding Artificial Intelligence

#Awesome

“Sonoma County is adding artificial intelligence to its wildfire-fighting arsenal. The county has entered into an agreement with the South Korean firm Alchera to outfit its network of fire-spotting cameras with software that detects wildfire activity and then alerts authorities. The technology sifts through past and current images of terrain and searches for certain changes, such as flames burning in darkness, or a smoky haze obscuring a tree-lined hillside, according to Chris Godley, the county’s director of emergency management…The software will use feedback from humans to refine its algorithm and will eventually be able to detect fires on its own — or at least that’s what county officials hope.” – Alex Wigglesworth, Los Angeles Times
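
The core idea in that quote, comparing past and current images of the same terrain and flagging meaningful changes, can be sketched in a few lines. Below is a deliberately naive, hypothetical illustration in Python/NumPy; Alchera’s actual system is of course far more sophisticated, and the thresholds here are made up.

```python
import numpy as np

# Naive change detection between two grayscale frames from a fixed camera.
# This only illustrates the "compare past and current images" idea from
# the article; the thresholds below are invented for this sketch.

def changed_fraction(previous: np.ndarray, current: np.ndarray,
                     pixel_threshold: int = 40) -> float:
    """Return the fraction of pixels whose brightness changed noticeably."""
    diff = np.abs(current.astype(int) - previous.astype(int))
    return float(np.mean(diff > pixel_threshold))

def maybe_alert(previous: np.ndarray, current: np.ndarray,
                scene_threshold: float = 0.02) -> bool:
    """Flag the frame for human review if enough of the scene changed,
    e.g., flames burning in darkness or smoke over a hillside."""
    return changed_fraction(previous, current) > scene_threshold

# Simulated 100x100 frames: a dark night scene, then the same scene
# with a bright region appearing (like flames in darkness).
night = np.zeros((100, 100), dtype=np.uint8)
with_flames = night.copy()
with_flames[30:50, 30:50] = 200  # 400 of 10,000 pixels change
print(maybe_alert(night, with_flames))  # True
```

The human-feedback loop the county describes maps onto tuning thresholds like these (and, in a real system, retraining a model) as reviewers confirm or reject alerts.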

#Not Awesome

Hacked Surveillance Camera Firm Shows Staggering Scale of Facial Recognition — from vice.com by Joseph Cox
A hacked customer list shows that facial recognition company Verkada is deployed in tens of thousands of schools, bars, stores, jails, and other businesses around the country.

Excerpt:

Hackers have broken into Verkada, a popular surveillance and facial recognition camera company, and managed to access live feeds of thousands of cameras across the world, as well as siphon a Verkada customer list. The breach shows the astonishing reach of facial recognition-enabled cameras in ordinary workplaces, bars, parking lots, schools, stores, and more.

The staggering list includes K-12 schools, seemingly private residences marked as “condos,” shopping malls, credit unions, multiple universities across America and Canada, pharmaceutical companies, marketing agencies, pubs and bars, breweries, a Salvation Army center, churches, the Professional Golfers Association, museums, a newspaper’s office, airports, and more.

 