This Uncensored AI Art Tool Can Generate Fantasies—and Nightmares — from wired.com by Will Knight
Open source project Stable Diffusion allows anyone to conjure images with algorithms, but some fear it will be used to create unethical horrors.

Excerpt:

Image generators like Stable Diffusion can create what look like real photographs or hand-crafted illustrations depicting just about anything a person can imagine. This is possible thanks to algorithms that learn to associate the properties of a vast collection of images taken from the web and image databases with their associated text labels. Algorithms learn to render new images to match a text prompt in a process that involves adding and removing random noise to an image.
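The "adding and removing random noise" idea mentioned above can be sketched in a few lines of code. Below is a toy illustration of the forward (noising) step used by DDPM-style diffusion models; the `add_noise` function and its schedule parameters are illustrative assumptions, not Stable Diffusion's actual implementation:

```python
import numpy as np

def add_noise(image, t, num_steps=1000, beta_start=1e-4, beta_end=0.02):
    """Forward diffusion step: blend an image with Gaussian noise.

    Uses the DDPM-style closed form q(x_t | x_0):
        x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise
    where alpha_bar_t shrinks toward 0 as t grows, so later steps
    are mostly noise.
    """
    betas = np.linspace(beta_start, beta_end, num_steps)
    alpha_bar = np.cumprod(1.0 - betas)[t]
    noise = np.random.normal(size=image.shape)
    return np.sqrt(alpha_bar) * image + np.sqrt(1.0 - alpha_bar) * noise

# A toy 8x8 "image": at a small t it is barely changed; near t=999 it is
# almost pure noise. The generative model is trained to reverse this,
# stepping from noise back toward an image that matches a text prompt.
image = np.ones((8, 8))
slightly_noisy = add_noise(image, t=10)
mostly_noise = add_noise(image, t=990)
```

A real model pairs this forward process with a learned denoising network conditioned on the text prompt; generation runs the learned process in reverse, starting from pure noise.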

Also relevant/see:

There’s a text-to-image AI art app for Mac now—and it will change everything — from fastcompany.com by Jesus Diaz
Diffusion Bee harnesses the power of the open source text-to-image AI Stable Diffusion, turning it into a one-click Mac App. Brace yourself for a new creativity Big Bang.


Speaking of AI, also see:

 

You just hired a deepfake. Get ready for the rise of imposter employees. — from protocol.com by Mike Elgan
New technology — plus the pandemic remote work trend — is helping fraudsters use someone else’s identity to get a job.

Excerpt:

Companies have been increasingly complaining to the FBI about prospective employees using real-time deepfake video and deepfake audio for remote interviews, along with personally identifiable information (PII), to land jobs at American companies.

One place they’re likely getting the PII is through posting fake job openings, which enables them to harvest job candidate information, resumes and more, according to the FBI.

The main drivers appear to be money, espionage, access to company systems and unearned career advancement.

 

10 in-demand soft skills to supercharge your career — from fastcompany.com by Melissa Rosenthal
Your résumé and experience may get you through the door, but these soft skills could help you clinch the job.

Excerpt:

A LinkedIn Global Talent Trends report shows that 92% of talent professionals reported that soft skills are equally or more important to hire for than hard skills. The same study reveals that 89% surveyed said that when a new hire doesn’t work out, it’s because they lack much-needed soft skills.

The hard truth about hard skills is that they can have a short half-life. Constant innovation, technology updates, and new feature releases render many of these skills obsolete quickly. Meanwhile, soft skills never expire—they are relevant, transferable, and keep a person highly employable.

 

Why Infosys’s cofounder Nilekani is urging leaders to use tech for good  — from mckinsey.com by Gautam Kumra
The cofounder of the multinational IT company believes Indian start-ups will soon develop technologies to transform education, healthcare, and other social challenges.

Excerpts:

McKinsey: The world has also become a more complex place, with recent geopolitics, inflation, complexity, rocketing energy prices, excessive liquidity, and digitization challenges. How do you personally keep adapting and learning?

Nandan Nilekani: In the last 40 years, I think we have gone through every transition: mainframes to minicomputers to LANs [local area networks] to internet to smartphones to AI. It has been fun understanding and riding these waves.

In my view, if a billion people can use something, then that’s a benefit. A billion people can learn using technology. A billion people can get better healthcare using technology. A billion people can move around and change jobs using technology.

From DSC:
I hope I can meet Nandan Nilekani someday. I feel that he is a kindred spirit. Several things that he said really resonated with me.

 

New: Futurist Friday Podcast Interview with Gerd Leonhard: The Good Future? — from futuristgerd.com by Gerd Leonhard

Excerpt:

Over the course of the summer of 2022, Don MacPherson and 12 Geniuses are releasing 12 interviews with futurists and forward thinkers in order to help their global audience of leaders become better visionaries for their organizations and be more prepared for the uncertain future.

In this episode, Gerd Leonhard joins the show. First, he points out that “the future is already here, we just haven’t paid enough attention to it.” He talks about how technology is promising to make us superhuman, that we are in the biggest shift era in recent history as far as energy and climate are concerned, and that machines and artificial intelligence are starting to emulate humanity.

 

Just Because You Can Doesn’t Mean You Should: What Genetic Engineers Can Learn From ‘Jurassic World’ — from singularityhub.com by Andrew Maynard

Excerpt:

Maybe this is the abiding message of Jurassic World: Dominion—that despite incredible advances in genetic design and engineering, things can and will go wrong if we don’t embrace the development and use of the technology in socially responsible ways.

The good news is that we still have time to close the gap between “could” and “should” in how scientists redesign and reengineer genetic code. But as Jurassic World: Dominion reminds moviegoers, the future is often closer than it might appear.

 

Inside a radical new project to democratize AI — from technologyreview.com by Melissa Heikkilä
A group of over 1,000 AI researchers has created a multilingual large language model bigger than GPT-3—and they’re giving it out for free.

Excerpt:

PARIS — This is as close as you can get to a rock concert in AI research. Inside the supercomputing center of the French National Center for Scientific Research, on the outskirts of Paris, rows and rows of what look like black fridges hum at a deafening 100 decibels.

They form part of a supercomputer that has spent 117 days gestating a new large language model (LLM) called BLOOM that its creators hope represents a radical departure from the way AI is usually developed.

Unlike other, more famous large language models such as OpenAI’s GPT-3 and Google’s LaMDA, BLOOM (which stands for BigScience Large Open-science Open-access Multilingual Language Model) is designed to be as transparent as possible, with researchers sharing details about the data it was trained on, the challenges in its development, and the way they evaluated its performance. OpenAI and Google have not shared their code or made their models available to the public, and external researchers have very little understanding of how these models are trained.

Another item re: AI:

Not my job: AI researchers building surveillance tech and deepfakes resist ethical concerns — from protocol.com by Kate Kaye
The computer vision research community is behind on AI ethics, but it’s not just a research problem. Practitioners say the ethics disconnect persists as young computer vision scientists make their way into the ranks of corporate AI.

For the first time, the Computer Vision and Pattern Recognition Conference — a global event that attracted companies including Amazon, Google, Microsoft and Tesla to recruit new AI talent this year — “strongly encouraged” researchers whose papers were accepted to the conference to include a discussion about potential negative societal impacts of their research in their submission forms.

 

The Future of Education | By Futurist Gerd Leonhard | A Video for EduCanada — from futuristgerd.com

Per Gerd:

Recently, I was invited by the Embassy of Canada in Switzerland to create this special presentation and promotional video discussing the Future of Education and to explore how Canada might be leading the way. Here are some of the key points I spoke about in the video. Watch the whole thing here: the Future of Education.

 

…because by 2030, I believe, the traditional way of learning — just in case — you know, storing, downloading information — will be replaced by learning just in time, on-demand, learning to learn, unlearning, relearning, and the importance of being the right person. Character skills, personality skills, traits, they may very well rival the value of having the right degree.

If you learn like a robot…you’ll never have a job to begin with.

Gerd Leonhard


Also relevant/see:

The Next 10 Years: Rethinking Work and Revolutionising Education (Gerd Leonhard’s keynote in Riga) — from futuristgerd.com


 

How to ensure we benefit society with the most impactful technology being developed today — from deepmind.com by Lila Ibrahim

In 2000, I took a sabbatical from my job at Intel to visit the orphanage in Lebanon where my father was raised. For two months, I worked to install 20 PCs in the orphanage’s first computer lab, and to train the students and teachers to use them. The trip started out as a way to honour my dad. But being in a place with such limited technical infrastructure also gave me a new perspective on my own work. I realised that without real effort by the technology community, many of the products I was building at Intel would be inaccessible to millions of people. I became acutely aware of how that gap in access was exacerbating inequality; even as computers solved problems and accelerated progress in some parts of the world, others were being left further behind. 

After that first trip to Lebanon, I started reevaluating my career priorities. I had always wanted to be part of building groundbreaking technology. But when I returned to the US, my focus narrowed in on helping build technology that could make a positive and lasting impact on society. That led me to a variety of roles at the intersection of education and technology, including co-founding Team4Tech, a non-profit that works to improve access to technology for students in developing countries. 


Also relevant/see:

Microsoft AI news: Making AI easier, simpler, more responsible — from venturebeat.com by Sharon Goldman

But one common theme bubbles over consistently: For AI to become more useful for business applications, it needs to be easier, simpler, more explainable, more accessible and, most of all, responsible.

 

 

Can you truly own anything in the metaverse? A law professor explains how blockchains and NFTs don’t protect virtual property — from theconversation.com by João Marinotti

Excerpt:

Despite these claims, the legal status of virtual “owners” is significantly more complicated. In fact, the current ownership of metaverse assets is not governed by property law at all, but rather by contract law. As a legal scholar who studies property law, tech policy and legal ownership, I believe that what many companies are calling “ownership” in the metaverse is not the same as ownership in the physical world, and consumers are at risk of being swindled.

 

Announcing the 2022 AI Index Report — from hai.stanford.edu by Stanford University

Excerpt/description:

Welcome to the Fifth Edition of the AI Index

The AI Index is an independent initiative at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), led by the AI Index Steering Committee, an interdisciplinary group of experts from across academia and industry. The annual report tracks, collates, distills, and visualizes data relating to artificial intelligence, enabling decision-makers to take meaningful action to advance AI responsibly and ethically with humans in mind.

The 2022 AI Index report measures and evaluates the rapid rate of AI advancement from research and development to technical performance and ethics, the economy and education, AI policy and governance, and more. The latest edition includes data from a broad set of academic, private, and non-profit organizations as well as more self-collected data and original analysis than any previous editions.

Also relevant/see:

  • Andrew Ng predicts the next 10 years in AI — from venturebeat.com by George Anadiotis
  • Nvidia’s latest AI wizardry turns 2D photos into 3D scenes in milliseconds — from thenextweb.com by Thomas Macaulay
    The Polaroid of the future?
    Nvidia events are renowned for mixing technical bravado with splashes of showmanship — and this year’s GTC conference was no exception. The company ended a week that introduced a new enterprise GPU and an Arm-based “superchip” with a trademark flashy demo. Some 75 years after the world’s first instant photo captured the 3D world in a 2D picture…

Nvidia believes Instant NeRF could generate virtual worlds, capture video conferences in 3D, and reconstruct scenes for 3D maps.

 

A group of workers are shown paving a new highway in this image.

From DSC:
What are the cognitive “highways” within our minds?

I’ve been thinking a lot about highways recently. Not because it’s construction season (quite yet) here in Michigan (USA), but because I’ve been reflecting upon how many of us build cognitive highways within our minds. The highways that I’m referring to are our well-trodden routes of thinking that we quickly default/resort to. Such well-trodden pathways in our minds get built over time…as we build our habits and/or our ways of thinking about things. Sometimes these routes get built without our even recognizing that new construction zones are already in place.

Those involved with cognitive psychology will connect instantly with what I’m saying here. Those who have studied memory, retrieval practice, how people learn, etc. will know what I’m referring to. 

But instead of a teaching and learning related origin, I got to thinking about this topic due to some recent faith-based conversations instead. These conversations revolved around such questions as:

  • What makes our old selves different from our new selves? (2 Corinthians 5:17)
  • What does it mean to be transformed by the “renewing of our minds?” (Romans 12:2)
  • When a Christian says, “Keep your eyes on Christ” — what does that really mean and look like (practically speaking)?

For me, at least a part of the answers to those questions has to do with what’s occupying my thought life. I don’t know what it means to keep my eyes on Christ, as I can’t see Him. But I do understand what it means to keep my thoughts on what Christ said and/or did…or on the kinds of things that Philippians 4:8 suggests that we think about. No wonder that we often hear the encouragement to be in the Word…as I think that new cognitive highways get created in our minds as we read the Bible. That is, we begin to look at things differently. We take on different perspectives.

The ramifications of this idea are huge:

  • We can’t replace an old highway by ourselves. It takes others to help us out…to teach us new ways of thinking.
  • We sometimes have to unlearn some things. It took time to learn our original perspective on those things, and it will likely be a process for new learning to occur and replace the former way of thinking about those topics.
  • This idea relates to addictions as well. It takes time for addicts to build up their habits/cravings…and it takes time to replace those habits/cravings with more positive ones. One — or one’s family, partner/significant other, and friends — should not expect instant change. Change takes time, and therefore patience and grace are required. This goes for the teachers/faculty members, coaches, principals, pastors, policemen/women, judges, etc. that a person may interact with as well over time. (Hmmm…come to think of it, it sounds like some other relationships may be involved here at times also. Certainly, God knows that He needs to be patient with us — often, He has no choice. Our spouses know this as well and we know that about them too.)
  • Christians, who also struggle with addictions and go to the hospital er…the church rather, take time to change their thoughts, habits, and perspectives. Just as the rebuilding of a physical highway takes time, so it takes time to build new highways (patterns of thinking and responses) in our minds. So the former/old highways may still be around for a while yet, but the new ones are being built and getting stronger every day.
  • Sometimes we need to re-route certain thoughts. Or I suppose another way to think about this is to use the metaphor of “changing the tapes” being played in our minds. Like old cassette tapes, we need to reject some tapes/messages and insert some new ones.

What are the cognitive highways within your own mind? How can you be patient with others (that you want to see change occur within) inside of your own life?

Anyway, thanks for reading this posting. May you and yours be blessed on this day. Have a great week and weekend!


Addendum on 3/31/22…also relevant, see:

I Analyzed 13 TED Talks on Improving Your Memory — Here’s the Quintessence — from learntrepreneurs.com by Eva Keiffenheim
How you can make the most out of your brain.

Excerpt:

In her talk, brain researcher and professor Lara Boyd explains what science currently knows about neuroplasticity. In essence, your brain can change in three ways.

Change 1 — Increase chemical signalling
Your brain works by sending chemical signals from cell to cell; these cells are called neurons. This transfer triggers actions and reactions. To support learning, your brain can increase the concentration of these signals between your neurons. Chemical signalling is related to your short-term memory.

Change 2 — Alter the physical structure
During learning, the connections between neurons change. With the first type of change, your brain’s structure stays the same. Here, your brain’s physical structure changes, which takes more time. That’s why altering the physical structure influences your long-term memory.

For example, research shows that London taxi cab drivers who actually have to memorize a map of London to get their taxicab license have larger brain regions devoted to spatial or mapping memories.

Change 3 — Alter brain function
This one is crucial (and will also be mentioned in the following talks). When you use a brain region, it becomes more and more accessible. Whenever you access a specific memory, it becomes easier and easier to use again.

But Boyd’s talk doesn’t stop here. She further explores what limits or facilitates neuroplasticity. She researches how people can recover from brain damage such as a stroke, and she has developed therapies that prime or prepare the brain to learn, including stimulation, exercise and robotics.

Her research is also helpful for healthy brains. Here are the two most important lessons:

The primary driver of change in your brain is your behaviour.

There is no one size fits all approach to learning.

 


 

China Is About to Regulate AI—and the World Is Watching — from wired.com by Jennifer Conrad
Sweeping rules will cover algorithms that set prices, control search results, recommend videos, and filter content.

Excerpt:

On March 1, China will outlaw this kind of algorithmic discrimination as part of what may be the world’s most ambitious effort to regulate artificial intelligence. Under the rules, companies will be prohibited from using personal information to offer users different prices for a product or service.

The sweeping rules cover algorithms that set prices, control search results, recommend videos, and filter content. They will impose new curbs on major ride-hailing, ecommerce, streaming, and social media companies.

 

How I use Minecraft to help kids with autism — from ted.com by Stuart Duncan; with thanks to Dr. Kate Christian for this resource

Description:

The internet can be an ugly place, but you won’t find bullies or trolls on Stuart Duncan’s Minecraft server, AutCraft. Designed for children with autism and their families, AutCraft creates a safe online environment for play and self-expression for kids who sometimes behave a bit differently than their peers (and who might be singled out elsewhere). Learn more about one of the best places on the internet with this heartwarming talk.

 

Below are two excerpted snapshots from Stuart’s presentation:

Stuart Duncan speaking at TEDX York U

These are the words autistic students used to describe their experience with Stuart's Minecraft server

 

Timnit Gebru Says Artificial Intelligence Needs to Slow Down — from wired.com by Max Levy
The AI researcher, who left Google last year, says the incentives around AI research are all wrong.

Excerpt:

Artificial intelligence researchers are facing a problem of accountability: How do you try to ensure decisions are responsible when the decision maker is not a responsible person, but rather an algorithm? Right now, only a handful of people and organizations have the power—and resources—to automate decision-making.

Since leaving Google, Gebru has been developing an independent research institute to show a new model for responsible and ethical AI research. The institute aims to answer similar questions as her Ethical AI team, without fraught incentives of private, federal, or academic research—and without ties to corporations or the Department of Defense.

“Our goal is not to make Google more money; it’s not to help the Defense Department figure out how to kill more people more efficiently,” she said.

From DSC:
What does our society need to do to respond to this exponential pace of technological change? And where is the legal realm here?

Speaking of the pace of change…the following quote from The Future Direction And Vision For AI (from marktechpost.com by Imtiaz Adam) speaks to massive changes in this decade as well:

The next generation will feature 5G alongside AI and will lead to a new generation of Tech superstars in addition to some of the existing ones.

In future the variety, volume and velocity of data is likely to substantially increase as we move to the era of 5G and devices at the Edge of the network. The author argues that our experience of development with AI and the arrival of 3G followed by 4G networks will be dramatically overshadowed with the arrival of AI meets 5G and the IoT leading to the rise of the AIoT where the Edge of the network will become key for product and service innovation and business growth.

Also related/see:

 
© 2022 | Daniel Christian