AI/ML in EdTech: The Miracle, The Grind, and the Wall — from eliterate.us by Michael Feldstein

Excerpt:

Essentially, I see three stages in working with artificial intelligence and machine learning (AI/ML). I call them the miracle, the grind, and the wall. These stages have implications both for how we can get seduced by these technologies and for how we can get bitten by them. The ethical implications are important.

 

This Uncensored AI Art Tool Can Generate Fantasies—and Nightmares — from wired.com by Will Knight
Open source project Stable Diffusion allows anyone to conjure images with algorithms, but some fear it will be used to create unethical horrors.

Excerpt:

Image generators like Stable Diffusion can create what look like real photographs or hand-crafted illustrations depicting just about anything a person can imagine. This is possible thanks to algorithms that learn to associate the properties of a vast collection of images taken from the web and image databases with their associated text labels. Algorithms learn to render new images to match a text prompt in a process that involves adding and removing random noise to an image.
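The noising-and-denoising idea the excerpt describes can be sketched in a few lines of NumPy. This is a toy illustration, not Stable Diffusion's actual sampler: the linear blending schedule, step count, and function names are all invented for clarity, and a real model replaces the "perfect noise prediction" below with a trained neural network conditioned on the text prompt.

```python
import numpy as np

def add_noise(image, t, num_steps=1000):
    """Forward process: blend the image with Gaussian noise.
    By the final step the signal is almost entirely noise."""
    alpha = 1.0 - t / num_steps              # fraction of signal kept
    noise = np.random.randn(*image.shape)
    return alpha * image + (1.0 - alpha) * noise, noise

def toy_denoise_step(noisy, predicted_noise, t, num_steps=1000):
    """Reverse process: subtract an estimate of the noise.
    In a real diffusion model, a learned network makes this estimate."""
    alpha = 1.0 - t / num_steps
    return (noisy - (1.0 - alpha) * predicted_noise) / alpha

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))          # stand-in for a training image
noisy, true_noise = add_noise(image, t=500)

# With a perfect noise estimate, one step recovers the image exactly;
# a trained model only approximates this, over many small steps.
recovered = toy_denoise_step(noisy, true_noise, t=500)
print(np.allclose(recovered, image))
```

The training objective, roughly, is to make a network's noise prediction match `true_noise` given only `noisy` and the prompt; generation then starts from pure noise and applies many denoising steps.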

Also relevant/see:

There’s a text-to-image AI art app for Mac now—and it will change everything — from fastcompany.com by Jesus Diaz
Diffusion Bee harnesses the power of the open source text-to-image AI Stable Diffusion, turning it into a one-click Mac App. Brace yourself for a new creativity Big Bang.


Speaking of AI, also see:

 

The Multidisciplinary Approach to Thinking — from fs.blog by Peter Kaufman; with thanks to Robert Ferraro for this resource

Excerpt:

Peter Kaufman is one of the most successful businessmen of our time, and yet few people have ever heard of him. He’s the CEO of Glenair, an aerospace company based in California, and the editor of Poor Charlie’s Almanack, a book about Charlie Munger.

This speech was to the California State Polytechnic University, Pomona Economics Club. The transcript and audio are reproduced here with the permission of Peter Kaufman.

As with many “conversational” talks given without notes, it’s better to listen to the audio to pick up on subtleties that won’t come across in the lightly edited transcript.

There is a simple takeaway. Using a true multidisciplinary understanding of things, Peter identifies two often overlooked, parabolic “Big Ideas”: 1) Mirrored Reciprocation (go positive and go first) and 2) Compound Interest (being constant). A great “Life Hack” is to simply combine these two into one basic approach to living your life: “Go positive and go first, and be constant in doing it.”

 

Howard University receives 2 bomb threats in a week as some HBCU students say they feel forgotten after no arrests in previous threats — from cnn.com by Jacquelyne Germain

Excerpt:

(CNN) As Howard University students returned to campus on Monday for the start of the fall semester, the university received two bomb threats just months after the school and other historically Black colleges and universities had to lock down or postpone classes because of similar threats.

From DSC:
I wonder if the response would look different if this happened at one of the Ivy League schools…? Yeah, probably so. Either way, it's incredibly sad that this happens at all.


Addendum on 9/2/22:

DHS details response to HBCU bomb threats but says ‘much more’ needs to be done — from highereddive.com by Natalie Schwartz


 

You just hired a deepfake. Get ready for the rise of imposter employees. — from protocol.com by Mike Elgan
New technology — plus the pandemic remote work trend — is helping fraudsters use someone else’s identity to get a job.

Excerpt:

Companies have been increasingly complaining to the FBI about prospective employees using real-time deepfake video and deepfake audio for remote interviews, along with personally identifiable information (PII), to land jobs at American companies.

One place they’re likely getting the PII is through posting fake job openings, which enables them to harvest job candidate information, resumes and more, according to the FBI.

The main drivers appear to be money, espionage, access to company systems and unearned career advancement.

 

10 in-demand soft skills to supercharge your career — from fastcompany.com by Melissa Rosenthal
Your résumé and experience may get you through the door, but these soft skills could help you clinch the job.

Excerpt:

A LinkedIn Global Talent Trends report shows that 92% of talent professionals reported that soft skills are equally or more important to hire for than hard skills. The same study reveals that 89% surveyed said that when a new hire doesn’t work out, it’s because they lack much-needed soft skills.

The hard truth about hard skills is that they can have a short half-life. Constant innovation, technology updates, and new feature releases render many of these skills obsolete quickly. Meanwhile, soft skills never expire—they are relevant, transferable, and keep a person highly employable.

 

New: Futurist Friday Podcast Interview with Gerd Leonhard: TheGoodFuture? — from futuristgerd.com by Gerd Leonhard

Excerpt:

Over the course of the summer of 2022, Don MacPherson and 12 Geniuses are releasing 12 interviews with futurists and forward thinkers in order to help their global audience of leaders become better visionaries for their organizations and be more prepared for the uncertain future.

In this episode, Gerd Leonhard joins the show. First, he points out that “the future is already here, we just haven’t paid enough attention to it.” He talks about how technology is promising to make us superhuman, that we are in the biggest shift era in recent history as far as energy and climate are concerned, and that machines and artificial intelligence are starting to emulate humanity.

 

Just Because You Can Doesn’t Mean You Should: What Genetic Engineers Can Learn From ‘Jurassic World’ — from singularityhub.com by Andrew Maynard

Excerpt:

Maybe this is the abiding message of Jurassic World: Dominion—that despite incredible advances in genetic design and engineering, things can and will go wrong if we don’t embrace the development and use of the technology in socially responsible ways.

The good news is that we still have time to close the gap between “could” and “should” in how scientists redesign and reengineer genetic code. But as Jurassic World: Dominion reminds moviegoers, the future is often closer than it might appear.

 

Inside a radical new project to democratize AI — from technologyreview.com by Melissa Heikkilä
A group of over 1,000 AI researchers has created a multilingual large language model bigger than GPT-3—and they’re giving it out for free.

Excerpt:

PARIS — This is as close as you can get to a rock concert in AI research. Inside the supercomputing center of the French National Center for Scientific Research, on the outskirts of Paris, rows and rows of what look like black fridges hum at a deafening 100 decibels.

They form part of a supercomputer that has spent 117 days gestating a new large language model (LLM) called BLOOM that its creators hope represents a radical departure from the way AI is usually developed.

Unlike other, more famous large language models such as OpenAI’s GPT-3 and Google’s LaMDA, BLOOM (which stands for BigScience Large Open-science Open-access Multilingual Language Model) is designed to be as transparent as possible, with researchers sharing details about the data it was trained on, the challenges in its development, and the way they evaluated its performance. OpenAI and Google have not shared their code or made their models available to the public, and external researchers have very little understanding of how these models are trained.

Another item re: AI:

Not my job: AI researchers building surveillance tech and deepfakes resist ethical concerns — from protocol.com by Kate Kaye
The computer vision research community is behind on AI ethics, but it’s not just a research problem. Practitioners say the ethics disconnect persists as young computer vision scientists make their way into the ranks of corporate AI.

For the first time, the Computer Vision and Pattern Recognition Conference — a global event that attracted companies including Amazon, Google, Microsoft and Tesla to recruit new AI talent this year — “strongly encouraged” researchers whose papers were accepted to the conference to include a discussion about potential negative societal impacts of their research in their submission forms.

 

The Future of Education | By Futurist Gerd Leonhard | A Video for EduCanada — from futuristgerd.com

Per Gerd:

Recently, I was invited by the Embassy of Canada in Switzerland to create this special presentation and promotional video discussing the Future of Education and to explore how Canada might be leading the way. Here are some of the key points I spoke about in the video. Watch the whole thing here: the Future of Education.

 

…because by 2030, I believe, the traditional way of learning — just in case — you know, storing, downloading information — will be replaced by learning just in time, on-demand, learning to learn, unlearning, relearning, and the importance of being the right person. Character skills, personality skills, traits, they may very well rival the value of having the right degree.

If you learn like a robot…you’ll never have a job to begin with.

Gerd Leonhard


Also relevant/see:

The Next 10 Years: Rethinking Work and Revolutionising Education (Gerd Leonhard’s keynote in Riga) — from futuristgerd.com


 
 

Will Learning Move into the Metaverse? — from learningsolutionsmag.com by Pamela Hogle

Excerpt:

In its 2022 Tech Trends report, the Future Today Institute predicts that, “The future of work will become more digitally immersive as companies deploy virtual meeting platforms, digital experiences, and mixed reality worlds.”

Learning leaders are likely to spearhead the integration of their organizations’ workers into a metaverse, whether by providing training in using the tools that make a metaverse possible or through developing training and performance support resources that learners will use in an immersive environment.

Advantages of moving some workplace collaboration and learning into a metaverse include ease of scaling and globalization. The Tech Trends report mentions personalization at scale and easy multilingual translation as advantages of “synthetic media”—algorithmically generated digital content, which could proliferate in metaverses.

Also see:

Future Today Institute — Tech Trends 2022


Also from learningsolutionsmag.com, see:

Manage Diverse Learning Ecosystems with Federated Governance

Excerpt:

So, over time, the L&D departments eventually go back to calling their own shots.

What does this mean for the learning ecosystem? If each L&D team chooses its own learning platforms, maintenance and support will be a nightmare. Each L&D department may be happy with the autonomy but learners have no patience for navigating multiple LMSs or going to several systems to get their training records.

Creating common infrastructure among dispersed groups
Here is the problem: How can groups that have no accountability to each other share a common infrastructure?

 

Every month, Essentials publishes an Industry Trend Report on AI in general and the following related topics:

  • AI Research
  • AI Applied Use Cases
  • AI Ethics
  • AI Robotics
  • AI Marketing
  • AI Cybersecurity
  • AI Healthcare

The Race to Hide Your Voice — from wired.com by Matt Burgess
Voice recognition—and data collection—have boomed in recent years. Researchers are figuring out how to protect your privacy.

AI: Where are we now? — from educause.edu by EDUCAUSE
Is the use of AI in higher education today invisible? dynamic? perilous? Maybe it’s all three.

What is artificial intelligence and how is it used? — from europarl.europa.eu; with thanks to Tom Barrett for this resource

 

Radar Trends to Watch: June 2022 — from oreilly.com

Excerpt:

The explosion of large models continues. Several developments are especially noteworthy. DeepMind’s Gato model is unique in that it’s a single model that’s trained for over 600 different tasks; whether or not it’s a step towards general intelligence (the ensuing debate may be more important than the model itself), it’s an impressive achievement. Google Brain’s Imagen creates photorealistic images that are impressive, even after you’ve seen what DALL-E 2 can do. And Allen AI’s Macaw (surely an allusion to Emily Bender and Timnit Gebru’s Stochastic Parrots paper) is open source, one tenth the size of GPT-3, and claims to be more accurate. Facebook/Meta is also releasing an open source large language model, including the model’s training log, which records in detail the work required to train it.

 

 

How to ensure we benefit society with the most impactful technology being developed today — from deepmind.com by Lila Ibrahim

In 2000, I took a sabbatical from my job at Intel to visit the orphanage in Lebanon where my father was raised. For two months, I worked to install 20 PCs in the orphanage’s first computer lab, and to train the students and teachers to use them. The trip started out as a way to honour my dad. But being in a place with such limited technical infrastructure also gave me a new perspective on my own work. I realised that without real effort by the technology community, many of the products I was building at Intel would be inaccessible to millions of people. I became acutely aware of how that gap in access was exacerbating inequality; even as computers solved problems and accelerated progress in some parts of the world, others were being left further behind. 

After that first trip to Lebanon, I started reevaluating my career priorities. I had always wanted to be part of building groundbreaking technology. But when I returned to the US, my focus narrowed in on helping build technology that could make a positive and lasting impact on society. That led me to a variety of roles at the intersection of education and technology, including co-founding Team4Tech, a non-profit that works to improve access to technology for students in developing countries. 


Also relevant/see:

Microsoft AI news: Making AI easier, simpler, more responsible — from venturebeat.com by Sharon Goldman

But one common theme bubbles over consistently: For AI to become more useful for business applications, it needs to be easier, simpler, more explainable, more accessible and, most of all, responsible.

 

 
© 2025 | Daniel Christian