Everyday Media Literacy: An Analog Guide for Your Digital Life — from routledge.com by Sue Ellen Christian

In this second edition, award-winning educator Sue Ellen Christian offers students an accessible and informed guide to how they can consume and create media intentionally and critically.

The textbook applies media literacy principles and critical thinking to the key issues facing young adults today, from analyzing and creating media messages to verifying information and understanding online privacy. Through discussion prompts, writing exercises, key terms, and links, readers are provided with a framework from which to critically consume and create media in their everyday lives. This new edition includes updates covering privacy aspects of AI, VR and the metaverse, and a new chapter on digital audiences, gaming, and the creative and often unpaid labor of social media and influencers. Chapters examine news literacy, online activism, digital inequality, social media and identity, and global media corporations, giving readers a nuanced understanding of the key concepts at the core of media literacy. Concise, creative, and curated, this book highlights the cultural, political, and economic dynamics of media in contemporary society, and how consumers can mindfully navigate their daily media use.

This textbook is perfect for students and educators of media literacy, journalism, and education looking to build their understanding in an engaging way.

 

180 Degree Turn: NYC District Goes From Banning ChatGPT to Exploring AI’s Potential — from edweek.org by Alyson Klein (behind paywall)

New York City Public Schools will launch an Artificial Intelligence Policy Lab to guide the nation’s largest school district’s approach to this rapidly evolving technology.


The Leader’s Blindspot: How to Prepare for the Real Future — from preview.mailerlite.io by the AIEducator
The Commonly Held Belief: AI Will Automate Only Boring, Repetitive Tasks First

The Days of Task-Based Views on AI Are Numbered
The winds of change are sweeping across the educational landscape (emphasis DSC):

  1. Multifaceted AI: AI technologies are not one-trick ponies; they are evolving into complex systems that can handle a variety of tasks.
  2. Rising Expectations: As technology becomes integral to our lives, the expectations for personalised, efficient education are soaring.
  3. Skill Transformation: Future job markets will demand a different skill set, one that is symbiotic with AI capabilities.

Teaching: How to help students better understand generative AI — from chronicle.com by Beth McMurtrie
Beth describes ways professors have used ChatGPT to bolster critical thinking in writing-intensive courses

Kevin McCullen, an associate professor of computer science at the State University of New York at Plattsburgh, teaches a freshman seminar about AI and robotics. As part of the course, students read Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots, by John Markoff. McCullen had the students work in groups to outline and summarize the first three chapters. Then he showed them what ChatGPT had produced in an outline.

“Their version and ChatGPT’s version seemed to be from two different books,” McCullen wrote. “ChatGPT’s version was essentially a ‘laundry list’ of events. Their version was narratives of what they found interesting. The students had focused on what the story was telling them, while ChatGPT focused on who did what in what year.” The chatbot also introduced false information, such as wrong chapter names.

The students, he wrote, found the writing “soulless.”


7 Questions with Dr. Cristi Ford, VP of Academic Affairs at D2L — from campustechnology.com by Rhea Kelly

In the Wild West of generative AI, educators and institutions are working out how best to use the technology for learning. How can institutions define AI guidelines that allow for experimentation while providing students with consistent guidance on appropriate use of AI tools?

To find out, we spoke with Dr. Cristi Ford, vice president of academic affairs at D2L. With more than two decades of educational experience in nonprofit, higher education, and K-12 institutions, Ford works with D2L’s institutional partners to elevate best practices in teaching, learning, and student support. Here, she shares her advice on setting and communicating AI policies that are consistent and future-ready.


AI Platform Built by Teachers, for Teachers, Class Companion Raises $4 Million to Tap Into the Power of Practice — from prweb.com

“If we want to use AI to improve education, we need more teachers at the table,” said Avery Pan, Class Companion co-founder and CEO. “Class Companion is designed by teachers, for teachers, to harness the most sophisticated AI and improve their classroom experience. Developing technologies specifically for teachers is imperative to supporting our next generation of students and education system.”


7 Questions on Generative AI in Learning Design — from campustechnology.com by Rhea Kelly
Open LMS Adoption and Education Specialist Michael Vaughn on the challenges and possibilities of using artificial intelligence to move teaching and learning forward.

The potential for artificial intelligence tools to speed up course design could be an attractive prospect for overworked faculty and spread-thin instructional designers. Generative AI can shine, for example, in tasks such as reworking assessment question sets, writing course outlines and learning objectives, and generating subtitles for audio and video clips. The key, says Michael Vaughn, adoption and education specialist at learning platform Open LMS, is treating AI like an intern who can be guided and molded along the way, and whose work is then vetted by a human expert.

We spoke with Vaughn about how best to utilize generative AI in learning design, ethical issues to consider, and how to formulate an institution-wide policy that can guide AI use today and in the future.


10 Ways Technology Leaders Can Step Up and Into the Generative AI Discussion in Higher Ed — from er.educause.edu by Lance Eaton and Stan Waddell

  1. Offer Short Primers on Generative AI
  2. Explain How to Get Started
  3. Suggest Best Practices for Engaging with Generative AI
  4. Give Recommendations for Different Groups
  5. Recommend Tools
  6. Explain the Closed vs. Open-Source Divide
  7. Avoid Pitfalls
  8. Conduct Workshops and Events
  9. Spot the Fake
  10. Provide Proper Guidance on the Limitations of AI Detectors


 

Canva’s new AI tools automate boring, labor-intensive design tasks — from theverge.com by Jess Weatherbed
Magic Studio features like Magic Switch automatically convert your designs into blogs, social media posts, emails, and more to save time on manually editing documents.


Canva launches Magic Studio, partners with Runway ML for video — from bensbites.beehiiv.com by Ben Tossell

Here are the highlights of launched features under the new Magic Studio:

  • Magic Design – Turn ideas into designs instantly with AI-generated templates.
  • Magic Switch – Transform content into different formats and languages with one click.
  • Magic Grab – Make images editable like Canva templates for easy editing.
  • Magic Expand – Use AI to expand images beyond the original frame.
  • Magic Morph – Transform text and shapes with creative effects and prompts.
  • Magic Edit – Make complex image edits using simple text prompts.
  • Magic Media – Generate professional photos, videos and artworks from text prompts.
  • Magic Animate – Add animated transitions and motion to designs instantly.
  • Magic Write – Generate draft text and summaries powered by AI.



Adobe Firefly

Meet Adobe Firefly -- Adobe is going hard with the use of AI. This is a key product along those lines.


Addendums on 10/11/23:


Adobe Releases New AI Models Aimed at Improved Graphic Design — from bloomberg.com
New version of Firefly is bigger than initial tool, Adobe says; Illustrator and Express programs each get their own generative tools


 

Deepfakes: An evidentiary tsunami! — from thebrainyacts.beehiiv.com by Josh Kubicki

Excerpt (emphasis DSC):

I’ve written and spoken about this before but the rise of deepfakes is going to have a profound impact on courts throughout the world. This week we saw three major deepfake stories.

Whether you are a lawyer or not, this topic will impact you. So, please consider these questions as we will need to have answers for each one very soon (if not now).

  1. How will we establish a reliable and consistent standard to authenticate digital evidence as genuine and not altered by deepfake technology?
  2. Will the introduction of deepfakes shift the traditional burdens of proof or production, especially when digital evidence is introduced?
  3. Will courts require expert witnesses for digital evidence authentication in every case, and what standards will be used to qualify these experts?
  4. Are there existing technological tools or methods to detect deepfakes? (Yes, there are, but none is 100% reliable.) How can courts keep abreast of rapidly advancing technology?
  5. …plus several more questions

From DSC:
What are law schools doing about this? Are they addressing this?


And speaking of legal matters and law schools, this might be interesting or helpful to someone out there:

 

Introducing Magic Studio: the power of AI, all in one place — from canva.com


Also relevant/see:

Canva’s new AI features make everyone a designer — from joinsuperhuman.ai by Zain Kahn

…here are all the cool new ways you can use Canva to create pro-grade designs for your work:

  • Magic Media: Generate photos and videos with text prompts.
  • Magic Design: Turn ideas into designs with AI-generated templates.
  • Magic Switch: Translate content into different languages and formats.
  • Magic Expand: Make images bigger with AI.
  • Magic Edit: Edit images with simple text prompts.
  • Magic Morph: Transform text and shapes with visual effects.
  • Magic Write: Generate texts and summaries with AI.

Canva also announced that it’s creating a $200 million fund to compensate creators who opt in to having their content used to train its AI models.

 

As AI Chatbots Rise, More Educators Look to Oral Exams — With High-Tech Twist — from edsurge.com by Jeffrey R. Young

To use Sherpa, an instructor first uploads the reading they’ve assigned, or they can have the student upload a paper they’ve written. Then the tool asks a series of questions about the text (either questions input by the instructor or generated by the AI) to test the student’s grasp of key concepts. The software gives the instructor the choice of whether they want the tool to record audio and video of the conversation, or just audio.

The tool then uses AI to transcribe the audio from each student’s recording and flags areas where the student answer seemed off point. Teachers can review the recording or transcript of the conversation and look at what Sherpa flagged as trouble to evaluate the student’s response.

 

Is Your AI Model Going Off the Rails? There May Be an Insurance Policy for That — from wsj.com by Belle Lin; via Brainyacts
As generative AI creates new risks for businesses, insurance companies sense an opportunity to cover the ways AI could go wrong

The many ways a generative artificial intelligence project can go off the rails pose an opportunity for insurance companies, even as those grim scenarios keep business technology executives up at night.

Taking a page from cybersecurity insurance, which saw an uptick in the wake of major breaches several years ago, insurance providers have started taking steps into the AI space by offering financial protection against models that fail.

Corporate technology leaders say such policies could help them address risk-management concerns from board members, chief executives and legal departments.

 

Humane’s ‘Ai Pin’ debuts on the Paris runway — from techcrunch.com by Brian Heater

“The [Ai Pin is a] connected and intelligent clothing-based wearable device [that] uses a range of sensors that enable contextual and ambient compute interactions,” the company noted at the time. “The Ai Pin is a type of standalone device with a software platform that harnesses the power of Ai to enable innovative personal computing experiences.”


Also relevant/see:

 



AI Meets Med School — from insidehighered.com by Lauren Coffey
Adding to academia’s AI embrace, two institutions in the University of Texas system are jointly offering a medical degree paired with a master’s in artificial intelligence.

Doctor AI

The University of Texas at San Antonio has launched a dual-degree program combining medical school with a master’s in artificial intelligence.

Several universities across the nation have begun integrating AI into medical practice. Medical schools at the University of Florida, the University of Illinois, the University of Alabama at Birmingham and Stanford and Harvard Universities all offer variations of a certificate in AI in medicine that is largely geared toward existing professionals.

“I think schools are looking at, ‘How do we integrate and teach the uses of AI?’” Dr. Whelan said. “And in general, when there is an innovation, you want to integrate it into the curriculum at the right pace.”

Speaking of emerging technologies and med school, also see:


Though not necessarily edu-related, this was interesting to me and hopefully will be to some profs and/or students out there:


How to stop AI deepfakes from sinking society — and science — from nature.com by Nicola Jones; via The Neuron
Deceptive videos and images created using generative AI could sway elections, crash stock markets and ruin reputations. Researchers are developing methods to limit their harm.





Exploring the Impact of AI in Education with PowerSchool’s CEO & Chief Product Officer — from michaelbhorn.substack.com by Michael B. Horn

With just under 10 acquisitions in the last 5 years, PowerSchool has been active in transforming itself from a student information systems company to an integrated education company that works across the day and lifecycle of K–12 students and educators. What’s more, the company turned heads in June with its announcement that it was partnering with Microsoft to integrate AI into its PowerSchool Performance Matters and PowerSchool LearningNav products to empower educators in delivering transformative personalized-learning pathways for students.


AI Learning Design Workshop: The Trickiness of AI Bootcamps and the Digital Divide — from eliterate.us by Michael Feldstein

As readers of this series know, I’ve developed a six-session design/build workshop series for learning design teams to create an AI Learning Design Assistant (ALDA). In my last post in this series, I provided an elaborate ChatGPT prompt that can be used as a rapid prototype that everyone can try out and experiment with. In this post, I’d like to focus on how to address the challenges of AI literacy effectively and equitably.


Global AI Legislation Tracker — from iapp.org; via Tom Barrett

Countries worldwide are designing and implementing AI governance legislation commensurate to the velocity and variety of proliferating AI-powered technologies. Legislative efforts include the development of comprehensive legislation, focused legislation for specific use cases, and voluntary guidelines and standards.

This tracker identifies legislative policy and related developments in a subset of jurisdictions. It is not globally comprehensive, nor does it include all AI initiatives within each jurisdiction, given the rapid and widespread policymaking in this space. This tracker offers brief commentary on the wider AI context in specific jurisdictions, and lists index rankings provided by Tortoise Media, the first index to benchmark nations on their levels of investment, innovation and implementation of AI.


Diving Deep into AI: Navigating the L&D Landscape — from learningguild.com by Markus Bernhardt

The prospect of AI-powered, tailored, on-demand learning and performance support is exhilarating: It starts with traditional digital learning made into fully adaptive learning experiences, which would adjust to strengths and weaknesses for each individual learner. The possibilities extend all the way through to simulations and augmented reality, an environment to put into practice knowledge and skills, whether as individuals or working in a team simulation. The possibilities are immense.



Learning Lab | ChatGPT in Higher Education: Exploring Use Cases and Designing Prompts — from events.educause.edu; via Robert Gibson on LinkedIn

Part 1: October 16 | 3:00–4:30 p.m. ET
Part 2: October 19 | 3:00–4:30 p.m. ET
Part 3: October 26 | 3:00–4:30 p.m. ET
Part 4: October 30 | 3:00–4:30 p.m. ET


Mapping AI’s Role in Education: Pioneering the Path to the Future — from marketscale.com by Michael B. Horn, Jacob Klein, and Laurence Holt

Welcome to The Future of Education with Michael B. Horn. In this insightful episode, Michael gains perspective on mapping AI’s role in education from Jacob Klein, a Product Consultant at Oko Labs, and Laurence Holt, an Entrepreneur In Residence at the XQ Institute. Together, they peer into the burgeoning world of AI in education, analyzing its potential, risks, and roadmap for integrating it seamlessly into learning environments.


Ten Wild Ways People Are Using ChatGPT’s New Vision Feature — from newsweek.com by Meghan Roos; via Superhuman

Below are 10 creative ways ChatGPT users are making use of this new vision feature.


 

Reimagining Hiring and Learning with the Power of AI — from linkedin.com by Hari Srinivasan

That’s why today we’re piloting new tools like our new release of Recruiter 2024 and LinkedIn Learning’s AI-powered coaching experience to help with some of the heavy lifting so HR professionals can focus on what matters most.

“AI is quickly transforming recruitment, training, and many other HR practices,” says Josh Bersin, industry analyst and CEO of The Josh Bersin Company. “LinkedIn’s new features in Recruiter 2024 and LinkedIn Learning can massively improve recruiter productivity and help all employees build the skills they need to grow in their careers.”

By pairing generative AI with our unique insights gained from the more than 950 million professionals, 65 million companies, and 40,000 skills on our platform, we’ve reimagined our Recruiter product to help our customers find that short list of qualified candidates — faster.

From DSC:
While I’m very interested to see how Microsoft’s AI-powered LinkedIn Learning coach will impact people’s growth/development, I need to admit that I still approach AI and hiring/finding talent with caution. I’m sure I was weeded out by several Applicant Tracking Systems (ATS) back in 2017 when I was looking for my next position — and I only applied to positions that I had the qualifications for. And if you’ve tried to get a job recently, I bet you were weeded out by an ATS as well. So while this might help recruiters, the jury is still out for me as to whether these developments are good or bad for the rest of society.

Traditional institutions of higher education may want to research these developments to see which SKILLS are in demand.

Also relevant/see:

LinkedIn Launches Exciting Gen AI Features in Recruiter and Learning — from joshbersin.com by Josh Bersin

This week LinkedIn announced some massive Gen AI features in its two flagship products: LinkedIn Recruiter and LinkedIn Learning. Let me give you an overview.

LinkedIn goes big on new AI tools for learning, recruitment, marketing and sales, powered by OpenAI — from techcrunch.com by Ingrid Lunden

LinkedIn Learning will be incorporating AI in the form of a “learning coach” that is essentially built as a chatbot. Initially the advice that it will give will be trained on suggestions and tips, and it will be firmly in the camp of soft skills. One example: “How can I delegate tasks and responsibility effectively?”

The coach might suggest actual courses, but more importantly, it will also provide information and advice directly to users. LinkedIn itself has a giant catalogue of learning videos, covering both those soft skills and the technical skills and other knowledge needed for specific jobs. It will be interesting to see if LinkedIn extends the coach to covering that material, too.

 

 

Will Legal Prompt Engineers Replace Lawyers? — from forbes.com by Charles Lew


From DSC:
I’m not crazy about the clickbait nature of the title, but the article lists some ways that AI could impact, and already is impacting, the legal realm.
For example, here’s an excerpt:

Engineers in this capacity might not be legal experts, but they excel in framing precise questions for these models, drawing out answers that align with legal nuances. Essentially, these experts represent a significant paradigm shift, evolving the role of legal practitioners.

In legal research, an LPE harnesses advanced models to improve comprehension. Specific legal texts, statutes or summaries fed into the AI yield clarifications, contextual insights or succinct summaries. This assists legal professionals in quickly grasping the implications of texts, streamlining the research process.

In legal drafting, AI can suggest relevant clauses, pinpoint angles of an argument and provide recommendations to enhance clarity. It ensures consistency in terminology and references, detects redundant language and verifies the accuracy of legal citations. It flags potential high-risk language, aligns with jurisdictional norms and prioritizes relevance through contextual analysis. The system checks coherence in stipulated timelines and identifies potentially biased or non-inclusive language.

For training and brainstorming, LPEs can present hypothetical situations, formulating questions that unearth potential legal arguments or implications. Not only does it serve as an instructional tool for budding legal professionals, it also exercises a fresh perspective for seasoned attorneys.


12 Thoughts on Promises and Challenges of AI in Legal after Yesterday’s AI Summit at Harvard Law School — from lawnext.com by Bob Ambrogi

  1. Armed with AI, pro se litigants could overwhelm the courts, so the courts need to be prepared to respond in kind.
  2. If AI is to enhance access to justice, it will not be only by increasing lawyer productivity, but also by directly empowering consumers.
  3. Even the AI experts don’t understand AI.
  4. Experts are already striving to make the black box of AI more transparent.
  5. Even as law firms adopt AI, they are finding implementation to be a challenge.
  6. …and more

 

Comparing Online and AI-Assisted Learning: A Student’s View — from educationnext.org by Daphne Goldstein
An 8th grader reviews traditional Khan Academy and its AI-powered tutor, Khanmigo

Hi everyone, I’m Daphne, a 13-year-old going into 8th grade.

I’m writing to compare “regular” Khan Academy (no AI) to Khanmigo (powered by GPT-4), using three of my own made-up criteria.

They are: efficiency, effectiveness, and enjoyability. Efficiency is how fast I am able to cover a math topic and get basic understanding. Effectiveness is my quality of understanding—the difference between basic and advanced understanding. And the final one—most important to kids and maybe least important to adults who make kids learn math—is enjoyability.




First Impressions with GPT-4V(ision) — from blog.roboflow.com by James Gallagher; via Donald Clark on LinkedIn

On September 25th, 2023, OpenAI announced the rollout of two new features that extend how people can interact with its recent and most advanced model, GPT-4: the ability to ask questions about images and to use speech as an input to a query.

This functionality marks GPT-4’s move into being a multimodal model. This means that the model can accept multiple “modalities” of input – text and images – and return results based on those inputs. Bing Chat, developed by Microsoft in partnership with OpenAI, and Google’s Bard model both support images as input, too. Read our comparison post to see how Bard and Bing perform with image inputs.

In this guide, we are going to share our first impressions with the GPT-4V image input feature.


 

Student Use Cases for AI: Start by Sharing These Guidelines with Your Class — from hbsp.harvard.edu by Ethan Mollick and Lilach Mollick

To help you explore some of the ways students can use this disruptive new technology to improve their learning—while making your job easier and more effective—we’ve written a series of articles that examine the following student use cases:

  1. AI as feedback generator
  2. AI as personal tutor
  3. AI as team coach
  4. AI as learner

Recap: Teaching in the Age of AI (What’s Working, What’s Not) — from celt.olemiss.edu by Derek Bruff, visiting associate director

Earlier this week, CETL and AIG hosted a discussion among UM faculty and other instructors about teaching and AI this fall semester. We wanted to know what was working when it came to policies and assignments that responded to generative AI technologies like ChatGPT, Google Bard, Midjourney, DALL-E, and more. We were also interested in hearing what wasn’t working, as well as questions and concerns that the university community had about teaching and AI.


Teaching: Want your students to be skeptical of ChatGPT? Try this. — from chronicle.com by Beth McMurtrie

Then, in class he put them into groups where they worked together to generate a 500-word essay on “Why I Write” entirely through ChatGPT. Each group had complete freedom in how they chose to use the tool. The key: They were asked to evaluate their essay on how well it offered a personal perspective and demonstrated a critical reading of the piece. Weiss also graded each ChatGPT-written essay and included an explanation of why he came up with that particular grade.

After that, the students were asked to record their observations on the experiment on the discussion board. Then they came together again as a class to discuss the experiment.

Weiss shared some of his students’ comments with me (with their approval). Here are a few:


2023 EDUCAUSE Horizon Action Plan: Generative AI — from library.educause.edu by Jenay Robert and Nicole Muscanell

Asked to describe the state of generative AI that they would like to see in higher education 10 years from now, panelists collaboratively constructed their preferred future.


Will Teachers Listen to Feedback From AI? Researchers Are Betting on It — from edsurge.com by Olina Banerji

Julie York, a computer science and media teacher at South Portland High School in Maine, was scouring the internet for discussion tools for her class when she found TeachFX. An AI tool that takes recorded audio from a classroom and turns it into data about who talked and for how long, it seemed like a cool way for York to discuss issues of data privacy, consent and bias with her students. But York soon realized that TeachFX was meant for much more.

York found that TeachFX listened to her very carefully, and generated a detailed feedback report on her specific teaching style. York was hooked, in part because she says her school administration simply doesn’t have the time to observe teachers while tending to several other pressing concerns.

“I rarely ever get feedback on my teaching style. This was giving me 100 percent quantifiable data on how many questions I asked and how often I asked them in a 90-minute class,” York says. “It’s not a rubric. It’s a reflection.”

TeachFX is easy to use, York says. It’s as simple as switching on a recording device.

But TeachFX, she adds, is focused not on her students’ achievements, but instead on her performance as a teacher.


ChatGPT Is Landing Kids in the Principal’s Office, Survey Finds — from the74million.org by Mark Keierleber
While educators worry that students are using generative AI to cheat, a new report finds students are turning to the tool more for personal problems.

Indeed, 58% of students, and 72% of those in special education, said they’ve used generative AI during the 2022-23 academic year, just not primarily for the reasons that teachers fear most. Among youth who completed the nationally representative survey, just 23% said they used it for academic purposes and 19% said they’ve used the tools to help them write and submit a paper. Instead, 29% reported having used it to deal with anxiety or mental health issues, 22% for issues with friends and 16% for family conflicts.

Part of the disconnect dividing teachers and students, researchers found, may come down to gray areas. Just 40% of parents said they or their child were given guidance on ways they can use generative AI without running afoul of school rules. Only 24% of teachers say they’ve been trained on how to respond if they suspect a student used generative AI to cheat.


Embracing weirdness: What it means to use AI as a (writing) tool — from oneusefulthing.org by Ethan Mollick
AI is strange. We need to learn to use it.

But LLMs are not Google replacements, or thesauruses or grammar checkers. Instead, they are capable of so much more weird and useful help.


Diving Deep into AI: Navigating the L&D Landscape — from learningguild.com by Markus Bernhardt

The prospect of AI-powered, tailored, on-demand learning and performance support is exhilarating: It starts with traditional digital learning made into fully adaptive learning experiences, which would adjust to strengths and weaknesses for each individual learner. The possibilities extend all the way through to simulations and augmented reality, an environment to put into practice knowledge and skills, whether as individuals or working in a team simulation. The possibilities are immense.

Thanks to generative AI, such visions are transitioning from fiction to reality.


Video: Unleashing the Power of AI in L&D — from drphilippahardman.substack.com by Dr. Philippa Hardman
An exclusive video walkthrough of my keynote at Sweden’s national L&D conference this week

Highlights

  • The wicked problem of L&D: last year, $371 billion was spent on workplace training globally, but only 12% of employees apply what they learn in the workplace
  • An innovative approach to L&D: when Mastery Learning is used to design & deliver workplace training, the rate of “transfer” (i.e. behaviour change & application) is 67%
  • AI 101: quick summary of classification, generative and interactive AI and its uses in L&D
  • The impact of AI: my initial research shows that AI has the potential to scale Mastery Learning and, in the process:
    • reduce the “time to training design” by 94% > faster
    • reduce the cost of training design by 92% > cheaper
    • increase the quality of learning design & delivery by 96% > better
  • Research also shows that the vast majority of workplaces are using AI only to “oil the machine” rather than innovate and improve our processes & practices
  • Practical tips: how to get started on your AI journey in your company, and a glimpse of what L&D roles might look like in a post-AI world

 

ChatGPT can now see, hear, and speak — from openai.com
We are beginning to roll out new voice and image capabilities in ChatGPT. They offer a new, more intuitive type of interface by allowing you to have a voice conversation or show ChatGPT what you’re talking about.

Voice and image give you more ways to use ChatGPT in your life. Snap a picture of a landmark while traveling and have a live conversation about what’s interesting about it. When you’re home, snap pictures of your fridge and pantry to figure out what’s for dinner (and ask follow up questions for a step by step recipe). After dinner, help your child with a math problem by taking a photo, circling the problem set, and having it share hints with both of you.

We’re rolling out voice and images in ChatGPT to Plus and Enterprise users over the next two weeks. Voice is coming on iOS and Android (opt-in in your settings) and images will be available on all platforms.





OpenAI Seeks New Valuation of Up to $90 Billion in Sale of Existing Shares — from wsj.com (behind paywall)
Potential sale would value startup at roughly triple where it was set earlier this year


The World’s First AI Cinema Experience Starring YOU Is Open In NZ And Buzzy Doesn’t Cover It — from theedge.co.nz by Seth Gupwell
Allow me to manage your expectations.

Because it’s the first-ever on Earth, it’s hard to label what kind of entertainment Hypercinema is. While it’s marketed as a “live AI experience” that blends “theatre, film and digital technology”, Dr. Gregory made it clear that it’s not here to make movies and TV extinct.

Your face and personality are how HyperCinema sets itself apart from the art forms of old. You get 15 photos of your face taken from different angles, then answer a questionnaire – mine started by asking what my fave vegetable was and ended by demanding to know what I thought the biggest threat to humanity was. Deep stuff, but the questions are always changing, cos that’s how AI rolls.

All of this information is stored on your cube – a green, glowing accessory that you carry around for the whole experience and insert into different sockets to transfer your info onto whatever screen is in front of you. Upon inserting your cube, the “live AI experience” starts.

The AI has taken your photos and superimposed your face on a variety of made-up characters in different situations.


Announcing Microsoft Copilot, your everyday AI companion — from blogs.microsoft.com by Yusuf Mehdi

We are entering a new era of AI, one that is fundamentally changing how we relate to and benefit from technology. With the convergence of chat interfaces and large language models you can now ask for what you want in natural language and the technology is smart enough to answer, create it or take action. At Microsoft, we think about this as having a copilot to help navigate any task. We have been building AI-powered copilots into our most used and loved products – making coding more efficient with GitHub, transforming productivity at work with Microsoft 365, redefining search with Bing and Edge and delivering contextual value that works across your apps and PC with Windows.

Today we take the next step to unify these capabilities into a single experience we call Microsoft Copilot, your everyday AI companion. Copilot will uniquely incorporate the context and intelligence of the web, your work data and what you are doing in the moment on your PC to provide better assistance – with your privacy and security at the forefront.


DALL·E 3 understands significantly more nuance and detail than our previous systems, allowing you to easily translate your ideas into exceptionally accurate images.
DALL·E 3 is now in research preview, and will be available to ChatGPT Plus and Enterprise customers in October, via the API and in Labs later this fall.


 

The next wave of AI will be interactive — from joinsuperhuman.ai by Zain Kahn
ALSO: AI startups raise over $500 million

Google DeepMind cofounder Mustafa Suleyman thinks that generative AI is a passing phase, and that interactive AI is the next big thing in AI. Suleyman called the transformation “a profound moment” in the history of technology.

Suleyman divided AI’s evolution into 3 waves:

  1. Classification: Training computers to classify various types of data like images and text.
  2. Generative: The current wave, which takes input data to generate new data. ChatGPT is the best example of this.
  3. Interactive: The next wave, where an AI will be capable of communicating and operating autonomously.

“Think of it as autonomous software that can talk to other apps to get things done.”

From DSC:
Though I find this a generally positive thing, the above sentence makes me exclaim, “No, nothing could possibly go wrong there.”


 
© 2025 | Daniel Christian