The Misunderstanding About Education That Cost Mark Zuckerberg $100 Million — from danmeyer.substack.com by Dan Meyer
Personalized learning can feel isolating. Whole class learning can feel personal. This is hard to understand.

Excerpt (emphasis DSC):

Last week, Matt Barnum reported in Chalkbeat that the Chan Zuckerberg Initiative is laying off dozens of staff members and pivoting away from the personalized learning platform they have funded since 2015 with somewhere near $100M.

I have tried to illustrate as often as my subscribers will tolerate that students don’t particularly enjoy learning alone with laptops within social spaces like classrooms. That learning fails to answer their questions about their social identity. It contributes to their feelings of alienation and disbelonging. I find this case easy to make but hard to prove. Maybe we just haven’t done personalized learning right? Maybe Summit just needed to include generative AI chatbots in their platform?

What is far easier to prove, or rather to disprove, is the idea that “whole class instruction must feel impersonal to students,” that “whole class instruction must necessarily fail to meet the needs of individual students.”

From DSC:
I appreciate Dan’s comments here (as highlighted above) as they are helpful in my thoughts regarding the Learning from the Living [Class] Room vision. They seem to be echoed here by Jeppe Klitgaard Stricker when he says:

Personalized learning paths can be great, but they also entail a potential abolishment or unintended dissolution of learning communities and belonging.

Perhaps this powerful, global, Artificial Intelligence (AI)-backed, next-generation, lifelong learning platform of the future will be more focused on postsecondary students and experiences — but not so much for the K12 learning ecosystem.

But the school systems I’ve seen here in Michigan (USA) are built to address only the majority of the class. These one-size-fits-all systems don’t work for the many students who need extra help and/or who are gifted. The trains move fast. Good luck if you can’t keep up with the pace.

But if K-12’ers are involved in a future learning platform, the platform needs to address what Dan’s saying. It must address students’ questions about their social identity and not contribute to their feelings of alienation and disbelonging. It needs to support communities of practice and learning communities.

 

Creating an ‘ecosystem’ to close the Black talent gap in technology — from mckinsey.com (emphasis below from DSC)

Chris Perkins, associate partner, McKinsey: Promoting diversity in tech is more nuanced than driving traditional diversity initiatives. This is primarily because of the specialized hard and soft skills required to enter tech-oriented professions and succeed throughout their careers. Our research shows us that various actors, such as nonprofits, for-profits, government agencies, and educational institutions are approaching the problem in small pockets. Could we help catalyze an ecosystem with wraparound support across sectors?

To design this, we have to look at the full pipeline and its “leakage” points, from getting talent trained and in the door all the way up to the C-suite. These gaps are caused by lack of awareness and support in early childhood education through college, and lack of sponsorship and mentorship in early- and mid-career positions.

 

Thinking with Colleagues: AI in Education — from campustechnology.com by Mary Grush
A Q&A with Ellen Wagner

Wagner herself recently relied on the power of collegial conversations to probe the question: What’s on the minds of educators as they make ready for the growing influence of AI in higher education? CT asked her for some takeaways from the process.

We are in the very early days of seeing how AI is going to affect education. Some of us are going to need to stay focused on the basic research to test hypotheses. Others are going to dive into laboratory “sandboxes” to see if we can build some new applications and tools for ourselves. Still others will continue to scan newsletters like ProductHunt every day to see what kinds of things people are working on. It’s going to be hard to keep up, to filter out the noise on our own. That’s one reason why thinking with colleagues is so very important.

Mary and Ellen linked to “What Is Top of Mind for Higher Education Leaders about AI?” — from northcoasteduvisory.com. Below are some excerpts from those notes:

We are interested in how K-12 education will change in terms of foundational learning. With in-class, active learning designs, will younger students do a lot more intensive building of foundational writing and critical thinking skills before they get to college?

  1. The Human in the Loop: AI is built using math: think of applied statistics on steroids. Humans will be needed more than ever to manage, review and evaluate the validity and reliability of results. Curation will be essential.
  2. We will need to generate ideas about how to address AI factors such as privacy, equity, bias, copyright, intellectual property, accessibility, and scalability.
  3. Have other institutions experimented with AI detection and/or held off on emerging tools related to this? We have just recently adjusted guidance and paused some tools related to this, given the massive inaccuracies in detection (and related downstream issues in faculty-elevated conduct cases).

Even though we learn repeatedly that innovation has a lot to do with effective project management and a solid message that helps people understand what they can do to implement change, people really need innovation to be more exciting and visionary than that.  This is the place where we all need to help each other stay the course of change. 


Along these lines, also see:


What people ask me most. Also, some answers. — from oneusefulthing.org by Ethan Mollick
A FAQ of sorts

I have been talking to a lot of people about Generative AI, from teachers to business executives to artists to people actually building LLMs. In these conversations, a few key questions and themes keep coming up over and over again. Many of those questions are more informed by viral news articles about AI than about the real thing, so I thought I would try to answer a few of the most common, to the best of my ability.

I can’t blame people for asking because, for whatever reason, the companies actually building and releasing Large Language Models often seem allergic to providing any sort of documentation or tutorial besides technical notes. I was given much better documentation for the generic garden hose I bought on Amazon than for the immensely powerful AI tools being released by the world’s largest companies. So, it is no surprise that rumor has been the way that people learn about AI capabilities.

Currently, there are only really three AIs to consider: (1) OpenAI’s GPT-4 (which you can get access to with a Plus subscription or via Microsoft Bing in creative mode, for free), (2) Google’s Bard (free), or (3) Anthropic’s Claude 2 (free, but paid mode gets you faster access). As of today, GPT-4 is the clear leader, Claude 2 is second best (but can handle longer documents), and Google trails, but that will likely change very soon when Google updates its model, which is rumored to be happening in the near future.

 

Mark Zuckerberg: First Interview in the Metaverse | Lex Fridman Podcast #398


Photo-realistic avatars show future of Metaverse communication — from inavateonthenet.net

Mark Zuckerberg, CEO, Meta, took part in the first-ever Metaverse interview using photo-realistic virtual avatars, demonstrating the Metaverse’s capability for virtual communication.

Zuckerberg appeared on the Lex Fridman podcast, using scans of both Fridman and Zuckerberg to create realistic avatars instead of a live video feed. A computer model of each avatar’s face and body is put into a codec, and the headset sends an encoded version of the avatar.

The interview explored the future of AI in the metaverse, as well as the Quest 3 headset and the future of humanity.


 



AI Meets Med School — from insidehighered.com by Lauren Coffey
Adding to academia’s AI embrace, two institutions in the University of Texas system are jointly offering a medical degree paired with a master’s in artificial intelligence.

Doctor AI

The University of Texas at San Antonio has launched a dual-degree program combining medical school with a master’s in artificial intelligence.

Several universities across the nation have begun integrating AI into medical practice. Medical schools at the University of Florida, the University of Illinois, the University of Alabama at Birmingham and Stanford and Harvard Universities all offer variations of a certificate in AI in medicine that is largely geared toward existing professionals.

“I think schools are looking at, ‘How do we integrate and teach the uses of AI?’” Dr. Whelan said. “And in general, when there is an innovation, you want to integrate it into the curriculum at the right pace.”

Speaking of emerging technologies and med school, also see:


Though not necessarily edu-related, this was interesting to me and hopefully will be to some profs and/or students out there:


How to stop AI deepfakes from sinking society — and science — from nature.com by Nicola Jones; via The Neuron
Deceptive videos and images created using generative AI could sway elections, crash stock markets and ruin reputations. Researchers are developing methods to limit their harm.





Exploring the Impact of AI in Education with PowerSchool’s CEO & Chief Product Officer — from michaelbhorn.substack.com by Michael B. Horn

With just under 10 acquisitions in the last 5 years, PowerSchool has been active in transforming itself from a student information systems company to an integrated education company that works across the day and lifecycle of K–12 students and educators. What’s more, the company turned heads in June with its announcement that it was partnering with Microsoft to integrate AI into its PowerSchool Performance Matters and PowerSchool LearningNav products to empower educators in delivering transformative personalized-learning pathways for students.


AI Learning Design Workshop: The Trickiness of AI Bootcamps and the Digital Divide — from eliterate.us by Michael Feldstein

As readers of this series know, I’ve developed a six-session design/build workshop series for learning design teams to create an AI Learning Design Assistant (ALDA). In my last post in this series, I provided an elaborate ChatGPT prompt that can be used as a rapid prototype that everyone can try out and experiment with. In this post, I’d like to focus on how to address the challenges of AI literacy effectively and equitably.


Global AI Legislation Tracker — from iapp.org; via Tom Barrett

Countries worldwide are designing and implementing AI governance legislation commensurate to the velocity and variety of proliferating AI-powered technologies. Legislative efforts include the development of comprehensive legislation, focused legislation for specific use cases, and voluntary guidelines and standards.

This tracker identifies legislative policy and related developments in a subset of jurisdictions. It is not globally comprehensive, nor does it include all AI initiatives within each jurisdiction, given the rapid and widespread policymaking in this space. This tracker offers brief commentary on the wider AI context in specific jurisdictions, and lists index rankings provided by Tortoise Media, the first index to benchmark nations on their levels of investment, innovation and implementation of AI.


Diving Deep into AI: Navigating the L&D Landscape — from learningguild.com by Markus Bernhardt

The prospect of AI-powered, tailored, on-demand learning and performance support is exhilarating: It starts with traditional digital learning made into fully adaptive learning experiences, which would adjust to strengths and weaknesses for each individual learner. The possibilities extend all the way through to simulations and augmented reality, an environment to put into practice knowledge and skills, whether as individuals or working in a team simulation. The possibilities are immense.



Learning Lab | ChatGPT in Higher Education: Exploring Use Cases and Designing Prompts — from events.educause.edu; via Robert Gibson on LinkedIn

Part 1: October 16 | 3:00–4:30 p.m. ET
Part 2: October 19 | 3:00–4:30 p.m. ET
Part 3: October 26 | 3:00–4:30 p.m. ET
Part 4: October 30 | 3:00–4:30 p.m. ET


Mapping AI’s Role in Education: Pioneering the Path to the Future — from marketscale.com by Michael B. Horn, Jacob Klein, and Laurence Holt

Welcome to The Future of Education with Michael B. Horn. In this insightful episode, Michael gains perspective on mapping AI’s role in education from Jacob Klein, a Product Consultant at Oko Labs, and Laurence Holt, an Entrepreneur In Residence at the XQ Institute. Together, they peer into the burgeoning world of AI in education, analyzing its potential, risks, and roadmap for integrating it seamlessly into learning environments.


Ten Wild Ways People Are Using ChatGPT’s New Vision Feature — from newsweek.com by Meghan Roos; via Superhuman

Below are 10 creative ways ChatGPT users are making use of this new vision feature.


 

Next, The Future of Work is… Intersections — from linkedin.com by Gary A. Bolles; via Roberto Ferraro

So much of the way that we think about education and work is organized into silos. Sure, that’s one way to ensure a depth of knowledge in a field and to encourage learners to develop mastery. But it also leads to domains with strict boundaries. Colleges are typically organized into school sub-domains, managed like fiefdoms, with strict rules for professors who can teach in different schools.

Yet it’s at the intersections of seemingly-disparate domains where breakthrough innovation can occur.

Maybe intersections bring a greater chance of future work opportunity, because that young person can increase their focus in one arena or another as they discover new options for work — and because this is what meaningful work in the future is going to look like.

From DSC:
This posting strikes me as an endorsement for interdisciplinary degrees. I agree with much of this. It’s just hard to find the right combination of disciplines. But I suppose that depends upon the individual student and what he/she is passionate or curious about.


Speaking of the future of work, also see:

Centaurs and Cyborgs on the Jagged Frontier — from oneusefulthing.org by Ethan Mollick
I think we have an answer on whether AIs will reshape work…

A lot of people have been asking if AI is really a big deal for the future of work. We have a new paper that strongly suggests the answer is YES.

Consultants using AI finished 12.2% more tasks on average, completed tasks 25.1% more quickly, and produced 40% higher quality results than those without. Those are some very big impacts. Now, let’s add in the nuance.

 

From DSC:
Yesterday, I posted the item about Google’s NotebookLM research tool. Excerpt:

What if you could have a conversation with your notes? That question has consumed a corner of the internet recently, as companies like Dropbox, Box, Notion, and others have built generative AI tools that let you interact with and create new things from the data you already have in their systems.

Google’s version of this is called NotebookLM. It’s an AI-powered research tool that is meant to help you organize and interact with your own notes.

That got me to thinking…

What if the presenter/teacher/professor/trainer/preacher provided a set of notes for the AI to compare to the readers’ notes? 

That way, the AI could see the discrepancies between what the presenter wanted their audience to learn/hear and what was actually being learned/heard. In a sort of digital Socratic Method, the AI could then generate some leading questions to get the audience member to check their thinking/understanding of the topic.

The end result would be that the main points were properly communicated/learned/received.
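One way to picture that comparison step is a toy sketch in Python. Everything here is an illustrative assumption of mine — the function names, the simple word-overlap heuristic, the question template — not the API of NotebookLM or any real product:

```python
# Toy sketch: compare a presenter's key points against a learner's notes
# using simple word overlap, then surface the points the learner seems to
# have missed as leading (Socratic-style) questions.

def tokenize(text):
    """Crude content-word extraction: lowercase, strip punctuation, drop short words."""
    return {w.strip(".,!?;:").lower() for w in text.split() if len(w) > 3}

def coverage(point, learner_notes):
    """Fraction of a key point's content words that appear in the learner's notes."""
    point_words = tokenize(point)
    note_words = tokenize(learner_notes)
    if not point_words:
        return 1.0
    return len(point_words & note_words) / len(point_words)

def socratic_prompts(presenter_points, learner_notes, threshold=0.5):
    """Return a leading question for each key point the notes barely cover."""
    return [
        f"What do you recall about: '{point}'?"
        for point in presenter_points
        if coverage(point, learner_notes) < threshold
    ]

presenter_points = [
    "Working memory has a limited capacity",
    "Spaced retrieval practice strengthens long-term recall",
]
learner_notes = "Memory capacity is limited, so avoid overload."

for q in socratic_prompts(presenter_points, learner_notes):
    print(q)
```

A real system would use semantic similarity (embeddings) rather than word overlap, and an LLM to phrase the questions, but the shape of the idea — measure the gap between intended and received points, then question into the gap — is the same.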

 

Google’s AI-powered note-taking app is the messy beginning of something great — from theverge.com by David Pierce; via AI Insider
NotebookLM is a neat research tool with some big ideas. It’s still rough and new, but it feels like Google is onto something.

Excerpts (emphasis DSC):

What if you could have a conversation with your notes? That question has consumed a corner of the internet recently, as companies like Dropbox, Box, Notion, and others have built generative AI tools that let you interact with and create new things from the data you already have in their systems.

Google’s version of this is called NotebookLM. It’s an AI-powered research tool that is meant to help you organize and interact with your own notes. 

Right now, it’s really just a prototype, but a small team inside the company has been trying to figure out what an AI notebook might look like.

 

From DSC: If this is true, how will we meet this type of demand?!?

RESKILLING NEEDED FOR 40% OF WORKFORCE BECAUSE OF AI, REPORT FROM IBM SAYS — from staffingindustry.com; via GSV

Generative AI will require skills upgrades for workers, according to a report from IBM based on a survey of executives from around the world. One finding: Business leaders say 40% of their workforces will need to reskill as AI and automation are implemented over the next three years. That could translate to 1.4 billion people in the global workforce who require upskilling, according to the company.

 

Will one of our future learning ecosystems look like a Discord server type of service? [Christian]

 

Teaching Assistants that Actually Assist Instructors with Teaching — from opencontent.org by David Wiley

“…what if generative AI could provide every instructor with a genuine teaching assistant – a teaching assistant that actually assisted instructors with their teaching?”

Assignment Makeovers in the AI Age: Reading Response Edition — from derekbruff.org by Derek Bruff

For my cryptography course, Mollick’s first option would probably mean throwing out all my existing reading questions. My intent with these reading questions was noble, that is, to guide students to the big questions and debates in the field, but those are exactly the kinds of questions for which AI can write decent answers. Maybe the AI tools would fare worse in a more advanced course with very specialized readings, but in my intro to cryptography course, they can handle my existing reading questions with ease.

What about option two? I think one version of this would be to do away with the reading response assignment altogether.

4 Steps to Help You Plan for ChatGPT in Your Classroom — from chronicle.com by Flower Darby
Why you should understand how to teach with AI tools — even if you have no plans to actually use them.


Some items re: AI in other areas:

15 Generative AI Tools A billion+ people will be collectively using very soon. I use most of them every day — from stefanbauschard.substack.com by Stefan Bauschard
ChatGPT, Bing, Office Suite, Google Docs, Claude, Perplexity.ai, Plug-Ins, MidJourney, Pi, Runway, Bard, Bing, Synthesia, D-ID

The Future of AI in Video: a look forward — from provideocoalition.com by Iain Anderson

Actors say Hollywood studios want their AI replicas — for free, forever — from theverge.com by Andrew Webster; resource from Tom Barrett

Along these lines of Hollywood and AI, see this Tweet:

Claude 2: ChatGPT rival launches chatbot that can summarise a novel — from theguardian.com by Dan Milmo; resource from Tom Barrett
Anthropic releases chatbot able to process large blocks of text and make judgments on what it is producing

Generative AI imagines new protein structures — from news.mit.edu by Rachel Gordon; resource from Sunday Signal
MIT researchers develop “FrameDiff,” a computational tool that uses generative AI to craft new protein structures, with the aim of accelerating drug development and improving gene therapy.

Google’s medical AI chatbot is already being tested in hospitals — from theverge.com by Wes Davis; resource via GSV

Ready to Sing Elvis Karaoke … as Elvis? The Weird Rise of AI Music — from rollingstone.com by Brian Hiatt; resource from Misha da Vinci
From voice-cloning wars to looming copyright disputes to a potential flood of nonhuman music on streaming, AI is already a musical battleground

 

Sources of Cognitive Load — from learningscientists.org

Excerpt:

Cognitive Load Theory is an influential theory from educational psychology that describes how various factors affect our ability to use our working memory resources. We’ve done a digest about cognitive load theory here and talked about it here and here, but haven’t provided an overview of the theory, so I want to do that here.

Cognitive load theory provides a useful and dynamic model of how many different factors affect working memory and learning. Hopefully this post provides a useful overview of some of the main components of cognitive load!


From DSC:
Along these lines, a while back I put together a video regarding cognitive load. It addresses at least two main questions:

  1. What is cognitive load?
  2. Why should I care about it?

 

What is cognitive load? And why should I care about it?


Transcript here.

 

How do I put it into practice?

  • Simplify the explanations of what you’re presenting as much as possible and break down complex tasks into smaller parts
  • Don’t place a large amount of text on a slide and then talk about it at the same time — doing so requires much more processing than most people can deal with.
  • Consider creating two versions of your PowerPoint files:
    • A text-light version that can be used for presenting that content to students
    • A text-heavy version — which can be posted to your LMS for the learners to go through at their own pace — and without trying to process so much information (voice and text, for example) at one time.
  • Design-wise:
    • Don’t use decorative graphics — everything on a slide should be there for a reason
    • Don’t use too many fonts or colors — this can be distracting
    • Don’t use background music when you are trying to explain something
 

Recording Arts as Reengagement, Social Justice and Pathway — from gettingsmart.com

Key Points

  • After a successful career as a recording artist, David “TC” Ellis created Studio 4 in St. Paul to spot budding music stars.
  • It became a hangout spot for creative young people, most of whom had “dropped out of school due to boredom and a sense that school wasn’t relevant to their lives and dreams.”
  • Ellis and colleagues then opened the High School for Recording Arts in 1998.

Young people learning how to perform and record music at the High School for Recording Arts

 

Coursera’s Global Skills Report for 2023 — from coursera.org
Benchmark talent and transform your workforce with skill development and career readiness insights drawn from 124M+ learners.

Excerpt:

Uncover global skill trends
See how millions of registered learners in 100 countries are strengthening critical business, technology, and data science skills.

 
© 2024 | Daniel Christian