From DSC: Last Thursday, I presented at the Educational Technology Organization of Michigan’s Spring 2024 Retreat. I wanted to pass along my slides to you all, in case they are helpful to you.
Guided by a vision – often captured as a Portrait of a Graduate – co-constructed with local leaders, community members, students, and families, state policymakers can develop policies that equitably and effectively support students and educators in transforming learning experiences.
The Aurora Institute highlights the importance of collaborative efforts in creating education systems that truly meet the diverse needs of every student.
The Aurora Institute has spent years working with states looking to advance competency-based systems, and has identified a set of key state policy levers that policymakers can put into action to build more personalized and competency-based systems. These shifts should be guided by a vision – co-constructed with local leaders, community members, students, and families – for what students need to know and be able to do upon graduating.
There has been a move away from the traditional “Bachelor’s or Bust” mentality towards recognizing the value of diverse career pathways that may not necessarily require a four-year degree.
States, school districts, and private organizations have played a crucial role in implementing and scaling up career pathways programs.
While much has been written on this topic (see resources below), this post, written in the context of our OECD study of five Anglophone countries, provides a backdrop on what has happened at the federal level in the U.S. over the last several decades to help catalyze this shift toward career pathways, and offers a snapshot of how this work is evolving in two very different states—Delaware and Texas.
17 so that Christ may dwell in your hearts through faith. And I pray that you, being rooted and established in love, 18 may have power, together with all the Lord’s holy people, to grasp how wide and long and high and deep is the love of Christ, 19 and to know this love that surpasses knowledge—that you may be filled to the measure of all the fullness of God.
Yours, Lord, is the greatness and the power and the glory and the majesty and the splendor, for everything in heaven and earth is yours. Yours, Lord, is the kingdom; you are exalted as head over all.
7 “Two things I ask of you, Lord;
do not refuse me before I die: 8 Keep falsehood and lies far from me;
give me neither poverty nor riches,
but give me only my daily bread. 9 Otherwise, I may have too much and disown you
and say, ‘Who is the Lord?’
Or I may become poor and steal,
and so dishonor the name of my God.
Janelle’s story is all too familiar throughout the U.S. — stuck in a low-paying job, struggling to make ends meet after being failed by college. Roughly 40 million Americans have left college without completing a degree — historically seen as a golden ticket to the middle class.
Yet even with a degree, many fall short of economic prosperity.
From DSC: My wife does a lot of work with foster families and CASA kids, and she recommends these resources for helping children who have experienced adversity, early harm, toxic stress, and/or trauma.
TBRI® is an attachment-based, trauma-informed intervention that is designed to meet the complex needs of vulnerable children. TBRI® uses Empowering Principles to address physical needs, Connecting Principles for attachment needs, and Correcting Principles to disarm fear-based behaviors. While the intervention is based on years of attachment, sensory processing, and neuroscience research, the heartbeat of TBRI® is connection.
The adoption of a child is always a joyous moment in the life of a family. Some adoptions, though, present unique challenges. Welcoming these children into your family – and addressing their special needs – requires care, consideration, and compassion. Written by two research psychologists specializing in adoption and attachment, The Connected Child will help you:
Build bonds of affection and trust with your adopted child
Effectively deal with any learning or behavioral disorders
Discipline your child with love without making him or her feel threatened
Generative AI is fundamentally changing how we’re approaching learning and education, enabling powerful new ways to support educators and learners. It’s taking curiosity and understanding to the next level — and we’re just at the beginning of how it can help us reimagine learning.
Today we’re introducing LearnLM: our new family of models fine-tuned for learning, based on Gemini.
On YouTube, a conversational AI tool makes it possible to figuratively “raise your hand” while watching academic videos to ask clarifying questions, get helpful explanations or take a quiz on what you’ve been learning. This even works with longer educational videos like lectures or seminars thanks to the Gemini model’s long-context capabilities. These features are already rolling out to select Android users in the U.S.
… Learn About is a new Labs experience that explores how information can turn into understanding by bringing together high-quality content, learning science and chat experiences. Ask a question and it helps guide you through any topic at your own pace — through pictures, videos, webpages and activities — and you can upload files or notes and ask clarifying questions along the way.
The Gemini era
A year ago on the I/O stage we first shared our plans for Gemini: a frontier model built to be natively multimodal from the beginning, one that could reason across text, images, video, code, and more. It marks a big step in turning any input into any output — an “I/O” for a new generation.
Google is integrating AI across its entire ecosystem: Search, Workspace, Android, etc. In true Google fashion, many features are “coming later this year”. If they ship and perform like the demos, Google will gain a serious upper hand over OpenAI and Microsoft.
All of the AI features across Google products will be powered by Gemini 1.5 Pro. It’s Google’s best model and one of the top models on the market. A new Gemini 1.5 Flash model has also launched; it is faster and much cheaper.
Google has ambitious projects in the pipeline. These include a real-time voice assistant called Astra, a long-form video generator called Veo, plans for end-to-end agents, virtual AI teammates, and more.
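For developers, the Pro-versus-Flash tradeoff mentioned above shows up directly in Google’s public Gemini API: the same call works with either model, with Flash trading some capability for speed and cost. Below is a minimal sketch (mine, not from the announcements) using the google-generativeai Python SDK; the placeholder API key and the exact model identifier strings are assumptions that may vary by release.

```python
# Minimal sketch: comparing Gemini 1.5 Pro and Flash through the public API.
# The API key is a placeholder; model ID strings may differ by SDK release.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key

# Pro: Google's most capable model; Flash: faster and much cheaper per call.
for model_name in ("gemini-1.5-pro", "gemini-1.5-flash"):
    model = genai.GenerativeModel(model_name)
    response = model.generate_content(
        "Summarize the key AI announcements from Google I/O 2024."
    )
    print(f"{model_name}: {response.text[:200]}")
```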
Google just casually announced Veo, a new rival to OpenAI’s Sora.
It can generate insanely good 1080p video clips of up to 60 seconds.
Today at Google I/O we’re announcing new, powerful ways to get more done in your personal and professional life with Gemini for Google Workspace. Gemini in the side panel of your favorite Workspace apps is rolling out more broadly and will use the 1.5 Pro model for answering a wider array of questions and providing more insightful responses. We’re also bringing more Gemini capabilities to your Gmail app on mobile, helping you accomplish more on the go. Lastly, we’re showcasing how Gemini will become the connective tissue across multiple applications with AI-powered workflows. And all of this comes fresh on the heels of the innovations and enhancements we announced last month at Google Cloud Next.
Google is improving its AI-powered chatbot Gemini so that it can better understand the world around it — and the people conversing with it.
At the Google I/O 2024 developer conference on Tuesday, the company previewed a new experience in Gemini called Gemini Live, which lets users have “in-depth” voice chats with Gemini on their smartphones. Users can interrupt Gemini while the chatbot’s speaking to ask clarifying questions, and it’ll adapt to their speech patterns in real time. And Gemini can see and respond to users’ surroundings, either via photos or video captured by their smartphones’ cameras.
Generative AI in Search: Let Google do the searching for you — from blog.google
With expanded AI Overviews, more planning and research capabilities, and AI-organized search results, our custom Gemini model can take the legwork out of searching.
I quickly decided to take a different tack with my students, and instead asked each of them, “What problem in the world do you think you want to solve? If you could go to a school of hunger, poverty, Alzheimer’s disease, mental health … what kind of school would you want to attend?” This is when they started nodding vigorously.
What each of them identified was a grand challenge, or what Stanford d.school Executive Director Sarah Stein Greenberg has called “purpose learning.” In a great talk for Wired, Greenberg asks,
What if students declared missions not majors? Or even better, what if they applied to the School of Hunger or the School of Renewable Energy? These are real problems that society doesn’t have answers to yet. Wouldn’t that fuel their studies with some degree of urgency and meaning and real purpose that they don’t yet have today?
The Ethical and Emotional Implications of AI Voice Preservation
Legal Considerations and Voice Rights
From a legal perspective, the burgeoning use of AI in voice cloning also introduces a complex web of rights and permissions. The recent passage of Tennessee’s ELVIS Act, which allows legal action against unauthorized recreations of an artist’s voice, underscores the necessity for robust legal frameworks to manage these technologies. For non-celebrities, the idea of a personal voice bank brings about its own set of legal challenges. How do we regulate the use of an individual’s voice after their death? Who holds the rights to control and consent to the usage of these digital artifacts?
To safeguard against misuse, any system of voice banking would need stringent controls over who can access and utilize these voices. The creation of such banks would necessitate clear guidelines and perhaps even contractual agreements stipulating the terms under which these voices may be used posthumously.
Should we all consider creating voice banks to preserve our voices, allowing future generations the chance to interact with us even after we are gone?
After decades of neglect, access to justice has roared onto legal and political radars, fueled by a growing realization—first among lawyers but increasingly among the wider American public—that the civil justice system is in crisis. In roughly three-quarters of the 20 million civil cases filed in state courts each year, one side lacks a lawyer—a dynamic that poses a direct challenge to the system’s adversarial core.1 And these are the cases and litigants we can see. Beneath them lies a larger but hidden crisis. It consists of tens of millions more Americans who face genuine legal problems but take no formal legal action to protect their interests.2 As this double-layered calamity has come into focus, state supreme courts, bar associations, and even the crusty American Law Institute are taking note.3
These institutional plaintiffs have built business models around high-volume litigation practices, in large part by leveraging “legal tech,” from e-filing to AI. Yet the legal tech that serves individual Americans on the other side of the “v” remains clunky and limited. The result is a lopsided litigation landscape that’s wreaking havoc on litigants and courts alike.
For the director of music. For pipes. A psalm of David.
1 Listen to my words, Lord, consider my lament. 2 Hear my cry for help, my King and my God, for to you I pray. 3 In the morning, Lord, you hear my voice; in the morning I lay my requests before you and wait expectantly.
The theme of the day was Human Connection vs. Innovative Technology. I see this a lot at conferences: setting up human connection (social) against the machine (AI). I think this is ALL wrong. It is, and has always been, a dialectic: human connection (social) PLUS the machine. Everyone has a smartphone, and most use it for work, comms, and social media. The binary between human and tech has long disappeared.
About one university or college per week so far this year, on average, has announced that it will close or merge. That’s up from a little more than two a month last year, according to the State Higher Education Executive Officers Association, or SHEEO.
…
Most students at colleges that close give up on their educations altogether. Fewer than half transfer to other institutions, a SHEEO study found. Of those, fewer than half stay long enough to get degrees. Many lose credits when they move from one school to another and have to spend longer in college, often taking out more loans to pay for it.
…
Colleges are almost certain to keep closing. As many as one in 10 four-year colleges and universities are in financial peril, the consulting firm EY Parthenon estimates.
Students who transfer lose an average of 43 percent of the credits they’ve already earned and paid for, the Government Accountability Office found in the most recent comprehensive study of this problem.
Last week a behemoth of a paper was released by AI researchers in academia and industry on the ethics of advanced AI assistants.
It’s one of the most comprehensive and thoughtful papers on developing transformative AI capabilities in socially responsible ways that I’ve read in a while. And it’s essential reading for anyone developing and deploying AI-based systems that act as assistants or agents — including many of the AI apps and platforms that are currently being explored in business, government, and education.
The paper — The Ethics of Advanced AI Assistants — is written by 57 co-authors representing researchers at Google DeepMind, Google Research, Jigsaw, and a number of prominent universities that include Edinburgh University, the University of Oxford, and Delft University of Technology. Coming in at 274 pages, this is a massive piece of work. And as the authors persuasively argue, it’s a critically important one at this point in AI development.
Key questions for the ethical and societal analysis of advanced AI assistants include:
What is an advanced AI assistant? How does an AI assistant differ from other kinds of AI technology?
What capabilities would an advanced AI assistant have? How capable could these assistants be?
What is a good AI assistant? Are there certain values that we want advanced AI assistants to evidence across all contexts?
Are there limits on what AI assistants should be allowed to do? If so, how are these limits determined?
What should an AI assistant be aligned with? With user instructions, preferences, interests, values, well-being or something else?
What issues need to be addressed for AI assistants to be safe? What does safety mean for this class of technologies?
What new forms of persuasion might advanced AI assistants be capable of? How can we ensure that users remain appropriately in control of the technology?
How can people – especially vulnerable users – be protected from AI manipulation and unwanted disclosure of personal information?
Is anthropomorphism for AI assistants morally problematic? If so, might it still be permissible under certain conditions?
There has been a surge in new microschools in the U.S. since the start of the COVID-19 pandemic. The National Microschooling Network estimates there are about 95,000 microschools in the country. The median microschool serves 16 students.
There is no regulatory body solely responsible for tracking microschools, so it is difficult to determine just how much their popularity has grown.
Advocates for microschools say they offer some students — especially those who are gifted or have learning disabilities — a greater chance to thrive academically and socially than traditional schools do.
At Sphinx Academy, a microschool based in Lexington, Ky., almost all 24 students are “twice exceptional,” meaning they are gifted in one academic area but have one or more learning disabilities, such as ADHD or dyslexia, according to the school’s director, Jennifer Lincoln.
The stakes are high: Students have a lot of academic ground to make up following the pandemic. Yet they’re not fully engaged in the classroom, teachers report in a new national survey.