From DSC: In looking at MyNextChapter.ai, THIS TYPE OF FUNCTIONALITY — an AI-based chatbot talking with you about good fits for a future job — is the kind of thing that could work well in this type of vision/learning platform. The AI asks you relevant career-oriented questions, comes up with some potential job fits, and then gives you resources on how to gain those skills, who to talk with, organizations to join, next steps to get your foot in the door somewhere, etc.
The next-gen learning platform would provide links to online courses, blogs, people’s names on LinkedIn, and courses from L&D organizations, institutions of higher education, or other entities/places where those skills can be obtained (similar to the “Action Plan” below from MyNextChapter.ai).
ChatGPT can now do work for you using its own computer, handling complex tasks from start to finish.
You can now ask ChatGPT to handle requests like “look at my calendar and brief me on upcoming client meetings based on recent news,” “plan and buy ingredients to make Japanese breakfast for four,” and “analyze three competitors and create a slide deck.” ChatGPT will intelligently navigate websites, filter results, prompt you to log in securely when needed, run code, conduct analysis, and even deliver editable slideshows and spreadsheets that summarize its findings.
In Episode 5 of The Neuron Podcast, Corey Noles and Grant Harvey tackle the education crisis head-on. We explore the viral UCLA “CheatGPT” controversy, MIT’s concerning brain study, and innovative solutions like Alpha School’s 2-hour learning model. Plus, we break down OpenAI’s new $10M teacher training initiative and share practical tips for using AI to enhance learning rather than shortcut it. Whether you’re a student, teacher, or parent, you’ll leave with actionable insights on the future of education.
I also pondered what functions blogging has provided for me over the years.
Continuity – as an individual you persist across multiple organisations, roles and jobs. Although I stayed in one institution, I had many roles, and the blog wasn’t associated with one specific project. Now that I have left, it continues.
Holistic – you can blog about one topic, but over time I think some personality will creep in. You are not just one thing, you have a personal life, tastes, interests etc which will all feed into what you do. A blog allows this more rounded representation.
Experimentation – there is relatively low cost and risk for much of it (this may not be the case for many people online, we need to acknowledge), so you can try things, and if they don’t work, so what? Also you can try formats that conventional outlets might not be appropriate for.
Development – the blog has been both an intentional and unintentional vehicle for working up ideas, documenting the process and getting feedback, which have led to more substantial outputs, such as books, project proposals and papers. Most importantly though it has been the means through which I have continually developed writing.
Connecting – particularly in those halcyon early days, it was a good way of finding others, working on ideas together, sharing something of yourself. A lot of my career related personal friendships have resulted from blogging.
Publicity – I became at one point (during the OU crisis of 2018) something of a public voice of the OU, and have often used the blog for projects such as GO-GN.
That’s not a bad return for a lil’ ol’ blog. I couldn’t say the same for academic journals.
AI is rewiring how we learn, and it’s a game-changer for L&D — from chieflearningofficer.com by Josh Bersin As AI becomes central to learner engagement, L&D leaders are being urged to fundamentally rethink corporate training, says global industry analyst Josh Bersin.
What are people really doing with ChatGPT? They’re learning. They’re asking questions, getting immediate answers, digging deeper, analyzing information and ultimately making themselves more productive. So, one could argue that simply by shifting to a “learn by inquiry” model, we may triple our value to the business.
From my experience, there are two main learning models in this industry. The first is “what you need to know”—linear or prescriptive things that every employee needs to understand about the company, its products and their role. This kind of content is well handled by existing L&D models.
The second, and far more important, is “what you’d like to know”—questions, curiosities and explorations about how the company works, what customers truly need and how we can each go further in our careers. Thanks to AI, this kind of learning is now explosive and transformative.
Imagine a sales rep who loses a deal. Naturally, they may ask, “What could I have done to be more successful?” A well-designed AI-powered learning system would take that question, give the employee an initial answer and chat with the individual to dig into the problem.
The system would then surface relevant sales training material and recommend videos, tips or case studies for help. And the employee, assuming they like the experience, would likely keep exploring until they feel they’ve learned what they need.
This “curiosity-based” learning is now possible, and its benefits extend far beyond traditional training.
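The flow described above — take an employee’s question, respond, then surface relevant training material — can be sketched in a few lines. This is a minimal illustration, not any vendor’s actual system; the catalog entries, tags, and keyword-overlap scoring rule are all hypothetical assumptions standing in for a real retrieval step.

```python
# Hypothetical sketch of "curiosity-based" resource surfacing: rank a small
# catalog of training materials by keyword overlap with an employee's question.
# A production system would use an LLM plus semantic search instead.

def tokenize(text):
    """Lowercase the text and strip basic punctuation from each word."""
    return {w.strip(".,?!").lower() for w in text.split()}

# Illustrative catalog; titles and tags are invented for this example.
CATALOG = [
    {"title": "Handling Objections in Late-Stage Deals",
     "tags": {"deal", "objections", "negotiation", "lost"}},
    {"title": "Discovery Questions That Win Deals",
     "tags": {"discovery", "questions", "deal", "successful"}},
    {"title": "Onboarding: Company Products 101",
     "tags": {"products", "onboarding", "basics"}},
]

def surface_resources(question, catalog=CATALOG, top_n=2):
    """Return up to top_n resource titles whose tags overlap the question."""
    q = tokenize(question)
    scored = [(len(q & item["tags"]), item["title"]) for item in catalog]
    scored.sort(key=lambda pair: (-pair[0], pair[1]))  # best overlap first
    return [title for score, title in scored[:top_n] if score > 0]
```

For the sales-rep example, a question about a lost deal would surface the objection-handling and discovery materials while skipping the unrelated onboarding course.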
Is graduate employability a core university priority? — from timeshighereducation.com by Katherine Emms and Andrea Laczik Universities, once judged primarily on the quality of their academic outcomes, are now also expected to prepare students for the workplace. Here’s how higher education is adapting to changing pressures
A clear, deliberate shift in priorities is under way. Embedding employability is central to an Edge Foundation report, carried out in collaboration with UCL’s Institute of Education, which looks at how English universities are responding. By placing employability at the centre of their strategies – not just for professional courses but across all disciplines – the two universities analysed in this research show how they aim to prepare students for the labour market overall. Although the employability strategy is initiated by the universities’ senior leaders, the research showed that realising it must be understood and executed by staff at all levels across departments. The complexity of offering insights into industry pathways and building relevant skills involves curriculum development, student-centred teaching, careers support, partnership work and employer engagement.
Every student can benefit from an entrepreneurial mindset — from timeshighereducation.com by Nicolas Klotz To develop the next generation of entrepreneurs, universities need to nurture the right mindset in students of all disciplines. Follow these tips to embed entrepreneurial education
This shift demands a radical rethink of how we approach entrepreneurial mindset in higher education. Not as a specialism for a niche group of business students but as a core competency that every student, in every discipline, can benefit from.
At my university, we’ve spent the past several years re-engineering how we embed entrepreneurship into daily student life and learning.
What we’ve learned could help other institutions, especially smaller or resource-constrained ones, adapt to this new landscape.
The first step is recognising that entrepreneurship is not only about launching start-ups for profit. It’s about nurturing a mindset that values initiative, problem-solving, resilience and creative risk-taking. Employers increasingly want these traits, whether the student is applying for a traditional job or proposing their own venture.
Build foundations for university-industry partnerships in 90 days — from timeshighereducation.com by Raul Villamarin Rodriguez and Hemachandran K Graduate employability could be transformed through systematic integration of industry partnerships. This practical guide offers a framework for change in Indian universities.
The most effective transformation strategy for Indian universities lies in systematic industry integration that moves beyond superficial partnerships and towards deep curriculum collaboration. Rather than hoping market alignment will occur naturally, institutions must reverse-engineer academic programmes from verified industry needs.
Our six-month implementation at Woxsen University demonstrates this framework’s practical effectiveness, achieving more than 130 industry partnerships, 100 per cent faculty participation in transformation training, and 75 per cent of students receiving industry-validated credentials with significantly improved employment outcomes.
How Do You Teach Computer Science in the A.I. Era? — from nytimes.com by Steve Lohr; with thanks to Ryan Craig for this resource Universities across the country are scrambling to understand the implications of generative A.I.’s transformation of technology.
The future of computer science education, Dr. Maher said, is likely to focus less on coding and more on computational thinking and A.I. literacy. Computational thinking involves breaking down problems into smaller tasks, developing step-by-step solutions and using data to reach evidence-based conclusions.
A.I. literacy is an understanding — at varying depths for students at different levels — of how A.I. works, how to use it responsibly and how it is affecting society. Nurturing informed skepticism, she said, should be a goal.
At Carnegie Mellon, as faculty members prepare for their gathering, Dr. Cortina said his own view was that the coursework should include instruction in the traditional basics of computing and A.I. principles, followed by plenty of hands-on experience designing software using the new tools.
“We think that’s where it’s going,” he said. “But do we need a more profound change in the curriculum?”
Yoodli is an AI tool designed to help users improve their public speaking skills. It analyzes your speech in real-time or after a recording and gives you feedback on things like:
Filler words (“um,” “like,” “you know”)
Pacing (Are you sprinting or sedating your audience?)
Word choice and sentence complexity
Eye contact and body language (with video)
And yes, even your “uhhh” to actual word ratio
Yoodli gives you a transcript and a confidence score, plus suggestions that range from helpful to brutally honest. It’s basically Simon Cowell with AI ethics and a smiley face interface.
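The filler-word and ratio analysis described above is easy to illustrate on a plain transcript. The sketch below is a rough approximation of the kind of metric a tool like Yoodli computes, not its actual implementation; the filler list and tokenization rule are simplified assumptions.

```python
import re

# Simplified, illustrative filler list; a real tool detects many more,
# and distinguishes filler "like" from legitimate uses.
FILLERS = {"um", "uh", "uhh", "like"}

def filler_stats(transcript):
    """Count single-word fillers plus the two-word filler "you know",
    and return a filler-to-word ratio for the transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    total = len(words)
    count = sum(1 for w in words if w in FILLERS)
    # Catch the two-word filler by scanning adjacent word pairs.
    count += sum(1 for a, b in zip(words, words[1:]) if (a, b) == ("you", "know"))
    return {"total_words": total, "fillers": count,
            "ratio": round(count / total, 2) if total else 0.0}
```

Run on a sentence like “Um, I think, like, you know, this works,” it flags three fillers out of eight words — the “uhhh to actual word ratio” in miniature.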
[What’s] going on with AI and education? — from theneuron.ai by Grant Harvey With students and teachers alike using AI, schools are facing an “assessment crisis” where the line between tool and cheating has blurred, forcing a shift away from a broken knowledge economy toward a new focus on building human judgment through strategic struggle.
What to do about it: The future belongs to the “judgment economy,” where knowledge is commoditized but taste, agency, and learning velocity become the new human moats. Use the “Struggle-First” principle: wrestle with problems for 20-30 minutes before turning to AI, then use AI as a sparring partner (not a ghostwriter) to deepen understanding. The goal isn’t to avoid AI, but to strategically choose when to embrace “desirable difficulties” that build genuine expertise versus when to leverage AI for efficiency.
… The Alpha-School Program in brief:
Students complete core academics in just 2 hours using AI tutors, freeing up 4+ hours for life skills, passion projects, and real-world experiences.
The school claims students learn at least 2x faster than their peers in traditional school.
The top 20% of students show 6.5x growth. Classes score in the top 1-2% nationally across the board.
Claims are based on NWEA’s Measures of Academic Progress (MAP) assessments… with data only available to the school. Hmm…
Austen Allred shared a story about the school, which put it on our radar.
In the latest installment of Gallup and the Walton Family Foundation’s research on education, K-12 teachers reveal how AI tools are transforming their workloads, instructional quality and classroom optimism. The report finds that 60% of teachers used an AI tool during the 2024–25 school year. Weekly AI users report reclaiming nearly six hours per week — equivalent to six weeks per year — which they reinvest in more personalized instruction, deeper student feedback and better parent communication.
Despite this emerging “AI dividend,” adoption is uneven: 40% of teachers aren’t using AI at all, and only 19% report their school has a formal AI policy. Teachers with access to policies and support save significantly more time.
Educators also say AI improves their work. Most report higher-quality lesson plans, assessments and student feedback. And teachers who regularly use AI are more optimistic about its benefits for student engagement and accessibility — mirroring themes from the Voices of Gen Z: How American Youth View and Use Artificial Intelligence report, which found students hesitant but curious about AI’s classroom role. As AI tools grow more embedded in education, both teachers and students will need the training and support to use them effectively.
What Is Amira Learning?
Amira Learning’s system is built upon research led by Jack Mostow, a professor at Carnegie Mellon who helped pioneer AI literacy education. Amira uses Claude AI to power its AI features, but these features are different from many other AI tools on the market. Instead of focusing on chat and generative response, Amira’s key feature is its advanced speech recognition and natural language processing capabilities, which allow the app to “hear” when a student is struggling and tailor suggestions to that student’s particular mistakes.
Though it’s not meant to replace a teacher, Amira provides real-time feedback and also helps teachers pinpoint where a student is struggling. For these reasons, Amira Learning is a favorite of education scientists and advocates for science of reading-based literacy instruction. The tool currently is used by more than 4 million students worldwide and across the U.S.
While GenAI can create documents or answer questions, agentic AI takes intelligence a step further by planning how to get multi-step work done, including tasks such as consuming information, applying logic, crafting arguments, and then completing them. This leaves legal teams more time for nuanced decision-making, creative strategy, and relationship-building with clients—work that machines can’t do.
What we’re witnessing is a profession in transition where specific tasks are being augmented or automated while new skills and roles emerge.
The data tells an interesting story: approximately 79% of law firms have integrated AI tools into their workflows, yet only a fraction have truly transformed their operations. Most implementations focus on pattern-recognition tasks such as document review, legal research, and contract analysis. These implementations aren’t replacing lawyers; they’re redirecting attention to higher-value work.
This technological shift doesn’t happen in isolation. It’s occurring amid client pressure for efficiency, competition from alternative providers, and the expectations of a new generation of lawyers who have never known a world without AI assistance.
Lawyers using the Harvey artificial intelligence platform will soon be able to tap into LexisNexis’ vast legal research capabilities.
Thanks to a new partnership announced Wednesday, Harvey users will be able to ask legal questions and receive fast, citation-backed answers powered by LexisNexis case law, statutes and Shepard’s Citations, streamlining everything from basic research to complex motions. According to a press release, generated responses to user queries will be grounded in LexisNexis’ proprietary knowledge graphs and citation tools—making them more trustworthy for use in court or client work.
10 Legal Tech Companies to Know — from builtin.com These companies are using AI, automation and analytics to transform how legal work gets done.
Harvey AI, a startup that provides automation for legal work, has raised $300 million in Series E funding at a $5 billion valuation, the company told Fortune. The round was co-led by Kleiner Perkins and Coatue, with participation from existing investors, including Conviction, Elad Gil, OpenAI Startup Fund, and Sequoia.
The billable time revolution — from jordanfurlong.substack.com by Jordan Furlong Gen AI will bring an end to the era when lawyers’ value hinged on performing billable work. Grab the coming opportunity to re-prioritize your daily activities and redefine your professional purpose.
Because of Generative AI, lawyers will perform fewer “billable” tasks in future; but why is that a bad thing? Why not devote that incoming “freed-up” time to operating, upgrading, and flourishing your law practice? Because this is what you do now: You run a legal business. You deliver good outcomes, good experiences, and good relationships to clients. Humans do some of the work and machines do some of the work and the distinction that matters is not billable/non-billable, it’s which type of work is best suited to which type of performer.
Intellectual rigor comes from the journey: the dead ends, the uncertainty, and the internal debate. Skip that, and you might still get the insight–but you’ll have lost the infrastructure for meaningful understanding. Learning by reading LLM output is cheap. Real exercise for your mind comes from building the output yourself.
The irony is that I now know more than I ever would have before AI. But I feel slightly dumber. A bit more dull. LLMs give me finished thoughts, polished and convincing, but none of the intellectual growth that comes from developing them myself.
Every few months I put together a guide on which AI system to use. Since I last wrote my guide, however, there has been a subtle but important shift in how the major AI products work. Increasingly, it isn’t about the best model, it is about the best overall system for most people. The good news is that picking an AI is easier than ever and you have three excellent choices. The challenge is that these systems are getting really complex to understand. I am going to try and help a bit with both.
First, the easy stuff.
Which AI to Use For most people who want to use AI seriously, you should pick one of three systems: Claude from Anthropic, Google’s Gemini, and OpenAI’s ChatGPT.
This summer, I tried something new in my fully online, asynchronous college writing course. These classes have no Zoom sessions. No in-person check-ins. Just students, Canvas, and a lot of thoughtful design behind the scenes.
One activity I created was called QuoteWeaver—a PlayLab bot that helps students do more than just insert a quote into their writing.
It’s a structured, reflective activity that mimics something closer to an in-person 1:1 conference or a small group quote workshop—but in an asynchronous format, available anytime. In other words, it’s using AI not to speed students up, but to slow them down.
…
The bot begins with a single quote that the student has found through their own research. From there, it acts like a patient writing coach, asking open-ended, Socratic questions such as:
What made this quote stand out to you?
How would you explain it in your own words?
What assumptions or values does the author seem to hold?
How does this quote deepen your understanding of your topic?
It doesn’t move on too quickly. In fact, it often rephrases and repeats, nudging the student to go a layer deeper.
On [6/13/25], UNESCO published a piece I co-authored with Victoria Livingstone at Johns Hopkins University Press. It’s called The Disappearance of the Unclear Question, and it’s part of the ongoing UNESCO Education Futures series – an initiative I appreciate for its thoughtfulness and depth on questions of generative AI and the future of learning.
Our piece raises a small but important red flag. Generative AI is changing how students approach academic questions, and one unexpected side effect is that unclear questions – for centuries a trademark of deep thinking – may be beginning to disappear. Not because they lack value, but because they don’t always work well with generative AI. Quietly and unintentionally, students (and teachers) may find themselves gradually avoiding them altogether.
Of course, that would be a mistake.
We’re not arguing against using generative AI in education. Quite the opposite. But we do propose that higher education needs a two-phase mindset when working with this technology: one that recognizes what AI is good at, and one that insists on preserving the ambiguity and friction that learning actually requires to be successful.
By leveraging generative artificial intelligence to convert lengthy instructional videos into micro-lectures, educators can enhance efficiency while delivering more engaging and personalized learning experiences.
Researchers at Massachusetts Institute of Technology (MIT) have now devised a way for LLMs to keep improving by tweaking their own parameters in response to useful new information.
The work is a step toward building artificial intelligence models that learn continually—a long-standing goal of the field and something that will be crucial if machines are to ever more faithfully mimic human intelligence. In the meantime, it could give us chatbots and other AI tools that are better able to incorporate new information including a user’s interests and preferences.
The MIT scheme, called Self Adapting Language Models (SEAL), involves having an LLM learn to generate its own synthetic training data and update procedure based on the input it receives.
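The loop SEAL describes — the model writes its own training data, applies an update, and keeps it only if it helps — can be shown schematically. The sketch below is a toy illustration of that accept/reject structure under stated assumptions; the function names are stubs, not the MIT team’s actual code, and real SEAL operates on LLM weights rather than toy values.

```python
# Schematic of a SEAL-style self-adaptation step (illustrative only):
# 1. measure baseline performance,
# 2. let the model generate a "self-edit" (synthetic training data),
# 3. fine-tune a candidate on that edit,
# 4. keep the candidate only if held-out performance improves.

def seal_step(model, new_input, evaluate, fine_tune, generate_self_edit):
    """One self-adaptation step; returns (model, whether_edit_was_kept)."""
    baseline = evaluate(model)
    self_edit = generate_self_edit(model, new_input)  # model writes its own training data
    candidate = fine_tune(model, self_edit)           # apply the proposed update
    if evaluate(candidate) > baseline:
        return candidate, True    # the self-edit helped: keep it
    return model, False           # otherwise discard and keep the old weights
```

With stub functions standing in for evaluation and fine-tuning, a useful input produces an accepted update while a harmful one is rejected, which is the essence of learning continually without degrading on what the model already knows.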
Edu-Snippets — from scienceoflearning.substack.com by Nidhi Sachdeva and Jim Hewitt Why knowledge matters in the age of AI; What happens to learners’ neural activity with prolonged use of LLMs for writing
Highlights:
Offloading knowledge to Artificial Intelligence (AI) weakens memory, disrupts memory formation, and erodes the deep thinking our brains need to learn.
Prolonged use of ChatGPT in writing lowers neural engagement, impairs memory recall, and accumulates cognitive debt that isn’t easily reversed.
A pioneer in legal technology has predicted the billable hour model cannot survive the transition into the use of artificial intelligence.
Speaking to the Gazette on a visit to the UK, Canadian Jack Newton, founder and chief executive of lawtech company Clio, said there was a ‘structural incompatibility’ between the productivity gains of AI and the billable hour.
Newton said the adoption of AI should be welcomed and embraced by the legal profession but that lawyers will need an entrepreneurial mindset to make the most of its benefits.
Newton added: ‘There is enormous demand but the paradox is that the number one thing we hear from lawyers is they need to grow their firms through more clients, while 77% of legal needs are not met.
‘It’s exciting that AI can address these challenges – it will be a tectonic shift in the industry driving down costs and making legal services more accessible.’
The generative AI legal startup Harvey has entered into a strategic alliance with LexisNexis Legal & Professional by which it will integrate LexisNexis’ gen AI technology, primary law content, and Shepard’s Citations within the Harvey platform and jointly develop advanced legal workflows.
As a result of the partnership, Harvey’s customers working within its platform will be able to ask questions of LexisNexis Protégé, the AI legal assistant released in January, and receive AI-generated answers grounded in the LexisNexis collection of U.S. case law and statutes and validated through Shepard’s Citations, the companies said.
It’s not just about redesigning public education—it’s about rethinking how, where and with whom learning happens. Communities across the United States are shaping learner-centered ecosystems and gathering insights along the way.
What does it take to build a learner-centered ecosystem? A shared vision. Distributed leadership. Place-based experiences. Repurposed resources. And more. This piece unpacks 10 real-world insights from pilots in action.
We believe the path forward is through the cultivation of learner-centered ecosystems — adaptive, networked structures that offer a transformed way of organizing, supporting, and credentialing community-wide learning. These ecosystems break down barriers between schools, communities, and industries, creating flexible, real-world learning experiences that tap into the full range of opportunities a community has to offer.
Last year, we announced our Learner-Centered Ecosystem Lab, a collaborative effort to create a community of practice consisting of twelve diverse sites across the country — from the streets of Brooklyn to the mountains of Ojai — that are demonstrating or piloting ecosystemic approaches. Since then, we’ve been gathering together, learning from one another, and facing the challenges and opportunities of trying to transform public education. And while there is still much more work to be done, we’ve begun to observe a deeper pattern language — one that aligns with our ten-point Ecosystem Readiness Framework, and one that, we hope, can help all communities start to think more practically and creatively about how to transform their own systems of learning.
So while it’s still early, we suspect that the way to establish a healthy learner-centered ecosystem is by paying close attention to the following ten conditions: