10 Higher Ed Trends to Watch In 2025 — from insidetrack.org

While “polarization” was Merriam-Webster’s word of the year for 2024, we have some early frontrunners for 2025 — especially when it comes to higher education. Change. Agility. Uncertainty. Flexibility. As we take a deep dive into the trends on tap for higher education in the coming year, it’s important to note that, with an incoming administration that has vowed to shake things up, the current postsecondary system could be turned on its head. With that in mind, we wade into our yearly look at the topics and trends that will be making headlines — and making waves — in the year ahead.

#Highereducation #learningecosystems #change #trends #businessmodels #onlinelearning #AI #DEI #skillsbasedlearning #skills #alternatives #LearningandEmploymentRecords #LERs #valueofhighereducation #GenAI

 

Psalm 103:1-2

Of David.
Praise the Lord, my soul; all my inmost being, praise his holy name. Praise the Lord, my soul, and forget not all his benefits—

Hebrews 4:16

16 Let us then approach God’s throne of grace with confidence, so that we may receive mercy and find grace to help us in our time of need.

John 14:1-3

“Do not let your hearts be troubled. You believe in God; believe also in me. My Father’s house has many rooms; if that were not so, would I have told you that I am going there to prepare a place for you? And if I go and prepare a place for you, I will come back and take you to be with me that you also may be where I am.

John 11:21-25

21 “Lord,” Martha said to Jesus, “if you had been here, my brother would not have died. 22 But I know that even now God will give you whatever you ask.”

23 Jesus said to her, “Your brother will rise again.”

24 Martha answered, “I know he will rise again in the resurrection at the last day.”

25 Jesus said to her, “I am the resurrection and the life. The one who believes in me will live, even though they die; 26 and whoever lives by believing in me will never die. Do you believe this?”

Revelation 7:12
…saying:

“Amen!
Praise and glory and wisdom and thanks and honor
and power and strength be to our God for ever and ever.
Amen!”

Amos 5:14-15

Seek good, not evil, that you may live. Then the Lord God Almighty will be with you, just as you say he is. Hate evil, love good; maintain justice in the courts. Perhaps the Lord God Almighty will have mercy on the remnant of Joseph.
 

AI educators are coming to this school – and it’s part of a trend — from techradar.com by Eric Hal Schwartz
Two hours of lessons, zero teachers

  • An Arizona charter school will use AI instead of human teachers for two hours a day on academic lessons.
  • The AI will customize lessons in real-time to match each student’s needs.
  • The company has only tested this idea at private schools before but claims it hugely increases student academic success.

One school in Arizona is trying out a new educational model built around AI and a two-hour school day. When Arizona’s Unbound Academy opens, the only teachers will be artificial intelligence algorithms in a perfect utopia or dystopia, depending on your point of view.


AI in Instructional Design: reflections on 2024 & predictions for 2025 — from drphilippahardman.substack.com by Dr. Philippa Hardman
Aka, four new year’s resolutions for the AI-savvy instructional designer.


Debating About AI: A Free Comprehensive Guide to the Issues — from stefanbauschard.substack.com by Stefan Bauschard

In order to encourage and facilitate debate on key controversies related to AI, I put together this free 130+ page guide to the main arguments and ideas related to the controversies.


Universities need to step up their AGI game — from futureofbeinghuman.com by Andrew Maynard
As Sam Altman and others push toward a future where AI changes everything, universities need to decide if they’re going to be leaders or bystanders in helping society navigate advanced AI transitions

And because of this, I think there’s a unique opportunity for universities (research universities in particular) to up their game and play a leadership role in navigating the coming advanced AI transition.

Of course, there are already a number of respected university-based initiatives that are working on parts of the challenge. Stanford HAI (Human-centered Artificial Intelligence) is one that stands out, as does the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, and the Center for Governance of AI at the University of Oxford. But these and other initiatives are barely scratching the surface of what is needed to help successfully navigate advanced AI transitions.

If universities are to be leaders rather than bystanders in ensuring human flourishing in an age of AI, there’s an urgent need for bolder and more creative forward-looking initiatives that support research, teaching, thought leadership, and knowledge mobilization, at the intersection of advanced AI and all aspects of what it means to thrive and grow as a species.


 

 

Introducing the 2025 Wonder Media Calendar for tweens, teens, and their families/households. Designed by Sue Ellen Christian and her students in her Global Media Literacy class (in the fall 2024 semester at Western Michigan University), the calendar’s purpose is to help people create a new year filled with skills and smart decisions about their media use. This calendar is part of the ongoing Wonder Media Library.com project that includes videos, lesson plans, games, songs and more. The website is funded by a generous grant from the Institute of Museum and Library Services, in partnership with Western Michigan University and the Library of Michigan.


 

 

Tech Trends 2025 — from deloitte.com by Deloitte Insights
In Deloitte’s 16th annual Tech Trends report, AI is the common thread of nearly every trend. Moving forward, it will be part of the substructure of everything we do.

We propose that the future of technology isn’t so much about more AI as it is about ubiquitous AI. We expect that, going forward, AI will become so fundamentally woven into the fabric of our lives that it’s everywhere, and so foundational that we stop noticing it.

AI will eventually follow a similar path, becoming so ubiquitous that it will be a part of the unseen substructure of everything we do, and we eventually won’t even know it’s there. It will quietly hum along in the background, optimizing traffic in our cities, personalizing our health care, and creating adaptive and accessible learning paths in education. We won’t “use” AI. We’ll just experience a world where things work smarter, faster, and more intuitively—like magic, but grounded in algorithms. We expect that it will provide a foundation for business and personal growth while also adapting and sustaining itself over time.

Nowhere is this AI-infused future more evident than in this year’s Tech Trends report, which each year explores emerging trends across the six macro forces of information technology (figure 1). Half of the trends that we’ve chronicled are elevating forces—interaction, information, and computation—that underpin innovation and growth. The other half—the grounding forces of the business of technology, cyber and trust, and core modernization—help enterprises seamlessly operate while they grow.

 

The legal tech trends that defined 2024 — from abajournal.com by Nicole Black

The year 2024 was one of change. In the midst of a largely unpopular back-to-office push, technological innovation and development occurred at a rapid clip. Legal professionals approached technology with a newfound curiosity and appreciation gained as a result of pandemic-era remote work experimentation. The increasing demand for generative artificial intelligence tools fueled heavy investments in the legal technology industry.

Simultaneously, law firm technology adoption was supported by a flurry of guidance released by ethics committees across the country. As technology upended traditional ways of working, some state bars reconsidered bar exam requirements and others experimented with loosening licensing regulations.

No matter where you looked, change was occurring at a breakneck pace as technology’s impact on the practice of law became inescapable. Through it all, a few key trends emerged that offer clues on where we’re headed in the coming year.

Meanwhile, some states, including Oregon, Washington and Nevada, explored allowing unlicensed professionals to practice law in limited areas, such as family law and small claims. These efforts seek to improve access to justice, representing a revised perspective on long-standing rules about who can—and cannot—deliver legal services.


What to Expect in 2025: AI Legal Tech and Regulation (65 Expert Predictions) — from natlawreview.com by Oliver Roberts

As 2024 comes to a close, it’s time to look ahead to how AI will shape the law and legal practice in 2025. Over the past year, we’ve witnessed growing adoption of AI across the legal sector, substantial investments in legal AI startups, and a rise in state-level AI regulations. While the future of 2025 remains uncertain, industry leaders are already sharing their insights.

Along with 2025 predictions from The National Law Review’s Editor-in-Chief Oliver Roberts, this article presents 65 expert predictions on AI and the law in 2025 from federal judges, startup founders, CEOs, and leaders of AI practice groups at global law firms.


The Potential of GenAI to Promote Access to Justice — from law.com by Joanne Sprague
GenAI-assisted legal support is not a substitute for lawyers, but may help legal aid professionals serve more clients efficiently and effectively.

Generative AI (GenAI) has been heralded as a transformative force, poised to revolutionize everything from medicine to education to law. While GenAI won’t perform surgery or earn diplomas, it holds the promise of enabling lawyers to get due process for more of their clients or even empowering individuals to represent themselves in court. The harsh reality is that low-income individuals do not receive sufficient legal help for 92% of their civil legal problems, and legal aid organizations must turn away one of every two requests they get, according to the 2022 Justice Gap Report. GenAI-assisted legal support is not a substitute for lawyers, but may help legal aid professionals serve more clients efficiently and effectively.

If implemented equitably, GenAI could democratize legal knowledge and empower individuals to navigate the complexities of the justice system more easily.

In her new book “Law Democratized,” Renee Knake Jefferson says that GenAI “has the potential to become the single most important tool in solving the legal justice crisis … if harnessed to do so ethically.” With GenAI, we can envision a possible future of informed self-representation and legal decision-making regardless of ability to pay.


Experimenting in the sandbox — from nationalmagazine.ca by Julie Sobowale
Ontario Bar Association launches AI platform for lawyers to learn tech

The Ontario Bar Association has launched a new, free interactive learning platform for lawyers looking to learn about generative AI.

The new learning platform, created to clarify some of that and help lawyers navigate this technology, is part of OBA’s Real Intelligence on AI project. It is being spearheaded by Colin Lachance, the association’s innovator-in-residence.

Users can ask questions to LawQI, an AI assistant specializing in Canadian law, and work through learning modules about prompt engineering, different generative AI tools and best practices. The portal is free for OBA members and Ontario law students.

“Lawyers need to know how AI works,” says Lachance, principal at PGYA Consulting and former president and CEO of the Canadian Legal Information Institute (CanLII).

“I wanted to create an environment where lawyers can experiment. By using the technology, you learn how to use it.”


The Innovation Strategist: Nicole Black — from substack.com by Tom Martin and Nicole Black
Where I interview Nicole Black about how she merged her legal expertise with her passion for technology to become a leading voice in legal innovation

Excerpt from Key Takeaways:

  • Her role as employee #1 at MyCase in 2012 allowed her to combine her legal expertise with her passion for technology, leading to her current position as Principal Legal Insight Strategist at Affinipay
  • She believes generative AI will fundamentally transform the legal profession, potentially more significantly than previous technological innovations like PCs and the internet
  • Her advice for new lawyers includes actively experimenting with AI tools like ChatGPT and preparing for significant changes in legal education and entry-level legal work

Legal Liabilities of AI for Attorneys and Small Firms — from ethicalailawinstitute.org by Trent Kubasiak

Many small firms and solo attorneys could be in for a nasty shock when it comes to the use of AI. A detailed report from NYU’s Journal of Legislation and Public Policy is shedding light on the potential legal liabilities of using generative AI. Co-authored by EqualAI CEO Miriam Vogel, former Homeland Security Secretary Michael Chertoff, and others, the report underscores a widespread misconception—that liability for AI-related outcomes rests solely with the developers of these technologies.

For attorneys and small business owners, this misconception can be dangerous. As Vogel explains, “There are so many laws on the books that people need to know are applicable.” From lending and housing regulations to employment law, the use of AI—even indirectly—can expose firms to significant risks.


Challenges And Opportunities Of Digital Transformation In US Law Firms — from forbes.com by Chad Sands

So, what is driving the transformation?

Some adoption of new “legal tech” is literally being forced by legacy software companies who are shutting down older, server-based technology platforms. But most law firms are now increasingly becoming more proactive in planning and executing their digital transformation strategies on their own.

This is no longer a choice or matter of “Why should we?”

It’s a question of “When will we?”

There are several factors driving this shift, one being client expectations.


Fresh Voices On Legal Tech with Ilona Logvinova — from legaltalknetwork.com by Dennis Kennedy, Tom Mighell, and Ilona Logvinova

The world of AI and legal tech is evolving ever more rapidly, and it is all too common for lawyers to feel intimidated at the thought of keeping up with the constant barrage of change. How should lawyers maintain their tech competence? Dennis and Tom talk with Ilona Logvinova about her work in tech and AI innovations for lawyers. She shares her career journey and offers perspectives on leveraging technology to create new and better opportunities for attorneys to thrive in their work.


AI Insights for Legal: Ten Key Takeaways from Summit AI New York — from techlawcrossroads.com by Stephen Embry

Despite the shortcomings, it was still a good conference (i.e., the Summit AI New York). I learned some things and confirmed that many of the AI-related issues being faced by legal are also being faced by many other businesses. What were my top ten takeaways? Here they are in no particular order:

 

Teacher Shortage: Is Hybrid or Remote Teaching the Answer? — from edtechmagazine.com by Adam Stone
In these uncertain times, K–12 schools use technology to better support students and teachers.

How Can Remote or Hybrid Teaching Help?
A shift to virtual learning can help close the gaps.

First, remote work can draw more people into the field. “For some folks, particularly with the pandemic and teaching for a year or more online, they found that model appealing to them from a professional and personal standpoint,” Carbaugh says.

While many educators still prefer face-to-face interactions, he says, others may find the ability to work from home appealing.

Virtual learning can also broaden the candidate pool in hard-to-fill roles. In STEM, for instance, “you might have someone who is willing to teach a class for you in addition to their normal job,” Speegle says. “They can teach computer science, biology or calculus for an hour a day, and they’re done.”


What Happens When Public School Districts Embrace Hybrid Schools? — from the74million.org by Eric Wearne & Tom Loud
With a fifth of its school-age children engaged in homeschooling, one Tennessee district found a way to connect them to the public system

With one in five school-age children engaged in homeschooling, Blount County Schools decided in 2018 to offer an option aimed at bridging the best of both homeschooling and public school, while offering a flexible schedule and college preparatory academics.

While the hybrid schooling model is not necessarily new, two developments have emerged in recent years. First, interest in attending, founding, and working at these schools has increased since the Covid pandemic; and second, conventional public-school systems are starting to get into the game.


Launchpad Jobs — from burningglassinstitute.org; via Paul Fain’s Education Pipeline posting

Launchpad Jobs highlights how nondegree workers can achieve career success through strategic job choices. It reveals that nearly 2 million workers without college degrees earn six-figure salaries, demonstrating that fulfilling and well-paying careers are accessible without a traditional four-year education.

The report identifies 73 key roles, termed “Launchpad Jobs,” that offer a combination of strong wages, job stability, and upward mobility. These include positions such as EMTs, electricians, and bank tellers, which often serve as steppingstones to long-term success. Using big data analysis of career histories, this report maps the trajectories of workers starting in various roles, showcasing how initial job choices influence future earnings and advancement potential.


Why College Freshman Enrollment Declined and What it Could Mean for Students — from usnews.com by Sarah Wood
Experts cite possible reasons for the 5% overall enrollment drop in fall 2024 and implications for the current admission cycle.


From DSC:
Speaking of learning ecosystems, this next piece is absolutely incredible in terms of learning ecosystems from other nations!!!

China leads world in massive open online courses: Ministry of Education — from globaltimes.cn by Chen Xi; via GSV

China has established the world’s largest online education system, according to a document sent by the Ministry of Education to the Global Times on Wednesday.

As of now, the country has developed more than 30 online course platforms, with more than 97,000 massive open online courses (MOOCs) made available, 483 million registered users, and 1.39 billion learning instances. Additionally, 440 million instances of students obtaining course credits have been recorded, making China’s number of MOOCs and learners the highest in the world, according to the document.

Furthermore, a national smart education platform – the Smart Education of China in Higher Education – has launched 31,000 high-quality online courses, with 78,000 teachers participating in teaching, more than 16.82 million users, and over 93 million visits, covering 183 countries and regions worldwide.

Many of these courses have garnered high praise among global students. 


2025 Job Skills Report — from coursera.org

Uncover the fastest-growing skills with the Job Skills Report 2025. This practical resource draws on data from Coursera’s 5 million enterprise learners to highlight the skills and learning experiences that employees, students, and job seekers will prioritize for career success in 2025.

This year’s report reveals that generative AI (GenAI) is the most in-demand skill, with enterprise course enrollments soaring by 866% year-over-year. By upskilling learners globally, industry, higher education, and governments can unlock AI’s potential $15.7 trillion in global economic value by 2030.

Access the report to:

  • Identify the fastest-growing skills in AI, business, data science, and technology.
  • Compare skill priorities of students, employees, and job seekers.
  • Understand how learners engage with AI learning experiences.

Break the monopoly on higher education pathways — from fastcompany.com by Antonio Gutierrez; via GSV
New models prove that younger and underserved populations are finding success with skills-based programs and hybrid educational models.

The Duet-SNHU model proves that accessible, flexible, and cost-effective alternatives are possible and scalable. Meanwhile, the explosion of nondegree credentials offers additional pathways to skills-focused career readiness, reflecting a growing appetite for innovation in education. To remain competitive in the global economy, the U.S. must embrace these alternatives while reforming traditional institutions.

Policymakers must prioritize funding based on performance metrics like graduation rates and job placements, and accreditors must hold institutions accountable for real-world outcomes. Business leaders, educators, and community stakeholders must champion scalable models that deliver equity and opportunity. The stakes are too high to cling to an outdated system. By disrupting the status quo, we can create an education system that serves all Americans and strengthens the economy for generations to come.

 

Introducing Gemini 2.0: our new AI model for the agentic era — from blog.google by Sundar Pichai, Demis Hassabis, and Koray Kavukcuoglu

Today we’re excited to launch our next era of models built for this new agentic era: introducing Gemini 2.0, our most capable model yet. With new advances in multimodality — like native image and audio output — and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant.

We’re getting 2.0 into the hands of developers and trusted testers today. And we’re working quickly to get it into our products, leading with Gemini and Search. Starting today our Gemini 2.0 Flash experimental model will be available to all Gemini users. We’re also launching a new feature called Deep Research, which uses advanced reasoning and long context capabilities to act as a research assistant, exploring complex topics and compiling reports on your behalf. It’s available in Gemini Advanced today.

Over the last year, we have been investing in developing more agentic models, meaning they can understand more about the world around you, think multiple steps ahead, and take action on your behalf, with your supervision.


Try Deep Research and our new experimental model in Gemini, your AI assistant — from blog.google by Dave Citron
Deep Research rolls out to Gemini Advanced subscribers today, saving you hours of time. Plus, you can now try out a chat optimized version of 2.0 Flash Experimental in Gemini on the web.

Today, we’re sharing the latest updates to Gemini, your AI assistant, including Deep Research — our new agentic feature in Gemini Advanced — and access to try Gemini 2.0 Flash, our latest experimental model.

Deep Research uses AI to explore complex topics on your behalf and provide you with findings in a comprehensive, easy-to-read report, and is a first look at how Gemini is getting even better at tackling complex tasks to save you time.


Google Unveils A.I. Agent That Can Use Websites on Its Own — from nytimes.com by Cade Metz and Nico Grant (NOTE: This is a GIFTED article for/to you.)
The experimental tool can browse spreadsheets, shopping sites and other services, before taking action on behalf of the computer user.

Google on Wednesday unveiled a prototype of this technology, which artificial intelligence researchers call an A.I. agent.

Google’s new prototype, called Mariner, is based on Gemini 2.0, which the company also unveiled on Wednesday. Gemini is the core technology that underpins many of the company’s A.I. products and research experiments. Versions of the system will power the company’s chatbot of the same name and A.I. Overviews, a Google search tool that directly answers user questions.


Gemini 2.0 is the next chapter for Google AI — from axios.com by Ina Fried

Google Gemini 2.0 — a major upgrade to the core workings of Google’s AI that the company launched Wednesday — is designed to help generative AI move from answering users’ questions to taking action on its own…

The big picture: Hassabis said building AI systems that can take action on their own has been DeepMind’s focus since its early days teaching computers to play games such as chess and Go.

  • “We were always working towards agent-based systems,” Hassabis said. “From the beginning, they were able to plan and then carry out actions and achieve objectives.”
  • Hassabis said AI systems that can act as semi-autonomous agents also represent an important intermediate step on the path toward artificial general intelligence (AGI) — AI that can match or surpass human capabilities.
  • “If we think about the path to AGI, then obviously you need a system that can reason, break down problems and carry out actions in the world,” he said.

AI Agents vs. AI Assistants: Know the Key Differences — from aithority.com by Rishika Patel

The same paradigm applies to AI systems. AI assistants function as reactive tools, completing tasks like answering queries or managing workflows upon request. Think of chatbots or scheduling tools. AI agents, however, work autonomously to achieve set objectives, making decisions and executing tasks dynamically, adapting as new information becomes available.

Together, AI assistants and agents can enhance productivity and innovation in business environments. While assistants handle routine tasks, agents can drive strategic initiatives and problem-solving. This powerful combination has the potential to elevate organizations, making processes more efficient and professionals more effective.
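The reactive-versus-autonomous distinction above can be made concrete with a small sketch. This is purely illustrative (not any vendor’s API): the `assistant` function answers one request and stops, while the `Agent` class pursues an objective through a plan-act loop until it decides it is done. All names and the stubbed decision logic are hypothetical.

```python
# Illustrative sketch of the assistant-vs-agent paradigm (hypothetical code,
# not a real AI API). An assistant reacts to one request; an agent loops,
# choosing and executing actions until its objective is met.

def assistant(request: str) -> str:
    """Reactive: one request in, one response out; no initiative."""
    canned = {"schedule meeting": "Meeting scheduled for 3 pm."}
    return canned.get(request, "Sorry, I can't help with that.")

class Agent:
    """Autonomous: pursues an objective, deciding its own next steps."""

    def __init__(self, objective: str):
        self.objective = objective
        self.log = []  # actions taken, in order

    def plan(self, state: dict) -> str:
        # Decide the next action from the current state (stubbed logic;
        # a real agent would use a model to reason here).
        if not state.get("data_gathered"):
            return "gather_data"
        if not state.get("report_written"):
            return "write_report"
        return "done"

    def act(self, action: str, state: dict) -> dict:
        # Execute the chosen action and observe its effect on state.
        self.log.append(action)
        if action == "gather_data":
            state["data_gathered"] = True
        elif action == "write_report":
            state["report_written"] = True
        return state

    def run(self) -> list:
        # The agent loop: plan, act, observe, repeat until done.
        state = {}
        while (action := self.plan(state)) != "done":
            state = self.act(action, state)
        return self.log

print(assistant("schedule meeting"))          # Meeting scheduled for 3 pm.
print(Agent("produce a market report").run()) # ['gather_data', 'write_report']
```

The assistant never acts unprompted; the agent keeps going, adapting its next action to the state it observes — which is the combination of routine handling and strategic initiative the excerpt describes.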


Discover how to accelerate AI transformation with NVIDIA and Microsoft — from ignite.microsoft.com

Meet NVIDIA – The Engine of AI. From gaming to data science, self-driving cars to climate change, we’re tackling the world’s greatest challenges and transforming everyday life. The Microsoft and NVIDIA partnership enables Startups, ISVs, and Partners global access to the latest NVIDIA GPUs on-demand and comprehensive developer solutions to build, deploy and scale AI-enabled products and services.


Google + Meta + Apple New AI — from theneurondaily.com by Grant Harvey

What else Google announced:

  • Deep Research: New feature that can explore topics and compile reports.
  • Project Astra: AI agent that can use Google Search, Lens, and Maps, understands multiple languages, and has 10-minute conversation memory.
  • Project Mariner: A browser control agent that can complete web tasks (83.5% success rate on WebVoyager benchmark). Read more about Mariner here.
  • Agents to help you play (or test) video games.

AI Agents: Easier To Build, Harder To Get Right — from forbes.com by Andres Zunino

The swift progress of artificial intelligence (AI) has simplified the creation and deployment of AI agents with the help of new tools and platforms. However, deploying these systems beneath the surface comes with hidden challenges, particularly concerning ethics, fairness and the potential for bias.

The history of AI agents highlights the growing need for expertise to fully realize their benefits while effectively minimizing risks.

 

What Students Are Saying About Teachers Using A.I. to Grade — from nytimes.com by The Learning Network; via Claire Zau
Teenagers and educators weigh in on a recent question from The Ethicist.

Is it unethical for teachers to use artificial intelligence to grade papers if they have forbidden their students from using it for their assignments?

That was the question a teacher asked Kwame Anthony Appiah in a recent edition of The Ethicist. We posed it to students to get their take on the debate, and asked them their thoughts on teachers using A.I. in general.

While our Student Opinion questions are usually reserved for teenagers, we also heard from a few educators about how they are — or aren’t — using A.I. in the classroom. We’ve included some of their answers, as well.


OpenAI wants to pair online courses with chatbots — from techcrunch.com by Kyle Wiggers; via James DeVaney on LinkedIn

If OpenAI has its way, the next online course you take might have a chatbot component.

Speaking at a fireside on Monday hosted by Coeus Collective, Siya Raj Purohit, a member of OpenAI’s go-to-market team for education, said that OpenAI might explore ways to let e-learning instructors create custom “GPTs” that tie into online curriculums.

“What I’m hoping is going to happen is that professors are going to create custom GPTs for the public and let people engage with content in a lifelong manner,” Purohit said. “It’s not part of the current work that we’re doing, but it’s definitely on the roadmap.”


15 Times to use AI, and 5 Not to — from oneusefulthing.org by Ethan Mollick
Notes on the Practical Wisdom of AI Use

There are several types of work where AI can be particularly useful, given the current capabilities and limitations of LLMs. Though this list is based in science, it draws even more from experience. Like any form of wisdom, using AI well requires holding opposing ideas in mind: it can be transformative yet must be approached with skepticism, powerful yet prone to subtle failures, essential for some tasks yet actively harmful for others. I also want to caveat that you shouldn’t take this list too seriously except as inspiration – you know your own situation best, and local knowledge matters more than any general principles. With all that out of the way, below are several types of tasks where AI can be especially useful, given current capabilities—and some scenarios where you should remain wary.


Learning About Google Learn About: What Educators Need To Know — from techlearning.com by Ray Bendici
Google’s experimental Learn About platform is designed to create an AI-guided learning experience

Google Learn About is a new experimental AI-driven platform that provides digestible and in-depth knowledge about various topics, but showcases it all in an educational context. Described by Google as a “conversational learning companion,” it is essentially a Wikipedia-style chatbot/search engine, and then some.

In addition to having a variety of already-created topics and leading questions (in areas such as history, arts, culture, biology, and physics) the tool allows you to enter prompts using either text or an image. It then provides a general overview/answer, and then suggests additional questions, topics, and more to explore in regard to the initial subject.

The idea for student use is that the AI can help guide a deeper learning process rather than just provide static answers.


What OpenAI’s PD for Teachers Does—and Doesn’t—Do — from edweek.org by Olina Banerji
What’s the first thing that teachers dipping their toes into generative artificial intelligence should do?

They should start with the basics, according to OpenAI, the creator of ChatGPT and one of the world’s most prominent artificial intelligence research companies. Last month, the company launched an hour-long, self-paced online course for K-12 teachers about the definition, use, and harms of generative AI in the classroom. It was launched in collaboration with Common Sense Media, a national nonprofit that rates and reviews a wide range of digital content for its age appropriateness.

…the above article links to:

ChatGPT Foundations for K–12 Educators — from commonsense.org

This course introduces you to the basics of artificial intelligence, generative AI, ChatGPT, and how to use ChatGPT safely and effectively. From decoding the jargon to responsible use, this course will help you level up your understanding of AI and ChatGPT so that you can use tools like this safely and with a clear purpose.

Learning outcomes:

  • Understand what ChatGPT is and how it works.
  • Demonstrate ways to use ChatGPT to support your teaching practices.
  • Implement best practices for applying responsible AI principles in a school setting.

Takeaways From Google’s Learning in the AI Era Event — from edtechinsiders.substack.com by Sarah Morin, Alex Sarlin, and Ben Kornell
Highlights from Our Day at Google + Behind-the-Scenes Interviews Coming Soon!

  1. NotebookLM: The Start of an AI Operating System
  2. Google is Serious About AI and Learning
  3. Google’s LearnLM Now Available in AI Studio
  4. Collaboration is King
  5. If You Give a Teacher a Ferrari

Rapid Responses to AI — from the-job.beehiiv.com by Paul Fain
Top experts call for better data and more short-term training as tech transforms jobs.

AI could displace middle-skill workers and widen the wealth gap, says a landmark study, which calls for better data and more investment in continuing education to help workers make career pivots.

Ensuring That AI Helps Workers
Artificial intelligence has emerged as a general purpose technology with sweeping implications for the workforce and education. While it’s impossible to precisely predict the scope and timing of looming changes to the labor market, the U.S. should build its capacity to rapidly detect and respond to AI developments.
That’s the big-ticket framing of a broad new report from the National Academies of Sciences, Engineering, and Medicine. Congress requested the study, tapping an all-star committee of experts to assess the current and future impact of AI on the workforce.

“In contemplating what the future holds, one must approach predictions with humility,” the study says…

“AI could accelerate occupational polarization,” the committee said, “by automating more nonroutine tasks and increasing the demand for elite expertise while displacing middle-skill workers.”

The Kicker: “The education and workforce ecosystem has a responsibility to be intentional with how we value humans in an AI-powered world and design jobs and systems around that,” says Hsieh.


AI Predators: What Schools Should Know and Do — from techlearning.com by Erik Ofgang
AI is increasingly being used by predators to connect with underage students online. Yasmin London, global online safety expert at Qoria and a former member of the New South Wales Police Force in Australia, shares steps educators can take to protect students.

The threat from AI for students goes well beyond cheating, says Yasmin London, global online safety expert at Qoria and a former member of the New South Wales Police Force in Australia.

Increasingly, at U.S. schools and beyond, AI is being used by predators to manipulate children. Students are also using AI to generate inappropriate images of other classmates or staff members. For a recent report, Qoria, a company that specializes in child digital safety and wellbeing products, surveyed 600 schools across North America, the U.K., Australia, and New Zealand.


Why We Undervalue Ideas and Overvalue Writing — from aiczar.blogspot.com by Alexander “Sasha” Sidorkin

A student submits a paper that fails to impress stylistically yet approaches a worn topic from an angle no one has tried before. The grade lands at B minus, and the student learns to be less original next time. This pattern reveals a deep bias in higher education: ideas lose to writing every time.

This bias carries serious equity implications. Students from disadvantaged backgrounds, including first-generation college students, English language learners, and those from under-resourced schools, often arrive with rich intellectual perspectives but struggle with academic writing conventions. Their ideas – shaped by unique life experiences and cultural viewpoints – get buried under red ink marking grammatical errors and awkward transitions. We systematically undervalue their intellectual contributions simply because they do not arrive in standard academic packaging.


Google Scholar’s New AI Outline Tool Explained By Its Founder — from techlearning.com by Erik Ofgang
Google Scholar PDF reader uses Gemini AI to read research papers. The AI model creates direct links to the paper’s citations and a digital outline that summarizes the different sections of the paper.

Google Scholar has entered the AI revolution. Google Scholar PDF reader now utilizes generative AI powered by Google’s Gemini AI tool to create interactive outlines of research papers and provide direct links to sources within the paper. This is designed to make reading the relevant parts of the research paper more efficient, says Anurag Acharya, who co-founded Google Scholar on November 18, 2004, twenty years ago last month.


The Four Most Powerful AI Use Cases in Instructional Design Right Now — from drphilippahardman.substack.com by Dr. Philippa Hardman
Insights from ~300 instructional designers who have taken my AI & Learning Design bootcamp this year

  1. AI-Powered Analysis: Creating Detailed Learner Personas…
  2. AI-Powered Design: Optimising Instructional Strategies…
  3. AI-Powered Development & Implementation: Quality Assurance…
  4. AI-Powered Evaluation: Predictive Impact Assessment…

How Are New AI Tools Changing ‘Learning Analytics’? — from edsurge.com by Jeffrey R. Young
For a field that has been working to learn from the data trails students leave in online systems, generative AI brings new promises — and new challenges.

In other words, with just a few simple instructions to ChatGPT, the chatbot can classify vast amounts of student work and turn it into numbers that educators can quickly analyze.
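
The idea in that excerpt can be sketched in a few lines. This is a minimal illustration, not any published learning-analytics tool: the category names, prompt wording, and helper functions are assumptions invented for the example, and the model replies are simulated rather than fetched from a real API.

```python
# Sketch of the idea above: ask a chat model to label each piece of
# student work with one rubric category, then tally the labels into
# numbers an educator can analyze. Categories, prompt text, and helper
# names here are illustrative assumptions.
from collections import Counter

CATEGORIES = ["on-track", "partial-understanding", "misconception"]

def build_classification_prompt(student_answer: str) -> str:
    """Compose the instruction sent to the chat model for one answer."""
    labels = ", ".join(CATEGORIES)
    return (
        f"Classify the student answer below into exactly one of: {labels}.\n"
        f"Reply with the label only.\n\nAnswer: {student_answer}"
    )

def parse_label(model_reply: str) -> str:
    """Normalize the model's reply to a known category (or 'unknown')."""
    cleaned = model_reply.strip().lower()
    return cleaned if cleaned in CATEGORIES else "unknown"

def tally(labels):
    """Turn a stream of labels into counts -- the 'numbers' educators see."""
    return dict(Counter(labels))

# With a real API client, each reply would come from the model; here we
# simulate three replies to show the aggregation step.
replies = [" on-track ", "Misconception", "partial-understanding"]
counts = tally(parse_label(r) for r in replies)
print(counts)
```

The same loop, pointed at a real chat model and a real stack of submissions, is essentially what the researchers describe: free-text student work in, analyzable counts out.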

Findings from learning analytics research are also being used to help train new generative AI-powered tutoring systems.

Another big application is in assessment, says Pardos, the Berkeley professor. Specifically, new AI tools can be used to improve how educators measure and grade a student’s progress through course materials. The hope is that new AI tools will allow for replacing many multiple-choice exercises in online textbooks with fill-in-the-blank or essay questions.


Increasing AI Fluency Among Enterprise Employees, Senior Management & Executives — from learningguild.com by Bill Brandon

This article attempts, in these early days, to provide some specific guidelines for AI curriculum planning in enterprise organizations.

The two reports identified in the first paragraph help to answer an important question. What can enterprise L&D teams do to improve AI fluency in their organizations?

You might be surprised how many software products have added AI features. Examples (to name a few) are productivity software (Microsoft 365 and Google Workspace); customer relationship management (Salesforce and HubSpot); human resources (Workday and Talentsoft); marketing and advertising (Adobe Marketing Cloud and Hootsuite); and communication and collaboration (Slack and Zoom). Look for more under those categories on software review sites.

 

US College Closures Are Expected to Soar, Fed Research Says — from bloomberg.com

  • Fed research created predictive model of college stress
  • Worst-case scenario forecasts 80 additional closures

The number of colleges that close each year is poised to significantly increase as schools contend with a slowdown in prospective students.

That’s the finding of a new working paper published by the Federal Reserve Bank of Philadelphia, where researchers created predictive models of schools’ financial distress using metrics like enrollment and staffing patterns, sources of revenue, and liquidity data. They overlaid those models with simulations to estimate the likely increase in future closures.

Excerpt from the working paper:

We document a high degree of missing data among colleges that eventually close and show that this is a key impediment to identifying at risk institutions. We then show that modern machine learning techniques, combined with richer data, are far more effective at predicting college closures than linear probability models, and considerably more effective than existing accountability metrics. Our preferred model, which combines an off-the-shelf machine learning algorithm with the richest set of explanatory variables, can significantly improve predictive accuracy even for institutions with complete data, but is particularly helpful for predicting instances of financial distress for institutions with spotty data.
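
The paper's core point about missing data can be made concrete with a toy sketch. This is not the authors' actual model; the field names and records below are invented, and the "richer" approach shown (median imputation plus missingness-indicator features) is just one standard way a machine learning pipeline can use spotty data that a complete-case linear model must throw away.

```python
# Illustrative sketch: a complete-case approach can only score colleges
# with full records, while imputing missing fields and adding a
# "was missing" flag lets a model score every institution -- and the
# missingness itself becomes a feature. Records are invented.
from statistics import median

records = [
    {"enrollment": 1200, "liquidity": 0.8},
    {"enrollment": None, "liquidity": 0.2},   # spotty data
    {"enrollment": 450,  "liquidity": None},  # spotty data
    {"enrollment": 3000, "liquidity": 1.1},
]

def complete_cases(rows):
    """Rows a naive complete-case linear model could actually use."""
    return [r for r in rows if all(v is not None for v in r.values())]

def impute_with_indicators(rows):
    """Median-impute each field and add a missingness flag per field."""
    fields = list(rows[0].keys())
    medians = {
        f: median(r[f] for r in rows if r[f] is not None) for f in fields
    }
    out = []
    for r in rows:
        row = {}
        for f in fields:
            row[f] = r[f] if r[f] is not None else medians[f]
            row[f + "_missing"] = int(r[f] is None)
        out.append(row)
    return out

usable_naive = complete_cases(records)
usable_rich = impute_with_indicators(records)
print(len(usable_naive), len(usable_rich))  # richer pipeline scores all 4
```

The gap matters because, as the excerpt notes, the colleges most at risk of closing are exactly the ones with the spottiest records.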


From DSC:
Questions that come to my mind here include:

  • Shouldn’t the public — especially relevant parents and students — be made more aware of these types of papers and reports?
  • How would any of us like finishing 1-3 years of school, only to be told that our college or university was closing, effective immediately? (This has happened many times already.) And with the demographic cliff starting to hit higher education, it will happen even more often now.
    Adding insult to injury: when we transfer to different institutions, we’re told that many of our prior credits don’t transfer — thus adding a significant amount to the overall cost of obtaining our degrees.
  • Would we not be absolutely furious to receive such communications from our prior — and new — colleges and universities?
  • Will all of these types of closures move more people to this vision here?

Relevant excerpts from Ray Schroeder’s recent articles out at insidehighered.com:

Winds of Change in Higher Ed to Become a Hurricane in 2025

A number of factors are converging to create a huge storm. Generative AI advances, massive federal policy shifts, broad societal and economic changes, and the demographic cliff combine to create uncertainty today and change tomorrow.

Higher Education in 2025: AGI Agents to Displace People

The anticipated enrollment cliff, reductions in federal and state funding, increased inflation, and dwindling public support for tuition increases will combine to put even greater pressure on university budgets.


On the positive side of things, the completion rates have been getting better:

National college completion rate ticks up to 61.1% — from highereddive.com by Natalie Schwartz
Those who started at two-year public colleges helped drive the overall increase in students completing a credential.

Dive Brief:

  • Completion rates ticked up to 61.1% for students who entered college in fall 2018, a 0.5 percentage-point increase compared to the previous cohort, according to data released Wednesday by the National Student Clearinghouse Research Center.
  • The increase marks the highest six-year completion rate since 2007 when the clearinghouse began tracking the data. The growth was driven by fewer students stopping out of college, as well as completion gains among students who started at public two-year colleges.
  • “Higher completion rates are welcome news for colleges and universities still struggling to regain enrollment levels from before the pandemic,” Doug Shapiro, the research center’s executive director, said in a statement dated Wednesday.

Addendum:

Attention Please: Professors Struggle With Student Disengagement — from edsurge.com

The stakes are huge, because the concern is that maybe the social contract between students and professors is kind of breaking down. Do students believe that all this college lecturing is worth hearing? Or, will this moment force a change in the way college teaching is done?

 

From DSC:
I opened up a BRAND NEW box of cereal from Post the other day. As I looked down into the package, I realized that it was roughly half full. (This has happened many times before, but it struck me so much this time that I had to take pictures of it and post this item.)
.

 

.
Looks can be deceiving for sure. It looks like I should have been getting a full box of cereal…but no…only about half of the package was full. It’s another example of the shrinkflation of things — which can also be described as people deceptively ripping other people off. 

“As long as I’m earning $$, I don’t care how it impacts others.” <– That’s not me talking, but it’s increasingly the perspective that many Americans have these days. We don’t bother with ethics and morals…how old-fashioned can you get, right? We just want to make as much money as possible and to hell with how our actions/products are impacting others.

Another example from the food industry is one of the companies that I worked for in the 1990s — Kraft Foods. Kraft has not served people’s health well at all. Even when it tried to take noble steps to provide healthier foods, other food executives and companies in the industry wouldn’t hop on board. They just wanted to please Wall Street, not Main Street. So companies like Kraft have contributed to the situations we now face involving obesity, diabetes, heart attacks, and other ailments. (Not to mention increased health care costs.)

The gambling industry doesn’t give a rip about people either. Look out for the consequences.

And the cannabis industry joins the gambling industry...and they’re often right on the doorsteps of universities and colleges.

Bottom line reflection:
There are REAL ramifications when we don’t take Christ’s words/commands to love one another seriously (or even to care about someone at all). We’re experiencing such ramifications EVERY DAY now.

 

(Excerpt from the 12/4/24 edition)

Robot “Jailbreaks”
In the year or so since large language models hit the big time, researchers have demonstrated numerous ways of tricking them into producing problematic outputs including hateful jokes, malicious code, phishing emails, and the personal information of users. It turns out that misbehavior can take place in the physical world, too: LLM-powered robots can easily be hacked so that they behave in potentially dangerous ways.

Researchers from the University of Pennsylvania were able to persuade a simulated self-driving car to ignore stop signs and even drive off a bridge, get a wheeled robot to find the best place to detonate a bomb, and force a four-legged robot to spy on people and enter restricted areas.

“We view our attack not just as an attack on robots,” says George Pappas, head of a research lab at the University of Pennsylvania who helped unleash the rebellious robots. “Any time you connect LLMs and foundation models to the physical world, you actually can convert harmful text into harmful actions.”

The robot “jailbreaks” highlight a broader risk that is likely to grow as AI models are increasingly used as a way for humans to interact with physical systems, or to enable AI agents to act autonomously on computers, say the researchers involved.


Virtual lab powered by ‘AI scientists’ super-charges biomedical research — from nature.com by Helena Kudiabor
Could human-AI collaborations be the future of interdisciplinary studies?

In an effort to automate scientific discovery using artificial intelligence (AI), researchers have created a virtual laboratory that combines several ‘AI scientists’ — large language models with defined scientific roles — that can collaborate to achieve goals set by human researchers.

The system, described in a preprint posted on bioRxiv last month, was able to design antibody fragments called nanobodies that can bind to the virus that causes COVID-19, proposing nearly 100 of these structures in a fraction of the time it would take an all-human research group.


Can AI agents accelerate AI implementation for CIOs? — from intelligentcio.com by Arun Shankar

By embracing an agent-first approach, every CIO can redefine their business operations. AI agents are now the number one choice for CIOs, as they come pre-built and can generate responses consistent with a company’s brand using trusted business data, explains Thierry Nicault at Salesforce Middle East.


AI Turns Photos Into 3D Real World — from theaivalley.com by Barsee

Here’s what you need to know:

  • The system generates full 3D environments that expand beyond what’s visible in the original image, allowing users to explore new perspectives.
  • Users can freely navigate and view the generated space with standard keyboard and mouse controls, similar to browsing a website.
  • It includes real-time camera effects like depth-of-field and dolly zoom, as well as interactive lighting and animation sliders to tweak scenes.
  • The system works with both photos and AI-generated images, enabling creators to integrate it with text-to-image tools or even famous works of art.

Why it matters:
This technology opens up exciting possibilities for industries like gaming, film, and virtual experiences. Soon, creating fully immersive worlds could be as simple as generating a static image.

Also related, see:

From World Labs

Today we’re sharing our first step towards spatial intelligence: an AI system that generates 3D worlds from a single image. This lets you step into any image and explore it in 3D.

Most GenAI tools make 2D content like images or videos. Generating in 3D instead improves control and consistency. This will change how we make movies, games, simulators, and other digital manifestations of our physical world.

In this post you’ll explore our generated worlds, rendered live in your browser. You’ll also experience different camera effects, 3D effects, and dive into classic paintings. Finally, you’ll see how creators are already building with our models.


Addendum on 12/5/24:

 

2024-11-22: The Race to the Top: Dario Amodei on AGI, Risks, and the Future of Anthropic — from emergentbehavior.co by Prakash (Ate-a-Pi)

Risks on the Horizon: ASL Levels
The two key risks Dario is concerned about are:

a) cyber, bio, radiological, nuclear (CBRN)
b) model autonomy

These risks are captured in Anthropic’s framework for understanding AI Safety Levels (ASL):

1. ASL-1: Narrow-task AI like Deep Blue (no autonomy, minimal risk).
2. ASL-2: Current systems like ChatGPT/Claude, which lack autonomy and don’t pose significant risks beyond information already accessible via search engines.
3. ASL-3: Agents arriving soon (potentially next year) that can meaningfully assist non-state actors in dangerous activities like cyber or CBRN (chemical, biological, radiological, nuclear) attacks. Security and filtering are critical at this stage to prevent misuse.
4. ASL-4: AI smart enough to evade detection, deceive testers, and assist state actors with dangerous projects — strong enough that bad actors would want to use the model itself to do something dangerous. Mechanistic interpretability becomes crucial for verifying AI behavior.
5. ASL-5: AGI surpassing human intelligence in all domains, posing unprecedented challenges.

Anthropic’s if/then framework ensures proactive responses: if a model demonstrates danger, the team clamps down hard, enforcing strict controls.



Should You Still Learn to Code in an A.I. World? — from nytimes.com by
Coding boot camps once looked like the golden ticket to an economically secure future. But as that promise fades, what should you do? Keep learning, until further notice.

Compared with five years ago, the number of active job postings for software developers has dropped 56 percent, according to data compiled by CompTIA. For inexperienced developers, the plunge is an even worse 67 percent.
“I would say this is the worst environment for entry-level jobs in tech, period, that I’ve seen in 25 years,” said Venky Ganesan, a partner at the venture capital firm Menlo Ventures.

For years, the career advice from everyone who mattered — the Apple chief executive Tim Cook, your mother — was “learn to code.” It felt like an immutable equation: Coding skills + hard work = job.

Now the math doesn’t look so simple.

Also see:

AI builds apps in 2 mins flat — where the Neuron mentions this excerpt about Lovable:

There’s a new coding startup in town, and it just MIGHT have everybody else shaking in their boots (we’ll qualify that in a sec, don’t worry).

It’s called Lovable, the “world’s first AI fullstack engineer.”

Lovable does all of that by itself. Tell it what you want to build in plain English, and it creates everything you need. Want users to be able to log in? One click. Need to store data? One click. Want to accept payments? You get the idea.

Early users are backing up these claims. One person even launched a startup that made Product Hunt’s top 10 using just Lovable.

As for us, we made a Wordle clone in 2 minutes with one prompt. Only edit needed? More words in the dictionary. It’s like, really easy y’all.


When to chat with AI (and when to let it work) — from aiwithallie.beehiiv.com by Allie K. Miller

Re: some ideas on how to use Notebook LM:

  • Turn your company’s annual report into an engaging podcast
  • Create an interactive FAQ for your product manual
  • Generate a timeline of your industry’s history from multiple sources
  • Produce a study guide for your online course content
  • Develop a Q&A system for your company’s knowledge base
  • Synthesize research papers into digestible summaries
  • Create an executive content briefing from multiple competitor blog posts
  • Generate a podcast discussing the key points of a long-form research paper

Introducing conversation practice: AI-powered simulations to build soft skills — from codesignal.com by Albert Sahakyan

From DSC:
I have to admit I’m a bit suspicious here, as the “conversation practice” product seems a bit too scripted at times, but I post it because the idea of using AI to practice soft skills development makes a great deal of sense:


 

Denmark’s Gefion: The AI supercomputer that puts society first — from blog.aiport.tech by Daniel Nest
Can it help us reimagine what “AI success” looks like?

In late October 2024, NVIDIA’s Jensen Huang and Denmark’s King Frederik X symbolically plugged in the country’s new AI supercomputer, Gefion.

  1. Societal impact vs. monetization
  2. Public-private cooperation vs. venture capital
  3. Powered by renewable energy
 

Miscommunication Leads AI-Based Hiring Tools Astray — from adigaskell.org

Nearly every Fortune 500 company now uses artificial intelligence (AI) to screen resumes and assess test scores to find the best talent. However, new research from the University of Florida suggests these AI tools might not be delivering the results hiring managers expect.

The problem stems from a simple miscommunication between humans and machines: AI thinks it’s picking someone to hire, but hiring managers only want a list of candidates to interview.

Without knowing about this next step, the AI might choose safe candidates. But if it knows there will be another round of screening, it might suggest different and potentially stronger candidates.
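
The mismatch the researchers describe can be illustrated with a toy selection rule. Everything here is invented for the example: candidates are modeled as (expected score, uncertainty) pairs, and "knowing an interview follows" is represented as rewarding upside, since human screeners will filter out the risky picks later.

```python
# Toy illustration of the objective mismatch above. If the AI believes it
# is making the final hire, maximizing expected score favors "safe"
# candidates; if it knows a human interview round follows, rewarding
# upside (mean + weight * uncertainty) can surface stronger-but-riskier
# candidates. Candidates and weighting are invented assumptions.

candidates = {
    "safe_bet": (7.0, 0.5),   # (expected score, uncertainty)
    "wildcard": (6.5, 3.0),
    "middling": (6.0, 1.0),
}

def pick_final_hire(pool):
    """No interview follows: choose the highest expected score."""
    return max(pool, key=lambda name: pool[name][0])

def pick_for_interview(pool, upside_weight=1.0):
    """Interviews follow: reward upside, since humans will screen again."""
    return max(
        pool, key=lambda name: pool[name][0] + upside_weight * pool[name][1]
    )

print(pick_final_hire(candidates))      # safe_bet
print(pick_for_interview(candidates))   # wildcard
```

Same candidate pool, different objective, different shortlist — which is exactly why telling the AI about the next screening step matters.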


AI agents explained: Why OpenAI, Google and Microsoft are building smarter AI agents — from digit.in by Jayesh Shinde

In the last two years, the world has seen breakneck advancement in the generative AI space, from text-to-text to text-to-image to text-to-video capabilities. And all of that has been a stepping stone to the next big AI breakthrough — AI agents. According to Bloomberg, OpenAI is preparing to launch its first autonomous AI agent, codenamed ‘Operator,’ as soon as January 2025.

Apparently, this OpenAI agent — or Operator, as it’s codenamed — is designed to perform complex tasks independently. By understanding user commands through voice or text, this AI agent will seemingly handle tasks like controlling different applications on the computer, sending an email, booking flights, and no doubt other cool things. Stuff that ChatGPT, Copilot, Google Gemini, or any other LLM-based chatbot just can’t do on its own.


2025: The year ‘invisible’ AI agents will integrate into enterprise hierarchies  — from venturebeat.com by Taryn Plumb

In the enterprise of the future, human workers are expected to work closely alongside sophisticated teams of AI agents.

According to McKinsey, generative AI and other technologies have the potential to automate 60 to 70% of employees’ work. And, already, an estimated one-third of American workers are using AI in the workplace — oftentimes unbeknownst to their employers.

However, experts predict that 2025 will be the year that these so-called “invisible” AI agents begin to come out of the shadows and take more of an active role in enterprise operations.

“Agents will likely fit into enterprise workflows much like specialized members of any given team,” said Naveen Rao, VP of AI at Databricks and founder and former CEO of MosaicAI.


State of AI Report 2024 Summary — from ai-supremacy.com by Michael Spencer
Part I, Consolidation, emergence and adoption. 


Which AI Image Model Is the Best Speller? Let’s Find Out! — from whytryai.com by Daniel Nest
I test 7 image models to find those that can actually write.

The contestants
I picked 7 participants for today’s challenge:

  1. DALL-E 3 by OpenAI (via Microsoft Designer)
  2. FLUX1.1 [pro] by Black Forest Labs (via Glif)
  3. Ideogram 2.0 by Ideogram (via Ideogram)
  4. Imagen 3 by Google (via Image FX)
  5. Midjourney 6.1 by Midjourney (via Midjourney)
  6. Recraft V3 by Recraft (via Recraft)
  7. Stable Diffusion 3.5 Large by Stability AI (via Hugging Face)

How to get started with AI agents (and do it right) — from venturebeat.com by Taryn Plumb

So how can enterprises choose when to adopt third-party models, open source tools or build custom, in-house fine-tuned models? Experts weigh in.


OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI — from bloomberg.com (behind firewall)
Three of the leading artificial intelligence companies are seeing diminishing returns from their costly efforts to develop newer models.


OpenAI and others seek new path to smarter AI as current methods hit limitations — from reuters.com by Krystal Hu and Anna Tong

Summary

  • AI companies face delays and challenges with training new large language models
  • Some researchers are focusing on more time for inference in new models
  • Shift could impact AI arms race for resources like chips and energy

NVIDIA Advances Robot Learning and Humanoid Development With New AI and Simulation Tools — from blogs.nvidia.com by Spencer Huang
New Project GR00T workflows and AI world model development technologies to accelerate robot dexterity, control, manipulation and mobility.


How Generative AI is Revolutionizing Product Development — from intelligenthq.com

A recent report from McKinsey predicts that generative AI could unlock $2.6 trillion to $4.4 trillion in value annually within product development and innovation across various industries. This staggering figure highlights just how significantly generative AI is set to transform the landscape of product development. Generative AI app development is driving innovation by using the power of advanced algorithms to generate new ideas, optimize designs, and personalize products at scale, and it is becoming a cornerstone of competitive advantage in today’s fast-paced market. As businesses look to stay ahead, understanding and integrating technologies like generative AI into product development processes is becoming more crucial than ever.


What are AI Agents: How To Create a Based AI Agent — from ccn.com by Lorena Nessi

Key Takeaways

  • AI agents handle complex, autonomous tasks beyond simple commands, showcasing advanced decision-making and adaptability.
  • The Based AI Agent template by Coinbase and Replit provides an easy starting point for developers to build blockchain-enabled AI agents.
  • Based AI agents specifically integrate with blockchain, supporting crypto wallets and transactions.
  • Securing API keys in development is crucial to protect the agent from unauthorized access.
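
The key-security takeaway above is worth making concrete. This is a minimal sketch, not part of the Based AI Agent template: the variable name and helper function are illustrative assumptions. The point is simply to read secrets from the environment rather than hardcoding them in source files that end up in version control.

```python
# Sketch of the API-key-handling point above: fetch secrets from
# environment variables, failing loudly if one is unset. The variable
# name and helper here are illustrative, not from any specific template.
import os

def load_api_key(var_name: str) -> str:
    """Fetch a secret from the environment, raising if it is unset."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"{var_name} is not set; export it before running the agent"
        )
    return key

# BAD:  API_KEY = "sk-live-..."  # hardcoded, ends up in version control
# GOOD: export AGENT_API_KEY=... in the shell (or a .env loader), then
# call load_api_key("AGENT_API_KEY") at startup.
```

A `.gitignore` entry for any local `.env` file closes the same loophole from the repository side.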

What are AI Agents and How Are They Used in Different Industries? — from rtinsights.com by Salvatore Salamone
AI agents enable companies to make smarter, faster, and more informed decisions. From predictive maintenance to real-time process optimization, these agents are delivering tangible benefits across industries.

 
© 2025 | Daniel Christian