Tech Trends 2025 — from deloitte.com by Deloitte Insights
In Deloitte’s 16th annual Tech Trends report, AI is the common thread of nearly every trend. Moving forward, it will be part of the substructure of everything we do.

We propose that the future of technology isn’t so much about more AI as it is about ubiquitous AI. We expect that, going forward, AI will become so fundamentally woven into the fabric of our lives that it’s everywhere, and so foundational that we stop noticing it.

AI will eventually follow a similar path to earlier foundational technologies, becoming so ubiquitous that it will be a part of the unseen substructure of everything we do, and we eventually won’t even know it’s there. It will quietly hum along in the background, optimizing traffic in our cities, personalizing our health care, and creating adaptive and accessible learning paths in education. We won’t “use” AI. We’ll just experience a world where things work smarter, faster, and more intuitively—like magic, but grounded in algorithms. We expect that it will provide a foundation for business and personal growth while also adapting and sustaining itself over time.

Nowhere is this AI-infused future more evident than in this year’s Tech Trends report, which each year explores emerging trends across the six macro forces of information technology (figure 1). Half of the trends that we’ve chronicled are elevating forces—interaction, information, and computation—that underpin innovation and growth. The other half—the grounding forces of the business of technology, cyber and trust, and core modernization—help enterprises seamlessly operate while they grow.

 

60 Minutes Overtime
Sal Khan wants an AI tutor for every student: here’s how it’s working at an Indiana high school — from cbsnews.com by Anderson Cooper, Aliza Chasan, Denise Schrier Cetta, and Katie Brennan

“I mean, that’s what I’ll always want for my own children and, frankly, for anyone’s children,” Khan said. “And the hope here is that we can use artificial intelligence and other technologies to amplify what a teacher can do so they can spend more time standing next to a student, figuring them out, having a person-to-person connection.”

“After a week you start to realize, like, how you can use it,” Brockman said. “That’s been one of the really important things about working with Sal and his team, to really figure out what’s the right way to sort of bring this to parents and to teachers and to classrooms and to do that in a way…so that the students really learn and aren’t just, you know, asking for the answers and that the parents can have oversight and the teachers can be involved in that process.”


Nectir lets teachers tailor AI chatbots to provide their students with 24/7 educational support — from techcrunch.com by Lauren Forristal

More than 100 colleges and high schools are turning to a new AI tool called Nectir, allowing teachers to create a personalized learning partner that’s trained on their syllabi, textbooks, and assignments to help students with anything from questions related to their coursework to essay writing assistance and even future career guidance.

With Nectir, teachers can create an AI assistant tailored to their specific needs, whether for a single class, a department, or the entire campus. Various personalization options let teachers set clear boundaries for the AI’s interactions, such as restricting the assistant to certain subjects or having it respond in a way that aligns with their teaching style.
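As a concrete illustration of how such boundaries might work (a minimal sketch with invented names; Nectir's actual implementation is not public), a subject-scoped assistant can combine a teacher-defined allow-list with a system prompt that encodes the teaching style:

```python
# Hypothetical sketch of a subject-scoped classroom assistant.
# All names here are invented for illustration, not Nectir's API.

ALLOWED_TOPICS = {"algebra", "geometry", "trigonometry"}

def build_system_prompt(teaching_style: str) -> str:
    """Combine the teacher's boundaries and style into one system prompt."""
    return (
        "You are a course assistant. Only answer questions about: "
        f"{', '.join(sorted(ALLOWED_TOPICS))}. "
        f"Respond in a {teaching_style} tone. "
        "If asked for a direct answer to graded work, guide instead."
    )

def is_in_scope(question: str) -> bool:
    """Cheap keyword gate applied before the question reaches the model."""
    q = question.lower()
    return any(topic in q for topic in ALLOWED_TOPICS)

def handle(question: str, teaching_style: str = "encouraging") -> str:
    """Route a student question: refuse out-of-scope, otherwise call the model."""
    if not is_in_scope(question):
        return "That's outside this course. Try your course materials instead."
    # In a real system this would call an LLM with the system prompt below.
    return f"[model call with prompt: {build_system_prompt(teaching_style)!r}]"
```

In practice the keyword gate would be replaced by a classifier or by instructions enforced in the prompt itself, but the shape (teacher-set boundaries checked ahead of the model) is the point.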

“It’ll really be that customized learning partner. Every single conversation that a student has with any of their assistants will then be fed into that student profile for them to be able to see based on what the AI thinks, what should I be doing next, not only in my educational journey, but in my career journey,” Ghai said. 


How Will AI Influence Higher Ed in 2025? — from insidehighered.com by Kathryn Palmer
No one knows for sure, but Inside Higher Ed asked seven experts for their predictions.

As the technology continues to evolve at a rapid pace, no one knows for sure how AI will influence higher education in 2025. But several experts offered Inside Higher Ed their predictions—and some guidance—for how colleges and universities will have to navigate AI’s potential in the new year.


How A.I. Can Revive a Love of Learning — from nytimes.com by Anant Agarwal
Modern technology offers new possibilities for transforming teaching.

In the short term, A.I. will help teachers create lesson plans, find illustrative examples and generate quizzes tailored to each student. Customized problem sets will serve as tools to combat cheating while A.I. provides instant feedback.

In the longer term, it’s possible to imagine a world where A.I. can ingest rich learner data and create personalized learning paths for students, all within a curriculum established by the teacher. Teachers can continue to be deeply involved in fostering student discussions, guiding group projects and engaging their students, while A.I. handles grading and uses the Socratic method to help students discover answers on their own. Teachers provide encouragement and one-on-one support when needed, using their newfound availability to give students some extra care.

Let’s be clear: A.I. will never replace the human touch that is so vital to education. No algorithm can replicate the empathy, creativity and passion a teacher brings to the classroom. But A.I. can certainly amplify those qualities. It can be our co-pilot, our chief of staff, helping us extend our reach and improve our effectiveness.


Dancing with the Devil We Know: OpenAI and the Future of Education — from nickpotkalitsky.substack.com by Nick Potkalitsky
Analyzing OpenAI’s Student Writing Guide and Latest AI Tools

Today, I want to reflect on two recent OpenAI developments that highlight this evolution: their belated publication of advice for students on integrating AI into writing workflows, and last week’s launch of the full o1 Pro version. When OpenAI released their student writing guide, there were plenty of snarky comments about how this guidance arrives almost a year after they thoroughly disrupted the educational landscape. Fair enough – I took my own sideswipes initially. But let’s look at what they’re actually advising, because the details matter more than the timing.


Tutor CoPilot: A Human-AI Approach for Scaling Real-Time Expertise — from studentsupportaccelerator.org by Rose E. Wang, Ana T. Ribeiro, Carly D. Robinson, Susanna Loeb, and Dora Demszky


Pandemic, Politics, Pre-K & More: 12 Charts That Defined Education in 2024 — from the74million.org
From the spread of AI to the limits of federal COVID aid, these research findings captured the world of education this year.

Tutoring programs exploded in the last five years as states and school districts searched for ways to counter plummeting achievement during COVID. But the cost of providing supplemental instruction to tens of millions of students can be eye-watering, even as the results seem to taper off as programs serve more students.

That’s where artificial intelligence could prove a decisive advantage. A report circulated in October by the National Student Support Accelerator found that an AI-powered tutoring assistant significantly improved the performance of hundreds of tutors by prompting them with new ways to explain concepts to students. With the help of the tool, dubbed Tutor CoPilot, students assigned to the weakest tutors began posting academic results nearly equal to those assigned to the strongest. And the cost to run the program was just $20 per pupil.


On Capacity, Sustainability, And Attention — from marcwatkins.substack.com by Marc Watkins

Faculty must have the time and support necessary to come to terms with this new technology, and that requires us to change how we view professional development in higher education and K-12. We cannot treat generative AI as a one-off problem that can be solved by a workshop, an invited talk, or a course policy discussion. Generative AI in education has to be viewed as a continuum. Faculty need a myriad of support options each semester:

  • Course buyouts
  • Fellowships
  • Learning communities
  • Reading groups
  • AI Institutes and workshops
  • Funding to explore the scholarship of teaching and learning around generative AI

New in 2025 and What Edleaders Should Do About It — from gettingsmart.com by Tom Vander Ark and Mason Pashia

Key Points

  • Education leaders should focus on integrating AI literacy, civic education, and work-based learning to equip students for future challenges and opportunities.
  • Building social capital and personalized learning environments will be crucial for student success in a world increasingly influenced by AI and decentralized power structures.
 

1-800-CHATGPT—12 Days of OpenAI: Day 10

Per The Rundown: OpenAI just launched a surprising new way to access ChatGPT — through an old-school 1-800 number & also rolled out a new WhatsApp integration for global users during Day 10 of the company’s livestream event.


How Agentic AI is Revolutionizing Customer Service — from customerthink.com by Devashish Mamgain

Agentic AI represents a significant evolution in artificial intelligence, offering enhanced autonomy and decision-making capabilities beyond traditional AI systems. Unlike conventional AI, which requires human instructions, agentic AI can independently perform complex tasks, adapt to changing environments, and pursue goals with minimal human intervention.

This makes it a powerful tool across various industries, especially in the customer service function. To understand it better, let’s compare AI Agents with non-AI agents.

Characteristics of Agentic AI

    • Autonomy: Achieves complex objectives without requiring human collaboration.
    • Language Comprehension: Understands nuanced human speech and text effectively.
    • Rationality: Makes informed, contextual decisions using advanced reasoning engines.
    • Adaptation: Adjusts plans and goals in dynamic situations.
    • Workflow Optimization: Streamlines and organizes business workflows with minimal oversight.
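The characteristics above can be sketched as a minimal plan-act-adapt loop. This is a generic, hypothetical illustration (the names and structure are invented, not any vendor's framework): the agent decomposes a goal into steps, executes them, and re-plans when a step fails:

```python
# Minimal, generic agentic loop: plan -> act -> observe -> adapt.
# Purely illustrative; real agent frameworks add LLM calls, tools, and memory.

def plan(goal: str) -> list[str]:
    """Decompose a goal into ordered steps (stubbed for illustration)."""
    return [f"research {goal}", f"draft {goal}", f"review {goal}"]

def act(step: str, env: dict) -> bool:
    """Attempt a step; success depends on the current environment."""
    return step not in env.get("blocked", set())

def run_agent(goal: str, env: dict, max_attempts: int = 10) -> list[str]:
    """Pursue the goal autonomously, adapting when a step fails."""
    steps = plan(goal)                     # autonomy: no human in the loop
    log = []
    attempts = 0
    while steps and attempts < max_attempts:
        attempts += 1
        step = steps[0]
        if act(step, env):
            log.append(f"done: {step}")
            steps.pop(0)
        else:
            # Adaptation: replace the failing step with an alternative plan.
            log.append(f"failed: {step}, re-planning")
            steps[0] = f"alternative to {step}"
    return log
```

The contrast with a conventional assistant is the loop itself: an assistant answers one request and stops, while an agent keeps acting and revising until the goal is met or it runs out of attempts.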

Clio: A system for privacy-preserving insights into real-world AI use — from anthropic.com

How, then, can we research and observe how our systems are used while rigorously maintaining user privacy?

Claude insights and observations, or “Clio,” is our attempt to answer this question. Clio is an automated analysis tool that enables privacy-preserving analysis of real-world language model use. It gives us insights into the day-to-day uses of claude.ai in a way that’s analogous to tools like Google Trends. It’s also already helping us improve our safety measures. In this post—which accompanies a full research paper—we describe Clio and some of its initial results.


Evolving tools redefine AI video — from heatherbcooper.substack.com by Heather Cooper
Google’s Veo 2, Kling 1.6, Pika 2.0 & more

AI video continues to surpass expectations
The AI video generation space has evolved dramatically in recent weeks, with several major players introducing groundbreaking tools.

Here’s a comprehensive look at the current landscape:

  • Veo 2…
  • Pika 2.0…
  • Runway’s Gen-3…
  • Luma AI Dream Machine…
  • Hailuo’s MiniMax…
  • OpenAI’s Sora…
  • Hunyuan Video by Tencent…

There are several other video models and platforms, including …

 

Best of 2024 — from wondertools.substack.com by Jeremy Caplan
12 of my favorites this year

I tested hundreds of new tools this year. Many were duplicative. A few stuck with me because they’re so useful. The dozen noted below are helping me mine insights from notes, summarize meetings, design visuals—even code a little, without being a developer. You can start using any of these in minutes — no big budget or prompt engineering PhD required.

 

The legal tech trends that defined 2024 — from abajournal.com by Nicole Black

The year 2024 was one of change. In the midst of a largely unpopular back-to-office push, technological innovation and development occurred at a rapid clip. Legal professionals approached technology with a newfound curiosity and appreciation gained as a result of pandemic-era remote work experimentation. The increasing demand for generative artificial intelligence tools fueled heavy investments in the legal technology industry.

Simultaneously, law firm technology adoption was supported by a flurry of guidance released by ethics committees across the country. As technology upended traditional ways of working, some state bars reconsidered bar exam requirements and others experimented with loosening licensing regulations.

No matter where you looked, change was occurring at a breakneck pace as technology’s impact on the practice of law became inescapable. Through it all, a few key trends emerged that offer clues on where we’re headed in the coming year.

Meanwhile, some states, including Oregon, Washington and Nevada, explored allowing unlicensed professionals to practice law in limited areas, such as family law and small claims. These efforts seek to improve access to justice, representing a revised perspective on long-standing rules about who can —and cannot—deliver legal services.


What to Expect in 2025: AI Legal Tech and Regulation (65 Expert Predictions) — from natlawreview.com by Oliver Roberts

As 2024 comes to a close, it’s time to look ahead to how AI will shape the law and legal practice in 2025. Over the past year, we’ve witnessed growing adoption of AI across the legal sector, substantial investments in legal AI startups, and a rise in state-level AI regulations. While the future of 2025 remains uncertain, industry leaders are already sharing their insights.

Along with 2025 predictions from The National Law Review’s Editor-in-Chief Oliver Roberts, this article presents 65 expert predictions on AI and the law in 2025 from federal judges, startup founders, CEOs, and leaders of AI practice groups at global law firms.


The Potential of GenAI to Promote Access to Justice — from law.com by Joanne Sprague
GenAI-assisted legal support is not a substitute for lawyers, but may help legal aid professionals serve more clients efficiently and effectively.

Generative AI (GenAI) has been heralded as a transformative force, poised to revolutionize everything from medicine to education to law. While GenAI won’t perform surgery or earn diplomas, it holds the promise of enabling lawyers to get due process for more of their clients or even empowering individuals to represent themselves in court. The harsh reality is that low-income individuals do not receive sufficient legal help for 92% of their civil legal problems, and legal aid organizations must turn away one of every two requests they get, according to the 2022 Justice Gap Report. GenAI-assisted legal support is not a substitute for lawyers, but may help legal aid professionals serve more clients efficiently and effectively.

If implemented equitably, GenAI could democratize legal knowledge and empower individuals to navigate the complexities of the justice system more easily.

In her new book “Law Democratized,” Renee Knake Jefferson says that GenAI “has the potential to become the single most important tool in solving the legal justice crisis … if harnessed to do so ethically.” With GenAI, we can envision a possible future of informed self-representation and legal decision-making regardless of ability to pay.


Experimenting in the sandbox — from nationalmagazine.ca by Julie Sobowale
Ontario Bar Association launches AI platform for lawyers to learn tech

The Ontario Bar Association has launched a new, free interactive learning platform for lawyers looking to learn about generative AI.

The new learning platform, created to demystify generative AI and help lawyers navigate the technology, is part of OBA’s Real Intelligence on AI project. It is being spearheaded by Colin Lachance, the association’s innovator-in-residence.

Users can ask questions to LawQI, an AI assistant specializing in Canadian law, and work through learning modules about prompt engineering, different generative AI tools and best practices. The portal is free for OBA members and Ontario law students.

“Lawyers need to know how AI works,” says Lachance, principal at PGYA Consulting and former president and CEO of the Canadian Legal Information Institute (CanLII).

“I wanted to create an environment where lawyers can experiment. By using the technology, you learn how to use it.”


The Innovation Strategist: Nicole Black — from substack.com by Tom Martin and Nicole Black
Where I interview Nicole Black about how she merged her legal expertise with her passion for technology to become a leading voice in legal innovation

Excerpt from Key Takeaways:

  • Her role as employee #1 at MyCase in 2012 allowed her to combine her legal expertise with her passion for technology, leading to her current position as Principal Legal Insight Strategist at Affinipay
  • She believes generative AI will fundamentally transform the legal profession, potentially more significantly than previous technological innovations like PCs and the internet
  • Her advice for new lawyers includes actively experimenting with AI tools like ChatGPT and preparing for significant changes in legal education and entry-level legal work

Legal Liabilities of AI for Attorneys and Small Firms — from ethicalailawinstitute.org by Trent Kubasiak

Many small firms and solo attorneys could be in for a nasty shock when it comes to the use of AI. A detailed report from NYU’s Journal of Legislation and Public Policy is shedding light on the potential legal liabilities of using generative AI. Co-authored by EqualAI CEO Miriam Vogel, former Homeland Security Secretary Michael Chertoff, and others, the report underscores a widespread misconception—that liability for AI-related outcomes rests solely with the developers of these technologies.

For attorneys and small business owners, this misconception can be dangerous. As Vogel explains, “There are so many laws on the books that people need to know are applicable.” From lending and housing regulations to employment law, the use of AI—even indirectly—can expose firms to significant risks.


Challenges And Opportunities Of Digital Transformation In US Law Firms — from forbes.com by Chad Sands

So, what is driving the transformation?

Some adoption of new “legal tech” is literally being forced by legacy software companies who are shutting down older, server-based technology platforms. But most law firms are now increasingly becoming more proactive in planning and executing their digital transformation strategies on their own.

This is no longer a choice or matter of “Why should we?”

It’s a question of “When will we?”

There are several factors driving this shift, one being client expectations.


Fresh Voices On Legal Tech with Ilona Logvinova — from legaltalknetwork.com by Dennis Kennedy, Tom Mighell, and Ilona Logvinova

The world of AI and legal tech is evolving ever more rapidly, and it is all too common for lawyers to feel intimidated at the thought of keeping up with the constant barrage of change. How should lawyers maintain their tech competence? Dennis and Tom talk with Ilona Logvinova about her work in tech and AI innovations for lawyers. She shares her career journey and offers perspectives on leveraging technology to create new and better opportunities for attorneys to thrive in their work.


AI Insights for Legal: Ten Key Takeaways from Summit AI New York — from techlawcrossroads.com by Stephen Embry

Despite its shortcomings, the Summit AI New York was still a good conference. I learned some things and confirmed that many of the AI-related issues being faced by legal are also being faced by many other businesses. What were my top ten takeaways? Here they are in no particular order:

 

Episode 302: A Practical Roadmap for AI in K-12 Education with Mike Kentz & Nick Potkalitsky, PhD

In this episode of My EdTech Life, I had the pleasure of interviewing Mike Kentz and Nick Potkalitsky, PhD, to discuss their new book, AI in Education: The K-12 Roadmap to Teacher-Led Transformation. We dive into the transformative power of AI in education, exploring its potential for personalization, its impact on traditional teaching practices, and the critical need for teacher-driven experimentation.


Striking a Balance: Navigating the Ethical Dilemmas of AI in Higher Education — from er.educause.edu by Katalin Wargo and Brier Anderson
Navigating the complexities of artificial intelligence (AI) while upholding ethical standards requires a balanced approach that considers the benefits and risks of AI adoption.

As artificial intelligence (AI) continues to transform the world—including higher education—the need for responsible use has never been more critical. While AI holds immense potential to enhance teaching and learning, ethical considerations around social inequity, environmental concerns, and dehumanization continue to emerge. College and university centers for teaching and learning (CTLs), tasked with supporting faculty in best instructional practices, face growing pressure to take a balanced approach to adopting new technologies. This challenge is compounded by an unpredictable and rapidly evolving landscape. New AI tools surface almost daily. With each new tool, the educational possibilities and challenges increase exponentially. Keeping up is virtually impossible for CTLs, which historically have been institutional hubs for innovation. In fact, as of this writing, the “There’s an AI for That” website indicates that there are 23,208 AIs for 15,636 tasks for 4,875 jobs—with all three numbers increasing daily.

To support college and university faculty and, by extension, learners in navigating the complexities of AI integration while upholding ethical standards, CTLs must prioritize a balanced approach that considers the benefits and risks of AI adoption. Teaching and learning professionals need to expand their resources and support pathways beyond those solely targeting how to leverage AI or mitigate academic integrity violations. They need to make a concerted effort to promote critical AI literacy, grapple with issues of social inequity, examine the environmental impact of AI technologies, and promote human-centered design principles.


5 Free AI Tools For Learning & Exploration — from whytryai.com by Daniel Nest
Have fun exploring new topics with these interactive sites.

We’re truly spoiled for choice when it comes to AI learning tools.

In principle, any free LLM can become an endlessly patient tutor or an interactive course-maker.

If that’s not enough, tools like NotebookLM’s “Audio Overviews” and ElevenLabs’ GenFM can turn practically any material into a breezy podcast.

But what if you’re looking to explore new topics in a way that’s more interactive than vanilla chatbots and more open-ended than source-grounded NotebookLM?

Well, then you might want to give one of these free-to-try learning tools a go.

 

How to adopt GenAI within your legal department or law practice — from legaldive.com by Justin Bachman
Capital spending and ROI will be top of mind for legal executives moving forward with AI legal tools in 2025. Here are some best practices.

Dive Brief:

  • Debate about whether to use artificial intelligence in legal practice has ended. Discussions today are focused on costs, returns and how to best incorporate the technology into departments and outside law firms, according to an expert panel of legal tech specialists.
  • Buyers of generative AI solutions will encounter “a lot of confusion and a lot of opacity” on pricing, with multiple commercial models, George Socha, senior vice president of brand awareness at legal tech firm Reveal, said on a recent webinar. Long-term or firm contracts are best avoided for most legal tech customers, said Lee Wielenga, chief information officer at U.S. Legal Support.
  • Legal executives should consider small-group pilot projects for AI tools, focused on areas where routine, mundane tasks would benefit from automation, according to the panel. Software used in a business setting typically comes with permissioning access for employees, and generative AI adoption is likely to follow similar models.

Along the lines of legaltech, also see:

 

Introducing Gemini 2.0: our new AI model for the agentic era — from blog.google by Sundar Pichai, Demis Hassabis, and Koray Kavukcuoglu

Today we’re excited to launch our next era of models built for this new agentic era: introducing Gemini 2.0, our most capable model yet. With new advances in multimodality — like native image and audio output — and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant.

We’re getting 2.0 into the hands of developers and trusted testers today. And we’re working quickly to get it into our products, leading with Gemini and Search. Starting today our Gemini 2.0 Flash experimental model will be available to all Gemini users. We’re also launching a new feature called Deep Research, which uses advanced reasoning and long context capabilities to act as a research assistant, exploring complex topics and compiling reports on your behalf. It’s available in Gemini Advanced today.

Over the last year, we have been investing in developing more agentic models, meaning they can understand more about the world around you, think multiple steps ahead, and take action on your behalf, with your supervision.


Try Deep Research and our new experimental model in Gemini, your AI assistant — from blog.google by Dave Citron
Deep Research rolls out to Gemini Advanced subscribers today, saving you hours of time. Plus, you can now try out a chat optimized version of 2.0 Flash Experimental in Gemini on the web.

Today, we’re sharing the latest updates to Gemini, your AI assistant, including Deep Research — our new agentic feature in Gemini Advanced — and access to try Gemini 2.0 Flash, our latest experimental model.

Deep Research uses AI to explore complex topics on your behalf and provide you with findings in a comprehensive, easy-to-read report, and is a first look at how Gemini is getting even better at tackling complex tasks to save you time.


Google Unveils A.I. Agent That Can Use Websites on Its Own — from nytimes.com by Cade Metz and Nico Grant (NOTE: This is a GIFTED article for/to you.)
The experimental tool can browse spreadsheets, shopping sites and other services, before taking action on behalf of the computer user.

Google on Wednesday unveiled a prototype of this technology, which artificial intelligence researchers call an A.I. agent.

Google’s new prototype, called Mariner, is based on Gemini 2.0, which the company also unveiled on Wednesday. Gemini is the core technology that underpins many of the company’s A.I. products and research experiments. Versions of the system will power the company’s chatbot of the same name and A.I. Overviews, a Google search tool that directly answers user questions.


Gemini 2.0 is the next chapter for Google AI — from axios.com by Ina Fried

Google Gemini 2.0 — a major upgrade to the core workings of Google’s AI that the company launched Wednesday — is designed to help generative AI move from answering users’ questions to taking action on its own…

The big picture: Hassabis said building AI systems that can take action on their own has been DeepMind’s focus since its early days teaching computers to play games such as chess and Go.

  • “We were always working towards agent-based systems,” Hassabis said. “From the beginning, they were able to plan and then carry out actions and achieve objectives.”
  • Hassabis said AI systems that can act as semi-autonomous agents also represent an important intermediate step on the path toward artificial general intelligence (AGI) — AI that can match or surpass human capabilities.
  • “If we think about the path to AGI, then obviously you need a system that can reason, break down problems and carry out actions in the world,” he said.

AI Agents vs. AI Assistants: Know the Key Differences — from aithority.com by Rishika Patel

The same paradigm applies to AI systems. AI assistants function as reactive tools, completing tasks like answering queries or managing workflows upon request. Think of chatbots or scheduling tools. AI agents, however, work autonomously to achieve set objectives, making decisions and executing tasks dynamically, adapting as new information becomes available.

Together, AI assistants and agents can enhance productivity and innovation in business environments. While assistants handle routine tasks, agents can drive strategic initiatives and problem-solving. This powerful combination has the potential to elevate organizations, making processes more efficient and professionals more effective.


Discover how to accelerate AI transformation with NVIDIA and Microsoft — from ignite.microsoft.com

Meet NVIDIA – The Engine of AI. From gaming to data science, self-driving cars to climate change, we’re tackling the world’s greatest challenges and transforming everyday life. The Microsoft and NVIDIA partnership enables Startups, ISVs, and Partners global access to the latest NVIDIA GPUs on-demand and comprehensive developer solutions to build, deploy and scale AI-enabled products and services.


Google + Meta + Apple New AI — from theneurondaily.com by Grant Harvey

What else Google announced:

  • Deep Research: New feature that can explore topics and compile reports.
  • Project Astra: AI agent that can use Google Search, Lens, and Maps, understands multiple languages, and has 10-minute conversation memory.
  • Project Mariner: A browser control agent that can complete web tasks (83.5% success rate on WebVoyager benchmark). Read more about Mariner here.
  • Agents to help you play (or test) video games.

AI Agents: Easier To Build, Harder To Get Right — from forbes.com by Andres Zunino

The swift progress of artificial intelligence (AI) has simplified the creation and deployment of AI agents with the help of new tools and platforms. However, deploying these systems beneath the surface comes with hidden challenges, particularly concerning ethics, fairness and the potential for bias.

The history of AI agents highlights the growing need for expertise to fully realize their benefits while effectively minimizing risks.

 

Nieman Lab Predictions for Journalism, 2025 — from niemanlab.org by Mira Lowe
Prediction: Journalism education leads the change we seek

In this evolving landscape, journalism educators continue to prepare their students for technological advancements and shifting consumer behaviors driving the industry. In 2025, college journalism programs will push forward and adapt to meet workforce demands, student expectations and community needs. Keep an eye on developments in these key areas:

  • Ethics and misinformation:…
  • Global perspectives:…
  • Artificial intelligence: …
  • Local news and community engagement:…
  • Diversity and belonging:…

“Training the next generation of journalists means preparing them to be global citizens.”


Why You Need a News Vacation — from davidepstein.substack.com by David Epstein
We’re not made for 24/7 feeds of catastrophe

I’ve noticed a small trend among journalists I follow — writing about their own versions of news vacations. Amanda Ripley, the author of High Conflict (who I interviewed recently), wrote in 2022 about how she stopped reading the news after two decades as a news junkie. The news, she wrote, had become too constant, and too uniformly negative with too few “solutions stories,” and left her feeling hopeless and like she had a lack of agency in the world.

I don’t think this means that we should abandon keeping up with the news. I think it would be disastrous if we all did that. (And I happen to have an extra-special place in my heart for local news.) But I think regular breaks are healthy, and helpful. As I prepare to turn in the first draft of my new book, I’m grateful that focusing on the project forced me into a few breaks from the news, and to turn to slower sources of information and wisdom. From now on, I plan to take a news vacation every year.

From DSC:
My news consumption varies throughout the days, weeks, and months. Sometimes I catch a lot of it…sometimes not much at all. I get bummed out about a lot of what’s called news these days. I call the news on TV the “Death & Dying Report.” Even the weather seems to have to be deadly if it is to qualify for coverage. Then there’s the “Who killed who report” (dating back years ago if current material is running scarce). Then there’s the “How are we going to survive this new virus or that new disease?” Etc.

It’s an agenda of fear that the networks are focused upon. Hmmm…I’ll stop there for today.

 

Just 10% of law firms have a GenAI policy, new Thomson Reuters report shows — from legaltechnology.com by Caroline Hill

Just 10% of law firms and 21% of corporate legal teams have now implemented policies to guide their organisation’s use of generative AI, according to a report out today (2 December) from Thomson Reuters.

While Thomson Reuters’ 2024 Generative AI in Professional Services report shows that views on AI among legal professionals are rapidly shifting (85% of law firms and corporate legal teams now think AI can be applied to their work), legal organisations still have a way to go in setting the ground rules for its use. Just 8% said that GenAI is covered under their existing technology policy, while 75% of firms said they don’t have a policy and 7% said they don’t know.


Artificial Lawyer’s 2025 Predictions – Part One — from artificiallawyer.com

I expect to see a lot more agentic AI systems in law: services that break complex tasks into component parts or checklists, complete them with a mix of software, lawyers, and allied legal professionals, then reassemble them into complex first drafts of legal work.

Ed Walters

Integrated Legal Ecosystems
The silos that have historically divided in-house legal teams, external counsel, and other stakeholders will evolve, thanks to the rise of integrated legal ecosystems. End-to-end platforms will provide seamless collaboration, real-time updates, and shared knowledge, enabling all parties to work more effectively toward common goals. Legal departments will turn to these unified solutions to streamline legal requests, offer self-service options, and improve service delivery.

Sacha Kirk

2025 will be the year of agentic AI. It will be the year that end users of legal services finally get legal work resolved for them autonomously.

Richard Mabey


AI Enhances the Human Art of In-House Counsel Leadership — from news.bloomberglaw.com by Eric Dodson Greenberg (behind a paywall)

Transactional lawyers stored prized forms in file drawers or consulted the law library’s bound volumes of forms. The arrival of databases revolutionized access to precedent, enabling faster and broader access to legal resources.

There is little nostalgia for the labor-intensive methods of the past and no argument that some arcane legal skill was lost in the transition. What we gained was increased capacity to focus on higher-order skills and more sophisticated, value-driven legal work. AI offers a similar leap, automating repetitive or foundational tasks to free lawyers for more sophisticated work.

 

Where to start with AI agents: An introduction for COOs — from fortune.com by Ganesh Ayyar

Picture your enterprise as a living ecosystem, where surging market demand instantly informs staffing decisions, where a new vendor’s onboarding optimizes your emissions metrics, where rising customer engagement reveals product opportunities. Now imagine if your systems could see these connections too! This is the promise of AI agents — an intelligent network that thinks, learns, and works across your entire enterprise.

Today, organizations operate in artificial silos. Tomorrow, they could be fluid and responsive. The transformation has already begun. The question is: will your company lead it?

The journey to agent-enabled operations starts with clarity on business objectives. Leaders should begin by mapping their business’s critical processes. The most pressing opportunities often lie where cross-functional handoffs create friction or where high-value activities are slowed by system fragmentation. These pain points become the natural starting points for your agent deployment strategy.


Create podcasts in minutes — from elevenlabs.io by Eleven Labs
Now anyone can be a podcast producer


Top AI tools for business — from theneuron.ai


This week in AI: 3D from images, video tools, and more — from heatherbcooper.substack.com by Heather Cooper
From 3D worlds to consistent characters, explore this week’s AI trends

Another busy AI news week, so I organized it into categories:

  • Image to 3D
  • AI Video
  • AI Image Models & Tools
  • AI Assistants / LLMs
  • AI Creative Workflow: Luma AI Boards

Want to speak Italian? Microsoft AI can make it sound like you do. — this is a gifted article from The Washington Post;
A new AI-powered interpreter is expected to simulate speakers’ voices in different languages during Microsoft Teams meetings.

Artificial intelligence has already proved that it can sound like a human, impersonate individuals and even produce recordings of someone speaking different languages. Now, a new feature from Microsoft will allow video meeting attendees to hear speakers “talk” in a different language with help from AI.


What Is Agentic AI?  — from blogs.nvidia.com by Erik Pounds
Agentic AI uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems.

The next frontier of artificial intelligence is agentic AI, which uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems. And it’s set to enhance productivity and operations across industries.

Agentic AI systems ingest vast amounts of data from multiple sources to independently analyze challenges, develop strategies and execute tasks like supply chain optimization, cybersecurity vulnerability analysis and helping doctors with time-consuming tasks.
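The ingest-analyze-plan-execute cycle described above can be sketched in a few lines of Python. This is a toy illustration only, not anything from NVIDIA's post: the tool names, data sources, and decision rules are all hypothetical stubs standing in for what would really be LLM reasoning and external API calls.

```python
# Toy sketch of an agentic perceive -> plan -> act cycle.
# All sources, thresholds, and "tools" here are hypothetical stubs.

def perceive(sources):
    """Ingest data from multiple (stubbed) sources into one merged view."""
    return {k: v for src in sources for k, v in src.items()}

def plan(state):
    """Derive an ordered task list from the current state."""
    tasks = []
    if state.get("inventory", 0) < state.get("forecast_demand", 0):
        tasks.append(("reorder", state["forecast_demand"] - state["inventory"]))
    if state.get("open_vulnerabilities", 0) > 0:
        tasks.append(("patch", state["open_vulnerabilities"]))
    return tasks

def act(task):
    """Execute one task with a stubbed tool and report the result."""
    name, amount = task
    return f"{name}:{amount}"

sources = [{"inventory": 40, "forecast_demand": 100},
           {"open_vulnerabilities": 2}]
state = perceive(sources)
results = [act(t) for t in plan(state)]
print(results)  # -> ['reorder:60', 'patch:2']
```

A production agent would replace `plan` with model-driven reasoning and `act` with real integrations, but the control flow — merge observations, derive multi-step tasks, execute them iteratively — is the same shape.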


 

What Students Are Saying About Teachers Using A.I. to Grade — from nytimes.com by The Learning Network; via Claire Zau
Teenagers and educators weigh in on a recent question from The Ethicist.

Is it unethical for teachers to use artificial intelligence to grade papers if they have forbidden their students from using it for their assignments?

That was the question a teacher asked Kwame Anthony Appiah in a recent edition of The Ethicist. We posed it to students to get their take on the debate, and asked them their thoughts on teachers using A.I. in general.

While our Student Opinion questions are usually reserved for teenagers, we also heard from a few educators about how they are — or aren’t — using A.I. in the classroom. We’ve included some of their answers, as well.


OpenAI wants to pair online courses with chatbots — from techcrunch.com by Kyle Wiggers; via James DeVaney on LinkedIn

If OpenAI has its way, the next online course you take might have a chatbot component.

Speaking at a fireside on Monday hosted by Coeus Collective, Siya Raj Purohit, a member of OpenAI’s go-to-market team for education, said that OpenAI might explore ways to let e-learning instructors create custom “GPTs” that tie into online curriculums.

“What I’m hoping is going to happen is that professors are going to create custom GPTs for the public and let people engage with content in a lifelong manner,” Purohit said. “It’s not part of the current work that we’re doing, but it’s definitely on the roadmap.”


15 Times to use AI, and 5 Not to — from oneusefulthing.org by Ethan Mollick
Notes on the Practical Wisdom of AI Use

There are several types of work where AI can be particularly useful, given the current capabilities and limitations of LLMs. Though this list is based in science, it draws even more from experience. Like any form of wisdom, using AI well requires holding opposing ideas in mind: it can be transformative yet must be approached with skepticism, powerful yet prone to subtle failures, essential for some tasks yet actively harmful for others. I also want to caveat that you shouldn’t take this list too seriously except as inspiration – you know your own situation best, and local knowledge matters more than any general principles. With all that out of the way, below are several types of tasks where AI can be especially useful, given current capabilities—and some scenarios where you should remain wary.


Learning About Google Learn About: What Educators Need To Know — from techlearning.com by Ray Bendici
Google’s experimental Learn About platform is designed to create an AI-guided learning experience

Google Learn About is a new experimental AI-driven platform that provides digestible and in-depth knowledge about various topics, but showcases it all in an educational context. Described by Google as a “conversational learning companion,” it is essentially a Wikipedia-style chatbot/search engine, and then some.

In addition to having a variety of already-created topics and leading questions (in areas such as history, arts, culture, biology, and physics), the tool allows you to enter prompts using either text or an image. It then provides a general overview/answer and suggests additional questions, topics, and more to explore in regard to the initial subject.

The idea for student use is that the AI can help guide a deeper learning process rather than just providing static answers.


What OpenAI’s PD for Teachers Does—and Doesn’t—Do — from edweek.org by Olina Banerji
What’s the first thing that teachers dipping their toes into generative artificial intelligence should do?

They should start with the basics, according to OpenAI, the creator of ChatGPT and one of the world’s most prominent artificial intelligence research companies. Last month, the company launched an hour-long, self-paced online course for K-12 teachers about the definition, use, and harms of generative AI in the classroom. It was launched in collaboration with Common Sense Media, a national nonprofit that rates and reviews a wide range of digital content for its age appropriateness.

…the above article links to:

ChatGPT Foundations for K–12 Educators — from commonsense.org

This course introduces you to the basics of artificial intelligence, generative AI, ChatGPT, and how to use ChatGPT safely and effectively. From decoding the jargon to responsible use, this course will help you level up your understanding of AI and ChatGPT so that you can use tools like this safely and with a clear purpose.

Learning outcomes:

  • Understand what ChatGPT is and how it works.
  • Demonstrate ways to use ChatGPT to support your teaching practices.
  • Implement best practices for applying responsible AI principles in a school setting.

Takeaways From Google’s Learning in the AI Era Event — from edtechinsiders.substack.com by Sarah Morin, Alex Sarlin, and Ben Kornell
Highlights from Our Day at Google + Behind-the-Scenes Interviews Coming Soon!

  1. NotebookLM: The Start of an AI Operating System
  2. Google is Serious About AI and Learning
  3. Google’s LearnLM Now Available in AI Studio
  4. Collaboration is King
  5. If You Give a Teacher a Ferrari

Rapid Responses to AI — from the-job.beehiiv.com by Paul Fain
Top experts call for better data and more short-term training as tech transforms jobs.

AI could displace middle-skill workers and widen the wealth gap, says landmark study, which calls for better data and more investment in continuing education to help workers make career pivots.

Ensuring That AI Helps Workers
Artificial intelligence has emerged as a general purpose technology with sweeping implications for the workforce and education. While it’s impossible to precisely predict the scope and timing of looming changes to the labor market, the U.S. should build its capacity to rapidly detect and respond to AI developments.
That’s the big-ticket framing of a broad new report from the National Academies of Sciences, Engineering, and Medicine. Congress requested the study, tapping an all-star committee of experts to assess the current and future impact of AI on the workforce.

“In contemplating what the future holds, one must approach predictions with humility,” the study says…

“AI could accelerate occupational polarization,” the committee said, “by automating more nonroutine tasks and increasing the demand for elite expertise while displacing middle-skill workers.”

The Kicker: “The education and workforce ecosystem has a responsibility to be intentional with how we value humans in an AI-powered world and design jobs and systems around that,” says Hsieh.


AI Predators: What Schools Should Know and Do — from techlearning.com by Erik Ofgang
AI is increasingly being used by predators to connect with underage students online. Yasmin London, global online safety expert at Qoria and a former member of the New South Wales Police Force in Australia, shares steps educators can take to protect students.

The threat from AI for students goes well beyond cheating, says Yasmin London, global online safety expert at Qoria and a former member of the New South Wales Police Force in Australia.

Increasingly at U.S. schools and beyond, AI is being used by predators to manipulate children. Students are also using AI to generate inappropriate images of other classmates or staff members. For a recent report, Qoria, a company that specializes in child digital safety and wellbeing products, surveyed 600 schools across North America, the UK, Australia, and New Zealand.


Why We Undervalue Ideas and Overvalue Writing — from aiczar.blogspot.com by Alexander “Sasha” Sidorkin

A student submits a paper that fails to impress stylistically yet approaches a worn topic from an angle no one has tried before. The grade lands at B minus, and the student learns to be less original next time. This pattern reveals a deep bias in higher education: ideas lose to writing every time.

This bias carries serious equity implications. Students from disadvantaged backgrounds, including first-generation college students, English language learners, and those from under-resourced schools, often arrive with rich intellectual perspectives but struggle with academic writing conventions. Their ideas – shaped by unique life experiences and cultural viewpoints – get buried under red ink marking grammatical errors and awkward transitions. We systematically undervalue their intellectual contributions simply because they do not arrive in standard academic packaging.


Google Scholar’s New AI Outline Tool Explained By Its Founder — from techlearning.com by Erik Ofgang
Google Scholar PDF reader uses Gemini AI to read research papers. The AI model creates direct links to the paper’s citations and a digital outline that summarizes the different sections of the paper.

Google Scholar has entered the AI revolution. Google Scholar PDF reader now utilizes generative AI powered by Google’s Gemini AI tool to create interactive outlines of research papers and provide direct links to sources within the paper. This is designed to make reading the relevant parts of the research paper more efficient, says Anurag Acharya, who co-founded Google Scholar on November 18, 2004, twenty years ago last month.


The Four Most Powerful AI Use Cases in Instructional Design Right Now — from drphilippahardman.substack.com by Dr. Philippa Hardman
Insights from ~300 instructional designers who have taken my AI & Learning Design bootcamp this year

  1. AI-Powered Analysis: Creating Detailed Learner Personas…
  2. AI-Powered Design: Optimising Instructional Strategies…
  3. AI-Powered Development & Implementation: Quality Assurance…
  4. AI-Powered Evaluation: Predictive Impact Assessment…

How Are New AI Tools Changing ‘Learning Analytics’? — from edsurge.com by Jeffrey R. Young
For a field that has been working to learn from the data trails students leave in online systems, generative AI brings new promises — and new challenges.

In other words, with just a few simple instructions to ChatGPT, the chatbot can classify vast amounts of student work and turn it into numbers that educators can quickly analyze.

Findings from learning analytics research are also being used to help train new generative AI-powered tutoring systems.

Another big application is in assessment, says Pardos, the Berkeley professor. Specifically, new AI tools can be used to improve how educators measure and grade a student’s progress through course materials. The hope is that new AI tools will allow for replacing many multiple-choice exercises in online textbooks with fill-in-the-blank or essay questions.
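The classify-and-count workflow the article describes can be sketched as below. The `llm_classify` function here is a keyword stub standing in for a real chatbot call (e.g., an API request carrying "a few simple instructions"), and the label set is hypothetical; the point is only how free-text student work becomes numbers an educator can analyze.

```python
# Minimal sketch: classify free-text student answers, then tally the labels.
# `llm_classify` is an offline stub standing in for a real LLM call.
from collections import Counter

def llm_classify(student_answer: str) -> str:
    """Stub for an LLM prompted to label an answer. A real version would
    send the answer plus a short instruction prompt to a chatbot API."""
    text = student_answer.lower()
    if "because" in text or "therefore" in text:
        return "on_track"
    if "always" in text or "never" in text:
        return "misconception"
    return "off_topic"

answers = [
    "Plants grow because they convert light into energy.",
    "Heavier objects always fall faster.",
    "I like pizza.",
    "The current increases, therefore resistance must have dropped.",
]

counts = Counter(llm_classify(a) for a in answers)
print(dict(counts))  # -> {'on_track': 2, 'misconception': 1, 'off_topic': 1}
```

Swapping the stub for an actual model call turns vast piles of student work into the same kind of quickly analyzable tallies.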


Increasing AI Fluency Among Enterprise Employees, Senior Management & Executives — from learningguild.com by Bill Brandon

This article attempts, in these early days, to provide some specific guidelines for AI curriculum planning in enterprise organizations.

The two reports identified in the first paragraph help to answer an important question. What can enterprise L&D teams do to improve AI fluency in their organizations?

You might be surprised by how many software products have added AI features. Examples (to name a few) are productivity software (Microsoft 365 and Google Workspace); customer relationship management (Salesforce and HubSpot); human resources (Workday and Talentsoft); marketing and advertising (Adobe Marketing Cloud and Hootsuite); and communication and collaboration (Slack and Zoom). Look for more under those categories on software review sites.

 

US College Closures Are Expected to Soar, Fed Research Says — from bloomberg.com

  • Fed research created predictive model of college stress
  • Worst-case scenario forecasts 80 additional closures

The number of colleges that close each year is poised to significantly increase as schools contend with a slowdown in prospective students.

That’s the finding of a new working paper published by the Federal Reserve Bank of Philadelphia, where researchers created predictive models of schools’ financial distress using metrics like enrollment and staffing patterns, sources of revenue, and liquidity data. They overlaid those models with simulations to estimate the likely increase in future closures.

Excerpt from the working paper:

We document a high degree of missing data among colleges that eventually close and show that this is a key impediment to identifying at risk institutions. We then show that modern machine learning techniques, combined with richer data, are far more effective at predicting college closures than linear probability models, and considerably more effective than existing accountability metrics. Our preferred model, which combines an off-the-shelf machine learning algorithm with the richest set of explanatory variables, can significantly improve predictive accuracy even for institutions with complete data, but is particularly helpful for predicting instances of financial distress for institutions with spotty data.


From DSC:
Questions that come to my mind here include:

  • Shouldn’t the public — especially those relevant parents and students — be made more aware of these types of papers and reports?
    .
  • How would any of us like finishing up 1-3 years of school and then being told that our colleges or universities were closing, effective immediately? (This has happened many times already.) And with the demographic cliff starting to hit higher education, it will happen even more often now.
    .
    Adding insult to injury…when we transfer to different institutions, we’re told that many of our prior credits don’t transfer — thus adding a significant amount to the overall cost of obtaining our degrees.
    .
  • Would we not be absolutely furious to discover such communications from our prior — and new — colleges and universities?
    .
  • Will all of these types of closures move more people to this vision here?

Relevant excerpts from Ray Schroeder’s recent articles out at insidehighered.com:

Winds of Change in Higher Ed to Become a Hurricane in 2025

A number of factors are converging to create a huge storm. Generative AI advances, massive federal policy shifts, broad societal and economic changes, and the demographic cliff combine to create uncertainty today and change tomorrow.

Higher Education in 2025: AGI Agents to Displace People

The anticipated enrollment cliff, reductions in federal and state funding, increased inflation, and dwindling public support for tuition increases will combine to put even greater pressure on university budgets.


On the positive side of things, the completion rates have been getting better:

National college completion rate ticks up to 61.1% — from highereddive.com by Natalie Schwartz
Those who started at two-year public colleges helped drive the overall increase in students completing a credential.

Dive Brief:

  • Completion rates ticked up to 61.1% for students who entered college in fall 2018, a 0.5 percentage-point increase compared to the previous cohort, according to data released Wednesday by the National Student Clearinghouse Research Center.
  • The increase marks the highest six-year completion rate since 2007 when the clearinghouse began tracking the data. The growth was driven by fewer students stopping out of college, as well as completion gains among students who started at public two-year colleges.
  • “Higher completion rates are welcome news for colleges and universities still struggling to regain enrollment levels from before the pandemic,” Doug Shapiro, the research center’s executive director, said in a statement dated Wednesday.

Addendum:

Attention Please: Professors Struggle With Student Disengagement — from edsurge.com

The stakes are huge, because the concern is that maybe the social contract between students and professors is kind of breaking down. Do students believe that all this college lecturing is worth hearing? Or, will this moment force a change in the way college teaching is done?

 

Closing the digital use divide with active and engaging learning — from eschoolnews.com by Laura Ascione
Students offered insight into how to use active learning, with digital tools, to boost their engagement

When it comes to classroom edtech use, digital tools have a drastically different impact when they are used actively instead of passively — a critical difference examined in the 2023-2024 Speak Up Research by Project Tomorrow.

Students also outlined their ideal active learning technologies:

  • Collaboration tools to support projects
  • Student-teacher communication tools
  • Online databases for self-directed research
  • Multi-media tools for creating new content
  • Online and digital games
  • AI tools to support personalized learning
  • Coding and computer programming resources
  • Online animations, simulations, and virtual labs
  • Virtual reality equipment and content
 

How AI is transforming learning for dyslexic students — from eschoolnews.com by Samay Bhojwani, University of Nebraska–Lincoln
As schools continue to adopt AI-driven tools, educators can close the accessibility gap and help dyslexic students thrive

Many traditional methods lack customization and don’t empower students to fully engage with content on their terms. Every dyslexic student experiences challenges differently, so a more personalized approach is essential for fostering comprehension, engagement, and academic growth.

Artificial intelligence is increasingly recognized for its potential to transform educational accessibility. By analyzing individual learning patterns, AI-powered tools can tailor content to meet each student’s specific needs. For dyslexic students, this can mean summarizing complex texts, providing auditory support, or even visually structuring information in ways that aid comprehension.


NotebookLM How-to Guide 2024 — from ai-supremacy.com by Michael Spencer and Alex McFarland
With Audio Version | A popular guide reloaded.

In this guide, I’ll show you:

  1. How to use the new advanced audio customization features
  2. Two specific workflows for synthesizing information (research papers and YouTube videos)
  3. Pro tips for maximizing results with any type of content
  4. Common pitfalls to avoid (learned these the hard way)

The State of Instructional Design 2024: A Field on the Brink of Disruption? — from drphilippahardman.substack.com by Dr. Philippa Hardman
My hot takes from a global survey I ran with Synthesia

As I mentioned on LinkedIn, earlier this week Synthesia published the results of a global survey that we ran together on the state of instructional design in 2024.


Boundless Socratic Learning: Google DeepMind’s Vision for AI That Learns Without Limits — by Giorgio Fazio

Google DeepMind researchers have unveiled a groundbreaking framework called Boundless Socratic Learning (BSL), a paradigm shift in artificial intelligence aimed at enabling systems to self-improve through structured language-based interactions. This approach could mark a pivotal step toward the elusive goal of artificial superintelligence (ASI), where AI systems drive their own development with minimal human input.

The promise of Boundless Socratic Learning lies in its ability to catalyze a shift from human-supervised AI to systems that evolve and improve autonomously. While significant challenges remain, the introduction of this framework represents a step toward the long-term goal of open-ended intelligence, where AI is not just a tool but a partner in discovery.


5 courses to take when starting out a career in Agentic AI — from techloy.com by David Adubiina
This will help you join the early train of experts who are using AI agents to solve real-world problems.

This surge in demand is creating new opportunities for professionals equipped with the right skills. If you’re considering a career in this innovative field, the following five courses will provide a solid foundation for a career in Agentic AI.



 
© 2024 | Daniel Christian