AI educators are coming to this school – and it’s part of a trend — from techradar.com by Eric Hal Schwartz
Two hours of lessons, zero teachers

  • An Arizona charter school will use AI instead of human teachers for two hours a day on academic lessons.
  • The AI will customize lessons in real-time to match each student’s needs.
  • The company has only tested this idea at private schools before but claims it hugely increases student academic success.

One school in Arizona is trying out a new educational model built around AI and a two-hour school day. When Arizona’s Unbound Academy opens, the only teachers will be artificial intelligence algorithms in a perfect utopia or dystopia, depending on your point of view.


AI in Instructional Design: reflections on 2024 & predictions for 2025 — from drphilippahardman.substack.com by Dr. Philippa Hardman
Aka, four new year’s resolutions for the AI-savvy instructional designer.


Debating About AI: A Free Comprehensive Guide to the Issues — from stefanbauschard.substack.com by Stefan Bauschard

In order to encourage and facilitate debate on key controversies related to AI, I put together this free 130+ page guide to the main arguments and ideas related to the controversies.


Universities need to step up their AGI game — from futureofbeinghuman.com by Andrew Maynard
As Sam Altman and others push toward a future where AI changes everything, universities need to decide if they're going to be leaders or bystanders in helping society navigate advanced AI transitions.

And because of this, I think there’s a unique opportunity for universities (research universities in particular) to up their game and play a leadership role in navigating the coming advanced AI transition.

Of course, there are already a number of respected university-based initiatives that are working on parts of the challenge. Stanford HAI (Human-centered Artificial Intelligence) is one that stands out, as does the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, and the Centre for the Governance of AI at the University of Oxford. But these and other initiatives are barely scratching the surface of what is needed to help successfully navigate advanced AI transitions.

If universities are to be leaders rather than bystanders in ensuring human flourishing in an age of AI, there’s an urgent need for bolder and more creative forward-looking initiatives that support research, teaching, thought leadership, and knowledge mobilization, at the intersection of advanced AI and all aspects of what it means to thrive and grow as a species.


 

 

How AI Is Changing Education: The Year’s Top 5 Stories — from edweek.org by Alyson Klein

Ever since a revolutionary new version of ChatGPT became available in late 2022, educators have faced several complex challenges as they learn how to navigate artificial intelligence systems.

Education Week produced a significant amount of coverage in 2024 exploring these and other critical questions involving the understanding and use of AI.

Here are the five most popular stories that Education Week published in 2024 about AI in schools.


What’s next with AI in higher education? — from msn.com by Science X Staff

Dr. Lodge said there are five key areas the higher education sector needs to address to adapt to the use of AI:

1. Teach ‘people’ skills as well as tech skills
2. Help all students use new tech
3. Prepare students for the jobs of the future
4. Learn to make sense of complex information
5. Universities to lead the tech change


5 Ways Teachers Can Use NotebookLM Today — from classtechtips.com by Dr. Monica Burns

 


AI in 2024: Insights From our 5 Million Readers — from linkedin.com by Generative AI

Checking the Pulse: The Impact of AI on Everyday Lives
So, what exactly did our users have to say about how AI transformed their lives this year?

Top 2024 Developments in AI

  1. Video Generation…
  2. AI Employees…
  3. Open Source Advancements…

Getting ready for 2025: your AI team members (Gift lesson 3/3) — from flexos.com by Daan van Rossum

And that’s why today, I’ll tell you exactly which AI tools I’ve recommended for the top 5 use cases to almost 200 business leaders who took the Lead with AI course.

1. Email Management: Simplifying Communication with AI

  • Microsoft Copilot for Outlook. …
  • Gemini AI for Gmail. …
  • Grammarly. …

2. Meeting Management: Maximize Your Time

  • Otter.ai. …
  • Copilot for Microsoft Teams. …
  • Other AI Meeting Assistants. Zoom AI Companion, Granola, and Fathom

3. Research: Streamlining Information Gathering

  • ChatGPT. …
  • Perplexity. …
  • Consensus. …

…plus several more items and tools that were mentioned by Daan.

 

60 Minutes Overtime
Sal Khan wants an AI tutor for every student: here’s how it’s working at an Indiana high school — from cbsnews.com by Anderson Cooper, Aliza Chasan, Denise Schrier Cetta, and Katie Brennan

“I mean, that’s what I’ll always want for my own children and, frankly, for anyone’s children,” Khan said. “And the hope here is that we can use artificial intelligence and other technologies to amplify what a teacher can do so they can spend more time standing next to a student, figuring them out, having a person-to-person connection.”

“After a week you start to realize, like, how you can use it,” Brockman said. “That’s been one of the really important things about working with Sal and his team, to really figure out what’s the right way to sort of bring this to parents and to teachers and to classrooms and to do that in a way…so that the students really learn and aren’t just, you know, asking for the answers and that the parents can have oversight and the teachers can be involved in that process.”


Nectir lets teachers tailor AI chatbots to provide their students with 24/7 educational support — from techcrunch.com by Lauren Forristal

More than 100 colleges and high schools are turning to a new AI tool called Nectir, allowing teachers to create a personalized learning partner that’s trained on their syllabi, textbooks, and assignments to help students with anything from questions related to their coursework to essay writing assistance and even future career guidance.

With Nectir, teachers can create an AI assistant tailored to their specific needs, whether for a single class, a department, or the entire campus. There are various personalization options available, enabling teachers to establish clear boundaries for the AI’s interactions, such as programming the assistant to assist only with certain subjects or responding in a way that aligns with their teaching style.

“It’ll really be that customized learning partner. Every single conversation that a student has with any of their assistants will then be fed into that student profile for them to be able to see based on what the AI thinks, what should I be doing next, not only in my educational journey, but in my career journey,” Ghai said. 


How Will AI Influence Higher Ed in 2025? — from insidehighered.com by Kathryn Palmer
No one knows for sure, but Inside Higher Ed asked seven experts for their predictions.

As the technology continues to evolve at a rapid pace, no one knows for sure how AI will influence higher education in 2025. But several experts offered Inside Higher Ed their predictions—and some guidance—for how colleges and universities will have to navigate AI’s potential in the new year.


How A.I. Can Revive a Love of Learning — from nytimes.com by Anant Agarwal
Modern technology offers new possibilities for transforming teaching.

In the short term, A.I. will help teachers create lesson plans, find illustrative examples and generate quizzes tailored to each student. Customized problem sets will serve as tools to combat cheating while A.I. provides instant feedback.

In the longer term, it’s possible to imagine a world where A.I. can ingest rich learner data and create personalized learning paths for students, all within a curriculum established by the teacher. Teachers can continue to be deeply involved in fostering student discussions, guiding group projects and engaging their students, while A.I. handles grading and uses the Socratic method to help students discover answers on their own. Teachers provide encouragement and one-on-one support when needed, using their newfound availability to give students some extra care.

Let’s be clear: A.I. will never replace the human touch that is so vital to education. No algorithm can replicate the empathy, creativity and passion a teacher brings to the classroom. But A.I. can certainly amplify those qualities. It can be our co-pilot, our chief of staff helping us extend our reach and improve our effectiveness.


Dancing with the Devil We Know: OpenAI and the Future of Education — from nickpotkalitsky.substack.com by Nick Potkalitsky
Analyzing OpenAI’s Student Writing Guide and Latest AI Tools

Today, I want to reflect on two recent OpenAI developments that highlight this evolution: their belated publication of advice for students on integrating AI into writing workflows, and last week's launch of the full o1 Pro version. When OpenAI released their student writing guide, there were plenty of snarky comments about how this guidance arrives almost a year after they thoroughly disrupted the educational landscape. Fair enough – I took my own side swipes initially. But let's look at what they're actually advising, because the details matter more than the timing.


Tutor CoPilot: A Human-AI Approach for Scaling Real-Time Expertise — from studentsupportaccelerator.org by Rose E. Wang, Ana T. Ribeiro, Carly D. Robinson, Susanna Loeb, and Dora Demszky


Pandemic, Politics, Pre-K & More: 12 Charts That Defined Education in 2024 — from the74million.org
From the spread of AI to the limits of federal COVID aid, these research findings captured the world of education this year.

Tutoring programs exploded in the last five years as states and school districts searched for ways to counter plummeting achievement during COVID. But the cost of providing supplemental instruction to tens of millions of students can be eye-watering, even as the results seem to taper off as programs serve more students.

That’s where artificial intelligence could prove a decisive advantage. A report circulated in October by the National Student Support Accelerator found that an AI-powered tutoring assistant significantly improved the performance of hundreds of tutors by prompting them with new ways to explain concepts to students. With the help of the tool, dubbed Tutor CoPilot, students assigned to the weakest tutors began posting academic results nearly equal to those assigned to the strongest. And the cost to run the program was just $20 per pupil.


On Capacity, Sustainability, And Attention — from marcwatkins.substack.com by Marc Watkins

Faculty must have the time and support necessary to come to terms with this new technology and that requires us to change how we view professional development in higher education and K-12. We cannot treat generative AI as a one-off problem that can be solved by a workshop, an invited talk, or a course policy discussion. Generative AI in education has to be viewed as a continuum. Faculty need a myriad of support options each semester:

  • Course buyouts
  • Fellowships
  • Learning communities
  • Reading groups
  • AI Institutes and workshops
  • Funding to explore the scholarship of teaching and learning around generative AI

New in 2025 and What Edleaders Should Do About It — from gettingsmart.com by Tom Vander Ark and Mason Pashia

Key Points

  • Education leaders should focus on integrating AI literacy, civic education, and work-based learning to equip students for future challenges and opportunities.
  • Building social capital and personalized learning environments will be crucial for student success in a world increasingly influenced by AI and decentralized power structures.
 

Episode 302: A Practical Roadmap for AI in K-12 Education with Mike Kentz & Nick Potkalitsky, PhD

In this episode of My EdTech Life, I had the pleasure of interviewing Mike Kentz and Nick Potkalitsky, PhD, to discuss their new book, AI in Education: The K-12 Roadmap to Teacher-Led Transformation. We dive into the transformative power of AI in education, exploring its potential for personalization, its impact on traditional teaching practices, and the critical need for teacher-driven experimentation.


Striking a Balance: Navigating the Ethical Dilemmas of AI in Higher Education — from er.educause.edu by Katalin Wargo and Brier Anderson
Navigating the complexities of artificial intelligence (AI) while upholding ethical standards requires a balanced approach that considers the benefits and risks of AI adoption.

As artificial intelligence (AI) continues to transform the world—including higher education—the need for responsible use has never been more critical. While AI holds immense potential to enhance teaching and learning, ethical considerations around social inequity, environmental concerns, and dehumanization continue to emerge. College and university centers for teaching and learning (CTLs), tasked with supporting faculty in best instructional practices, face growing pressure to take a balanced approach to adopting new technologies. This challenge is compounded by an unpredictable and rapidly evolving landscape. New AI tools surface almost daily. With each new tool, the educational possibilities and challenges increase exponentially. Keeping up is virtually impossible for CTLs, which historically have been institutional hubs for innovation. In fact, as of this writing, the There’s an AI for That website indicates that there are 23,208 AIs for 15,636 tasks for 4,875 jobs—with all three numbers increasing daily.

To support college and university faculty and, by extension, learners in navigating the complexities of AI integration while upholding ethical standards, CTLs must prioritize a balanced approach that considers the benefits and risks of AI adoption. Teaching and learning professionals need to expand their resources and support pathways beyond those solely targeting how to leverage AI or mitigate academic integrity violations. They need to make a concerted effort to promote critical AI literacy, grapple with issues of social inequity, examine the environmental impact of AI technologies, and promote human-centered design principles.1


5 Free AI Tools For Learning & Exploration — from whytryai.com by Daniel Nest
Have fun exploring new topics with these interactive sites.

We’re truly spoiled for choice when it comes to AI learning tools.

In principle, any free LLM can become an endlessly patient tutor or an interactive course-maker.

If that’s not enough, tools like NotebookLM’s “Audio Overviews” and ElevenLabs’ GenFM can turn practically any material into a breezy podcast.

But what if you’re looking to explore new topics in a way that’s more interactive than vanilla chatbots and more open-ended than source-grounded NotebookLM?

Well, then you might want to give one of these free-to-try learning tools a go.

 

Where to start with AI agents: An introduction for COOs — from fortune.com by Ganesh Ayyar

Picture your enterprise as a living ecosystem, where surging market demand instantly informs staffing decisions, where a new vendor’s onboarding optimizes your emissions metrics, where rising customer engagement reveals product opportunities. Now imagine if your systems could see these connections too! This is the promise of AI agents — an intelligent network that thinks, learns, and works across your entire enterprise.

Today, organizations operate in artificial silos. Tomorrow, they could be fluid and responsive. The transformation has already begun. The question is: will your company lead it?

The journey to agent-enabled operations starts with clarity on business objectives. Leaders should begin by mapping their business’s critical processes. The most pressing opportunities often lie where cross-functional handoffs create friction or where high-value activities are slowed by system fragmentation. These pain points become the natural starting points for your agent deployment strategy.


Create podcasts in minutes — from elevenlabs.io by Eleven Labs
Now anyone can be a podcast producer


Top AI tools for business — from theneuron.ai


This week in AI: 3D from images, video tools, and more — from heatherbcooper.substack.com by Heather Cooper
From 3D worlds to consistent characters, explore this week’s AI trends

Another busy AI news week, so I organized it into categories:

  • Image to 3D
  • AI Video
  • AI Image Models & Tools
  • AI Assistants / LLMs
  • AI Creative Workflow: Luma AI Boards

Want to speak Italian? Microsoft AI can make it sound like you do. — this is a gifted article from The Washington Post;
A new AI-powered interpreter is expected to simulate speakers’ voices in different languages during Microsoft Teams meetings.

Artificial intelligence has already proved that it can sound like a human, impersonate individuals and even produce recordings of someone speaking different languages. Now, a new feature from Microsoft will allow video meeting attendees to hear speakers “talk” in a different language with help from AI.


What Is Agentic AI?  — from blogs.nvidia.com by Erik Pounds
Agentic AI uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems.

The next frontier of artificial intelligence is agentic AI, which uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems. And it’s set to enhance productivity and operations across industries.

Agentic AI systems ingest vast amounts of data from multiple sources to independently analyze challenges, develop strategies and execute tasks like supply chain optimization, cybersecurity vulnerability analysis and helping doctors with time-consuming tasks.
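As a toy illustration of the plan-act-observe pattern described above (not NVIDIA's actual implementation), the sketch below fakes both the planner and the tools with plain functions; the task names and tools are invented for illustration, and a real agentic system would put an LLM behind `plan_next()`.

```python
from typing import Optional

def plan_next(goal: str, done: list) -> Optional[str]:
    """Stand-in planner: a real agent would ask an LLM which step to take next,
    given the goal and the observations gathered so far."""
    steps = ["fetch_inventory", "forecast_demand", "propose_reorder"]
    for step in steps:
        if step not in done:
            return step
    return None  # goal reached, stop iterating

# Stand-in tools the agent can execute; real ones would hit enterprise systems.
TOOLS = {
    "fetch_inventory": lambda: {"widgets": 40},
    "forecast_demand": lambda: {"widgets": 100},
    "propose_reorder": lambda: {"widgets": 60},
}

def run_agent(goal: str) -> list:
    """Iterative planning loop: plan a step, act, observe, replan."""
    done = []
    while True:
        step = plan_next(goal, done)
        if step is None:
            break
        observation = TOOLS[step]()  # act on the (simulated) environment
        done.append(step)            # the result informs the next planning pass
    return done

trace = run_agent("keep widgets in stock")
print(trace)  # ['fetch_inventory', 'forecast_demand', 'propose_reorder']
```

The loop terminates when the planner has no step left to propose, which is the "multi-step problem solved autonomously" behavior the excerpt describes in miniature.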


 

What Students Are Saying About Teachers Using A.I. to Grade — from nytimes.com by The Learning Network; via Claire Zau
Teenagers and educators weigh in on a recent question from The Ethicist.

Is it unethical for teachers to use artificial intelligence to grade papers if they have forbidden their students from using it for their assignments?

That was the question a teacher asked Kwame Anthony Appiah in a recent edition of The Ethicist. We posed it to students to get their take on the debate, and asked them their thoughts on teachers using A.I. in general.

While our Student Opinion questions are usually reserved for teenagers, we also heard from a few educators about how they are — or aren’t — using A.I. in the classroom. We’ve included some of their answers, as well.


OpenAI wants to pair online courses with chatbots — from techcrunch.com by Kyle Wiggers; via James DeVaney on LinkedIn

If OpenAI has its way, the next online course you take might have a chatbot component.

Speaking at a fireside on Monday hosted by Coeus Collective, Siya Raj Purohit, a member of OpenAI’s go-to-market team for education, said that OpenAI might explore ways to let e-learning instructors create custom “GPTs” that tie into online curriculums.

“What I’m hoping is going to happen is that professors are going to create custom GPTs for the public and let people engage with content in a lifelong manner,” Purohit said. “It’s not part of the current work that we’re doing, but it’s definitely on the roadmap.”


15 Times to use AI, and 5 Not to — from oneusefulthing.org by Ethan Mollick
Notes on the Practical Wisdom of AI Use

There are several types of work where AI can be particularly useful, given the current capabilities and limitations of LLMs. Though this list is based in science, it draws even more from experience. Like any form of wisdom, using AI well requires holding opposing ideas in mind: it can be transformative yet must be approached with skepticism, powerful yet prone to subtle failures, essential for some tasks yet actively harmful for others. I also want to caveat that you shouldn’t take this list too seriously except as inspiration – you know your own situation best, and local knowledge matters more than any general principles. With all that out of the way, below are several types of tasks where AI can be especially useful, given current capabilities—and some scenarios where you should remain wary.


Learning About Google Learn About: What Educators Need To Know — from techlearning.com by Ray Bendici
Google’s experimental Learn About platform is designed to create an AI-guided learning experience

Google Learn About is a new experimental AI-driven platform that provides digestible and in-depth knowledge about various topics, but showcases it all in an educational context. Described by Google as a “conversational learning companion,” it is essentially a Wikipedia-style chatbot/search engine, and then some.

In addition to having a variety of already-created topics and leading questions (in areas such as history, arts, culture, biology, and physics) the tool allows you to enter prompts using either text or an image. It then provides a general overview/answer, and then suggests additional questions, topics, and more to explore in regard to the initial subject.

The idea for student use is that the AI can help guide a deeper learning process rather than just provide static answers.


What OpenAI’s PD for Teachers Does—and Doesn’t—Do — from edweek.org by Olina Banerji
What’s the first thing that teachers dipping their toes into generative artificial intelligence should do?

They should start with the basics, according to OpenAI, the creator of ChatGPT and one of the world’s most prominent artificial intelligence research companies. Last month, the company launched an hour-long, self-paced online course for K-12 teachers about the definition, use, and harms of generative AI in the classroom. It was launched in collaboration with Common Sense Media, a national nonprofit that rates and reviews a wide range of digital content for its age appropriateness.

…the above article links to:

ChatGPT Foundations for K–12 Educators — from commonsense.org

This course introduces you to the basics of artificial intelligence, generative AI, ChatGPT, and how to use ChatGPT safely and effectively. From decoding the jargon to responsible use, this course will help you level up your understanding of AI and ChatGPT so that you can use tools like this safely and with a clear purpose.

Learning outcomes:

  • Understand what ChatGPT is and how it works.
  • Demonstrate ways to use ChatGPT to support your teaching practices.
  • Implement best practices for applying responsible AI principles in a school setting.

Takeaways From Google’s Learning in the AI Era Event — from edtechinsiders.substack.com by Sarah Morin, Alex Sarlin, and Ben Kornell
Highlights from Our Day at Google + Behind-the-Scenes Interviews Coming Soon!

  1. NotebookLM: The Start of an AI Operating System
  2. Google is Serious About AI and Learning
  3. Google’s LearnLM Now Available in AI Studio
  4. Collaboration is King
  5. If You Give a Teacher a Ferrari

Rapid Responses to AI — from the-job.beehiiv.com by Paul Fain
Top experts call for better data and more short-term training as tech transforms jobs.

AI could displace middle-skill workers and widen the wealth gap, says landmark study, which calls for better data and more investment in continuing education to help workers make career pivots.

Ensuring That AI Helps Workers
Artificial intelligence has emerged as a general purpose technology with sweeping implications for the workforce and education. While it’s impossible to precisely predict the scope and timing of looming changes to the labor market, the U.S. should build its capacity to rapidly detect and respond to AI developments.

That’s the big-ticket framing of a broad new report from the National Academies of Sciences, Engineering, and Medicine. Congress requested the study, tapping an all-star committee of experts to assess the current and future impact of AI on the workforce.

“In contemplating what the future holds, one must approach predictions with humility,” the study says…

“AI could accelerate occupational polarization,” the committee said, “by automating more nonroutine tasks and increasing the demand for elite expertise while displacing middle-skill workers.”

The Kicker: “The education and workforce ecosystem has a responsibility to be intentional with how we value humans in an AI-powered world and design jobs and systems around that,” says Hsieh.


AI Predators: What Schools Should Know and Do — from techlearning.com by Erik Ofgang
AI is increasingly being used by predators to connect with underage students online. Yasmin London, global online safety expert at Qoria and a former member of the New South Wales Police Force in Australia, shares steps educators can take to protect students.

The threat from AI for students goes well beyond cheating, says Yasmin London, global online safety expert at Qoria and a former member of the New South Wales Police Force in Australia.

Increasingly, at U.S. schools and beyond, AI is being used by predators to manipulate children. Students are also using AI to generate inappropriate images of other classmates or staff members. For a recent report, Qoria, a company that specializes in child digital safety and wellbeing products, surveyed 600 schools across North America, the U.K., Australia, and New Zealand.


Why We Undervalue Ideas and Overvalue Writing — from aiczar.blogspot.com by Alexander “Sasha” Sidorkin

A student submits a paper that fails to impress stylistically yet approaches a worn topic from an angle no one has tried before. The grade lands at B minus, and the student learns to be less original next time. This pattern reveals a deep bias in higher education: ideas lose to writing every time.

This bias carries serious equity implications. Students from disadvantaged backgrounds, including first-generation college students, English language learners, and those from under-resourced schools, often arrive with rich intellectual perspectives but struggle with academic writing conventions. Their ideas – shaped by unique life experiences and cultural viewpoints – get buried under red ink marking grammatical errors and awkward transitions. We systematically undervalue their intellectual contributions simply because they do not arrive in standard academic packaging.


Google Scholar’s New AI Outline Tool Explained By Its Founder — from techlearning.com by Erik Ofgang
Google Scholar PDF reader uses Gemini AI to read research papers. The AI model creates direct links to the paper’s citations and a digital outline that summarizes the different sections of the paper.

Google Scholar has entered the AI revolution. Google Scholar PDF reader now utilizes generative AI powered by Google’s Gemini AI tool to create interactive outlines of research papers and provide direct links to sources within the paper. This is designed to make reading the relevant parts of the research paper more efficient, says Anurag Acharya, who co-founded Google Scholar on November 18, 2004, twenty years ago last month.


The Four Most Powerful AI Use Cases in Instructional Design Right Now — from drphilippahardman.substack.com by Dr. Philippa Hardman
Insights from ~300 instructional designers who have taken my AI & Learning Design bootcamp this year

  1. AI-Powered Analysis: Creating Detailed Learner Personas…
  2. AI-Powered Design: Optimising Instructional Strategies…
  3. AI-Powered Development & Implementation: Quality Assurance…
  4. AI-Powered Evaluation: Predictive Impact Assessment…

How Are New AI Tools Changing ‘Learning Analytics’? — from edsurge.com by Jeffrey R. Young
For a field that has been working to learn from the data trails students leave in online systems, generative AI brings new promises — and new challenges.

In other words, with just a few simple instructions to ChatGPT, the chatbot can classify vast amounts of student work and turn it into numbers that educators can quickly analyze.
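Concretely, that classify-and-tally workflow can be sketched in a few lines. Everything here is illustrative: the rubric labels, the prompt wording, and the canned `classify()` stub are assumptions, not any researcher's actual pipeline; a real version would replace `classify()` with a call to a chat API that sends `build_prompt()` and returns the reply text.

```python
from collections import Counter

# Hypothetical rubric labels for student responses.
LABELS = ["on_track", "misconception", "off_topic"]

def build_prompt(answer: str) -> str:
    """Simple instruction prompt asking the model for exactly one label."""
    return (
        "Classify the student answer with exactly one label from "
        f"{LABELS}.\nAnswer: {answer}\nLabel:"
    )

def parse_label(model_output: str) -> str:
    """Pull the first known label out of the model's raw text reply."""
    text = model_output.lower()
    for label in LABELS:
        if label in text:
            return label
    return "off_topic"  # fall back when the reply is unusable

def classify(answer: str) -> str:
    """Stand-in for the LLM call; returns a canned label so the sketch runs
    offline. A real version would send build_prompt(answer) to a chat API."""
    canned = {
        "Photosynthesis uses sunlight.": "on_track",
        "Plants eat soil for energy.": "misconception",
    }
    return canned.get(answer, "off_topic")

def tally(answers: list) -> Counter:
    """Turn a pile of free-text answers into counts educators can chart."""
    return Counter(parse_label(classify(a)) for a in answers)

counts = tally([
    "Photosynthesis uses sunlight.",
    "Plants eat soil for energy.",
    "I like turtles.",
])
print(counts)  # Counter({'on_track': 1, 'misconception': 1, 'off_topic': 1})
```

The payoff is the final `Counter`: free-text student work becomes numbers that can be charted or tracked over time, which is the "learning analytics" use the article describes.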

Findings from learning analytics research are also being used to help train new generative AI-powered tutoring systems.

Another big application is in assessment, says Pardos, the Berkeley professor. Specifically, new AI tools can be used to improve how educators measure and grade a student’s progress through course materials. The hope is that new AI tools will allow for replacing many multiple-choice exercises in online textbooks with fill-in-the-blank or essay questions.


Increasing AI Fluency Among Enterprise Employees, Senior Management & Executives — from learningguild.com by Bill Brandon

This article attempts, in these early days, to provide some specific guidelines for AI curriculum planning in enterprise organizations.

The two reports identified in the first paragraph help to answer an important question. What can enterprise L&D teams do to improve AI fluency in their organizations?

You might be surprised how many software products have added AI features. Examples (to name a few) include productivity software (Microsoft 365 and Google Workspace); customer relationship management (Salesforce and HubSpot); human resources (Workday and Talentsoft); marketing and advertising (Adobe Marketing Cloud and Hootsuite); and communication and collaboration (Slack and Zoom). Look for more under those categories on software review sites.

 

(Excerpt from the 12/4/24 edition)

Robot “Jailbreaks”
In the year or so since large language models hit the big time, researchers have demonstrated numerous ways of tricking them into producing problematic outputs including hateful jokes, malicious code, phishing emails, and the personal information of users. It turns out that misbehavior can take place in the physical world, too: LLM-powered robots can easily be hacked so that they behave in potentially dangerous ways.

Researchers from the University of Pennsylvania were able to persuade a simulated self-driving car to ignore stop signs and even drive off a bridge, get a wheeled robot to find the best place to detonate a bomb, and force a four-legged robot to spy on people and enter restricted areas.

“We view our attack not just as an attack on robots,” says George Pappas, head of a research lab at the University of Pennsylvania who helped unleash the rebellious robots. “Any time you connect LLMs and foundation models to the physical world, you actually can convert harmful text into harmful actions.”

The robot “jailbreaks” highlight a broader risk that is likely to grow as AI models are increasingly used as a way for humans to interact with physical systems, or to enable AI agents to act autonomously on computers, say the researchers involved.


Virtual lab powered by ‘AI scientists’ super-charges biomedical research — from nature.com by Helena Kudiabor
Could human-AI collaborations be the future of interdisciplinary studies?

In an effort to automate scientific discovery using artificial intelligence (AI), researchers have created a virtual laboratory that combines several ‘AI scientists’ — large language models with defined scientific roles — that can collaborate to achieve goals set by human researchers.

The system, described in a preprint posted on bioRxiv last month, was able to design antibody fragments called nanobodies that can bind to the virus that causes COVID-19, proposing nearly 100 of these structures in a fraction of the time it would take an all-human research group.


Can AI agents accelerate AI implementation for CIOs? — from intelligentcio.com by Arun Shankar

By embracing an agent-first approach, every CIO can redefine their business operations. AI agents are now the number one choice for CIOs as they come pre-built and can generate responses that are consistent with a company’s brand using trusted business data, explains Thierry Nicault at Salesforce Middle East.


AI Turns Photos Into 3D Real World — from theaivalley.com by Barsee

Here’s what you need to know:

  • The system generates full 3D environments that expand beyond what’s visible in the original image, allowing users to explore new perspectives.
  • Users can freely navigate and view the generated space with standard keyboard and mouse controls, similar to browsing a website.
  • It includes real-time camera effects like depth-of-field and dolly zoom, as well as interactive lighting and animation sliders to tweak scenes.
  • The system works with both photos and AI-generated images, enabling creators to integrate it with text-to-image tools or even famous works of art.

Why it matters:
This technology opens up exciting possibilities for industries like gaming, film, and virtual experiences. Soon, creating fully immersive worlds could be as simple as generating a static image.

Also related, see:

From World Labs

Today we’re sharing our first step towards spatial intelligence: an AI system that generates 3D worlds from a single image. This lets you step into any image and explore it in 3D.

Most GenAI tools make 2D content like images or videos. Generating in 3D instead improves control and consistency. This will change how we make movies, games, simulators, and other digital manifestations of our physical world.

In this post you’ll explore our generated worlds, rendered live in your browser. You’ll also experience different camera effects, 3D effects, and dive into classic paintings. Finally, you’ll see how creators are already building with our models.


Addendum on 12/5/24:

 

AI Tutors: Hype or Hope for Education? — from educationnext.org by John Bailey and John Warner
In a new book, Sal Khan touts the potential of artificial intelligence to address lagging student achievement. Our authors weigh in.

In Salman Khan’s new book, Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing) (Viking, 2024), the Khan Academy founder predicts that AI will transform education by providing every student with a virtual personalized tutor at an affordable cost. Is Khan right? Is radically improved achievement for all students within reach at last? If so, what sorts of changes should we expect to see, and when? If not, what will hold back the AI revolution that Khan foresees? John Bailey, a visiting fellow at the American Enterprise Institute, endorses Khan’s vision and explains the profound impact that AI technology is already making in education. John Warner, a columnist for the Chicago Tribune and former editor for McSweeney’s Internet Tendency, makes the case that all the hype about AI tutoring is, as Macbeth laments, full of sound and fury, signifying nothing.

 

(An excerpt from Brainyacts #253 12/3/24)

A New Era for Law Firm Learning and Development — from brainyacts.beehiiv.com by Josh Kubicki

By becoming early adopters, law firms can address two critical challenges in professional development:

1. Empowering Educators and Mentors
Generative AI equips legal educators, practice group leaders, and mentors with tools to amplify their impact. It assists with:

  • Content generation: …
  • Research facilitation: …
  • Skill-building frameworks: …


2. Cracking the Personalized Learning Code
Every lawyer’s learning needs are unique. Generative AI delivers hyper-personalized educational experiences that adapt to an individual’s role, practice area, and career stage. This addresses Bloom’s “Two Sigma Problem” (the dramatic performance gains of one-on-one tutoring) by making tailored learning scalable and actionable. Imagine:

  • AI-driven tutors: …
  • Instant feedback loops: …
  • Adaptive learning models: …

Case Study: Building AI Tutors in Legal Education

Moving Beyond CLEs: A New Vision for Professional Development…


 
 

Below is an excerpt from 2024: The State of Generative AI in the Enterprise — from Menlo Ventures

  • Legal: Historically resistant to tech, the legal industry ($350 million in enterprise AI spend) is now embracing generative AI to manage massive amounts of unstructured data and automate complex, pattern-based workflows. The field broadly divides into litigation and transactional law, with numerous subspecialties. Rooted in litigation, Everlaw* focuses on legal holds, e-discovery, and trial preparation, while Harvey and Spellbook are advancing AI in transactional law with solutions for contract review, legal research, and M&A. Specific practice areas are also targets of AI innovation: EvenUp focuses on injury law, Garden on patents and intellectual property, Manifest on immigration and employment law, while Eve* is re-inventing plaintiff casework from client intake to resolution.

Excerpt from Brainyacts #250 (from 11/22/24) — from the Leveraging Generative AI in Client Interviews section

Here’s what the article from Forbes said:

  • CodeSignal, an AI tech company, has launched Conversation Practice, an AI-driven platform to help learners practice critical workplace communication and soft skills.
  • Conversation Practice uses multiple AI models and a natural spoken interface to simulate real-world scenarios and provide feedback.
  • The goal is to address the challenge of developing conversational skills through iterative practice, without the awkwardness of peer role-play.

What I learned about this software changed my perception about how I can prepare in the future for client meetings. Here’s what I’ve taken away from the potential use of this software in a legal practice setting:


Why Technology-Driven Law Firms Are Poised For Long-Term Success — from forbes.com by Daniel Farrar

I see the shift to cloud-based digital systems, especially for small and midsized law firms, as leveling the playing field by providing access to robust tools that can aid legal services. Here are some examples of how legal professionals are leveraging tech every day:

    • Cloud-based case management solutions. These help enhance productivity through collaboration tools and automated workflows while keeping data secure.
    • E-discovery tools. These tools manage vast amounts of data and help speed up litigation processes.
    • Artificial intelligence. AI has helped automate tasks for legal professionals including for case management, research, contract review and predictive analytics.
 

2024: The State of Generative AI in the Enterprise — from menlovc.com (Menlo Ventures)
The enterprise AI landscape is being rewritten in real time. As pilots give way to production, we surveyed 600 U.S. enterprise IT decision-makers to reveal the emerging winners and losers.

This spike in spending reflects a wave of organizational optimism; 72% of decision-makers anticipate broader adoption of generative AI tools in the near future. This confidence isn’t just speculative—generative AI tools are already deeply embedded in the daily work of professionals, from programmers to healthcare providers.

Despite this positive outlook and increasing investment, many decision-makers are still figuring out what will and won’t work for their businesses. More than a third of our survey respondents do not have a clear vision for how generative AI will be implemented across their organizations. This doesn’t mean they’re investing without direction; it simply underscores that we’re still in the early stages of a large-scale transformation. Enterprise leaders are just beginning to grasp the profound impact generative AI will have on their organizations.


Business spending on AI surged 500% this year to $13.8 billion, says Menlo Ventures — from cnbc.com by Hayden Field

Key Points

  • Business spending on generative AI surged 500% this year, hitting $13.8 billion — up from just $2.3 billion in 2023, according to data from Menlo Ventures released Wednesday.
  • OpenAI ceded market share in enterprise AI, declining from 50% to 34%, per the report.
  • Amazon-backed Anthropic doubled its market share from 12% to 24%.

Microsoft quietly assembles the largest AI agent ecosystem—and no one else is close — from venturebeat.com by Matt Marshall

Microsoft has quietly built the largest enterprise AI agent ecosystem, with over 100,000 organizations creating or editing AI agents through its Copilot Studio since launch – a milestone that positions the company ahead in one of enterprise tech’s most closely watched and exciting segments.

The rapid adoption comes as Microsoft significantly expands its agent capabilities. At its Ignite conference [that started on 11/19/24], the company announced it will allow enterprises to use any of the 1,800 large language models (LLMs) in the Azure catalog within these agents – a significant move beyond its exclusive reliance on OpenAI’s models. The company also unveiled autonomous agents that can work independently, detecting events and orchestrating complex workflows with minimal human oversight.


Now Hear This: World’s Most Flexible Sound Machine Debuts — from
Using text and audio as inputs, a new generative AI model from NVIDIA can create any combination of music, voices and sounds.

Along these lines, also see:


AI Agents Versus Human Agency: 4 Ways To Navigate Our AI-Driven World — from forbes.com by Cornelia C. Walther

To understand the implications of AI agents, it’s useful to clarify the distinctions between AI, generative AI, and AI agents and explore the opportunities and risks they present to our autonomy, relationships, and decision-making.

AI Agents: These are specialized applications of AI designed to perform tasks or simulate interactions. AI agents can be categorized into:

    • Tool Agents…
    • Simulation Agents…

While generative AI creates outputs from prompts, AI agents use AI to act with intention, whether to assist (tool agents) or emulate (simulation agents). The latter’s ability to mirror human thought and action offers fascinating possibilities — and raises significant risks.

 

2024-11-22: The Race to the Top: Dario Amodei on AGI, Risks, and the Future of Anthropic — from emergentbehavior.co by Prakash (Ate-a-Pi)

Risks on the Horizon: ASL Levels
The two key risks Dario is concerned about are:

a) cyber, bio, radiological, nuclear (CBRN)
b) model autonomy

These risks are captured in Anthropic’s framework for understanding AI Safety Levels (ASL):

1. ASL-1: Narrow-task AI like Deep Blue (no autonomy, minimal risk).
2. ASL-2: Current systems like ChatGPT/Claude, which lack autonomy and don’t pose significant risks beyond information already accessible via search engines.
3. ASL-3: Agents arriving soon (potentially next year) that can meaningfully assist non-state actors in dangerous activities like cyber or CBRN (chemical, biological, radiological, nuclear) attacks. Security and filtering are critical at this stage to prevent misuse.
4. ASL-4: AI smart enough to evade detection, deceive testers, and assist state actors with dangerous projects; at this level, the model itself would be capable enough to be the tool of choice for doing anything dangerous. Mechanistic interpretability becomes crucial for verifying AI behavior.
5. ASL-5: AGI surpassing human intelligence in all domains, posing unprecedented challenges.

Anthropic’s if/then framework ensures proactive responses: if a model demonstrates danger, the team clamps down hard, enforcing strict controls.



Should You Still Learn to Code in an A.I. World? — from nytimes.com
Coding boot camps once looked like the golden ticket to an economically secure future. But as that promise fades, what should you do? Keep learning, until further notice.

Compared with five years ago, the number of active job postings for software developers has dropped 56 percent, according to data compiled by CompTIA. For inexperienced developers, the plunge is an even worse 67 percent.
“I would say this is the worst environment for entry-level jobs in tech, period, that I’ve seen in 25 years,” said Venky Ganesan, a partner at the venture capital firm Menlo Ventures.

For years, the career advice from everyone who mattered — the Apple chief executive Tim Cook, your mother — was “learn to code.” It felt like an immutable equation: Coding skills + hard work = job.

Now the math doesn’t look so simple.

Also see:

AI builds apps in 2 mins flat — where The Neuron mentions this excerpt about Lovable:

There’s a new coding startup in town, and it just MIGHT have everybody else shaking in their boots (we’ll qualify that in a sec, don’t worry).

It’s called Lovable, the “world’s first AI fullstack engineer.”

Lovable does all of that by itself. Tell it what you want to build in plain English, and it creates everything you need. Want users to be able to log in? One click. Need to store data? One click. Want to accept payments? You get the idea.

Early users are backing up these claims. One person even launched a startup that made Product Hunt’s top 10 using just Lovable.

As for us, we made a Wordle clone in 2 minutes with one prompt. Only edit needed? More words in the dictionary. It’s like, really easy y’all.


When to chat with AI (and when to let it work) — from aiwithallie.beehiiv.com by Allie K. Miller

Re: some ideas on how to use NotebookLM:

  • Turn your company’s annual report into an engaging podcast
  • Create an interactive FAQ for your product manual
  • Generate a timeline of your industry’s history from multiple sources
  • Produce a study guide for your online course content
  • Develop a Q&A system for your company’s knowledge base
  • Synthesize research papers into digestible summaries
  • Create an executive content briefing from multiple competitor blog posts
  • Generate a podcast discussing the key points of a long-form research paper

Introducing conversation practice: AI-powered simulations to build soft skills — from codesignal.com by Albert Sahakyan

From DSC:
I have to admit I’m a bit suspicious here, as the “conversation practice” product seems a bit too scripted at times, but I post it because the idea of using AI to practice soft skills makes a great deal of sense:


 

7 Legal Tech Trends To Watch In 2025 — from lexology.com by Sacha Kirk
Australia, United Kingdom | November 25, 2024

In-house legal teams are evolving from a traditional support function into proactive business enablers. New tools are helping legal departments enhance efficiency, improve compliance, and deliver greater strategic value.

Here’s a look at seven emerging trends that will shape legal tech in 2025 and insights on how in-house teams can capitalise on these innovations.

1. AI Solutions…
2. Regulatory Intelligence Platforms…

7. Self-Service Legal Tools and Knowledge Management
As the demand on in-house legal teams continues to grow, self-service tools are becoming indispensable for managing routine legal tasks. In 2025, these tools are expected to evolve further, enabling employees across the organisation to handle straightforward legal processes independently. Whether it’s accessing pre-approved templates, completing standard agreements, or finding answers to common legal queries, self-service platforms reduce the dependency on legal teams for everyday tasks.

Advanced self-service tools go beyond templates, incorporating intuitive workflows, approval pathways, and built-in guidance to ensure compliance with legal and organisational policies. By empowering business users to manage low-risk matters on their own, these tools free up legal teams to focus on complex and high-value work.


 

 
© 2025 | Daniel Christian