Defense attorney Jason Lamm won’t be handling the appeal, but said a higher court will likely be asked to weigh in on whether the judge improperly relied on the AI-generated video when sentencing his client.
Courts across the country have been grappling with how to best handle the increasing presence of artificial intelligence in the courtroom. Even before Pelkey’s family used AI to give him a voice for the victim impact portion — believed to be a first in U.S. courts — the Arizona Supreme Court created a committee that researches best AI practices.
In Florida, a judge recently donned a virtual reality headset meant to show the point of view of a defendant who said he was acting in self-defense when he waved a loaded gun at wedding guests. The judge rejected his claim.
Experts say using AI in courtrooms raises legal and ethical concerns, especially if it’s used effectively to sway a judge or jury. And they argue it could have a disproportionate impact on marginalized communities facing prosecution.
…
AI can be very persuasive, Harris said, and scholars are studying the intersection of the technology and manipulation tactics.
April 29, 2025: A major new survey from legal intelligence platform Robin AI has revealed a severe lack of trust in the legal industry. Just 1 in 10 people across the US and UK said they fully trust law firms, and while respondents are increasingly open to AI-powered legal services, few are ready to let technology take over without human oversight.
Perspectus Global polled a representative sample of 4,152 people across both markets. An overwhelming majority see Big Law as “expensive”, “elitist” or “intimidating” but only 30% of respondents would allow a robot lawyer — that is, an AI system acting alone — to represent them in a legal matter. On average, respondents said they would need a 57% discount to choose an AI lawyer over a human.
In just three years, the company, which builds software for analyzing and drafting documents using legally tuned large language models, has drawn blue-chip law firms, Silicon Valley investors, and a stampede of rivals hoping to catch its momentum. Harvey has raised over half a billion dollars in capital, sending its valuation soaring to $3 billion.
According to a new report from Enkrypt AI, multimodal models have opened the door to sneakier attacks (like Ocean’s Eleven, but with fewer suits and more prompt injections).
Naturally, Enkrypt decided to run a few experiments… and things escalated quickly.
They tested two of Mistral’s newest models—Pixtral-Large and Pixtral-12B, built to handle words and visuals.
What they found? Yikes:
The models are 40x more likely to generate dangerous chemical/biological/nuclear info.
And 60x more likely to produce child sexual exploitation material compared to top models like OpenAI’s GPT-4o or Anthropic’s Claude 3.7 Sonnet.
Get the 2025 Student Guide to Artificial Intelligence — from studentguidetoai.org. This guide is made available under a Creative Commons license by Elon University and the American Association of Colleges and Universities (AAC&U).
Agentic AI is taking these already huge strides even further. Rather than simply asking a question and receiving an answer, an AI agent can assess your current level of understanding and tailor a reply to help you learn. It can also help you draw up a timetable and personalized lesson plan, making you feel as though you have a one-on-one instructor walking you through the process. If your goal is to learn to speak a new language, for example, an agent might map out a plan starting with basic vocabulary and pronunciation exercises, then progress to simple conversations, grammar rules and, finally, real-world listening and speaking practice.
…
For instance, if you’re an entrepreneur looking to sharpen your leadership skills, an AI agent might suggest a mix of foundational books, insightful TED Talks and case studies on high-performing executives. If you’re aiming to master data analysis, it might point you toward hands-on coding exercises, interactive tutorials and real-world datasets to practice with.
The beauty of AI-driven learning is that it’s adaptive. As you gain proficiency, your AI coach can shift its recommendations, challenge you with new concepts and even simulate real-world scenarios to deepen your understanding.
Ironically, the very technology feared by workers can also be leveraged to help them. Rather than requiring expensive external training programs or lengthy in-person workshops, AI agents can deliver personalized, on-demand learning paths tailored to each employee’s role, skill level, and career aspirations. Given that 68% of employees find today’s workplace training to be overly “one-size-fits-all,” an AI-driven approach will not only cut costs and save time but also be more effective.
This is one reason why I don’t see AI-embedded classrooms and AI-free classrooms as opposite poles. The bone of contention here is not whether we can cultivate AI-free moments in the classroom, but how long those moments are actually sustainable.
Can we sustain those AI-free moments for an hour? A class session? Longer?
…
Here’s what I think will happen. As AI becomes embedded in society at large, the sustainability of imposed AI-free learning spaces will get tested. Hard. I think it’ll become more and more difficult (though maybe not impossible) to impose AI-free learning spaces on students.
However, consensual and hybrid AI-free learning spaces will continue to have a lot of value. I can imagine classes where students opt into an AI-free space. Or they’ll even create and maintain those spaces.
Duolingo’s AI Revolution — from drphilippahardman.substack.com by Dr. Philippa Hardman What 148 AI-Generated Courses Tell Us About the Future of Instructional Design & Human Learning
Last week, Duolingo announced an unprecedented expansion: 148 new language courses created using generative AI, effectively doubling their content library in just one year. This represents a seismic shift in how learning content is created — a process that previously took the company 12 years for their first 100 courses.
As CEO Luis von Ahn stated in the announcement, “This is a great example of how generative AI can directly benefit our learners… allowing us to scale at unprecedented speed and quality.”
In this week’s blog, I’ll dissect exactly how Duolingo has reimagined instructional design through AI, what this means for the learner experience, and most importantly, what it tells us about the future of our profession.
Medical education is experiencing a quiet revolution—one that’s not taking place in lecture theatres or textbooks, but with headsets and holograms. At the heart of this revolution are Mixed Reality (MR) AI Agents, a new generation of devices that combine the immersive depth of mixed reality with the flexibility of artificial intelligence. These technologies are not mere flashy gadgets; they’re revolutionising the way medical students interact with complicated content, rehearse clinical skills, and prepare for real-world situations. By combining digital simulations with the physical world, MR AI Agents are redefining what it means to learn medicine in the 21st century.
4 Reasons To Use Claude AI to Teach — from techlearning.com by Erik Ofgang Features that make Claude AI appealing to educators include a focus on privacy and conversational style.
After experimenting with Claude AI on various teaching exercises, from generating quizzes to tutoring and offering writing suggestions, I found that it’s not perfect, but it compares favorably to other AI tools in general, with an easy-to-use interface and some unique features that make it particularly suited for use in education.
Higher education is in a period of massive transformation and uncertainty. Not only are current events impacting how institutions operate, but technological advancements—particularly in AI and virtual reality—are reshaping how students engage with content, how cognition is understood, and how learning itself is documented and valued.
Our newly released 2025 EDUCAUSE Horizon Report | Teaching and Learning Edition captures the spirit of this transformation and how you can respond with confidence through the lens of emerging trends, key technologies and practices, and scenario-based foresight.
So this edition is simple: a quick, practical guide to the major generative AI models available in 2025 so far. What they’re good at, what to use them for, and where they might fit into your legal work—from document summarization to client communication to research support.
From DSC: This comprehensive, highly informational posting lists what the model is, its strengths, the best legal use cases for it, and responsible use tips as well.
Of course AI will continue to make waves, but what other important legal technologies do you need to be aware of in 2025? Dennis and Tom give an overview of legal tech tools—both new and old—you should be using for successful, modernized legal workflows in your practice. They recommend solutions for task management, collaboration, calendars, projects, legal research, and more.
Later, the guys answer a listener’s question about online prompt libraries. Are there reputable, useful prompts available freely on the internet? They discuss their suggestions for prompt resources and share why these libraries tend to quickly become outdated.
If you follow legal tech at all, you would be justified in suspecting that Tom Martin has figured out how to use artificial intelligence to clone himself.
While running LawDroid, his legal tech company, the Vancouver-based Martin also still manages a law practice in California, oversees an annual legal tech awards program, teaches a law school course on generative AI, runs an annual AI conference, hosts a podcast, and recently launched a legal tech consultancy.
In January 2023, less than two months after ChatGPT first launched, Martin’s company was one of the first to launch a gen AI assistant specifically for lawyers, called LawDroid Copilot. He has since also launched LawDroid Builder, a no-code platform for creating custom AI agents.
In a profession that’s actively contemplating its future in the face of AI, legal organization leaders who demonstrate a genuine desire to invest in the next generation of legal professionals will undoubtedly set themselves apart.
Artificial intelligence (AI) is here. And it’s already reshaping the way law firms operate. Whether automating repetitive tasks, improving risk management, or boosting efficiency, AI presents a genuine opportunity for forward-thinking legal practices. But with new opportunities come new responsibilities. And as firms explore AI tools, it’s essential they consider how to govern them safely and ethically. That’s where an AI policy becomes indispensable.
So, what can AI actually do for your firm right now? Let’s take a closer look.
Global leader brings its trusted brand and powerful network to enable payments with new technologies
Launches new innovations and partnerships to drive flexibility, security and acceptance
SAN FRANCISCO–(BUSINESS WIRE)–The future of commerce is on display at the Visa Global Product Drop with powerful AI-enabled advancements allowing consumers to find and buy with AI plus the introduction of new strategic partnerships and product innovations.
Collaborates with Anthropic, IBM, Microsoft, Mistral AI, OpenAI, Perplexity, Samsung, Stripe and more
Will make shopping experiences more personal, more secure and more convenient as they become powered by AI
Introduced [on April 30th] at the Visa Global Product Drop, Visa Intelligent Commerce enables AI to find and buy. It is a groundbreaking new initiative that opens Visa’s payment network to the developers and engineers building the foundational AI agents transforming commerce.
In today’s newsletter, I’m unpacking why your next major buyers won’t be people at all. They’ll be AI agents, and your brand might already be invisible to them. We’ll dig into why traditional marketing strategies are breaking down in the age of autonomous AI shoppers, what “AI optimization” (AIO) really means, and the practical steps you can take right now to make sure your business stays visible and competitive as the new digital gatekeepers take over more digital tasks.
AI platforms and AI agents—the digital assistants that browse and actually do things powered by models like GPT-4o, Claude 3.7 Sonnet, and Gemini 2.5 Pro—are increasingly becoming the gatekeepers between your business and potential customers.
…
“AI is the new front door to your business for millions of consumers.”
The 40-Point (ish) AI Agent Marketing Playbook
Here’s the longer list. I went ahead and broke these into four categories so you can more easily assign owners: Content, Structure & Design, Technical & Dev, and AI Strategy & Testing. I look forward to seeing how this space, and by extension my advice, changes in the coming months.
During a fireside chat with Meta CEO Mark Zuckerberg at Meta’s LlamaCon conference on Tuesday, Microsoft CEO Satya Nadella said that 20% to 30% of code inside the company’s repositories was “written by software” — meaning AI.
In just six months, the consumer AI landscape has been redrawn. Some products surged, others stalled, and a few unexpected players rewrote the leaderboard overnight. Deepseek rocketed from obscurity to a leading ChatGPT challenger. AI video models advanced from experimental to fairly dependable (at least for short clips!). And so-called “vibe coding” is changing who can create with AI, not just who can use it. The competition is tighter, the stakes are higher, and the winners aren’t just launching, they’re sticking.
We turned to the data to answer: Which AI apps are people actively using? What’s actually making money, beyond being popular? And which tools are moving beyond curiosity-driven dabbling to become daily staples?
This is the fourth installment of the Top 100 Gen AI Consumer Apps, our bi-annual ranking of the top 50 AI-first web products (by unique monthly visits, per Similarweb) and top 50 AI-first mobile apps (by monthly active users, per Sensor Tower). Since our last report in August 2024, 17 new companies have entered the rankings of top AI-first web products.
The AI search landscape is transforming at breakneck speed. New “Deep Research” tools from ChatGPT, Gemini and Perplexity autonomously search and gather information from dozens — even hundreds — of sites, then analyze and synthesize it to produce comprehensive reports. While a human might take days or weeks to produce these 30-page citation-backed reports, AI Deep Research reports are ready in minutes.
What’s in this post
Examples of each report type I generated for my research, so you can form your own impressions.
Tips on why & how to use Deep Research and how to craft effective queries.
Comparison of key features and strengths/limitations of the top platforms
As AI agents transition from experimental systems to production-scale applications, their growing autonomy introduces novel security challenges. In a comprehensive new report, “AI Agents Are Here. So Are the Threats,” Palo Alto Networks’ Unit 42 reveals how today’s agentic architectures—despite their innovation—are vulnerable to a wide range of attacks, most of which stem not from the frameworks themselves, but from the way agents are designed, deployed, and connected to external tools.
To evaluate the breadth of these risks, Unit 42 researchers constructed two functionally identical AI agents—one built using CrewAI and the other with AutoGen. Despite architectural differences, both systems exhibited the same vulnerabilities, confirming that the underlying issues are not framework-specific. Instead, the threats arise from misconfigurations, insecure prompt design, and insufficiently hardened tool integrations—issues that transcend implementation choices.
LLMs Can Learn Complex Math from Just One Example: Researchers from University of Washington, Microsoft, and USC Unlock the Power of 1-Shot Reinforcement Learning with Verifiable Reward — from marktechpost.com by Sana Hassan
DC: THIS could unfortunately be the ROI companies will get from large investments in #AI — reduced headcount/employees/contract workers. https://t.co/zEWlqCSWzI
Duolingo will “gradually stop using contractors to do work that AI can handle,” according to an all-hands email sent by cofounder and CEO Luis von Ahn announcing that the company will be “AI-first.” The email was posted on Duolingo’s LinkedIn account.
According to von Ahn, being “AI-first” means the company will “need to rethink much of how we work” and that “making minor tweaks to systems designed for humans won’t get us there.” As part of the shift, the company will roll out “a few constructive constraints,” including the changes to how it works with contractors, looking for AI use in hiring and in performance reviews, and that “headcount will only be given if a team cannot automate more of their work.”
Something strange, and potentially alarming, is happening to the job market for young, educated workers.
According to the New York Federal Reserve, labor conditions for recent college graduates have “deteriorated noticeably” in the past few months, and the unemployment rate now stands at an unusually high 5.8 percent. Even newly minted M.B.A.s from elite programs are struggling to find work. Meanwhile, law-school applications are surging—an ominous echo of when young people used graduate school to bunker down during the great financial crisis.
What’s going on? I see three plausible explanations, and each might be a little bit true.
The new workplace trend is not employee friendly. Artificial intelligence and automation technologies are advancing at blazing speed. A growing number of companies are using AI to streamline operations, cut costs, and boost productivity. Consequently, human workers are facing layoffs, replaced by AI. Like it or not, companies need to make tough decisions, including layoffs, to remain competitive.
Corporations including Klarna, UPS, Duolingo, Intuit and Cisco are replacing laid-off workers with AI and automation. While these technologies enhance productivity, they raise serious concerns about future job security. Many workers now worry about whether their own jobs will be affected.
Key takeaway: Career navigation has remained largely unchanged for decades, relying on personal networks and static job boards. The advent of AI is changing this, offering personalised career pathways, better job matching, democratised job application support, democratised access to career advice/coaching, and tailored skill development to help you get to where you need to be. Hundreds of millions of people start new jobs every year; this transformation opens up a multi-billion-dollar opportunity for innovation in the global career navigation market.
…
A.4 How will AI disrupt this segment? Personalised recommendations: AI can consume a vast amount of information (skills, education, career history, even YouTube history and X/Twitter feeds), standardise this data at scale, and then use data models to match candidate characteristics to relevant careers and jobs. In theory, solutions could then go layers deeper, helping you position yourself for those future roles. Currently based in Amsterdam, working in Strategy at Uber, and want to move into a Product role in the future? Here are X, Y, Z specific things YOU can do in your role today to align yourself perfectly, e.g. find opportunities to manage cross-functional projects in your current remit, or reach out to Joe Bloggs, also at Uber in Amsterdam, who did Strategy and moved to Product.
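The matching idea described above can be sketched in a few lines. This is a deliberately minimal illustration of scoring a candidate's skills against target roles; real systems would use learned embeddings over far richer data, and every role and skill name here is hypothetical, not taken from any actual product.

```python
# Minimal sketch of AI-style career matching: score how well a candidate's
# skill set overlaps with (hypothetical) target roles, then rank the roles.

def jaccard(a: set, b: set) -> float:
    """Overlap between two skill sets: 0 = disjoint, 1 = identical."""
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_roles(candidate_skills: set, roles: dict) -> list:
    """Return (role, score) pairs, best match first."""
    scores = {role: jaccard(candidate_skills, req) for role, req in roles.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative data only.
roles = {
    "Product Manager": {"stakeholder management", "roadmapping", "analytics"},
    "Data Analyst": {"sql", "analytics", "visualization"},
}
candidate = {"analytics", "stakeholder management", "roadmapping"}

ranked = rank_roles(candidate, roles)
print(ranked)  # Product Manager scores highest for this candidate
```

A production system would also weight skills by recency and proficiency, and suggest the "layers deeper" next steps the article describes; this sketch only shows the core matching step.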
No matter the school, no matter the location, when I deliver an AI workshop to a group of teachers, there are always at least a few colleagues thinking (and sometimes voicing), “Do I really need to use AI?”
Nearly three years after ChatGPT 3.5 landed in our lives and disrupted workflows in ways we’re still unpacking, most schools are swiftly catching up. Training sessions, like the ones I lead, are springing up everywhere, with principals and administrators trying to answer the same questions: Which tools should we use? How do we use them responsibly? How do we design learning in this new landscape?
But here’s what surprises me most: despite all the advances in AI technology, the questions and concerns from teachers remain strikingly consistent.
…
In this article, I want to pull back the curtain on those conversations. These concerns aren’t signs of reluctance – they reflect sincere feelings. And they deserve thoughtful, honest answers.
This week, in advance of major announcements from us and other vendors, I give you a good overview of the AI Agent market, and discuss the new role of AI governance platforms, AI agent development tools, AI agent vendors, and how AI agents will actually manifest and redefine what we call an “application.”
I discuss ServiceNow, Microsoft, SAP, Workday, Paradox, Maki People, and other vendors. My goal today is to “demystify” this space and explain the market, the trends, and why and how your IT department is going to be building a lot of the agents you need. And prepare for our announcements next week!
DeepSeek has quietly launched Prover V2, an open-source model built to solve math problems using Lean 4 assistant, which ensures every step of a proof is rigorously verified.
What’s impressive about it?
Massive scale: Based on DeepSeek-V3 with 671B parameters using a mixture-of-experts (MoE) architecture, which activates only parts of the model at a time to reduce compute costs.
Theorem solving: Uses long context windows (32K+ tokens) to generate detailed, step-by-step formal proofs for a wide range of math problems — from basic algebra to advanced calculus theorems.
Research grade: Assists mathematicians in testing new theorems automatically and helps students understand formal logic by generating both Lean 4 code and readable explanations.
New benchmark: Introduces ProverBench, a new 325-question benchmark set featuring problems from recent AIME exams and curated academic sources to evaluate mathematical reasoning.
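To give a sense of what "rigorously verified" means here: a Lean 4 proof is checked step by step by Lean's kernel, so nothing can be hand-waved. A trivial example of the kind of formal statement-plus-proof such a model emits (this one is written by hand, not generated by Prover V2):

```lean
-- A machine-checked proof that addition on natural numbers is commutative.
-- Every step must be accepted by Lean's kernel; an incomplete or wrong
-- proof simply fails to compile.
theorem my_add_comm (a b : Nat) : a + b = b + a := by
  induction a with
  | zero => simp
  | succ n ih => simp [Nat.succ_add, Nat.add_succ, ih]
```

Prover V2's contribution is generating proofs like this (at far greater difficulty) together with readable natural-language explanations.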
The need for deep student engagement became clear at Dartmouth Geisel School of Medicine when a potential academic-integrity issue revealed gaps in its initial approach to artificial intelligence use in the classroom, leading to significant revisions to ensure equitable learning and assessment.
From George Siemens’ “SAIL: Transmutation, Assessment, Robots” e-newsletter on 5/2/25
All indications are that AI, even if it stops advancing, has the capacity to dramatically change knowledge work. Knowing things matters less than being able to navigate and make sense of complex environments. Put another way, sensemaking, meaningmaking, and wayfinding (with their yet to be defined subelements) will be the foundation for being knowledgeable going forward.
That will require being able to personalize learning to each individual learner so that who they are (not what our content is) forms the pedagogical entry point to learning. (DSC: And I would add WHAT THEY WANT to ACHIEVE.) LLMs are particularly good at transmutation. Want to explain AI to a farmer? A sentence or two in a system prompt achieves that. Know that a learner has ADHD? A few small prompt changes and it’s reflected in the way the LLM engages with learning. Talk like a pirate. Speak in the language of Shakespeare. Language changes. All a matter of a small meta comment sent to the LLM. I’m convinced that this capability to change, to transmute, information will become a central part of how LLMs and AI are adopted in education.
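Siemens' point that transmutation is "a sentence or two in a system prompt" can be made concrete. The sketch below assembles a chat payload whose system message adapts to the learner; the message structure mirrors the common chat-API shape, but no real API is called, and the persona strings are illustrative assumptions, not anyone's actual prompts.

```python
# Sketch of "transmutation" via system prompts: the same learner question,
# reframed for different audiences by a single system-level sentence.
# No model is invoked here; this only builds the payload.

PERSONAS = {
    "farmer": "Explain everything with farming analogies, in plain language.",
    "adhd": "Use short sentences, small steps, and frequent summaries.",
    "pirate": "Answer in the voice of a pirate.",
}

def build_messages(audience: str, question: str) -> list:
    """Assemble a chat payload whose system prompt adapts to the learner."""
    system = PERSONAS.get(audience, "Explain clearly for a general audience.")
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

msgs = build_messages("farmer", "What is a neural network?")
print(msgs[0]["content"])
```

The pedagogical content of the user question never changes; only the one-line system message does, which is exactly the low-cost adaptation Siemens describes.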
… Speaking of Duolingo: it took them 12 years to develop 100 courses. In the last year, they developed an additional 148. AI is an accelerant with an impact in education that is hard to overstate. “Instead of taking years to build a single course with humans, the company now builds a base course and uses AI to quickly customize it for dozens of different languages.”
FutureHouse is launching our platform, bringing the first publicly available superintelligent scientific agents to scientists everywhere via a web interface and API. Try it out for free at https://platform.futurehouse.org.
College is often advertised as the best four years of one’s life, but many Americans now have regrets.
More than a third of all graduates now say their degree was a “waste of money,” according to a new survey by Indeed. This frustration is especially pronounced among Gen Z, with 51% expressing remorse—compared to 41% of millennials and just 20% of baby boomers.
Overall, a growing share of college-educated workers are questioning the return on investment (ROI) of their degree, Kyle M.K., a career trend expert at Indeed, told Fortune. It’s something that’s not all too surprising considering that the average cost of a bachelor’s degree has doubled in the last two decades to over $38,000, and total student loan debt has ballooned to nearly $2 trillion.
“Another 38% feel student loans have limited their career growth more than their diploma has accelerated it,” M.K. said.
“AI won’t invalidate a solid education, but it will reward those who keep upgrading their toolkit.”
Report Highlights. The average cost of college* in the United States is $38,270 per student per year, including books, supplies, and daily living expenses.
The average cost of college has more than doubled in the 21st century; the compound annual growth rate (CAGR) of tuition is 4.04%.
The average in-state student attending a public 4-year institution and living on-campus spends $27,146 for one academic year.
The average cost of in-state tuition alone is $9,750; out-of-state tuition averages $28,386.
The average private, nonprofit university student spends $58,628 per academic year living on campus, $38,421 of it on tuition and fees.
Considering student loan interest and loss of income, investing in a bachelor’s degree can ultimately cost in excess of $500,000.
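The 4.04% CAGR quoted above can be sanity-checked against the "more than doubled in the 21st century" claim. A quick calculation (the 25-year window for 2000–2025 is my assumption for illustration):

```python
import math

# Sanity-check the tuition figures above: at 4.04% compound annual growth,
# how long do costs take to double, and what is the multiple over 25 years?

rate = 0.0404
doubling_years = math.log(2) / math.log(1 + rate)  # classic doubling-time formula
multiple_25yr = (1 + rate) ** 25

print(f"doubling time: {doubling_years:.1f} years")   # ~17.5 years
print(f"growth over 25 years: {multiple_25yr:.2f}x")  # ~2.69x
```

At that rate, costs double roughly every 17.5 years and grow about 2.7x over 25, consistent with the report's "more than doubled" figure.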
From DSC: Reminds me of a graphic that Yohan Na and I created back in 2009:
Sam Altman’s Eye-Scanning Orb Is Now Coming to the US — from wired.com by Lauren Goode At a buzzy event in San Francisco, World announced a series of Apple-like stores, a partnership with dating giant Match Group, and a new mini gadget to scan your eyeballs.
The device-and-app combo scans people’s irises, creates a unique user ID, stores that information on the blockchain, and uses it as a form of identity verification. If enough people adopt the app globally, the thinking goes, it could ostensibly thwart scammers.
…
The bizarre identity verification process requires that users get their eyeballs scanned, so Tools for Humanity is expanding its physical footprint to make that a possibility.
…
But World is also a for-profit cryptocurrency company that wants to build a borderless, “globally inclusive” financial network. And its approach has been criticized by privacy advocates and regulators. In its early days, World was explicitly marketing its services to countries with a high percentage of unbanked or underbanked citizens, and offering free crypto as an incentive for people to sign up and have their irises scanned.
From DSC: If people and governments could be trusted with the level of power a global ID network/service could bring, this could be a great technology. But I could easily see it being abused. Heck, even our own President doesn’t listen to the Judicial Branch of our government! He’s in contempt of court, essentially. But he doesn’t seem to care.
We are entering a new reality—one in which AI can reason and solve problems in remarkable ways. This intelligence on tap will rewrite the rules of business and transform knowledge work as we know it. Organizations today must navigate the challenge of preparing for an AI-enhanced future, where AI agents will gain increasing levels of capability over time that humans will need to harness as they redesign their business. Human ambition, creativity, and ingenuity will continue to create new economic value and opportunity as we redefine work and workflows.
As a result, a new organizational blueprint is emerging, one that blends machine intelligence with human judgment, building systems that are AI-operated but human-led. Like the Industrial Revolution and the internet era, this transformation will take decades to reach its full promise and involve broad technological, societal, and economic change.
To help leaders understand how knowledge work will evolve, Microsoft analyzed survey data from 31,000 workers across 31 countries, LinkedIn labor market trends, and trillions of Microsoft 365 productivity signals. We also spoke with AI-native startups, academics, economists, scientists, and thought leaders to explore what work could become. The data and insights point to the emergence of an entirely new organization, a Frontier Firm that looks markedly different from those we know today. Structured around on-demand intelligence and powered by “hybrid” teams of humans + agents, these companies scale rapidly, operate with agility, and generate value faster.
Frontier Firms are already taking shape, and within the next 2–5 years we expect that every organization will be on its journey to becoming one. 82% of leaders say this is a pivotal year to rethink key aspects of strategy and operations, and 81% say they expect agents to be moderately or extensively integrated into their company’s AI strategy in the next 12–18 months. Adoption is accelerating: 24% of leaders say their companies have already deployed AI organization-wide, while just 12% remain in pilot mode.
The time to act is now. The question for every leader and employee is: how will you adapt?
Anthropic expects AI-powered virtual employees to begin roaming corporate networks in the next year, the company’s top security leader told Axios in an interview this week.
Why it matters: Managing those AI identities will require companies to reassess their cybersecurity strategies or risk exposing their networks to major security breaches.
The big picture: Virtual employees could be the next AI innovation hotbed, Jason Clinton, the company’s chief information security officer, told Axios.
In the latest research paper from Anthropic’s Societal Impacts team, we describe a practical way we’ve developed to observe Claude’s values—and provide the first large-scale results on how Claude expresses those values during real-world conversations. We also provide an open dataset for researchers to run further analysis of the values and how often they arise in conversations.
Per the Rundown AI
Why it matters: AI is increasingly shaping real-world decisions and relationships, making understanding their actual values more crucial than ever. This study also moves the alignment discussion toward more concrete observations, revealing that AI’s morals and values may be more contextual and situational than a static point of view.
In just under two years, Adobe Firefly has revolutionized the creative industry and generated more than 22 billion assets worldwide. Today at Adobe MAX London, we’re unveiling the latest release of Firefly, which unifies AI-powered tools for image, video, audio, and vector generation into a single, cohesive platform and introduces many new capabilities.
The new Firefly features enhanced models, improved ideation capabilities, expanded creative options, and unprecedented control. This update builds on earlier momentum when we introduced the Firefly web app and expanded into video and audio with Generate Video, Translate Video, and Translate Audio features.
Why it matters: OpenAI’s recent image generator and other rivals have shaken up creative workflows, but Adobe’s IP-safe focus and the addition of competing models into Firefly allow professionals to remain in their established suite of tools — keeping users in the ecosystem while still having flexibility for other model strengths.
AI agents arrive in US classrooms — from zdnet.com by Radhika Rajkumar
Kira AI’s personalized learning platform is currently being implemented in Tennessee schools. How will it change education?
AI for education is a new but rapidly expanding field. Can it support student outcomes and help teachers avoid burnout?
On Wednesday, AI education company Kira launched a “fully AI-native learning platform” for K-12 education, complete with agents to assist teachers with repetitive tasks. The platform hosts assignments, analyzes progress data, offers administrative assistance, helps build lesson plans and quizzes, and more.
“Unlike traditional tools that merely layer AI onto existing platforms, Kira integrates artificial intelligence directly into every educational workflow — from lesson planning and instruction to grading, intervention, and reporting,” the release explains. “This enables schools to improve student outcomes, streamline operations, and provide personalized support at scale.”
“Teachers today are overloaded with repetitive tasks. AI agents can change that, and free up their time to give more personalized help to students,” Andrew Ng said in a statement.
Kira was co-founded by Andrea Pasinetti and Jagriti Agrawal, both longtime collaborators of Ng. The platform embeds AI directly into lesson planning, instruction, grading and reporting. Teachers can instantly generate standards-aligned lesson plans, monitor student progress in real time and receive automated intervention strategies when a student falls behind.
Students, in turn, receive on-demand tutoring tailored to their learning styles. AI agents adapt to each student’s pace and mastery level, while grading is automated with instant feedback — giving educators time to focus on teaching.
‘Using GenAI is easier than asking my supervisor for support’ — from timeshighereducation.com
Doctoral researchers are turning to generative AI to assist in their research. How are they using it, and how can supervisors and candidates have frank discussions about using it responsibly?
Generative AI is increasingly the proverbial elephant in the supervisory room. As supervisors, you may be concerned about whether your doctoral researchers are using GenAI. It can be a tricky topic to broach, especially when you may not feel confident in understanding the technology yourself.
While the potential impact of GenAI use among undergraduate and postgraduate taught students, especially, is well discussed (and it is increasingly accepted that students and staff need to become “AI literate”), doctoral researchers often slip through the cracks in institutional guidance and policymaking.
When used thoughtfully and transparently, generative artificial intelligence can augment creativity and challenge assumptions, making it an excellent tool for exploring and developing ideas.
…
The glaring contrast between the perceived ubiquity of GenAI and its actual use also reveals fundamental challenges associated with the practical application of these tools. This article explores two key questions about GenAI to address common misconceptions and encourage broader adoption and more effective use of these tools in higher education.
Like many of you, I spent the first part of this week at Learning Technologies in London, where I was lucky enough to present a session on the current state of AI and L&D.
In this week’s blog post, I summarise what I covered and share an audio summary of my paper for you to check out.
Bridging the AI Trust Gap — from chronicle.com by Ian Wilhelm, Derek Bruff, Gemma Garcia, and Lee Rainie
In a 2024 Chronicle survey, 86 percent of administrators agreed with the statement: “Generative artificial intelligence tools offer an opportunity for higher education to improve how it educates, operates, and conducts research.” In contrast, just 55 percent of faculty agreed, showing the stark divisions between faculty and administrative perspectives on adopting AI.
Among many faculty members, a prevalent distrust of AI persists — and for valid reasons. How will it impact in-class instruction? What does the popularity of generative AI tools portend for the development of critical thinking skills for Gen-Z students? How can institutions, at the administrative level, develop policies to safeguard against students using these technologies as tools for cheating?
Given this increasing ‘trust gap,’ how can faculty and administrators work together to preserve academic integrity as AI seeps into all areas of academia, from research to the classroom?
Join us for “Bridging the AI Trust Gap,” an extended, 75-minute Virtual Forum exploring the trust gap on campus about AI, the contours of the differences, and what should be done about it.
What if the key to better legal work isn’t just smarter tools but more inclusive ones? Susan Tanner, Associate Professor at the University of Louisville Brandeis School of Law, joins Zack Glaser to explore how AI and universal design can improve legal education and law firm operations. Susan shares how tools like generative AI can support neurodiverse thinkers, enhance client communication, and reduce anxiety for students and professionals alike. They also discuss the importance of inclusive design in legal tech and how law firms can better support their teams by embracing different ways of thinking to build a more accessible, future-ready practice. The conversation emphasizes the need for educators and legal professionals to adapt to the evolving landscape of AI, ensuring that they leverage its capabilities to better serve their clients and students.
Copilot is a powerful tool for lawyers, but are you making the most of it within your Microsoft apps? Tom Mighell is flying solo at ABA TECHSHOW 2025 and welcomes Microsoft’s own Ben Schorr to the podcast. Ben shares expert insights into how lawyers can implement Copilot’s AI-assistance to work smarter, not harder. From drafting documents to analyzing spreadsheets to streamlining communication, Copilot can handle the tedious tasks so you can focus on what really matters. Ben shares numerous use-cases and capabilities for attorneys and later gives a sneak peek at Copilot’s coming enhancements.
Another ‘shock’ is coming for American jobs — from washingtonpost.com by Heather Long (DSC: this is a gifted article)
Millions of workers will need to shift careers. Our country is unprepared.
The United States is on the cusp of a massive economic shift due to AI, and it’s likely to cause greater change than anything President Donald Trump does in his second term. Much good can come from AI, but the country is unprepared to grapple with the need for millions — or perhaps tens of millions — of workers to shift jobs and entire careers.
“There’s a massive risk that entry-level, white-collar work could get automated. What does that do to career ladders?” asked Molly Kinder, a fellow at the Brookings Institution. Her research has found the jobs of marketing analysts are five times as likely to be replaced as those of marketing managers, and sales representative jobs are three times as likely to be replaced as those of sales managers.
Young people working in these jobs will need to be retrained, but it will be hard for them to invest in new career paths. Consider that many college graduates already carry a lot of debt (an average of about $30,000 for those who took student loans). What’s more, the U.S. unemployment insurance system covers only about 57 percent of unemployed workers and replaces only a modest portion of their pay.
From DSC: This is another reason why I think this vision here is at least part of our future. We need shorter, less expensive credentials.
People don’t have the time to get degrees that take two-plus years to complete (after they have already gone through college once).
They don’t want to take on more debt; many already carry plenty.
And with inflation going back up, they won’t have as much money to spend anyway.
When “vibe-coding” goes wrong… or, a parable about why you shouldn’t “vibe” your entire company.
Cursor, an AI-powered coding tool that many developers love to hate, face-planted spectacularly yesterday when its own AI support bot went off-script and fabricated a company policy, triggering a full-blown user revolt.
Here’s the short version:
A bug locked Cursor users out when switching devices.
Instead of human help, Cursor’s AI support bot confidently told users this was a new policy (it wasn’t).
No human checked the replies—big mistake.
The fake news spread, and devs canceled subscriptions en masse.
A Reddit thread about it got mysteriously nuked, fueling suspicion.
The reality? Just a bug, plus a bot hallucination… doing maximum damage.
… Why it matters: This is what we’d call “vibe-companying”—blindly trusting AI with critical functions without human oversight.
Think about it like this: this was JUST a startup. If big corporations keep laying off entire departments and replacing them with AI, these already byzantine companies will become increasingly opaque, unaccountable systems where no one, human or AI, fully understands what’s happening or who’s responsible.
Our take? Kafka dude has it right. We need to pay attention to WHAT we’re actually automating. Automating more bureaucracy at scale, with agents we increasingly don’t understand or don’t double-check, can make companies less intelligent and harder to fix when things inevitably go wrong.
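The fix the Cursor episode points to is mundane: don’t let a support bot assert policy without a human sign-off. A minimal sketch of that gate might look like the following; the keyword list, queue, and function names are hypothetical illustrations, not anything Cursor actually runs:

```python
from typing import Optional

# Hypothetical guardrail: bot replies that sound like policy statements
# are held in a review queue instead of being sent automatically.
POLICY_KEYWORDS = {"policy", "terms of service", "no longer allowed", "not permitted"}

review_queue = []  # replies awaiting human sign-off


def gate_reply(reply: str) -> Optional[str]:
    """Return the reply if it is safe to auto-send; otherwise queue it."""
    lowered = reply.lower()
    if any(keyword in lowered for keyword in POLICY_KEYWORDS):
        review_queue.append(reply)
        return None  # held back for a human to verify
    return reply


# A routine troubleshooting answer passes through; a policy claim is held.
print(gate_reply("Try logging out and logging back in."))
print(gate_reply("Per our new policy, you may only use one device."))
```

A keyword filter is crude, but even this would have caught a bot confidently inventing a “one device per subscription” rule before it reached users.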