Google expands Search Live globally with voice and camera AI — from digitaltrends.com by Varun Mirchandani
The feature is now available in 200+ countries with multilingual support

Think of it as Google Search… but you talk to it. Search Live lets users ask questions using voice or even their phone’s camera, both on Android and iOS, via the Google App, and get spoken responses along with relevant web links.

This is a pretty big shift. Google isn’t just improving search; it’s slowly replacing the whole “type and scroll” experience. With Search Live, users can talk, ask follow-ups, and interact naturally, making it feel more like a conversation than a query. It’s basically ChatGPT-style interaction, but baked right into Google Search.


 

From DSC:
I have been proposing that the AI-based learning platform of the future will be constantly doing this — every single day. It will know what the in-demand skills are — at any given moment in time. It will then be able to direct you to resources that will help you gain those skills. Though in my vision, the system is querying actual/open job descriptions, not analyzing learning data from enterprise learners. Perhaps I should add that to the vision.


Coursera’s Job Skills Report 2026: Top skills for your students — from coursera.org

The Job Skills Report 2026 analyzes learning data from more than 6 million enterprise learners to identify the future job skills organizations need most. It’s designed for HR and L&D leaders; data, IT, and software & product development leaders; higher education administrators; and government agencies seeking actionable insights on workforce skills trends and AI-driven transformation.

Drawing on data from 6 million enterprise learners across nearly 7,000 organizations, the Job Skills Report 2026 guides you through the skills reshaping the global economy. This year’s analysis spans Data, IT, and Software & Product Development—and the Generative AI skills becoming essential for every role.

 
 

Here is Chris Martin’s posting on LinkedIn.com:


Here is Dominik Mate Kovacs’ posting on LinkedIn.com:


The AI ‘hivemind’: Why so many student essays sound alike — from hechingerreport.org by Jill Barshay
A study of more than 70 large language models found similar answers to brainstorming and creative writing prompts

The answers were frequently indistinguishable across different models by different companies that have different architectures and use different training data. The metaphors, imagery, word choices, sentence structures — even punctuation — often converged. Jiang’s team called this phenomenon “inter-model homogeneity” and quantified the overlaps and similarities. To drive the point home, Jiang titled her paper the “Artificial Hivemind.” The study won a best paper award at the annual conference on Neural Information Processing Systems in December 2025, one of the premier gatherings for AI research.
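The study’s own similarity metrics aren’t reproduced here, but the basic idea of quantifying overlap between model outputs can be sketched with a simple lexical measure — Jaccard similarity over word sets. This is an illustrative stand-in, not the paper’s actual method, and the two sample responses are invented:

```python
def jaccard_similarity(text_a: str, text_b: str) -> float:
    """Rough lexical overlap between two responses (0 = no shared words, 1 = identical word sets)."""
    a, b = set(text_a.lower().split()), set(text_b.lower().split())
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical responses from two different models to the same creative prompt.
resp_model_1 = "The ocean whispered secrets to the moonlit shore"
resp_model_2 = "The ocean whispered its secrets to the silver shore"

score = jaccard_similarity(resp_model_1, resp_model_2)  # high overlap despite different models
```

Averaging a measure like this across many model pairs and prompts is one crude way to see “hivemind”-style convergence; the actual study used far more sophisticated comparisons.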


AI Has No Moral Compass. Do You? — from michelleweise.substack.com by Michelle Weise & Dana Walsh
Why the Age of AI Demands We Take Character Formation Seriously

Here’s something to chew on:

Anthropic, the company behind Claude — a chatbot with 30 million monthly users — has exactly one person (whom we know of) working on AI ethics. One. A young Scottish philosopher is doing the vital work of training a large language model to discern right from wrong.

I don’t say this to shame Anthropic. In fact, Anthropic appears to be the only company (that we know of) being explicit about the moral foundations and reasoning of its chatbot. Hundreds of millions of users worldwide are leveraging tools built on other LLMs that do not appear to have an explicit moral compass being cultivated from within.

I raise this because this is yet another example of where we are: extraordinary technical power advancing without an equally strong moral infrastructure to support it.

Why do we keep producing people who are skilled but not wise?

 
 

Law Firm AI Adoption: So Many Choices — from abovethelaw.com by Stephen Embry
Firms need to recognize reality, define what their legal professionals need, and then determine how to adopt and govern the use of AI tools.

It’s tough to be a law firm managing partner in the age of AI. So many choices, so little time. It’s like the proverbial kid in the candy store who has so many choices that they either can’t pick out anything or reach for too much. We see evidence of the first option in 8am’s recent outstanding Legal Industry Report, authored by Niki Black.

8am’s Legal Industry Report
One thing that stood out in the report was the discrepancy between individual legal professionals’ use of AI and what firms are doing when it comes to AI adoption and guidance. Almost 75% of respondents said they were using general-purpose AI tools like ChatGPT and Claude for work purposes. That’s pretty significant.


Legalweek: It’s time to re-engineer how legal work is delivered — from legaltechnology.com by Caroline Hill

AI for good
While much of the focus is on the risks of AI going wrong, it is only fair to mention the conversations I had around using AI for good. Two in particular stand out.

The first is the news from Everlaw that its Everlaw for Good Program has, over the past year, supported more than 675 active cases across 235 organisations, and expanded its support to a growing network of non-profit organisations.

The program extends Everlaw’s technology to organisations working to advance access to justice. In a recent survey by Everlaw, 88% of legal aid professionals said they are optimistic about AI’s potential to help narrow the justice gap.

“Mission-driven organizations are increasingly handling complex investigations and litigation with limited resources,” said Joanne Sprague, head of Everlaw for Good. “Expanding access to powerful, easy-to-use technology helps level the playing field so these teams can uncover critical evidence, take on more complex matters, and yield stronger results for the communities they serve.”


LawNext on Location: Visiting Everlaw’s Headquarters For A Conversation with AJ Shankar, Founder and CEO — from lawnext.com by Bob Ambrogi

The bulk of our conversation focuses on generative AI, and how Everlaw has approached it differently than much of the market. Rather than bolting on a chatbot, AJ says, Everlaw embedded AI deliberately throughout the platform — document summarization, coding suggestions, deposition analysis, fact extraction — always grounding responses in the actual documents at hand and citing sources so users can verify the work. The December launch of Deep Dive, which lets litigators pose a question and get a synthesized, cited answer drawn from an entire document corpus in about a minute, is the feature AJ calls a “new era” for discovery — one he genuinely believes represents a categorical shift.

 

How to Get Consistent, On-Brand Course Images from Any AI Image Tool — from drphilippahardman.substack.com by Dr. Philippa Hardman
A 3-step workflow that works every time — whatever AI tool you’re using

Most designers try to describe their way to an image. That’s the wrong approach. The goal is to show the tool the world it should be working in, then give it the minimum it needs to place your subject inside that world.

Every long, over-specified prompt is a sign that your visual inputs aren’t doing enough work.

The fix is a 3-step process which gives you superpowers in AI image generation…


How AI Could Transform, or Replace, the LMS — from futureupodcast.com by Jeff Selingo, Michael Horn, and Matthew Pittinsky

Tuesday, March 10, 2026 – For 30 years now, colleges have relied on the Learning Management System, or LMS, as a key portal for professors and students to teach and learn. It’s a tool that has helped colleges adapt to online learning and bring digital tools to classroom teaching. But generative AI seems poised to disrupt the LMS. And it’s unclear whether the LMS will evolve—or be replaced altogether. For this episode, Jeff and Michael talk with a pioneer of the technology, Matthew Pittinsky, about the lessons of past moments of tech disruption like the smartphone and cloud computing and about what could be different this time. This episode is made with support from Ascendium Education Group.


Gemini, Explained — from wondertools.substack.com by Jeremy Caplan
5 features worth your time — tested and compared

Google’s AI, Gemini, has quickly become one of the AI tools I rely on most. It builds dashboards and creates remarkable infographics. It spins out comprehensive research reports in minutes that would once have taken days to assemble.

It’s improving every month. On March 13, Google announced Ask Maps, so you can query Gemini about things like “Which nearby tennis courts are open with lights so I can play tonight?” On March 10, Gemini added new integrations to build, summarize, and analyze your Google Docs, Sheets, and Slides.

In today’s post below: catch up on the Gemini features worth your time, candid comparisons with other AI tools, and answers to the questions I hear most.


How we’re reimagining Maps with Gemini — from blog.google
Ask Maps answers your real-world questions with a conversation, and Immersive Navigation makes your route more intuitive.

Today, Google Maps is fundamentally changing what a map can do. By bringing together the world’s freshest map with our most capable Gemini models, we’re transforming exploration into a simple conversation and making driving more intuitive than ever with our biggest navigation upgrade in over a decade.

Ask anything about any place
We’re introducing Ask Maps, a new conversational experience that answers complex, real-world questions a map could never answer before. Now you can ask for things like, “My phone is dying — where can I charge it without having to wait in a long line for coffee?” or “Is there a public tennis court with lights on that I can play at tonight?” Previously, finding this information meant lots of research and sifting through reviews. But now, you can just tap the “Ask Maps” button and get your questions answered conversationally, with a customized map to help you visualize your options.

 

Teach Smarter with AI — from wondertools.substack.com by Jeremy Caplan and Lance Eaton
10 tested strategies from two educators who actually use them

I recently talked with Lance Eaton, Senior Associate Director of AI and Teaching & Learning at Northeastern University and writer of AI + Education = Simplified. We traded ideas about what’s actually working. We came up with 10 specific, practical ways anyone who teaches, coaches, or leads can put AI to work.

Watch the full conversation above, or read highlights below.


Beyond Audio Summaries: How to Use NotebookLM to *Actually* Design Better Learning — from drphilippahardman.substack.com by Dr. Philippa Hardman
Five methods to maximise the value of NotebookLM’s features

In practice, four things make NotebookLM different for learning designers:

  • Answers grounded in your sources (with citations):
  • Source toggling:
  • Multi-format studio & multi-source summaries:
  • Persistent workspace:


5 Evidence-Based Methods NotebookLM Operationalises…


Shadow AI Isn’t a Threat: It’s a Signal — from campustechnology.com by Damien Eversmann
Unofficial AI use on campus reveals more about institutional gaps than misbehavior.

Key Takeaways

  • Shadow AI is widespread in higher education: Faculty, researchers, students, and staff are using AI tools outside official IT channels, including consumer platforms and public cloud services that may involve sensitive data.
  • Unauthorized AI use creates data, compliance, and cost risks: Consumer AI tools may store or reuse user data, while uncoordinated adoption drives redundant licenses, unpredictable cloud costs, and weaker security oversight.
  • Institutions are shifting from restriction to enablement: Some campuses are making approved paths easier by offering ready-to-use research environments, campus-managed AI tools, clear guidance on data and vendors, and streamlined approval processes.

How L&D Can Lead in the Age of AI Even If Your Company’s Not Ready — from learningguild.com

How to lead even when your company doesn’t allow AI
Even if your corporation isn’t ready for AI, you can still research tools personally to stay ahead of the curve, so when organizational restrictions lift, you are ready to use AI for learning right away. Here are some tools you can test at home if they’re restricted in your workplace:

  • Content generation – Start testing text-based tools to get a taste of how AI can accelerate content creation. Then take it to the next level by exploring tools that generate voices, music, and sound effects.
  • AI coaching tools – Have AI pose as a co-worker or customer to get a taste of what it’s like to use it as a conversation coach. Next, use the voice and video capabilities in an app like ChatGPT to explore how AI can coach someone through tasks.
  • In-the-flow learning assistants – Test turning documents into a conversational avatar and interacting with it to see how it feels. Then think about how the technology could potentially transform static content into dynamic learning experiences for employees.
  • Vibe-coded simulations – Experiment with this technology by creating a simple, fun game. Afterwards, brainstorm some ideas on how it could quickly create simulations for your learners in the future.

The Higher Ed Playbook for AI Affordability — from campustechnology.com by Jason Dunn-Potter

Key Takeaways

  • Affordable AI adoption focuses on evolving existing systems: Universities are embedding AI into current devices, workflows, and legacy systems rather than rebuilding infrastructure or investing in new data centers.
  • Edge AI reduces costs and improves access: Running AI models on local devices or networks lowers cloud processing costs, enhances security, and supports learning use cases such as tutoring, translation, transcription, and adaptive learning.
  • Enterprise integration and governance drive impact: Institutions are applying AI across admissions, advising, facilities, and research workflows, supported by shared resource hubs, data governance, AI literacy, and outcome-driven implementation.
 

“But what’s happening right now is exponential.” — from linkedin.com by Josh Cavalier

Excerpt:

I need to be honest with you. I’ve been running experiments this week with Claude Code and Opus 4.6, and we have reached the precipice in the collapse of time required to produce high-quality text-based ID outputs.

This includes performance consulting reports, learning needs analyses, action mapping, scripts, storyboards, facilitator guides, rubrics, and technical specs.

I just mapped the entire performance consulting process into a multimodal AI integration architecture (diagram image). Every phase. Entry and contracting. Performance analysis. Cause analysis. Solution design. Implementation. Evaluation. Thirty files. System specifications for each. The next step is to vet out each “skill” with an expert performance consultant.

Then I attempted a learning output: an 8-module course built with a cognitive scaffold that moves beyond content delivery to facilitate deliberate practice, meaning-making, and guided reflection within the learner’s own context.

The result:



AI and human-centered learning — from linkedin.com by Patrick Blessinger

Democratizing opportunities

AI-based adaptive learning tools can adjust instruction in real time. These tools have the potential to provide a more personalized learning experience, but only if used properly.

The California State University system uses ChatGPT Edu (OpenAI, 2025). Students use it for AI-assisted tutoring, study aids, and writing support. These resources provide 24/7 availability of subject-matter expertise tailored to students’ learning needs. It is not a replacement for professors. Rather, it extends the reach of mentorship by reducing access barriers.

However, we must proceed with intellectual humility and ethical responsibility. Even though AI can customize messages, it cannot replace the encouragement of a teacher or professor, or the social and emotional aspects of learning. It’s at the intersection of humanistic values and knowledge development that education must find its balance.

 

Claude Code Puts Tech Workers on Notice — from builtin.com by Matthew Urwin
Anthropic is flexing its new and improved Claude Code, which used vibe coding to build the company’s latest tool, Cowork. The feat has inspired both excitement and angst within the tech world as the future of work continues to grow more uncertain.

Summary:
Anthropic is becoming the leader in enterprise artificial intelligence, thanks to upgrades made to Claude Code. The coding tool practically built Anthropic’s Cowork product — sparking both excitement around the possibilities of vibe coding and fears around the job outlook of tech workers.

 

Kling 3.0 just launched. The best video model yet. — from heatherbcooper.substack.com by Heather Cooper
& workflows from Imagine Art 1.5 pro, Pixverse Real-Time Video & Genspark

In today’s edition:

  • Kling 3.0: Everyone a Director
  • Character consistency, native audio, 15-second generations & first results
  • Image & Video Prompts
  • Imagine Art 1.5 Pro, Genspark AI Workspace 2.0 & PixVerse Real-Time Video Workflows

Kling 3.0: Everyone a Director
Kling just dropped version 3.0, and it’s a legitimate leap forward for AI video production (Kling is the GOAT). After spending early access time testing the new capabilities, I can confirm this is the most significant update to video generation tools I’ve seen in months.

Key highlights:

  • Character & Element Consistency:
  • Flexible Video Production:
  • Native Audio with Dialogue & Singing:
  • Enhanced Image Generation:
  • Professional Output:
 

Anthropic unveils Claude legal plugin and causes market meltdown — from legaltechnology.com

Generative AI vendor Anthropic has unveiled a legal plugin that helps customise its large language model Claude for legal tasks such as document review, sending public legal software stocks into a tailspin today (3 February).

Anthropic entering the legal tech fray comes as part of the launch of a number of different plugins that help users instruct Claude on how to get work done and what tools and data to pull from. A sales plugin, for example, could connect Claude to your CRM and knowledge base to help with prospect research and follow-ups. The legal plugin is described as being capable of, for example, reviewing documents, flagging risks, NDA triage, and tracking compliance. The significance is that Anthropic is shifting from model supplier to the application layer and workflow owner.

The announcement is hitting publicly traded legal publishing and legal software companies hard.


Also related/see:

Anthropic’s Legal Plugin for Claude Cowork May Be the Opening Salvo In A Competition Between Foundation Models and Legal Tech Incumbents — from lawnext.com by Bob Ambrogi

Two weeks after introducing a new general-purpose “agentic” work mode called Claude Cowork, Anthropic has now rolled out a legal plugin aimed squarely at the legal workflows of in-house counsel, including contract review, NDA triage, compliance checks, briefings and templated responses.

It is configurable to an organization’s own playbook and risk tolerances, and Anthropic explicitly frames it as assistance, not advice, cautioning that outputs should be reviewed by licensed attorneys.

It may sound like just another feature drop in a crowded AI market. But for legal tech, it is landing more like a tsunami than a drop. For the first time, a foundation-model company is packaging a legal workflow product directly into its platform, rather than merely supplying an API to legal-tech vendors.

 

Farewell to Traditional Universities | What AI Has in Store for Education

Premiered Jan 16, 2026

Description:

What if the biggest change in education isn’t a new app… but the end of the university monopoly on credibility?

Jensen Huang has framed AI as a platform shift—an industrial revolution that turns intelligence into infrastructure. And when intelligence becomes cheap, personal, and always available, education stops being a place you go… and becomes a system that follows you. The question isn’t whether universities will disappear. The question is whether the old model—high cost, slow updates, one-size-fits-all—can survive a world where every student can have a private tutor, a lab partner, and a curriculum designer on demand.

This video explores what AI has in store for education—and why traditional universities may need to reinvent themselves fast.

In this video you’ll discover:

  • How AI tutors could deliver personalized learning at scale
  • Why credentials may shift from “degrees” to proof-of-skill portfolios
  • What happens when the “middle” of studying becomes automated
  • How universities could evolve: research hubs, networks, and high-trust credentialing
  • The risks: cheating, dependency, bias, and widening inequality
  • The 3 skills that become priceless when information is everywhere: judgment, curiosity, and responsibility

From DSC:
There appears to be another, similar video with a different date and length. So I’m including this other recording here as well:


The End of Universities as We Know Them: What AI Is Bringing

Premiered Jan 27, 2026

What if universities don’t “disappear”… but lose their monopoly on learning, credentials, and opportunity?

AI is turning education into something radically different: personal, instant, adaptive, and always available. When every student can have a 24/7 tutor, a writing coach, a coding partner, and a study plan designed specifically for them, the old model—one professor, one curriculum, one pace for everyone—starts to look outdated. And the biggest disruption isn’t the classroom. It’s the credential. Because in an AI world, proof of skill can become more valuable than a piece of paper.

This video explores the end of universities as we know them: what AI is bringing, what will break, what will survive, and what replaces the traditional path.

In this video you’ll discover:

  • Why AI tutoring could outperform one-size-fits-all lectures
  • How “degrees” may shift into skill proof: portfolios, projects, and verified competency
  • What happens when the “middle” of studying becomes automated
  • How universities may evolve: research hubs, networks, high-trust credentialing
  • The dark side: cheating, dependency, inequality, and biased evaluation
  • The new advantage: judgment, creativity, and responsibility in a world of instant answers
 

The Learning and Employment Records (LER) Report for 2026: Building the infrastructure between learning and work — from smartresume.com; with thanks to Paul Fain for this resource

Executive Summary (excerpt)

This report documents a clear transition now underway: LERs are moving from small experiments to systems people and organizations expect to rely on. Adoption remains early and uneven, but the forces reshaping the ecosystem are no longer speculative. Federal policy signals, state planning cycles, standards maturation, and employer behavior are aligning in ways that suggest 2026 will mark a shift from exploration to execution.

Across interviews with federal leaders, state CIOs, standards bodies, and ecosystem builders, a consistent theme emerged: the traditional model—where institutions control learning and employment records—no longer fits how people move through education and work. In its place, a new model is being actively designed—one in which individuals hold portable, verifiable records that systems can trust without centralizing control.

Most states are not yet operating this way. But planning timelines, RFP language, and federal signals indicate that many will begin building toward this model in early 2026.

As the ecosystem matures, another insight becomes unavoidable: records alone are not enough. Value emerges only when trusted records can be interpreted through shared skill languages, reused across contexts, and embedded into the systems and marketplaces where decisions are made.

Learning and Employment Records are not a product category. They are a data layer—one that reshapes how learning, work, and opportunity connect over time.

This report is written for anyone seeking to understand how LERs are beginning to move from concept to practice. Whether readers are new to the space or actively exploring implementation, the report focuses on observable signals, emerging patterns, and the practical conditions required to move from experimentation toward durable infrastructure.
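To make the “data layer” idea concrete, here is a minimal sketch of what a portable, machine-readable learning record could look like, loosely modeled on the W3C Verifiable Credentials data model (one of the standards the LER ecosystem builds on). The issuer, learner identifier, and skill names are all invented for illustration, and a real record would also carry a cryptographic proof section:

```python
import json

# Illustrative Learning and Employment Record (LER), loosely following the
# W3C Verifiable Credentials data model. All specific values are hypothetical.
learning_record = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "https://example-college.edu",       # hypothetical issuing institution
    "issuanceDate": "2026-01-15T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:learner-123",           # hypothetical learner identifier
        "achievement": {
            "name": "Data Analysis Fundamentals",  # hypothetical credential
            "alignedSkills": ["SQL", "data visualization"],
        },
    },
}

# Because the record is plain structured data, it can travel with the learner
# and be parsed by any system that speaks the shared format.
serialized = json.dumps(learning_record)
```

The point of the sketch is the shape, not the field names: a record any employer or institution can verify and interpret, without a central gatekeeper, is what makes LERs a data layer rather than a product.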

 

“The building blocks for a global, interoperable skills ecosystem are already in place. As education and workforce alignment accelerates, the path toward trusted, machine-readable credentials is clear. The next phase depends on credentials that carry value across institutions, industries, states, and borders; credentials that move with learners wherever their education and careers take them. The question now isn’t whether to act, but how quickly we move.”

– Curtiss Barnes, Chief Executive Officer, 1EdTech

 


The above item was from Paul Fain’s recent posting, which includes the following excerpt:

SmartResume just published a guide for making sense of this rapidly expanding landscape. The LER Ecosystem Report was produced in partnership with AACRAO, Credential Engine, 1EdTech, HR Open Standards, and the U.S. Chamber of Commerce Foundation. It was based on interviews and feedback gathered over three years from 100+ leaders across education, workforce, government, standards bodies, and tech providers.

The tools are available now to create the sort of interoperable ecosystem that can make talent marketplaces a reality, the report argues. Meanwhile, federal policy moves and bipartisan attention to LERs are accelerating action at the state level.

“For state leaders, this creates a practical inflection point,” says the report. “LERs are shifting from an innovation discussion to an infrastructure planning conversation.”

 
 
© 2025 | Daniel Christian