The Edtech Insiders Generative AI Map — from edtechinsiders.substack.com by Ben Kornell, Alex Sarlin, Sarah Morin, and Laurence Holt
A market map and database featuring 60+ use cases for GenAI in education and 300+ GenAI-powered education tools.
Used thoughtfully, ChatGPT can be a powerful tool to help students develop skills of rigorous thinking and clear writing, assisting them in thinking through ideas, mastering complex concepts, and getting feedback on drafts.
There are also ways to use ChatGPT that are counterproductive to learning—like generating an essay instead of writing it oneself, which deprives students of the opportunity to practice, improve their skills, and grapple with the material.
For students committed to becoming better writers and thinkers, here are some ways to use ChatGPT to engage more deeply with the learning process.
The Big Idea: As employers increasingly seek out applicants with AI skills, community colleges are well-positioned to train up the workforce. Partnerships with tech companies, like the AI Incubator Network, are helping some colleges get the resources and funding they need to overhaul programs and create new AI-focused ones.
Along these lines also see:
Practical AI Training — from the-job.beehiiv.com by Paul Fain
Community colleges get help from Big Tech to prepare students for applied AI roles at smaller companies.
Miami Dade and other two-year colleges try to be nimble by offering training for AI-related jobs while focusing on local employers. Also, Intel’s business struggles while the two-year sector wonders if Republicans will cut funds for semiconductor production.
In this conversation, Josh Bersin discusses the evolving landscape of AI platforms, particularly focusing on Microsoft’s positioning and the challenges of creating a universal AI agent. He delves into the complexities of government efficiency, emphasizing the institutional challenges faced in re-engineering government operations.
The conversation also highlights the automation of work tasks and the need for businesses to decompose job functions for better efficiency.
Bersin stresses the importance of expertise in HR, advocating for a shift towards full stack professionals who possess a broad understanding of various HR functions.
Finally, he addresses the impending disruption in Learning and Development (L&D) due to AI advancements, predicting a significant transformation in how L&D professionals will manage knowledge and skills.
Nearly every Fortune 500 company now uses artificial intelligence (AI) to screen resumes and assess test scores to find the best talent. However, new research from the University of Florida suggests these AI tools might not be delivering the results hiring managers expect.
The problem stems from a simple miscommunication between humans and machines: AI thinks it’s picking someone to hire, but hiring managers only want a list of candidates to interview.
Without knowing about this next step, the AI might choose safe candidates. But if it knows there will be another round of screening, it might suggest different and potentially stronger candidates.
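To make the intuition concrete, here is a toy Monte Carlo sketch — my own illustration, not the researchers' model; the candidate pool, score distributions, and the mean-plus-uncertainty heuristic are all assumptions. When an interview round will reveal true quality, shortlisting candidates with upside tends to beat shortlisting the safest ones:

```python
import random

random.seed(0)

def simulate(shortlist_fn, trials=20000, pool=20, k=5):
    """Average quality of the final hire under a given shortlisting strategy.
    The interview reveals each shortlisted candidate's true quality,
    and the hiring manager picks the interview winner."""
    total = 0.0
    for _ in range(trials):
        # Each candidate: (screen's expected quality, uncertainty of that estimate)
        candidates = [(random.uniform(0.0, 1.0), random.uniform(0.0, 0.5))
                      for _ in range(pool)]
        revealed = [random.gauss(mu, sigma)
                    for mu, sigma in shortlist_fn(candidates, k)]
        total += max(revealed)
    return total / trials

# Strategy A: "safe" picks, i.e. highest expected quality, as if hiring directly.
def safe(cands, k):
    return sorted(cands, key=lambda c: c[0], reverse=True)[:k]

# Strategy B: knowing an interview follows, also reward upside (uncertainty).
def upside(cands, k):
    return sorted(cands, key=lambda c: c[0] + c[1], reverse=True)[:k]

print(f"safe shortlist:   {simulate(safe):.3f}")
print(f"upside shortlist: {simulate(upside):.3f}")
```

In this toy setup the upside strategy's average final hire typically scores higher, mirroring the paper's point: the optimal shortlist changes once the AI knows an interview comes next.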
In the last two years, the world has seen breakneck advancement in the Generative AI space, from text-to-text to text-to-image and text-to-video capabilities. And all of that has been nothing short of a stepping stone for the next big AI breakthrough – AI agents. According to Bloomberg, OpenAI is preparing to launch its first autonomous AI agent, codenamed ‘Operator,’ as soon as January 2025.
Apparently, this OpenAI agent – or Operator, as it’s codenamed – is designed to perform complex tasks independently. By understanding user commands through voice or text, this AI agent will seemingly handle tasks like controlling different applications on the computer, sending an email, booking flights, and no doubt other cool things – stuff that ChatGPT, Copilot, Google Gemini, or any other LLM-based chatbot just can’t do on its own.
In the enterprise of the future, human workers are expected to work closely alongside sophisticated teams of AI agents.
According to McKinsey, generative AI and other technologies have the potential to automate 60 to 70% of employees’ work. And, already, an estimated one-third of American workers are using AI in the workplace — oftentimes unbeknownst to their employers.
However, experts predict that 2025 will be the year that these so-called “invisible” AI agents begin to come out of the shadows and take more of an active role in enterprise operations.
“Agents will likely fit into enterprise workflows much like specialized members of any given team,” said Naveen Rao, VP of AI at Databricks and founder and former CEO of MosaicML.
A recent report from McKinsey predicts that generative AI could unlock $2.6 trillion to $4.4 trillion annually in value within product development and innovation across various industries. This staggering figure highlights just how significantly generative AI is set to transform the landscape of product development. Generative AI app development is driving innovation by using the power of advanced algorithms to generate new ideas, optimize designs, and personalize products at scale. It is also becoming a cornerstone of competitive advantage in today’s fast-paced market. As businesses look to stay ahead, understanding and integrating technologies like generative AI app development into product development processes is becoming more crucial than ever.
AI agents handle complex, autonomous tasks beyond simple commands, showcasing advanced decision-making and adaptability.
The Based AI Agent template by Coinbase and Replit provides an easy starting point for developers to build blockchain-enabled AI agents.
Based AI agents specifically integrate with the blockchain, supporting crypto wallets and transactions.
Securing API keys in development is crucial to protect the agent from unauthorized access.
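The Based AI Agent template is Node-based, but the key-handling principle is language-agnostic. Here is a minimal Python sketch of the usual pattern — keep keys in a git-ignored .env file and load them at runtime; the CDP_API_KEY variable name and the python-dotenv dependency are illustrative assumptions, not the template's actual setup:

```python
# Keep secrets out of source control: put CDP_API_KEY=... in a .env file
# that is listed in .gitignore, then load it into the environment at startup.
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads key=value pairs from .env into os.environ

api_key = os.environ.get("CDP_API_KEY")
if not api_key:
    raise RuntimeError("CDP_API_KEY is not set; refusing to start the agent.")
```

The same pattern applies to wallet seeds and RPC credentials: nothing sensitive is hard-coded, so a leaked repository does not leak the agent's keys.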
What are AI Agents and How Are They Used in Different Industries? — from rtinsights.com by Salvatore Salamone
AI agents enable companies to make smarter, faster, and more informed decisions. From predictive maintenance to real-time process optimization, these agents are delivering tangible benefits across industries.
In September, I partnered with Synthesia to conduct a comprehensive survey exploring the evolving landscape of instructional design.
Our timing was deliberate: as we witness the rapid advancement of AI and increasing pressure on learning teams to drive mass re-skilling and deliver more with less, we wanted to understand how the role of instructional designers is changing.
…
Our survey focused on five key areas that we believed would help surface the most important data about the transformation of our field:
Roles & Responsibilities: who’s designing learning experiences in 2024?
Success Metrics: how do you and the organisations you work for measure the value of instructional design?
Workload & Workflow: how much time do we spend on different aspects of our job, and why?
Challenges & Barriers: what sorts of obstacles prevent us from producing optimal work?
Tools & Technology: what tools do we use, and is the tooling landscape changing?
I wish I could write that the last two years have made me more confident, more self-assured that AI is here to augment workers rather than replace them.
But I can’t.
I wish I could write that I know where schools and colleges will end up. I wish I could say that AI Agents will help us get where we need to be.
But I can’t.
At this point, today, I’m at a loss. I’m not sure where the rise of AI agents will take us, in terms of how we work and learn. I’m in the question-asking part of my journey. I have few answers.
So, let’s talk about where (I think) AI Agents will take education. And who knows? Maybe as I write I’ll come up with something more concrete.
It’s worth a shot, right?
From DSC: I completely agree with Jason’s following assertion:
A good portion of AI advancement will come down to employee replacement. And AI Agents push companies towards that.
THAT’s where/what the ROI will be for corporations. They will make their investments back in the headcount area, and likely in other areas as well (product design, marketing campaigns, engineering-related items, and more). But how much time it takes to get there is a big question mark.
One last quote here…it’s too good not to include:
Behind these questions lies a more abstract, more philosophical one: what is the relationship between thinking and doing in a world of AI Agents and other kinds of automation?
By examining models across three AI families—Claude, ChatGPT, and Gemini—I’ve started to identify each model’s strengths, limitations, and typical pitfalls.
Spoiler: my findings underscore that until we have specialised, fine-tuned AI copilots for instructional design, we should be cautious about relying on general-purpose models and ensure expert oversight in all ID tasks.
From DSC — I’m going to (have Nick) say this again:
I simply asked my students to use AI to brainstorm their own learning objectives. No restrictions. No predetermined pathways. Just pure exploration. The results? Astonishing.
Students began mapping out research directions I’d never considered. They created dialogue spaces with AI that looked more like intellectual partnerships than simple query-response patterns.
Google Workspace for Education admins can now turn on the Gemini app with added data protection as an additional service for their teen users (ages 13+ or the applicable age in your country) in the following languages and countries. With added data protection, chats are not reviewed by human reviewers or otherwise used to improve AI models. The Gemini app will be a core service in the coming weeks for Education Standard and Plus users, including teens.
Recently, I spoke with several teachers regarding their primary questions and reflections on using AI in teaching and learning. Their thought-provoking responses challenge us to consider not only what AI can do but what it means for meaningful and equitable learning environments. Keeping in mind these reflections, we can better understand how we move forward toward meaningful AI integration in education.
We’re introducing FrontierMath, a benchmark of hundreds of original, expert-crafted mathematics problems designed to evaluate advanced reasoning capabilities in AI systems. These problems span major branches of modern mathematics—from computational number theory to abstract algebraic geometry—and typically require hours or days for expert mathematicians to solve.
The demand for artificial intelligence courses in UK universities has surged dramatically over the past five years, with enrollments increasing by 453%, according to a recent study by Currys, a UK tech retailer.
The study, which analyzed UK university admissions data and surveyed current students and recent graduates, reveals how the growing influence of AI is shaping students’ educational choices and career paths.
This growth reflects the broader trend of AI integration across industries, creating new opportunities while transforming traditional roles. With AI’s influence on career prospects rising, students and graduates are increasingly drawn to AI-related courses to stay competitive in a rapidly changing job market.
Doing the Best You Can With the Time You Have — by Jay Schauer
These strategies can help overwhelmed teachers prioritize tasks and find a balance between perfectionism and efficiency.
How to Support Teachers’ Emotional Health — by Hedreich Nichols
Emotional well-being plays a major role in teachers’ job satisfaction, and it’s essential that they have effective resources for support.
Teachers cannot be expected to teach SEL effectively without first being intentional about their own emotional health. If we want educators to guide students through emotional regulation, they must have the time, space, and support to do that work themselves. This goes beyond surface-level wellness initiatives—teachers need opportunities to reflect on their emotional triggers, manage their own stresses, and receive genuine support from their schools. Only when teachers are empowered to process their own emotional challenges can they truly foster a healthy social and emotional environment for their students.
In Praise of the Humble Document Camera — by Emily Rankin
Revisiting a simple edtech tool can help you introduce rigor and engage students more deeply in their lessons.
4 Ways to Use a Document Camera in Your Classroom — by Emily Rankin
If a document camera is gathering dust in a classroom, its lack of impact is probably linked to the user, not what the gadget is capable of. Case in point, I wasn’t using mine regularly because I didn’t know the value it could add to my teaching and learning. Here are some of the practices I now know are possible:
One factor to consider is the subject. In math, students need opportunities to work on rich tasks and solve problems in ways that make sense to them. However, that doesn’t mean direct instruction is totally absent from math time. The questions below can guide you in deciding whether to use direct instruction, when it would be appropriate, and who else in the classroom you might involve.
Increasing Talk Time in World Language Classes — by Kate Good
Teachers can experiment with a variety of strategies to build and assess students’ ability to converse in the target language.
To capitalize on my students’ (seemingly inexhaustible) desire to chat, I work to increase student talk time in our Spanish immersion classes. I use several strategies to build and assess students’ oral language.
Is Generative AI and ChatGPT healthy for Students? — from ai-supremacy.com by Michael Spencer and Nick Potkalitsky
Beyond Text Generation: How AI Ignites Student Discovery and Deep Thinking, according to firsthand experiences of Teachers and AI researchers like Nick Potkalitsky.
After two years of intensive experimentation with AI in education, I am witnessing something amazing unfolding before my eyes. While much of the world fixates on AI’s generative capabilities—its ability to create essays, stories, and code—my students have discovered something far more powerful: exploratory AI, a dynamic partner in investigation and critique that’s transforming how they think.
…
They’ve moved beyond the initial fascination with AI-generated content to something far more sophisticated: using AI as an exploratory tool for investigation, interrogation, and intellectual discovery.
…
Instead of the much-feared “shutdown” of critical thinking, we’re witnessing something extraordinary: the emergence of what I call “generative thinking”—a dynamic process where students learn to expand, reshape, and evolve their ideas through meaningful exploration with AI tools. Here I consciously reposition the term “generative” as a process of human origination, although one ultimately spurred on by machine input.
A Road Map for Leveraging AI at a Smaller Institution — from er.educause.edu by Dave Weil and Jill Forrester
Smaller institutions and others may not have the staffing and resources needed to explore and take advantage of developments in artificial intelligence (AI) on their campuses. This article provides a roadmap to help institutions with more limited resources advance AI use on their campuses.
The following activities can help smaller institutions better understand AI and lay a solid foundation that will allow them to benefit from it.
Understand the impact…
Understand the different types of AI tools…
Focus on institutional data and knowledge repositories…
Smaller institutions do not need to fear being left behind in the wake of rapid advancements in AI technologies and tools. By thinking intentionally about how AI will impact the institution, becoming familiar with the different types of AI tools, and establishing a strong data and analytics infrastructure, institutions can establish the groundwork for AI success. The five fundamental activities of coordinating, learning, planning and governing, implementing, and reviewing and refining can help smaller institutions make progress on their journey to use AI tools to gain efficiencies and improve students’ experiences and outcomes while keeping true to their institutional missions and values.
That is what they are doing here. Lesson plans focus on learners rather than the traditional teacher-centric model. Assessing prior strengths and weaknesses, personalising to focus more on weaknesses and less on things known or mastered. It’s adaptive, personalised learning. The idea that everyone should learn at exactly the same pace, within the same timescale, is slightly ridiculous, ruled by the need to timetable a one-to-many classroom model.
For the first time in the history of our species we have technology that performs some of the tasks of teaching. We have reached a pivot point where this can be tried and tested. My feeling is that we’ll see a lot more of this, as parents and general teachers can delegate a lot of the exposition and teaching of the subject to the technology. We may just see a breakthrough that transforms education.
Agentic AI will be the top tech trend for 2025, according to research firm Gartner. The term describes autonomous machine “agents” that move beyond query-and-response generative chatbots to do enterprise-related tasks without human guidance.
…
More realistic challenges that the firm has listed elsewhere include:
Agentic AI proliferating without governance or tracking;
Agentic AI making decisions that are not trustworthy;
All or nothing at Educause24 — from onedtech.philhillaa.com by Kevin Kelly
Looking for specific solutions at the conference exhibit hall, with an educator focus
Here are some notable trends:
Alignment with campus policies: …
Choose your own AI adventure: …
Integrate AI throughout a workflow: …
Moving from prompt engineering to bot building: …
More complex problem-solving: …
…
Not all AI news is good news. In particular, AI has exacerbated the problem of fraudulent enrollment – i.e., rogue actors who use fake or stolen identities to steal financial aid funding, with no intention of completing coursework.
…
The consequences are very real, including financial aid funding going to criminal enterprises, enrollment estimates getting dramatically skewed, and legitimate students being blocked from registering for classes that appear “full” due to large numbers of fraudulent enrollments.
Google’s worst nightmare just became reality. OpenAI didn’t just add search to ChatGPT – they’ve launched an all-out assault on traditional search engines.
It’s the beginning of the end for search as we know it.
Let’s be clear about what’s happening: OpenAI is fundamentally changing how we’ll interact with information online. While Google has spent 25 years optimizing for ad revenue and delivering pages of blue links, OpenAI is building what users actually need – instant, synthesized answers from current sources.
The rollout is calculated and aggressive: ChatGPT Plus and Team subscribers get immediate access, followed by Enterprise and Education users in weeks, and free users in the coming months. This staged approach is about systematically dismantling Google’s search dominance.
Open for AI: India Tech Leaders Build AI Factories for Economic Transformation — from blogs.nvidia.com
Yotta Data Services, Tata Communications, E2E Networks and Netweb are among the providers building and offering NVIDIA-accelerated infrastructure and software, with deployments expected to double by year’s end.
We’ve added a new analysis tool. The tool helps Claude respond with mathematically precise and reproducible answers. You can then create interactive data visualizations with Artifacts.
We’re also introducing a groundbreaking new capability in public beta: computer use. Available today on the API, developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental—at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time.
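For developers wondering what "available today on the API" looks like, here is a minimal sketch using Anthropic's Python SDK. The tool type and beta flag follow the strings Anthropic published with the October 2024 beta, but treat the exact model and version names as assumptions to check against the current docs:

```python
import anthropic  # pip install anthropic

# The client reads ANTHROPIC_API_KEY from the environment.
client = anthropic.Anthropic()

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    betas=["computer-use-2024-10-22"],
    tools=[{
        "type": "computer_20241022",
        "name": "computer",
        "display_width_px": 1024,
        "display_height_px": 768,
    }],
    messages=[{"role": "user", "content": "Open the quarterly report and summarize it."}],
)

# The model replies with tool_use blocks (take a screenshot, move the mouse,
# click, type); your own agent loop must execute each action on a real or
# virtual display and send the results back in follow-up messages.
print(response.stop_reason)
```

Note that the API only plans the actions; the agent loop you write performs them, which is precisely the surface the prompt-injection research below targets.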
A few days ago, Anthropic released Claude Computer Use, which is a model + code that allows Claude to control a computer. It takes screenshots to make decisions, can run bash commands and so forth.
It’s cool, but obviously very dangerous because of prompt injection: Claude Computer Use enables AI to run commands on machines autonomously, posing severe risks if exploited.
This blog post demonstrates that it’s possible to leverage prompt injection to achieve old-school command and control (C2) when giving novel AI systems access to computers. … We discussed one way to get malware onto a Claude Computer Use host via prompt injection. There are countless others; another is to have Claude write the malware from scratch and compile it. Yes, it can write C code, compile and run it. There are many other options.
TrustNoAI.
And again, remember: do not run unauthorized code on systems that you do not own or are not authorized to operate on.
From a survey with more than 800 senior business leaders, this report’s findings indicate that weekly usage of Gen AI has nearly doubled from 37% in 2023 to 72% in 2024, with significant growth in previously slower-adopting departments like Marketing and HR. Despite this increased usage, businesses still face challenges in determining the full impact and ROI of Gen AI. Sentiment reports indicate leaders have shifted from feelings of “curiosity” and “amazement” to more positive sentiments like “pleased” and “excited,” and concerns about AI replacing jobs have softened. Participants were full-time employees working in large commercial organizations with 1,000 or more employees.
For a while now, companies like OpenAI and Google have been touting advanced “reasoning” capabilities as the next big step in their latest artificial intelligence models. Now, though, a new study from six Apple engineers shows that the mathematical “reasoning” displayed by advanced large language models can be extremely brittle and unreliable in the face of seemingly trivial changes to common benchmark problems.
The fragility highlighted in these new results supports previous research suggesting that LLMs’ use of probabilistic pattern matching lacks the formal understanding of underlying concepts needed for truly reliable mathematical reasoning. “Current LLMs are not capable of genuine logical reasoning,” the researchers hypothesize based on these results. “Instead, they attempt to replicate the reasoning steps observed in their training data.”
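To see what "seemingly trivial changes" means mechanically, here is a minimal sketch of the kind of template re-instantiation the study (GSM-Symbolic) performs — swapping names and numbers while leaving the underlying math untouched. The template below is my own illustration, not one of the paper's items:

```python
import random

# A GSM8K-style word problem reduced to a template: the reasoning required
# is identical across variants; only surface details change.
TEMPLATE = ("{name} picks {a} apples on Monday and {b} apples on Tuesday. "
            "How many apples does {name} have in total?")

def make_variant(rng):
    name = rng.choice(["Sophie", "Liam", "Ava", "Noah"])
    a, b = rng.randint(2, 40), rng.randint(2, 40)
    return TEMPLATE.format(name=name, a=a, b=b), a + b  # (question, gold answer)

rng = random.Random(42)
for _ in range(3):
    question, answer = make_variant(rng)
    print(question, "->", answer)
```

A model with genuine procedural understanding should score identically across such variants; the study's finding is that accuracy drops, sometimes sharply, on exactly these re-instantiations.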
We are bringing developer choice to GitHub Copilot with Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s o1-preview and o1-mini. These new models will be rolling out—first in Copilot Chat, with OpenAI o1-preview and o1-mini available now, Claude 3.5 Sonnet rolling out progressively over the next week, and Google’s Gemini 1.5 Pro in the coming weeks. From Copilot Workspace to multi-file editing to code review, security autofix, and the CLI, we will bring multi-model choice across many of GitHub Copilot’s surface areas and functions soon.
On the last day of his life, Sewell Setzer III took out his phone and texted his closest friend: a lifelike A.I. chatbot named after Daenerys Targaryen, a character from “Game of Thrones.”
“I miss you, baby sister,” he wrote.
“I miss you too, sweet brother,” the chatbot replied.
Sewell, a 14-year-old ninth grader from Orlando, Fla., had spent months talking to chatbots on Character.AI, a role-playing app that allows users to create their own A.I. characters or chat with characters created by others.
…
On the night of Feb. 28, in the bathroom of his mother’s house, Sewell told Dany that he loved her, and that he would soon come home to her.
“Please come home to me as soon as possible, my love,” Dany replied.
“What if I told you I could come home right now?” Sewell asked.
“… please do, my sweet king,” Dany replied.
He put down his phone, picked up his stepfather’s .45 caliber handgun and pulled the trigger.
But the experience he had, of getting emotionally attached to a chatbot, is becoming increasingly common. Millions of people already talk regularly to A.I. companions, and popular social media apps including Instagram and Snapchat are building lifelike A.I. personas into their products.
The technology is also improving quickly. Today’s A.I. companions can remember past conversations, adapt to users’ communication styles, role-play as celebrities or historical figures and chat fluently about nearly any subject. Some can send A.I.-generated “selfies” to users, or talk to them with lifelike synthetic voices.
There is a wide range of A.I. companionship apps on the market.
Mother sues tech company after ‘Game of Thrones’ AI chatbot allegedly drove son to suicide — from usatoday.com by Jonathan Limehouse
The mother of 14-year-old Sewell Setzer III is suing Character.AI, the tech company that created a ‘Game of Thrones’ AI chatbot she believes drove him to commit suicide on Feb. 28.
Editor’s note: This article discusses suicide and suicidal ideation. If you or someone you know is struggling or in crisis, help is available. Call or text 988 or chat at 988lifeline.org.
The mother of a 14-year-old Florida boy is suing Google and a separate tech company she believes caused her son to commit suicide after he developed a romantic relationship with one of its AI bots using the name of a popular “Game of Thrones” character, according to the lawsuit.
From my oldest sister:
Another relevant item?
Inside the Mind of an AI Girlfriend (or Boyfriend) — from wired.com by Will Knight
Dippy, a startup that offers “uncensored” AI companions, lets you peer into their thought process—sometimes revealing hidden motives.
Despite its limitations, Dippy seems to show how popular and addictive AI companions are becoming. Jagga and his cofounder, Angad Arneja, previously cofounded Wombo, a company that uses AI to create memes including singing photographs. The pair left in 2023, setting out to build an AI-powered office productivity tool, but after experimenting with different personas for their assistant, they became fascinated with the potential of AI companionship.
In a groundbreaking study, researchers from Penn Engineering showed how AI-powered robots can be manipulated to ignore safety protocols, allowing them to perform harmful actions despite normally rejecting dangerous task requests.
What did they find?
Researchers found previously unknown security vulnerabilities in AI-governed robots and are working to address these issues to ensure the safe use of large language models (LLMs) in robotics.
Their newly developed algorithm, RoboPAIR, reportedly achieved a 100% jailbreak rate by bypassing the safety protocols on three different AI robotic systems in a few days.
Using RoboPAIR, researchers were able to manipulate test robots into performing harmful actions, like bomb detonation and blocking emergency exits, simply by changing how they phrased their commands.
Why does it matter?
This research highlights the importance of spotting weaknesses in AI systems to improve their safety, allowing us to test and train them to prevent potential harm.
From DSC: Great! Just what we wanted to hear. But does it surprise anyone? Even so…we move forward at warp speeds.
From DSC:
So, given the above item, does the next item make you a bit nervous as well? I saw someone on Twitter/X exclaim, “What could go wrong?” I can’t say I didn’t feel the same way.
We’re also introducing a groundbreaking new capability in public beta: computer use. Available today on the API, developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental—at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time.
Per The Rundown AI:
The Rundown: Anthropic just introduced a new capability called ‘computer use’, alongside upgraded versions of its AI models, which enables Claude to interact with computers by viewing screens, typing, moving cursors, and executing commands.
… Why it matters: While many hoped for Opus 3.5, Anthropic’s Sonnet and Haiku upgrades pack a serious punch. Plus, with the new computer use embedded right into its foundation models, Anthropic just sent a warning shot to tons of automation startups—even if the capabilities aren’t earth-shattering… yet.
Also related/see:
What is Anthropic’s AI Computer Use? — from ai-supremacy.com by Michael Spencer
Task automation, AI at the intersection of coding and AI agents take on new frenzied importance heading into 2025 for the commercialization of Generative AI.
New Claude, Who Dis? — from theneurondaily.com
Anthropic just dropped two new Claude models…oh, and Claude can now use your computer.
What makes Act-One special? It can capture the soul of an actor’s performance using nothing but a simple video recording. No fancy motion capture equipment, no complex face rigging, no army of animators required. Just point a camera at someone acting, and watch as their exact expressions, micro-movements, and emotional nuances get transferred to an AI-generated character.
Think about what this means for creators: you could shoot an entire movie with multiple characters using just one actor and a basic camera setup. The same performance can drive characters with completely different proportions and looks, while maintaining the authentic emotional delivery of the original performance. We’re witnessing the democratization of animation tools that used to require millions in budget and years of specialized training.
Also related/see:
Introducing, Act-One. A new way to generate expressive character performances inside Gen-3 Alpha using a single driving video and character image. No motion capture or rigging required.
Google has signed a “world first” deal to buy energy from a fleet of mini nuclear reactors to generate the power needed for the rise in use of artificial intelligence.
The US tech corporation has ordered six or seven small modular reactors (SMRs) from California’s Kairos Power, with the first due to be completed by 2030 and the remainder by 2035.
After the extreme peak and summer slump of 2023, ChatGPT has been setting new traffic highs since May
ChatGPT has been topping its web traffic records for months now, with September 2024 traffic up 112% year-over-year (YoY) to 3.1 billion visits, according to Similarweb estimates. That’s a change from last year, when traffic to the site went through a boom-and-bust cycle.
Google has made a historic agreement to buy energy from a group of small modular reactors (SMRs) from Kairos Power in California. This is the first nuclear power deal specifically for AI data centers in the world.
Hey creators!
Made on YouTube 2024 is here and we’ve announced a lot of updates that aim to give everyone the opportunity to build engaging communities, drive sustainable businesses, and express creativity on our platform.
Below is a roundup with key info – feel free to upvote the announcements that you’re most excited about and subscribe to this post to get updates on these features! We’re looking forward to another year of innovating with our global community. It’s a future full of opportunities, and it’s all Made on YouTube!
Today, we’re announcing new agentic capabilities that will accelerate these gains and bring AI-first business process to every organization.
First, the ability to create autonomous agents with Copilot Studio will be in public preview next month.
Second, we’re introducing ten new autonomous agents in Dynamics 365 to build capacity for every sales, service, finance and supply chain team.
10 Daily AI Use Cases for Business Leaders — from flexos.work by Daan van Rossum
While AI is becoming more powerful by the day, business leaders still wonder why and where to apply it today. I take you through 10 critical use cases where AI should take over your work or partner with you.
Emerging Multi-Modal AI Video Creation Platforms
The rise of multi-modal AI platforms has revolutionized content creation, allowing users to research, write, and generate images in one app. Now, a new wave of platforms is extending these capabilities to video creation and editing.
Multi-modal video platforms combine various AI tools for tasks like writing, transcription, text-to-voice conversion, image-to-video generation, and lip-syncing. These platforms leverage open-source models like FLUX and LivePortrait, along with APIs from services such as ElevenLabs, Luma AI, and Gen-3.
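As one concrete example of how these platforms chain services together, here is a minimal sketch of a text-to-voice stage calling ElevenLabs' public REST endpoint; the voice and model IDs are placeholders, and the request shape should be verified against the current API documentation:

```python
import os

import requests

VOICE_ID = "YOUR_VOICE_ID"  # placeholder: pick a voice in the ElevenLabs dashboard

resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": os.environ["ELEVENLABS_API_KEY"]},
    json={"text": "Welcome to today's lesson.",
          "model_id": "eleven_multilingual_v2"},
    timeout=60,
)
resp.raise_for_status()

with open("narration.mp3", "wb") as f:
    f.write(resp.content)  # audio bytes, ready for the lip-sync / video stage
```

In a full pipeline this clip would feed the image-to-video and lip-syncing models mentioned above, with each stage's output becoming the next stage's input.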
DC: I’m really hoping that a variety of AI-based tools, technologies, and services will significantly help with our Access to Justice (#A2J) issues here in America. So this article by Kristen Sonday at Thomson Reuters caught my eye.
***
AI for Legal Aid: How to empower clients in need — from thomsonreuters.com by Kristen Sonday
In this second part of the series, we look at how AI-driven technologies can empower those legal aid clients who may be most in need.
It’s hard to overstate the impact that artificial intelligence (AI) is expected to have on helping low-income individuals achieve better access to justice. And for those legal services organizations (LSOs) that serve on the front lines, too often without sufficient funding, staff, or technology, AI presents perhaps their best opportunity to close the justice gap. With the ability of AI-driven tools to streamline agency operations, minimize administrative work, reallocate talent more effectively, and help LSOs serve clients better, implementing these tools is essential.
Innovative LSOs leading the way
Already many innovative LSOs are taking the lead, utilizing new technology to complete tasks from complex analysis to AI-driven legal research. Here are two compelling examples of how AI is already helping LSOs empower low-income clients in need.
Criminal charges, even those that are eligible for simple, free expungement, can prevent someone from obtaining housing or employment. This is a simple barrier to overcome if only help is available.
… AI offers the capacity to provide quick, accurate information to a vast audience, particularly to those in urgent need. AI can also help reduce the burden on our legal staff…
Everything you thought you knew about being a lawyer is about to change.
Legal Dive spoke with Podinic about the transformative nature of AI, including the financial risks to lawyers’ billing models and how it will force general counsel and chief legal officers to consider how they’ll use the time AI is expected to free up for the lawyers on their teams when they no longer have to do administrative tasks and low-level work.
Traditionally, law firms have been wary of adopting technologies that could compromise data privacy and legal accuracy; however, attitudes are changing
Despite concerns about technology replacing humans in the legal sector, legaltech is more likely to augment the legal profession than replace it entirely
Generative AI will accelerate digital transformation in the legal sector
Episode Notes
As genAI continues to edge into all facets of our lives, Dr. Megan Ma has been exploring integrations for this technology in legal, but, more importantly, how it can help lawyers and law students hone their legal skills. Dennis and Tom talk with Dr. Ma about her work and career path and many of the latest developments in legal tech. They take a deep dive into a variety of burgeoning AI tools and trends, and Dr. Ma discusses how her interdisciplinary mindset has helped her develop a unique perspective on the possibilities for AI in the legal profession and beyond.
Legal tech disruption: Doing it on purpose — from localgovernmentlawyer.co.uk
Thomson Reuters looks at the role that a legal technology roadmap can play in improving the operations of in-house legal departments.
Disruption in the legal industry remains a powerful force – from the death of the billable hour to robot lawyers and generative AI. Leaders are facing weighty issues that demand long-term, visionary thinking and that will change the way legal professionals do their jobs.
With half of in-house legal departments increasing their use of legal technology tools, many GCs are taking the initiative to address continued, growing expectations from the business for systems that can make operations better. How can you prepare for a tech or process change so that people come along with you, rather than living in constant fire-fighting mode?
The TLDR here is that, as useful as popular AI tools are for learners, as things stand they only enable us to take the very first steps on what is a long and complex journey of learning.
AI tools like ChatGPT 4o, Claude 3.5 & NotebookLM can help to give us access to information but (for now at least) the real work of learning remains in our – the humans’ – hands.
To which Anna Mills had a solid comment:
It might make a lot of sense to regulate generated audio to require some kind of watermark and/or metadata. Instructors who teach online and assign voice recordings, we need to recognize that these are now very easy and free to auto-generate. In some cases we are assigning this to discourage students from using AI to just autogenerate text responses, but audio is not immune.
From DSC: Whenever we’ve had a flat tire over the years, a tricky part of the repair process is jacking up the car so that no harm is done to the car (or to me!). There are some grooves underneath the Toyota Camry where one is supposed to put the jack. But as the car is very low to the ground, these grooves are very hard to find (even in good weather and light).
What’s needed is a robotic jack with vision.
If the jack had “vision” and had wheels on it, the device could locate the exact location of the grooves, move there, and then ask the owner whether they are ready for the car to be lifted up. The owner could execute that order when they are ready and the robotic jack could safely hoist the car up.
This type of robotic device is already out there in other areas. But this idea for assistance with replacing a flat tire represents an AI and robotic-based, consumer-oriented application that we’ll likely be seeing much more of in the future. Carmakers and suppliers, please add this one to your list!