Closing the digital use divide with active and engaging learning — from eschoolnews.com by Laura Ascione
Students offered insight into how to use active learning, with digital tools, to boost their engagement

When it comes to classroom edtech use, digital tools have a drastically different impact when they are used actively instead of passively–a critical difference examined in the 2023-2024 Speak Up Research by Project Tomorrow.

Students also outlined their ideal active learning technologies:

  • Collaboration tools to support projects
  • Student-teacher communication tools
  • Online databases for self-directed research
  • Multi-media tools for creating new content
  • Online and digital games
  • AI tools to support personalized learning
  • Coding and computer programming resources
  • Online animations, simulations, and virtual labs
  • Virtual reality equipment and content
 

(Excerpt from the 12/4/24 edition)

Robot “Jailbreaks”
In the year or so since large language models hit the big time, researchers have demonstrated numerous ways of tricking them into producing problematic outputs including hateful jokes, malicious code, phishing emails, and the personal information of users. It turns out that misbehavior can take place in the physical world, too: LLM-powered robots can easily be hacked so that they behave in potentially dangerous ways.

Researchers from the University of Pennsylvania were able to persuade a simulated self-driving car to ignore stop signs and even drive off a bridge, get a wheeled robot to find the best place to detonate a bomb, and force a four-legged robot to spy on people and enter restricted areas.

“We view our attack not just as an attack on robots,” says George Pappas, head of a research lab at the University of Pennsylvania who helped unleash the rebellious robots. “Any time you connect LLMs and foundation models to the physical world, you actually can convert harmful text into harmful actions.”

The robot “jailbreaks” highlight a broader risk that is likely to grow as AI models are increasingly used as a way for humans to interact with physical systems, or to enable AI agents to act autonomously on computers, the researchers involved say.


Virtual lab powered by ‘AI scientists’ super-charges biomedical research — from nature.com by Helena Kudiabor
Could human-AI collaborations be the future of interdisciplinary studies?

In an effort to automate scientific discovery using artificial intelligence (AI), researchers have created a virtual laboratory that combines several ‘AI scientists’ — large language models with defined scientific roles — that can collaborate to achieve goals set by human researchers.

The system, described in a preprint posted on bioRxiv last month, was able to design antibody fragments called nanobodies that can bind to the virus that causes COVID-19, proposing nearly 100 of these structures in a fraction of the time it would take an all-human research group.


Can AI agents accelerate AI implementation for CIOs? — from intelligentcio.com by Arun Shankar

By embracing an agent-first approach, every CIO can redefine their business operations. AI agents are now the number one choice for CIOs, as they come pre-built and can generate responses consistent with a company’s brand using trusted business data, explains Thierry Nicault at Salesforce Middle East.


AI Turns Photos Into 3D Real World — from theaivalley.com by Barsee

Here’s what you need to know:

  • The system generates full 3D environments that expand beyond what’s visible in the original image, allowing users to explore new perspectives.
  • Users can freely navigate and view the generated space with standard keyboard and mouse controls, similar to browsing a website.
  • It includes real-time camera effects like depth-of-field and dolly zoom, as well as interactive lighting and animation sliders to tweak scenes.
  • The system works with both photos and AI-generated images, enabling creators to integrate it with text-to-image tools or even famous works of art.

Why it matters:
This technology opens up exciting possibilities for industries like gaming, film, and virtual experiences. Soon, creating fully immersive worlds could be as simple as generating a static image.

Also related, see:

From World Labs

Today we’re sharing our first step towards spatial intelligence: an AI system that generates 3D worlds from a single image. This lets you step into any image and explore it in 3D.

Most GenAI tools make 2D content like images or videos. Generating in 3D instead improves control and consistency. This will change how we make movies, games, simulators, and other digital manifestations of our physical world.

In this post you’ll explore our generated worlds, rendered live in your browser. You’ll also experience different camera effects, 3D effects, and dive into classic paintings. Finally, you’ll see how creators are already building with our models.


Addendum on 12/5/24:

 

AI Tutors: Hype or Hope for Education? — from educationnext.org by John Bailey and John Warner
In a new book, Sal Khan touts the potential of artificial intelligence to address lagging student achievement. Our authors weigh in.

In Salman Khan’s new book, Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing) (Viking, 2024), the Khan Academy founder predicts that AI will transform education by providing every student with a virtual personalized tutor at an affordable cost. Is Khan right? Is radically improved achievement for all students within reach at last? If so, what sorts of changes should we expect to see, and when? If not, what will hold back the AI revolution that Khan foresees? John Bailey, a visiting fellow at the American Enterprise Institute, endorses Khan’s vision and explains the profound impact that AI technology is already making in education. John Warner, a columnist for the Chicago Tribune and former editor for McSweeney’s Internet Tendency, makes the case that all the hype about AI tutoring is, as Macbeth quips, full of sound and fury, signifying nothing.

 

How to Secure Your 2025 Legal Tech — from americanbar.org by Rachel Bailey

Summary

  • With firms increasingly open to AI tools, now is an exciting time to do some blue-sky thinking about your firm’s technology as a whole.
  • This is a chance for teams to envision the future of their firm’s technology landscape and make bold choices that align with long-term goals.
  • Learn six tips that will improve your odds of approval for your legal tech budget.

Also relevant, see:


Why Technology-Driven Law Firms Are Poised For Long-Term Success — from forbes.com by Daniel Farrar

Client expectations have shifted significantly in today’s technology-driven world. Quick communication and greater transparency are now a priority for clients throughout the entire case life cycle. This growing demand for tech-enhanced processes comes not only from clients but also from staff, and is set to rise even further as more advances become available.

I see the shift to cloud-based digital systems, especially for small and midsized law firms, as evening the playing field by providing access to robust tools that can aid legal services. Here are some examples of how legal professionals are leveraging tech every day…


Just 10% of law firms have a GenAI policy, new Thomson Reuters report shows — from legaltechnology.com by Caroline Hill

Just 10% of law firms and 21% of corporate legal teams have now implemented policies to guide their organisation’s use of generative AI, according to a report out today (2 December) from Thomson Reuters.


AI & Law Symposium: Students Exploring Innovation, Challenges, and Legal Implications of a Technological Revolution — from allard.ubc.ca

Artificial Intelligence (AI) has been rapidly deployed around the world in a growing number of sectors, offering unprecedented opportunities while raising profound legal and ethical questions. This symposium will explore the transformative power of AI, focusing on its benefits, limitations, and the legal challenges it poses.

AI’s ability to revolutionize sectors such as healthcare, law, and business holds immense potential, from improving efficiency and access to services, to providing new tools for analysis and decision-making. However, the deployment of AI also introduces significant risks, including bias, privacy concerns, and ethical dilemmas that challenge existing legal and regulatory frameworks. As AI technologies continue to evolve, it is crucial to assess their implications critically to ensure responsible and equitable development.


The role of legal teams in creating AI ethics guardrails — from legaldive.com by Catherine Dawson
For organizations to balance the benefits of artificial intelligence with its risk, it’s important for counsel to develop policy on data governance and privacy.


How Legal Aid and Tech Collaboration Can Bridge the Justice Gap — from law.com by Kelli Raker and Maya Markovich
“Technology, when thoughtfully developed and implemented, has the potential to expand access to legal services significantly,” write Kelli Raker and Maya Markovich.

Challenges and Concerns
Despite the potential benefits, legal aid organizations face several hurdles in working with new technologies:

1. Funding and incentives: Most funding for legal aid is tied to direct legal representation, leaving little room for investment in general case management or exploration of innovative service delivery methods to exponentially scale impact.

2. Jurisdictional inconsistency: The lack of a unified court system or standardized forms across regions makes it challenging to develop accurate and widely applicable tech solutions in certain types of matters.

3. Organizational capacity: Many legal aid organizations lack the time and resources to thoroughly evaluate new tech offerings or collaboration opportunities or identify internal workflows and areas of unmet need with the highest chance for impact.

4. Data privacy and security: Legal aid providers need assurance that tech protects client data and avoids misuse of sensitive information.

5. Ethical considerations: There’s significant concern about the accuracy of information produced by consumer-facing technology and the potential for inadvertent unauthorized practice of law.

 
 

Below is an excerpt from 2024: The State of Generative AI in the Enterprise — from Menlo Ventures

  • Legal: Historically resistant to tech, the legal industry ($350 million in enterprise AI spend) is now embracing generative AI to manage massive amounts of unstructured data and automate complex, pattern-based workflows. The field broadly divides into litigation and transactional law, with numerous subspecialties. Rooted in litigation, Everlaw* focuses on legal holds, e-discovery, and trial preparation, while Harvey and Spellbook are advancing AI in transactional law with solutions for contract review, legal research, and M&A. Specific practice areas are also targets of AI innovation: EvenUp focuses on injury law, Garden on patents and intellectual property, Manifest on immigration and employment law, while Eve* is re-inventing plaintiff casework from client intake to resolution.

Excerpt from Brainyacts #250 (from 11/22/24) — from the Leveraging Generative AI in Client Interviews section

Here’s what the article from Forbes said:

  • CodeSignal, an AI tech company, has launched Conversation Practice, an AI-driven platform to help learners practice critical workplace communication and soft skills.
  • Conversation Practice uses multiple AI models and a natural spoken interface to simulate real-world scenarios and provide feedback.
  • The goal is to address the challenge of developing conversational skills through iterative practice, without the awkwardness of peer role-play.

What I learned about this software changed my perception about how I can prepare in the future for client meetings. Here’s what I’ve taken away from the potential use of this software in a legal practice setting:


Why Technology-Driven Law Firms Are Poised For Long-Term Success — from forbes.com by Daniel Farrar

I see the shift to cloud-based digital systems, especially for small and midsized law firms, as evening the playing field by providing access to robust tools that can aid legal services. Here are some examples of how legal professionals are leveraging tech every day:

    • Cloud-based case management solutions. These help enhance productivity through collaboration tools and automated workflows while keeping data secure.
    • E-discovery tools. These tools manage vast amounts of data and help speed up litigation processes.
    • Artificial intelligence. AI has helped automate tasks for legal professionals including for case management, research, contract review and predictive analytics.
 

2024: The State of Generative AI in the Enterprise — from menlovc.com (Menlo Ventures)
The enterprise AI landscape is being rewritten in real time. As pilots give way to production, we surveyed 600 U.S. enterprise IT decision-makers to reveal the emerging winners and losers.

This spike in spending reflects a wave of organizational optimism; 72% of decision-makers anticipate broader adoption of generative AI tools in the near future. This confidence isn’t just speculative—generative AI tools are already deeply embedded in the daily work of professionals, from programmers to healthcare providers.

Despite this positive outlook and increasing investment, many decision-makers are still figuring out what will and won’t work for their businesses. More than a third of our survey respondents do not have a clear vision for how generative AI will be implemented across their organizations. This doesn’t mean they’re investing without direction; it simply underscores that we’re still in the early stages of a large-scale transformation. Enterprise leaders are just beginning to grasp the profound impact generative AI will have on their organizations.


Business spending on AI surged 500% this year to $13.8 billion, says Menlo Ventures — from cnbc.com by Hayden Field

Key Points

  • Business spending on generative AI surged 500% this year, hitting $13.8 billion — up from just $2.3 billion in 2023, according to data from Menlo Ventures released Wednesday.
  • OpenAI ceded market share in enterprise AI, declining from 50% to 34%, per the report.
  • Amazon-backed Anthropic doubled its market share from 12% to 24%.

Microsoft quietly assembles the largest AI agent ecosystem—and no one else is close — from venturebeat.com by Matt Marshall

Microsoft has quietly built the largest enterprise AI agent ecosystem, with over 100,000 organizations creating or editing AI agents through its Copilot Studio since launch – a milestone that positions the company ahead in one of enterprise tech’s most closely watched and exciting segments.

The rapid adoption comes as Microsoft significantly expands its agent capabilities. At its Ignite conference [that started on 11/19/24], the company announced it will allow enterprises to use any of the 1,800 large language models (LLMs) in the Azure catalog within these agents – a significant move beyond its exclusive reliance on OpenAI’s models. The company also unveiled autonomous agents that can work independently, detecting events and orchestrating complex workflows with minimal human oversight.


Now Hear This: World’s Most Flexible Sound Machine Debuts — from blogs.nvidia.com
Using text and audio as inputs, a new generative AI model from NVIDIA can create any combination of music, voices and sounds.

Along these lines, also see:


AI Agents Versus Human Agency: 4 Ways To Navigate Our AI-Driven World — from forbes.com by Cornelia C. Walther

To understand the implications of AI agents, it’s useful to clarify the distinctions between AI, generative AI, and AI agents and explore the opportunities and risks they present to our autonomy, relationships, and decision-making.

AI Agents: These are specialized applications of AI designed to perform tasks or simulate interactions. AI agents can be categorized into:

    • Tool Agents…
    • Simulation Agents…

While generative AI creates outputs from prompts, AI agents use AI to act with intention, whether to assist (tool agents) or emulate (simulation agents). The latter’s ability to mirror human thought and action offers fascinating possibilities — and raises significant risks.

 

2024-11-22: The Race to the Top: Dario Amodei on AGI, Risks, and the Future of Anthropic — from emergentbehavior.co by Prakash (Ate-a-Pi)

Risks on the Horizon: ASL Levels
The two key risks Dario is concerned about are:

a) cyber, bio, radiological, nuclear (CBRN)
b) model autonomy

These risks are captured in Anthropic’s framework for understanding AI Safety Levels (ASL):

1. ASL-1: Narrow-task AI like Deep Blue (no autonomy, minimal risk).
2. ASL-2: Current systems like ChatGPT/Claude, which lack autonomy and don’t pose significant risks beyond information already accessible via search engines.
3. ASL-3: Agents arriving soon (potentially next year) that can meaningfully assist non-state actors in dangerous activities like cyber or CBRN (chemical, biological, radiological, nuclear) attacks. Security and filtering are critical at this stage to prevent misuse.
4. ASL-4: AI smart enough to evade detection, deceive testers, and assist state actors with dangerous projects. AI will be strong enough that you would want to use the model to do anything dangerous. Mechanistic interpretability becomes crucial for verifying AI behavior.
5. ASL-5: AGI surpassing human intelligence in all domains, posing unprecedented challenges.

Anthropic’s if/then framework ensures proactive responses: if a model demonstrates danger, the team clamps down hard, enforcing strict controls.
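The if/then structure described above can be sketched as a simple capability-to-response lookup. This is a hypothetical illustration only: the ASL level descriptions come from the article, but the trigger names and response strings below are assumptions made for the sketch, not Anthropic's actual policy mechanics.

```python
# Illustrative sketch of an if/then safety policy keyed to the ASL levels
# described above. Trigger names and response wording are hypothetical.

ASL_RESPONSES = {
    1: "standard release",                            # narrow-task AI (e.g., Deep Blue)
    2: "standard release with usage policies",        # current chatbots
    3: "strict security and misuse filtering",        # meaningful CBRN/cyber uplift
    4: "pause deployment pending interpretability review",
    5: "do not deploy",
}

def required_response(capability_evals: dict) -> str:
    """Return the response for the highest ASL level the evals trigger."""
    level = 1
    if capability_evals.get("general_assistant"):
        level = 2
    if capability_evals.get("cbrn_or_cyber_uplift"):
        level = 3
    if capability_evals.get("evades_oversight"):
        level = 4
    if capability_evals.get("broadly_superhuman"):
        level = 5
    return ASL_RESPONSES[level]
```

The point of the structure is that each new capability finding monotonically ratchets the required safeguards upward, which mirrors the "if a model demonstrates danger, clamp down hard" commitment.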



Should You Still Learn to Code in an A.I. World? — from nytimes.com
Coding boot camps once looked like the golden ticket to an economically secure future. But as that promise fades, what should you do? Keep learning, until further notice.

Compared with five years ago, the number of active job postings for software developers has dropped 56 percent, according to data compiled by CompTIA. For inexperienced developers, the plunge is an even worse 67 percent.
“I would say this is the worst environment for entry-level jobs in tech, period, that I’ve seen in 25 years,” said Venky Ganesan, a partner at the venture capital firm Menlo Ventures.

For years, the career advice from everyone who mattered — the Apple chief executive Tim Cook, your mother — was “learn to code.” It felt like an immutable equation: Coding skills + hard work = job.

Now the math doesn’t look so simple.

Also see:

AI builds apps in 2 mins flat — where the Neuron mentions this excerpt about Lovable:

There’s a new coding startup in town, and it just MIGHT have everybody else shaking in their boots (we’ll qualify that in a sec, don’t worry).

It’s called Lovable, the “world’s first AI fullstack engineer.”

Lovable does all of that by itself. Tell it what you want to build in plain English, and it creates everything you need. Want users to be able to log in? One click. Need to store data? One click. Want to accept payments? You get the idea.

Early users are backing up these claims. One person even launched a startup that made Product Hunt’s top 10 using just Lovable.

As for us, we made a Wordle clone in 2 minutes with one prompt. Only edit needed? More words in the dictionary. It’s like, really easy y’all.


When to chat with AI (and when to let it work) — from aiwithallie.beehiiv.com by Allie K. Miller

Re: some ideas on how to use Notebook LM:

  • Turn your company’s annual report into an engaging podcast
  • Create an interactive FAQ for your product manual
  • Generate a timeline of your industry’s history from multiple sources
  • Produce a study guide for your online course content
  • Develop a Q&A system for your company’s knowledge base
  • Synthesize research papers into digestible summaries
  • Create an executive content briefing from multiple competitor blog posts
  • Generate a podcast discussing the key points of a long-form research paper

Introducing conversation practice: AI-powered simulations to build soft skills — from codesignal.com by Albert Sahakyan

From DSC:
I have to admit I’m a bit suspicious here, as the “conversation practice” product seems a bit too scripted at times, but I post it because the idea of using AI to practice soft skills development makes a great deal of sense:


 

Skill-Based Training: Embrace the Benefits; Stay Wary of the Hype — from learningguild.com by Paige Yousey

1. Direct job relevance
One of the biggest draws of skill-based training is its direct relevance to employees’ daily roles. By focusing on teaching job-specific skills, this approach helps workers feel immediately empowered to apply what they learn, leading to a quick payoff for both the individual and the organization. Yet, while this tight focus is a major benefit, it’s important to consider some potential drawbacks that could arise from an overly narrow approach.

Be wary of:

  • Overly Narrow Focus: Highly specialized training might leave employees with little room to apply their skills to broader challenges, limiting versatility and growth potential.
  • Risk of Obsolescence: Skills can quickly become outdated, especially in fast-evolving industries. L&D leaders should aim for regular updates to maintain relevance.
  • Neglect of Soft Skills: While technical skills are crucial, ignoring soft skills like communication and problem-solving may lead to a lack of balanced competency.

2. Enhanced job performance…
3. Addresses skill gaps…

…and several more areas to consider


Another item from Paige Yousey

5 Key EdTech Innovations to Watch — from learningguild.com by Paige Yousey

AI-driven course design

Strengths

  • Content creation and updates: AI streamlines the creation of training materials by identifying resource gaps and generating tailored content, while also refreshing existing materials based on industry trends and employee feedback to maintain relevance.
  • Data-driven insights: Use AI tools to provide valuable analytics to inform course development and instructional strategies, helping learning designers identify effective practices and improve overall learning outcomes.
  • Efficiency: Automating repetitive tasks, such as learner assessments and administrative duties, enables L&D professionals to concentrate on developing impactful training programs and fostering learner engagement.

Concerns

  • Limited understanding of context: AI may struggle to understand the specific educational context or the unique needs of diverse learner populations, potentially hindering effectiveness.
  • Oversimplification of learning: AI may reduce complex educational concepts to simple metrics or algorithms, oversimplifying the learning process and neglecting deeper cognitive development.
  • Resistance to change: Learning leaders may face resistance from staff who are skeptical about integrating AI into their training practices.

Also from the Learning Guild, see:

Use Twine to Easily Create Engaging, Immersive Scenario-Based Learning — from learningguild.com by Bill Brandon

Scenario-based learning immerses learners in realistic scenarios that mimic real-world challenges they might face in their roles. These learning experiences are highly relevant and relatable. SBL is active learning. Instead of passively consuming information, learners actively engage with the content by making decisions and solving problems within the scenario. This approach enhances critical thinking and decision-making skills.

SBL can be more effective when storytelling techniques create a narrative that guides learners through the scenario to maintain engagement and make the learning memorable. Learners receive immediate feedback on their decisions and learn from their mistakes. Reflection can deepen their understanding. Branching scenarios add simulated complex decision-making processes and show the outcome of various actions through interactive scenarios where learner choices lead to different outcomes.
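The branching structure the article describes (passages, learner choices, different outcomes, immediate feedback) can be sketched in a few lines. Twine itself uses its own story formats rather than Python; the scenario content and structure below are invented purely to illustrate the idea of a branching scenario:

```python
# Minimal sketch of a branching scenario: passages, choices, and feedback.
# The scenario text is hypothetical; Twine would express this as linked passages.

SCENARIO = {
    "start": {
        "text": "A client emails that their order arrived damaged. What do you do?",
        "choices": {
            "apologize": "good_end",   # own the problem, offer a replacement
            "deflect": "bad_end",      # blame the courier
        },
    },
    "good_end": {"text": "The client stays. Feedback: empathy first.", "choices": {}},
    "bad_end": {"text": "The client leaves. Feedback: own the problem.", "choices": {}},
}

def step(passage_id, choice=None):
    """Return the next passage id after taking `choice`, or stay put at an ending."""
    passage = SCENARIO[passage_id]
    if choice is None or not passage["choices"]:
        return passage_id
    return passage["choices"][choice]
```

Each ending passage carries the immediate feedback the article recommends, and adding a new branch is just another entry in a passage's `choices` map.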

Embrace the Future: Why L&D Leaders Should Prioritize AI Digital Literacy — from learningguild.com by Dr. Erica McCaig

The role of L&D leaders in AI digital literacy
For L&D leaders, developing AI digital literacy within an organization requires a well-structured curriculum and development plan that equips employees with the knowledge, skills, and ethical grounding needed to thrive in an AI-augmented workplace. This curriculum should encompass a range of competencies that enhance technical understanding and foster a mindset ready for innovation and responsible use of AI. Key areas to focus on include:

  • Understanding AI Fundamentals: …
  • Proficiency with AI Tools: …
  • Ethical Considerations: …
  • Cultivating Critical Thinking: …
 

How to use NotebookLM for personalized knowledge synthesis — from ai-supremacy.com by Michael Spencer and Alex McFarland
Two powerful workflows that unlock everything else. Intro: Golden Age of AI Tools and AI agent frameworks begins in 2025.

What is Google Learn about?
Google’s new AI tool, Learn About, is designed as a conversational learning companion that adapts to individual learning needs and curiosity. It allows users to explore various topics by entering questions, uploading images or documents, or selecting from curated topics. The tool aims to provide personalized responses tailored to the user’s knowledge level, making it user-friendly and engaging for learners of all ages.

Is Generative AI leading to a new take on Educational technology? It certainly appears promising heading into 2025.

The Learn About tool utilizes the LearnLM AI model, which is grounded in educational research and focuses on how people learn. Google insists that unlike traditional chatbots, it emphasizes interactive and visual elements in its responses, enhancing the educational experience. For instance, when asked about complex topics like the size of the universe, Learn About not only provides factual information but also includes related content, vocabulary building tools, and contextual explanations to deepen understanding.

 

What Teacher Pay and Benefits Look Like, in Charts — from edweek.org by Sarah D. Sparks


Special education staffing shortages put students’ futures at risk. How to solve that is tricky. — from chalkbeat.org by Kalyn Belsha

The debate comes as the number of students with disabilities is growing. Some 7.5 million students required special education services as of the 2022-23 school year, the latest federal data shows, or around 15% of students. That was up from 7.1 million or 14% of students in the 2018-19 school year, just before the pandemic hit.

It’s unclear if the rise is due to schools getting better at identifying students with disabilities or if more children have needs now. Many young children missed early intervention and early special education services during the pandemic, and many educators say they are seeing higher behavioral needs and wider academic gaps in their classrooms.

“Students are arriving in our classrooms with a high level of dysregulation, which is displayed through their fight, flight, or freeze responses,” Tiffany Anderson, the superintendent of Topeka, Kansas’ public schools, wrote in her statement. “Students are also displaying more physically aggressive behavior.”


Expanding Access, Value and Experiences Through Credentialing — from gettingsmart.com by Nate McClennen, Tom Vander Ark and Mason Pashia
A Landscape Analysis of Credentialing and Its Impact on K-12

Executive Summary

This report examines the evolving landscape of credentialing and learner records within global education systems, highlighting a shift from traditional time-based signals—such as courses and grades—to competency-based signals (credentials and learner records).

Also recommended by Getting Smart, see:


Retrieval practice improves learning for neurodiverse students — by Pooja K. Agarwal, Ph.D.

In my 15+ years of teaching, I have had students with autism spectrum disorder, ADHD, dyslexia, and a range of learning disabilities. I have grown in my understanding of inclusive teaching practices and I strive to incorporate universal design principles in my teaching.

From my classroom experience, I know that retrieval practice improves learning for all of my students, including those who are neurodiverse. But what have researchers found about retrieval practice with neurodiverse learners?

(Side note: If you’d like an intro on neurodiversity and what it means in the classroom, I recommend this podcast episode from The Learning Scientists and this podcast episode from Teaching in Higher Ed. For teaching tips, I recommend this article from the University of Illinois CITL.)


Instructure Is Ready To Lead The Next Evolution In Learning — from forbes.com by Ray Ravaglia

Learning Management In The AI Future
While LMS platforms like Canvas have positively impacted education, they’ve rarely lived up to their potential for personalized learning. With the advent of artificial intelligence (AI), this is set to change in revolutionary ways.

The promise of AI lies in its ability to automate repetitive tasks associated with student assessment and management, freeing educators to focus on education. More significantly, AI has the potential to go beyond the narrow focus on the end products of learning (like assignments) to capture insights into the learning process itself. This means analyzing the entire transcript of activities within the LMS, providing a dynamic, data-driven view of student progress rather than just seeing signposts of where students have been and what they have taken away.

Things become more potent by moving away from a particular student’s traversal of a specific course to looking at large aggregations of students traversing similar courses. This is why Instructure’s acquisition of Parchment, a company specializing in credential and transcript management, is so significant.


Sharpen your students’ interview skills — from timeshighereducation.com by Lewis Humphreys (though higher education-related, this is still solid information for those in K12)
The employees of the future will need to showcase their skills in job interviews. Make sure they’re prepared for each setting, writes Lewis Humphreys

In today’s ultra-competitive job market, strong interview skills are paramount for students taking their first steps into the professional world. University careers services play a crucial role in equipping our students with the tools and confidence needed to excel in a range of interview settings. From pre-recorded video interviews to live online sessions and traditional face-to-face meetings, students must be adaptable and well-prepared. Here, I’ll explore ways universities can teach interview skills to students and graduates, helping them to present themselves and their skills in the best light possible.

 

Introducing Copilot Actions, new agents, and tools to empower IT teams — from microsoft.com by Jared Spataro

[On November 19th] at Microsoft Ignite 2024, we’re accelerating our ambition to empower every employee with Copilot as a personal assistant and to transform every business process with agents built in Microsoft Copilot Studio.

Announcements include:

  • Copilot Actions in Microsoft 365 Copilot to help you automate everyday repetitive tasks.
  • New agents in Microsoft 365 to unlock SharePoint knowledge, provide real-time language interpretation in Microsoft Teams meetings, and automate employee self-service.
  • The Copilot Control System to help IT professionals confidently manage Copilot and agents securely.

These announcements build on our wave 2 momentum, including the new autonomous agent capabilities that we announced in October 2024.

Per the Rundown AI:
By integrating AI agents directly into Microsoft’s billion-plus users’ daily workflows, this release could normalize agentic AI faster than any previous rollout. Just as users now reach for specific apps or plugins to solve particular problems, specialized agents could soon become the natural first stop for getting work done.

Along these lines, also see:

AI agents — what they are, and how they’ll change the way we work — from news.microsoft.com by Susanna Ray

An agent takes the power of generative AI a step further, because instead of just assisting you, agents can work alongside you or even on your behalf. Agents can do a range of things, from responding to questions to handling more complicated or multistep assignments. What sets them apart from a personal assistant is that they can be tailored to have a particular expertise.

For example, you could create an agent to know everything about your company’s product catalog so it can draft detailed responses to customer questions or automatically compile product details for an upcoming presentation.

Microsoft pitches AI ‘agents’ that can perform tasks on their own at Ignite 2024 — from techxplore.com
Microsoft CEO Satya Nadella told customers at a conference in Chicago on Tuesday that the company is teaching a new set of artificial intelligence tools how to “act on our behalf across our work and life.”


From DSC:
I am not trying to push all things AI. I and others have serious concerns about agents and other AI-based technologies, especially:

  • When competitive juices get going and such forces throw people and companies into a sort of AI arms race
  • When many people haven’t yet obtained the wisdom of reflecting on things like “just because we CAN build this doesn’t mean we SHOULD build it”
  • When governments seek to be the leader of AI due to military applications (and yes, I’m looking at the U.S. Federal Government especially here)
  • Etc., etc.

But there are also areas where I’m more hopeful and positive about AI-related technologies — such as providing personalized learning and productivity tools (like those from Microsoft above).

 

VR training aims to help doctors avoid bias — from inavateonthenet.net

A new virtual reality training programme aims to tackle bias in healthcare settings by helping practitioners recognize, understand, and address implicit bias towards Black mothers.

Participants in the program at the University of Illinois Urbana-Champaign underwent a series of three modules, with the first module focusing on implicit bias and how it can negatively affect a patient at a doctor’s appointment.

 

What DICE does in this posting will be available 24x7x365 in the future [Christian]

From DSC:
First of all, when you look at the following posting:


What Top Tech Skills Should You Learn for 2025? — from dice.com by Nick Kolakowski


…you will see that they outline which skills you should consider mastering in 2025 if you want to stay on top of the latest career opportunities. They then provide more information about each skill, how to apply it, and WHERE to acquire it.

I assert that in the future, people will be able to see this information on a 24x7x365 basis.

  • Which jobs are in demand?
  • What skills do I need to do those jobs?
  • WHERE do I get/develop those skills?


And that last part (WHERE to develop those skills) will pull from many different institutions, people, companies, etc.

BUT PEOPLE are the key! Oftentimes, we need to — and prefer to — learn with others!


 

Five key issues to consider when adopting an AI-based legal tech — from legalfutures.co.uk by Mark Hughes

As more of our familiar legal resources have started to embrace a generative AI overhaul, and new players have come to the market, there are some key issues that your law firm needs to consider when adopting AI-based legal tech.

  • Licensing
  • Data protection
  • The data sets
  • …and others

Knowable Introduces Gen AI Tool It Says Will Revolutionize How Companies Interact with their Contracts — from lawnext.com by Bob Ambrogi

Knowable, a legal technology company specializing in helping organizations bring order and organization to their executed agreements, has announced Ask Knowable, a suite of generative AI-powered tools aimed at transforming how legal teams interact with and understand what is in their contracts.

Released today as a commercial preview and set to launch for general availability in March 2025, the feature marks a significant step forward in leveraging large language models to address the complexities of contract management, the company says.


The Global Legal Post teams up with LexisNexis to explore challenges and opportunities of Gen AI adoption — from globallegalpost.com
Series of articles will investigate key criteria to consider when investing in Gen AI

The Global Legal Post has teamed up with LexisNexis to help inform readers’ decision-making in the selection of generative AI (Gen AI) legal research solutions.

The Generative AI Legal Research Hub in association with LexisNexis will host a series of articles exploring the key criteria law firms and legal departments should consider when seeking to harness the power of Gen AI to improve the delivery of legal services.


Leveraging AI to Grow Your Legal Practice — from americanbar.org

Summary

  • AI-powered tools like chat and scheduling meet clients’ demand for instant, personalized service, improving engagement and satisfaction.
  • Firms using AI see up to a 30% increase in lead conversion, cutting client acquisition costs and maximizing marketing investments.
  • AI streamlines processes, speeds up response times, and enhances client engagement—driving growth and long-term client retention.

How a tech GC views AI-enabled efficiencies and regulation — from legaldive.com by Justin Bachman
PagerDuty’s top in-house counsel sees legal AI tools as a way to scale resources without adding headcount while focusing lawyers on their high-value work.


Innovations in Legal Practice: How Tim Billick’s Firm Stays Ahead with AI and Technology — from techtimes.com by Elena McCormick

Enhancing Client Service through Technology
Beyond internal efficiency, Billick’s firm utilizes technology to improve client communication and engagement. By adopting client-facing AI tools, such as chatbots for routine inquiries and client portals for real-time updates, Practus makes legal processes more transparent and accessible to its clients. According to Billick, this responsiveness is essential in IP law, where clients often need quick updates and answers to time-sensitive questions about patents, trademarks, and licensing agreements.

AI-driven client management software is also part of the firm’s toolkit, enabling Billick and his team to track each client’s case progress and share updates efficiently. The firm’s technology infrastructure supports clients from various sectors, including engineering, software development, and consumer products, tailoring case workflows to meet unique needs within each industry. “Clients appreciate having immediate access to their case status, especially in industries where timing is crucial,” Billick shares.


New Generative AI Study Highlights Adoption, Use and Opportunities in the Legal Industry — from prnewswire.com by Relativity

CHICAGO, Nov. 12, 2024 /PRNewswire/ — Relativity, a global legal technology company, today announced findings from the IDC InfoBrief, Generative AI in Legal 2024, commissioned by Relativity. The study uncovers the rapid increase of generative AI adoption in the legal field, examining how legal professionals are navigating emerging challenges and seizing opportunities to drive legal innovation.

The international study surveyed attorneys, paralegals, legal operations professionals and legal IT professionals from law firms, corporations and government agencies. Respondents were located in Australia, Canada, Ireland, New Zealand, the United Kingdom and the United States. The data uncovered important trends on how generative AI has impacted the legal industry and how legal professionals will use generative AI in the coming years.

 
© 2024 | Daniel Christian