When it comes to classroom edtech use, digital tools have a drastically different impact when they are used actively rather than passively, a critical difference examined in the 2023-2024 Speak Up Research from Project Tomorrow.
Students also outlined their ideal active learning technologies:
How AI is transforming learning for dyslexic students — from eschoolnews.com by Samay Bhojwani, University of Nebraska–Lincoln As schools continue to adopt AI-driven tools, educators can close the accessibility gap and help dyslexic students thrive
Many traditional methods lack customization and don’t empower students to fully engage with content on their terms. Every dyslexic student experiences challenges differently, so a more personalized approach is essential for fostering comprehension, engagement, and academic growth.
…
Artificial intelligence is increasingly recognized for its potential to transform educational accessibility. By analyzing individual learning patterns, AI-powered tools can tailor content to meet each student’s specific needs. For dyslexic students, this can mean summarizing complex texts, providing auditory support, or even visually structuring information in ways that aid comprehension.
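To make the "visually structuring information" idea concrete, here is a minimal, hypothetical sketch of one dyslexia-friendly formatting step: reflowing prose into short lines with one sentence per visual chunk. This is an illustration only, not how any particular product works; real AI tools adapt the presentation per student.

```python
import textwrap

def dyslexia_friendly(text: str, max_width: int = 45) -> str:
    """Reflow text into short lines with a blank line between
    sentences -- a formatting style often recommended for
    dyslexic readers. Illustrative sketch only."""
    lines = []
    for sentence in text.replace("\n", " ").split(". "):
        sentence = sentence.strip()
        if not sentence:
            continue
        if not sentence.endswith("."):
            sentence += "."
        # Break each sentence into short, scannable lines
        lines.extend(textwrap.wrap(sentence, width=max_width))
        lines.append("")  # blank line separates sentences
    return "\n".join(lines).rstrip()

sample = ("Artificial intelligence can tailor content to each learner. "
          "For dyslexic students that might mean shorter lines, wider "
          "spacing, and one idea per visual chunk.")
print(dyslexia_friendly(sample))
```

An AI-powered pipeline would layer summarization and text-to-speech on top of simple restructuring like this, choosing parameters per student rather than hard-coding a line width.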
NotebookLM How-to Guide 2024 — from ai-supremacy.com by Michael Spencer and Alex McFarland With Audio Version | A popular guide reloaded.
In this guide, I’ll show you:
How to use the new advanced audio customization features
Two specific workflows for synthesizing information (research papers and YouTube videos)
Pro tips for maximizing results with any type of content
Common pitfalls to avoid (learned these the hard way)
Google DeepMind researchers have unveiled a groundbreaking framework called Boundless Socratic Learning (BSL), a paradigm shift in artificial intelligence aimed at enabling systems to self-improve through structured language-based interactions. This approach could mark a pivotal step toward the elusive goal of artificial superintelligence (ASI), where AI systems drive their own development with minimal human input.
The promise of Boundless Socratic Learning lies in its ability to catalyze a shift from human-supervised AI to systems that evolve and improve autonomously. While significant challenges remain, the introduction of this framework represents a step toward the long-term goal of open-ended intelligence, where AI is not just a tool but a partner in discovery.
This surge in demand is creating new opportunities for professionals equipped with the right skills. If you’re considering a career in this innovative field, the following five courses provide a solid foundation for getting started in agentic AI.
AI Tutors: Hype or Hope for Education? — from educationnext.org by John Bailey and John Warner In a new book, Sal Khan touts the potential of artificial intelligence to address lagging student achievement. Our authors weigh in.
In Salman Khan’s new book, Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing) (Viking, 2024), the Khan Academy founder predicts that AI will transform education by providing every student with a virtual personalized tutor at an affordable cost. Is Khan right? Is radically improved achievement for all students within reach at last? If so, what sorts of changes should we expect to see, and when? If not, what will hold back the AI revolution that Khan foresees? John Bailey, a visiting fellow at the American Enterprise Institute, endorses Khan’s vision and explains the profound impact that AI technology is already making in education. John Warner, a columnist for the Chicago Tribune and former editor for McSweeney’s Internet Tendency, makes the case that all the hype about AI tutoring is, as Macbeth quips, full of sound and fury, signifying nothing.
By becoming early adopters, law firms can address two critical challenges in professional development:
1. Empowering Educators and Mentors
Generative AI equips legal educators, practice group leaders, and mentors with tools to amplify their impact. It assists with:
Content generation: …
Research facilitation: …
Skill-building frameworks: …
… 2. Cracking the Personalized Learning Code
Every lawyer’s learning needs are unique. Generative AI delivers hyper-personalized educational experiences that adapt to an individual’s role, practice area, and career stage. This addresses the “Two Sigma Problem” (the dramatic performance gains of one-on-one tutoring) by making tailored learning scalable and actionable. Imagine:
AI-driven tutors: …
Instant feedback loops: …
Adaptive learning models: …
Case Study: Building AI Tutors in Legal Education
… Moving Beyond CLEs: A New Vision for Professional Development…
Client expectations have shifted significantly in today’s technology-driven world. Quick communication and greater transparency are now a priority for clients throughout the entire case life cycle. This growing demand for tech-enhanced processes comes not only from clients but also from staff, and is set to rise even further as more advances become available.
…
Just 10% of law firms and 21% of corporate legal teams have now implemented policies to guide their organisation’s use of generative AI, according to a report out today (2 December) from Thomson Reuters.
Artificial Intelligence (AI) has been rapidly deployed around the world in a growing number of sectors, offering unprecedented opportunities while raising profound legal and ethical questions. This symposium will explore the transformative power of AI, focusing on its benefits, limitations, and the legal challenges it poses.
AI’s ability to revolutionize sectors such as healthcare, law, and business holds immense potential, from improving efficiency and access to services, to providing new tools for analysis and decision-making. However, the deployment of AI also introduces significant risks, including bias, privacy concerns, and ethical dilemmas that challenge existing legal and regulatory frameworks. As AI technologies continue to evolve, it is crucial to assess their implications critically to ensure responsible and equitable development.
The role of legal teams in creating AI ethics guardrails — from legaldive.com by Catherine Dawson For organizations to balance the benefits of artificial intelligence with its risk, it’s important for counsel to develop policy on data governance and privacy.
How Legal Aid and Tech Collaboration Can Bridge the Justice Gap — from law.com by Kelli Raker and Maya Markovich “Technology, when thoughtfully developed and implemented, has the potential to expand access to legal services significantly,” write Kelli Raker and Maya Markovich.
Challenges and Concerns Despite the potential benefits, legal aid organizations face several hurdles in working with new technologies:
1. Funding and incentives: Most funding for legal aid is tied to direct legal representation, leaving little room for investment in general case management or exploration of innovative service delivery methods to exponentially scale impact.
2. Jurisdictional inconsistency: The lack of a unified court system or standardized forms across regions makes it challenging to develop accurate and widely applicable tech solutions in certain types of matters.
3. Organizational capacity: Many legal aid organizations lack the time and resources to thoroughly evaluate new tech offerings or collaboration opportunities or identify internal workflows and areas of unmet need with the highest chance for impact.
4. Data privacy and security: Legal aid providers need assurance that tech protects client data and avoids misuse of sensitive information.
5. Ethical considerations: There’s significant concern about the accuracy of information produced by consumer-facing technology and the potential for inadvertent unauthorized practice of law.
Institutions are balancing capacity issues and rapid technological advancements—including artificial intelligence—while addressing a loss of trust in higher education.
To adapt to the future, technology and data leaders must work strategically to restore trust, prepare for policy updates, and plan for online education growth.
Legal: Historically resistant to tech, the legal industry ($350 million in enterprise AI spend) is now embracing generative AI to manage massive amounts of unstructured data and automate complex, pattern-based workflows. The field broadly divides into litigation and transactional law, with numerous subspecialties. Rooted in litigation, Everlaw* focuses on legal holds, e-discovery, and trial preparation, while Harvey and Spellbook are advancing AI in transactional law with solutions for contract review, legal research, and M&A. Specific practice areas are also seeing targeted AI innovations: EvenUp focuses on injury law, Garden on patents and intellectual property, Manifest on immigration and employment law, while Eve* is re-inventing plaintiff casework from client intake to resolution.
CodeSignal, an AI tech company, has launched Conversation Practice, an AI-driven platform to help learners practice critical workplace communication and soft skills.
Conversation Practice uses multiple AI models and a natural spoken interface to simulate real-world scenarios and provide feedback.
The goal is to address the challenge of developing conversational skills through iterative practice, without the awkwardness of peer role-play.
What I learned about this software changed my perception about how I can prepare in the future for client meetings. Here’s what I’ve taken away from the potential use of this software in a legal practice setting:
I see the shift to cloud-based digital systems, especially for small and midsized law firms, as evening the playing field by providing access to robust tools that can aid legal services. Here are some examples of how legal professionals are leveraging tech every day:
Cloud-based case management solutions. These help enhance productivity through collaboration tools and automated workflows while keeping data secure.
E-discovery tools. These tools manage vast amounts of data and help speed up litigation processes.
Artificial intelligence. AI has helped automate tasks for legal professionals including for case management, research, contract review and predictive analytics.
2024: The State of Generative AI in the Enterprise — from menlovc.com (Menlo Ventures) The enterprise AI landscape is being rewritten in real time. As pilots give way to production, we surveyed 600 U.S. enterprise IT decision-makers to reveal the emerging winners and losers.
This spike in spending reflects a wave of organizational optimism; 72% of decision-makers anticipate broader adoption of generative AI tools in the near future. This confidence isn’t just speculative—generative AI tools are already deeply embedded in the daily work of professionals, from programmers to healthcare providers.
Despite this positive outlook and increasing investment, many decision-makers are still figuring out what will and won’t work for their businesses. More than a third of our survey respondents do not have a clear vision for how generative AI will be implemented across their organizations. This doesn’t mean they’re investing without direction; it simply underscores that we’re still in the early stages of a large-scale transformation. Enterprise leaders are just beginning to grasp the profound impact generative AI will have on their organizations.
Business spending on generative AI surged 500% this year, hitting $13.8 billion — up from just $2.3 billion in 2023, according to data from Menlo Ventures released Wednesday.
OpenAI ceded market share in enterprise AI, declining from 50% to 34%, per the report.
Amazon-backed Anthropic doubled its market share from 12% to 24%.
Microsoft has quietly built the largest enterprise AI agent ecosystem, with over 100,000 organizations creating or editing AI agents through its Copilot Studio since launch – a milestone that positions the company ahead in one of enterprise tech’s most closely watched and exciting segments.
…
The rapid adoption comes as Microsoft significantly expands its agent capabilities. At its Ignite conference [that started on 11/19/24], the company announced it will allow enterprises to use any of the 1,800 large language models (LLMs) in the Azure catalog within these agents – a significant move beyond its exclusive reliance on OpenAI’s models. The company also unveiled autonomous agents that can work independently, detecting events and orchestrating complex workflows with minimal human oversight.
To understand the implications of AI agents, it’s useful to clarify the distinctions between AI, generative AI, and AI agents and explore the opportunities and risks they present to our autonomy, relationships, and decision-making.
… AI Agents: These are specialized applications of AI designed to perform tasks or simulate interactions. AI agents can be categorized into:
Tool Agents…
Simulation Agents…
While generative AI creates outputs from prompts, AI agents use AI to act with intention, whether to assist (tool agents) or emulate (simulation agents). The latter’s ability to mirror human thought and action offers fascinating possibilities — and raises significant risks.
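The distinction above can be sketched in a few lines of code: where a generative model maps a prompt to an output in one shot, a tool agent runs a loop in which a planner proposes actions and the agent executes them until the goal is met. The planner below is a scripted stub standing in for an LLM call, and the tool names are invented for illustration.

```python
# Minimal "tool agent" loop: a planner proposes the next action,
# the agent executes the matching tool, and results accumulate
# until the planner signals completion.

TOOLS = {
    "add": lambda a, b: a + b,
    "multiply": lambda a, b: a * b,
}

def scripted_planner(goal, history):
    """Stand-in for a language model: returns the next
    (tool_name, args) action, or None when done."""
    if not history:
        return ("add", (2, 3))                   # step 1: 2 + 3
    if len(history) == 1:
        return ("multiply", (history[-1], 10))   # step 2: result * 10
    return None                                  # goal satisfied

def run_agent(goal):
    history = []
    while True:
        action = scripted_planner(goal, history)
        if action is None:
            return history[-1]
        name, args = action
        history.append(TOOLS[name](*args))

print(run_agent("compute (2 + 3) * 10"))  # prints 50
```

The intent-driven loop, rather than the single prompt-to-output call, is what gives agents both their usefulness and the risks the excerpt describes.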
Risks on the Horizon: ASL Levels
The two key risks Dario is concerned about are:
a) cyber and CBRN (chemical, biological, radiological, nuclear) risks
b) model autonomy
These risks are captured in Anthropic’s framework for understanding AI Safety Levels (ASL):
1. ASL-1: Narrow-task AI like Deep Blue (no autonomy, minimal risk).
2. ASL-2: Current systems like ChatGPT/Claude, which lack autonomy and don’t pose significant risks beyond information already accessible via search engines.
3. ASL-3: Agents arriving soon (potentially next year) that can meaningfully assist non-state actors in dangerous activities like cyber or CBRN (chemical, biological, radiological, nuclear) attacks. Security and filtering are critical at this stage to prevent misuse.
4. ASL-4: AI smart enough to evade detection, deceive testers, and assist state actors with dangerous projects; at this level the model itself is capable enough that misuse becomes genuinely dangerous. Mechanistic interpretability becomes crucial for verifying AI behavior.
5. ASL-5: AGI surpassing human intelligence in all domains, posing unprecedented challenges.
Anthropic’s if/then framework ensures proactive responses: if a model demonstrates danger, the team clamps down hard, enforcing strict controls.
Should You Still Learn to Code in an A.I. World? — from nytimes.com Coding boot camps once looked like the golden ticket to an economically secure future. But as that promise fades, what should you do? Keep learning, until further notice.
Compared with five years ago, the number of active job postings for software developers has dropped 56 percent, according to data compiled by CompTIA. For inexperienced developers, the plunge is an even worse 67 percent.
“I would say this is the worst environment for entry-level jobs in tech, period, that I’ve seen in 25 years,” said Venky Ganesan, a partner at the venture capital firm Menlo Ventures.
For years, the career advice from everyone who mattered — the Apple chief executive Tim Cook, your mother — was “learn to code.” It felt like an immutable equation: Coding skills + hard work = job.
There’s a new coding startup in town, and it just MIGHT have everybody else shaking in their boots (we’ll qualify that in a sec, don’t worry).
It’s called Lovable, the “world’s first AI fullstack engineer.”
… Lovable does all of that by itself. Tell it what you want to build in plain English, and it creates everything you need. Want users to be able to log in? One click. Need to store data? One click. Want to accept payments? You get the idea.
Early users are backing up these claims. One person even launched a startup that made Product Hunt’s top 10 using just Lovable.
As for us, we made a Wordle clone in 2 minutes with one prompt. Only edit needed? More words in the dictionary. It’s like, really easy y’all.
From DSC: I have to admit I’m a bit suspicious here, as the “conversation practice” product seems a bit too scripted at times, but I post it because the idea of using AI to practice soft skills development makes a great deal of sense:
This is mind-blowing!
NVIDIA has introduced Edify 3D, a 3D AI generator that lets us create high-quality 3D scenes using just a simple prompt. And all the assets are fully editable!
In-house legal teams are changing from a traditional support function into proactive business enablers. New tools are helping legal departments enhance efficiency, improve compliance, and deliver greater strategic value.
Here’s a look at seven emerging trends that will shape legal tech in 2025 and insights on how in-house teams can capitalise on these innovations.
1. AI Solutions…
2. Regulatory Intelligence Platforms…
…
7. Self-Service Legal Tools and Knowledge Management
As the demand on in-house legal teams continues to grow, self-service tools are becoming indispensable for managing routine legal tasks. In 2025, these tools are expected to evolve further, enabling employees across the organisation to handle straightforward legal processes independently. Whether it’s accessing pre-approved templates, completing standard agreements, or finding answers to common legal queries, self-service platforms reduce the dependency on legal teams for everyday tasks.
Advanced self-service tools go beyond templates, incorporating intuitive workflows, approval pathways, and built-in guidance to ensure compliance with legal and organisational policies. By empowering business users to manage low-risk matters on their own, these tools free up legal teams to focus on complex and high-value work.
How to use NotebookLM for personalized knowledge synthesis — from ai-supremacy.com by Michael Spencer and Alex McFarland Two powerful workflows that unlock everything else. Intro: Golden Age of AI Tools and AI agent frameworks begins in 2025.
What is Google’s Learn About? Google’s new AI tool, Learn About, is designed as a conversational learning companion that adapts to individual learning needs and curiosity. It allows users to explore various topics by entering questions, uploading images or documents, or selecting from curated topics. The tool aims to provide personalized responses tailored to the user’s knowledge level, making it user-friendly and engaging for learners of all ages.
Is Generative AI leading to a new take on Educational technology? It certainly appears promising heading into 2025.
The Learn About tool utilizes the LearnLM AI model, which is grounded in educational research and focuses on how people learn. Google insists that unlike traditional chatbots, it emphasizes interactive and visual elements in its responses, enhancing the educational experience. For instance, when asked about complex topics like the size of the universe, Learn About not only provides factual information but also includes related content, vocabulary building tools, and contextual explanations to deepen understanding.
[On November 19th] at Microsoft Ignite 2024, we’re accelerating our ambition to empower every employee with Copilot as a personal assistant and to transform every business process with agents built in Microsoft Copilot Studio.
Announcements include:
Copilot Actions in Microsoft 365 Copilot to help you automate everyday repetitive tasks.
New agents in Microsoft 365 to unlock SharePoint knowledge, provide real-time language interpretation in Microsoft Teams meetings, and automate employee self-service.
The Copilot Control System to help IT professionals confidently manage Copilot and agents securely.
These announcements build on our wave 2 momentum, including the new autonomous agent capabilities that we announced in October 2024.
Per the Rundown AI:
By integrating AI agents directly into Microsoft’s billion-plus users’ daily workflows, this release could normalize agentic AI faster than any previous rollout. Just as users now reach for specific apps or plugins to solve particular problems, specialized agents could soon become the natural first stop for getting work done.
An agent takes the power of generative AI a step further, because instead of just assisting you, agents can work alongside you or even on your behalf. Agents can do a range of things, from responding to questions to more complicated or multistep assignments. What sets them apart from a personal assistant is that they can be tailored to have a particular expertise.
For example, you could create an agent to know everything about your company’s product catalog so it can draft detailed responses to customer questions or automatically compile product details for an upcoming presentation.
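A rough sketch of that product-catalog agent, with everything simplified: a keyword lookup over a small in-memory catalog that drafts a reply or escalates to a human. The catalog entries and function names here are invented for illustration; a production agent like Microsoft describes would combine an LLM with retrieval over real product data.

```python
# Hypothetical product-catalog agent: look up products mentioned
# in a customer question and draft a grounded reply, escalating
# when nothing matches.

CATALOG = {
    "widget-a": "Widget A: entry-level model, 2-year warranty.",
    "widget-b": "Widget B: pro model, waterproof, 5-year warranty.",
}

def draft_reply(question: str) -> str:
    question_l = question.lower()
    # Match catalog keys in either "widget-b" or "widget b" form
    hits = [desc for key, desc in CATALOG.items()
            if key in question_l or key.replace("-", " ") in question_l]
    if not hits:
        return "I couldn't find that product; forwarding to a human."
    return "Here is what I found:\n" + "\n".join(hits)

print(draft_reply("Does Widget B come with a warranty?"))
```

The "tailored to a particular expertise" point from the excerpt corresponds to the catalog here: the agent's specialization comes from the knowledge it is grounded in, not from the underlying model.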
From DSC: I am not trying to push all things AI. I and others have serious concerns about agents and other AI-based technologies, especially:
When competitive juices get going and such forces throw people and companies into a sort of an AI arms race, and
When many people haven’t yet obtained the wisdom of reflecting on things like “just because we CAN build this doesn’t mean we SHOULD build it”, or
When governments seek to be the leader of AI due to military applications (and yes, I’m looking at the U.S. Federal Government especially here)
Etc, etc.
But there are also areas where I’m more hopeful and positive about AI-related technologies — such as providing personalized learning and productivity tools (like those from Microsoft above).
…you will see that they outline which skills you should consider mastering in 2025 if you want to stay on top of the latest career opportunities. They then list more information about the skills, how you apply the skills, and WHERE to get those skills.
I assert that in the future, people will be able to see this information on a 24x7x365 basis.
Which jobs are in demand?
What skills do I need to do those jobs?
WHERE do I get/develop those skills?
And that last part (about the WHERE do I develop those skills) will pull from many different institutions, people, companies, etc.
BUT PEOPLE are the key! Oftentimes, we need to — and prefer to — learn with others!