The Edtech Insiders Generative AI Map — from edtechinsiders.substack.com by Ben Kornell, Alex Sarlin, Sarah Morin, and Laurence Holt
A market map and database featuring 60+ use cases for GenAI in education and 300+ GenAI-powered education tools.


A Student’s Guide to Writing with ChatGPT — from openai.com

Used thoughtfully, ChatGPT can be a powerful tool to help students develop skills of rigorous thinking and clear writing, assisting them in thinking through ideas, mastering complex concepts, and getting feedback on drafts.

There are also ways to use ChatGPT that are counterproductive to learning—like generating an essay instead of writing it oneself, which deprives students of the opportunity to practice, improve their skills, and grapple with the material.

For students committed to becoming better writers and thinkers, here are some ways to use ChatGPT to engage more deeply with the learning process.


Community Colleges Are Rolling Out AI Programs—With a Boost from Big Tech — from workshift.org by Colleen Connolly

The Big Idea: As employers increasingly seek out applicants with AI skills, community colleges are well-positioned to train up the workforce. Partnerships with tech companies, like the AI Incubator Network, are helping some colleges get the resources and funding they need to overhaul programs and create new AI-focused ones.

Along these lines also see:

Practical AI Training — from the-job.beehiiv.com by Paul Fain
Community colleges get help from Big Tech to prepare students for applied AI roles at smaller companies.

Miami Dade and other two-year colleges try to be nimble by offering training for AI-related jobs while focusing on local employers. Also, Intel’s business struggles while the two-year sector wonders if Republicans will cut funds for semiconductor production.


Can One AI Agent Do Everything? How To Redesign Jobs for AI? HR Expertise And A Big Future for L&D. — from joshbersin.com by Josh Bersin

Here’s the AI summary, which is pretty good.

In this conversation, Josh Bersin discusses the evolving landscape of AI platforms, particularly focusing on Microsoft’s positioning and the challenges of creating a universal AI agent. He delves into the complexities of government efficiency, emphasizing the institutional challenges faced in re-engineering government operations.

The conversation also highlights the automation of work tasks and the need for businesses to decompose job functions for better efficiency.

Bersin stresses the importance of expertise in HR, advocating for a shift towards full stack professionals who possess a broad understanding of various HR functions.

Finally, he addresses the impending disruption in Learning and Development (L&D) due to AI advancements, predicting a significant transformation in how L&D professionals will manage knowledge and skills.


 

 

Miscommunication Leads AI-Based Hiring Tools Astray — from adigaskell.org

Nearly every Fortune 500 company now uses artificial intelligence (AI) to screen resumes and assess test scores to find the best talent. However, new research from the University of Florida suggests these AI tools might not be delivering the results hiring managers expect.

The problem stems from a simple miscommunication between humans and machines: AI thinks it’s picking someone to hire, but hiring managers only want a list of candidates to interview.

Without knowing about this next step, the AI might choose safe candidates. But if it knows there will be another round of screening, it might suggest different and potentially stronger candidates.
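The objective mismatch the researchers describe can be sketched with a toy model (this is an illustration of the general idea, not the Florida study's actual method): if the AI believes its pick is hired directly, it plays safe and maximizes expected quality; if it knows a human interview round will filter out the misses, it can afford to surface high-upside candidates. The numbers and scoring rules below are invented for illustration.

```python
def pick_to_hire(candidates):
    """Risk-averse objective: best expected quality, penalized by uncertainty."""
    return max(candidates, key=lambda c: c["mean"] - c["std"])

def shortlist_to_interview(candidates, k=2):
    """Optimistic objective: rank by potential upside; humans screen later."""
    ranked = sorted(candidates, key=lambda c: c["mean"] + c["std"], reverse=True)
    return ranked[:k]

# Hypothetical candidates: estimated quality plus how uncertain that estimate is.
candidates = [
    {"name": "safe",     "mean": 7.0, "std": 0.5},
    {"name": "risky",    "mean": 6.5, "std": 3.0},
    {"name": "moonshot", "mean": 6.0, "std": 4.0},
]

print(pick_to_hire(candidates)["name"])
print([c["name"] for c in shortlist_to_interview(candidates)])
```

Same candidate pool, different downstream assumption, different recommendation: the safe candidate wins under the hire-directly objective, while the two high-variance candidates top the interview shortlist.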


AI agents explained: Why OpenAI, Google and Microsoft are building smarter AI agents — from digit.in by Jayesh Shinde

In the last two years, the world has seen a lot of breakneck advancement in the Generative AI space, right from text-to-text, text-to-image and text-to-video based Generative AI capabilities. And all of that’s been nothing short of stepping stones for the next big AI breakthrough – AI agents. According to Bloomberg, OpenAI is preparing to launch its first autonomous AI agent, codenamed ‘Operator,’ as soon as January 2025.

Apparently, this OpenAI agent – or Operator, as it’s codenamed – is designed to perform complex tasks independently. By understanding user commands through voice or text, this AI agent will seemingly handle tasks like controlling different applications on a computer, sending emails, booking flights, and no doubt other cool things. Stuff that ChatGPT, Copilot, Google Gemini or any other LLM-based chatbot just can’t do on its own.


2025: The year ‘invisible’ AI agents will integrate into enterprise hierarchies  — from venturebeat.com by Taryn Plumb

In the enterprise of the future, human workers are expected to work closely alongside sophisticated teams of AI agents.

According to McKinsey, generative AI and other technologies have the potential to automate 60 to 70% of employees’ work. And, already, an estimated one-third of American workers are using AI in the workplace — oftentimes unbeknownst to their employers.

However, experts predict that 2025 will be the year that these so-called “invisible” AI agents begin to come out of the shadows and take more of an active role in enterprise operations.

“Agents will likely fit into enterprise workflows much like specialized members of any given team,” said Naveen Rao, VP of AI at Databricks and founder and former CEO of MosaicAI.


State of AI Report 2024 Summary — from ai-supremacy.com by Michael Spencer
Part I, Consolidation, emergence and adoption. 


Which AI Image Model Is the Best Speller? Let’s Find Out! — from whytryai.com by Daniel Nest
I test 7 image models to find those that can actually write.

The contestants
I picked 7 participants for today’s challenge:

  1. DALL-E 3 by OpenAI (via Microsoft Designer)
  2. FLUX1.1 [pro] by Black Forest Labs (via Glif)
  3. Ideogram 2.0 by Ideogram (via Ideogram)
  4. Imagen 3 by Google (via Image FX)
  5. Midjourney 6.1 by Midjourney (via Midjourney)
  6. Recraft V3 by Recraft (via Recraft)
  7. Stable Diffusion 3.5 Large by Stability AI (via Hugging Face)

How to get started with AI agents (and do it right) — from venturebeat.com by Taryn Plumb

So how can enterprises choose when to adopt third-party models, open source tools or build custom, in-house fine-tuned models? Experts weigh in.


OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI — from bloomberg.com (behind firewall)
Three of the leading artificial intelligence companies are seeing diminishing returns from their costly efforts to develop newer models.


OpenAI and others seek new path to smarter AI as current methods hit limitations — from reuters.com by Krystal Hu and Anna Tong

Summary

  • AI companies face delays and challenges with training new large language models
  • Some researchers are focusing on more time for inference in new models
  • Shift could impact AI arms race for resources like chips and energy

NVIDIA Advances Robot Learning and Humanoid Development With New AI and Simulation Tools — from blogs.nvidia.com by Spencer Huang
New Project GR00T workflows and AI world model development technologies to accelerate robot dexterity, control, manipulation and mobility.


How Generative AI is Revolutionizing Product Development — from intelligenthq.com

A recent report from McKinsey predicts that generative AI could unlock $2.6 trillion to $4.4 trillion in value annually within product development and innovation across various industries. This staggering figure highlights just how significantly generative AI is set to transform the landscape of product development. Generative AI app development is driving innovation by using the power of advanced algorithms to generate new ideas, optimize designs, and personalize products at scale. It is also becoming a cornerstone of competitive advantage in today’s fast-paced market. As businesses look to stay ahead, understanding and integrating technologies like generative AI app development into product development processes is becoming more crucial than ever.


What are AI Agents: How To Create a Based AI Agent — from ccn.com by Lorena Nessi

Key Takeaways

  • AI agents handle complex, autonomous tasks beyond simple commands, showcasing advanced decision-making and adaptability.
  • The Based AI Agent template by Coinbase and Replit provides an easy starting point for developers to build blockchain-enabled AI agents.
  • AI based agents specifically integrate with blockchain, supporting crypto wallets and transactions.
  • Securing API keys in development is crucial to protect the agent from unauthorized access.
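The last takeaway — keeping API keys out of the agent's source code — can be sketched in a few lines. This is a generic pattern, not code from the Based AI Agent template; the environment-variable name is a placeholder.

```python
import os

def load_api_key(env_var: str = "AGENT_API_KEY") -> str:
    """Read the agent's API key from an environment variable.

    Keeping keys out of source files (and out of version control) is the
    baseline protection against unauthorized access to the agent.
    """
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"Missing {env_var}: set it in your shell or a .env file "
            "that is excluded from version control (e.g. via .gitignore)."
        )
    return key
```

Failing fast with a clear message when the variable is unset also prevents the common mistake of shipping a hardcoded fallback key.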

What are AI Agents and How Are They Used in Different Industries? — from rtinsights.com by Salvatore Salamone
AI agents enable companies to make smarter, faster, and more informed decisions. From predictive maintenance to real-time process optimization, these agents are delivering tangible benefits across industries.

 

How to File a Lawsuit by Yourself: A Simple Guide — from courtroom5.com by Debra Slone

I can say from personal experience that when you know how to file a lawsuit on your own, you’re more likely to assert your rights in all walks of life. Simply understanding the process of taking someone to court can change your life.

Introduction
Filing a civil lawsuit without a lawyer can be challenging, but many people do it to stand up for their rights. At Courtroom5, we understand the courage it takes to represent yourself, and we’re here to help. This guide will give you the essential knowledge and steps to file a civil lawsuit on your own. With clear instructions and practical tools, we aim to make the process easier and boost your confidence as you start your case.

Self-representation, or “pro se” litigation, requires more than basic knowledge of the law. It involves research, preparation, and understanding court rules. Our goal is to break down these tasks, making each step manageable from filing to finishing your case.


From DSC:
I haven’t used this site myself. But I post this item because we have a MAJOR issue here in the U.S. with Access To Justice (A2J) — the vast majority of CIVIL lawsuits are heavily tilted towards those who know how the game is played. The winners know what to do, they’ve been trained. But those without representation lose most of the time.

So I’m hoping that such online-based materials and services — including AI-based tools and platforms — can significantly alter this troublesome situation. So that’s why I’m posting this.


 

 

“The Value of Doing Things: What AI Agents Mean for Teachers” — from nickpotkalitsky.substack.com by guest author Jason Gulya, Professor of English and Applied Media at Berkeley College in New York City

AI Agents make me nervous. Really nervous.

I wish they didn’t.

I wish I could write that the last two years have made me more confident, more self-assured that AI is here to augment workers rather than replace them.

But I can’t.

I wish I could write that I know where schools and colleges will end up. I wish I could say that AI Agents will help us get where we need to be.

But I can’t.

At this point, today, I’m at a loss. I’m not sure where the rise of AI agents will take us, in terms of how we work and learn. I’m in the question-asking part of my journey. I have few answers.

So, let’s talk about where (I think) AI Agents will take education. And who knows? Maybe as I write I’ll come up with something more concrete.

It’s worth a shot, right?

From DSC: 
I completely agree with Jason’s following assertion:

A good portion of AI advancement will come down to employee replacement. And AI Agents push companies towards that. 

THAT’s where/what the ROI will be for corporations. They will make their investments up in the headcount area, and likely in other areas as well (product design, marketing campaigns, engineering-related items, and more). But how much time it takes to get there is a big question mark.

One last quote here…it’s too good not to include:

Behind these questions lies a more abstract, more philosophical one: what is the relationship between thinking and doing in a world of AI Agents and other kinds of automation?


How Good are Claude, ChatGPT & Gemini at Instructional Design? — from drphilippahardman.substack.com by Dr Philippa Hardman
A test of AI’s Instruction Design skills in theory & in practice

By examining models across three AI families—Claude, ChatGPT, and Gemini—I’ve started to identify each model’s strengths, limitations, and typical pitfalls.

Spoiler: my findings underscore that until we have specialised, fine-tuned AI copilots for instructional design, we should be cautious about relying on general-purpose models and ensure expert oversight in all ID tasks.


From DSC — I’m going to (have Nick) say this again:
I simply asked my students to use AI to brainstorm their own learning objectives. No restrictions. No predetermined pathways. Just pure exploration. The results? Astonishing.

Students began mapping out research directions I’d never considered. They created dialogue spaces with AI that looked more like intellectual partnerships than simple query-response patterns. 


The Digital Literacy Quest: Become an AI Hero — from gamma.app

From DSC:
I have not gone through all of these online-based materials, but I like what they are trying to get at:

  • Confidence with AI
    Students gain practical skills and confidence in using AI tools effectively.
  • Ethical Navigation
    Learn to navigate the ethical landscape of AI with integrity and responsibility. Make informed decisions about AI usage.
  • Mastering Essential Skills
    Develop critical thinking and problem-solving skills in the context of AI.

 


Expanding access to the Gemini app for teen students in education — from workspaceupdates.googleblog.com

Google Workspace for Education admins can now turn on the Gemini app with added data protection as an additional service for their teen users (ages 13+ or the applicable age in your country) in the following languages and countries. With added data protection, chats are not reviewed by human reviewers or otherwise used to improve AI models. The Gemini app will become a core service in the coming weeks for Education Standard and Plus users, including teens.


5 Essential Questions Educators Have About AI  — from edsurge.com by Annie Ning

Recently, I spoke with several teachers regarding their primary questions and reflections on using AI in teaching and learning. Their thought-provoking responses challenge us to consider not only what AI can do but what it means for meaningful and equitable learning environments. Keeping in mind these reflections, we can better understand how we move forward toward meaningful AI integration in education.


FrontierMath: A Benchmark for Evaluating Advanced Mathematical Reasoning in AI — from epoch.ai
FrontierMath presents hundreds of unpublished, expert-level mathematics problems that specialists spend days solving. It offers an ongoing measure of AI progress on complex mathematical reasoning.

We’re introducing FrontierMath, a benchmark of hundreds of original, expert-crafted mathematics problems designed to evaluate advanced reasoning capabilities in AI systems. These problems span major branches of modern mathematics—from computational number theory to abstract algebraic geometry—and typically require hours or days for expert mathematicians to solve.


Rising demand for AI courses in UK universities shows 453% growth as students adapt to an AI-driven job market — from edtechinnovationhub.com

The demand for artificial intelligence courses in UK universities has surged dramatically over the past five years, with enrollments increasing by 453%, according to a recent study by Currys, a UK tech retailer.

The study, which analyzed UK university admissions data and surveyed current students and recent graduates, reveals how the growing influence of AI is shaping students’ educational choices and career paths.

This growth reflects the broader trend of AI integration across industries, creating new opportunities while transforming traditional roles. With AI’s influence on career prospects rising, students and graduates are increasingly drawn to AI-related courses to stay competitive in a rapidly changing job market.

 

New Partnership Offers Online Tutoring in Michigan Schools — from govtech.com via GSV
The online education nonprofit Michigan Virtual has partnered with Stride Tutoring to offer remote academic support for students in 700 school districts as part of a statewide push to reverse pandemic learning loss.

Online education provider Michigan Virtual is working with a Virginia-based online tutoring company to increase access to personalized academic support for Michigan students, according to a news release last month. The partnership is in line with a statewide push to reverse pandemic learning loss through high-impact tutoring.


Speaking of education — but expanding the scope of this posting to a global scale:

Kids worldwide face huge educational challenges. Is better leadership a solution? — from hechingerreport.org by Liz Willen
Amid dismal data, educators from around the world gather in Brazil and say they can rise to the challenges

While the conversation clearly focused on a continuing worldwide crisis in education, the UNESCO conference I participated in was different. It emphasized a topic of huge importance to improving student outcomes, and coincided with the release of a report detailing how effective leaders can make a big difference in the lives of children.

From DSC:
Leadership is important, for sure. But being a leader in education is very difficult these days — there are many different (and high) expectations and agendas being thrown your way from a variety of stakeholders. But I do appreciate those leaders who are trying to create effective learning ecosystems out there!


One more for high school students considering going to college…

 

It’s The End Of The Legal Industry As We Know It — from artificiallawyer.com by Richard Tromans

It’s the end of the legal industry as we know it and I feel fine. I really do.

The legal industry as we know it is already over. The seismic event that triggered this evolutionary shift happened in November 2022. There’s no going back to a pre-genAI world. Change, incremental or otherwise, will be unstoppable. The only question is: at what pace will this change happen?

It’s clear that substantive change at the heart of the legal economy may take a long time – and we should never underestimate the challenge of overturning decades of deeply embedded cultural practices – but, at least it has begun.


AI: The New Legal Powerhouse — Why Lawyers Should Befriend The Machine To Stay Ahead — from today.westlaw.com

(October 24, 2024) – Jeremy Glaser and Sharzaad Borna of Mintz discuss waves of change in the legal profession brought on by AI, in areas such as billing, the work of support staff and junior associates, and ethics.

The dual nature of AI — excitement and fear
AI is evolving at lightning speed, sparking both wonder and worry. As it transforms industries and our daily lives, we are caught between the thrill of innovation and the jitters of uncertainty. Will AI elevate the human experience or just leave us in the dust? How will it impact our careers, privacy and sense of security?

Just as we witnessed with the rise of the internet — and later, social media — AI is poised to redefine how we work and live, bringing a mix of optimism and apprehension. While we grapple with AI’s implications, our clients expect us to lead the charge in leveraging it for their benefit.

However, this shift also means more competition for fewer entry-level jobs. Law schools will play a key role in helping students become more marketable by offering courses on AI tools and technology. Graduates with AI literacy will have an edge over their peers, as firms increasingly value associates who can collaborate effectively with AI tools.


Will YOU use ChatGPT voice mode to lie to your family? Brainyacts #244 — from thebrainyacts.beehiiv.com by Sam Douthit, Aristotle Jones, and Derek Warzel.

Small Law’s Secret Weapon: AI Courtroom Mock Battles — this excerpt is by Brainacts author Josh Kubicki
As many of you know, this semester my law students have the opportunity to write the lead memo for this newsletter, each tackling issues that they believe are both timely and intriguing for our readers. This week’s essay presents a fascinating experiment conducted by three students who explored how small law firms might leverage ChatGPT in a safe, effective manner. They set up ChatGPT to simulate a mock courtroom, even assigning it the persona of a Seventh Circuit Court judge to stage a courtroom dialogue. It’s an insightful take on the how adaptable technology like ChatGPT can offer unique advantages to smaller practices. They share other ideas as well. Enjoy!

The following excerpt was written by Sam Douthit, Aristotle Jones, and Derek Warzel.

One exciting example is a “Courtroom Persona AI” tool, which could let solo practitioners simulate mock trials and practice arguments with AI that mimics specific judges, local courtroom customs, or procedural quirks. Small firms, with their deep understanding of local courts and judicial styles, could take full advantage of this tool to prepare more accurate and relevant arguments. Unlike big firms that have to spread resources across jurisdictions, solo and small firms could use this AI-driven feedback to tailor their strategies closely to local court dynamics, making their preparations sharper and more strategic. Plus, not all solo or small firms have someone to practice with or bounce their ideas off of. For these practitioners, it’s a chance to level up their trial preparation without needing large teams or costly mock trials, gaining a practical edge where it counts most.

Some lawyers have already started to test this out, like the mock trial tested out here. One oversimplified and quick way to try this out is using the ChatGPT app.
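Beyond the ChatGPT app, the judge-persona setup the students describe can be reproduced programmatically. A minimal sketch, assuming the OpenAI Python SDK; the model name and persona wording are placeholders, not details from the students' experiment:

```python
def build_mock_trial_messages(judge_persona: str, argument: str) -> list:
    """Frame the chat so the model stays in character as the presiding judge."""
    return [
        {"role": "system",
         "content": (f"You are {judge_persona}. Preside over a mock oral "
                     "argument: interrupt counsel with the kinds of questions "
                     "this judge is known for, and rule on any objections.")},
        {"role": "user", "content": argument},
    ]

if __name__ == "__main__":
    from openai import OpenAI  # assumes the OpenAI Python SDK is installed

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    messages = build_mock_trial_messages(
        "a Seventh Circuit appellate judge known for sharp textualist questioning",
        "Your Honor, the statute's plain language resolves this appeal...",
    )
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    print(reply.choices[0].message.content)
```

Putting the persona in the system message, rather than in each user turn, is what keeps the model from drifting out of character over a multi-exchange mock argument.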


The Human in AI-Assisted Dispute Resolution — from jdsupra.com by Epiq

Accountability for Legal Outputs
AI is set to replace some of the dispute resolution work formerly done by lawyers. This work includes summarising documents, drafting legal contracts and filings, using generative AI to produce arbitration submissions for an oral hearing, and, in the not-too-distant future, ingesting transcripts from hearings and comparing them to the documentary record to spot inconsistencies.

As Pendell put it, “There’s quite a bit of lawyering going on there.” So, what’s left for humans?

The common feature in all those examples is that humans must make the judgement call. Lawyers won’t just turn over a first draft of an AI-generated contract or filing to another party or court. The driving factor is that law is still a regulated profession, and regulators will hold humans accountable.

The idea that young lawyers must do routine, menial work as a rite of passage needs to be updated. Today’s AI tools put lawyers at the top of an accountability chain, allowing them to practice law using judgement and strategy as they supervise the work of AI. 


Small law firms embracing AI as they move away from hourly billing — from legalfutures.co.uk by Neil Rose

Small law firms have embraced artificial intelligence (AI), with document drafting or automation the most popular application, according to new research.

The survey also found expectations of a continued move away from hourly billing to fixed fees.

Legal technology provider Clio commissioned UK-specific research from Censuswide as an adjunct to its annual US-focused Legal Trends report, polling 500 solicitors, 82% of whom worked at firms with 20 lawyers or fewer.

Some 96% of them reported that their firms have adopted AI into their processes in some way – 56% of them said it was widespread or universal – while 62% anticipated an increase in AI usage over the next 12 months.

 

Is Generative AI and ChatGPT healthy for Students? — from ai-supremacy.com by Michael Spencer and Nick Potkalitsky
Beyond Text Generation: How AI Ignites Student Discovery and Deep Thinking, according to firsthand experiences of Teachers and AI researchers like Nick Potkalitsky.

After two years of intensive experimentation with AI in education, I am witnessing something amazing unfolding before my eyes. While much of the world fixates on AI’s generative capabilities—its ability to create essays, stories, and code—my students have discovered something far more powerful: exploratory AI, a dynamic partner in investigation and critique that’s transforming how they think.

They’ve moved beyond the initial fascination with AI-generated content to something far more sophisticated: using AI as an exploratory tool for investigation, interrogation, and intellectual discovery.

Instead of the much-feared “shutdown” of critical thinking, we’re witnessing something extraordinary: the emergence of what I call “generative thinking”—a dynamic process where students learn to expand, reshape, and evolve their ideas through meaningful exploration with AI tools. Here I consciously reposition the term “generative” as a process of human origination, although one ultimately spurred on by machine input.


A Road Map for Leveraging AI at a Smaller Institution — from er.educause.edu by Dave Weil and Jill Forrester
Smaller institutions and others may not have the staffing and resources needed to explore and take advantage of developments in artificial intelligence (AI) on their campuses. This article provides a roadmap to help institutions with more limited resources advance AI use on their campuses.

The following activities can help smaller institutions better understand AI and lay a solid foundation that will allow them to benefit from it.

  1. Understand the impact…
  2. Understand the different types of AI tools…
  3. Focus on institutional data and knowledge repositories…

Smaller institutions do not need to fear being left behind in the wake of rapid advancements in AI technologies and tools. By thinking intentionally about how AI will impact the institution, becoming familiar with the different types of AI tools, and establishing a strong data and analytics infrastructure, institutions can establish the groundwork for AI success. The five fundamental activities of coordinating, learning, planning and governing, implementing, and reviewing and refining can help smaller institutions make progress on their journey to use AI tools to gain efficiencies and improve students’ experiences and outcomes while keeping true to their institutional missions and values.

Also from Educause, see:


AI school opens – learners are not good or bad but fast and slow — from donaldclarkplanb.blogspot.com by Donald Clark

That is what they are doing here. Lesson plans focus on learners rather than the traditional teacher-centric model. Assessing prior strengths and weaknesses, personalising to focus more on weaknesses and less on things known or mastered. It’s adaptive, personalised learning. The idea that everyone should learn at exactly the same pace, within the same timescale, is slightly ridiculous, ruled by the need for timetabling a one-to-many classroom model.

For the first time in the history of our species we have technology that performs some of the tasks of teaching. We have reached a pivot point where this can be tried and tested. My feeling is that we’ll see a lot more of this, as parents and general teachers can delegate a lot of the exposition and teaching of the subject to the technology. We may just see a breakthrough that transforms education.


Agentic AI Named Top Tech Trend for 2025 — from campustechnology.com by David Ramel

Agentic AI will be the top tech trend for 2025, according to research firm Gartner. The term describes autonomous machine “agents” that move beyond query-and-response generative chatbots to do enterprise-related tasks without human guidance.

More realistic challenges that the firm has listed elsewhere include:

    • Agentic AI proliferating without governance or tracking;
    • Agentic AI making decisions that are not trustworthy;
    • Agentic AI relying on low-quality data;
    • Employee resistance; and
    • Agentic-AI-driven cyberattacks enabling “smart malware.”

Also from campustechnology.com, see:


Three items from edcircuit.com:


All or nothing at Educause24 — from onedtech.philhillaa.com by Kevin Kelly
Looking for specific solutions at the conference exhibit hall, with an educator focus

Here are some notable trends:

  • Alignment with campus policies: …
  • Choose your own AI adventure: …
  • Integrate AI throughout a workflow: …
  • Moving from prompt engineering to bot building: …
  • More complex problem-solving: …


Not all AI news is good news. In particular, AI has exacerbated the problem of fraudulent enrollment–i.e., rogue actors who use fake or stolen identities with the intent of stealing financial aid funding with no intention of completing coursework.

The consequences are very real, including financial aid funding going to criminal enterprises, enrollment estimates getting dramatically skewed, and legitimate students being blocked from registering for classes that appear “full” due to large numbers of fraudulent enrollments.


 

 



Google’s worst nightmare just became reality — from aidisruptor.ai by Alex McFarland
OpenAI just launched an all-out assault on traditional search engines.

Google’s worst nightmare just became reality. OpenAI didn’t just add search to ChatGPT – they’ve launched an all-out assault on traditional search engines.

It’s the beginning of the end for search as we know it.

Let’s be clear about what’s happening: OpenAI is fundamentally changing how we’ll interact with information online. While Google has spent 25 years optimizing for ad revenue and delivering pages of blue links, OpenAI is building what users actually need – instant, synthesized answers from current sources.

The rollout is calculated and aggressive: ChatGPT Plus and Team subscribers get immediate access, followed by Enterprise and Education users in weeks, and free users in the coming months. This staged approach is about systematically dismantling Google’s search dominance.




Open for AI: India Tech Leaders Build AI Factories for Economic Transformation — from blogs.nvidia.com
Yotta Data Services, Tata Communications, E2E Networks and Netweb are among the providers building and offering NVIDIA-accelerated infrastructure and software, with deployments expected to double by year’s end.


 

10 Graphic Design Trends to Pay Attention to in 2025 — from graphicmama.com by Al Boicheva

We’ll go on a hunt for bold, abstract, and naturalist designs, cutting-edge AI tools, and so much more, all pushing boundaries and rethinking what we already know about design. In 2025, we will see new ways to animate ideas, revisit retro styles with a modern twist, and embrace clean, but sophisticated aesthetics. For designers and design enthusiasts alike, these trends are set to bring a new level of excitement to the world of design.

Here are the Top 10 Graphic Design Trends in 2025:

 

AI Tutors Double Rates of Learning in Less Learning Time — from drphilippahardman.substack.com by Dr. Philippa Hardman
Inside Harvard’s new groundbreaking study

Conclusion
This Harvard study provides robust evidence that AI tutoring, when thoughtfully designed, can significantly enhance learning outcomes. The combination of doubled learning gains, increased engagement, and reduced time to competency suggests we’re seeing just the beginning of AI’s potential in education and that its potential is significant.

If this data is anything to go by, and if we – as humans – are open and willing to act on it, it’s possible AI will have a significant and, for some, deeply positive impact on how we design and deliver learning experiences.

That said, as we look forward, the question shouldn’t just be, “How can AI enhance current educational methods?”, but also, “How might AI transform the very nature of learning itself?”. With continued research and careful implementation, we could be moving toward an era of education that is not only more effective but also more accessible than ever before.


Three Quick Examples of Teaching with and about Generative AI — from derekbruff.org by Derek Bruff

  • Text-to-Podcast.
  • Assigning Students to Groups.
  • AI Acceptable Use Scale.

Also from Derek’s blog, see:


From Mike Sharples on LinkedIn: 


ChatGPT’s free voice wizard — from wondertools.substack.com by Jeremy Caplan
How and why to try the new Advanced Voice Mode

7 surprisingly practical ways to use voice AI
Opening up ChatGPT’s Advanced Voice Mode (AVM) is like conjuring a tutor eager to help with whatever simple — or crazy — query you throw at it. Talking is more fluid and engaging than typing, especially if you’re out and about. It’s not a substitute for human expertise, but AVM provides valuable machine intelligence.

  • Get a virtual museum tour. …
  • Chat with historical figures….
  • Practice languages. …
  • Explore books. …
  • Others…


Though not AI-related, this is along the lines of edtech:


…which links to:

 

Along these same lines, see:

Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku

We’re also introducing a groundbreaking new capability in public beta: computer use. Available today on the API, developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental—at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time.
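For developers curious what “directing Claude to use a computer” looks like in practice, here is a minimal sketch of how a computer-use request might be structured with Anthropic’s Python SDK as of the beta announcement. The tool type, beta flag, model name, and display dimensions below reflect the values Anthropic published at launch, but treat the whole thing as illustrative and check Anthropic’s current documentation before building on it.

```python
# Hypothetical sketch: constructing the parameters for a computer-use request.
# In a real agent loop, these would be passed to client.beta.messages.create(...)
# and the developer's code would execute the screenshot/click/type actions
# Claude returns, feeding results back in subsequent messages.

# Tool definition telling Claude it may control a virtual display.
computer_tool = {
    "type": "computer_20241022",      # beta tool type announced with Claude 3.5 Sonnet
    "name": "computer",
    "display_width_px": 1280,         # illustrative screen dimensions
    "display_height_px": 800,
}

# Request parameters for the beta Messages API.
request_params = {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1024,
    "tools": [computer_tool],
    "betas": ["computer-use-2024-10-22"],   # opt-in flag for the public beta
    "messages": [
        {"role": "user",
         "content": "Open the browser and search for today's weather."}
    ],
}

print(request_params["tools"][0]["type"])
```

The key design point is that the model never touches the machine directly: it emits structured actions (take a screenshot, move the cursor, click, type), and the developer’s harness performs them and reports back, which is also why the prompt-injection risks discussed below apply to whatever that harness can reach.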


ZombAIs: From Prompt Injection to C2 with Claude Computer Use — from embracethered.com by Johann Rehberger

A few days ago, Anthropic released Claude Computer Use, which is a model + code that allows Claude to control a computer. It takes screenshots to make decisions, can run bash commands and so forth.

It’s cool, but obviously very dangerous because of prompt injection. Claude Computer Use enables AI to run commands on machines autonomously, posing severe risks if exploited via prompt injection.

This blog post demonstrates that it’s possible to leverage prompt injection to achieve old-school command and control (C2) when giving novel AI systems access to computers.

We discussed one way to get malware onto a Claude Computer Use host via prompt injection. There are countless others; another is to have Claude write the malware from scratch and compile it. Yes, it can write C code, compile and run it. There are many other options.

TrustNoAI.

And again, remember: do not run unauthorized code on systems that you do not own or are not authorized to operate on.

Also relevant here, see:


Perplexity Grows, GPT Traffic Surges, Gamma Dominates AI Presentations – The AI for Work Top 100: October 2024 — from flexos.work by Daan van Rossum
Perplexity continues to gain users despite recent controversies. Five out of six GPTs see traffic boosts. This month’s highest gainers include Gamma, Blackbox, Runway, and more.


Growing Up: Navigating Generative AI’s Early Years – AI Adoption Report — from ai.wharton.upenn.edu by  Jeremy Korst, Stefano Puntoni, & Mary Purk

From a survey with more than 800 senior business leaders, this report’s findings indicate that weekly usage of Gen AI has nearly doubled from 37% in 2023 to 72% in 2024, with significant growth in previously slower-adopting departments like Marketing and HR. Despite this increased usage, businesses still face challenges in determining the full impact and ROI of Gen AI. Sentiment reports indicate leaders have shifted from feelings of “curiosity” and “amazement” to more positive sentiments like “pleased” and “excited,” and concerns about AI replacing jobs have softened. Participants were full-time employees working in large commercial organizations with 1,000 or more employees.


Apple study exposes deep cracks in LLMs’ “reasoning” capabilities — from arstechnica.com by Kyle Orland
Irrelevant red herrings lead to “catastrophic” failure of logical inference.

For a while now, companies like OpenAI and Google have been touting advanced “reasoning” capabilities as the next big step in their latest artificial intelligence models. Now, though, a new study from six Apple engineers shows that the mathematical “reasoning” displayed by advanced large language models can be extremely brittle and unreliable in the face of seemingly trivial changes to common benchmark problems.

The fragility highlighted in these new results helps support previous research suggesting that LLMs’ use of probabilistic pattern matching is missing the formal understanding of underlying concepts needed for truly reliable mathematical reasoning capabilities. “Current LLMs are not capable of genuine logical reasoning,” the researchers hypothesize based on these results. “Instead, they attempt to replicate the reasoning steps observed in their training data.”


Google CEO says more than a quarter of the company’s new code is created by AI — from businessinsider.in by Hugh Langley

  • More than a quarter of new code at Google is made by AI and then checked by employees.
  • Google is doubling down on AI internally to make its business more efficient.

Top Generative AI Chatbots by Market Share – October 2024 


Bringing developer choice to Copilot with Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s o1-preview — from github.blog

We are bringing developer choice to GitHub Copilot with Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s o1-preview and o1-mini. These new models will be rolling out—first in Copilot Chat, with OpenAI o1-preview and o1-mini available now, Claude 3.5 Sonnet rolling out progressively over the next week, and Google’s Gemini 1.5 Pro in the coming weeks. From Copilot Workspace to multi-file editing to code review, security autofix, and the CLI, we will bring multi-model choice across many of GitHub Copilot’s surface areas and functions soon.

 

Introducing QuizBot an Innovative AI-Assisted Assessment in Legal Education  — from papers.ssrn.com by Sean A Harrington

Abstract

This Article explores an innovative approach to assessment in legal education: an AI-assisted quiz system implemented in an AI & the Practice of Law course. The system employs a Socratic method-inspired chatbot to engage students in substantive conversations about course materials, providing a novel method for evaluating student learning and engagement. The Article examines the structure and implementation of this system, including its grading methodology and rubric, and discusses its benefits and challenges. Key advantages of the AI-assisted quiz system include enhanced student engagement with course materials, practical experience in AI interaction for future legal practice, immediate feedback and assessment, and alignment with the Socratic method tradition in law schools. The system also presents challenges, particularly in ensuring fairness and consistency in AI-generated questions, maintaining academic integrity, and balancing AI assistance with human oversight in grading.

The Article further explores the pedagogical implications of this innovation, including a shift from memorization to conceptual understanding, the encouragement of critical thinking through AI interaction, and the preparation of students for AI-integrated legal practice. It also considers future directions for this technology, such as integration with other law school courses, potential for longitudinal assessment of student progress, and implications for bar exam preparation and continuing legal education. Ultimately, this Article argues that AI-assisted assessment systems can revolutionize legal education by providing more frequent, targeted, and effective evaluation of student learning. While challenges remain, the benefits of such systems align closely with the evolving needs of the legal profession. The Article concludes with a call for further research and broader implementation of AI-assisted assessment in law schools to fully understand its impact and potential in preparing the next generation of legal professionals for an AI-integrated legal landscape.

Keywords: Legal Education, Artificial Intelligence, Assessment, Socratic Method, Chatbot, Law School Innovation, Educational Technology, Legal Pedagogy, AI-Assisted Learning, Legal Technology, Student Engagement, Formative Assessment, Critical Thinking, Legal Practice, Educational Assessment, Law School Curriculum, Bar Exam Preparation, Continuing Legal Education, Legal Ethics, Educational Analytics


How Legal Startup Genie AI Raises $17.8 Million with Just 13 Slides — from aisecret.us

Genie AI, a London-based legal tech startup, was founded in 2017 by Rafie Faruq and Nitish Mutha. The company has been at the forefront of revolutionizing the legal industry by leveraging artificial intelligence to automate and enhance legal document drafting and review processes. The recent funding round, led by Google Ventures and Khosla Ventures, marks a significant milestone in Genie AI’s growth trajectory.


In-house legal teams are adopting legal tech at lower rate than law firms: survey — from canadianlawyermag.com
The report suggests in-house teams face more barriers to integrating new tools

Law firms are adopting generative artificial intelligence tools at a higher rate than in-house legal departments, but both report similar levels of concerns about data security and ethical implications, according to a report on legal tech usage released Wednesday.

Legal tech company Appara surveyed 443 legal professionals in Canada across law firms and in-house legal departments over the summer, including lawyers, paralegals, legal assistants, law clerks, conveyancers, and notaries.

Twenty-five percent of respondents who worked at law firms said they’ve already invested in generative AI tools, with 24 percent reporting they plan to invest within the following year. In contrast, only 15 percent of respondents who work in-house have invested in these tools, with 26 percent planning investments in the future.


The end of courts? — from jordanfurlong.substack.com by Jordan Furlong
Civil justice systems aren’t serving the public interest. It’s time to break new ground and chart paths towards fast and fair dispute resolution that will meet people’s actual needs.

We need to start simple. System design can get extraordinarily complex very quickly, and complexity is our enemy at this stage. Tom O’Leary nicely inverted Deming’s axiom with a question of his own: “We want the system to work for [this group]. What would need to happen for that to be true?”

If we wanted civil justice systems to work for the ordinary people who enter them seeking solutions to their problems — as opposed to the professionals who administer and make a living off those systems — what would those systems look like? What would be their features? I can think of at least three:

  • Fair: …
  • Fast: …
  • Fine: …

100-Day Dispute Resolution: New Era ADR is Changing the Game (Rich Lee, CEO)

New Era ADR CEO Rich Lee makes a return appearance to Technically Legal to talk about the company’s cutting-edge platform revolutionizing dispute resolution. Rich first came on the podcast in 2021 right as the company launched. Rich discusses the company’s mission to provide a faster, more efficient, and cost-effective alternative to traditional litigation and arbitration, the company’s growth and what he has learned from a few years in.

Key takeaways:

  • New Era ADR offers a unique platform for resolving disputes in under 100 days, significantly faster than traditional methods.
  • The platform leverages technology to streamline processes, reduce costs, and enhance accessibility for all parties involved.
  • New Era ADR boasts a diverse pool of experienced and qualified neutrals, ensuring fair and impartial resolutions.
  • The company’s commitment to innovation is evident in its use of data and technology to drive efficiency and transparency.
 

From DSC:
The following reflections were catalyzed by Jeff Selingo’s Next posting from 10/22, specifically the item:

  • Student fees for athletics, dark money in college sports, and why this all matters to every student, every college.

All of this has big risks for institutions. But whenever I talk to faculty and administrators on campuses about this, many will wave me away and say, “Well, I’m not a college sports fan” or “We’re a Division III school, so all this doesn’t impact us.”

Nothing is further from the truth, as we explored on a recent episode of the Future U. podcast, where we welcomed in Matt Brown, editor of the Extra Points newsletter, which looks at academic and financial issues in college sports.

As we learned, despite the siloed nature of higher ed, everything is connected to athletics: research, academics, market position. Institutions can rise and fall on the backs of their athletics programs – and we’re not talking about wins and losses, but real budget dollars.

And if you want to know about the impact on students, look no further than the news out of Clemson this week. It is following several other universities in adopting an “athletics fee”: $300 a year. It won’t be the last.  

Give a listen to this episode of Future U. if you want to catch up quick on this complicated subject, and while you’re at it, subscribe wherever you get your podcasts.


Clemson approves new athletics fee for students. Here’s what we know — from sports.yahoo.com by Chapel Fowler
How much are student fees at other schools?

That’s true in the state of South Carolina, when comparing the annual fees of Clemson ($300) and USC ($172) to Coastal Carolina ($2,090). And it holds up nationally, too.



From DSC:
The Bible talks a lot about idols… and I can’t help but wonder: have sports become an idol in our nation?

Don’t get me wrong. Sports can and should be fun for us to play. I played many an hour of sports in my youth and I occasionally play some sports these days. Plus, sports are excellent for helping us keep in shape and take care of our bodies. Sports can help us connect with others and make some fun/good memories with our friends.

So there’s much good to playing sports. But have we elevated sports to places they were never meant to be? To roles they were never meant to play?

 

AI-governed robots can easily be hacked — from theaivalley.com by Barsee
PLUS: Sam Altman’s new company “World” introduced…

In a groundbreaking study, researchers from Penn Engineering showed how AI-powered robots can be manipulated to ignore safety protocols, allowing them to perform harmful actions despite normally rejecting dangerous task requests.

What did they find?

  • Researchers found previously unknown security vulnerabilities in AI-governed robots and are working to address these issues to ensure the safe use of large language models (LLMs) in robotics.
  • Their newly developed algorithm, RoboPAIR, reportedly achieved a 100% jailbreak rate by bypassing the safety protocols on three different AI robotic systems in a few days.
  • Using RoboPAIR, researchers were able to manipulate test robots into performing harmful actions, like bomb detonation and blocking emergency exits, simply by changing how they phrased their commands.

Why does it matter?

This research highlights the importance of spotting weaknesses in AI systems to improve their safety, allowing us to test and train them to prevent potential harm.

From DSC:
Great! Just what we wanted to hear. But does it surprise anyone? Even so… we move forward at warp speed.


From DSC:
So, given the above item, does the next item make you a bit nervous as well? I saw someone on Twitter/X exclaim, “What could go wrong?”  I can’t say I didn’t feel the same way.

Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku — from anthropic.com

We’re also introducing a groundbreaking new capability in public beta: computer use. Available today on the API, developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental—at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time.

Per The Rundown AI:

The Rundown: Anthropic just introduced a new capability called ‘computer use’, alongside upgraded versions of its AI models, which enables Claude to interact with computers by viewing screens, typing, moving cursors, and executing commands.

Why it matters: While many hoped for Opus 3.5, Anthropic’s Sonnet and Haiku upgrades pack a serious punch. Plus, with the new computer use embedded right into its foundation models, Anthropic just sent a warning shot to tons of automation startups—even if the capabilities aren’t earth-shattering… yet.

Also related/see:

  • What is Anthropic’s AI Computer Use? — from ai-supremacy.com by Michael Spencer
    Task automation, AI at the intersection of coding and AI agents take on new frenzied importance heading into 2025 for the commercialization of Generative AI.
  • New Claude, Who Dis? — from theneurondaily.com
    Anthropic just dropped two new Claude models…oh, and Claude can now use your computer.
  • When you give a Claude a mouse — from oneusefulthing.org by Ethan Mollick
    Some quick impressions of an actual agent

Introducing Act-One — from runwayml.com
A new way to generate expressive character performances using simple video inputs.

Per Lore by Nathan Lands:

What makes Act-One special? It can capture the soul of an actor’s performance using nothing but a simple video recording. No fancy motion capture equipment, no complex face rigging, no army of animators required. Just point a camera at someone acting, and watch as their exact expressions, micro-movements, and emotional nuances get transferred to an AI-generated character.

Think about what this means for creators: you could shoot an entire movie with multiple characters using just one actor and a basic camera setup. The same performance can drive characters with completely different proportions and looks, while maintaining the authentic emotional delivery of the original performance. We’re witnessing the democratization of animation tools that used to require millions in budget and years of specialized training.

Also related/see:


Google to buy nuclear power for AI datacentres in ‘world first’ deal — from theguardian.com
Tech company orders six or seven small nuclear reactors from California’s Kairos Power

Google has signed a “world first” deal to buy energy from a fleet of mini nuclear reactors to generate the power needed for the rise in use of artificial intelligence.

The US tech corporation has ordered six or seven small nuclear reactors (SMRs) from California’s Kairos Power, with the first due to be completed by 2030 and the remainder by 2035.

Related:


ChatGPT Topped 3 Billion Visits in September — from similarweb.com

After the extreme peak and summer slump of 2023, ChatGPT has been setting new traffic highs since May

ChatGPT has been topping its web traffic records for months now, with September 2024 traffic up 112% year-over-year (YoY) to 3.1 billion visits, according to Similarweb estimates. That’s a change from last year, when traffic to the site went through a boom-and-bust cycle.


Crazy “AI Army” — from aisecret.us

Also from aisecret.us, see World’s First Nuclear Power Deal For AI Data Centers

Google has made a historic agreement to buy energy from a group of small nuclear reactors (SMRs) from Kairos Power in California. This is the first nuclear power deal specifically for AI data centers in the world.


New updates to help creators build community, drive business, & express creativity on YouTube — from support.google.com

Hey creators!
Made on YouTube 2024 is here and we’ve announced a lot of updates that aim to give everyone the opportunity to build engaging communities, drive sustainable businesses, and express creativity on our platform.

Below is a roundup with key info – feel free to upvote the announcements that you’re most excited about and subscribe to this post to get updates on these features! We’re looking forward to another year of innovating with our global community – it’s a future full of opportunities, and it’s all Made on YouTube!


New autonomous agents scale your team like never before — from blogs.microsoft.com

Today, we’re announcing new agentic capabilities that will accelerate these gains and bring AI-first business process to every organization.

  • First, the ability to create autonomous agents with Copilot Studio will be in public preview next month.
  • Second, we’re introducing ten new autonomous agents in Dynamics 365 to build capacity for every sales, service, finance and supply chain team.

10 Daily AI Use Cases for Business Leaders— from flexos.work by Daan van Rossum
While AI is becoming more powerful by the day, business leaders still wonder why and where to apply it today. I take you through 10 critical use cases where AI should take over your work or partner with you.


Multi-Modal AI: Video Creation Simplified — from heatherbcooper.substack.com by Heather Cooper

Emerging Multi-Modal AI Video Creation Platforms
The rise of multi-modal AI platforms has revolutionized content creation, allowing users to research, write, and generate images in one app. Now, a new wave of platforms is extending these capabilities to video creation and editing.

Multi-modal video platforms combine various AI tools for tasks like writing, transcription, text-to-voice conversion, image-to-video generation, and lip-syncing. These platforms leverage open-source models like FLUX and LivePortrait, along with APIs from services such as ElevenLabs, Luma AI, and Gen-3.


AI Medical Imagery Model Offers Fast, Cost-Efficient Expert Analysis — from developer.nvidia.com/

 

DC: I’m really hoping that a variety of AI-based tools, technologies, and services will significantly help with our Access to Justice (#A2J) issues here in America. So this article – by Kristen Sonday at Thomson Reuters – caught my eye.

***

AI for Legal Aid: How to empower clients in need — from thomsonreuters.com by Kristen Sonday
In this second part of this series, we look at how AI-driven technologies can empower those legal aid clients who may be most in need

It’s hard to overstate the impact that artificial intelligence (AI) is expected to have on helping low-income individuals achieve better access to justice. And for those legal services organizations (LSOs) that serve on the front lines, too often without sufficient funding, staff, or technology, AI presents perhaps their best opportunity to close the justice gap. With the ability of AI-driven tools to streamline agency operations, minimize administrative work, more effectively reallocate talent, and allow LSOs to more effectively service clients, the implementation of these tools is essential.

Innovative LSOs leading the way

Already many innovative LSOs are taking the lead, utilizing new technology to complete tasks from complex analysis to AI-driven legal research. Here are two compelling examples of how AI is already helping LSOs empower low-income clients in need.

#A2J #justice #tools #vendors #society #legal #lawfirms #AI #legaltech #legalresearch

Criminal charges, even those that are eligible for simple, free expungement, can prevent someone from obtaining housing or employment. This is a simple barrier to overcome if only help is available.

AI offers the capacity to provide quick, accurate information to a vast audience, particularly to those in urgent need. AI can also help reduce the burden on our legal staff…

 


A legal tech executive explains how AI will fully change the way lawyers work — from legaldive.com by Justin Bachman
A senior executive with ContractPodAi discusses how legal AI poses economic benefits for in-house departments and disruption risks for law firm billing models.

Everything you thought you knew about being a lawyer is about to change.

Legal Dive spoke with Podinic about the transformative nature of AI, including the financial risks to lawyers’ billing models and how it will force general counsel and chief legal officers to consider how they’ll use the time AI is expected to free up for the lawyers on their teams when they no longer have to do administrative tasks and low-level work.


Legaltech will augment lawyers’ capabilities but not replace them, says GlobalData — from globaldata.com

  • Traditionally, law firms have been wary of adopting technologies that could compromise data privacy and legal accuracy; however, attitudes are changing
  • Despite concerns about technology replacing humans in the legal sector, legaltech is more likely to augment the legal profession than replace it entirely
  • Generative AI will accelerate digital transformation in the legal sector
 
© 2024 | Daniel Christian