Tech Trends 2025 — from deloitte.com by Deloitte Insights
In Deloitte’s 16th annual Tech Trends report, AI is the common thread of nearly every trend. Moving forward, it will be part of the substructure of everything we do.

We propose that the future of technology isn’t so much about more AI as it is about ubiquitous AI. We expect that, going forward, AI will become so fundamentally woven into the fabric of our lives that it’s everywhere, and so foundational that we stop noticing it.

AI will eventually follow a similar path, becoming so ubiquitous that it will be a part of the unseen substructure of everything we do, and we eventually won’t even know it’s there. It will quietly hum along in the background, optimizing traffic in our cities, personalizing our health care, and creating adaptive and accessible learning paths in education. We won’t “use” AI. We’ll just experience a world where things work smarter, faster, and more intuitively—like magic, but grounded in algorithms. We expect that it will provide a foundation for business and personal growth while also adapting and sustaining itself over time.

Nowhere is this AI-infused future more evident than in this year’s Tech Trends report, which each year explores emerging trends across the six macro forces of information technology (figure 1). Half of the trends that we’ve chronicled are elevating forces—interaction, information, and computation—that underpin innovation and growth. The other half—the grounding forces of the business of technology, cyber and trust, and core modernization—help enterprises seamlessly operate while they grow.

 

The State of Flexible Work: Statistics from The Flex Index — from flexindex.com

Flex Report Q4 2024
Hybrid and Remote Work by the Numbers

And you thought return to office policy was settled! For a while, it looked like 2-3 days per week in the office would be the future of work in America.

Yet this quarter has brought significant changes to the landscape. Major companies like Amazon, Dell, and The Washington Post announced their plans for a full return to office. Then came a shift in the political atmosphere, with Trump’s victory and potential incoming changes requiring full-time office work for government employees.

These developments raise important questions about where workplace flexibility is headed. Are we witnessing the beginning of a broader shift back to Full Time In Office? Is the era of fully flexible work coming to an end? Or is this simply another evolution in how companies structure their workplace policies?

In this report, we dig into US-wide trends to see if the high-profile shifts toward Full Time In Office reflect broader market movement or just isolated cases. We examine how different industries are approaching flexibility, from Technology’s continued embrace to the challenges faced by sectors dependent on physical presence. Plus, we explore the divide in how companies of different sizes approach workplace flexibility. Are we truly heading back to the office full time, or is the future of work more nuanced than the headlines suggest?

 

Introducing Gemini 2.0: our new AI model for the agentic era — from blog.google by Sundar Pichai, Demis Hassabis, and Koray Kavukcuoglu

Today we’re excited to launch our next era of models built for this new agentic era: introducing Gemini 2.0, our most capable model yet. With new advances in multimodality — like native image and audio output — and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant.

We’re getting 2.0 into the hands of developers and trusted testers today. And we’re working quickly to get it into our products, leading with Gemini and Search. Starting today our Gemini 2.0 Flash experimental model will be available to all Gemini users. We’re also launching a new feature called Deep Research, which uses advanced reasoning and long context capabilities to act as a research assistant, exploring complex topics and compiling reports on your behalf. It’s available in Gemini Advanced today.

Over the last year, we have been investing in developing more agentic models, meaning they can understand more about the world around you, think multiple steps ahead, and take action on your behalf, with your supervision.


Try Deep Research and our new experimental model in Gemini, your AI assistant — from blog.google by Dave Citron
Deep Research rolls out to Gemini Advanced subscribers today, saving you hours of time. Plus, you can now try out a chat optimized version of 2.0 Flash Experimental in Gemini on the web.

Today, we’re sharing the latest updates to Gemini, your AI assistant, including Deep Research — our new agentic feature in Gemini Advanced — and access to try Gemini 2.0 Flash, our latest experimental model.

Deep Research uses AI to explore complex topics on your behalf and provide you with findings in a comprehensive, easy-to-read report, and is a first look at how Gemini is getting even better at tackling complex tasks to save you time.


Google Unveils A.I. Agent That Can Use Websites on Its Own — from nytimes.com by Cade Metz and Nico Grant (NOTE: This is a GIFTED article for you.)
The experimental tool can browse spreadsheets, shopping sites and other services, before taking action on behalf of the computer user.

Google on Wednesday unveiled a prototype of this technology, which artificial intelligence researchers call an A.I. agent.

Google’s new prototype, called Mariner, is based on Gemini 2.0, which the company also unveiled on Wednesday. Gemini is the core technology that underpins many of the company’s A.I. products and research experiments. Versions of the system will power the company’s chatbot of the same name and A.I. Overviews, a Google search tool that directly answers user questions.


Gemini 2.0 is the next chapter for Google AI — from axios.com by Ina Fried

Google Gemini 2.0 — a major upgrade to the core workings of Google’s AI that the company launched Wednesday — is designed to help generative AI move from answering users’ questions to taking action on its own…

The big picture: Hassabis said building AI systems that can take action on their own has been DeepMind’s focus since its early days teaching computers to play games such as chess and Go.

  • “We were always working towards agent-based systems,” Hassabis said. “From the beginning, they were able to plan and then carry out actions and achieve objectives.”
  • Hassabis said AI systems that can act as semi-autonomous agents also represent an important intermediate step on the path toward artificial general intelligence (AGI) — AI that can match or surpass human capabilities.
  • “If we think about the path to AGI, then obviously you need a system that can reason, break down problems and carry out actions in the world,” he said.

AI Agents vs. AI Assistants: Know the Key Differences — from aithority.com by Rishika Patel

The same paradigm applies to AI systems. AI assistants function as reactive tools, completing tasks like answering queries or managing workflows upon request. Think of chatbots or scheduling tools. AI agents, however, work autonomously to achieve set objectives, making decisions and executing tasks dynamically, adapting as new information becomes available.

Together, AI assistants and agents can enhance productivity and innovation in business environments. While assistants handle routine tasks, agents can drive strategic initiatives and problem-solving. This powerful combination has the potential to elevate organizations, making processes more efficient and professionals more effective.


Discover how to accelerate AI transformation with NVIDIA and Microsoft — from ignite.microsoft.com

Meet NVIDIA – The Engine of AI. From gaming to data science, self-driving cars to climate change, we’re tackling the world’s greatest challenges and transforming everyday life. The Microsoft and NVIDIA partnership gives Startups, ISVs, and Partners global access to the latest NVIDIA GPUs on-demand and comprehensive developer solutions to build, deploy and scale AI-enabled products and services.


Google + Meta + Apple New AI — from theneurondaily.com by Grant Harvey

What else Google announced:

  • Deep Research: New feature that can explore topics and compile reports.
  • Project Astra: AI agent that can use Google Search, Lens, and Maps, understands multiple languages, and has 10-minute conversation memory.
  • Project Mariner: A browser control agent that can complete web tasks (83.5% success rate on WebVoyager benchmark). Read more about Mariner here.
  • Agents to help you play (or test) video games.

AI Agents: Easier To Build, Harder To Get Right — from forbes.com by Andres Zunino

The swift progress of artificial intelligence (AI) has simplified the creation and deployment of AI agents with the help of new tools and platforms. Beneath the surface, however, deploying these systems comes with hidden challenges, particularly concerning ethics, fairness and the potential for bias.

The history of AI agents highlights the growing need for expertise to fully realize their benefits while effectively minimizing risks.

 

What Students Are Saying About Teachers Using A.I. to Grade — from nytimes.com by The Learning Network; via Claire Zau
Teenagers and educators weigh in on a recent question from The Ethicist.

Is it unethical for teachers to use artificial intelligence to grade papers if they have forbidden their students from using it for their assignments?

That was the question a teacher asked Kwame Anthony Appiah in a recent edition of The Ethicist. We posed it to students to get their take on the debate, and asked them their thoughts on teachers using A.I. in general.

While our Student Opinion questions are usually reserved for teenagers, we also heard from a few educators about how they are — or aren’t — using A.I. in the classroom. We’ve included some of their answers, as well.


OpenAI wants to pair online courses with chatbots — from techcrunch.com by Kyle Wiggers; via James DeVaney on LinkedIn

If OpenAI has its way, the next online course you take might have a chatbot component.

Speaking at a fireside on Monday hosted by Coeus Collective, Siya Raj Purohit, a member of OpenAI’s go-to-market team for education, said that OpenAI might explore ways to let e-learning instructors create custom “GPTs” that tie into online curriculums.

“What I’m hoping is going to happen is that professors are going to create custom GPTs for the public and let people engage with content in a lifelong manner,” Purohit said. “It’s not part of the current work that we’re doing, but it’s definitely on the roadmap.”


15 Times to use AI, and 5 Not to — from oneusefulthing.org by Ethan Mollick
Notes on the Practical Wisdom of AI Use

There are several types of work where AI can be particularly useful, given the current capabilities and limitations of LLMs. Though this list is based in science, it draws even more from experience. Like any form of wisdom, using AI well requires holding opposing ideas in mind: it can be transformative yet must be approached with skepticism, powerful yet prone to subtle failures, essential for some tasks yet actively harmful for others. I also want to caveat that you shouldn’t take this list too seriously except as inspiration – you know your own situation best, and local knowledge matters more than any general principles. With all that out of the way, below are several types of tasks where AI can be especially useful, given current capabilities—and some scenarios where you should remain wary.


Learning About Google Learn About: What Educators Need To Know — from techlearning.com by Ray Bendici
Google’s experimental Learn About platform is designed to create an AI-guided learning experience

Google Learn About is a new experimental AI-driven platform that provides digestible and in-depth knowledge about various topics, but showcases it all in an educational context. Described by Google as a “conversational learning companion,” it is essentially a Wikipedia-style chatbot/search engine, and then some.

In addition to having a variety of already-created topics and leading questions (in areas such as history, arts, culture, biology, and physics) the tool allows you to enter prompts using either text or an image. It then provides a general overview/answer, and then suggests additional questions, topics, and more to explore in regard to the initial subject.

The idea for student use is that the AI can help guide a deeper learning process rather than just provide static answers.


What OpenAI’s PD for Teachers Does—and Doesn’t—Do — from edweek.org by Olina Banerji
What’s the first thing that teachers dipping their toes into generative artificial intelligence should do?

They should start with the basics, according to OpenAI, the creator of ChatGPT and one of the world’s most prominent artificial intelligence research companies. Last month, the company launched an hour-long, self-paced online course for K-12 teachers about the definition, use, and harms of generative AI in the classroom. It was launched in collaboration with Common Sense Media, a national nonprofit that rates and reviews a wide range of digital content for its age appropriateness.

…the above article links to:

ChatGPT Foundations for K–12 Educators — from commonsense.org

This course introduces you to the basics of artificial intelligence, generative AI, ChatGPT, and how to use ChatGPT safely and effectively. From decoding the jargon to responsible use, this course will help you level up your understanding of AI and ChatGPT so that you can use tools like this safely and with a clear purpose.

Learning outcomes:

  • Understand what ChatGPT is and how it works.
  • Demonstrate ways to use ChatGPT to support your teaching practices.
  • Implement best practices for applying responsible AI principles in a school setting.

Takeaways From Google’s Learning in the AI Era Event — from edtechinsiders.substack.com by Sarah Morin, Alex Sarlin, and Ben Kornell
Highlights from Our Day at Google + Behind-the-Scenes Interviews Coming Soon!

  1. NotebookLM: The Start of an AI Operating System
  2. Google is Serious About AI and Learning
  3. Google’s LearnLM Now Available in AI Studio
  4. Collaboration is King
  5. If You Give a Teacher a Ferrari

Rapid Responses to AI — from the-job.beehiiv.com by Paul Fain
Top experts call for better data and more short-term training as tech transforms jobs.

AI could displace middle-skill workers and widen the wealth gap, says a landmark study, which calls for better data and more investment in continuing education to help workers make career pivots.

Ensuring That AI Helps Workers
Artificial intelligence has emerged as a general purpose technology with sweeping implications for the workforce and education. While it’s impossible to precisely predict the scope and timing of looming changes to the labor market, the U.S. should build its capacity to rapidly detect and respond to AI developments.
That’s the big-ticket framing of a broad new report from the National Academies of Sciences, Engineering, and Medicine. Congress requested the study, tapping an all-star committee of experts to assess the current and future impact of AI on the workforce.

“In contemplating what the future holds, one must approach predictions with humility,” the study says…

“AI could accelerate occupational polarization,” the committee said, “by automating more nonroutine tasks and increasing the demand for elite expertise while displacing middle-skill workers.”

The Kicker: “The education and workforce ecosystem has a responsibility to be intentional with how we value humans in an AI-powered world and design jobs and systems around that,” says Hsieh.


AI Predators: What Schools Should Know and Do — from techlearning.com by Erik Ofgang
AI is increasingly being used by predators to connect with underage students online. Yasmin London, global online safety expert at Qoria and a former member of the New South Wales Police Force in Australia, shares steps educators can take to protect students.

The threat from AI for students goes well beyond cheating, says Yasmin London, global online safety expert at Qoria and a former member of the New South Wales Police Force in Australia.

Increasingly at U.S. schools and beyond, AI is being used by predators to manipulate children. Students are also using AI to generate inappropriate images of other classmates or staff members. For a recent report, Qoria, a company that specializes in child digital safety and wellbeing products, surveyed 600 schools across North America, the UK, Australia, and New Zealand.


Why We Undervalue Ideas and Overvalue Writing — from aiczar.blogspot.com by Alexander “Sasha” Sidorkin

A student submits a paper that fails to impress stylistically yet approaches a worn topic from an angle no one has tried before. The grade lands at B minus, and the student learns to be less original next time. This pattern reveals a deep bias in higher education: ideas lose to writing every time.

This bias carries serious equity implications. Students from disadvantaged backgrounds, including first-generation college students, English language learners, and those from under-resourced schools, often arrive with rich intellectual perspectives but struggle with academic writing conventions. Their ideas – shaped by unique life experiences and cultural viewpoints – get buried under red ink marking grammatical errors and awkward transitions. We systematically undervalue their intellectual contributions simply because they do not arrive in standard academic packaging.


Google Scholar’s New AI Outline Tool Explained By Its Founder — from techlearning.com by Erik Ofgang
Google Scholar PDF reader uses Gemini AI to read research papers. The AI model creates direct links to the paper’s citations and a digital outline that summarizes the different sections of the paper.

Google Scholar has entered the AI revolution. Google Scholar PDF reader now utilizes generative AI powered by Google’s Gemini AI tool to create interactive outlines of research papers and provide direct links to sources within the paper. This is designed to make reading the relevant parts of the research paper more efficient, says Anurag Acharya, who co-founded Google Scholar on November 18, 2004, twenty years ago last month.


The Four Most Powerful AI Use Cases in Instructional Design Right Now — from drphilippahardman.substack.com by Dr. Philippa Hardman
Insights from ~300 instructional designers who have taken my AI & Learning Design bootcamp this year

  1. AI-Powered Analysis: Creating Detailed Learner Personas…
  2. AI-Powered Design: Optimising Instructional Strategies…
  3. AI-Powered Development & Implementation: Quality Assurance…
  4. AI-Powered Evaluation: Predictive Impact Assessment…

How Are New AI Tools Changing ‘Learning Analytics’? — from edsurge.com by Jeffrey R. Young
For a field that has been working to learn from the data trails students leave in online systems, generative AI brings new promises — and new challenges.

In other words, with just a few simple instructions to ChatGPT, the chatbot can classify vast amounts of student work and turn it into numbers that educators can quickly analyze.
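
To make the idea concrete, here is a minimal sketch of that kind of LLM-based classification, assuming the OpenAI Python SDK, a “gpt-4o-mini” model, and an invented three-label rubric; none of these specifics come from the article.

```python
# Minimal sketch: classify student answers with an LLM and tally the results.
# Assumptions (not from the article): the OpenAI Python SDK, the "gpt-4o-mini"
# model name, and a hypothetical three-label rubric.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ["on-track", "partial-understanding", "misconception"]  # hypothetical rubric

def classify(answer: str) -> str:
    """Ask the model to assign exactly one rubric label to a student answer."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Classify the student's answer as exactly one of: "
                        + ", ".join(LABELS) + ". Reply with the label only."},
            {"role": "user", "content": answer},
        ],
    )
    return completion.choices[0].message.content.strip()

# Turn a stack of answers into counts an educator can scan at a glance.
answers = ["Photosynthesis turns sunlight into chemical energy.",
           "Plants eat soil to grow."]
counts: dict[str, int] = {}
for a in answers:
    label = classify(a)
    counts[label] = counts.get(label, 0) + 1
print(counts)
```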

Findings from learning analytics research are also being used to help train new generative AI-powered tutoring systems.

Another big application is in assessment, says Pardos, the Berkeley professor. Specifically, new AI tools can be used to improve how educators measure and grade a student’s progress through course materials. The hope is that new AI tools will allow for replacing many multiple-choice exercises in online textbooks with fill-in-the-blank or essay questions.


Increasing AI Fluency Among Enterprise Employees, Senior Management & Executives — from learningguild.com by Bill Brandon

This article attempts, in these early days, to provide some specific guidelines for AI curriculum planning in enterprise organizations.

The two reports identified in the first paragraph help to answer an important question. What can enterprise L&D teams do to improve AI fluency in their organizations?

You might be surprised by how many software products have added AI features. Examples (to name a few) are productivity software (Microsoft 365 and Google Workspace); customer relationship management (Salesforce and Hubspot); human resources (Workday and Talentsoft); marketing and advertising (Adobe Marketing Cloud and Hootsuite); and communication and collaboration (Slack and Zoom). Look for more under those categories in software review sites.

 

From DSC:
I opened up a BRAND NEW box of cereal from Post the other day. As I looked down into the package, I realized that it was roughly half full. (This has happened many times before, but it struck me so much this time that I had to take pictures of it and post this item.)

 

Looks can be deceiving for sure. It looks like I should have been getting a full box of cereal…but no…only about half of the package was full. It’s another example of the shrinkflation of things — which can also be described as people deceptively ripping other people off. 

“As long as I’m earning $$, I don’t care how it impacts others.” <– That’s not me talking, but it’s increasingly the perspective that many Americans have these days. We don’t bother with ethics and morals…how old-fashioned can you get, right? We just want to make as much money as possible and to hell with how our actions/products are impacting others.

Another example from the food industry is one of the companies that I worked for in the 1990s — Kraft Foods. Kraft has not served people’s health well at all. Even when they tried to take noble steps to provide healthier foods, other food executives/companies in the industry wouldn’t hop on board. They just wanted to please Wall Street, not Main Street. So companies like Kraft have contributed to the current situations that we face which involve obesity, diabetes, heart attacks, and other ailments. (Not to mention increased health care costs.)

The gambling industry doesn’t give a rip about people either. Look out for the consequences.

And the cannabis industry joins the gambling industry...and they’re often right on the doorsteps of universities and colleges.

Bottom line reflection:
There are REAL ramifications when we don’t take Christ’s words/commands to love one another seriously (or even to care about someone at all). We’re experiencing such ramifications EVERY DAY now.

 

How AI is transforming learning for dyslexic students — from eschoolnews.com by Samay Bhojwani, University of Nebraska–Lincoln
As schools continue to adopt AI-driven tools, educators can close the accessibility gap and help dyslexic students thrive

Many traditional methods lack customization and don’t empower students to fully engage with content on their terms. Every dyslexic student experiences challenges differently, so a more personalized approach is essential for fostering comprehension, engagement, and academic growth.

Artificial intelligence is increasingly recognized for its potential to transform educational accessibility. By analyzing individual learning patterns, AI-powered tools can tailor content to meet each student’s specific needs. For dyslexic students, this can mean summarizing complex texts, providing auditory support, or even visually structuring information in ways that aid comprehension.


NotebookLM How-to Guide 2024 — from ai-supremacy.com by Michael Spencer and Alex McFarland
With Audio Version | A popular guide reloaded.

In this guide, I’ll show you:

  1. How to use the new advanced audio customization features
  2. Two specific workflows for synthesizing information (research papers and YouTube videos)
  3. Pro tips for maximizing results with any type of content
  4. Common pitfalls to avoid (learned these the hard way)

The State of Instructional Design 2024: A Field on the Brink of Disruption? — from drphilippahardman.substack.com by Dr. Philippa Hardman
My hot takes from a global survey I ran with Synthesia

As I mentioned on LinkedIn, earlier this week Synthesia published the results of a global survey that we ran together on the state of instructional design in 2024.


Boundless Socratic Learning: Google DeepMind’s Vision for AI That Learns Without Limits — by Giorgio Fazio

Google DeepMind researchers have unveiled a groundbreaking framework called Boundless Socratic Learning (BSL), a paradigm shift in artificial intelligence aimed at enabling systems to self-improve through structured language-based interactions. This approach could mark a pivotal step toward the elusive goal of artificial superintelligence (ASI), where AI systems drive their own development with minimal human input.

The promise of Boundless Socratic Learning lies in its ability to catalyze a shift from human-supervised AI to systems that evolve and improve autonomously. While significant challenges remain, the introduction of this framework represents a step toward the long-term goal of open-ended intelligence, where AI is not just a tool but a partner in discovery.


5 courses to take when starting out a career in Agentic AI — from techloy.com by David Adubiina
This will help you join the early train of experts who are using AI agents to solve real world problems.

This surge in demand is creating new opportunities for professionals equipped with the right skills. If you’re considering a career in this innovative field, the following five courses will provide a solid foundation in Agentic AI.



 

2024-11-22: The Race to the Top – Dario Amodei on AGI, Risks, and the Future of Anthropic — from emergentbehavior.co by Prakash (Ate-a-Pi)

Risks on the Horizon: ASL Levels
The two key risks Dario is concerned about are:

a) cyber, bio, radiological, nuclear (CBRN)
b) model autonomy

These risks are captured in Anthropic’s framework for understanding AI Safety Levels (ASL):

1. ASL-1: Narrow-task AI like Deep Blue (no autonomy, minimal risk).
2. ASL-2: Current systems like ChatGPT/Claude, which lack autonomy and don’t pose significant risks beyond information already accessible via search engines.
3. ASL-3: Agents arriving soon (potentially next year) that can meaningfully assist non-state actors in dangerous activities like cyber or CBRN (chemical, biological, radiological, nuclear) attacks. Security and filtering are critical at this stage to prevent misuse.
4. ASL-4: AI smart enough to evade detection, deceive testers, and assist state actors with dangerous projects. At this level the model is capable enough that anyone attempting something dangerous would want to use it. Mechanistic interpretability becomes crucial for verifying AI behavior.
5. ASL-5: AGI surpassing human intelligence in all domains, posing unprecedented challenges.

Anthropic’s if/then framework ensures proactive responses: if a model demonstrates danger, the team clamps down hard, enforcing strict controls.



Should You Still Learn to Code in an A.I. World? — from nytimes.com
Coding boot camps once looked like the golden ticket to an economically secure future. But as that promise fades, what should you do? Keep learning, until further notice.

Compared with five years ago, the number of active job postings for software developers has dropped 56 percent, according to data compiled by CompTIA. For inexperienced developers, the plunge is an even worse 67 percent.
“I would say this is the worst environment for entry-level jobs in tech, period, that I’ve seen in 25 years,” said Venky Ganesan, a partner at the venture capital firm Menlo Ventures.

For years, the career advice from everyone who mattered — the Apple chief executive Tim Cook, your mother — was “learn to code.” It felt like an immutable equation: Coding skills + hard work = job.

Now the math doesn’t look so simple.

Also see:

AI builds apps in 2 mins flat — where the Neuron mentions this excerpt about Lovable:

There’s a new coding startup in town, and it just MIGHT have everybody else shaking in their boots (we’ll qualify that in a sec, don’t worry).

It’s called Lovable, the “world’s first AI fullstack engineer.”

Lovable does all of that by itself. Tell it what you want to build in plain English, and it creates everything you need. Want users to be able to log in? One click. Need to store data? One click. Want to accept payments? You get the idea.

Early users are backing up these claims. One person even launched a startup that made Product Hunt’s top 10 using just Lovable.

As for us, we made a Wordle clone in 2 minutes with one prompt. Only edit needed? More words in the dictionary. It’s like, really easy y’all.


When to chat with AI (and when to let it work) — from aiwithallie.beehiiv.com by Allie K. Miller

Re: some ideas on how to use Notebook LM:

  • Turn your company’s annual report into an engaging podcast
  • Create an interactive FAQ for your product manual
  • Generate a timeline of your industry’s history from multiple sources
  • Produce a study guide for your online course content
  • Develop a Q&A system for your company’s knowledge base
  • Synthesize research papers into digestible summaries
  • Create an executive content briefing from multiple competitor blog posts
  • Generate a podcast discussing the key points of a long-form research paper

Introducing conversation practice: AI-powered simulations to build soft skills — from codesignal.com by Albert Sahakyan

From DSC:
I have to admit I’m a bit suspicious here, as the “conversation practice” product seems a bit too scripted at times, but I post it because the idea of using AI to practice soft skills development makes a great deal of sense:


 

Skill-Based Training: Embrace the Benefits; Stay Wary of the Hype — from learningguild.com by Paige Yousey

1. Direct job relevance
One of the biggest draws of skill-based training is its direct relevance to employees’ daily roles. By focusing on teaching job-specific skills, this approach helps workers feel immediately empowered to apply what they learn, leading to a quick payoff for both the individual and the organization. Yet, while this tight focus is a major benefit, it’s important to consider some potential drawbacks that could arise from an overly narrow approach.

Be wary of:

  • Overly Narrow Focus: Highly specialized training might leave employees with little room to apply their skills to broader challenges, limiting versatility and growth potential.
  • Risk of Obsolescence: Skills can quickly become outdated, especially in fast-evolving industries. L&D leaders should aim for regular updates to maintain relevance.
  • Neglect of Soft Skills: While technical skills are crucial, ignoring soft skills like communication and problem-solving may lead to a lack of balanced competency.

2. Enhanced job performance…
3. Addresses skill gaps…

…and several more areas to consider


Another item from Paige Yousey

5 Key EdTech Innovations to Watch — from learningguild.com by Paige Yousey

AI-driven course design

Strengths

  • Content creation and updates: AI streamlines the creation of training materials by identifying resource gaps and generating tailored content, while also refreshing existing materials based on industry trends and employee feedback to maintain relevance.
  • Data-driven insights: AI tools provide valuable analytics to inform course development and instructional strategies, helping learning designers identify effective practices and improve overall learning outcomes.
  • Efficiency: Automating repetitive tasks, such as learner assessments and administrative duties, enables L&D professionals to concentrate on developing impactful training programs and fostering learner engagement.

Concerns

  • Limited understanding of context: AI may struggle to understand the specific educational context or the unique needs of diverse learner populations, potentially hindering effectiveness.
  • Oversimplification of learning: AI may reduce complex educational concepts to simple metrics or algorithms, oversimplifying the learning process and neglecting deeper cognitive development.
  • Resistance to change: Learning leaders may face resistance from staff who are skeptical about integrating AI into their training practices.

Also from the Learning Guild, see:

Use Twine to Easily Create Engaging, Immersive Scenario-Based Learning — from learningguild.com by Bill Brandon

Scenario-based learning immerses learners in realistic scenarios that mimic real-world challenges they might face in their roles. These learning experiences are highly relevant and relatable. SBL is active learning. Instead of passively consuming information, learners actively engage with the content by making decisions and solving problems within the scenario. This approach enhances critical thinking and decision-making skills.

SBL can be more effective when storytelling techniques create a narrative that guides learners through the scenario to maintain engagement and make the learning memorable. Learners receive immediate feedback on their decisions and learn from their mistakes. Reflection can deepen their understanding. Branching scenarios simulate complex decision-making and show the outcomes of various actions through interactive scenarios where learner choices lead to different results.
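
Purely as an illustration of the branching structure described above (Twine authors build this visually rather than in code), here is a minimal sketch in Python; the scenario text, choices, and feedback are invented for the example.

```python
# Minimal sketch of a branching scenario: each node holds scenario text and the
# choices that lead to other nodes; leaf nodes end with feedback. The content
# below is invented for illustration.
SCENARIO = {
    "start": {
        "text": "A customer reports that their order arrived damaged. What do you do?",
        "choices": {
            "Apologize and arrange an immediate replacement": "replacement",
            "Ask them to file a claim with the shipping carrier": "deflect",
        },
    },
    "replacement": {
        "text": "The customer thanks you. Feedback: owning the problem preserved their trust.",
        "choices": {},
    },
    "deflect": {
        "text": "The customer escalates. Feedback: passing the problem along delayed a resolution.",
        "choices": {},
    },
}

def play(node_key: str = "start") -> None:
    """Walk the learner through the branches, printing feedback at each leaf."""
    node = SCENARIO[node_key]
    print(node["text"])
    if not node["choices"]:
        return  # leaf node: the scenario ends with its feedback
    options = list(node["choices"].items())
    for i, (label, _target) in enumerate(options, start=1):
        print(f"  {i}. {label}")
    pick = int(input("Choose an option number: ")) - 1
    play(options[pick][1])

if __name__ == "__main__":
    play()
```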

Embrace the Future: Why L&D Leaders Should Prioritize AI Digital Literacy — from learningguild.com by Dr. Erica McCaig

The role of L&D leaders in AI digital literacy
For L&D leaders, developing AI digital literacy within an organization requires a well-structured curriculum and development plan that equips employees with the knowledge, skills, and ethical grounding needed to thrive in an AI-augmented workplace. This curriculum should encompass a range of competencies that enhance technical understanding and foster a mindset ready for innovation and responsible use of AI. Key areas to focus on include:

  • Understanding AI Fundamentals: …
  • Proficiency with AI Tools: …
  • Ethical Considerations: …
  • Cultivating Critical Thinking: …
 

7 Legal Tech Trends To Watch In 2025 — from lexology.com by Sacha Kirk
Australia, United Kingdom | November 25, 2024

In-house legal teams are evolving from a traditional support function into proactive business enablers. New tools are helping legal departments enhance efficiency, improve compliance, and deliver greater strategic value.

Here’s a look at seven emerging trends that will shape legal tech in 2025 and insights on how in-house teams can capitalise on these innovations.

1. AI Solutions…
2. Regulatory Intelligence Platforms…

7. Self-Service Legal Tools and Knowledge Management
As the demand on in-house legal teams continues to grow, self-service tools are becoming indispensable for managing routine legal tasks. In 2025, these tools are expected to evolve further, enabling employees across the organisation to handle straightforward legal processes independently. Whether it’s accessing pre-approved templates, completing standard agreements, or finding answers to common legal queries, self-service platforms reduce the dependency on legal teams for everyday tasks.

Advanced self-service tools go beyond templates, incorporating intuitive workflows, approval pathways, and built-in guidance to ensure compliance with legal and organisational policies. By empowering business users to manage low-risk matters on their own, these tools free up legal teams to focus on complex and high-value work.
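
As a rough sketch of the routing logic such a self-service tool might apply (the request types, template paths, and risk threshold below are hypothetical, not drawn from the article):

```python
# Minimal sketch of self-service routing: low-risk, templated requests are served
# automatically; everything else is escalated to the legal team. All names and
# thresholds here are hypothetical.
PRE_APPROVED_TEMPLATES = {
    "nda": "templates/mutual_nda_v3.docx",
    "sow": "templates/standard_sow_v2.docx",
}
LOW_RISK_LIMIT = 25_000  # contract value below which no lawyer sign-off is required

def route_request(request_type: str, contract_value: int) -> str:
    """Decide whether a request can self-serve or needs in-house legal review."""
    template = PRE_APPROVED_TEMPLATES.get(request_type)
    if template and contract_value <= LOW_RISK_LIMIT:
        return f"self-service: use {template}"
    return "escalate: route to the in-house legal team for review"

print(route_request("nda", 10_000))   # self-service: use templates/mutual_nda_v3.docx
print(route_request("nda", 250_000))  # escalate: route to the in-house legal team for review
```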


 

 

Trade School Enrollment Surges Post-Pandemic, Outpacing Traditional Universities — from businesswire.com
New Report Highlights Growth in Healthcare and Culinary Arts Programs

CHICAGO–(BUSINESS WIRE)–A new report released today by Validated Insights, a higher education marketing firm, reveals a significant increase in trade school enrollment following the pandemic, with a 4.9% growth from 2020 to 2023. This surge contrasts sharply with a 0.6% decline in university enrollment during the same period, highlighting a growing preference for career-focused education.

The report highlights the diverse landscape of trade schools, with varying enrollment trends across different categories and subtypes. While some sectors face challenges, others, like Culinary Arts and Beauty and Wellness, present significant growth opportunities and shifting student attitudes.


A trend colleges might not want applicants to notice: It’s becoming easier to get in — from hechingerreport.org by Jon Marcus
Despite public perception, and for the first time in decades, acceptance rates are going up

As enrollment in colleges and universities continues to decline — down by more than 2 million students, or 10 percent, in the 10 years ending 2022 — they’re not only casting wider nets. Something else dramatic is happening to the college application process, for the first time in decades:

It’s becoming easier to get in.

Colleges and universities, on average, are admitting a larger proportion of their applicants than they did 20 years ago, new research by the conservative think tank the American Enterprise Institute finds.


 

The State of Instructional Design, 2024 — by Dr. Philippa Hardman
Four initial results from a global survey I ran with Synthesia

In September, I partnered with Synthesia to conduct a comprehensive survey exploring the evolving landscape of instructional design.

Our timing was deliberate: as we witness the rapid advancement of AI and increasing pressure on learning teams to drive mass re-skilling and deliver more with less, we wanted to understand how the role of instructional designers is changing.

Our survey focused on five key areas that we believed would help surface the most important data about the transformation of our field:

    1. Roles & Responsibilities: who’s designing learning experiences in 2024?
    2. Success Metrics: how do you and the organisations you work for measure the value of instructional design?
    3. Workload & Workflow: how much time do we spend on different aspects of our job, and why?
    4. Challenges & Barriers: what sorts of obstacles prevent us from producing optimal work?
    5. Tools & Technology: what tools do we use, and is the tooling landscape changing?
 

It’s The End Of The Legal Industry As We Know It — from artificiallawyer.com by Richard Tromans

It’s the end of the legal industry as we know it and I feel fine. I really do.

The legal industry as we know it is already over. The seismic event that triggered this evolutionary shift happened in November 2022. There’s no going back to a pre-genAI world. Change, incremental or otherwise, will be unstoppable. The only question is: at what pace will this change happen?

It’s clear that substantive change at the heart of the legal economy may take a long time – and we should never underestimate the challenge of overturning decades of deeply embedded cultural practices – but, at least it has begun.


AI: The New Legal Powerhouse — Why Lawyers Should Befriend The Machine To Stay Ahead — from today.westlaw.com

(October 24, 2024) – Jeremy Glaser and Sharzaad Borna of Mintz discuss waves of change in the legal profession brought on by AI, in areas such as billing, the work of support staff and junior associates, and ethics.

The dual nature of AI — excitement and fear
AI is evolving at lightning speed, sparking both wonder and worry. As it transforms industries and our daily lives, we are caught between the thrill of innovation and the jitters of uncertainty. Will AI elevate the human experience or just leave us in the dust? How will it impact our careers, privacy and sense of security?

Just as we witnessed with the rise of the internet — and later, social media — AI is poised to redefine how we work and live, bringing a mix of optimism and apprehension. While we grapple with AI’s implications, our clients expect us to lead the charge in leveraging it for their benefit.

However, this shift also means more competition for fewer entry-level jobs. Law schools will play a key role in helping students become more marketable by offering courses on AI tools and technology. Graduates with AI literacy will have an edge over their peers, as firms increasingly value associates who can collaborate effectively with AI tools.


Will YOU use ChatGPT voice mode to lie to your family? Brainyacts #244 — from thebrainyacts.beehiiv.com by Sam Douthit, Aristotle Jones, and Derek Warzel.

Small Law’s Secret Weapon: AI Courtroom Mock Battles — this excerpt is by Brainyacts author Josh Kubicki
As many of you know, this semester my law students have the opportunity to write the lead memo for this newsletter, each tackling issues that they believe are both timely and intriguing for our readers. This week’s essay presents a fascinating experiment conducted by three students who explored how small law firms might leverage ChatGPT in a safe, effective manner. They set up ChatGPT to simulate a mock courtroom, even assigning it the persona of a Seventh Circuit Court judge to stage a courtroom dialogue. It’s an insightful take on how adaptable technology like ChatGPT can offer unique advantages to smaller practices. They share other ideas as well. Enjoy!

The following excerpt was written by Sam Douthit, Aristotle Jones, and Derek Warzel.

One exciting example is a “Courtroom Persona AI” tool, which could let solo practitioners simulate mock trials and practice arguments with AI that mimics specific judges, local courtroom customs, or procedural quirks. Small firms, with their deep understanding of local courts and judicial styles, could take full advantage of this tool to prepare more accurate and relevant arguments. Unlike big firms that have to spread resources across jurisdictions, solo and small firms could use this AI-driven feedback to tailor their strategies closely to local court dynamics, making their preparations sharper and more strategic. Plus, not all solo or small firms have someone to practice with or bounce their ideas off of. For these practitioners, it’s a chance to level up their trial preparation without needing large teams or costly mock trials, gaining a practical edge where it counts most.

Some lawyers have already started to test this out, like the mock trial described here. One oversimplified and quick way to try this out is using the ChatGPT app.
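
For readers who would rather try the same exercise through the API than the ChatGPT app, here is a minimal sketch of the persona setup, assuming the OpenAI Python SDK and a “gpt-4o” model; the persona wording is invented and is not the students’ actual prompt.

```python
# Minimal sketch of the "Courtroom Persona AI" idea: give the model a judge
# persona via the system prompt, then rehearse oral argument against it.
# The persona text and model name are assumptions, not the students' prompt.
from openai import OpenAI

client = OpenAI()

JUDGE_PERSONA = (
    "You are playing a skeptical appellate judge on the Seventh Circuit. "
    "Interrupt with pointed questions about standing, precedent, and remedy, "
    "and push back on any argument that lacks support in the record."
)

history = [{"role": "system", "content": JUDGE_PERSONA}]

def argue(point: str) -> str:
    """Send one piece of oral argument and return the simulated judge's response."""
    history.append({"role": "user", "content": point})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(argue("Your Honor, the district court erred in excluding our expert's testimony..."))
```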


The Human in AI-Assisted Dispute Resolution — from jdsupra.com by Epiq

Accountability for Legal Outputs
AI is set to replace some of the dispute resolution work formerly done by lawyers. This work includes summarising documents, drafting legal contracts and filings, using generative AI to produce arbitration submissions for an oral hearing, and, in the not-too-distant future, ingesting transcripts from hearings and comparing them to the documentary record to spot inconsistencies.

As Pendell put it, “There’s quite a bit of lawyering going on there.” So, what’s left for humans?

The common feature in all those examples is that humans must make the judgement call. Lawyers won’t just turn over a first draft of an AI-generated contract or filing to another party or court. The driving factor is that law is still a regulated profession, and regulators will hold humans accountable.

The idea that young lawyers must do routine, menial work as a rite of passage needs to be updated. Today’s AI tools put lawyers at the top of an accountability chain, allowing them to practice law using judgement and strategy as they supervise the work of AI. 


Small law firms embracing AI as they move away from hourly billing — from legalfutures.co.uk by Neil Rose

Small law firms have embraced artificial intelligence (AI), with document drafting or automation the most popular application, according to new research.

The survey also found expectations of a continued move away from hourly billing to fixed fees.

Legal technology provider Clio commissioned UK-specific research from Censuswide as an adjunct to its annual US-focused Legal Trends report, polling 500 solicitors, 82% of whom worked at firms with 20 lawyers or fewer.

Some 96% of them reported that their firms have adopted AI into their processes in some way – 56% of them said it was widespread or universal – while 62% anticipated an increase in AI usage over the next 12 months.

 

Is Generative AI and ChatGPT healthy for Students? — from ai-supremacy.com by Michael Spencer and Nick Potkalitsky
Beyond Text Generation: How AI Ignites Student Discovery and Deep Thinking, according to firsthand experiences of Teachers and AI researchers like Nick Potkalitsky.

After two years of intensive experimentation with AI in education, I am witnessing something amazing unfolding before my eyes. While much of the world fixates on AI’s generative capabilities—its ability to create essays, stories, and code—my students have discovered something far more powerful: exploratory AI, a dynamic partner in investigation and critique that’s transforming how they think.

They’ve moved beyond the initial fascination with AI-generated content to something far more sophisticated: using AI as an exploratory tool for investigation, interrogation, and intellectual discovery.

Instead of the much-feared “shutdown” of critical thinking, we’re witnessing something extraordinary: the emergence of what I call “generative thinking”—a dynamic process where students learn to expand, reshape, and evolve their ideas through meaningful exploration with AI tools. Here I consciously reposition the term “generative” as a process of human origination, although one ultimately spurred on by machine input.


A Road Map for Leveraging AI at a Smaller Institution — from er.educause.edu by Dave Weil and Jill Forrester
Smaller institutions and others may not have the staffing and resources needed to explore and take advantage of developments in artificial intelligence (AI) on their campuses. This article provides a roadmap to help institutions with more limited resources advance AI use on their campuses.

The following activities can help smaller institutions better understand AI and lay a solid foundation that will allow them to benefit from it.

  1. Understand the impact…
  2. Understand the different types of AI tools…
  3. Focus on institutional data and knowledge repositories…

Smaller institutions do not need to fear being left behind in the wake of rapid advancements in AI technologies and tools. By thinking intentionally about how AI will impact the institution, becoming familiar with the different types of AI tools, and establishing a strong data and analytics infrastructure, institutions can establish the groundwork for AI success. The five fundamental activities of coordinating, learning, planning and governing, implementing, and reviewing and refining can help smaller institutions make progress on their journey to use AI tools to gain efficiencies and improve students’ experiences and outcomes while keeping true to their institutional missions and values.



AI school opens – learners are not good or bad but fast and slow — from donaldclarkplanb.blogspot.com by Donald Clark

That is what they are doing here. Lesson plans focus on learners rather than the traditional teacher-centric model. Assessing prior strengths and weaknesses, personalising to focus more on weaknesses and less on things known or mastered. It’s adaptive, personalised learning. The idea that everyone should learn at exactly the same pace, within the same timescale, is slightly ridiculous, ruled by the need to timetable a one-to-many classroom model.

For the first time in the history of our species we have technology that performs some of the tasks of teaching. We have reached a pivot point where this can be tried and tested. My feeling is that we’ll see a lot more of this, as parents and general teachers can delegate a lot of the exposition and teaching of the subject to the technology. We may just see a breakthrough that transforms education.


Agentic AI Named Top Tech Trend for 2025 — from campustechnology.com by David Ramel

Agentic AI will be the top tech trend for 2025, according to research firm Gartner. The term describes autonomous machine “agents” that move beyond query-and-response generative chatbots to do enterprise-related tasks without human guidance.

More realistic challenges that the firm has listed elsewhere include:

    • Agentic AI proliferating without governance or tracking;
    • Agentic AI making decisions that are not trustworthy;
    • Agentic AI relying on low-quality data;
    • Employee resistance; and
    • Agentic-AI-driven cyberattacks enabling “smart malware.”





All or nothing at Educause24 — from onedtech.philhillaa.com by Kevin Kelly
Looking for specific solutions at the conference exhibit hall, with an educator focus

Here are some notable trends:

  • Alignment with campus policies: …
  • Choose your own AI adventure: …
  • Integrate AI throughout a workflow: …
  • Moving from prompt engineering to bot building: …
  • More complex problem-solving: …


Not all AI news is good news. In particular, AI has exacerbated the problem of fraudulent enrollment – i.e., rogue actors who use fake or stolen identities to steal financial aid funding with no intention of completing coursework.

The consequences are very real, including financial aid funding going to criminal enterprises, enrollment estimates getting dramatically skewed, and legitimate students being blocked from registering for classes that appear “full” due to large numbers of fraudulent enrollments.


 

 

How Legal Education Must Evolve In The Age Of AI: Insights From An In-House Legal Innovator — from abovethelaw.com by Olga Mack
Traditional legal education has remained largely unchanged for decades, focusing heavily on theoretical knowledge and case law analysis.

As we stand on the brink of a new era defined by artificial intelligence (AI) and data-driven decision-making, the question arises: How should legal education adapt to prepare the next generation of lawyers for the challenges ahead?

Here are three unconventional, actionable insights from our conversation that highlight the need for a radical rethinking of legal education.

  1. Integrate AI Education Into Every Aspect Of Legal Training…
  2. Adopt A ‘Technology-Agnostic’ Approach To AI Training…
  3. Redefine Success In Legal Education To Include Technological Proficiency…
 

10 Graphic Design Trends to Pay Attention to in 2025 — from graphicmama.com by Al Boicheva

We’ll go on a hunt for bold, abstract, and naturalist designs, cutting-edge AI tools, and so much more, all pushing boundaries and rethinking what we already know about design. In 2025, we will see new ways to animate ideas, revisit retro styles with a modern twist, and embrace clean, but sophisticated aesthetics. For designers and design enthusiasts alike, these trends are set to bring a new level of excitement to the world of design.

Here are the Top 10 Graphic Design Trends in 2025:

 
© 2024 | Daniel Christian