2024-11-22: The Race to the Top — Dario Amodei on AGI, Risks, and the Future of Anthropic — from emergentbehavior.co by Prakash (Ate-a-Pi)

Risks on the Horizon: ASL Levels
The two key risks Dario is concerned about are:

a) cyber, bio, radiological, nuclear (CBRN)
b) model autonomy

These risks are captured in Anthropic’s framework for understanding AI Safety Levels (ASL):

1. ASL-1: Narrow-task AI like Deep Blue (no autonomy, minimal risk).
2. ASL-2: Current systems like ChatGPT/Claude, which lack autonomy and don’t pose significant risks beyond information already accessible via search engines.
3. ASL-3: Agents arriving soon (potentially next year) that can meaningfully assist non-state actors in dangerous activities like cyber or CBRN (chemical, biological, radiological, nuclear) attacks. Security and filtering are critical at this stage to prevent misuse.
4. ASL-4: AI smart enough to evade detection, deceive testers, and assist state actors with dangerous projects. At this level, the model itself is capable enough to be the tool of choice for anyone attempting something dangerous. Mechanistic interpretability becomes crucial for verifying AI behavior.
5. ASL-5: AGI surpassing human intelligence in all domains, posing unprecedented challenges.

Anthropic’s if/then framework ensures proactive responses: if a model demonstrates danger, the team clamps down hard, enforcing strict controls.



Should You Still Learn to Code in an A.I. World? — from nytimes.com
Coding boot camps once looked like the golden ticket to an economically secure future. But as that promise fades, what should you do? Keep learning, until further notice.

Compared with five years ago, the number of active job postings for software developers has dropped 56 percent, according to data compiled by CompTIA. For inexperienced developers, the plunge is an even worse 67 percent.
“I would say this is the worst environment for entry-level jobs in tech, period, that I’ve seen in 25 years,” said Venky Ganesan, a partner at the venture capital firm Menlo Ventures.

For years, the career advice from everyone who mattered — the Apple chief executive Tim Cook, your mother — was “learn to code.” It felt like an immutable equation: Coding skills + hard work = job.

Now the math doesn’t look so simple.

Also see:

AI builds apps in 2 mins flat — where The Neuron mentions this excerpt about Lovable:

There’s a new coding startup in town, and it just MIGHT have everybody else shaking in their boots (we’ll qualify that in a sec, don’t worry).

It’s called Lovable, the “world’s first AI fullstack engineer.”

Lovable does all of that by itself. Tell it what you want to build in plain English, and it creates everything you need. Want users to be able to log in? One click. Need to store data? One click. Want to accept payments? You get the idea.

Early users are backing up these claims. One person even launched a startup that made Product Hunt’s top 10 using just Lovable.

As for us, we made a Wordle clone in 2 minutes with one prompt. Only edit needed? More words in the dictionary. It’s like, really easy y’all.


When to chat with AI (and when to let it work) — from aiwithallie.beehiiv.com by Allie K. Miller

Re: some ideas on how to use Notebook LM:

  • Turn your company’s annual report into an engaging podcast
  • Create an interactive FAQ for your product manual
  • Generate a timeline of your industry’s history from multiple sources
  • Produce a study guide for your online course content
  • Develop a Q&A system for your company’s knowledge base
  • Synthesize research papers into digestible summaries
  • Create an executive content briefing from multiple competitor blog posts
  • Generate a podcast discussing the key points of a long-form research paper

Introducing conversation practice: AI-powered simulations to build soft skills — from codesignal.com by Albert Sahakyan

From DSC:
I have to admit I’m a bit suspicious here, as the “conversation practice” product seems a bit too scripted at times, but I’m posting it because the idea of using AI to practice soft skills makes a great deal of sense:


 

Skill-Based Training: Embrace the Benefits; Stay Wary of the Hype — from learningguild.com by Paige Yousey

1. Direct job relevance
One of the biggest draws of skill-based training is its direct relevance to employees’ daily roles. By focusing on teaching job-specific skills, this approach helps workers feel immediately empowered to apply what they learn, leading to a quick payoff for both the individual and the organization. Yet, while this tight focus is a major benefit, it’s important to consider some potential drawbacks that could arise from an overly narrow approach.

Be wary of:

  • Overly Narrow Focus: Highly specialized training might leave employees with little room to apply their skills to broader challenges, limiting versatility and growth potential.
  • Risk of Obsolescence: Skills can quickly become outdated, especially in fast-evolving industries. L&D leaders should aim for regular updates to maintain relevance.
  • Neglect of Soft Skills: While technical skills are crucial, ignoring soft skills like communication and problem-solving may lead to a lack of balanced competency.

2. Enhanced job performance…
3. Addresses skill gaps…

…and several more areas to consider


Another item from Paige Yousey

5 Key EdTech Innovations to Watch — from learningguild.com by Paige Yousey

AI-driven course design

Strengths

  • Content creation and updates: AI streamlines the creation of training materials by identifying resource gaps and generating tailored content, while also refreshing existing materials based on industry trends and employee feedback to maintain relevance.
  • Data-driven insights: AI tools provide valuable analytics to inform course development and instructional strategies, helping learning designers identify effective practices and improve overall learning outcomes.
  • Efficiency: Automating repetitive tasks, such as learner assessments and administrative duties, enables L&D professionals to concentrate on developing impactful training programs and fostering learner engagement.

Concerns

  • Limited understanding of context: AI may struggle to understand the specific educational context or the unique needs of diverse learner populations, potentially hindering effectiveness.
  • Oversimplification of learning: AI may reduce complex educational concepts to simple metrics or algorithms, oversimplifying the learning process and neglecting deeper cognitive development.
  • Resistance to change: Learning leaders may face resistance from staff who are skeptical about integrating AI into their training practices.

Also from the Learning Guild, see:

Use Twine to Easily Create Engaging, Immersive Scenario-Based Learning — from learningguild.com by Bill Brandon

Scenario-based learning immerses learners in realistic scenarios that mimic real-world challenges they might face in their roles. These learning experiences are highly relevant and relatable. SBL is active learning. Instead of passively consuming information, learners actively engage with the content by making decisions and solving problems within the scenario. This approach enhances critical thinking and decision-making skills.

SBL is more effective when storytelling techniques create a narrative that guides learners through the scenario, maintaining engagement and making the learning memorable. Learners receive immediate feedback on their decisions and learn from their mistakes; reflection deepens their understanding. Branching scenarios simulate complex decision-making: learner choices lead to different outcomes, so learners see the consequences of their actions play out.
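The branching structure described above is, at bottom, a small decision graph. As a rough illustration (a generic Python sketch, not Twine’s own Twee format; the scenario text and node names are invented), each node holds narrative text plus the choices that lead onward:

```python
# Minimal branching-scenario graph: each node has narrative text and
# a mapping from choice labels to the next node; leaf nodes (empty
# choices) carry the outcome/feedback the learner sees.
SCENARIO = {
    "start": {
        "text": "An angry customer emails about a late shipment.",
        "choices": {
            "apologize and offer a refund": "refund",
            "explain company policy": "policy",
        },
    },
    "refund": {"text": "The customer thanks you and renews.", "choices": {}},
    "policy": {"text": "The customer escalates to your manager.", "choices": {}},
}

def play(path, scenario=SCENARIO, node="start"):
    """Follow a sequence of choice labels; return the nodes visited."""
    visited = [node]
    for choice in path:
        node = scenario[node]["choices"][choice]
        visited.append(node)
    return visited
```

Authoring tools like Twine generate essentially this graph from the writer’s passages and links; the point is that each distinct path produces a distinct, inspectable outcome.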

Embrace the Future: Why L&D Leaders Should Prioritize AI Digital Literacy — from learningguild.com by Dr. Erica McCaig

The role of L&D leaders in AI digital literacy
For L&D leaders, developing AI digital literacy within an organization requires a well-structured curriculum and development plan that equips employees with the knowledge, skills, and ethical grounding needed to thrive in an AI-augmented workplace. This curriculum should encompass a range of competencies that enhance technical understanding and foster a mindset ready for innovation and responsible use of AI. Key areas to focus on include:

  • Understanding AI Fundamentals: …
  • Proficiency with AI Tools: …
  • Ethical Considerations: …
  • Cultivating Critical Thinking: …
 

7 Legal Tech Trends To Watch In 2025 — from lexology.com by Sacha Kirk
Australia, United Kingdom November 25 2024

In-house legal teams are changing from a traditional support function to becoming proactive business enablers. New tools are helping legal departments enhance efficiency, improve compliance, and deliver greater strategic value.

Here’s a look at seven emerging trends that will shape legal tech in 2025 and insights on how in-house teams can capitalise on these innovations.

1. AI Solutions…
2. Regulatory Intelligence Platforms…

7. Self-Service Legal Tools and Knowledge Management
As the demand on in-house legal teams continues to grow, self-service tools are becoming indispensable for managing routine legal tasks. In 2025, these tools are expected to evolve further, enabling employees across the organisation to handle straightforward legal processes independently. Whether it’s accessing pre-approved templates, completing standard agreements, or finding answers to common legal queries, self-service platforms reduce the dependency on legal teams for everyday tasks.

Advanced self-service tools go beyond templates, incorporating intuitive workflows, approval pathways, and built-in guidance to ensure compliance with legal and organisational policies. By empowering business users to manage low-risk matters on their own, these tools free up legal teams to focus on complex and high-value work.


 

 

Trade School Enrollment Surges Post-Pandemic, Outpacing Traditional Universities — from businesswire.com
New Report Highlights Growth in Healthcare and Culinary Arts Programs

CHICAGO–(BUSINESS WIRE)–A new report released today by Validated Insights, a higher education marketing firm, reveals a significant increase in trade school enrollment following the pandemic, with a 4.9% growth from 2020 to 2023. This surge contrasts sharply with a 0.6% decline in university enrollment during the same period, highlighting a growing preference for career-focused education.

The report highlights the diverse landscape of trade schools, with varying enrollment trends across different categories and subtypes. While some sectors face challenges, others, like Culinary Arts and Beauty and Wellness, present significant growth opportunities and shifting student attitudes.


A trend colleges might not want applicants to notice: It’s becoming easier to get in — from hechingerreport.org by Jon Marcus
Despite public perception, and for the first time in decades, acceptance rates are going up

As enrollment in colleges and universities continues to decline — down by more than 2 million students, or 10 percent, in the 10 years ending 2022 — they’re not only casting wider nets. Something else dramatic is happening to the college application process, for the first time in decades:

It’s becoming easier to get in.

Colleges and universities, on average, are admitting a larger proportion of their applicants than they did 20 years ago, new research by the conservative think tank the American Enterprise Institute finds.


 

The State of Instructional Design, 2024 — by Dr. Philippa Hardman
Four initial results from a global survey I ran with Synthesia

In September, I partnered with Synthesia to conduct a comprehensive survey exploring the evolving landscape of instructional design.

Our timing was deliberate: as we witness the rapid advancement of AI and increasing pressure on learning teams to drive mass re-skilling and deliver more with less, we wanted to understand how the role of instructional designers is changing.

Our survey focused on five key areas that we believed would help surface the most important data about the transformation of our field:

    1. Roles & Responsibilities: who’s designing learning experiences in 2024?
    2. Success Metrics: how do you and the organisations you work for measure the value of instructional design?
    3. Workload & Workflow: how much time do we spend on different aspects of our job, and why?
    4. Challenges & Barriers: what sorts of obstacles prevent us from producing optimal work?
    5. Tools & Technology: what tools do we use, and is the tooling landscape changing?
 

It’s The End Of The Legal Industry As We Know It — from artificiallawyer.com by Richard Tromans

It’s the end of the legal industry as we know it and I feel fine. I really do.

The legal industry as we know it is already over. The seismic event that triggered this evolutionary shift happened in November 2022. There’s no going back to a pre-genAI world. Change, incremental or otherwise, will be unstoppable. The only question is: at what pace will this change happen?

It’s clear that substantive change at the heart of the legal economy may take a long time – and we should never underestimate the challenge of overturning decades of deeply embedded cultural practices – but, at least it has begun.


AI: The New Legal Powerhouse — Why Lawyers Should Befriend The Machine To Stay Ahead — from today.westlaw.com

(October 24, 2024) – Jeremy Glaser and Sharzaad Borna of Mintz discuss waves of change in the legal profession brought on by AI, in areas such as billing, the work of support staff and junior associates, and ethics.

The dual nature of AI — excitement and fear
AI is evolving at lightning speed, sparking both wonder and worry. As it transforms industries and our daily lives, we are caught between the thrill of innovation and the jitters of uncertainty. Will AI elevate the human experience or just leave us in the dust? How will it impact our careers, privacy and sense of security?

Just as we witnessed with the rise of the internet — and later, social media — AI is poised to redefine how we work and live, bringing a mix of optimism and apprehension. While we grapple with AI’s implications, our clients expect us to lead the charge in leveraging it for their benefit.

However, this shift also means more competition for fewer entry-level jobs. Law schools will play a key role in helping students become more marketable by offering courses on AI tools and technology. Graduates with AI literacy will have an edge over their peers, as firms increasingly value associates who can collaborate effectively with AI tools.


Will YOU use ChatGPT voice mode to lie to your family? Brainyacts #244 — from thebrainyacts.beehiiv.com by Sam Douthit, Aristotle Jones, and Derek Warzel.

Small Law’s Secret Weapon: AI Courtroom Mock Battles — this excerpt is by Brainacts author Josh Kubicki
As many of you know, this semester my law students have the opportunity to write the lead memo for this newsletter, each tackling issues that they believe are both timely and intriguing for our readers. This week’s essay presents a fascinating experiment conducted by three students who explored how small law firms might leverage ChatGPT in a safe, effective manner. They set up ChatGPT to simulate a mock courtroom, even assigning it the persona of a Seventh Circuit Court judge to stage a courtroom dialogue. It’s an insightful take on how adaptable technology like ChatGPT can offer unique advantages to smaller practices. They share other ideas as well. Enjoy!

The following excerpt was written by Sam Douthit, Aristotle Jones, and Derek Warzel.

One exciting example is a “Courtroom Persona AI” tool, which could let solo practitioners simulate mock trials and practice arguments with AI that mimics specific judges, local courtroom customs, or procedural quirks. Small firms, with their deep understanding of local courts and judicial styles, could take full advantage of this tool to prepare more accurate and relevant arguments. Unlike big firms that have to spread resources across jurisdictions, solo and small firms could use this AI-driven feedback to tailor their strategies closely to local court dynamics, making their preparations sharper and more strategic. Plus, not all solo or small firms have someone to practice with or bounce their ideas off of. For these practitioners, it’s a chance to level up their trial preparation without needing large teams or costly mock trials, gaining a practical edge where it counts most.

Some lawyers have already started to test this out, like the mock trial tested out here. One oversimplified and quick way to try this out is using the ChatGPT app.
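Mechanically, assigning a judge persona is just a system prompt in the standard chat-message format. A minimal sketch follows (the helper name and persona wording are my own illustration, not the students’ actual prompt, and no API call is made here):

```python
# Build an OpenAI-style chat message list that casts the model as a
# Seventh Circuit judge presiding over a mock oral argument.
def build_mock_courtroom(issue):
    return [
        {
            "role": "system",
            "content": (
                "You are a judge on the U.S. Court of Appeals for the "
                "Seventh Circuit. Preside over oral argument: ask pointed "
                "questions, raise likely counterarguments, and rule only "
                "after counsel has responded."
            ),
        },
        {
            "role": "user",
            "content": f"Counsel for appellant, arguing: {issue}",
        },
    ]
```

The same message list works in the ChatGPT app (pasted as a custom instruction) or via the API; the persona lives entirely in the system message, so swapping judges or jurisdictions means editing one string.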


The Human in AI-Assisted Dispute Resolution — from jdsupra.com by Epiq

Accountability for Legal Outputs
AI is set to replace some of the dispute resolution work formerly done by lawyers. This work includes summarising documents, drafting legal contracts and filings, using generative AI to produce arbitration submissions for an oral hearing, and, in the not-too-distant future, ingesting transcripts from hearings and comparing them to the documentary record to spot inconsistencies.

As Pendell put it, “There’s quite a bit of lawyering going on there.” So, what’s left for humans?

The common feature in all those examples is that humans must make the judgement call. Lawyers won’t just turn over a first draft of an AI-generated contract or filing to another party or court. The driving factor is that law is still a regulated profession, and regulators will hold humans accountable.

The idea that young lawyers must do routine, menial work as a rite of passage needs to be updated. Today’s AI tools put lawyers at the top of an accountability chain, allowing them to practice law using judgement and strategy as they supervise the work of AI. 


Small law firms embracing AI as they move away from hourly billing — from legalfutures.co.uk by Neil Rose

Small law firms have embraced artificial intelligence (AI), with document drafting or automation the most popular application, according to new research.

The survey also found expectations of a continued move away from hourly billing to fixed fees.

Legal technology provider Clio commissioned UK-specific research from Censuswide as an adjunct to its annual US-focused Legal Trends report, polling 500 solicitors, 82% of whom worked at firms with 20 lawyers or fewer.

Some 96% of them reported that their firms have adopted AI into their processes in some way – 56% of them said it was widespread or universal – while 62% anticipated an increase in AI usage over the next 12 months.

 

Is Generative AI and ChatGPT healthy for Students? — from ai-supremacy.com by Michael Spencer and Nick Potkalitsky
Beyond Text Generation: How AI Ignites Student Discovery and Deep Thinking, according to firsthand experiences of Teachers and AI researchers like Nick Potkalitsky.

After two years of intensive experimentation with AI in education, I am witnessing something amazing unfolding before my eyes. While much of the world fixates on AI’s generative capabilities—its ability to create essays, stories, and code—my students have discovered something far more powerful: exploratory AI, a dynamic partner in investigation and critique that’s transforming how they think.

They’ve moved beyond the initial fascination with AI-generated content to something far more sophisticated: using AI as an exploratory tool for investigation, interrogation, and intellectual discovery.

Instead of the much-feared “shutdown” of critical thinking, we’re witnessing something extraordinary: the emergence of what I call “generative thinking”—a dynamic process where students learn to expand, reshape, and evolve their ideas through meaningful exploration with AI tools. Here I consciously reposition the term “generative” as a process of human origination, although one ultimately spurred on by machine input.


A Road Map for Leveraging AI at a Smaller Institution — from er.educause.edu by Dave Weil and Jill Forrester
Smaller institutions and others may not have the staffing and resources needed to explore and take advantage of developments in artificial intelligence (AI) on their campuses. This article provides a roadmap to help institutions with more limited resources advance AI use on their campuses.

The following activities can help smaller institutions better understand AI and lay a solid foundation that will allow them to benefit from it.

  1. Understand the impact…
  2. Understand the different types of AI tools…
  3. Focus on institutional data and knowledge repositories…

Smaller institutions do not need to fear being left behind in the wake of rapid advancements in AI technologies and tools. By thinking intentionally about how AI will impact the institution, becoming familiar with the different types of AI tools, and establishing a strong data and analytics infrastructure, institutions can establish the groundwork for AI success. The five fundamental activities of coordinating, learning, planning and governing, implementing, and reviewing and refining can help smaller institutions make progress on their journey to use AI tools to gain efficiencies and improve students’ experiences and outcomes while keeping true to their institutional missions and values.

Also from Educause, see:


AI school opens – learners are not good or bad but fast and slow — from donaldclarkplanb.blogspot.com by Donald Clark

That is what they are doing here. Lesson plans focus on learners rather than the traditional teacher-centric model: assessing prior strengths and weaknesses, then personalising to focus more on weaknesses and less on material already known or mastered. It’s adaptive, personalised learning. The idea that everyone should learn at exactly the same pace, within the same timescale, is slightly ridiculous; it is a constraint driven by the need to timetable a one-to-many classroom model.

For the first time in the history of our species we have technology that performs some of the tasks of teaching. We have reached a pivot point where this can be tried and tested. My feeling is that we’ll see a lot more of this, as parents and general teachers can delegate a lot of the exposition and teaching of the subject to the technology. We may just see a breakthrough that transforms education.


Agentic AI Named Top Tech Trend for 2025 — from campustechnology.com by David Ramel

Agentic AI will be the top tech trend for 2025, according to research firm Gartner. The term describes autonomous machine “agents” that move beyond query-and-response generative chatbots to do enterprise-related tasks without human guidance.

More realistic challenges that the firm has listed elsewhere include:

    • Agentic AI proliferating without governance or tracking;
    • Agentic AI making decisions that are not trustworthy;
    • Agentic AI relying on low-quality data;
    • Employee resistance; and
    • Agentic-AI-driven cyberattacks enabling “smart malware.”

Also from campustechnology.com, see:


Three items from edcircuit.com:


All or nothing at Educause24 — from onedtech.philhillaa.com by Kevin Kelly
Looking for specific solutions at the conference exhibit hall, with an educator focus

Here are some notable trends:

  • Alignment with campus policies: …
  • Choose your own AI adventure: …
  • Integrate AI throughout a workflow: …
  • Moving from prompt engineering to bot building: …
  • More complex problem-solving: …


Not all AI news is good news. In particular, AI has exacerbated the problem of fraudulent enrollment: rogue actors who use fake or stolen identities to steal financial aid funding, with no intention of completing coursework.

The consequences are very real, including financial aid funding going to criminal enterprises, enrollment estimates getting dramatically skewed, and legitimate students being blocked from registering for classes that appear “full” due to large numbers of fraudulent enrollments.


 

 

How Legal Education Must Evolve In The Age Of AI: Insights From An In-House Legal Innovator — from abovethelaw.com by Olga Mack
Traditional legal education has remained largely unchanged for decades, focusing heavily on theoretical knowledge and case law analysis.

As we stand on the brink of a new era defined by artificial intelligence (AI) and data-driven decision-making, the question arises: How should legal education adapt to prepare the next generation of lawyers for the challenges ahead?

Here are three unconventional, actionable insights from our conversation that highlight the need for a radical rethinking of legal education.

  1. Integrate AI Education Into Every Aspect Of Legal Training…
  2. Adopt A ‘Technology-Agnostic’ Approach To AI Training…
  3. Redefine Success In Legal Education To Include Technological Proficiency…
 

10 Graphic Design Trends to Pay Attention to in 2025 — from graphicmama.com by Al Boicheva

We’ll go on a hunt for bold, abstract, and naturalist designs, cutting-edge AI tools, and so much more, all pushing boundaries and rethinking what we already know about design. In 2025, we will see new ways to animate ideas, revisit retro styles with a modern twist, and embrace clean, but sophisticated aesthetics. For designers and design enthusiasts alike, these trends are set to bring a new level of excitement to the world of design.

Here are the Top 10 Graphic Design Trends in 2025:

 

Along these same lines, see:

Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku

We’re also introducing a groundbreaking new capability in public beta: computer use. Available today on the API, developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental—at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time.
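For developers curious what “directing Claude to use a computer” looks like in practice, the beta exposes it as a tool definition inside an ordinary messages request. Here is a hedged sketch (the tool type, model string, and beta flag match the launch announcement, but betas change; verify against Anthropic’s current API reference, and note that nothing is actually sent here):

```python
# Sketch of a computer-use request body for Anthropic's public beta.
# The "computer_20241022" tool type and the beta flag in the comment
# below are from the launch docs; confirm before relying on them.
def computer_use_request(task):
    return {
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 1024,
        "tools": [{
            "type": "computer_20241022",
            "name": "computer",
            # The virtual screen Claude "sees"; the coordinates in its
            # click/move actions are interpreted against these dimensions.
            "display_width_px": 1280,
            "display_height_px": 800,
        }],
        "messages": [{"role": "user", "content": task}],
    }

# The request would then be sent with the beta header, roughly:
#   client.beta.messages.create(**computer_use_request("open a browser"),
#                               betas=["computer-use-2024-10-22"])
```

The model replies with tool-use actions (screenshot, click, type) that your own code must execute and report back, which is exactly why the security caveats in the next item matter.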


ZombAIs: From Prompt Injection to C2 with Claude Computer Use — from embracethered.com by Johann Rehberger

A few days ago, Anthropic released Claude Computer Use, which is a model + code that allows Claude to control a computer. It takes screenshots to make decisions, can run bash commands and so forth.

It’s cool, but also very dangerous: Claude Computer Use lets the AI run commands on machines autonomously, posing severe risks if exploited via prompt injection.

This blog post demonstrates that it’s possible to leverage prompt injection to achieve, old school, command and control (C2) when giving novel AI systems access to computers.

We discussed one way to get malware onto a Claude Computer Use host via prompt injection. There are countless others; another is to have Claude write the malware from scratch and compile it. Yes, it can write C code, compile it, and run it. There are many other options.

TrustNoAI.

And again, remember: do not run unauthorized code on systems that you do not own or are not authorized to operate on.

Also relevant here, see:


Perplexity Grows, GPT Traffic Surges, Gamma Dominates AI Presentations – The AI for Work Top 100: October 2024 — from flexos.work by Daan van Rossum
Perplexity continues to gain users despite recent controversies. Five out of six GPTs see traffic boosts. This month’s highest gainers include Gamma, Blackbox, Runway, and more.


Growing Up: Navigating Generative AI’s Early Years – AI Adoption Report — from ai.wharton.upenn.edu by Jeremy Korst, Stefano Puntoni, & Mary Purk

From a survey with more than 800 senior business leaders, this report’s findings indicate that weekly usage of Gen AI has nearly doubled from 37% in 2023 to 72% in 2024, with significant growth in previously slower-adopting departments like Marketing and HR. Despite this increased usage, businesses still face challenges in determining the full impact and ROI of Gen AI. Sentiment reports indicate leaders have shifted from feelings of “curiosity” and “amazement” to more positive sentiments like “pleased” and “excited,” and concerns about AI replacing jobs have softened. Participants were full-time employees working in large commercial organizations with 1,000 or more employees.


Apple study exposes deep cracks in LLMs’ “reasoning” capabilities — from arstechnica.com by Kyle Orland
Irrelevant red herrings lead to “catastrophic” failure of logical inference.

For a while now, companies like OpenAI and Google have been touting advanced “reasoning” capabilities as the next big step in their latest artificial intelligence models. Now, though, a new study from six Apple engineers shows that the mathematical “reasoning” displayed by advanced large language models can be extremely brittle and unreliable in the face of seemingly trivial changes to common benchmark problems.

The fragility highlighted in these new results supports previous research suggesting that LLMs’ use of probabilistic pattern matching lacks the formal understanding of underlying concepts needed for truly reliable mathematical reasoning. “Current LLMs are not capable of genuine logical reasoning,” the researchers hypothesize based on these results. “Instead, they attempt to replicate the reasoning steps observed in their training data.”


Google CEO says more than a quarter of the company’s new code is created by AI — from businessinsider.in by Hugh Langley

  • More than a quarter of new code at Google is made by AI and then checked by employees.
  • Google is doubling down on AI internally to make its business more efficient.

Top Generative AI Chatbots by Market Share – October 2024 


Bringing developer choice to Copilot with Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s o1-preview — from github.blog

We are bringing developer choice to GitHub Copilot with Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s o1-preview and o1-mini. These new models will be rolling out—first in Copilot Chat, with OpenAI o1-preview and o1-mini available now, Claude 3.5 Sonnet rolling out progressively over the next week, and Google’s Gemini 1.5 Pro in the coming weeks. From Copilot Workspace to multi-file editing to code review, security autofix, and the CLI, we will bring multi-model choice across many of GitHub Copilot’s surface areas and functions soon.

 

From DSC:
The following reflections were catalyzed by Jeff Selingo’s Next posting from 10/22, specifically the item:

  • Student fees for athletics, dark money in college sports, and why this all matters to every student, every college.

All of this has big risks for institutions. But whenever I talk to faculty and administrators on campuses about this, many will wave me away and say, “Well, I’m not a college sports fan” or “We’re a Division III school, so all of this doesn’t impact us.”

Nothing is further from the truth, as we explored on a recent episode of the Future U. podcast, where we welcomed in Matt Brown, editor of the Extra Points newsletter, which looks at academic and financial issues in college sports.

As we learned, despite the siloed nature of higher ed, everything is connected to athletics: research, academics, market position. Institutions can rise and fall on the backs of their athletics programs – and we’re not talking about wins and losses, but real budget dollars.

And if you want to know about the impact on students, look no further than the news out of Clemson this week. It is following several other universities in adopting an “athletics fee”: $300 a year. It won’t be the last.  

Give a listen to this episode of Future U. if you want to catch up quick on this complicated subject, and while you’re at it, subscribe wherever you get your podcasts.


Clemson approves new athletics fee for students. Here’s what we know — from sports.yahoo.com by Chapel Fowler
How much are student fees at other schools?

That’s true in the state of South Carolina, when comparing the annual fees of Clemson ($300) and USC ($172) to Coastal Carolina ($2,090). And it holds up nationally, too.



From DSC:
The Bible talks a lot about idols… and I can't help but wonder: have sports become an idol in our nation?

Don’t get me wrong. Sports can and should be fun for us to play. I played many an hour of sports in my youth and I occasionally play some sports these days. Plus, sports are excellent for helping us keep in shape and take care of our bodies. Sports can help us connect with others and make some fun/good memories with our friends.

So there’s much good to playing sports. But have we elevated sports to places they were never meant to be? To roles they were never meant to play?

 

DC: I'm really hoping that a variety of AI-based tools, technologies, and services will significantly help with our Access to Justice (#A2J) issues here in America. So this article by Kristen Sonday at Thomson Reuters caught my eye.

***

AI for Legal Aid: How to empower clients in need — from thomsonreuters.com by Kristen Sonday
In the second part of this series, we look at how AI-driven technologies can empower the legal aid clients who may be most in need

It's hard to overstate the impact that artificial intelligence (AI) is expected to have on helping low-income individuals achieve better access to justice. And for those legal services organizations (LSOs) that serve on the front lines, too often without sufficient funding, staff, or technology, AI presents perhaps their best opportunity to close the justice gap. Given the ability of AI-driven tools to streamline operations, minimize administrative work, reallocate talent, and help LSOs serve clients more effectively, implementing these tools is essential.

Innovative LSOs leading the way

Already, many innovative LSOs are taking the lead, using new technology for tasks ranging from complex analysis to AI-driven legal research. Here are two compelling examples of how AI is already helping LSOs empower low-income clients in need.

#A2J #justice #tools #vendors #society #legal #lawfirms #AI #legaltech #legalresearch

Criminal charges, even those eligible for simple, free expungement, can prevent someone from obtaining housing or employment. This is a simple barrier to overcome, if only help is available.

AI offers the capacity to provide quick, accurate information to a vast audience, particularly to those in urgent need. AI can also help reduce the burden on our legal staff…

 


A legal tech executive explains how AI will fully change the way lawyers work — from legaldive.com by Justin Bachman
A senior executive with ContractPodAi discusses how legal AI poses economic benefits for in-house departments and disruption risks for law firm billing models.

Everything you thought you knew about being a lawyer is about to change.

Legal Dive spoke with Podinic about the transformative nature of AI, including the financial risks to lawyers' billing models. AI will also force general counsel and chief legal officers to consider how their lawyers will use the time it is expected to free up once they no longer have to do administrative and low-level work.


Legaltech will augment lawyers’ capabilities but not replace them, says GlobalData — from globaldata.com

  • Traditionally, law firms have been wary of adopting technologies that could compromise data privacy and legal accuracy; however, attitudes are changing
  • Despite concerns about technology replacing humans in the legal sector, legaltech is more likely to augment the legal profession than replace it entirely
  • Generative AI will accelerate digital transformation in the legal sector
 

Fresh Voices on Legal Tech with Megan Ma — from legaltalknetwork.com by Dennis Kennedy, Tom Mighell, and Dr. Megan Ma

Episode Notes
As genAI continues to edge into all facets of our lives, Dr. Megan Ma has been exploring integrations for this technology in legal, but, more importantly, how it can help lawyers and law students hone their legal skills. Dennis and Tom talk with Dr. Ma about her work and career path and many of the latest developments in legal tech. They take a deep dive into a variety of burgeoning AI tools and trends, and Dr. Ma discusses how her interdisciplinary mindset has helped her develop a unique perspective on the possibilities for AI in the legal profession and beyond.

Legal tech disruption: Doing it on purpose — from localgovernmentlawyer.co.uk
Thomson Reuters looks at the role that a legal technology roadmap can play in improving the operations of in-house legal departments.

Disruption in the legal industry remains a powerful force – from the death of the billable hour to robot lawyers and generative AI. Leaders are facing weighty issues that demand long-term, visionary thinking and that will change the way legal professionals do their jobs.

With half of in-house legal departments increasing their use of legal technology tools, many GCs are taking the initiative to address continued, growing expectations from the business for systems that can make operations better. How can you prepare for a tech or process change so that people come along with you, rather than living in constant fire-fighting mode?

 

Employers Say Students Need AI Skills. What If Students Don’t Want Them? — from insidehighered.com by Ashley Mowreader
Colleges and universities are considering new ways to incorporate generative AI into teaching and learning, but not every student is on board with the tech yet. Experts weigh in on the necessity of AI in career preparation and higher education’s role in preparing students for jobs of the future.

Among the 5,025-plus survey respondents, around 2 percent (n=93) provided free responses to the question on AI policy and use in the classroom. Over half (55) of those responses were flat-out refusals to engage with AI. A few said they don't know how to use AI or are not familiar with the tools, which limits their ability to apply them appropriately to coursework.

But as generative AI becomes more ingrained into the workplace and higher education, a growing number of professors and industry experts believe this will be something all students need, in their classes and in their lives beyond academia.

From DSC:
I used to teach a Foundations of Information Technology course. Some of the students didn't want to be there at the start, as it was a required class for non-CS majors. But after seeing what various applications and technologies could do for them, a good portion of those same folks changed their minds. Not all, though. Some students (2 percent sounds about right) asserted that they would never use these technologies in their futures. Good luck with that, I thought to myself. There's hardly a job out there that doesn't use some sort of technology.

And I still think that today — if not more so. If students want good jobs, they will need to learn how to use AI-based tools and technologies. I’m not sure there’s much of a choice. And I don’t think there’s much of a choice for the rest of us either — whether we’re still working or not. 

So in looking at the title of the article — “Employers Say Students Need AI Skills. What If Students Don’t Want Them?” — those of us who have spent any time working within the world of business already know the answer.

#Reinvent #Skills #StayingRelevant #Surviving #Workplace + several other categories/tags apply.


For those folks who have tried AI:

Skills: However, genAI may also be helpful in building skills to retain a job or secure a new one. People who had used genAI tools were more than twice as likely to think that these tools could help them learn new skills that may be useful at work or in locating a new job. Specifically, among those who had not used genAI tools, 23 percent believed that these tools might help them learn new skills, whereas 50 percent of those who had used the tools thought they might be helpful in acquiring useful skills (a highly statistically significant difference, after controlling for demographic traits).

Source: Federal Reserve Bank of New York

 

Why Jensen Huang and Marc Benioff see ‘gigantic’ opportunity for agentic AI — from venturebeat.com by Taryn Plumb

Going forward, the opportunity for AI agents will be “gigantic,” according to Nvidia founder and CEO Jensen Huang.

Already, progress is “spectacular and surprising,” with AI development moving faster and faster and the industry getting into the “flywheel zone” that technology needs to advance, Huang said in a fireside chat at Salesforce’s flagship event Dreamforce this week.

"This is an extraordinary time," Huang said while on stage with Marc Benioff, Salesforce chair, CEO and co-founder. "At no time in history has technology moved faster than Moore's Law. We're moving way faster than Moore's Law, arguably Moore's Law squared."

“We’ll have agents working with agents, agents working with us,” said Huang.

 
© 2025 | Daniel Christian