Is Generative AI and ChatGPT healthy for Students? — from ai-supremacy.com by Michael Spencer and Nick Potkalitsky
Beyond Text Generation: How AI Ignites Student Discovery and Deep Thinking, according to firsthand experiences of Teachers and AI researchers like Nick Potkalitsky.

After two years of intensive experimentation with AI in education, I am witnessing something amazing unfolding before my eyes. While much of the world fixates on AI’s generative capabilities—its ability to create essays, stories, and code—my students have discovered something far more powerful: exploratory AI, a dynamic partner in investigation and critique that’s transforming how they think.

They’ve moved beyond the initial fascination with AI-generated content to something far more sophisticated: using AI as an exploratory tool for investigation, interrogation, and intellectual discovery.

Instead of the much-feared “shutdown” of critical thinking, we’re witnessing something extraordinary: the emergence of what I call “generative thinking”—a dynamic process where students learn to expand, reshape, and evolve their ideas through meaningful exploration with AI tools. Here I consciously reposition the term “generative” as a process of human origination, although one ultimately spurred on by machine input.


A Road Map for Leveraging AI at a Smaller Institution — from er.educause.edu by Dave Weil and Jill Forrester
Smaller institutions and others may not have the staffing and resources needed to explore and take advantage of developments in artificial intelligence (AI) on their campuses. This article provides a roadmap to help institutions with more limited resources advance AI use on their campuses.

The following activities can help smaller institutions better understand AI and lay a solid foundation that will allow them to benefit from it.

  1. Understand the impact…
  2. Understand the different types of AI tools…
  3. Focus on institutional data and knowledge repositories…

Smaller institutions do not need to fear being left behind in the wake of rapid advancements in AI technologies and tools. By thinking intentionally about how AI will impact the institution, becoming familiar with the different types of AI tools, and establishing a strong data and analytics infrastructure, institutions can establish the groundwork for AI success. The five fundamental activities of coordinating, learning, planning and governing, implementing, and reviewing and refining can help smaller institutions make progress on their journey to use AI tools to gain efficiencies and improve students’ experiences and outcomes while keeping true to their institutional missions and values.

Also from Educause, see:


AI school opens – learners are not good or bad but fast and slow — from donaldclarkplanb.blogspot.com by Donald Clark

That is what they are doing here. Lesson plans focus on learners rather than the traditional teacher-centric model: assessing prior strengths and weaknesses, then personalising to focus more on weaknesses and less on what is already known or mastered. It’s adaptive, personalised learning. The idea that everyone should learn at exactly the same pace, within the same timescale, is slightly ridiculous, driven by the need to timetable a one-to-many classroom model.

For the first time in the history of our species we have technology that performs some of the tasks of teaching. We have reached a pivot point where this can be tried and tested. My feeling is that we’ll see a lot more of this, as parents and general teachers can delegate a lot of the exposition and teaching of the subject to the technology. We may just see a breakthrough that transforms education.


Agentic AI Named Top Tech Trend for 2025 — from campustechnology.com by David Ramel

Agentic AI will be the top tech trend for 2025, according to research firm Gartner. The term describes autonomous machine “agents” that move beyond query-and-response generative chatbots to do enterprise-related tasks without human guidance.

More realistic challenges that the firm has listed elsewhere include:

    • Agentic AI proliferating without governance or tracking;
    • Agentic AI making decisions that are not trustworthy;
    • Agentic AI relying on low-quality data;
    • Employee resistance; and
    • Agentic-AI-driven cyberattacks enabling “smart malware.”

Also from campustechnology.com, see:


Three items from edcircuit.com:


All or nothing at Educause24 — from onedtech.philhillaa.com by Kevin Kelly
Looking for specific solutions at the conference exhibit hall, with an educator focus

Here are some notable trends:

  • Alignment with campus policies: …
  • Choose your own AI adventure: …
  • Integrate AI throughout a workflow: …
  • Moving from prompt engineering to bot building: …
  • More complex problem-solving: …


Not all AI news is good news. In particular, AI has exacerbated the problem of fraudulent enrollment: rogue actors who use fake or stolen identities to steal financial aid funding, with no intention of completing coursework.

The consequences are very real, including financial aid funding going to criminal enterprises, enrollment estimates getting dramatically skewed, and legitimate students being blocked from registering for classes that appear “full” due to large numbers of fraudulent enrollments.


 

 

How Legal Education Must Evolve In The Age Of AI: Insights From An In-House Legal Innovator — from abovethelaw.com by Olga Mack
Traditional legal education has remained largely unchanged for decades, focusing heavily on theoretical knowledge and case law analysis.

As we stand on the brink of a new era defined by artificial intelligence (AI) and data-driven decision-making, the question arises: How should legal education adapt to prepare the next generation of lawyers for the challenges ahead?

Here are three unconventional, actionable insights from our conversation that highlight the need for a radical rethinking of legal education.

  1. Integrate AI Education Into Every Aspect Of Legal Training…
  2. Adopt A ‘Technology-Agnostic’ Approach To AI Training…
  3. Redefine Success In Legal Education To Include Technological Proficiency…
 



Google’s worst nightmare just became reality — from aidisruptor.ai by Alex McFarland
OpenAI just launched an all-out assault on traditional search engines.

Google’s worst nightmare just became reality. OpenAI didn’t just add search to ChatGPT – they’ve launched an all-out assault on traditional search engines.

It’s the beginning of the end for search as we know it.

Let’s be clear about what’s happening: OpenAI is fundamentally changing how we’ll interact with information online. While Google has spent 25 years optimizing for ad revenue and delivering pages of blue links, OpenAI is building what users actually need – instant, synthesized answers from current sources.

The rollout is calculated and aggressive: ChatGPT Plus and Team subscribers get immediate access, followed by Enterprise and Education users in weeks, and free users in the coming months. This staged approach is about systematically dismantling Google’s search dominance.




Open for AI: India Tech Leaders Build AI Factories for Economic Transformation — from blogs.nvidia.com
Yotta Data Services, Tata Communications, E2E Networks and Netweb are among the providers building and offering NVIDIA-accelerated infrastructure and software, with deployments expected to double by year’s end.


 

AI Tutors Double Rates of Learning in Less Learning Time — from drphilippahardman.substack.com by Dr. Philippa Hardman
Inside Harvard’s new groundbreaking study

Conclusion
This Harvard study provides robust evidence that AI tutoring, when thoughtfully designed, can significantly enhance learning outcomes. The combination of doubled learning gains, increased engagement, and reduced time to competency suggests we’re seeing just the beginning of AI’s potential in education, and that this potential is significant.

If this data is anything to go by, and if we – as humans – are open and willing to act on it, it’s possible AI will have a significant, and for some deeply positive, impact on how we design and deliver learning experiences.

That said, as we look forward, the question shouldn’t just be “How can AI enhance current educational methods?” but also “How might AI transform the very nature of learning itself?” With continued research and careful implementation, we could be moving toward an era of education that’s not only more effective but also more accessible than ever before.


Three Quick Examples of Teaching with and about Generative AI — from derekbruff.org by Derek Bruff

  • Text-to-Podcast.
  • Assigning Students to Groups.
  • AI Acceptable Use Scale.

Also from Derek’s blog, see:


From Mike Sharples on LinkedIn: 


ChatGPT’s free voice wizard — from wondertools.substack.com by Jeremy Caplan
How and why to try the new Advanced Voice Mode

7 surprisingly practical ways to use voice AI
Opening up ChatGPT’s Advanced Voice Mode (AVM) is like conjuring a tutor eager to help with whatever simple — or crazy — query you throw at it. Talking is more fluid and engaging than typing, especially if you’re out and about. It’s not a substitute for human expertise, but AVM provides valuable machine intelligence.

  • Get a virtual museum tour. …
  • Chat with historical figures….
  • Practice languages. …
  • Explore books. …
  • Others…


Though not AI-related, this is along the lines of edtech:


…which links to:

 

Introducing QuizBot: An Innovative AI-Assisted Assessment in Legal Education — from papers.ssrn.com by Sean A Harrington

Abstract

This Article explores an innovative approach to assessment in legal education: an AI-assisted quiz system implemented in an AI & the Practice of Law course. The system employs a Socratic method-inspired chatbot to engage students in substantive conversations about course materials, providing a novel method for evaluating student learning and engagement. The Article examines the structure and implementation of this system, including its grading methodology and rubric, and discusses its benefits and challenges. Key advantages of the AI-assisted quiz system include enhanced student engagement with course materials, practical experience in AI interaction for future legal practice, immediate feedback and assessment, and alignment with the Socratic method tradition in law schools. The system also presents challenges, particularly in ensuring fairness and consistency in AI-generated questions, maintaining academic integrity, and balancing AI assistance with human oversight in grading.

The Article further explores the pedagogical implications of this innovation, including a shift from memorization to conceptual understanding, the encouragement of critical thinking through AI interaction, and the preparation of students for AI-integrated legal practice. It also considers future directions for this technology, such as integration with other law school courses, potential for longitudinal assessment of student progress, and implications for bar exam preparation and continuing legal education. Ultimately, this Article argues that AI-assisted assessment systems can revolutionize legal education by providing more frequent, targeted, and effective evaluation of student learning. While challenges remain, the benefits of such systems align closely with the evolving needs of the legal profession. The Article concludes with a call for further research and broader implementation of AI-assisted assessment in law schools to fully understand its impact and potential in preparing the next generation of legal professionals for an AI-integrated legal landscape.

Keywords: Legal Education, Artificial Intelligence, Assessment, Socratic Method, Chatbot, Law School Innovation, Educational Technology, Legal Pedagogy, AI-Assisted Learning, Legal Technology, Student Engagement, Formative Assessment, Critical Thinking, Legal Practice, Educational Assessment, Law School Curriculum, Bar Exam Preparation, Continuing Legal Education, Legal Ethics, Educational Analytics


How Legal Startup Genie AI Raises $17.8 Million with Just 13 Slides — from aisecret.us

Genie AI, a London-based legal tech startup, was founded in 2017 by Rafie Faruq and Nitish Mutha. The company has been at the forefront of revolutionizing the legal industry by leveraging artificial intelligence to automate and enhance legal document drafting and review processes. The recent funding round, led by Google Ventures and Khosla Ventures, marks a significant milestone in Genie AI’s growth trajectory.


In-house legal teams are adopting legal tech at lower rate than law firms: survey — from canadianlawyermag.com
The report suggests in-house teams face more barriers to integrating new tools

Law firms are adopting generative artificial intelligence tools at a higher rate than in-house legal departments, but both report similar levels of concerns about data security and ethical implications, according to a report on legal tech usage released Wednesday.

Legal tech company Appara surveyed 443 legal professionals in Canada across law firms and in-house legal departments over the summer, including lawyers, paralegals, legal assistants, law clerks, conveyancers, and notaries.

Twenty-five percent of respondents who worked at law firms said they’ve already invested in generative AI tools, with 24 percent reporting they plan to invest within the following year. In contrast, only 15 percent of respondents who work in-house have invested in these tools, with 26 percent planning investments in the future.


The end of courts? — from jordanfurlong.substack.com by Jordan Furlong
Civil justice systems aren’t serving the public interest. It’s time to break new ground and chart paths towards fast and fair dispute resolution that will meet people’s actual needs.

We need to start simple. System design can get extraordinarily complex very quickly, and complexity is our enemy at this stage. Tom O’Leary nicely inverted Deming’s axiom with a question of his own: “We want the system to work for [this group]. What would need to happen for that to be true?”

If we wanted civil justice systems to work for the ordinary people who enter them seeking solutions to their problems — as opposed to the professionals who administer and make a living off those systems — what would those systems look like? What would be their features? I can think of at least three:

  • Fair: …
  • Fast: …
  • Fine: …

100-Day Dispute Resolution: New Era ADR is Changing the Game (Rich Lee, CEO)

New Era ADR CEO Rich Lee makes a return appearance to Technically Legal to talk about the company’s cutting-edge platform revolutionizing dispute resolution. Rich first came on the podcast in 2021 right as the company launched. Rich discusses the company’s mission to provide a faster, more efficient, and cost-effective alternative to traditional litigation and arbitration, the company’s growth and what he has learned from a few years in.

Key takeaways:

  • New Era ADR offers a unique platform for resolving disputes in under 100 days, significantly faster than traditional methods.
  • The platform leverages technology to streamline processes, reduce costs, and enhance accessibility for all parties involved.
  • New Era ADR boasts a diverse pool of experienced and qualified neutrals, ensuring fair and impartial resolutions.
  • The company’s commitment to innovation is evident in its use of data and technology to drive efficiency and transparency.
 

Next-Generation Durable Skills Assessment — from gettingsmart.com by Nate McClennen

Key Points

  • Emphasizing the use of AI, VR, and simulation games, the methods in this article enhance the evaluation of durable skills, making them more accessible and practical for real-world applications.
  • The integration of educational frameworks and workplace initiatives highlights the importance of partnerships in developing reliable systems for assessing transferable skills.

 

Half of Higher Ed Institutions Now Use AI for Outcomes Tracking, But Most Lag in Implementing Comprehensive Learner Records — from prnewswire.com; via GSV

SALT LAKE CITY, Oct. 22, 2024 /PRNewswire/ — Instructure, the leading learning ecosystem and UPCEA, the online and professional education association, announced the results of a survey on whether institutions are leveraging AI to improve learner outcomes and manage records, along with the specific ways these tools are being utilized. Overall, the study revealed interest in the potential of these technologies is far outpacing adoption. Most respondents are heavily involved in developing learner experiences and tracking outcomes, though nearly half report their institutions have yet to adopt AI-driven tools for these purposes. The research also found that only three percent of institutions have implemented Comprehensive Learner Records (CLRs), which provide a complete overview of an individual’s lifelong learning experiences.


New Survey Says U.S. Teachers Colleges Lag on AI Training. Here are 4 Takeaways — from the74million.org; via GSV
Most preservice teachers’ programs lack policies on using AI, CRPE finds, and are likely unready to teach future educators about the field.

In the nearly two years since generative artificial intelligence burst into public consciousness, U.S. schools of education have not kept pace with the rapid changes in the field, a new report suggests.

Only a handful of teacher training programs are moving quickly enough to equip new K-12 teachers with a grasp of AI fundamentals — and fewer still are helping future teachers grapple with larger issues of ethics and what students need to know to thrive in an economy dominated by the technology.

The report, from the Center on Reinventing Public Education, a think tank at Arizona State University, tapped leaders at more than 500 U.S. education schools, asking how their faculty and preservice teachers are learning about AI. Through surveys and interviews, researchers found that just one in four institutions now incorporates training on innovative teaching methods that use AI. Most lack policies on using AI tools, suggesting that they probably won’t be ready to teach future educators about the intricacies of the field anytime soon.



The 5 Secret Hats Teachers are Wearing Right Now (Thanks to AI!) — from aliciabankhofer.substack.com by Alicia Bankhofer
New, unanticipated roles for educators sitting in the same boat

As beta testers, we’re shaping the tools of tomorrow. As researchers, we’re pioneering new pedagogical approaches. As ethical guardians, we’re ensuring that AI enhances rather than compromises the educational experience. As curators, we’re guiding students through the wealth of information AI provides. And as learners ourselves, we’re staying at the forefront of educational innovation.


 

 

AI-governed robots can easily be hacked — from theaivalley.com by Barsee
PLUS: Sam Altman’s new company “World” introduced…

In a groundbreaking study, researchers from Penn Engineering showed how AI-powered robots can be manipulated to ignore safety protocols, allowing them to perform harmful actions despite normally rejecting dangerous task requests.

What did they find?

  • Researchers found previously unknown security vulnerabilities in AI-governed robots and are working to address these issues to ensure the safe use of large language models (LLMs) in robotics.
  • Their newly developed algorithm, RoboPAIR, reportedly achieved a 100% jailbreak rate by bypassing the safety protocols on three different AI robotic systems in a few days.
  • Using RoboPAIR, researchers were able to manipulate test robots into performing harmful actions, like bomb detonation and blocking emergency exits, simply by changing how they phrased their commands.

Why does it matter?

This research highlights the importance of spotting weaknesses in AI systems to improve their safety, allowing us to test and train them to prevent potential harm.

From DSC:
Great! Just what we wanted to hear. But does it surprise anyone? Even so…we move forward at warp speeds.


From DSC:
So, given the above item, does the next item make you a bit nervous as well? I saw someone on Twitter/X exclaim, “What could go wrong?”  I can’t say I didn’t feel the same way.

Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku — from anthropic.com

We’re also introducing a groundbreaking new capability in public beta: computer use. Available today on the API, developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental—at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time.

Per The Rundown AI:

The Rundown: Anthropic just introduced a new capability called ‘computer use’, alongside upgraded versions of its AI models, which enables Claude to interact with computers by viewing screens, typing, moving cursors, and executing commands.

Why it matters: While many hoped for Opus 3.5, Anthropic’s Sonnet and Haiku upgrades pack a serious punch. Plus, with the new computer use embedded right into its foundation models, Anthropic just sent a warning shot to tons of automation startups—even if the capabilities aren’t earth-shattering… yet.
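For developers curious what "computer use" looks like in practice, it is enabled by declaring a tool in a Messages API request. The sketch below only builds the request body and headers without making a network call; the tool type and beta flag are the ones Anthropic documented at launch, while the display dimensions and prompt are illustrative assumptions:

```python
# Sketch of a Messages API request body with the "computer use" tool enabled.
# "computer_20241022" is the tool type Anthropic documented at launch; the
# display dimensions and the prompt text are illustrative assumptions.
payload = {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1024,
    "tools": [
        {
            "type": "computer_20241022",
            "name": "computer",
            "display_width_px": 1280,   # size of the virtual screen Claude "sees"
            "display_height_px": 800,
        }
    ],
    "messages": [
        {"role": "user", "content": "Open a browser and check today's weather."}
    ],
}

# The request must also opt in to the beta via this header:
headers = {"anthropic-beta": "computer-use-2024-10-22"}

print(payload["tools"][0]["type"])
```

In the actual loop, Claude responds with tool-use actions (screenshot, click, type) that the developer's own code executes and reports back; the model never touches the machine directly.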

Also related/see:

  • What is Anthropic’s AI Computer Use? — from ai-supremacy.com by Michael Spencer
    Task automation, AI at the intersection of coding and AI agents take on new frenzied importance heading into 2025 for the commercialization of Generative AI.
  • New Claude, Who Dis? — from theneurondaily.com
    Anthropic just dropped two new Claude models…oh, and Claude can now use your computer.
  • When you give a Claude a mouse — from oneusefulthing.org by Ethan Mollick
    Some quick impressions of an actual agent

Introducing Act-One — from runwayml.com
A new way to generate expressive character performances using simple video inputs.

Per Lore by Nathan Lands:

What makes Act-One special? It can capture the soul of an actor’s performance using nothing but a simple video recording. No fancy motion capture equipment, no complex face rigging, no army of animators required. Just point a camera at someone acting, and watch as their exact expressions, micro-movements, and emotional nuances get transferred to an AI-generated character.

Think about what this means for creators: you could shoot an entire movie with multiple characters using just one actor and a basic camera setup. The same performance can drive characters with completely different proportions and looks, while maintaining the authentic emotional delivery of the original performance. We’re witnessing the democratization of animation tools that used to require millions in budget and years of specialized training.

Also related/see:


Google to buy nuclear power for AI datacentres in ‘world first’ deal — from theguardian.com
Tech company orders six or seven small nuclear reactors from California’s Kairos Power

Google has signed a “world first” deal to buy energy from a fleet of mini nuclear reactors to generate the power needed for the rise in use of artificial intelligence.

The US tech corporation has ordered six or seven small nuclear reactors (SMRs) from California’s Kairos Power, with the first due to be completed by 2030 and the remainder by 2035.

Related:


ChatGPT Topped 3 Billion Visits in September — from similarweb.com

After the extreme peak and summer slump of 2023, ChatGPT has been setting new traffic highs since May

ChatGPT has been topping its web traffic records for months now, with September 2024 traffic up 112% year-over-year (YoY) to 3.1 billion visits, according to Similarweb estimates. That’s a change from last year, when traffic to the site went through a boom-and-bust cycle.
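As a quick back-of-the-envelope check on those figures (not Similarweb's methodology): 3.1 billion visits at 112% year-over-year growth implies roughly 1.46 billion visits in September 2023.

```python
# Invert the YoY growth figure to recover the prior-year baseline.
sept_2024 = 3.1e9       # visits, per Similarweb estimate
yoy_growth = 1.12       # 112% year-over-year increase
sept_2023 = sept_2024 / (1 + yoy_growth)
print(round(sept_2023 / 1e9, 2))  # ≈ 1.46 (billion visits)
```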


Crazy “AI Army” — from aisecret.us

Also from aisecret.us, see World’s First Nuclear Power Deal For AI Data Centers

Google has made a historic agreement to buy energy from a group of small nuclear reactors (SMRs) from Kairos Power in California. This is the first nuclear power deal specifically for AI data centers in the world.


New updates to help creators build community, drive business, & express creativity on YouTube — from support.google.com

Hey creators!
Made on YouTube 2024 is here and we’ve announced a lot of updates that aim to give everyone the opportunity to build engaging communities, drive sustainable businesses, and express creativity on our platform.

Below is a roundup with key info – feel free to upvote the announcements that you’re most excited about and subscribe to this post to get updates on these features! We’re looking forward to another year of innovating with our global community. It’s a future full of opportunities, and it’s all Made on YouTube!


New autonomous agents scale your team like never before — from blogs.microsoft.com

Today, we’re announcing new agentic capabilities that will accelerate these gains and bring AI-first business process to every organization.

  • First, the ability to create autonomous agents with Copilot Studio will be in public preview next month.
  • Second, we’re introducing ten new autonomous agents in Dynamics 365 to build capacity for every sales, service, finance and supply chain team.

10 Daily AI Use Cases for Business Leaders — from flexos.work by Daan van Rossum
While AI is becoming more powerful by the day, business leaders still wonder why and where to apply it today. I take you through 10 critical use cases where AI should take over your work or partner with you.


Multi-Modal AI: Video Creation Simplified — from heatherbcooper.substack.com by Heather Cooper

Emerging Multi-Modal AI Video Creation Platforms
The rise of multi-modal AI platforms has revolutionized content creation, allowing users to research, write, and generate images in one app. Now, a new wave of platforms is extending these capabilities to video creation and editing.

Multi-modal video platforms combine various AI tools for tasks like writing, transcription, text-to-voice conversion, image-to-video generation, and lip-syncing. These platforms leverage open-source models like FLUX and LivePortrait, along with APIs from services such as ElevenLabs, Luma AI, and Gen-3.
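The chained workflow these platforms offer can be sketched as a simple pipeline. Every function below is a hypothetical stand-in for a call to one of the underlying models or services (script writing, text-to-voice, image-to-video), not a real SDK:

```python
from dataclasses import dataclass, field

@dataclass
class VideoProject:
    """Accumulates artifacts as the multi-modal pipeline runs."""
    topic: str
    script: str = ""
    audio: bytes = b""
    frames: list = field(default_factory=list)

# Each stage stands in for a specialized model or API call; the bodies
# are placeholders so the sketch runs without any external service.
def write_script(topic: str) -> str:
    return f"Narration for a short video about {topic}."

def synthesize_voice(script: str) -> bytes:
    return script.encode("utf-8")  # placeholder for TTS audio bytes

def generate_frames(script: str) -> list:
    return [f"frame for: {line}" for line in script.split(".") if line.strip()]

def produce(topic: str) -> VideoProject:
    project = VideoProject(topic)
    project.script = write_script(project.topic)      # writing stage
    project.audio = synthesize_voice(project.script)  # text-to-voice stage
    project.frames = generate_frames(project.script)  # image/video stage
    return project

result = produce("AI in education")
print(len(result.frames))
```

The point of the pattern is that each stage consumes the previous stage's output, so swapping one model for another (or adding a lip-sync stage) only touches one function.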


AI Medical Imagery Model Offers Fast, Cost-Efficient Expert Analysis — from developer.nvidia.com

 

From DSC:
Whenever we’ve had a flat tire over the years, a tricky part of the repair process is jacking up the car so that no harm is done to the car (or to me!). There are some grooves underneath the Toyota Camry where one is supposed to put the jack. But as the car is very low to the ground, these grooves are very hard to find (even in good weather and light). 

 

What’s needed is a robotic jack with vision.

If the jack had “vision” and wheels, the device could find the exact location of the grooves, move there, and then ask the owner whether they are ready for the car to be lifted. The owner could execute that order when ready, and the robotic jack could safely hoist the car up.

This type of robotic device is already out there in other areas. But this idea for assistance with replacing a flat tire represents an AI and robotic-based, consumer-oriented application that we’ll likely be seeing much more of in the future. Carmakers and suppliers, please add this one to your list!

Daniel

 


Articulate AI & the “Buttonification” of Instructional Design — from drphilippahardman.substack.com by Dr. Philippa Hardman
A new trend in AI-UX, and its implications for Instructional Design

1. Using AI to Scale Exceptional Instructional Design Practice
Imagine a buttonification system that doesn’t just automate tasks, but scales best practices in instructional design:

  • Evidence-Based Design Button…
  • Learner-Centered Objectives Generator…
  • Engagement Optimiser…

2. Surfacing AI’s Instructional Design Thinking
Instead of hiding AI’s decision-making process, what if we built an AI system which invites instructional designers to probe, question, and learn from an expert trained AI?

  • Explain This Design…
  • Show Me Alternatives…
  • Challenge My Assumptions…
  • Learning Science Insights…

By reimagining the role of AI in this way, we would…


Recapping OpenAI’s Education Forum — from marcwatkins.substack.com by Marc Watkins

OpenAI’s Education Forum was eye-opening for a number of reasons, but the one that stood out the most was Leah Belsky acknowledging what many of us in education had known for nearly two years—the majority of the active weekly users of ChatGPT are students. OpenAI has internal analytics that track upticks in usage during the fall and then drop-offs in the spring. Later that evening, OpenAI’s new CFO, Sarah Friar, further drove the point home with an anecdote about usage in the Philippines jumping nearly 90% at the start of the school year.

I had hoped to gain greater insight into OpenAI’s business model and how it related to education, but the Forum left me with more questions than answers. What app has the majority of users active 8 to 9 months out of the year and dormant for the holidays and summer breaks? What business model gives away free access and only converts 1 out of every 20-25 users to paid users? These were the initial thoughts that I hoped the Forum would address. But those questions, along with some deeper and arguably more critical ones, were skimmed over to drive home the main message of the Forum—Universities have to rapidly adopt AI and become AI-enabled institutions.


Off-Loading in the Age of Generative AI — from insidehighered.com by James DeVaney

As we embrace these technologies, we must also consider the experiences we need to discover and maintain our connections—and our humanity. In a world increasingly shaped by AI, I find myself asking: What are the experiences that define us, and how do they influence the relationships we build, both professionally and personally?

This concept of “off-loading” has become central to my thinking. In simple terms, off-loading is the act of delegating tasks to AI that we would otherwise do ourselves. As AI systems advance, we’re increasingly confronted with a question: Which tasks should we off-load to AI?

 

The Tutoring Revolution — from educationnext.org by Holly Korbey
More families are seeking one-on-one help for their kids. What does that tell us about 21st-century education?

Recent research suggests that the number of students seeking help with academics is growing, and that over the last couple of decades, more families have been turning to tutoring for that help.

What the Future Holds
Digital tech has made private tutoring more accessible, more efficient, and more affordable. Students whose families can’t afford to pay $75 an hour at an in-person center can now log on from home to access a variety of online tutors, including Outschool, Wyzant, and Anchorbridge, and often find someone who can cater to their specific skills and needs—someone who can offer help in French to a student with ADHD, for example. Online tutoring is less expensive than in-person programs. Khan Academy’s Khanmigo chatbot can be a student’s virtual AI tutor, no Zoom meeting required, for $4 a month, and nonprofits like Learn to Be work with homeless shelters and community centers to give virtual reading and math tutoring free to kids who can’t afford it and often might need it the most.

 

The Future of Umpiring in Baseball: Balancing Tradition and Technology — from judgeschlegel.com by Judge Scott Schlegel
This article is not about baseball.

As we look to the future of umpiring in baseball, a balanced approach may offer the best solution. Rather than an all-or-nothing choice between human umpires and full automation, a hybrid system could potentially offer the benefits of both worlds. For instance, automated tracking systems could be used to assist human umpires, providing them with real-time data to inform their calls. This would maintain the human element and authority on the field while significantly enhancing accuracy and consistency.

Such a system would allow umpires to focus more on game management, player interactions, and the myriad other responsibilities that require human judgment and experience. It would preserve the traditional aspects of the umpire’s role that fans and players value, while leveraging technology to address concerns about accuracy and fairness.


Navigating the Intersection of Tradition and AI: The Future of Judicial Decision-Making — from judgeschlegel.com by Judge Scott Schlegel

Introduction
Continuing with our baseball analogy, we now turn our focus to the courtroom.

The intersection of technology and the justice system is a complex and often contentious space, much like the debate over automated umpires in baseball. As Major League Baseball considers whether automated systems should replace the human element in calling balls and strikes, the legal world faces similar questions: How far should we go in allowing technology to aid our decision-making processes, and what is the right balance between innovation and the traditions that define the courtroom?


AI and the rise of the Niche Lawyer — from jordanfurlong.substack.com by Jordan Furlong
A new legal market will create a new type of lawyer: Specialized, flexible, customized, fractional, home-based and online, exclusive, balanced and focused. This could be your future legal career.

Think of a new picture. A lawyer dressed in Professional Casual, or Business Comfortable, an outfit that looks sharp but feels relaxed. A lawyer inside their own apartment, in an extra bedroom, or in a shared workspace on a nearby bus route, taking an Uber to visit some clients and using Zoom to meet with others. A lawyer with a laptop and a tablet and a smartphone and no other capital expenditures. A lawyer whose overhead is only what’s literally over their head.

This lawyer starts work when they feel like it (maybe 7 am, maybe 10; maybe Monday, maybe not) and they stop working when they feel like it (maybe 4 pm, maybe 9). They have as many clients as they need, for whom they provide very specific, very personalized services. They provide some services that aren’t even “legal” to people who aren’t “clients” as we understand both terms. They have essential knowledge and skills that all lawyers share but unique knowledge and skills that hardly any others possess. They make as much money as they need in order to meet the rent and pay down their debts and afford a life with the people they love. They’re in complete charge of their career and their destiny, something they find terrifying and stressful and wonderful and fulfilling.


While We Were Distracted with the New ChatGPT Model, Google Quietly Dropped an AI Bombshell — from judgeschlegel.com by Judge Scott Schlegel

While the latest ChatGPT model is dominating tech headlines, I was unexpectedly blown away by Google’s recent release of a new NotebookLM feature: Audio Overview. This tool, which transforms written content into simulated conversations, caught me off guard with its capabilities. I uploaded some of my blog posts on AI and the justice system, and what it produced left me speechless. The AI-generated, podcast-like discussions felt remarkably authentic, complete with nuanced interpretations and even slight misunderstandings of my ideas. This mirrors real-life discussions perfectly – after all, how often do we hear our own thoughts expressed by others and think, “That’s not quite what I meant”?



 

Duolingo Introduces AI-Powered Innovations at Duocon 2024 — from investors.duolingo.com; via Claire Zau

Duolingo’s new Video Call feature represents a leap forward in language practice for learners. This AI-powered tool allows Duolingo Max subscribers to engage in spontaneous, realistic conversations with Lily, one of Duolingo’s most popular characters. The technology behind Video Call is designed to simulate natural dialogue and provides a personalized, interactive practice environment. Even beginner learners can converse in a low-pressure environment because Video Call is designed to adapt to their skill level. By offering learners the opportunity to converse in real-time, Video Call builds the confidence needed to communicate effectively in real-world situations. Video Call is available for Duolingo Max subscribers learning English, Spanish, and French.


And here’s another AI-based learning item:

AI reading coach startup Ello now lets kids create their own stories — from techcrunch.com by Lauren Forristal; via Claire Zau

Ello, the AI reading companion that aims to support kids struggling to read, launched a new product on Monday that allows kids to participate in the story-creation process.

Called “Storytime,” the new AI-powered feature helps kids generate personalized stories by picking from a selection of settings, characters, and plots. For instance, a story about a hamster named Greg who performed in a talent show in outer space.

 

Workera’s CEO was mentored by Andrew Ng. Now he wants an AI agent to mentor you. — from techcrunch.com by Maxwell Zeff; via Claire Zau

On Tuesday, Workera announced Sage, an AI agent you can talk with that’s designed to assess an employee’s skill level, goals, and needs. After some short tests, Workera claims, Sage will accurately gauge how proficient someone is at a certain skill. Sage can then recommend appropriate online courses through Coursera, Workday, or other learning platform partners. Through these chats, Sage is designed to meet employees where they are, testing their skills in writing, machine learning, or math, and giving them a path to improve.

From DSC:
This is very much akin to what I’ve been trying to get at with my Learning from the Living [AI-Based Class] Room vision. And as learning agents come onto the scene, this type of vision should take off!

 

Walt Disney’s Wisdom: Lessons for Learning & Development Leaders — from learningguild.com by David Kelly

Here are a few of my favorite [quotes], along with the valuable lessons they offer us in Learning and Development.

  • “Everyone has deadlines.”
  • “I believe in being an innovator.”
  • “Times and conditions change so rapidly that we must keep our aim constantly focused on the future.”
  • “I can never stand still. I must explore and experiment.”
  • …and several other quotes.
 
© 2024 | Daniel Christian