Microsoft teams with Khan Academy to make its AI tutor free for K-12 educators and will develop a Phi-3 math model — from venturebeat.com by Ken Yeung

Microsoft is partnering with Khan Academy in a multifaceted deal to demonstrate how AI can transform the way we learn. The cornerstone of today’s announcement centers on Khan Academy’s Khanmigo AI agent. Microsoft says it will migrate the bot to its Azure OpenAI Service, enabling the nonprofit educational organization to provide all U.S. K-12 educators free access to Khanmigo.

In addition, Microsoft plans to use its Phi-3 model to help Khan Academy improve math tutoring and collaborate to generate more high-quality learning content while making more courses available within Microsoft Copilot and Microsoft Teams for Education.


One-Third of Teachers Have Already Tried AI, Survey Finds — from the74million.org by Kevin Mahnken
A RAND poll released last month finds English and social studies teachers embracing tools like ChatGPT.

One in three American teachers has used artificial intelligence tools in their teaching at least once, with English and social studies teachers leading the way, according to a RAND Corporation survey released last month. While the new technology isn’t yet transforming how kids learn, both teachers and district leaders expect that it will become an increasingly common feature of school life.


Professors Try ‘Restrained AI’ Approach to Help Teach Writing — from edsurge.com by Jeffrey R. Young
Can ChatGPT make human writing more efficient, or is writing an inherently time-consuming process best handled without AI tools?

This article is part of the guide: For Education, ChatGPT Holds Promise — and Creates Problems.

When ChatGPT emerged a year and a half ago, many professors immediately worried that their students would use it as a substitute for doing their own written assignments — that they’d click a button on a chatbot instead of doing the thinking involved in responding to an essay prompt themselves.

But two English professors at Carnegie Mellon University had a different first reaction: They saw in this new technology a way to show students how to improve their writing skills.

“They start really polishing way too early,” Kaufer says. “And so what we’re trying to do is with AI, now you have a tool to rapidly prototype your language when you are prototyping the quality of your thinking.”

He says the concept is based on writing research from the 1980s that shows that experienced writers spend about 80 percent of their early writing time thinking about whole-text plans and organization and not about sentences.


On Building AI Models for Education — from aieducation.substack.com by Claire Zau
Google’s LearnLM, Khan Academy/MSFT’s Phi-3 Models, and OpenAI’s ChatGPT Edu

This piece primarily breaks down how Google’s LearnLM was built, and takes a quick look at Microsoft/Khan Academy’s Phi-3 and OpenAI’s ChatGPT Edu as alternative approaches to building an “education model” (not necessarily a new model in the latter case, but we’ll explain). Thanks to the public release of their 86-page research paper, we have the most comprehensive view into LearnLM. Our understanding of Microsoft/Khan Academy’s small language models and ChatGPT Edu is limited to the information provided through announcements, leaving us with less “under the hood” visibility into their development.


AI tutors are quietly changing how kids in the US study, and the leading apps are from China — from techcrunch.com by Rita Liao

Answer AI is among a handful of popular apps that are leveraging the advent of ChatGPT and other large language models to help students with everything from writing history papers to solving physics problems. Of the top 20 education apps in the U.S. App Store, five are AI agents that help students with their school assignments, including Answer AI, according to data from Data.ai on May 21.


Is your school behind on AI? If so, there are practical steps you can take for the next 12 months — from stefanbauschard.substack.com by Stefan Bauschard

If your school (district) or university has not yet made significant efforts to think about how you will prepare your students for a World of AI, I suggest the following steps:

July 24 — Administrator PD & AI Guidance
In July, administrators should receive professional development on AI, if they haven’t already. This should include…

August 24 — Professional Development for Teachers and Staff…
Fall 24 — Parents; Co-curricular; Classroom experiments…
December 24 — Revision to Policy…


New ChatGPT Version Aiming at Higher Ed — from insidehighered.com by Lauren Coffey
ChatGPT Edu, emerging after initial partnerships with several universities, is prompting both cautious optimism and worries.

OpenAI unveiled a new version of ChatGPT focused on universities on Thursday, building on work with a handful of higher education institutions that partnered with the tech giant.

The ChatGPT Edu product, expected to start rolling out this summer, is a platform for institutions intended to give students free access. OpenAI said the artificial intelligence (AI) toolset could be used for an array of education applications, including tutoring, writing grant applications and reviewing résumés.

 

A Guide to the GPT-4o ‘Omni’ Model — from aieducation.substack.com by Claire Zau
The closest thing we have to “Her” and what it means for education / workforce

Today, OpenAI introduced its new flagship model, GPT-4o, that delivers more powerful capabilities and real-time voice interactions to its users. The letter “o” in GPT-4o stands for “Omni”, referring to its enhanced multimodal capabilities. While ChatGPT has long offered a voice mode, GPT-4o is a step change in allowing users to interact with an AI assistant that can reason across voice, text, and vision in real-time.

Facilitating interaction between humans and machines (with reduced latency) represents a “small step for machine, giant leap for machine-kind” moment.

Everyone gets access to GPT-4: “the special thing about GPT-4o is it brings GPT-4 level intelligence to everyone, including our free users”, said CTO Mira Murati. Free users will also get access to custom GPTs in the GPT Store, Vision and Code Interpreter. ChatGPT Plus and Team users will be able to start using GPT-4o’s text and image capabilities now.

ChatGPT launched a desktop macOS app: it’s designed to integrate seamlessly into anything a user is doing on their keyboard. A Windows version is also in the works (notable that a Mac version is being released first, given the $10B Microsoft relationship).


Also relevant, see:

OpenAI Drops GPT-4 Omni, New ChatGPT Free Plan, New ChatGPT Desktop App — from theneuron.ai [podcast]

In a surprise launch, OpenAI dropped GPT-4 Omni, their new leading model. They also made a bunch of paid features in ChatGPT free and announced a new desktop app. Pete breaks down what you should know and what this says about AI.


What really matters — from theneurondaily.com

  • Free users get 16 GPT-4o messages per 3 hours.
  • Plus users get 80 GPT-4o messages per 3 hours.
  • Team users get 160 GPT-4o messages per 3 hours.
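As a rough illustration of how a “N messages per 3 hours” cap behaves: OpenAI hasn’t published its enforcement mechanism, so this is only an assumption, but limits like these are commonly implemented as a sliding window over recent sends rather than a quota that resets on the clock. A minimal sketch:

```python
from collections import deque

class MessageQuota:
    """Sliding-window message cap, e.g. 16 messages per 3 hours.

    Illustrative only: this is one common way a "N messages per
    T hours" limit can work, not OpenAI's actual implementation.
    """

    def __init__(self, max_messages: int, window_seconds: float):
        self.max_messages = max_messages
        self.window_seconds = window_seconds
        self._timestamps = deque()  # send times of recent messages

    def try_send(self, now: float) -> bool:
        """Return True if a message may be sent at time `now`."""
        # Drop sends that have aged out of the window.
        while self._timestamps and now - self._timestamps[0] >= self.window_seconds:
            self._timestamps.popleft()
        if len(self._timestamps) < self.max_messages:
            self._timestamps.append(now)
            return True
        return False
```

Under this model, capacity frees up gradually as old messages age out of the window, which matches users’ reports of being told to wait a specific number of minutes rather than until a fixed reset time.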
 

io.google/2024



How generative AI expands curiosity and understanding with LearnLM — from blog.google
LearnLM is our new family of models fine-tuned for learning, and grounded in educational research to make teaching and learning experiences more active, personal and engaging.

Generative AI is fundamentally changing how we’re approaching learning and education, enabling powerful new ways to support educators and learners. It’s taking curiosity and understanding to the next level — and we’re just at the beginning of how it can help us reimagine learning.

Today we’re introducing LearnLM: our new family of models fine-tuned for learning, based on Gemini.

On YouTube, a conversational AI tool makes it possible to figuratively “raise your hand” while watching academic videos to ask clarifying questions, get helpful explanations or take a quiz on what you’ve been learning. This even works with longer educational videos like lectures or seminars thanks to the Gemini model’s long-context capabilities. These features are already rolling out to select Android users in the U.S.

Learn About is a new Labs experience that explores how information can turn into understanding by bringing together high-quality content, learning science and chat experiences. Ask a question and it helps guide you through any topic at your own pace — through pictures, videos, webpages and activities — and you can upload files or notes and ask clarifying questions along the way.


Google I/O 2024: An I/O for a new generation — from blog.google

The Gemini era
A year ago on the I/O stage we first shared our plans for Gemini: a frontier model built to be natively multimodal from the beginning, that could reason across text, images, video, code, and more. It marks a big step in turning any input into any output — an “I/O” for a new generation.



Daily Digest: Google I/O 2024 – AI search is here. — from bensbites.beehiiv.com
PLUS: It’s got Agents, Video and more. And, Ilya leaves OpenAI

  • Google is integrating AI into all of its ecosystem: Search, Workspace, Android, etc. In true Google fashion, many features are “coming later this year”. If they ship and perform like the demos, Google will get a serious upper hand over OpenAI/Microsoft.
  • All of the AI features across Google products will be powered by Gemini 1.5 Pro. It’s Google’s best model and one of the top models. A new Gemini 1.5 Flash model is also launched, which is faster and much cheaper.
  • Google has ambitious projects in the pipeline. Those include a real-time voice assistant called Astra, a long-form video generator called Veo, plans for end-to-end agents, virtual AI teammates and more.

 



New ways to engage with Gemini for Workspace — from workspace.google.com

Today at Google I/O we’re announcing new, powerful ways to get more done in your personal and professional life with Gemini for Google Workspace. Gemini in the side panel of your favorite Workspace apps is rolling out more broadly and will use the 1.5 Pro model for answering a wider array of questions and providing more insightful responses. We’re also bringing more Gemini capabilities to your Gmail app on mobile, helping you accomplish more on the go. Lastly, we’re showcasing how Gemini will become the connective tissue across multiple applications with AI-powered workflows. And all of this comes fresh on the heels of the innovations and enhancements we announced last month at Google Cloud Next.


Google’s Gemini updates: How Project Astra is powering some of I/O’s big reveals — from techcrunch.com by Kyle Wiggers

Google is improving its AI-powered chatbot Gemini so that it can better understand the world around it — and the people conversing with it.

At the Google I/O 2024 developer conference on Tuesday, the company previewed a new experience in Gemini called Gemini Live, which lets users have “in-depth” voice chats with Gemini on their smartphones. Users can interrupt Gemini while the chatbot’s speaking to ask clarifying questions, and it’ll adapt to their speech patterns in real time. And Gemini can see and respond to users’ surroundings, either via photos or video captured by their smartphones’ cameras.


Generative AI in Search: Let Google do the searching for you — from blog.google
With expanded AI Overviews, more planning and research capabilities, and AI-organized search results, our custom Gemini model can take the legwork out of searching.


 


Information Age vs Generation Age Technologies for Learning — from opencontent.org by David Wiley

Remember (emphasis DSC)

  • the internet eliminated time and place as barriers to education, and
  • generative AI eliminates access to expertise as a barrier to education.

Just as instructional designs had to be updated to account for all the changes in affordances of online learning, they will need to be dramatically updated again to account for the new affordances of generative AI.


The Curious Educator’s Guide to AI | Strategies and Exercises for Meaningful Use in Higher Ed  — from ecampusontario.pressbooks.pub by Kyle Mackie and Erin Aspenlieder; via Stephen Downes

This guide is designed to help educators and researchers better understand the evolving role of Artificial Intelligence (AI) in higher education. This openly-licensed resource contains strategies and exercises to help foster an understanding of AI’s potential benefits and challenges. We start with a foundational approach, providing you with prompts on aligning AI with your curiosities and goals.

The middle section of this guide encourages you to explore AI tools and offers some insights into potential applications in teaching and research. Along with exposure to the tools, we’ll discuss when and how to effectively build AI into your practice.

The final section of this guide includes strategies for evaluating and reflecting on your use of AI. Throughout, we aim to promote use that is effective, responsible, and aligned with your educational objectives. We hope this resource will be a helpful guide in making informed and strategic decisions about using AI-powered tools to enhance teaching and learning and research.


Annual Provosts’ Survey Shows Need for AI Policies, Worries Over Campus Speech — from insidehighered.com by Ryan Quinn
Many institutions are not yet prepared to help their faculty members and students navigate artificial intelligence. That’s just one of multiple findings from Inside Higher Ed’s annual survey of chief academic officers.

Only about one in seven provosts said their colleges or universities had reviewed the curriculum to ensure it will prepare students for AI in their careers. Thuswaldner said that number needs to rise. “AI is here to stay, and we cannot put our heads in the sand,” he said. “Our world will be completely dominated by AI and, at this point, we ain’t seen nothing yet.”


Is GenAI in education more of a Blackberry or iPhone? — from futureofbeinghuman.com by Andrew Maynard
There’s been a rush to incorporate generative AI into every aspect of education, from K-12 to university courses. But is the technology mature enough to support the tools that rely on it?

In other words, it’s going to mean investing in concepts, not products.

This, to me, is at the heart of an “iPhone mindset” as opposed to a “Blackberry mindset” when it comes to AI in education — an approach that avoids hard wiring in constantly changing technologies, and that builds experimentation and innovation into the very DNA of learning.

For all my concerns here though, maybe there is something to being inspired by the Blackberry/iPhone analogy — not as a playbook for developing and using AI in education, but as a mindset that embraces innovation while avoiding becoming locked in to apps that are detrimentally unreliable and that ultimately lead to dead ends.


Do teachers spot AI? Evaluating the detectability of AI-generated texts among student essays — from sciencedirect.com by Johanna Fleckenstein, Jennifer Meyer, Thorben Jansen, Stefan D. Keller, Olaf Köller, and Jens Möller

Highlights

  • Randomized-controlled experiments investigating novice and experienced teachers’ ability to identify AI-generated texts.
  • Generative AI can simulate student essay writing in a way that is undetectable for teachers.
  • Teachers are overconfident in their source identification.
  • AI-generated essays tend to be assessed more positively than student-written texts.

Can Using a Grammar Checker Set Off AI-Detection Software? — from edsurge.com by Jeffrey R. Young
A college student says she was falsely accused of cheating, and her story has gone viral. Where is the line between acceptable help and cheating with AI?


Use artificial intelligence to get your students thinking critically — from timeshighereducation.com by Urbi Ghosh
When crafting online courses, teaching critical thinking skills is crucial. Urbi Ghosh shows how generative AI can shape the way educators approach this.


ChatGPT shaming is a thing – and it shouldn’t be — from futureofbeinghuman.com by Andrew Maynard
There’s a growing tension between early and creative adopters of text based generative AI and those who equate its use with cheating. And when this leads to shaming, it’s a problem.

Excerpt (emphasis DSC):

This will sound familiar to anyone who’s incorporating generative AI into their professional workflows. But there are still many people who haven’t used apps like ChatGPT, are largely unaware of what they do, and are suspicious of them. And yet they’ve nevertheless developed strong opinions around how they should and should not be used.

From DSC:
Yes…that sounds like how many faculty members viewed online learning, even though they had never taught online before.

 

Are Colleges Ready For an Online-Education World Without OPMs? — from edsurge.com by Robert Ubell (Columnist)
Online Program Management companies have helped hundreds of colleges build online degree programs, but the sector is showing signs of strain.

For more than 15 years, a group of companies known as Online Program Management providers, or OPMs, have been helping colleges build online degree programs. And most of them have relied on an unusual arrangement — where the companies put up the financial backing to help colleges launch programs in exchange for a large portion of tuition revenue.

As a longtime administrator of online programs at colleges, I have mixed feelings about the idea of shutting down the model. And the question boils down to this: Are colleges ready for a world without OPMs?


Guy Raz on Podcasts and Passion: Audio’s Ability to Spark Learning — from michaelbhorn.substack.com by Michael B. Horn

This conversation went in a bunch of unexpected directions. And that’s what’s so fun about it. After all, podcasting is all about bringing audio back and turning learning into leisure. And the question Guy and his partner Mindy Thomas asked a while back was: Why not bring kids in on the fun? Guy shared how his studio, Tinkercast, is leveraging the medium to inspire and educate the next generation of problem solvers.

We discussed the power of audio to capture curiosities and foster imagination, how Tinkercast is doing that in and out of the classroom, and how it can help re-engage students in building needed skills at a critical time. Enjoy!



April 2024 Job Cuts Announced by US-Based Companies Fall; More Cuts Attributed to TX DEI Law, AI in April — from challengergray.com

Excerpt (emphasis DSC):

Education
Companies in the Education industry, which includes schools and universities, cut the second-most jobs last month with 8,092 for a total of 17,892. That is a 635% increase from the 2,435 cuts announced during the first four months of 2023.

“April is typically the time school districts are hiring and setting budgets for the next fiscal year. Certainly, there are budgetary constraints, as labor costs rise, but school systems also have a retention and recruitment issue,” said Challenger.


Lifetime college returns differ significantly by major, research finds — from highereddive.com by Lilah Burke
Engineering and computer science showed the best return out of 10 fields of study that were examined.

Dive Brief:

  • The lifetime rate of return for a college education differs significantly by major, but it also varies by a student’s gender and race or ethnicity, according to new peer-reviewed research published in the American Educational Research Journal.
  • A bachelor’s degree in general provides a roughly 9% rate of return for men, and nearly 10% for women, researchers concluded. The majors with the best returns were computer science and engineering.
  • Black, Hispanic and Asian college graduates had slightly higher rates of return than their White counterparts, the study found.
 

Are we ready to navigate the complex ethics of advanced AI assistants? — from futureofbeinghuman.com by Andrew Maynard
An important new paper lays out the importance and complexities of ensuring increasingly advanced AI-based assistants are developed and used responsibly

Last week a behemoth of a paper was released by AI researchers in academia and industry on the ethics of advanced AI assistants.

It’s one of the most comprehensive and thoughtful papers on developing transformative AI capabilities in socially responsible ways that I’ve read in a while. And it’s essential reading for anyone developing and deploying AI-based systems that act as assistants or agents — including many of the AI apps and platforms that are currently being explored in business, government, and education.

The paper — The Ethics of Advanced AI Assistants — is written by 57 co-authors representing researchers at Google DeepMind, Google Research, Jigsaw, and a number of prominent universities, including the University of Edinburgh, the University of Oxford, and Delft University of Technology. Coming in at 274 pages, this is a massive piece of work. And as the authors persuasively argue, it’s a critically important one at this point in AI development.

From that large paper:

Key questions for the ethical and societal analysis of advanced AI assistants include:

  1. What is an advanced AI assistant? How does an AI assistant differ from other kinds of AI technology?
  2. What capabilities would an advanced AI assistant have? How capable could these assistants be?
  3. What is a good AI assistant? Are there certain values that we want advanced AI assistants to evidence across all contexts?
  4. Are there limits on what AI assistants should be allowed to do? If so, how are these limits determined?
  5. What should an AI assistant be aligned with? With user instructions, preferences, interests, values, well-being or something else?
  6. What issues need to be addressed for AI assistants to be safe? What does safety mean for this class of technologies?
  7. What new forms of persuasion might advanced AI assistants be capable of? How can we ensure that users remain appropriately in control of the technology?
  8. How can people – especially vulnerable users – be protected from AI manipulation and unwanted disclosure of personal information?
  9. Is anthropomorphism for AI assistants morally problematic? If so, might it still be permissible under certain conditions?
 

AI RESOURCES AND TEACHING (Kent State University) — from aiadvisoryboards.wordpress.com

Kent State University offers valuable resources for educators interested in incorporating artificial intelligence (AI) into their teaching practices. The university recognizes that the rapid emergence of AI tools presents both challenges and opportunities in higher education.

The AI Resources and Teaching page provides educators with information and guidance on various AI tools and their responsible use within and beyond the classroom. The page covers different areas of AI application, including language generation, visuals, videos, music, information extraction, quantitative analysis, and AI syllabus language examples.


A Cautionary AI Tale: Why IBM’s Dazzling Watson Supercomputer Made a Lousy Tutor — from the74million.org by Greg Toppo
With a new race underway to create the next teaching chatbot, IBM’s abandoned 5-year, $100M ed push offers lessons about AI’s promise and its limits.

For all its jaw-dropping power, Watson the computer overlord was a weak teacher. It couldn’t engage or motivate kids, inspire them to reach new heights or even keep them focused on the material — all qualities of the best mentors.

It’s a finding with some resonance to our current moment of AI-inspired doomscrolling about the future of humanity in a world of ascendant machines. “There are some things AI is actually very good for,” Nitta said, “but it’s not great as a replacement for humans.”

His five-year journey to essentially a dead-end could also prove instructive as ChatGPT and other programs like it fuel a renewed, multimillion-dollar experiment to, in essence, prove him wrong.

To be sure, AI can do sophisticated things such as generating quizzes from a class reading and editing student writing. But the idea that a machine or a chatbot can actually teach as a human can, he said, represents “a profound misunderstanding of what AI is actually capable of.” 

Nitta, who still holds deep respect for the Watson lab, admits, “We missed something important. At the heart of education, at the heart of any learning, is engagement. And that’s kind of the Holy Grail.”

From DSC:
This is why the vision that I’ve been tracking and working on has always said that HUMAN BEINGS will be necessary — they are key to realizing this vision. Along these lines, here’s a relevant quote:

Another crucial component of a new learning theory for the age of AI would be the cultivation of “blended intelligence.” This concept recognizes that the future of learning and work will involve the seamless integration of human and machine capabilities, and that learners must develop the skills and strategies needed to effectively collaborate with AI systems. Rather than viewing AI as a threat to human intelligence, a blended intelligence approach seeks to harness the complementary strengths of humans and machines, creating a symbiotic relationship that enhances the potential of both.

Per Alexander “Sasha” Sidorkin, Head of the National Institute on AI in Society at California State University Sacramento.

 

Nvidia’s AI boom is only getting started. Just ask CEO Jensen Huang — from fastcompany.com by Harry McCracken
Nvidia’s chips sparked the AI revolution. Now it’s in the business of putting the technology to work in an array of industries.

Nvidia is No. 1 on Fast Company’s list of the World’s 50 Most Innovative Companies of 2024. Explore the full list of companies that are reshaping industries and culture.

Nvidia isn’t just in the business of providing ever-more-powerful computing hardware and letting everybody else figure out what to do with it. Across an array of industries, the company’s technologies, platforms, and partnerships are doing much of the heavy lifting of putting AI to work. In a single week in January 2024, for instance, Nvidia reported that it had begun beta testing its drug discovery platform, demoed software that lets video game characters speak unscripted dialogue, announced deals with four Chinese EV manufacturers that will incorporate Nvidia technology in their vehicles, and unveiled a retail-industry partnership aimed at foiling organized shoplifting.


Johnson & Johnson MedTech Works With NVIDIA to Broaden AI’s Reach in Surgery — from blogs.nvidia.com by David Niewolny

AI — already used to connect, analyze and offer predictions based on operating room data — will be critical to the future of surgery, boosting operating room efficiency and clinical decision-making.

That’s why NVIDIA is working with Johnson & Johnson MedTech to test new AI capabilities for the company’s connected digital ecosystem for surgery. It aims to enable open innovation and accelerate the delivery of real-time insights at scale to support medical professionals before, during and after procedures.

J&J MedTech is in 80% of the world’s operating rooms and trains more than 140,000 healthcare professionals each year through its education programs.


GE and NVIDIA Join Forces to Accelerate Artificial Intelligence Adoption in Healthcare — from nvidianews.nvidia.com

  • New generation of intelligent medical devices will use world’s most advanced AI platform with the goal of improving patient care
  • GE Healthcare is the first medical device company to use the NVIDIA GPU Cloud
  • New Revolution Frontier CT, powered by NVIDIA, is two times faster for image processing, proving performance acceleration has begun

Nvidia Announces Major Deals With Healthcare Companies — from cheddar.com

At the GTC AI conference last week, Nvidia launched nearly two dozen new AI-powered, health care-focused tools and deals with companies Johnson & Johnson and GE Healthcare for surgery and medical imaging. The move into the health care space for the AI company is an effort that’s been under development for a decade.


Nvidia is now powering AI nurses — from gizmodo.com by Maxwell Zeff; via Claire Zau
The cheap AI agents offer medical advice to patients over video calls in real time

 

Also see:

Cognition Labs Blog

 

OpenAI’s app store for GPTs will launch next week — from techcrunch.com by Kyle Wiggers

OpenAI plans to launch a store for GPTs, custom apps based on its text-generating AI models (e.g. GPT-4), sometime in the coming week.

The GPT Store was announced last year during OpenAI’s first annual developer conference, DevDay, but delayed in December — almost certainly due to the leadership shakeup that occurred in November, just after the initial announcement.

 

Can new AI help to level up the scales of justice?


From DSC:
As you can see from the above items, Mr. David Goodrich, a fellow Instructional Designer and a great human being, raised a thoughtful comment and question regarding the source of my hope that AI — and other forms of legaltech — could provide significantly more access to justice here in America. Our civil justice system has some serious problems — involving such areas as housing, employment, healthcare, education, families, and more.

I’d like to respond to that question here.

First of all, I completely get what David is saying. I, too, have serious doubts that our horrible access to justice (#A2J) situation will get better. Why? Because:

  • Many people working within the legal field like it this way, as they are all but assured victory in most of the civil lawsuits out there.
  • The Bar Associations of most of the states do not support changes that would threaten their incomes/livelihoods. This is especially true in California and Florida.
  • The legal field, for the most part, is not composed of highly innovative people who make things happen for the benefit of others. For example, the American Bar Association is 20+ years behind in providing the level of online-based learning opportunities that it should be offering. It very tightly controls how legal education is delivered in the U.S.

Here are several areas that provide me with hope for our future:


There are innovative individuals out there fighting for change.
And though some of these individuals don’t reside in the United States, their work still impacts many here in America. For examples, see:

There are innovative new companies, firms, and other types of organizations out there fighting for change. For examples:

There are innovative new tools and technologies out there such as:

  • Artificial Intelligence (AI) and Machine Learning (ML) 
    • AI and machine learning remain pivotal in legaltech, especially for in-house lawyers who deal with vast quantities of contracts and complex legal matters. In 2024, these technologies will be integral for legal research, contract review, and the drafting of legal documents. Statistics from the Tech & the Law 2023 Report state that more than three in five corporate legal departments (61%) have adopted generative AI in some capacity, with 7% actively using generative AI in their day-to-day work. With constant improvements to LLMs (Large Language Models) by the big players, i.e. OpenAI, Google, and Microsoft (via OpenAI), 2024 will see more opportunities open and efficiencies gained for legal teams. (Source)
    • From drafting contracts to answering legal questions and summarising legal issues, AI is revolutionising the legal profession and although viewed with a sceptical eye by some law firms, is generally perceived to be capable of bringing huge benefits. (Source)
    • Legal bots like Harvey will assist lawyers with discovery.
  • Technology-assisted review (TAR) in e-discovery
  • Due to COVID-19, virtual courtrooms were set up, and just as with virtual/online-based learning in higher education, many judges, litigants, lawyers, and staff appreciated the time savings and productivity gains. Along these lines, see Richard Susskind’s work. He predicts a world of online courts, AI-based global legal businesses, disruptive legal technologies, liberalized markets, commoditization, alternative sourcing, simulated practice in the metaverse, and many new legal jobs. (Source)

There are innovative states out there fighting for change. For examples:

  • Utah in 2020 launched a pilot program that suspended ethics rules to allow for non-lawyer ownership of legal services providers and let non-lawyers apply for a waiver to offer certain legal services. (Source)
  • Arizona in 2021 changed its regulatory rules to allow for non-lawyer ownership. (Source)
  • Alaska, with its Alaska Legal Services Corporation
  • …and others

And the last one — but certainly not the least one — is where my faith comes into play. I believe that the Triune God exists — The Father, The Son, and The Holy Spirit — and that the LORD is very active in our lives and throughout the globe. And one of the things the LORD values highly is JUSTICE. For examples:

  • Many seek an audience with a ruler, but it is from the Lord that one gets justice. Proverbs 29:26 NIV
  • These are the things you are to do: Speak the truth to each other, and render true and sound judgment in your courts; Zechariah 8:16 NIV
  • …and many others as can be seen below

The LORD values JUSTICE greatly!


So I believe that the LORD will actively help us provide greater access to justice in America.


Well…there you have it David. Thanks for your question/comment! I appreciate it!

 

Expanding Bard’s understanding of YouTube videos — via AI Valley

  • What: We’re taking the first steps in Bard’s ability to understand YouTube videos. For example, if you’re looking for videos on how to make olive oil cake, you can now also ask how many eggs the recipe in the first video requires.
  • Why: We’ve heard you want deeper engagement with YouTube videos. So we’re expanding the YouTube Extension to understand some video content so you can have a richer conversation with Bard about it.

Reshaping the tree: rebuilding organizations for AI — from oneusefulthing.org by Ethan Mollick
Technological change brings organizational change.

I am not sure who said it first, but there are only two ways to react to exponential change: too early or too late. Today’s AIs are flawed and limited in many ways. While that restricts what AI can do, the capabilities of AI are increasing exponentially, both in terms of the models themselves and the tools these models can use. It might seem too early to consider changing an organization to accommodate AI, but I think that there is a strong possibility that it will quickly become too late.

From DSC:
Readers of this blog have seen the following graphic for several years now, and there is no question that we are in a time of exponential change. It would have been increasingly hard to argue otherwise during that time.

Nvidia’s revenue triples as AI chip boom continues — from cnbc.com by Jordan Novet; via GSV

KEY POINTS

  • Nvidia’s results surpassed analysts’ projections for revenue and income in the fiscal fourth quarter.
  • Demand for Nvidia’s graphics processing units has been exceeding supply, thanks to the rise of generative artificial intelligence.
  • Nvidia announced the GH200 GPU during the quarter.

Here’s how the company did, compared to the consensus among analysts surveyed by LSEG, formerly known as Refinitiv:

  • Earnings: $4.02 per share, adjusted, vs. $3.37 per share expected
  • Revenue: $18.12 billion, vs. $16.18 billion expected

Nvidia’s revenue grew 206% year over year during the quarter ending Oct. 29, according to a statement. Net income, at $9.24 billion, or $3.71 per share, was up from $680 million, or 27 cents per share, in the same quarter a year ago.
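As a quick sanity check of these figures (my own back-of-the-envelope arithmetic, not from the article), the 206% year-over-year growth figure is consistent with the headline claim that revenue roughly tripled:

```python
# Sanity-checking the headline figures from the Nvidia excerpt above.
# The input numbers come from the article; the derivations are my own.

revenue_q = 18.12e9   # quarterly revenue (quarter ending Oct. 29)
growth_yoy = 2.06     # "grew 206% year over year"

# Growing 206% means revenue is 1 + 2.06 = 3.06x the prior-year quarter,
# which matches the claim that revenue "triples."
prior_year_revenue = revenue_q / (1 + growth_yoy)
print(f"Implied prior-year quarterly revenue: ${prior_year_revenue / 1e9:.2f}B")

# Net income of $9.24B at $3.71 per share implies the share count:
net_income = 9.24e9
eps = 3.71
shares = net_income / eps
print(f"Implied share count: {shares / 1e9:.2f}B shares")
```

This works out to roughly $5.9 billion in prior-year quarterly revenue and about 2.5 billion shares, both in line with the reported growth and per-share figures.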



 

MIT Technology Review — Big problems that demand bigger energy. — from technologyreview.com by various

Technology is all about solving big thorny problems. Yet one of the hardest things about solving hard problems is knowing where to focus our efforts. There are so many urgent issues facing the world. Where should we even begin? So we asked dozens of people to identify which problem at the intersection of technology and society they think we should focus more of our energy on. We queried scientists, journalists, politicians, entrepreneurs, activists, and CEOs.

Some broad themes emerged: the climate crisis, global health, creating a just and equitable society, and AI all came up frequently. There were plenty of outliers, too, ranging from regulating social media to fighting corruption.

MIT Technology Review interviewed many people to weigh in on underserved issues at the intersection of technology and society.

 

Universities Can’t Accommodate All the Computer Science Majors — from insidehighered.com by Johanna Alonso
High interest in the field has led to overcrowded classes and other issues. Now some institutions are adding requirements to help force students out of the major.

Before this year, if you wanted to major in computer science at the University of Michigan, your only barrier was getting accepted to the university.

But a new model requires all students who want to study computer science—whether they are incoming or already enrolled—to apply for the major separately.

Michael Wellman, Michigan’s chair of computer science and engineering, said that the university has worked for years to try to accommodate everyone who wants to study the subject, hiring as many as six faculty members annually in recent years and even building a new computer science facility. The number of CS degrees awarded rose from 132 in 2012 to 600 in 2022.

 


When schools and families go to court over special education, everyone loses — from wfyi.org by Lee Gaines

While federal law mandates public schools provide an appropriate education to students with disabilities, it’s often up to parents to enforce it.

Schwarten did what few people have the resources to do: she hired a lawyer and requested a due process hearing. It’s like a court case. And it’s intended to resolve disputes between families and schools over special education services.

It’s also a traumatic and adversarial process for families and schools that can rack up hundreds of thousands of dollars in legal fees and destroy relationships between parents and district employees. And even when families win, children don’t always get the public education they deserve.


Future of Learning: Native American students have the least access to computer science — from The Hechinger Report by Javeria Salman

But computer science lessons like the ones at Dzantik’i Heeni Middle School are relatively rare. Despite calls from major employers and education leaders to expand K-12 computer science instruction in response to the workforce’s increasing reliance on digital technology, access to the subject remains low — particularly for Native American students.

Only 67 percent of Native American students attend a school that offers a computer science course, the lowest percentage of any demographic group, according to a new study from the nonprofit Code.org. A recent report from the Kapor Foundation and the American Indian Science and Engineering Society, or AISES, takes a deep look at why Native students’ access to computer and technology courses in K-12 is so low, and examines the consequences.


The Case for Andragogy in Educator Development — from Dialogic #341 by Tom Barrett

Understanding the Disconnect
We often find ourselves in professional development sessions that starkly contrast with the interactive and student-centred learning environments we create. We sit as passive recipients rather than active participants, receiving generic content that seldom addresses our unique experiences or teaching challenges.

This common scenario highlights a significant gap in professional development: the failure to apply the principles of adult learning, or andragogy, which acknowledges that educators, like their students, benefit from a learning process that is personalised, engaging, and relevant.

The irony is palpable — while we foster environments of inquiry and engagement in our classrooms, our learning experiences often lack these elements.

The disconnect prompts a vital question: If we are to cultivate a culture of lifelong learning among our students, shouldn’t we also embody this within our professional growth? It’s time for the professional development of educators to reflect the principles we hold dear in our teaching practices.

 
© 2024 | Daniel Christian