Higher Education Has Not Been Forgotten by Generative AI — from insidehighered.com by Ray Schroeder
The generative AI (GenAI) revolution has not ignored higher education; a whole host of tools is available now, and more revolutionary tools are on the way.

Some of the apps that have been developed for general use can be customized for specific topical areas in higher ed. For example, I created a version of GPT, “Ray’s EduAI Advisor,” that builds on the current GPT-4o model with specific updates and perspectives on AI in higher education. It is freely available to users. With a few tools and no programming knowledge, anyone can build their own GPT to supplement information for their classes or interest groups.

Excerpts from Ray’s EduAI Advisor bot:

AI’s global impact on higher education, particularly in at-scale classes and degree programs, is multifaceted, encompassing several key areas:
1. Personalized Learning…
2. Intelligent Tutoring Systems…
3. Automated Assessment…
4. Enhanced Accessibility…
5. Predictive Analytics…
6. Scalable Virtual Classrooms…
7. Administrative Efficiency…
8. Continuous Improvement…

Instructure and Khan Academy Announce Partnership to Enhance Teaching and Learning With Khanmigo, the AI Tool for Education — from instructure.com
Shiren Vijiasingam and Jody Sailor announce a new partnership sure to make a difference in education everywhere.

 

Rethinking Legal Ops Skills: Generalists Versus Specialists — from abovethelaw.com by Silvie Tucker and Brandi Pack
This ongoing conversation highlights the changing demands on legal ops practitioners.

A thought-provoking discussion is unfolding in the legal operations community regarding one intriguing question: Should legal operations professionals strive to be generalists or specialists?

The conversation is timely as the marketplace consolidates and companies grapple with the best way to fill valuable and limited headcount allotments. It also highlights the evolving landscape of legal operations and the changing demands on its practitioners.

The Evolution of Legal Ops
Over the past decade, the field of legal operations has undergone a significant transformation. Initially focused strictly on streamlining processes and reducing costs, the role has expanded to include a variety of responsibilities driven by technological advancements and heightened industry expectations. Key areas of expansion include:


He added: “I have for long been of the view – for decades – that AI will be a vital tool in overcoming the access to justice challenge. Existing and emerging technologies are now very promising.”

Richard Susskind


In A First for Law Practice Management Platforms, Clio Rolls Out An Integrated E-Filing Service in Texas — from lawnext.com by Bob Ambrogi

Last October, during its annual Clio Cloud Conference, the law practice management company Clio announced plans to roll out an e-filing service, called Clio File, during 2024, starting with Texas. The move would make it the first law practice management platform with built-in e-filing. Today, it delivered on that promise, launching Clio File for e-filing in Texas courts.

“Lawyers can now seamlessly submit court documents directly from our flagship practice management product, Clio Manage, streamlining their workflows and simplifying the filing process,” said Chris Stock, vice president of legal content and migrations at Clio. “This is an exciting step in expanding the capabilities of our platform, providing a comprehensive solution for legal documents, from drafting to court filing.”


Just-Launched Quench Uses Gen AI to Bring Greater Speed and Accuracy to Medico-Legal Records Review — from lawnext.com by Bob Ambrogi

A cardiologist with a background in medical technology, computer science and artificial intelligence has launched a product for legal professionals and physician expert witnesses that targets the tedious task of reviewing and analyzing thousands of pages of medical records.

The product, Quench SmartChart, uses generative AI to streamline the medico-legal review process, enabling users to quickly extract, summarize and create chronologies from large, disorganized PDFs of medical records.

The product also includes a natural language chat feature, AskQuench, that lets users interact with and interrogate records to surface essential insights.

 

Emerging Trends in Legal Tech — from legaltalknetwork.com by Rob Joyner & Jared D. Correia

Remote Work Continues to Thrive
In recent years, many firms have adopted a “just make it happen” attitude toward virtual meetings, mobility, and remote work. This has enabled law firms to reevaluate the tools and training necessary for legal professionals to utilize technology effectively, improving upon the traditional in-office setup. When executed correctly, this approach can yield long-lasting benefits for the firm. Implementing a remote work policy can help firms access a global talent pool, reduce operational costs, and create a better work-life balance for their staff.

In a recent episode of Legal Toolkit, Rob Joyner, Senior Vice President of Business Development at Centerbase, and Jared D. Correia, Esq., CEO of Red Cave Law Firm Consulting, discuss the debate between remote and in-office work, as well as the latest advancements in AI and other essential legal technology.

 


Bill Gates Reveals Superhuman AI Prediction — from youtube.com by Rufus Griscom, Bill Gates, Andy Sack, and Adam Brotman

In this episode of the Next Big Idea podcast, host Rufus Griscom and Bill Gates are joined by Andy Sack and Adam Brotman, co-authors of an exciting new book called “AI First.” Together, they consider AI’s impact on healthcare, education, productivity, and business. They dig into the technology’s risks. And they explore its potential to cure diseases, enhance creativity, and usher in a world of abundance.

Key moments:

00:05 Bill Gates discusses AI’s transformative potential in revolutionizing technology.
02:21 Superintelligence is inevitable and marks a significant advancement in AI technology.
09:23 Future AI may integrate deeply as cognitive assistants in personal and professional life.
14:04 AI’s metacognitive advancements could revolutionize problem-solving capabilities.
21:13 AI’s next frontier lies in developing human-like metacognition for sophisticated problem-solving.
27:59 AI advancements empower both good and malicious intents, posing new security challenges.
28:57 Rapid AI development raises questions about controlling its global application.
33:31 Productivity enhancements from AI can significantly improve efficiency across industries.
35:49 AI’s future applications in consumer and industrial sectors are subjects of ongoing experimentation.
46:10 AI democratization could level the economic playing field, enhancing service quality and reducing costs.
51:46 AI plays a role in mitigating misinformation and bridging societal divides through enhanced understanding.


OpenAI Introduces CriticGPT: A New Artificial Intelligence AI Model based on GPT-4 to Catch Errors in ChatGPT’s Code Output — from marktechpost.com

The team has summarized their primary contributions as follows.

  1. The team has offered the first instance of a simple, scalable oversight technique that greatly assists humans in more thoroughly detecting problems in real-world RLHF data.
  2. Within the ChatGPT and CriticGPT training pools, the team has discovered that critiques produced by CriticGPT catch more inserted bugs and are preferred over those written by human contractors.
  3. Compared to human contractors working alone, this research indicates that teams consisting of critic models and human contractors generate more thorough critiques. Compared to reviews generated exclusively by models, this partnership lowers the incidence of hallucinations.
  4. This study introduces Force Sampling Beam Search (FSBS), an inference-time sampling and scoring technique that balances the trade-off between minimizing bogus concerns and discovering genuine faults in LLM-generated critiques.
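The precision/recall trade-off behind FSBS can be sketched in a few lines. This is only an illustration of the general idea described above, not OpenAI's implementation: the candidate data, the reward function, and the penalty formula below are all invented for the example. The core move is to sample several candidate critiques and pick the one maximizing a reward score minus a penalty on the number of issues raised, so a single knob (`lam`) trades off catching real bugs against flagging spurious ones.

```python
# Illustrative sketch of an FSBS-style selection step (hypothetical
# scoring; CriticGPT's actual reward model and search are described
# in OpenAI's paper, not reproduced here).

def select_critique(candidates, reward_fn, lam):
    """Pick the candidate maximizing reward minus a per-claim penalty.

    A higher `lam` favors terse critiques (fewer spurious nitpicks);
    a lower `lam` favors exhaustive ones (more real bugs caught).
    """
    best, best_score = None, float("-inf")
    for critique in candidates:
        score = reward_fn(critique) - lam * len(critique["claims"])
        if score > best_score:
            best, best_score = critique, score
    return best

# Toy candidates: one focused critique, one padded with style nits.
candidates = [
    {"text": "Off-by-one in loop bound.", "claims": ["bug1"]},
    {"text": "Off-by-one; unused var; style nit.",
     "claims": ["bug1", "nit1", "nit2"]},
]

# Toy reward: one point per claim that is a genuine bug.
reward = lambda c: float(sum(1 for cl in c["claims"] if cl.startswith("bug")))

picked = select_critique(candidates, reward, lam=0.2)
# With lam=0.2 the focused critique scores 0.8 vs 0.4, so it wins.
```

Raising `lam` past the point where padding is unprofitable is the sketch's analogue of pushing the trade-off toward fewer bogus concerns.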

Character.AI now allows users to talk with AI avatars over calls — from techcrunch.com by Ivan Mehta

a16z-backed Character.AI said today that it is now allowing users to talk to AI characters over calls. The feature currently supports multiple languages, including English, Spanish, Portuguese, Russian, Korean, Japanese and Chinese.

The startup tested the calling feature ahead of today’s public launch. During that time, it said that more than 3 million users had made over 20 million calls. The company also noted that calls with AI characters can be useful for practicing language skills, giving mock interviews, or adding them to the gameplay of role-playing games.


Google Translate Just Added 110 More Languages — from lifehacker.com
You can now use the app to communicate in languages you’ve never even heard of.

Google Translate can come in handy when you’re traveling or communicating with someone who speaks another language, and thanks to a new update, you can now connect with some 614 million more people. Google is adding 110 new languages to its Translate tool using its AI PaLM 2 large language model (LLM), which brings the total of supported languages to nearly 250. This follows the 24 languages added in 2022, including Indigenous languages of the Americas as well as those spoken across Africa and central Asia.




Listen to your favorite books and articles voiced by Judy Garland, James Dean, Burt Reynolds and Sir Laurence Olivier — from elevenlabs.io
ElevenLabs partners with estates of iconic stars to bring their voices to the Reader App

 

Dream Machine is an AI model that makes high quality, realistic videos fast from text and images.

It is a highly scalable and efficient transformer model trained directly on videos, making it capable of generating physically accurate, consistent, and eventful shots. Dream Machine is our first step towards building a universal imagination engine, and it is available to everyone now!



Text-to-Video Emergence for July 2024 — from ai-supremacy.com by Michael Spencer
Who needs Sora?

There have been some incredible teasers in the text-to-video arena of Generative AI. Namely I’m watching:


“OpenAI seems to have the ability to create video in Sora, send it to ChatGPT for a script, use Voice Engine for voice over and put it all together.”
— by u/MassiveWasabi in r/singularity

 

Daniel Christian: My slides for the Educational Technology Organization of Michigan’s Spring 2024 Retreat

From DSC:
Last Thursday, I presented at the Educational Technology Organization of Michigan’s Spring 2024 Retreat. I wanted to pass along my slides to you all, in case they are helpful to you.

Topics/agenda:

  • Topics & resources re: Artificial Intelligence (AI)
    • Top multimodal players
    • Resources for learning about AI
    • Applications of AI
    • My predictions re: AI
  • The powerful impact of pursuing a vision
  • A potential, future next-gen learning platform
  • Share some lessons from my past with pertinent questions for you all now
  • The significant impact of an organization’s culture
  • Bonus material: Some people to follow re: learning science and edtech

 

Educational Technology Organization of Michigan (ETOM) Spring 2024 Retreat on June 6-7

PowerPoint slides of Daniel Christian's presentation at ETOM

Slides of the presentation (.PPTX)
Slides of the presentation (.PDF)

 


Plus several more slides re: this vision.

 

Doing Stuff with AI: Opinionated Midyear Edition — from oneusefulthing.org by Ethan Mollick

Every six months or so, I write a guide to doing stuff with AI. A lot has changed since the last guide, while a few important things have stayed the same. It is time for an update.

To learn to do serious stuff with AI, choose a Large Language Model and just use it to do serious stuff – get advice, summarize meetings, generate ideas, write, produce reports, fill out forms, discuss strategy – whatever you do at work, ask the AI to help. A lot of people I talk to seem to get the most benefit from engaging the AI in conversation, often because it gives good advice, but also because just talking through an issue yourself can be very helpful. I know this may not seem particularly profound, but “always invite AI to the table” is the principle in my book that people tell me had the biggest impact on them. You won’t know what AI can (and can’t) do for you until you try to use it for everything you do. And don’t sweat prompting too much; though there are some useful tips, just start a conversation with AI and see where it goes.

You do need to use one of the most advanced frontier models, however.

 
 

AI’s New Conversation Skills Eyed for Education — from insidehighered.com by Lauren Coffey
The latest ChatGPT’s more human-like verbal communication has professors pondering personalized learning, on-demand tutoring and more classroom applications.

ChatGPT’s newest version, GPT-4o (the “o” standing for “omni,” meaning “all”), has a more realistic voice and quicker verbal response time, both aiming to sound more human. The version, which should be available to free ChatGPT users in the coming weeks—a change also hailed by educators—allows people to interrupt it while it speaks, simulates more emotions with its voice, and translates languages in real time. It also can understand instructions in text and images and has improved video capabilities.

Ajjan said she immediately thought the new vocal and video capabilities could allow GPT to serve as a personalized tutor. Personalized learning has been a focus for educators grappling with the looming enrollment cliff and for those pushing for student success.

There’s also the potential for role playing, according to Ajjan. She pointed to mock interviews students could do to prepare for job interviews, or, for example, using GPT to play the role of a buyer to help prepare students in an economics course.

 

 

io.google/2024



How generative AI expands curiosity and understanding with LearnLM — from blog.google
LearnLM is our new family of models fine-tuned for learning, and grounded in educational research to make teaching and learning experiences more active, personal and engaging.

Generative AI is fundamentally changing how we’re approaching learning and education, enabling powerful new ways to support educators and learners. It’s taking curiosity and understanding to the next level — and we’re just at the beginning of how it can help us reimagine learning.

Today we’re introducing LearnLM: our new family of models fine-tuned for learning, based on Gemini.

On YouTube, a conversational AI tool makes it possible to figuratively “raise your hand” while watching academic videos to ask clarifying questions, get helpful explanations or take a quiz on what you’ve been learning. This even works with longer educational videos like lectures or seminars thanks to the Gemini model’s long-context capabilities. These features are already rolling out to select Android users in the U.S.

Learn About is a new Labs experience that explores how information can turn into understanding by bringing together high-quality content, learning science and chat experiences. Ask a question and it helps guide you through any topic at your own pace — through pictures, videos, webpages and activities — and you can upload files or notes and ask clarifying questions along the way.


Google I/O 2024: An I/O for a new generation — from blog.google

The Gemini era
A year ago on the I/O stage we first shared our plans for Gemini: a frontier model built to be natively multimodal from the beginning, that could reason across text, images, video, code, and more. It marks a big step in turning any input into any output — an “I/O” for a new generation.

In this story:


Daily Digest: Google I/O 2024 – AI search is here. — from bensbites.beehiiv.com
PLUS: It’s got Agents, Video and more. And, Ilya leaves OpenAI

  • Google is integrating AI into all of its ecosystem: Search, Workspace, Android, etc. In true Google fashion, many features are “coming later this year”. If they ship and perform like the demos, Google will get a serious upper hand over OpenAI/Microsoft.
  • All of the AI features across Google products will be powered by Gemini 1.5 Pro. It’s Google’s best model and one of the top models. A new Gemini 1.5 Flash model is also launched, which is faster and much cheaper.
  • Google has ambitious projects in the pipeline. Those include a real-time voice assistant called Astra, a long-form video generator called Veo, plans for end-to-end agents, virtual AI teammates and more.

 



New ways to engage with Gemini for Workspace — from workspace.google.com

Today at Google I/O we’re announcing new, powerful ways to get more done in your personal and professional life with Gemini for Google Workspace. Gemini in the side panel of your favorite Workspace apps is rolling out more broadly and will use the 1.5 Pro model for answering a wider array of questions and providing more insightful responses. We’re also bringing more Gemini capabilities to your Gmail app on mobile, helping you accomplish more on the go. Lastly, we’re showcasing how Gemini will become the connective tissue across multiple applications with AI-powered workflows. And all of this comes fresh on the heels of the innovations and enhancements we announced last month at Google Cloud Next.


Google’s Gemini updates: How Project Astra is powering some of I/O’s big reveals — from techcrunch.com by Kyle Wiggers

Google is improving its AI-powered chatbot Gemini so that it can better understand the world around it — and the people conversing with it.

At the Google I/O 2024 developer conference on Tuesday, the company previewed a new experience in Gemini called Gemini Live, which lets users have “in-depth” voice chats with Gemini on their smartphones. Users can interrupt Gemini while the chatbot’s speaking to ask clarifying questions, and it’ll adapt to their speech patterns in real time. And Gemini can see and respond to users’ surroundings, either via photos or video captured by their smartphones’ cameras.


Generative AI in Search: Let Google do the searching for you — from blog.google
With expanded AI Overviews, more planning and research capabilities, and AI-organized search results, our custom Gemini model can take the legwork out of searching.


 

Hello GPT-4o — from openai.com
We’re announcing GPT-4o, our new flagship model that can reason across audio, vision, and text in real time.

GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, image, and video and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models.

Example topics covered here:

  • Two GPT-4os interacting and singing
  • Languages/translation
  • Personalized math tutor
  • Meeting AI
  • Harmonizing and creating music
  • Providing inflection, emotions, and a human-like voice
  • Understanding what the camera is looking at and integrating it into the AI’s responses
  • Providing customer service

With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. Because GPT-4o is our first model combining all of these modalities, we are still just scratching the surface of exploring what the model can do and its limitations.





From DSC:
I like the assistive tech angle here:





 

 

Voice Banks (preserving our voices for AI) — from thebrainyacts.beehiiv.com by Josh Kubicki

The Ethical and Emotional Implications of AI Voice Preservation

Legal Considerations and Voice Rights
From a legal perspective, the burgeoning use of AI in voice cloning also introduces a complex web of rights and permissions. The recent passage of Tennessee’s ELVIS Act, which allows legal action against unauthorized recreations of an artist’s voice, underscores the necessity for robust legal frameworks to manage these technologies. For non-celebrities, the idea of a personal voice bank brings about its own set of legal challenges. How do we regulate the use of an individual’s voice after their death? Who holds the rights to control and consent to the usage of these digital artifacts?

To safeguard against misuse, any system of voice banking would need stringent controls over who can access and utilize these voices. The creation of such banks would necessitate clear guidelines and perhaps even contractual agreements stipulating the terms under which these voices may be used posthumously.

Should we all consider creating voice banks to preserve our voices, allowing future generations the chance to interact with us even after we are gone?

 

 

 

Are we ready to navigate the complex ethics of advanced AI assistants? — from futureofbeinghuman.com by Andrew Maynard
An important new paper lays out the importance and complexities of ensuring increasingly advanced AI-based assistants are developed and used responsibly

Last week a behemoth of a paper was released by AI researchers in academia and industry on the ethics of advanced AI assistants.

It’s one of the most comprehensive and thoughtful papers on developing transformative AI capabilities in socially responsible ways that I’ve read in a while. And it’s essential reading for anyone developing and deploying AI-based systems that act as assistants or agents — including many of the AI apps and platforms that are currently being explored in business, government, and education.

The paper — The Ethics of Advanced AI Assistants — is written by 57 co-authors representing researchers at Google DeepMind, Google Research, Jigsaw, and a number of prominent universities that include the University of Edinburgh, the University of Oxford, and Delft University of Technology. Coming in at 274 pages, this is a massive piece of work. And as the authors persuasively argue, it’s a critically important one at this point in AI development.

From that large paper:

Key questions for the ethical and societal analysis of advanced AI assistants include:

  1. What is an advanced AI assistant? How does an AI assistant differ from other kinds of AI technology?
  2. What capabilities would an advanced AI assistant have? How capable could these assistants be?
  3. What is a good AI assistant? Are there certain values that we want advanced AI assistants to evidence across all contexts?
  4. Are there limits on what AI assistants should be allowed to do? If so, how are these limits determined?
  5. What should an AI assistant be aligned with? With user instructions, preferences, interests, values, well-being or something else?
  6. What issues need to be addressed for AI assistants to be safe? What does safety mean for this class of technologies?
  7. What new forms of persuasion might advanced AI assistants be capable of? How can we ensure that users remain appropriately in control of the technology?
  8. How can people – especially vulnerable users – be protected from AI manipulation and unwanted disclosure of personal information?
  9. Is anthropomorphism for AI assistants morally problematic? If so, might it still be permissible under certain conditions?
 

AI for the physical world — from superhuman.ai by Zain Kahn

Excerpt: (emphasis DSC)

A new company called Archetype is trying to tackle that problem: It wants to make AI useful for more than just interacting with and understanding the digital realm. The startup just unveiled Newton — “the first foundation model that understands the physical world.”

What’s it for?
A warehouse or factory might have 100 different sensors that have to be analyzed separately to figure out whether the entire system is working as intended. Newton can understand and interpret all of the sensors at the same time, giving a better overview of how everything’s working together. Another benefit: You can ask Newton questions in plain English without needing much technical expertise.

How does it work?

  • Newton collects data from radar, motion sensors, and chemical and environmental trackers
  • It uses an LLM to combine each of those data streams into a cohesive package
  • It translates that data into text, visualizations, or code so it’s easy to understand
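The three steps above — collect heterogeneous sensor streams, combine them into one cohesive view, and translate that view into plain text — can be sketched as a small pipeline. Archetype has not published Newton's API, so everything below (function names, sensor names, data shapes) is invented purely to illustrate the described flow; the real product presumably uses a learned model rather than simple dictionary merging.

```python
# Hypothetical sketch of the sensor-fusion-to-text flow described
# above. Names and structures are invented for illustration only.

from collections import defaultdict

def fuse_streams(streams):
    """Merge per-sensor (timestamp, value) readings into one record
    per timestamp, so each instant describes the whole system."""
    fused = defaultdict(dict)
    for sensor_name, readings in streams.items():
        for ts, value in readings:
            fused[ts][sensor_name] = value
    return dict(fused)

def to_text(fused):
    """Render fused readings as plain-English lines that an LLM
    could summarize or answer questions about."""
    lines = []
    for ts in sorted(fused):
        parts = ", ".join(f"{k}={v}" for k, v in sorted(fused[ts].items()))
        lines.append(f"t={ts}: {parts}")
    return "\n".join(lines)

# Toy streams standing in for radar, motion, and chemical trackers.
streams = {
    "radar_mm": [(0, 1200), (1, 1185)],
    "motion": [(0, "idle"), (1, "active")],
    "voc_ppb": [(0, 14), (1, 15)],
}

report = to_text(fuse_streams(streams))
# Each output line now captures every sensor at one instant, ready to
# be handed to an LLM alongside a plain-English question.
```

The design point the excerpt makes is exactly this merge step: a hundred separately monitored streams become one record per instant, which is what lets a language model reason about the whole system at once.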

Apple’s $25-50 million Shutterstock deal highlights fierce competition for AI training data — from venturebeat.com by Michael Nuñez; via Tom Barrett’s Promptcraft e-newsletter

Apple has entered into a significant agreement with stock photography provider Shutterstock to license millions of images for training its artificial intelligence models. According to a Reuters report, the deal is estimated to be worth between $25 million and $50 million, placing Apple among several tech giants racing to secure vast troves of data to power their AI systems.


 

 
© 2024 | Daniel Christian