1-800-CHAT-GPT—12 Days of OpenAI: Day 10

Per The Rundown: OpenAI just launched a surprising new way to access ChatGPT (through an old-school 1-800 number) and also rolled out a new WhatsApp integration for global users during Day 10 of the company’s livestream event.


How Agentic AI is Revolutionizing Customer Service — from customerthink.com by Devashish Mamgain

Agentic AI represents a significant evolution in artificial intelligence, offering enhanced autonomy and decision-making capabilities beyond traditional AI systems. Unlike conventional AI, which requires human instructions, agentic AI can independently perform complex tasks, adapt to changing environments, and pursue goals with minimal human intervention.

This makes it a powerful tool across various industries, especially in the customer service function. To understand it better, let’s compare AI Agents with non-AI agents.

Characteristics of Agentic AI

    • Autonomy: Achieves complex objectives without requiring human collaboration.
    • Language Comprehension: Understands nuanced human speech and text effectively.
    • Rationality: Makes informed, contextual decisions using advanced reasoning engines.
    • Adaptation: Adjusts plans and goals in dynamic situations.
    • Workflow Optimization: Streamlines and organizes business workflows with minimal oversight.

Clio: A system for privacy-preserving insights into real-world AI use — from anthropic.com

How, then, can we research and observe how our systems are used while rigorously maintaining user privacy?

Claude insights and observations, or “Clio,” is our attempt to answer this question. Clio is an automated analysis tool that enables privacy-preserving analysis of real-world language model use. It gives us insights into the day-to-day uses of claude.ai in a way that’s analogous to tools like Google Trends. It’s also already helping us improve our safety measures. In this post—which accompanies a full research paper—we describe Clio and some of its initial results.


Evolving tools redefine AI video — from heatherbcooper.substack.com by Heather Cooper
Google’s Veo 2, Kling 1.6, Pika 2.0 & more

AI video continues to surpass expectations
The AI video generation space has evolved dramatically in recent weeks, with several major players introducing groundbreaking tools.

Here’s a comprehensive look at the current landscape:

  • Veo 2…
  • Pika 2.0…
  • Runway’s Gen-3…
  • Luma AI Dream Machine…
  • Hailuo’s MiniMax…
  • OpenAI’s Sora…
  • Hunyuan Video by Tencent…

There are several other video models and platforms, including …

 

Introducing Gemini 2.0: our new AI model for the agentic era — from blog.google by Sundar Pichai, Demis Hassabis, and Koray Kavukcuoglu

Today we’re excited to launch our next era of models built for this new agentic era: introducing Gemini 2.0, our most capable model yet. With new advances in multimodality — like native image and audio output — and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant.

We’re getting 2.0 into the hands of developers and trusted testers today. And we’re working quickly to get it into our products, leading with Gemini and Search. Starting today our Gemini 2.0 Flash experimental model will be available to all Gemini users. We’re also launching a new feature called Deep Research, which uses advanced reasoning and long context capabilities to act as a research assistant, exploring complex topics and compiling reports on your behalf. It’s available in Gemini Advanced today.

Over the last year, we have been investing in developing more agentic models, meaning they can understand more about the world around you, think multiple steps ahead, and take action on your behalf, with your supervision.


Try Deep Research and our new experimental model in Gemini, your AI assistant — from blog.google by Dave Citron
Deep Research rolls out to Gemini Advanced subscribers today, saving you hours of time. Plus, you can now try out a chat optimized version of 2.0 Flash Experimental in Gemini on the web.

Today, we’re sharing the latest updates to Gemini, your AI assistant, including Deep Research — our new agentic feature in Gemini Advanced — and access to try Gemini 2.0 Flash, our latest experimental model.

Deep Research uses AI to explore complex topics on your behalf and provide you with findings in a comprehensive, easy-to-read report, and is a first look at how Gemini is getting even better at tackling complex tasks to save you time.


Google Unveils A.I. Agent That Can Use Websites on Its Own — from nytimes.com by Cade Metz and Nico Grant (NOTE: This is a GIFTED article for you.)
The experimental tool can browse spreadsheets, shopping sites and other services, before taking action on behalf of the computer user.

Google on Wednesday unveiled a prototype of this technology, which artificial intelligence researchers call an A.I. agent.

Google’s new prototype, called Mariner, is based on Gemini 2.0, which the company also unveiled on Wednesday. Gemini is the core technology that underpins many of the company’s A.I. products and research experiments. Versions of the system will power the company’s chatbot of the same name and A.I. Overviews, a Google search tool that directly answers user questions.


Gemini 2.0 is the next chapter for Google AI — from axios.com by Ina Fried

Google Gemini 2.0 — a major upgrade to the core workings of Google’s AI that the company launched Wednesday — is designed to help generative AI move from answering users’ questions to taking action on its own…

The big picture: Hassabis said building AI systems that can take action on their own has been DeepMind’s focus since its early days teaching computers to play games such as chess and Go.

  • “We were always working towards agent-based systems,” Hassabis said. “From the beginning, they were able to plan and then carry out actions and achieve objectives.”
  • Hassabis said AI systems that can act as semi-autonomous agents also represent an important intermediate step on the path toward artificial general intelligence (AGI) — AI that can match or surpass human capabilities.
  • “If we think about the path to AGI, then obviously you need a system that can reason, break down problems and carry out actions in the world,” he said.

AI Agents vs. AI Assistants: Know the Key Differences — from aithority.com by Rishika Patel

The same paradigm applies to AI systems. AI assistants function as reactive tools, completing tasks like answering queries or managing workflows upon request. Think of chatbots or scheduling tools. AI agents, however, work autonomously to achieve set objectives, making decisions and executing tasks dynamically, adapting as new information becomes available.

Together, AI assistants and agents can enhance productivity and innovation in business environments. While assistants handle routine tasks, agents can drive strategic initiatives and problem-solving. This powerful combination has the potential to elevate organizations, making processes more efficient and professionals more effective.


Discover how to accelerate AI transformation with NVIDIA and Microsoft — from ignite.microsoft.com

Meet NVIDIA – The Engine of AI. From gaming to data science, self-driving cars to climate change, we’re tackling the world’s greatest challenges and transforming everyday life. The Microsoft and NVIDIA partnership gives Startups, ISVs, and Partners global access to the latest NVIDIA GPUs on demand, along with comprehensive developer solutions to build, deploy, and scale AI-enabled products and services.


Google + Meta + Apple New AI — from theneurondaily.com by Grant Harvey

What else Google announced:

  • Deep Research: New feature that can explore topics and compile reports.
  • Project Astra: AI agent that can use Google Search, Lens, and Maps, understands multiple languages, and has 10-minute conversation memory.
  • Project Mariner: A browser control agent that can complete web tasks (83.5% success rate on WebVoyager benchmark). Read more about Mariner here.
  • Agents to help you play (or test) video games.

AI Agents: Easier To Build, Harder To Get Right — from forbes.com by Andres Zunino

The swift progress of artificial intelligence (AI) has simplified the creation and deployment of AI agents with the help of new tools and platforms. Beneath the surface, however, deploying these systems comes with hidden challenges, particularly concerning ethics, fairness, and the potential for bias.

The history of AI agents highlights the growing need for expertise to fully realize their benefits while effectively minimizing risks.

 

What Students Are Saying About Teachers Using A.I. to Grade — from nytimes.com by The Learning Network; via Claire Zau
Teenagers and educators weigh in on a recent question from The Ethicist.

Is it unethical for teachers to use artificial intelligence to grade papers if they have forbidden their students from using it for their assignments?

That was the question a teacher asked Kwame Anthony Appiah in a recent edition of The Ethicist. We posed it to students to get their take on the debate, and asked them their thoughts on teachers using A.I. in general.

While our Student Opinion questions are usually reserved for teenagers, we also heard from a few educators about how they are — or aren’t — using A.I. in the classroom. We’ve included some of their answers, as well.


OpenAI wants to pair online courses with chatbots — from techcrunch.com by Kyle Wiggers; via James DeVaney on LinkedIn

If OpenAI has its way, the next online course you take might have a chatbot component.

Speaking at a fireside on Monday hosted by Coeus Collective, Siya Raj Purohit, a member of OpenAI’s go-to-market team for education, said that OpenAI might explore ways to let e-learning instructors create custom “GPTs” that tie into online curriculums.

“What I’m hoping is going to happen is that professors are going to create custom GPTs for the public and let people engage with content in a lifelong manner,” Purohit said. “It’s not part of the current work that we’re doing, but it’s definitely on the roadmap.”


15 Times to use AI, and 5 Not to — from oneusefulthing.org by Ethan Mollick
Notes on the Practical Wisdom of AI Use

There are several types of work where AI can be particularly useful, given the current capabilities and limitations of LLMs. Though this list is based in science, it draws even more from experience. Like any form of wisdom, using AI well requires holding opposing ideas in mind: it can be transformative yet must be approached with skepticism, powerful yet prone to subtle failures, essential for some tasks yet actively harmful for others. I also want to caveat that you shouldn’t take this list too seriously except as inspiration – you know your own situation best, and local knowledge matters more than any general principles. With all that out of the way, below are several types of tasks where AI can be especially useful, given current capabilities—and some scenarios where you should remain wary.


Learning About Google Learn About: What Educators Need To Know — from techlearning.com by Ray Bendici
Google’s experimental Learn About platform is designed to create an AI-guided learning experience

Google Learn About is a new experimental AI-driven platform that provides digestible and in-depth knowledge about various topics, showcasing it all in an educational context. Described by Google as a “conversational learning companion,” it is essentially a Wikipedia-style chatbot/search engine, and then some.

In addition to having a variety of already-created topics and leading questions (in areas such as history, arts, culture, biology, and physics) the tool allows you to enter prompts using either text or an image. It then provides a general overview/answer, and then suggests additional questions, topics, and more to explore in regard to the initial subject.

The idea for student use is that the AI can help guide a deeper learning process rather than just provide static answers.


What OpenAI’s PD for Teachers Does—and Doesn’t—Do — from edweek.org by Olina Banerji
What’s the first thing that teachers dipping their toes into generative artificial intelligence should do?

They should start with the basics, according to OpenAI, the creator of ChatGPT and one of the world’s most prominent artificial intelligence research companies. Last month, the company launched an hour-long, self-paced online course for K-12 teachers about the definition, use, and harms of generative AI in the classroom. It was launched in collaboration with Common Sense Media, a national nonprofit that rates and reviews a wide range of digital content for its age appropriateness.

…the above article links to:

ChatGPT Foundations for K–12 Educators — from commonsense.org

This course introduces you to the basics of artificial intelligence, generative AI, ChatGPT, and how to use ChatGPT safely and effectively. From decoding the jargon to responsible use, this course will help you level up your understanding of AI and ChatGPT so that you can use tools like this safely and with a clear purpose.

Learning outcomes:

  • Understand what ChatGPT is and how it works.
  • Demonstrate ways to use ChatGPT to support your teaching practices.
  • Implement best practices for applying responsible AI principles in a school setting.

Takeaways From Google’s Learning in the AI Era Event — from edtechinsiders.substack.com by Sarah Morin, Alex Sarlin, and Ben Kornell
Highlights from Our Day at Google + Behind-the-Scenes Interviews Coming Soon!

  1. NotebookLM: The Start of an AI Operating System
  2. Google is Serious About AI and Learning
  3. Google’s LearnLM Now Available in AI Studio
  4. Collaboration is King
  5. If You Give a Teacher a Ferrari

Rapid Responses to AI — from the-job.beehiiv.com by Paul Fain
Top experts call for better data and more short-term training as tech transforms jobs.

AI could displace middle-skill workers and widen the wealth gap, says landmark study, which calls for better data and more investment in continuing education to help workers make career pivots.

Ensuring That AI Helps Workers
Artificial intelligence has emerged as a general purpose technology with sweeping implications for the workforce and education. While it’s impossible to precisely predict the scope and timing of looming changes to the labor market, the U.S. should build its capacity to rapidly detect and respond to AI developments.
That’s the big-ticket framing of a broad new report from the National Academies of Sciences, Engineering, and Medicine. Congress requested the study, tapping an all-star committee of experts to assess the current and future impact of AI on the workforce.

“In contemplating what the future holds, one must approach predictions with humility,” the study says…

“AI could accelerate occupational polarization,” the committee said, “by automating more nonroutine tasks and increasing the demand for elite expertise while displacing middle-skill workers.”

The Kicker: “The education and workforce ecosystem has a responsibility to be intentional with how we value humans in an AI-powered world and design jobs and systems around that,” says Hsieh.


AI Predators: What Schools Should Know and Do — from techlearning.com by Erik Ofgang
AI is increasingly being used by predators to connect with underage students online. Yasmin London, global online safety expert at Qoria and a former member of the New South Wales Police Force in Australia, shares steps educators can take to protect students.

The threat from AI for students goes well beyond cheating, says Yasmin London, global online safety expert at Qoria and a former member of the New South Wales Police Force in Australia.

Increasingly at U.S. schools and beyond, AI is being used by predators to manipulate children. Students are also using AI to generate inappropriate images of other classmates or staff members. For a recent report, Qoria, a company that specializes in child digital safety and wellbeing products, surveyed 600 schools across North America, the UK, Australia, and New Zealand.


Why We Undervalue Ideas and Overvalue Writing — from aiczar.blogspot.com by Alexander “Sasha” Sidorkin

A student submits a paper that fails to impress stylistically yet approaches a worn topic from an angle no one has tried before. The grade lands at B minus, and the student learns to be less original next time. This pattern reveals a deep bias in higher education: ideas lose to writing every time.

This bias carries serious equity implications. Students from disadvantaged backgrounds, including first-generation college students, English language learners, and those from under-resourced schools, often arrive with rich intellectual perspectives but struggle with academic writing conventions. Their ideas – shaped by unique life experiences and cultural viewpoints – get buried under red ink marking grammatical errors and awkward transitions. We systematically undervalue their intellectual contributions simply because they do not arrive in standard academic packaging.


Google Scholar’s New AI Outline Tool Explained By Its Founder — from techlearning.com by Erik Ofgang
Google Scholar PDF reader uses Gemini AI to read research papers. The AI model creates direct links to the paper’s citations and a digital outline that summarizes the different sections of the paper.

Google Scholar has entered the AI revolution. Google Scholar PDF reader now utilizes generative AI powered by Google’s Gemini AI tool to create interactive outlines of research papers and provide direct links to sources within the paper. This is designed to make reading the relevant parts of the research paper more efficient, says Anurag Acharya, who co-founded Google Scholar on November 18, 2004, twenty years ago last month.


The Four Most Powerful AI Use Cases in Instructional Design Right Now — from drphilippahardman.substack.com by Dr. Philippa Hardman
Insights from ~300 instructional designers who have taken my AI & Learning Design bootcamp this year

  1. AI-Powered Analysis: Creating Detailed Learner Personas…
  2. AI-Powered Design: Optimising Instructional Strategies…
  3. AI-Powered Development & Implementation: Quality Assurance…
  4. AI-Powered Evaluation: Predictive Impact Assessment…

How Are New AI Tools Changing ‘Learning Analytics’? — from edsurge.com by Jeffrey R. Young
For a field that has been working to learn from the data trails students leave in online systems, generative AI brings new promises — and new challenges.

In other words, with just a few simple instructions to ChatGPT, the chatbot can classify vast amounts of student work and turn it into numbers that educators can quickly analyze.

Findings from learning analytics research are also being used to help train new generative AI-powered tutoring systems.

Another big application is in assessment, says Pardos, the Berkeley professor. Specifically, new AI tools can be used to improve how educators measure and grade a student’s progress through course materials. The hope is that new AI tools will allow for replacing many multiple-choice exercises in online textbooks with fill-in-the-blank or essay questions.
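To make the classification idea above concrete, here is a minimal sketch of how a researcher might have an LLM label batches of student responses via the OpenAI Python client (v1+). The model name, rubric labels, and sample responses are illustrative assumptions, not details from the article.

```python
# Minimal sketch: using an LLM to classify student work at scale.
# Assumptions (not from the article): the OpenAI Python client v1+,
# an OPENAI_API_KEY in the environment, and hypothetical rubric labels.
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CATEGORIES = ["understands-concept", "partial-understanding", "misconception"]

def classify(response_text: str) -> str:
    """Ask the model to assign exactly one rubric label to a student response."""
    result = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": "Classify the student's response into exactly one of "
                           f"these categories: {', '.join(CATEGORIES)}. "
                           "Reply with the label only.",
            },
            {"role": "user", "content": response_text},
        ],
    )
    return result.choices[0].message.content.strip()

# Hypothetical responses; real use would read from an LMS or survey export.
responses = [
    "Photosynthesis converts light energy into chemical energy stored in glucose.",
    "Plants eat sunlight and dirt to grow.",
]
label_counts = Counter(classify(r) for r in responses)
print(label_counts)  # aggregate counts that educators can analyze at a glance
```

The aggregated label counts are the “numbers that educators can quickly analyze”; any real deployment would also need to validate the model’s labels against human raters.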


Increasing AI Fluency Among Enterprise Employees, Senior Management & Executives — from learningguild.com by Bill Brandon

This article attempts, in these early days, to provide some specific guidelines for AI curriculum planning in enterprise organizations.

The two reports identified in the first paragraph help to answer an important question. What can enterprise L&D teams do to improve AI fluency in their organizations?

You might be surprised by how many software products have added AI features. Examples (to name a few) include productivity software (Microsoft 365 and Google Workspace); customer relationship management (Salesforce and HubSpot); human resources (Workday and Talentsoft); marketing and advertising (Adobe Marketing Cloud and Hootsuite); and communication and collaboration (Slack and Zoom). Look for more under those categories on software review sites.

 

(Excerpt from the 12/4/24 edition)

Robot “Jailbreaks”
In the year or so since large language models hit the big time, researchers have demonstrated numerous ways of tricking them into producing problematic outputs including hateful jokes, malicious code, phishing emails, and the personal information of users. It turns out that misbehavior can take place in the physical world, too: LLM-powered robots can easily be hacked so that they behave in potentially dangerous ways.

Researchers from the University of Pennsylvania were able to persuade a simulated self-driving car to ignore stop signs and even drive off a bridge, get a wheeled robot to find the best place to detonate a bomb, and force a four-legged robot to spy on people and enter restricted areas.

“We view our attack not just as an attack on robots,” says George Pappas, head of a research lab at the University of Pennsylvania who helped unleash the rebellious robots. “Any time you connect LLMs and foundation models to the physical world, you actually can convert harmful text into harmful actions.”

The robot “jailbreaks” highlight a broader risk that is likely to grow as AI models are increasingly used as a way for humans to interact with physical systems, or to enable AI agents to act autonomously on computers, say the researchers involved.


Virtual lab powered by ‘AI scientists’ super-charges biomedical research — from nature.com by Helena Kudiabor
Could human-AI collaborations be the future of interdisciplinary studies?

In an effort to automate scientific discovery using artificial intelligence (AI), researchers have created a virtual laboratory that combines several ‘AI scientists’ — large language models with defined scientific roles — that can collaborate to achieve goals set by human researchers.

The system, described in a preprint posted on bioRxiv last month, was able to design antibody fragments called nanobodies that can bind to the virus that causes COVID-19, proposing nearly 100 of these structures in a fraction of the time it would take an all-human research group.


Can AI agents accelerate AI implementation for CIOs? — from intelligentcio.com by Arun Shankar

By embracing an agent-first approach, every CIO can redefine their business operations. AI agents are now the number one choice for CIOs, as they come pre-built and can generate responses that are consistent with a company’s brand using trusted business data, explains Thierry Nicault at Salesforce Middle East.


AI Turns Photos Into 3D Real World — from theaivalley.com by Barsee

Here’s what you need to know:

  • The system generates full 3D environments that expand beyond what’s visible in the original image, allowing users to explore new perspectives.
  • Users can freely navigate and view the generated space with standard keyboard and mouse controls, similar to browsing a website.
  • It includes real-time camera effects like depth-of-field and dolly zoom, as well as interactive lighting and animation sliders to tweak scenes.
  • The system works with both photos and AI-generated images, enabling creators to integrate it with text-to-image tools or even famous works of art.

Why it matters:
This technology opens up exciting possibilities for industries like gaming, film, and virtual experiences. Soon, creating fully immersive worlds could be as simple as generating a static image.

Also related, see:

From World Labs

Today we’re sharing our first step towards spatial intelligence: an AI system that generates 3D worlds from a single image. This lets you step into any image and explore it in 3D.

Most GenAI tools make 2D content like images or videos. Generating in 3D instead improves control and consistency. This will change how we make movies, games, simulators, and other digital manifestations of our physical world.

In this post you’ll explore our generated worlds, rendered live in your browser. You’ll also experience different camera effects, 3D effects, and dive into classic paintings. Finally, you’ll see how creators are already building with our models.


Addendum on 12/5/24:

 
 

AI-governed robots can easily be hacked — from theaivalley.com by Barsee
PLUS: Sam Altman’s new company “World” introduced…

In a groundbreaking study, researchers from Penn Engineering showed how AI-powered robots can be manipulated to ignore safety protocols, allowing them to perform harmful actions despite normally rejecting dangerous task requests.

What did they find?

  • Researchers found previously unknown security vulnerabilities in AI-governed robots and are working to address these issues to ensure the safe use of large language models (LLMs) in robotics.
  • Their newly developed algorithm, RoboPAIR, reportedly achieved a 100% jailbreak rate by bypassing the safety protocols on three different AI robotic systems in a few days.
  • Using RoboPAIR, researchers were able to manipulate test robots into performing harmful actions, like bomb detonation and blocking emergency exits, simply by changing how they phrased their commands.

Why does it matter?

This research highlights the importance of spotting weaknesses in AI systems to improve their safety, allowing us to test and train them to prevent potential harm.

From DSC:
Great! Just what we wanted to hear. But does it surprise anyone? Even so…we move forward at warp speed.


From DSC:
So, given the above item, does the next item make you a bit nervous as well? I saw someone on Twitter/X exclaim, “What could go wrong?”  I can’t say I didn’t feel the same way.

Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku — from anthropic.com

We’re also introducing a groundbreaking new capability in public beta: computer use. Available today on the API, developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental—at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time.

Per The Rundown AI:

The Rundown: Anthropic just introduced a new capability called ‘computer use’, alongside upgraded versions of its AI models, which enables Claude to interact with computers by viewing screens, typing, moving cursors, and executing commands.

Why it matters: While many hoped for Opus 3.5, Anthropic’s Sonnet and Haiku upgrades pack a serious punch. Plus, with the new computer use embedded right into its foundation models, Anthropic just sent a warning shot to tons of automation startups—even if the capabilities aren’t earth-shattering… yet.
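For developers wondering what directing Claude to “use computers the way people do” looks like in code, below is a minimal sketch against the computer-use public beta as announced in October 2024. The tool-type strings, beta flag, and model name are taken from Anthropic’s launch-era documentation but should be verified against the current docs; note that the API only returns proposed actions, which your own sandboxed application must then execute.

```python
# Minimal sketch of Anthropic's computer-use public beta (October 2024).
# Verify against current docs: tool type identifiers, the beta flag string,
# and the model name may have changed since launch.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[
        {
            "type": "computer_20241022",  # virtual screen/mouse/keyboard tool
            "name": "computer",
            "display_width_px": 1024,
            "display_height_px": 768,
        },
        {"type": "bash_20241022", "name": "bash"},  # shell tool
    ],
    messages=[
        {"role": "user",
         "content": "Open the spreadsheet on my desktop and sum column B."}
    ],
    betas=["computer-use-2024-10-22"],
)

# Claude replies with tool_use blocks describing actions (e.g., take a
# screenshot, click at coordinates); the calling application is responsible
# for executing them in a sandboxed environment and returning the results.
for block in response.content:
    print(block.type, getattr(block, "input", None))
```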

Also related/see:

  • What is Anthropic’s AI Computer Use? — from ai-supremacy.com by Michael Spencer
    Task automation, AI at the intersection of coding and AI agents take on new frenzied importance heading into 2025 for the commercialization of Generative AI.
  • New Claude, Who Dis? — from theneurondaily.com
    Anthropic just dropped two new Claude models…oh, and Claude can now use your computer.
  • When you give a Claude a mouse — from oneusefulthing.org by Ethan Mollick
    Some quick impressions of an actual agent

Introducing Act-One — from runwayml.com
A new way to generate expressive character performances using simple video inputs.

Per Lore by Nathan Lands:

What makes Act-One special? It can capture the soul of an actor’s performance using nothing but a simple video recording. No fancy motion capture equipment, no complex face rigging, no army of animators required. Just point a camera at someone acting, and watch as their exact expressions, micro-movements, and emotional nuances get transferred to an AI-generated character.

Think about what this means for creators: you could shoot an entire movie with multiple characters using just one actor and a basic camera setup. The same performance can drive characters with completely different proportions and looks, while maintaining the authentic emotional delivery of the original performance. We’re witnessing the democratization of animation tools that used to require millions in budget and years of specialized training.



Google to buy nuclear power for AI datacentres in ‘world first’ deal — from theguardian.com
Tech company orders six or seven small nuclear reactors from California’s Kairos Power

Google has signed a “world first” deal to buy energy from a fleet of mini nuclear reactors to generate the power needed for the rise in use of artificial intelligence.

The US tech corporation has ordered six or seven small nuclear reactors (SMRs) from California’s Kairos Power, with the first due to be completed by 2030 and the remainder by 2035.



ChatGPT Topped 3 Billion Visits in September — from similarweb.com

After the extreme peak and summer slump of 2023, ChatGPT has been setting new traffic highs since May

ChatGPT has been topping its web traffic records for months now, with September 2024 traffic up 112% year-over-year (YoY) to 3.1 billion visits, according to Similarweb estimates. That’s a change from last year, when traffic to the site went through a boom-and-bust cycle.


Crazy “AI Army” — from aisecret.us

Also from aisecret.us, see World’s First Nuclear Power Deal For AI Data Centers

Google has made a historic agreement to buy energy from a group of small nuclear reactors (SMRs) from Kairos Power in California. This is the first nuclear power deal specifically for AI data centers in the world.


New updates to help creators build community, drive business, & express creativity on YouTube — from support.google.com

Hey creators!
Made on YouTube 2024 is here and we’ve announced a lot of updates that aim to give everyone the opportunity to build engaging communities, drive sustainable businesses, and express creativity on our platform.

Below is a roundup with key info – feel free to upvote the announcements that you’re most excited about and subscribe to this post to get updates on these features! We’re looking forward to another year of innovating with our global community. It’s a future full of opportunities, and it’s all Made on YouTube!


New autonomous agents scale your team like never before — from blogs.microsoft.com

Today, we’re announcing new agentic capabilities that will accelerate these gains and bring AI-first business process to every organization.

  • First, the ability to create autonomous agents with Copilot Studio will be in public preview next month.
  • Second, we’re introducing ten new autonomous agents in Dynamics 365 to build capacity for every sales, service, finance and supply chain team.

10 Daily AI Use Cases for Business Leaders — from flexos.work by Daan van Rossum
While AI is becoming more powerful by the day, business leaders still wonder why and where to apply it today. I take you through 10 critical use cases where AI should take over your work or partner with you.


Multi-Modal AI: Video Creation Simplified — from heatherbcooper.substack.com by Heather Cooper

Emerging Multi-Modal AI Video Creation Platforms
The rise of multi-modal AI platforms has revolutionized content creation, allowing users to research, write, and generate images in one app. Now, a new wave of platforms is extending these capabilities to video creation and editing.

Multi-modal video platforms combine various AI tools for tasks like writing, transcription, text-to-voice conversion, image-to-video generation, and lip-syncing. These platforms leverage open-source models like FLUX and LivePortrait, along with APIs from services such as ElevenLabs, Luma AI, and Gen-3.


AI Medical Imagery Model Offers Fast, Cost-Efficient Expert Analysis — from developer.nvidia.com/

 

8 Legal Tech Trends Transforming Practice in 2024 — from lawyer-monthly.com

Thanks to rapid advances in technology, the legal landscape is changing fast. In 2024, legal tech integration is becoming the lifeblood of any law firm or legal department that wishes to stay competitive.

Innovations ranging from AI-driven research tools to blockchain-enabled contracts are not just highlights of legal work today. Understanding and embracing these trends will be vital to surviving and thriving in law as the revolution gains momentum and the sands of legal practice continue to shift.

Below are the eight expected trends in legal tech defining the future legal practice.


Building your legal practice’s AI future: Understanding the actual technologies — from thomsonreuters.com
The implementation of a successful AI strategy for a law firm depends not only on having the right people, but also understanding the tech and how to make it work for the firm

While we’re not delving deep here into how generative artificial intelligence (GenAI) and large language models (LLMs) work, we will talk generally about different categories of tech and emerging GenAI functionalities that are specific for legal.


Ex-Microsoft engineers raise $25M for legal tech startup that uses AI to help lawyers analyze data — from geekwire.com by Taylor Soper

Supio, a Seattle startup founded in 2021 by longtime friends and former Microsoft engineers, raised a $25 million Series A investment to supercharge its software platform designed to help lawyers quickly sort, search, and organize case-related data.

Supio focuses on cases related to personal injury and mass tort plaintiff law (when many plaintiffs file a claim). It specializes in organizing unstructured data and letting lawyers use a chatbot to pull relevant information.

“Most lawyers are data-rich and time-starved, but Supio automates time-sapping manual processes and empowers them to identify critical information to prove and expedite their cases,” Supio CEO and co-founder Jerry Zhou said in a statement.


ILTACON 2024: Large law firms are moving carefully but always forward with their GenAI strategy — from thomsonreuters.com by Zach Warren

NASHVILLE — As the world approaches the two-year mark since the original introduction of OpenAI’s ChatGPT, law firms already have made in-roads into establishing generative artificial intelligence (GenAI) as a part of their firms. Whether for document and correspondence drafting, summarization of meetings and contracts, legal research, or for back-office capabilities, firms have been playing around with a number of use cases to see where the technology may fit into the future.


Thomson Reuters acquires pre-revenue legal LLM developer Safe Sign Technologies – Here’s why — from legaltechnology.com by Caroline Hill

Thomson Reuters announced (on August 21) it has made the somewhat unusual acquisition of UK pre-revenue startup Safe Sign Technologies (SST), which is developing legal-specific large language models (LLMs) and as of just eight months ago was operating in stealth mode.

There isn’t an awful lot of public information available about the company, but speaking to Legal IT Insider about the acquisition, Hron explained that SST is focused in part on deep learning research as it pertains to training large language models, and specifically legal large language models. The company as yet has no customers and has been focusing exclusively on developing the technology and the models.


Supio brings generative AI to personal injury cases — from techcrunch.com by Kyle Wiggers

Legal work is incredibly labor- and time-intensive, requiring piecing together cases from vast amounts of evidence. That’s driving some firms to pilot AI to streamline certain steps; according to a 2023 survey by the American Bar Association, 35% of law firms now use AI tools in their practice.

OpenAI-backed Harvey is among the big winners so far in the burgeoning AI legal tech space, alongside startups such as Leya and Klarity. But there’s room for one more, say Jerry Zhou and Kyle Lam, the co-founders of an AI platform for personal injury law called Supio, which emerged from stealth Tuesday with a $25 million investment led by Sapphire Ventures.

Supio uses generative AI to automate bulk data collection and aggregation for legal teams. In addition to summarizing info, the platform can organize and identify files — and snippets within files — that might be useful in outlining, drafting and presenting a case, Zhou said.


 

ILTACON 2024: Selling legal tech’s monorail — from abajournal.com by Nicole Black

The bottom line: The promise of GenAI for our profession is great, but all signs point to the realization of its potential being six months out or more. So the question remains: Will generative AI change the legal landscape, ushering in an era of frictionless, seamless legal work? Or have we reached the pinnacle of its development, left only with empty promises? I think it’s the former since there is so much potential, and many companies are investing significantly in AI development, but only time will tell.


From LegalZoom to AI-Powered Platforms: The Rise of Smart Legal Services — from tmcnet.com by Artem Vialykh

In today’s digital age, almost every industry is undergoing a transformation driven by technological innovation, and the legal field is no exception. Traditional legal services, often characterized by high fees, time-consuming processes, and complex paperwork, are increasingly being challenged by more accessible, efficient, and cost-effective alternatives.

LegalZoom, one of the pioneers in offering online legal services, revolutionized the way individuals and small businesses accessed legal assistance. However, with the advent of artificial intelligence (AI) and smart technologies, we are witnessing the rise of even more sophisticated platforms that are poised to reshape the legal landscape further.

The Rise of AI-Powered Legal Platforms
AI-powered legal platforms represent the next frontier in legal services. These platforms leverage the power of artificial intelligence, machine learning, and natural language processing to provide legal services that are not only more efficient but also more accurate and tailored to the needs of the user.

AI-powered platforms offer many advantages, one of them being their ability to rapidly process and analyze large amounts of data. This capability allows them to provide users with precise legal advice and document generation in a fraction of the time it would take a human attorney. For example, AI-driven platforms can review and analyze contracts, identify potential legal risks, and even suggest revisions, all in real time. This level of automation significantly reduces the time and cost associated with traditional legal services.


AI, Market Dynamics, and the Future of Legal Services with Harbor’s Zena Applebaum — from geeklawblog.com by Greg Lambert

Zena talks about the integration of generative AI (Gen AI) into legal research tools, particularly at Thomson Reuters, where she previously worked. She emphasizes the challenges in managing expectations around AI’s capabilities while ensuring that the products deliver on their promises. The legal industry has high expectations for AI to simplify the time-consuming and complex nature of legal research. However, Applebaum highlights the need for balance, as legal research remains inherently challenging, and overpromising on AI’s potential could lead to dissatisfaction among users.

Zena shares her outlook on the future of the legal industry, particularly the growing sophistication of in-house legal departments and the increasing competition for legal talent. She predicts that as AI continues to enhance efficiency and drive changes in the industry, the demand for skilled legal professionals will rise. Law firms will need to adapt to these shifts by embracing new technologies and rethinking their strategies to remain competitive in a rapidly evolving market.


Future of the Delivery of Legal Services — from americanbar.org
The legal profession is in the midst of unprecedented change. Learn what might be next for the industry and your bar.


What. Just. Happened? (Post-ILTACon Emails Week of 08-19-2024) — from geeklawblog.com by Greg Lambert

Here’s this week’s edition of What. Just. Happened? Remember, you can track these daily with the AI Lawyer Talking Tech podcast (Spotify or Apple) which covers legal tech news and summarizes stories.


From DSC:
And although this next one is not necessarily legaltech-related, I wanted to include it here anyway — as I’m always looking to reduce the costs of obtaining a degree.

Improve the Diversity of the Profession By Addressing the Costs of Becoming a Lawyer — from lssse.indiana.edu by Joan Howarth

Not surprisingly, then, research shows that economic assets are a significant factor in bar passage. And LSSSE research shows us the connections between the excessive expense of becoming a lawyer and the persistent racial and ethnic disparities in bar passage rate.

The racial and ethnic bar passage disparities are extreme. For example, the national ABA statistics for first time passers in 2023-24 show White candidates passing at 83%, compared to Black candidates (57%) with Asians and Hispanics in the middle (75% and 69%, respectively).

These disturbing figures are very related to the expense of becoming a lawyer.

Finally, though, after decades of stability — or stagnation — in attorney licensing, change is here. And some of the changes, such as the new pathway to licensure in Oregon based on supervised practice instead of a traditional bar exam, or the Nevada Plan in which most of the requirements can be satisfied during law school, should significantly decrease the costs of licensure and add flexibility for candidates with responsibilities beyond studying for a bar exam.  These reforms are long overdue.



 


Digital Writing Lab

About this Project

The Digital Writing Lab is a key component of the Australian national Teaching Digital Writing project, which runs from 2022-2025.

This stage of the broader project involves academic and secondary English teacher collaboration to explore how teachers are conceptualising the teaching of digital writing and what further supports they may need.

Previous stages of the project included archival research reviewing materials related to digital writing in Australia’s National Textbook Collection, and a national survey of secondary English teachers. You can find out more about the whole project via the project blog.

Who runs the project?

Project Lead Lucinda McKnight is an Associate Professor and Australian Research Council (ARC) DECRA Fellow researching how English teachers can connect the teaching of writing to contemporary media and students’ lifeworlds.

She is working with Leon Furze, who holds the doctoral scholarship attached to this project, and Chris Zomer, the project Research Fellow. The project is located in the Research for Educational Impact (REDI) centre at Deakin University, Melbourne.


Teaching Digital Writing is a research project about English today.

 

Learning Engineering: New Profession or Transformational Process? A Q&A with Ellen Wagner — from campustechnology.com by Mary Grush and Ellen Wagner

“Learning is one of the most personal things that people do; engineering provides problem-solving methods to enable learning at scale. How do we resolve this paradox?”

—Ellen Wagner

Wagner: Learning engineering offers us a process for figuring that out! If we think of learning engineering as a process that can transform research results into learning action, there will be evidence to guide that decision-making at each point in the value chain. I want to get people to think of learning engineering as a process for applying research in practice settings, rather than as a professional identity. And by that I mean that learning engineering is a bigger process than what any one person can do on their own.


From DSC:
Instructional Designers, Learning Experience Designers, Professors, Teachers, and Directors/Staff of Teaching & Learning Centers will be interested in this article. It made me think of the following graphic I created a while back:

We need to take more of the research from learning science and apply it in our learning spaces.

 

The Musician’s Rule and GenAI in Education — from opencontent.org by David Wiley

We have to provide instructors the support they need to leverage educational technologies like generative AI effectively in the service of learning. Given the amount of benefit that could accrue to students if powerful tools like generative AI were used effectively by instructors, it seems unethical not to provide instructors with professional development that helps them better understand how learning occurs and what effective teaching looks like. Without more training and support for instructors, the amount of student learning higher education will collectively “leave on the table” will only increase as generative AI gets more and more capable. And that’s a problem.

From DSC:
As is often the case, David put together a solid posting here. A few comments/reflections on it:

  • I agree that more training/professional development is needed, especially regarding generative AI. This would help achieve a far greater ROI and impact.
  • The pace of change makes it difficult to see where the sand is settling…and thus what to focus on.
  • The Teaching & Learning Groups out there are also trying to learn and grow in their knowledge (so that they can train others)
  • The administrators out there are also trying to figure out what all of this generative AI stuff is all about; and so are the faculty members. It takes time for educational technologies’ impact to roll out and be integrated into how people teach.
  • As we’re talking about multiple disciplines here, I think we need more team-based content creation and delivery.
  • There needs to be more research on how best to use AI — again, it would be helpful if the sand settled a bit first, so as not to waste time and $$. But then that research needs to be piped into the classrooms far better.

We need to take more of the research from learning science and apply it in our learning spaces.

 

How Humans Do (and Don’t) Learn — from drphilippahardman.substack.com by Dr. Philippa Hardman
One of the biggest ever reviews of human behaviour change has been published, with some eye-opening implications for how we design & deliver learning experiences

Excerpts (emphasis DSC):

This month, researchers from the University of Pennsylvania published one of the biggest ever reviews of behaviour change efforts – i.e. interventions which do (and don’t) lead to behavioural change in humans.

Research into human behaviour change suggests that, in order to impact capability in real, measurable terms, we need to rethink how we typically design and deliver training.

The interventions we use most frequently to drive behaviour change – such as video + quiz approaches and one-off workshops – have a negligible impact on measurable changes in human behaviour.

For learning professionals who want to change how their learners think and behave, this research shows conclusively the central importance of:

    1. Shifting attention away from the design of content to the design of context.
    2. Delivering sustained cycles of contextualised practice, support & feedback.

 

 

Introducing Perplexity Pages — from perplexity.ai
You’ve used Perplexity to search for answers, explore new topics, and expand your knowledge. Now, it’s time to share what you learned.

Meet Perplexity Pages, your new tool for easily transforming research into visually stunning, comprehensive content. Whether you’re crafting in-depth articles, detailed reports, or informative guides, Pages streamlines the process so you can focus on what matters most: sharing your knowledge with the world.

Seamless creation
Pages lets you effortlessly create, organize, and share information. Search any topic, and instantly receive a well-structured, beautifully formatted article. Publish your work to our growing library of user-generated content and share it directly with your audience with a single click.

A tool for everyone
Pages is designed to empower creators in any field to share knowledge.

  • Educators: Develop comprehensive study guides for your students, breaking down complex topics into easily digestible content.

  • Researchers: Create detailed reports on your findings, making your work more accessible to a wider audience.

  • Hobbyists: Share your passions by creating engaging guides that inspire others to explore new interests.

 

How to Make the Dream of Education Equity (or Most of It) a Reality — from nataliewexler.substack.com by Natalie Wexler
Studies on the effects of tutoring – by humans or computers – point to ways to improve regular classroom instruction.

One problem, of course, is that it’s prohibitively expensive to hire a tutor for every average or struggling student, or even one for every two or three of them. This was the two-sigma “problem” that Bloom alluded to in the title of his essay: how can the massive benefits of tutoring possibly be scaled up? Both Khan and Zuckerberg have argued that the answer is to have computers, maybe powered by artificial intelligence, serve as tutors instead of humans.

From DSC:
I’m hoping that AI-backed learning platforms WILL help many people of all ages and backgrounds. But I realize — and appreciate what Natalie is saying here as well — that human beings are needed in the learning process (especially at younger ages). 

But without the human element, that’s unlikely to be enough. Students are more likely to work hard to please a teacher than to please a computer.

Natalie goes on to talk about training all teachers in cognitive science — a solid idea for sure. That’s what I was trying to get at with this graphic:

We need to take more of the research from learning science and apply it in our learning spaces.

But I’m not as hopeful about all teachers getting trained in cognitive science…it should have happened (in the Schools of Education and in the K-12 learning ecosystem at large) by now. Perhaps it will happen, given enough time.

And with more homeschooling and blended programs of education occurring, that idea gets stretched even further. 

K-12 Hybrid Schooling Is in High Demand — from realcleareducation.com by Keri D. Ingraham (emphasis below from DSC); via GSV

Parents are looking for a different kind of education for their children. A 2024 poll of parents reveals that 72% are considering, 63% are searching for, and 44% have selected a new K-12 school option for their children over the past few years. So, what type of education are they seeking?

Additional polling data reveals that 49% of parents would prefer their child learn from home at least one day a week. While 10% want full-time homeschooling, the remaining 39% of parents want their child to learn at home one to four days a week, attending school on campus the remaining days. Another parent poll released this month found that an astonishing 64% of parents would enroll their child in a hybrid school if they were looking for a new school for their child.

 

GTC March 2024 Keynote with NVIDIA CEO Jensen Huang






 
© 2024 | Daniel Christian