Expanding Bard’s understanding of YouTube videos — via AI Valley

  • What: We’re taking the first steps in Bard’s ability to understand YouTube videos. For example, if you’re looking for videos on how to make olive oil cake, you can now also ask how many eggs the recipe in the first video requires.
  • Why: We’ve heard you want deeper engagement with YouTube videos. So we’re expanding the YouTube Extension to understand some video content so you can have a richer conversation with Bard about it.

Reshaping the tree: rebuilding organizations for AI — from oneusefulthing.org by Ethan Mollick
Technological change brings organizational change.

I am not sure who said it first, but there are only two ways to react to exponential change: too early or too late. Today’s AIs are flawed and limited in many ways. While that restricts what AI can do, the capabilities of AI are increasing exponentially, both in terms of the models themselves and the tools these models can use. It might seem too early to consider changing an organization to accommodate AI, but I think that there is a strong possibility that it will quickly become too late.

From DSC:
Readers of this blog have seen the following graphic for several years now, but there is no question that we are in a time of exponential change. It has only become harder to argue otherwise during that time.

Nvidia’s revenue triples as AI chip boom continues — from cnbc.com by Jordan Novet; via GSV

KEY POINTS

  • Nvidia’s results surpassed analysts’ projections for revenue and income in the fiscal third quarter.
  • Demand for Nvidia’s graphics processing units has been exceeding supply, thanks to the rise of generative artificial intelligence.
  • Nvidia announced the GH200 GPU during the quarter.

Here’s how the company did, compared to the consensus among analysts surveyed by LSEG, formerly known as Refinitiv:

  • Earnings: $4.02 per share, adjusted, vs. $3.37 per share expected
  • Revenue: $18.12 billion, vs. $16.18 billion expected

Nvidia’s revenue grew 206% year over year during the quarter ending Oct. 29, according to a statement. Net income, at $9.24 billion, or $3.71 per share, was up from $680 million, or 27 cents per share, in the same quarter a year ago.
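To make those growth figures concrete, here's a quick back-of-the-envelope check using only the numbers quoted above (note that the prior-year quarterly revenue is implied by the 206% growth rate rather than stated in the excerpt):

```python
# Sanity-check Nvidia's reported quarterly figures (from the excerpt above).
# All dollar amounts are in billions; the prior-year revenue is implied, not quoted.

revenue = 18.12          # reported quarterly revenue ($B)
revenue_growth = 2.06    # reported 206% year-over-year growth

# A 206% increase means revenue is 1 + 2.06 = 3.06x the prior-year quarter,
# implying a prior-year quarterly revenue of about $5.9B.
implied_prior_revenue = revenue / (1 + revenue_growth)
print(f"Implied prior-year revenue: ${implied_prior_revenue:.2f}B")  # ≈ $5.92B

# Net income grew even faster: $9.24B vs. $0.68B a year earlier.
net_income_growth = (9.24 / 0.68 - 1) * 100
print(f"Net income growth: {net_income_growth:.0f}%")  # ≈ 1259%
```

Net income growing roughly thirteen-fold while revenue tripled shows how much of the additional revenue fell straight to the bottom line.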



 

Thomson Reuters’ AI debut signals a new era of widespread AI integration in legaltech — from nydailyrecord.com by Nicole Black

It shouldn’t surprise you to learn that legal technology companies have joined the fray. Since early 2023, over one hundred of them have announced plans to incorporate generative AI functionality into their products. Although most of these products are still in beta, rest assured that regardless of the software platforms used in your firm, generative AI will soon be seamlessly integrated into the tools that are part of the daily workflows of your firm’s legal professionals.

Case in point: Wednesday’s generative AI announcements from Thomson Reuters offer strong evidence that we’re entering a new era of widespread AI integration. For Thomson Reuters’ legal customers, an integrated generative AI experience will soon be a reality, readily accessible across several different products. This newfound capability largely stems from CoCounsel, a generative AI legal assistant tool that Thomson Reuters gained through its $650 million acquisition of Casetext, completed in August.

Ironcrow AI’s LLM Sandbox: Setting an Industry Standard — from mccrus.com by McCoy Russell; with thanks to Mr. Justin Wagner out on LinkedIn for this resource

As an innovative firm, McCoy Russell has been at the forefront of patent law with its development and use of proprietary AI software via its software arm Ironcrow AI. Recently, Ironcrow has invested substantial efforts to create a specialized AI LLM Sandbox as a key tool for patent professionals.

Ironcrow is excited to announce a groundbreaking achievement in the field of AI/ML for Patent Law professionals – Ironcrow’s specialized AI LLM Sandbox has achieved a score above the 70% threshold required to pass the patent bar exam, using a test set of questions. While other researchers have developed tools to pass a state bar exam, none have attempted to pass the specialized patent bar exam administered by the USPTO.

This remarkable feat showcases the innovation of the Ironcrow and McCoy Russell partnership and the ability of the LLM Sandbox’s “Interrogate” feature to answer questions based on its knowledge of patent procedure. The Sandbox can provide well-cited answers, along with relevant excerpts from the MPEP and other sources, to its users. This unique feature sets Ironcrow AI’s LLM Sandbox apart from other systems in the market.

 

 

Amazon aims to provide free AI skills training to 2 million people by 2025 with its new ‘AI Ready’ commitment — from aboutamazon.com by Swami Sivasubramanian

Artificial intelligence (AI) is the most transformative technology of our generation. If we are going to unlock the full potential of AI to tackle the world’s most challenging problems, we need to make AI education accessible to anyone with a desire to learn.

That’s why Amazon is announcing “AI Ready,” a new commitment designed to provide free AI skills training to 2 million people globally by 2025. To achieve this goal, we’re launching new initiatives for adults and young learners, and scaling our existing free AI training programs—removing cost as a barrier to accessing these critical skills.

From DSC:
While this will likely serve Amazon just fine, it’s also an example of a corporation’s leadership seeking to help others out.

 

From DSC:
The recent drama over at OpenAI reminds me of how important a few individuals are in influencing the lives of millions of people.

The C-Suites (i.e., the Chief Executive Officers, Chief Financial Officers, Chief Operating Officers, and the like) of companies like OpenAI, Alphabet (Google), Meta (Facebook), Microsoft, Netflix, NVIDIA, Amazon, Apple, and a handful of others have enormous power. Why? Because of the enormous power and reach of the technologies that they create, market, and provide.

We need to be praying for the hearts of those in the C-Suites of these powerful vendors — as well as for their Boards.

LORD, grant them wisdom and help mold their hearts and perspectives so that they truly care about others. May their decisions not be based on making money alone…or doing something just because they can.

What happens in their hearts and minds DOES and WILL continue to impact the rest of us. And we’re talking about real ramifications here. This isn’t pie-in-the-sky thinking or ideas. This is for real. With real consequences. If you doubt that, go ask the families of those whose sons and daughters took their own lives due to what happened out on social media platforms. Disclosure: I use LinkedIn and Twitter quite a bit. I’m not bashing these platforms per se. But my point is that there are real impacts due to a variety of technologies. What goes on in the hearts and minds of the leaders of these tech companies matters.


Some relevant items:

Navigating Attention-Driving Algorithms, Capturing the Premium of Proximity for Virtual Teams, & New AI Devices — from implications.com by Scott Belsky

Excerpts (emphasis DSC):

No doubt, technology influences us in many ways we don’t fully understand. But one area where valid concerns run rampant is the attention-seeking algorithms powering the news and media we consume on modern platforms that efficiently polarize people. Perhaps we’ll call it The Law of Anger Expansion: When people are angry in the age of algorithms, they become MORE angry and LESS discriminate about who and what they are angry at.

Algorithms that optimize for grabbing attention, thanks to AI, ultimately drive polarization.

The AI learns quickly that a rational or “both sides” view is less likely to sustain your attention (so you won’t get many of those, which drives the sensation that more of the world agrees with you). But the rage-inducing stuff keeps us swiping.

Our feeds are being sourced in ways that dramatically change the content we’re exposed to.

And then these algorithms expand on these ultimately destructive emotions – “If you’re afraid of this, maybe you should also be afraid of this” or “If you hate those people, maybe you should also hate these people.”

How do we know when we’ve been polarized? This is the most important question of the day.

Whatever is inflaming you is likely an algorithm-driven expansion of anger and an imbalance of context.


 

 

OpenAI announces leadership transition — from openai.com
Chief technology officer Mira Murati appointed interim CEO to lead OpenAI; Sam Altman departs the company. Search process underway to identify permanent successor.

Excerpt (emphasis DSC):

The board of directors of OpenAI, Inc., the 501(c)(3) that acts as the overall governing body for all OpenAI activities, today announced that Sam Altman will depart as CEO and leave the board of directors. Mira Murati, the company’s chief technology officer, will serve as interim CEO, effective immediately.

Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.


As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.

From DSC:
I’m not here to pass judgment, but all of us on planet Earth should at least be concerned about this disturbing news.

AI is one of the most powerful sets of emerging technologies on the planet right now. OpenAI is arguably the most powerful vendor/innovator/influencer/leader in that space. And Sam Altman was the face of OpenAI — and arguably of AI itself. So this is a big deal.

What concerns me is what is NOT being relayed in this posting:

  • What was being hidden from OpenAI’s Board?
  • What else doesn’t the public know? 
  • Why is Greg Brockman stepping down as Chairman of the Board?

To whom much is given, much is expected.


Also related/see:

OpenAI CEO Sam Altman ousted, shocking AI world — from washingtonpost.com by Gerrit De Vynck and Nitasha Tiku
The artificial intelligence company’s directors said he was not ‘consistently candid in his communications with the board’

Altman’s sudden departure sent shock waves through the technology industry and the halls of government, where he had become a familiar presence in debates over the regulation of AI. His rise and apparent fall from tech’s top rung is one of the fastest in Silicon Valley history. In less than a year, he went from being Bay Area famous as a failed start-up founder who reinvented himself as a popular investor in small companies to becoming one of the most influential business leaders in the world. Journalists, politicians, tech investors and Fortune 500 CEOs alike had been clamoring for his attention.

OpenAI’s Board Pushes Out Sam Altman, Its High-Profile C.E.O. — from nytimes.com by Cade Metz

Sam Altman, the high-profile chief executive of OpenAI, who became the face of the tech industry’s artificial intelligence boom, was pushed out of the company by its board of directors, OpenAI said in a blog post on Friday afternoon.


From DSC:
Updates — I just saw these items

Sam Altman fired as CEO of OpenAI — from theverge.com by Jay Peters
In a sudden move, Altman is leaving after the company’s board determined that he ‘was not consistently candid in his communications.’ President and co-founder Greg Brockman has also quit.



 

OpenAI Is Slowly Killing Prompt Engineering With The Latest ChatGPT and DALL-E Updates — from artificialcorner.substack.com
ChatGPT and DALL-E 3 now do most of the prompting for us. Does this mean the end of prompt engineering?

Prompt engineering was a must-have skill for any AI enthusiast … at least until OpenAI released GPTs and DALL-E 3.

OpenAI doesn’t want to force users to learn prompt engineering to get the most out of its tools.

It seems OpenAI’s goal is to make its tools as easy to use as possible, allowing even non-tech people to create outstanding AI images and tailored versions of ChatGPT without learning prompting techniques or coding.

AI can now generate prompts for us, but is this enough to kill prompt engineering? To answer this, let’s see how good these AI-generated prompts are.

From DSC:
I agree with several others that prompt engineering will be drastically altered. For the majority of us, I wouldn’t spend a lot of time becoming a Prompt Engineer.




 

Be My Eyes AI offers GPT-4-powered support for blind Microsoft customers — from theverge.com by Sheena Vasani
The tech giant’s using Be My Eyes’ visual assistant tool to help blind users quickly resolve issues without a human agent.


From DSC:
Speaking of Microsoft and AI:

 

AI Pedagogy Project, metaLAB (at) Harvard
Creative and critical engagement with AI in education. A collection of assignments and materials inspired by the humanities, for educators curious about how AI affects their students and their syllabi

AI Guide
Focused on the essentials and written to be accessible to a newcomer, this interactive guide will give you the background you need to feel more confident with engaging conversations about AI in your classroom.


From #47 of SAIL: Sensemaking AI Learning — by George Siemens

Excerpt (emphasis DSC):

Welcome to Sensemaking, AI, and Learning (SAIL), a regular look at how AI is impacting education and learning.

Over the last year, after dozens of conferences, many webinars, panels, workshops, and many (many) conversations with colleagues, it’s starting to feel like higher education, as a system, is in an AI Groundhog Day loop. I haven’t heard anything novel generated by universities. We have a chatbot! Soon it will be a tutor! We have a generative AI faculty council! Here’s our list of links to sites that also have lists! We need AI literacy! My mantra over the last while has been that higher education leadership is failing us on AI in a more dramatic way than it failed us on digitization and online learning. What will your universities be buying from AI vendors in five years because they failed to develop a strategic vision and capabilities today?


AI + the Education System — from drphilippahardman.substack.com by Dr. Philippa Hardman
The key to relevance, value & excellence?


The magic school of the future is one that helps students learn to work together and care for each other — from stefanbauschard.substack.com by Stefan Bauschard
AI is going to alter economic and professional structures. Will we alter the educational structures?

(e) What is really required is a significant re-organization of schooling and curriculum. At a meta-level, the school system is focused on developing the type of intelligence I opened with, and the economic value of that is going to rapidly decline.

(f) This is all going to happen very quickly (faster than any previous change in history), and many people aren’t paying attention. AI is already here.


 

9 Tips for Using AI for Learning (and Fun!) — from edutopia.org by Daniel Leonard; via Donna Norton on X/Twitter
These innovative, AI-driven activities will help you engage students across grade levels and subject areas.

Here are nine AI-based lesson ideas to try across different grade levels and subject areas.

ELEMENTARY SCHOOL

AI-generated Animated Drawing of artwork

Courtesy of Meta AI Research
A child’s drawing (left) and animations created with Animated Drawings.


1. Bring Student Drawings to Life: Young kids love to sketch, and AI can animate their sketches—and introduce them to the power of the technology in the process.

HIGH SCHOOL

8. Speak With AI in a Foreign Language: When learning a new language, students might feel self-conscious about making mistakes and avoid practicing as much as they should.


Though not necessarily about education, also see:

How I Use AI for Productivity — from wondertools.substack.com by Jeremy Caplan
In this Wonder Tools audio post I share a dozen of my favorite AI tools

From DSC:
I like that Jeremy mentions the various tools he used in making this audio post:

 

Where a developing, new kind of learning ecosystem is likely headed [Christian]

From DSC:
As I’ve long stated on the Learning from the Living [Class]Room vision, we are heading toward a new AI-empowered learning platform — where humans play a critically important role in making this new learning ecosystem work.

Along these lines, I ran into this site out on X/Twitter. We’ll see how this unfolds, but it will be an interesting space to watch.

Project Chiron’s vision for education: “Every child will soon have a super-intelligent AI teacher by their side. We want to make sure they instill a love of learning in children.”


From DSC:
This future learning platform will also focus on developing skills and competencies. Along those lines, see:

Scale for Skills-First — from the-job.beehiiv.com by Paul Fain
An ed-tech giant’s ambitious moves into digital credentialing and learner records.

A Digital Canvas for Skills
Instructure was a player in the skills and credentials space before its recent acquisition of Parchment, a digital transcript company. But that $800M move made many observers wonder if Instructure can develop digital records of skills that learners, colleges, and employers might actually use broadly.

Ultimately, he says, the CLR approach will allow students to bring these various learning types into a coherent format for employers.

Instructure seeks a leadership role in working with other organizations to establish common standards for credentials and learner records, to help create consistency. The company collaborates closely with 1EdTech. And last month it helped launch the 1EdTech TrustEd Microcredential Coalition, which aims to increase quality and trust in digital credentials.

Paul also links to 1EdTech’s page regarding the Comprehensive Learner Record.

 

The new apprenticeships — from jordanfurlong.substack.com by Jordan Furlong
Several American states are rewriting the rules of lawyer licensure and bringing the US into line with a key element of lawyer formation worldwide: supervised practice.

Change comes so gradually and fitfully to the legal sector that when something truly revolutionary happens — an actual turning point with an identifiable real-world impact — we have to mark the occasion. One such revolution broke out in the United States last week, opening up fantastic new possibilities for Americans who want to become lawyers.

The Oregon Supreme Court approved a new licensure program that does not require passage of a traditional written bar exam. After graduating from law school, aspiring Oregon lawyers can complete 675 hours of paid legal work under the supervision of an experienced attorney, assembling a portfolio of legal work to be assessed by bar admission officials. Candidates must submit eight samples of legal writing, take the lead in at least two initial client interviews or client counseling sessions, and oversee two negotiations, among other requirements.

Jordan mentions what’s going on in several other states including:

  • Utah
  • Washington
  • Minnesota
  • Nevada
  • California
  • Massachusetts
  • South Dakota

From DSC:
The Bar Exam doesn’t have a good reputation for actually helping get someone ready to practice law. So this is huge news indeed! The U.S. needs more people/specialists at the legal table moving forward. The items Jordan relays in this posting are a huge step forward in making that a reality.


For other innovations within the legal realm, see:

LawSchoolAi — from youtube.com

Picture this: A world where anyone can unlock the doors to legal expertise, no matter their background or resources. Introducing Law School AI – the game-changing platform turning this vision into reality. Our mission? To make legal education accessible, affordable, and tailored to every learner’s unique style, by leveraging the power of artificial intelligence.

As a trailblazing edtech company, Law School AI fuses cutting-edge AI technology with modern pedagogical techniques to craft a personalized, immersive, and transformative learning experience. Our platform shatters boundaries, opening up equal opportunities for individuals from all walks of life to master the intricacies of law.

Embrace a new era of legal education with Law School AI, where the age-old law school experience is reimagined as a thrilling, engaging, and interactive odyssey. Welcome to the future of legal learning.

 

 

 


From GPTs (pt. 3) — from theneurondaily.com by Noah Edelman

BTW, here are a few GPTs worth checking out today:

  • ConvertAnything—convert images, audio, videos, PDFs, files, & more.
  • editGPT—edit any writing (like Grammarly inside ChatGPT).
  • Grimoire—a coding assistant that helps you build anything!

Some notes from Dan Fitzpatrick – The AI Educator:

Custom GPT Bots:

  • These could help with the creation of interactive learning assistants, aligned with curricula.
  • They can be easily created with natural language programming.
  • Important to note: users must have a paid ChatGPT Plus account.

Custom GPT Store:

  • Marketplace for sharing and accessing educational GPT tools created by other teachers.
  • A store could offer access to specialised tools for diverse learning needs.
  • A store could enhance teaching strategies when accessing proven, effective GPT applications.

From DSC:
I appreciate Dan’s potential menu of options for a child’s education:

Monday AM: Sports club
Monday PM: Synthesis Online School AI Tutor
Tuesday AM: Music Lesson
Tuesday PM: Synthesis Online School Group Work
Wednesday AM: Drama Rehearsal
Wednesday PM: Synthesis Online School AI Tutor
Thursday AM: Volunteer work
Thursday PM: Private study
Friday AM: Work experience
Friday PM: Work experience

Our daughter has special learning needs and this is very similar to what she is doing. 

Also, Dan has a couple of videos out here at Google for Education:



Tuesday’s AI Ten for Educators (November 14) — from stefanbauschard.substack.com by Stefan Bauschard
Ten AI developments for educators to be aware of

Two boxes. In my May Cottesmore presentation, I put up two boxes:

(a) Box 1 — How educators can use AI to do what they do now (lesson plans, quizzes, tests, vocabulary lists, etc.)

(b) Box 2 — How the education system needs to change because, in the near future (sort of already), everyone is going to have multiple AIs working with them all day, and the premium on intelligence, especially “knowledge-based” intelligence, is going to decline rapidly. It’s hard to think that significant changes in the education system won’t be needed to accommodate that change.

There is a lot of focus on preparing educators to work in Box 1, which is important, if for no other reason than that they can see the power of even the current but limited technologies, but the hard questions are starting to be about Box 2. I encourage you to start those conversations, as the “ed tech” companies already are, and they’ll be happy to provide the answers and the services if you don’t want to.

Practical suggestion: create two AI teams in your institution. Team 1 works on Box 1, and Team 2 works on Box 2.

 

The Beatles’ final song is now streaming thanks to AI — from theverge.com by Chris Welch
Machine learning helped Paul McCartney and Ringo Starr turn an old John Lennon demo into what’s likely the band’s last collaborative effort.


Scientists excited by AI tool that grades severity of rare cancer — from bbc.com by Fergus Walsh

Artificial intelligence is nearly twice as good at grading the aggressiveness of a rare form of cancer from scans as the current method, a study suggests.

By recognising details invisible to the naked eye, AI was 82% accurate, compared with 44% for lab analysis.
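The “nearly twice as good” framing can be sanity-checked directly from the two accuracy figures reported:

```python
# Compare the two accuracy figures reported in the article.
ai_accuracy = 0.82   # AI grading of tumour aggressiveness from scans
lab_accuracy = 0.44  # current lab analysis

ratio = ai_accuracy / lab_accuracy
print(f"The AI tool is {ratio:.2f}x as accurate as lab analysis")  # ≈ 1.86x
```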

Researchers from the Royal Marsden Hospital and Institute of Cancer Research say it could improve treatment and benefit thousands every year.

They are also excited by its potential for spotting other cancers early.


Microsoft unveils ‘LeMa’: A revolutionary AI learning method mirroring human problem solving — from venturebeat.com by Michael Nuñez

Researchers from Microsoft Research Asia, Peking University, and Xi’an Jiaotong University have developed a new technique to improve large language models’ (LLMs) ability to solve math problems by having them learn from their mistakes, akin to how humans learn.

The researchers have revealed a pioneering strategy, Learning from Mistakes (LeMa), which trains AI to correct its own mistakes, leading to enhanced reasoning abilities, according to a research paper published this week.

Also from Michael Nuñez at venturebeat.com, see:


GPTs for all, AzeemBot; conspiracy theorist AI; big tech vs. academia; reviving organs ++448 — from exponentialview.co by Azeem Azhar and Chantal Smith


Personalized A.I. Agents Are Here. Is the World Ready for Them? — from nytimes.com by Kevin Roose (behind a paywall)

You could think of the recent history of A.I. chatbots as having two distinct phases.

The first, which kicked off last year with the release of ChatGPT and continues to this day, consists mainly of chatbots capable of talking about things. Greek mythology, vegan recipes, Python scripts — you name the topic and ChatGPT and its ilk can generate some convincing (if occasionally generic or inaccurate) text about it.

That ability is impressive, and frequently useful, but it is really just a prelude to the second phase: artificial intelligence that can actually do things. Very soon, tech companies tell us, A.I. “agents” will be able to send emails and schedule meetings for us, book restaurant reservations and plane tickets, and handle complex tasks like “negotiate a raise with my boss” or “buy Christmas presents for all my family members.”


From DSC:
Very cool!


Nvidia Stock Jumps After Unveiling of Next Major AI Chip. It’s Bad News for Rivals. — from barrons.com

On Monday, Nvidia (ticker: NVDA) announced its new H200 Tensor Core GPU. The chip incorporates 141 gigabytes of memory and offers up to 60% to 90% performance improvements versus its current H100 model when used for inference, or generating answers from popular AI models.

From DSC:
The exponential curve seems to be continuing — 60% to 90% performance improvements is a huge boost in performance.

Also relevant/see:


The 5 Best GPTs for Work — from the AI Exchange

Custom GPTs are exploding, and we wanted to highlight our top 5 that we’ve seen so far:

 

MIT Technology Review — Big problems that demand bigger energy. — from technologyreview.com by various

Technology is all about solving big thorny problems. Yet one of the hardest things about solving hard problems is knowing where to focus our efforts. There are so many urgent issues facing the world. Where should we even begin? So we asked dozens of people to identify what problem at the intersection of technology and society that they think we should focus more of our energy on. We queried scientists, journalists, politicians, entrepreneurs, activists, and CEOs.

Some broad themes emerged: the climate crisis, global health, creating a just and equitable society, and AI all came up frequently. There were plenty of outliers, too, ranging from regulating social media to fighting corruption.

MIT Technology Review asked many people to weigh in on underserved issues at the intersection of technology and society.

 

How ChatGPT changed my approach to learning — from wondertools.substack.com by Jeremy Caplan and Frank Andrade
A guest contributor tutored himself with AI

Excerpt:

Frank: ChatGPT has changed how I learn and practice new things every day.

  • I use ChatGPT not only to fix my mistakes, but also to learn from them.
  • I use ChatGPT Voice to explore new topics, simulate job interviews, and practice foreign languages.
  • You can even use ChatGPT Vision to learn from images!

Here’s how to use AI to enhance your learning.

 
© 2025 | Daniel Christian