The Beatles’ final song is now streaming thanks to AI — from theverge.com by Chris Welch
Machine learning helped Paul McCartney and Ringo Starr turn an old John Lennon demo into what’s likely the band’s last collaborative effort.


Scientists excited by AI tool that grades severity of rare cancer — from bbc.com by Fergus Walsh

Artificial intelligence is nearly twice as good at grading the aggressiveness of a rare form of cancer from scans as the current method, a study suggests.

By recognising details invisible to the naked eye, AI was 82% accurate, compared with 44% for lab analysis.

Researchers from the Royal Marsden Hospital and Institute of Cancer Research say it could improve treatment and benefit thousands every year.

They are also excited by its potential for spotting other cancers early.


Microsoft unveils ‘LeMa’: A revolutionary AI learning method mirroring human problem solving — from venturebeat.com by Michael Nuñez

Researchers from Microsoft Research Asia, Peking University, and Xi’an Jiaotong University have developed a new technique to improve large language models’ (LLMs) ability to solve math problems by having them learn from their mistakes, akin to how humans learn.

The researchers have revealed a pioneering strategy, Learning from Mistakes (LeMa), which trains AI to correct its own mistakes, leading to enhanced reasoning abilities, according to a research paper published this week.
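As the article describes it, LeMa is a fine-tuning approach: collect a model’s incorrect chain-of-thought solutions, have a stronger “corrector” model explain and fix each mistake, and train on the resulting correction data. Below is a minimal sketch of how such mistake-correction data might be assembled; the helper functions (generate_solution, corrector_fix) are hypothetical placeholders, not part of any published LeMa codebase.

```python
# Sketch of LeMa-style mistake-correction data construction (illustrative only).
from dataclasses import dataclass

@dataclass
class CorrectionExample:
    problem: str
    wrong_solution: str        # the student model's flawed reasoning
    explanation: str           # what went wrong and why
    corrected_solution: str    # the corrector's fixed reasoning

def build_lema_dataset(problems, gold_answers, generate_solution, corrector_fix):
    """Keep only cases where the student model erred and the corrector
    reaches the known-correct final answer."""
    dataset = []
    for problem, gold in zip(problems, gold_answers):
        attempt = generate_solution(problem)          # student model's attempt
        if attempt.final_answer == gold:
            continue                                  # no mistake to learn from
        fix = corrector_fix(problem, attempt.text)    # stronger model critiques + corrects
        if fix.final_answer == gold:                  # filter out bad corrections
            dataset.append(CorrectionExample(
                problem=problem,
                wrong_solution=attempt.text,
                explanation=fix.explanation,
                corrected_solution=fix.text,
            ))
    return dataset
```

The resulting (problem, wrong solution, explanation, correction) tuples are then used as supervised fine-tuning data alongside ordinary chain-of-thought examples.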

Also from Michael Nuñez at venturebeat.com, see:


GPTs for all, AzeemBot; conspiracy theorist AI; big tech vs. academia; reviving organs ++448 — from exponentialview.co by Azeem Azhar and Chantal Smith


Personalized A.I. Agents Are Here. Is the World Ready for Them? — from nytimes.com by Kevin Roose (behind a paywall)

You could think of the recent history of A.I. chatbots as having two distinct phases.

The first, which kicked off last year with the release of ChatGPT and continues to this day, consists mainly of chatbots capable of talking about things. Greek mythology, vegan recipes, Python scripts — you name the topic and ChatGPT and its ilk can generate some convincing (if occasionally generic or inaccurate) text about it.

That ability is impressive, and frequently useful, but it is really just a prelude to the second phase: artificial intelligence that can actually do things. Very soon, tech companies tell us, A.I. “agents” will be able to send emails and schedule meetings for us, book restaurant reservations and plane tickets, and handle complex tasks like “negotiate a raise with my boss” or “buy Christmas presents for all my family members.”


From DSC:
Very cool!


Nvidia Stock Jumps After Unveiling of Next Major AI Chip. It’s Bad News for Rivals. — from barrons.com

On Monday, Nvidia (ticker: NVDA) announced its new H200 Tensor Core GPU. The chip incorporates 141 gigabytes of memory and offers 60% to 90% performance improvements versus the current H100 model when used for inference, or generating answers from popular AI models.

From DSC:
The exponential curve seems to be continuing — a 60% to 90% performance improvement is a huge boost.

Also relevant/see:


The 5 Best GPTs for Work — from the AI Exchange

Custom GPTs are exploding, and we wanted to highlight our top 5 that we’ve seen so far:

 

MIT Technology Review — Big problems that demand bigger energy. — from technologyreview.com by various

Technology is all about solving big thorny problems. Yet one of the hardest things about solving hard problems is knowing where to focus our efforts. There are so many urgent issues facing the world. Where should we even begin? So we asked dozens of people to identify what problem at the intersection of technology and society they think we should focus more of our energy on. We queried scientists, journalists, politicians, entrepreneurs, activists, and CEOs.

Some broad themes emerged: the climate crisis, global health, creating a just and equitable society, and AI all came up frequently. There were plenty of outliers, too, ranging from regulating social media to fighting corruption.

MIT Technology Review asked many people to weigh in on underserved issues at the intersection of technology and society.

 

How ChatGPT changed my approach to learning — from wondertools.substack.com by Jeremy Caplan and Frank Andrade
A guest contributor tutored himself with AI

Excerpt:

Frank: ChatGPT has changed how I learn and practice new things every day.

  • I use ChatGPT not only to fix my mistakes, but also to learn from them.
  • I use ChatGPT Voice to explore new topics, simulate job interviews, and practice foreign languages.
  • You can even use ChatGPT Vision to learn from images!

Here’s how to use AI to enhance your learning.

 

A future-facing minister, a young inventor and a shared vision: An AI tutor for every student — from news.microsoft.com by Chris Welsch

The Ministry of Education and Pativada see what has become known as the U.A.E. AI Tutor as a way to provide students with 24/7 assistance as well as help level the playing field for those families who cannot afford a private tutor. At the same time, the AI Tutor would be an aid to teachers, they say. “We see it as a tool that will support our teachers,” says Aljughaiman. “This is a supplement to classroom learning.”

If everything goes according to plan, every student in the United Arab Emirates’ school system will have a personal AI tutor – that fits in their pockets.

It’s a story that involves an element of coincidence, a forward-looking education minister and a tech team led by a chief executive officer who still lives at home with his parents.

In February 2023, the U.A.E.’s education minister, His Excellency Dr. Ahmad Belhoul Al Falasi, announced that the ministry was embracing AI technology and pursuing the idea of an AI tutor to help Emirati students succeed. And he also announced that the speech he presented had been written by ChatGPT. “We should not demonize AI,” he said at the time.



Fostering deep learning in humans and amplifying our intelligence in an AI World — from stefanbauschard.substack.com by Stefan Bauschard
A free 288-page report on advancements in AI and related technology, their effects on education, and our practical support for AI-amplified human deep learning

Six weeks ago, Dr. Sabba Quidwai and I stumbled upon an idea: to compare the deep learning revolution in computer science to the mostly lacking deep learning efforts in education (Mehta & Fine). I started writing, and as these things often go with me, I thought there were many other things that would be useful to think through and for educators to know, and we ended up with this 288-page report.

***

Here’s an abstract from that report:

This report looks at the growing gap between the attention paid to the development of intelligence in machines and humans. While computer scientists have made great strides in developing human intelligence capacities in machines using deep learning technologies, including the abilities of machines to learn on their own, a significant part of the education system has not kept up with developing the intelligence capabilities in people that will enable them to succeed in the 21st century. Instead of fully embracing pedagogical methods that place primary emphasis on promoting collaboration, critical thinking, communication, creativity, and self-learning through experiential, interdisciplinary approaches grounded in human deep learning and combined with current technologies, a substantial portion of the educational system continues to heavily rely on traditional instructional methods and goals. These methods and goals prioritize knowledge acquisition and organization, areas in which machines already perform substantially better than people.

Also from Stefan Bauschard, see:

  • Debating in the World of AI
    Performative assessment, learning to collaborate with humans and machines, and developing special human qualities

13 Nuggets of AI Wisdom for Higher Education Leaders — from jeppestricker.substack.com by Jeppe Klitgaard Stricker
Actionable AI Guidance for Higher Education Leaders

Incentivize faculty AI innovation with AI. 

Invest in people first, then technology. 

On teaching, learning, and assessment. AI has captured the attention of all institutional stakeholders. Capitalize on this attention to reimagine pedagogy and evaluation. Rethink lectures, examinations, and assignments to align with workforce needs. Consider incorporating Problem-Based Learning, building portfolios and proof of work, and conducting oral exams. And use AI to provide individualized support and assess real-world skills.

Actively engage students.


Some thoughts from George Siemens re: AI:

Sensemaking, AI, and Learning (SAIL), a regular look at how AI is impacting learning.

Our education system has a uni-dimensional focus: learning things. Of course, we say we care about developing the whole learner, but the metrics that matter (grades, transcripts), which underpin the education system, are largely focused on teaching students things that have long been Google-able but are now increasingly doable by AI. Developments in AI matter in ways that call into question large parts of what happens in our universities. This is not a statement that people don’t need to learn core concepts and skills. My point is that the fulcrum of learning has shifted. Knowing things will continue to matter less and less going forward as AI improves its capabilities. We’ll need to start intentionally developing broader and broader attributes of learners: metacognition, wellness, affect, social engagement, etc. Education will continue to shift toward human skills and away from primary assessment of knowledge gains disconnected from skills and practice and ways of being.


AI, the Next Chapter for College Librarians — from insidehighered.com by Lauren Coffey
Librarians have lived through the disruptions of fax machines, websites and Wikipedia, and now they are bracing to do it again as artificial intelligence tools go mainstream: “Maybe it’s our time to shine.”

A few months after ChatGPT launched last fall, faculty and students at Northwestern University had many questions about the building wave of new artificial intelligence tools. So they turned to a familiar source of help: the library.

“At the time it was seen as a research and citation problem, so that led them to us,” said Michelle Guittar, head of instruction and curriculum support at Northwestern University Libraries.

In response, Guittar, along with librarian Jeanette Moss, created a landing page in April, “Using AI Tools in Your Research.” At the time, the university itself had yet to put together a comprehensive resource page.


From Dr. Nick Jackson’s recent post on LinkedIn: 

Last night the Digitech team of junior and senior teachers from Scotch College Adelaide showcased their 2023 experiments, innovation, successes and failures with technology in education. Accompanied by Student digital leaders, we saw the following:

  • AI used for language learning, where avatars can help with accents
  • Motion-capture suits being used in media studies
  • AI used in assessment and automatic grading of work
  • AR used in design technology
  • VR used for immersive Junior school experiences
  • A teacher’s AI toolkit that has changed teaching practice and workflow
  • AR and the EyeJack app used by students to create dynamic artwork
  • VR use in careers education in Senior school
  • How ethics around AI is taught to Junior school students from Year 1
  • Experiments with MyStudyWorks

Almost an Agent: What GPTs can do — from oneusefulthing.org by Ethan Mollick

What would a real AI agent look like? A simple agent that writes academic papers would, after being given a dataset and a field of study, read about how to compose a good paper, analyze the data, conduct a literature review, generate hypotheses, test them, and then write up the results, all without intervention. You put in a request, you get a Word document that contains a draft of an academic paper.

A process kind of like this one:


What I Learned From an Experiment to Apply Generative AI to My Data Course — from edsurge.com by Wendy Castillo

As an educator, I have a duty to remain informed about the latest developments in generative AI, not only to ensure learning is happening, but to stay on top of what tools exist, what benefits and limitations they have, and most importantly, how students might be using them.

However, it’s also important to acknowledge that the quality of work produced by students now requires higher expectations and potential adjustments to grading practices. The baseline is no longer zero; it is AI. And the upper limit of what humans can achieve with these new capabilities remains an unknown frontier.


Artificial Intelligence in Higher Education: Trick or Treat? — from tytonpartners.com by Kristen Fox and Catherine Shaw

Two components of AI: generative AI and predictive AI

 

What happens to teaching after Covid? — from chronicle.com by Beth McMurtrie

It’s an era many instructors would like to put behind them: black boxes on Zoom screens, muffled discussions behind masks, students struggling to stay engaged. But how much more challenging would teaching during the pandemic have been if colleges did not have experts on staff to help with the transition? On many campuses, teaching-center directors, instructional designers, educational technologists, and others worked alongside professors to explore learning-management systems, master video technology, and rethink what and how they teach.

A new book out this month, Higher Education Beyond Covid: New Teaching Paradigms and Promise, explores this period through the stories of campus teaching and learning centers. Their experiences reflect successes and failures, and what higher education could learn as it plans for the future.

Beth also mentioned/linked to:


How to hold difficult discussions online — from chronicle.com by Beckie Supiano

As usual, our readers were full of suggestions. Kathryn Schild, the lead instructional designer in faculty development and instructional support at the University of Alaska at Anchorage, shared a guide she’s compiled on holding asynchronous discussions, which includes a section on difficult topics.

In an email, Schild also pulled out a few ideas she thought were particularly relevant to Le’s question, including:

  • Set the ground rules as a class. One way to do this is to share your draft rules in a collaborative document and ask students to annotate it and add suggestions.
  • Plan to hold fewer difficult discussions than in a face-to-face class, and work on quality over quantity. This could include multiweek discussions, where you spiral through the same issue with fresh perspectives as the class learns new approaches.
  • Start with relationship-building interactions in the first few weeks, such as introductions, low-stakes group assignments, or peer feedback.
 

New models and developer products announced at DevDay — from openai.com
GPT-4 Turbo with 128K context and lower prices, the new Assistants API, GPT-4 Turbo with Vision, DALL·E 3 API, and more.

Today, we shared dozens of new additions and improvements, and reduced pricing across many parts of our platform. These include:

  • New GPT-4 Turbo model that is more capable, cheaper and supports a 128K context window
  • New Assistants API that makes it easier for developers to build their own assistive AI apps that have goals and can call models and tools
  • New multimodal capabilities in the platform, including vision, image creation (DALL·E 3), and text-to-speech (TTS)
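For developers, the new Assistants API looks like this in practice. The snippet below follows the quickstart pattern OpenAI published at DevDay (openai Python library v1.x); the assistant’s name, instructions, and the example question are illustrative only.

```python
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Create an assistant with a goal, instructions, and a built-in tool.
assistant = client.beta.assistants.create(
    name="Math Tutor",
    instructions="You are a personal math tutor. Write and run code to answer math questions.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview",  # GPT-4 Turbo with the 128K context window
)

# Conversations live in threads; add a user message, then run the assistant.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="I need to solve the equation 3x + 11 = 14. Can you help me?",
)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)

# Runs execute asynchronously; poll until complete, then read the newest reply.
while run.status not in ("completed", "failed"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)
```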


Introducing GPTs — from openai.com
You can now create custom versions of ChatGPT that combine instructions, extra knowledge, and any combination of skills.




OpenAI’s New Groundbreaking Update — from newsletter.thedailybite.co
Everything you need to know about OpenAI’s update, what people are building, and a prompt to skim long YouTube videos…

But among all this exciting news, the announcement of user-created “GPTs” took the cake.

That’s right, your very own personalized version of ChatGPT is coming, and it’s as groundbreaking as it sounds.

OpenAI’s groundbreaking announcement isn’t just a new feature – it’s a personal AI revolution. 

The upcoming customizable “GPTs” transform ChatGPT from a one-size-fits-all tool into a one-of-a-kind digital sidekick that is attuned to your life’s rhythm. 


Lore Issue #56: Biggest Week in AI This Year — from news.lore.com by Nathan Lands

First, Elon Musk announced “Grok,” a ChatGPT competitor inspired by “The Hitchhiker’s Guide to the Galaxy.” Surprisingly, in just a few months, xAI has managed to surpass the capabilities of GPT-3.5, signaling their impressive speed of execution and establishing them as a formidable long-term contender.

Then, OpenAI hosted their inaugural Dev Day, unveiling “GPT-4 Turbo,” which boasts a 128k context window, API costs slashed threefold, text-to-speech capabilities, auto-model switching, agents, and even their version of an app store slated for launch next month.


The Day That Changed Everything — from joinsuperhuman.ai by Zain Kahn
ALSO: Everything you need to know about yesterday’s OpenAI announcements

  • OpenAI DevDay Part I: Custom ChatGPTs and the App Store of AI
  • OpenAI DevDay Part II: GPT-4 Turbo, Assistants, APIs, and more

OpenAI’s Big Reveal: Custom GPTs, GPT Store & More — from  news.theaiexchange.com
What you should know about the new announcements; how to get started with building custom GPTs


Incredible pace of OpenAI — from theaivalley.com by Barsee
PLUS: Elon’s Grok


 

 


Teaching writing in the age of AI — from the Future of Learning (a Hechinger Report newsletter) by Javeria Salman

ChatGPT can produce a perfectly serviceable writing “product,” she said. But writing isn’t a product per se — it’s a tool for thinking, for organizing ideas, she said.

“ChatGPT and other text-based tools can’t think for us,” she said. “There’s still things to learn when it comes to writing because writing is a form of figuring out what you think.”

When students could contrast their own writing to ChatGPT’s more generic version, Levine said, they were able to “understand what their own voice is and what it does.”




Grammarly’s new generative AI feature learns your style — and applies it to any text — from techcrunch.com by Kyle Wiggers; via Tom Barrett

But what about text? Should — and if so, how should — writers be recognized and remunerated for AI-generated works that mimic their voices?

Those are questions that are likely to be raised by a feature in Grammarly, the cloud-based typing assistant, that’s scheduled to launch by the end of the year for subscribers to Grammarly’s business tier. Called “Personalized voice detection and application,” the feature automatically detects a person’s unique writing style and creates a “voice profile” that can rewrite any text in the person’s style.


Is AI Quietly Weaving the Fabric of a Global Classroom Renaissance? — from medium.com by Robert the Robot
In a world constantly buzzing with innovation, a silent revolution is unfolding within the sanctuaries of learning—our classrooms.

From bustling metropolises to serene hamlets, schools across the globe are greeting a new companion—Artificial Intelligence (AI). This companion promises to redefine the essence of education, making learning a journey tailored to each child’s unique abilities.

The advent of AI in education is akin to a gentle breeze, subtly transforming the academic landscape. Picture a classroom where each child, with their distinct capabilities and pace, embarks on a personalized learning path. AI morphs this vision into reality, crafting a personalized educational landscape that celebrates the unique potential harbored within every learner.


AI Books for Educators — from aiadvisoryboards.wordpress.com by Barbara Anna Zielonka

Books have always held a special place in my heart. As an avid reader and AI enthusiast, I have curated a list of books on artificial intelligence specifically tailored for educators. These books delve into the realms of AI, exploring its applications, ethical considerations, and its impact on education. Share your suggestions and let me know which books you would like to see included on this list.


SAIL: ELAI recordings, AI Safety, Near term AI/learning — by George Siemens

We held our fourth online Empowering Learners for the Age of AI conference last week. We sold out at 1500 people (a Whova and budget limit). The recordings/playlist from the conference can now be accessed here.

 

60+ Ideas for ChatGPT Assignments — from stars.library.ucf.edu by Kevin Yee, Kirby Whittington, Erin Doggette, and Laurie Uttich

60+ ideas for using ChatGPT in your assignments today


Artificial intelligence is disrupting higher education — from itweb.co.za by Rennie Naidoo; via GSV
Traditional contact universities need to adapt faster and find creative ways of exploring and exploiting AI, or lose their dominant position.

Higher education professionals have a responsibility to shape AI as a force for good.


Introducing Canva’s biggest education launch — from canva.com
We’re thrilled to unveil our biggest education product launch ever. Today, we’re introducing a whole new suite of products that turn Canva into the all-in-one classroom tool educators have been waiting for.

Also see Canva for Education.
Create and personalize lesson plans, infographics, posters, video, and more. 100% free for teachers and students at eligible schools.


ChatGPT and generative AI: 25 applications to support student engagement — from timeshighereducation.com by Seb Dianati and Suman Laudari
In the fourth part of their series looking at 100 ways to use ChatGPT in higher education, Seb Dianati and Suman Laudari share 25 prompts for the AI tool to boost student engagement


There are two ways to use ChatGPT — from theneurondaily.com

  1. Type to it.
  2. Talk to it (new).


Since then, we’ve looked to it for a variety of real-world business advice. For example, Prof Ethan Mollick posted a great guide using ChatGPT-4 with voice as a negotiation instructor.

In a similar fashion, you can consult ChatGPT with voice for feedback on:

  • Job interviews.
  • Team meetings.
  • Business presentations.



Via The Rundown: Google is using AI to analyze the company’s Maps data and suggest adjustments to traffic light timing — aiming to cut driver waits, stops, and emissions.


Google Pixel’s face-altering photo tool sparks AI manipulation debate — from bbc.com by Darren Waters

The camera never lies. Except, of course, it does – and seemingly more often with each passing day.

In the age of the smartphone, digital edits on the fly to improve photos have become commonplace, from boosting colours to tweaking light levels.

Now, a new breed of smartphone tools powered by artificial intelligence (AI) is adding to the debate about what it means to photograph reality.

Google’s latest smartphones released last week, the Pixel 8 and Pixel 8 Pro, go a step further than devices from other companies. They are using AI to help alter people’s expressions in photographs.



From Digital Native to AI-Empowered: Learning in the Age of Artificial Intelligence — from campustechnology.com by Kim Round
The upcoming generation of learners will enter higher education empowered by AI. How can institutions best serve these learners and prepare them for the workplace of the future?

Dr. Chris Dede, of Harvard University and Co-PI of the National AI Institute for Adult Learning and Online Education, spoke about the differences between knowledge and wisdom in AI-human interactions in a keynote address at the 2022 Empowering Learners for the Age of AI conference. He drew a parallel between Star Trek: The Next Generation characters Data and Picard during complex problem-solving: While Data offers the knowledge and information, Captain Picard offers the wisdom and context from a leadership mantle, and determines its relevance, timing, and application.


The Near-term Impact of Generative AI on Education, in One Sentence — from opencontent.org by David Wiley

This “decreasing obstacles” framing turned out to be helpful in thinking about generative AI. When the time came, my answer to the panel question, “how would you summarize the impact generative AI is going to have on education?” was this:

“Generative AI greatly reduces the degree to which access to expertise is an obstacle to education.”

We haven’t even started to unpack the implications of this notion yet, but hopefully just naming it will give the conversation focus, give people something to disagree with, and help the conversation progress more quickly.


How to Make an AI-Generated Film — from heatherbcooper.substack.com by Heather Cooper
Plus, Midjourney finally has a new upscale tool!


Eureka! NVIDIA Research Breakthrough Puts New Spin on Robot Learning — from blogs.nvidia.com by Angie Lee
AI agent uses LLMs to automatically generate reward algorithms to train robots to accomplish complex tasks.
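Per Nvidia’s description, Eureka pairs an LLM (GPT-4) with GPU-accelerated simulation: the LLM writes candidate reward functions, each candidate is scored by training a policy in simulation, and summarized training results are fed back so the LLM can refine its next proposals. Here is a schematic sketch of that outer loop; every helper name (llm_propose_rewards, train_policy_in_sim) is a hypothetical placeholder, not Eureka’s actual code.

```python
# Schematic sketch of an Eureka-style reward-evolution loop (illustrative only).
def evolve_reward(task_description, llm_propose_rewards, train_policy_in_sim,
                  rounds=5, candidates_per_round=8):
    best_code, best_score, best_stats = None, float("-inf"), None
    feedback = ""  # textual summary of the previous round's training results
    for _ in range(rounds):
        # Ask the LLM for several reward-function candidates, conditioned on
        # the task description and feedback from the last round.
        proposals = llm_propose_rewards(task_description, feedback,
                                        n=candidates_per_round)
        for reward_code in proposals:
            score, stats = train_policy_in_sim(reward_code)  # one RL training run
            if score > best_score:
                best_code, best_score, best_stats = reward_code, score, stats
        # Summarize how the best candidate trained so the LLM can refine it.
        feedback = f"Best score so far: {best_score:.3f}; training stats: {best_stats}"
    return best_code, best_score
```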

From DSC:
I’m not excited about this, as I can’t help but wonder…how long before the militaries of the world introduce this into their warfare schemes and strategies?


The 93 Questions Schools Should Ask About AI — from edweek.org by Alyson Klein

The toolkit recommends schools consider:

  • Purpose: How can AI help achieve educational goals?
  • Compliance: How does AI fit with existing policies?
  • Knowledge: How can schools advance AI Literacy?
  • Balance: What are the benefits and risks of AI?
  • Integrity: How does AI fit into policies on things like cheating?
  • Agency: How can humans stay in the loop on AI?
  • Evaluation: How can schools regularly assess the impact of AI?
 
 

The Game-Changer: How Legal Technology is Transforming the Legal Sector — from todaysconveyancer.co.uk by Perfect Portal

Rob Lawson, Strategic Sales Manager at Perfect Portal discussed why he thinks legal technology is so important:

“I spent almost 20 years in private practice and was often frustrated at the antiquated technology and processes that were deployed. It is one of the reasons that I love working in legal tech to provide solutions and streamline processes in the modern law firm. One of the major grumbles for practitioners is the amount of admin that they must do to fulfil the needs of their clients. Technology can automate routine tasks, streamline processes, and help manage large volumes of data more effectively. This then allows legal professionals to focus on more strategic aspects of their work. Ultimately this will increase efficiency and productivity.”



Some gen AI vendors say they’ll defend customers from IP lawsuits. Others, not so much. — from techcrunch.com by Kyle Wiggers

A person using generative AI — models that generate text, images, music and more given a prompt — could infringe on someone else’s copyright through no fault of their own. But who’s on the hook for the legal fees and damages if — or rather, when — that happens?

It depends.

In the fast-changing landscape of generative AI, companies monetizing the tech — from startups to big tech companies like Google, Amazon and Microsoft — are approaching IP risks from very different angles.


Clio Goes All Out with Major Product Announcements, Including A Personal Injury Add-On, E-Filing, and (Of Course) Generative AI — from lawnext.com by Bob Ambrogi

At its annual Clio Cloud Conference in Nashville today, the law practice management company Clio introduced an array of major new products and product updates, calling the series of announcements its most expansive product update ever in its 15-year history.


AI will invert the biglaw pyramid — from alexofftherecord.com by Cece Xie

These tasks that GPT can now handle are, coincidentally, common tasks for junior associates. From company and transaction summaries to legal research and drafting memos, analyzing and drafting have long been the purview of bright-eyed, bushy-tailed new law grads.

If we follow the capitalistic impulse of biglaw firms to its logical conclusion, this means that junior associates may soon face obsolescence. Why spend an hour figuring out how to explain an assignment to a first-year associate when you can just ask CoCounsel in five minutes? And the initial output will likely be better than a first-year’s initial work product, too.

Given the immense cost-savings that legal GPT products can confer, I suspect the rise of AI in legal tech will coincide with smaller junior associate classes. Gone are the days of 50+ junior lawyers all working on the same document review or due diligence. Instead, a fraction of those junior lawyers will be hired to oversee and QC the AI’s outputs. Junior associates will edit more than they do currently and manage more than they do right now. Juniors will effectively be more like midlevels from the get-go.


Beyond Law Firms: How Legal Tech’s Real Frontier Lies With SMBs (small and medium-sized businesses) — from forbes.com by Charles Brecque

Data and artificial intelligence are transforming the legal technology space—there’s no doubt about it. A recent Thomson Reuters Institute survey of lawyers showed that a large majority (82%) of respondents believe ChatGPT and generative AI can be readily applied to legal work.

While it’s tempting to think of legal tech as a playground exclusive to law firms, as technology enables employees without legal training to use and create legal frameworks and documentation, I’d like to challenge that narrative. Being the founder of a company that uses AI to manage contracts, the way I see it is the real magic happens when legal tech tools meet the day-to-day challenges of small and medium-sized businesses (known as “SMBs”).

 

WHAT WAS GARY MARCUS THINKING, IN THAT INTERVIEW WITH GEOFF HINTON? — from linkedin.com by Stephen Downes

Background (emphasis DSC): 60 Minutes did an interview with ‘the Godfather of AI’, Geoffrey Hinton. In response, Gary Marcus wrote a column in which he inserted his own set of responses into the transcript, as though he were a panel participant. Neat idea. So, of course, I’m stealing it, and in what follows, I insert my own comments as I join the 60 Minutes panel with Geoffrey Hinton and Gary Marcus.

Usually I put everyone else’s text in italics, but for this post I’ll put it all in normal font, to keep the format consistent.

Godfather of Artificial Intelligence Geoffrey Hinton on the promise, risks of advanced AI


OpenAI’s Revenue Skyrockets to $1.3 Billion Annualized Rate — from maginative.com by Chris McKay
This means the company is generating over $100 million per month—a 30% increase from just this past summer.

OpenAI, the company behind the viral conversational AI ChatGPT, is experiencing explosive revenue growth. The Information reports that CEO Sam Altman told the staff this week that OpenAI’s revenue is now crossing $1.3 billion on an annualized basis.

Since the launch of a paid version of ChatGPT in February, OpenAI’s financial growth has been nothing short of meteoric. Additionally, in August, the company announced the launch of ChatGPT Enterprise, a commercial version of its popular conversational AI chatbot aimed at business users.

For comparison, OpenAI’s total revenue for all of 2022 was just $28 million. The launch of ChatGPT has turbocharged OpenAI’s business, positioning it as a bellwether for demand for generative AI.



From 10/13:


New ways to get inspired with generative AI in Search — from blog.google
We’re testing new ways to get more done right from Search, like the ability to generate imagery with AI or creating the first draft of something you need to write.

 

The Misunderstanding About Education That Cost Mark Zuckerberg $100 Million — from danmeyer.substack.com by Dan Meyer
Personalized learning can feel isolating. Whole class learning can feel personal. This is hard to understand.

Excerpt (emphasis DSC):

Last week, Matt Barnum reported in Chalkbeat that the Chan Zuckerberg Initiative is laying off dozens of staff members and pivoting away from the personalized learning platform they have funded since 2015 with somewhere near $100M.

I have tried to illustrate as often as my subscribers will tolerate that students don’t particularly enjoy learning alone with laptops within social spaces like classrooms. That learning fails to answer their questions about their social identity. It contributes to their feelings of alienation and disbelonging. I find this case easy to make but hard to prove. Maybe we just haven’t done personalized learning right? Maybe Summit just needed to include generative AI chatbots in their platform?

What is far easier to prove, or rather to disprove, is the idea that “whole class instruction must feel impersonal to students,” that “whole class instruction must necessarily fail to meet the needs of individual students.”

From DSC:
I appreciate Dan’s comments here (as highlighted above) as they are helpful in my thoughts regarding the Learning from the Living [Class] Room vision. They seem to be echoed here by Jeppe Klitgaard Stricker when he says:

Personalized learning paths can be great, but they also entail a potential abolishment or unintended dissolution of learning communities and belonging.

Perhaps this powerful, global, Artificial Intelligence (AI)-backed, next-generation, lifelong learning platform of the future will be more focused on postsecondary students and experiences — but not so much for the K12 learning ecosystem.

But the school systems I’ve seen here in Michigan (USA) address the majority of the class only. These one-size-fits-all systems don’t work for many students who need extra help or who are gifted. The trains move fast. Good luck if you can’t keep up with the pace.

But if K-12’ers are involved in a future learning platform, the platform needs to address what Dan’s saying. It must address students’ questions about their social identity and not contribute to their feelings of alienation and disbelonging. It needs to support communities of practice and learning communities.

 

Thinking with Colleagues: AI in Education — from campustechnology.com by Mary Grush
A Q&A with Ellen Wagner

Wagner herself recently relied on the power of collegial conversations to probe the question: What’s on the minds of educators as they make ready for the growing influence of AI in higher education? CT asked her for some takeaways from the process.

We are in the very early days of seeing how AI is going to affect education. Some of us are going to need to stay focused on the basic research to test hypotheses. Others are going to dive into laboratory “sandboxes” to see if we can build some new applications and tools for ourselves. Still others will continue to scan newsletters like ProductHunt every day to see what kinds of things people are working on. It’s going to be hard to keep up, to filter out the noise on our own. That’s one reason why thinking with colleagues is so very important.

Mary and Ellen linked to “What Is Top of Mind for Higher Education Leaders about AI?” — from northcoasteduvisory.com. Below are some excerpts from those notes:

We are interested in how K-12 education will change in terms of foundational learning. With in-class, active learning designs, will younger students do a lot more intensive building of foundational writing and critical thinking skills before they get to college?

  1. The Human in the Loop: AI is built using math: think of applied statistics on steroids. Humans will be needed more than ever to manage, review and evaluate the validity and reliability of results. Curation will be essential.
  2. We will need to generate ideas about how to address AI factors such as privacy, equity, bias, copyright, intellectual property, accessibility, and scalability.
  3. Have other institutions experimented with AI detection and/or held off on emerging tools related to this? We have just recently adjusted guidance and paused some tools in this area, given the massive inaccuracies in detection (and related downstream issues in faculty-elevated conduct cases).

Even though we learn repeatedly that innovation has a lot to do with effective project management and a solid message that helps people understand what they can do to implement change, people really need innovation to be more exciting and visionary than that.  This is the place where we all need to help each other stay the course of change. 


Along these lines, also see:


What people ask me most. Also, some answers. — from oneusefulthing.org by Ethan Mollick
A FAQ of sorts

I have been talking to a lot of people about Generative AI, from teachers to business executives to artists to people actually building LLMs. In these conversations, a few key questions and themes keep coming up over and over again. Many of those questions are more informed by viral news articles about AI than about the real thing, so I thought I would try to answer a few of the most common, to the best of my ability.

I can’t blame people for asking because, for whatever reason, the companies actually building and releasing Large Language Models often seem allergic to providing any sort of documentation or tutorial besides technical notes. I was given much better documentation for the generic garden hose I bought on Amazon than for the immensely powerful AI tools being released by the world’s largest companies. So, it is no surprise that rumor has been the way that people learn about AI capabilities.

Currently, there are only really three AIs to consider: (1) OpenAI’s GPT-4 (which you can get access to with a Plus subscription or via Microsoft Bing in creative mode, for free), (2) Google’s Bard (free), or (3) Anthropic’s Claude 2 (free, but paid mode gets you faster access). As of today, GPT-4 is the clear leader, Claude 2 is second best (but can handle longer documents), and Google trails, but that will likely change very soon when Google updates its model, which is rumored to be happening in the near future.

 

Mark Zuckerberg: First Interview in the Metaverse | Lex Fridman Podcast #398


Photo-realistic avatars show future of Metaverse communication — from inavateonthenet.net

Mark Zuckerberg, CEO, Meta, took part in the first-ever Metaverse interview using photo-realistic virtual avatars, demonstrating the Metaverse’s capability for virtual communication.

Zuckerberg appeared on the Lex Fridman podcast, using scans of both Fridman and Zuckerberg to create realistic avatars instead of a live video feed. Computer models of the avatars’ faces and bodies are put into a codec, with each headset sending an encoded version of the avatar.

The interview explored the future of AI in the metaverse, as well as the Quest 3 headset and the future of humanity.


 

180 Degree Turn: NYC District Goes From Banning ChatGPT to Exploring AI’s Potential — from edweek.org by Alyson Klein (behind paywall)

New York City Public Schools will launch an Artificial Intelligence Policy Lab to guide the nation’s largest school district’s approach to this rapidly evolving technology.


The Leader’s Blindspot: How to Prepare for the Real Future — from preview.mailerlite.io by the AIEducator
The Commonly Held Belief: AI Will Automate Only Boring, Repetitive Tasks First

The Days of Task-Based Views on AI Are Numbered
The winds of change are sweeping across the educational landscape (emphasis DSC):

  1. Multifaceted AI: AI technologies are not one-trick ponies; they are evolving into complex systems that can handle a variety of tasks.
  2. Rising Expectations: As technology becomes integral to our lives, the expectations for personalised, efficient education are soaring.
  3. Skill Transformation: Future job markets will demand a different skill set, one that is symbiotic with AI capabilities.

Teaching: How to help students better understand generative AI — from chronicle.com by Beth McMurtrie
Beth describes ways professors have used ChatGPT to bolster critical thinking in writing-intensive courses

Kevin McCullen, an associate professor of computer science at the State University of New York at Plattsburgh, teaches a freshman seminar about AI and robotics. As part of the course, students read Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots, by John Markoff. McCullen had the students work in groups to outline and summarize the first three chapters. Then he showed them what ChatGPT had produced in an outline.

“Their version and ChatGPT’s version seemed to be from two different books,” McCullen wrote. “ChatGPT’s version was essentially a ‘laundry list’ of events. Their version was narratives of what they found interesting. The students had focused on what the story was telling them, while ChatGPT focused on who did what in what year.” The chatbot also introduced false information, such as wrong chapter names.

The students, he wrote, found the writing “soulless.”


7 Questions with Dr. Cristi Ford, VP of Academic Affairs at D2L — from campustechnology.com by Rhea Kelly

In the Wild West of generative AI, educators and institutions are working out how best to use the technology for learning. How can institutions define AI guidelines that allow for experimentation while providing students with consistent guidance on appropriate use of AI tools?

To find out, we spoke with Dr. Cristi Ford, vice president of academic affairs at D2L. With more than two decades of educational experience in nonprofit, higher education, and K-12 institutions, Ford works with D2L’s institutional partners to elevate best practices in teaching, learning, and student support. Here, she shares her advice on setting and communicating AI policies that are consistent and future-ready.


AI Platform Built by Teachers, for Teachers, Class Companion Raises $4 Million to Tap Into the Power of Practice — from prweb.com

“If we want to use AI to improve education, we need more teachers at the table,” said Avery Pan, Class Companion co-founder and CEO. “Class Companion is designed by teachers, for teachers, to harness the most sophisticated AI and improve their classroom experience. Developing technologies specifically for teachers is imperative to supporting our next generation of students and education system.”


7 Questions on Generative AI in Learning Design — from campustechnology.com by Rhea Kelly
Open LMS Adoption and Education Specialist Michael Vaughn on the challenges and possibilities of using artificial intelligence to move teaching and learning forward.

The potential for artificial intelligence tools to speed up course design could be an attractive prospect for overworked faculty and spread-thin instructional designers. Generative AI can shine, for example, in tasks such as reworking assessment question sets, writing course outlines and learning objectives, and generating subtitles for audio and video clips. The key, says Michael Vaughn, adoption and education specialist at learning platform Open LMS, is treating AI like an intern who can be guided and molded along the way, and whose work is then vetted by a human expert.

We spoke with Vaughn about how best to utilize generative AI in learning design, ethical issues to consider, and how to formulate an institution-wide policy that can guide AI use today and in the future.


10 Ways Technology Leaders Can Step Up and Into the Generative AI Discussion in Higher Ed — from er.educause.edu by Lance Eaton and Stan Waddell

  1. Offer Short Primers on Generative AI
  2. Explain How to Get Started
  3. Suggest Best Practices for Engaging with Generative AI
  4. Give Recommendations for Different Groups
  5. Recommend Tools
  6. Explain the Closed vs. Open-Source Divide
  7. Avoid Pitfalls
  8. Conduct Workshops and Events
  9. Spot the Fake
  10. Provide Proper Guidance on the Limitations of AI Detectors


 
© 2024 | Daniel Christian