WHAT WAS GARY MARCUS THINKING, IN THAT INTERVIEW WITH GEOFF HINTON? — from linkedin.com by Stephen Downes

Background (emphasis DSC): 60 Minutes did an interview with ‘the Godfather of AI’, Geoffrey Hinton. In response, Gary Marcus wrote a column in which he inserted his own set of responses into the transcript, as though he were a panel participant. Neat idea. So, of course, I’m stealing it, and in what follows, I insert my own comments as I join the 60 Minutes panel with Geoffrey Hinton and Gary Marcus.

Usually I put everyone else’s text in italics, but for this post I’ll put it all in normal font, to keep the format consistent.

Godfather of Artificial Intelligence Geoffrey Hinton on the promise, risks of advanced AI


OpenAI’s Revenue Skyrockets to $1.3 Billion Annualized Rate — from maginative.com by Chris McKay
This means the company is generating over $100 million per month—a 30% increase from just this past summer.

OpenAI, the company behind the viral conversational AI ChatGPT, is experiencing explosive revenue growth. The Information reports that CEO Sam Altman told the staff this week that OpenAI’s revenue is now crossing $1.3 billion on an annualized basis. This means the company is generating over $100 million per month—a 30% increase from just this past summer.

Since the launch of a paid version of ChatGPT in February, OpenAI’s financial growth has been nothing short of meteoric. Additionally, in August, the company announced the launch of ChatGPT Enterprise, a commercial version of its popular conversational AI chatbot aimed at business users.

For comparison, OpenAI’s total revenue for all of 2022 was just $28 million. The launch of ChatGPT has turbocharged OpenAI’s business, positioning it as a bellwether for demand for generative AI.
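The reported figures are easy to sanity-check. A quick sketch of the arithmetic (the $1.3B annualized rate, 30% growth figure, and $28M 2022 total are from the article; everything else is derived):

```python
# Sanity-check of the revenue figures reported by The Information.
annualized = 1.3e9          # $1.3B annualized run rate
monthly = annualized / 12   # implied monthly revenue

# The article says monthly revenue is up 30% from this past summer.
summer_monthly = monthly / 1.30

print(f"Implied monthly revenue: ${monthly / 1e6:.0f}M")         # ≈ $108M
print(f"Implied summer monthly:  ${summer_monthly / 1e6:.0f}M")  # ≈ $83M
print(f"All of 2022, for scale:  ${28e6 / 1e6:.0f}M")
```

Which is to say: the company is now booking roughly four times its entire 2022 revenue every month.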



From 10/13:


New ways to get inspired with generative AI in Search — from blog.google
We’re testing new ways to get more done right from Search, like the ability to generate imagery with AI or create the first draft of something you need to write.

 

Thinking with Colleagues: AI in Education — from campustechnology.com by Mary Grush
A Q&A with Ellen Wagner

Wagner herself recently relied on the power of collegial conversations to probe the question: What’s on the minds of educators as they make ready for the growing influence of AI in higher education? CT asked her for some takeaways from the process.

We are in the very early days of seeing how AI is going to affect education. Some of us are going to need to stay focused on the basic research to test hypotheses. Others are going to dive into laboratory “sandboxes” to see if we can build some new applications and tools for ourselves. Still others will continue to scan newsletters like ProductHunt every day to see what kinds of things people are working on. It’s going to be hard to keep up, to filter out the noise on our own. That’s one reason why thinking with colleagues is so very important.

Mary and Ellen linked to “What Is Top of Mind for Higher Education Leaders about AI?” — from northcoasteduvisory.com. Below are some excerpts from those notes:

We are interested in how K-12 education will change in terms of foundational learning. With in-class, active learning designs, will younger students do a lot more intensive building of foundational writing and critical thinking skills before they get to college?

  1. The Human in the Loop: AI is built using math: think of applied statistics on steroids. Humans will be needed more than ever to manage, review and evaluate the validity and reliability of results. Curation will be essential.
  2. We will need to generate ideas about how to address AI factors such as privacy, equity, bias, copyright, intellectual property, accessibility, and scalability.
  3. Have other institutions experimented with AI detection, and/or held off on emerging tools in this area? We have just recently adjusted our guidance and paused some detection tools, given the massive inaccuracies in detection (and the related downstream issues in faculty-elevated conduct cases).

Even though we learn repeatedly that innovation has a lot to do with effective project management and a solid message that helps people understand what they can do to implement change, people really need innovation to be more exciting and visionary than that.  This is the place where we all need to help each other stay the course of change. 


Along these lines, also see:


What people ask me most. Also, some answers. — from oneusefulthing.org by Ethan Mollick
A FAQ of sorts

I have been talking to a lot of people about Generative AI, from teachers to business executives to artists to people actually building LLMs. In these conversations, a few key questions and themes keep coming up over and over again. Many of those questions are more informed by viral news articles about AI than about the real thing, so I thought I would try to answer a few of the most common, to the best of my ability.

I can’t blame people for asking because, for whatever reason, the companies actually building and releasing Large Language Models often seem allergic to providing any sort of documentation or tutorial besides technical notes. I was given much better documentation for the generic garden hose I bought on Amazon than for the immensely powerful AI tools being released by the world’s largest companies. So, it is no surprise that rumor has been the way that people learn about AI capabilities.

Currently, there are only really three AIs to consider: (1) OpenAI’s GPT-4 (which you can get access to with a Plus subscription or via Microsoft Bing in creative mode, for free), (2) Google’s Bard (free), or (3) Anthropic’s Claude 2 (free, but paid mode gets you faster access). As of today, GPT-4 is the clear leader, Claude 2 is second best (but can handle longer documents), and Google trails — though that will likely change soon, as Google is rumored to be updating its model in the near future.

 

Next month Microsoft Corp. will start making its artificial intelligence features for Office widely available to corporate customers. Soon after, that will include the ability for it to read your emails, learn your writing style and compose messages on your behalf.

From DSC:
As readers of this blog know, I’m generally pro-technology. I see most technologies as tools — which can be used for good or for ill. So I will post items both pro and con concerning AI.

But outsourcing email communications to AI isn’t on my wish list or to-do list.

 

Chatbot hallucinations are poisoning web search — from link.wired.com by Will Knight

The age of generative AI threatens to sprinkle epistemological sand into the gears of web search by fooling algorithms designed for a time when the web was mostly written by humans.


Meta Is Paying Creators Millions for AI Chatbots — from bensbites.beehiiv.com

Meta is shelling out millions to get celebrities to license their likenesses for AI characters in a bid to draw users to its platforms.

Why should I care?
Meta is still all-in on its vision for the metaverse and AI, despite its recent struggles. Meta seems willing to pay top dollar to partner with big names who can draw their massive audiences to use the AI avatars. If the celebrity avatars take off, they could be a blueprint for how creators monetize their brands in virtual worlds. There’s also a chance Meta pulls the plug on funding if user traction is low, just as it did with Facebook Watch originals.


The Post-AI Workplace — from drphilippahardman.substack.com by Dr. Philippa Hardman
SAP SuccessFactors’ new product offers the most comprehensive insight yet into the post-AI workplace & workforce

Skills Maps
AI will be used to categorise, track and analyse employee skills and competencies. This will enable orgs to build a clear idea of pockets of talent and areas in need of focus, providing HR, L&D professionals & managers with the opportunity to take a data-driven approach to talent development and capability building.

Roles Impacted: HR Analysts, Managers, Learning & Development Professionals



More than 40% of labor force to be affected by AI in 3 years, Morgan Stanley forecasts — from cnbc.com by Samantha Subin

Analyst Brian Nowak estimates that AI technology will have a $4.1 trillion economic effect on the labor force — affecting about 44% of labor — over the next few years by changing input costs, automating tasks and shifting the ways companies obtain, process and analyze information. Today, Morgan Stanley pegs the AI effect at $2.1 trillion, affecting 25% of labor.

“We see generative AI expanding the scope of business processes that can be automated,” he wrote in a Sunday note. “At the same time, the input costs supporting GenAI functionality are rapidly falling, enabling a strongly expansionary impact to software production. As a result, Generative AI is set to impact the labor markets, expand the enterprise software TAM, and drive incremental spend for Public Cloud services.”
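Laid out side by side, the two estimates imply a striking trajectory. A small sketch (only the four reported figures are from the article; the growth calculations are derived):

```python
# Morgan Stanley's AI-effect estimates, as reported by CNBC.
today = {"economic_effect": 2.1e12, "labor_share": 0.25}
in_three_years = {"economic_effect": 4.1e12, "labor_share": 0.44}

# Implied change between the two estimates.
growth = in_three_years["economic_effect"] / today["economic_effect"] - 1
added_share = in_three_years["labor_share"] - today["labor_share"]

print(f"Implied growth in economic effect: {growth:.0%}")      # ≈ 95%
print(f"Additional share of labor affected: {added_share:.0%}")  # ≈ 19%
```

In other words, the forecast has the economic effect nearly doubling, and roughly one additional worker in five coming within AI's reach, in about three years.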

Speaking of the changes in the workplace, also see:

 

180 Degree Turn: NYC District Goes From Banning ChatGPT to Exploring AI’s Potential — from edweek.org by Alyson Klein (behind paywall)

New York City Public Schools will launch an Artificial Intelligence Policy Lab to guide the nation’s largest school district’s approach to this rapidly evolving technology.


The Leader’s Blindspot: How to Prepare for the Real Future — from preview.mailerlite.io by the AIEducator
The Commonly Held Belief: AI Will Automate Only Boring, Repetitive Tasks First

The Days of Task-Based Views on AI Are Numbered
The winds of change are sweeping across the educational landscape (emphasis DSC):

  1. Multifaceted AI: AI technologies are not one-trick ponies; they are evolving into complex systems that can handle a variety of tasks.
  2. Rising Expectations: As technology becomes integral to our lives, the expectations for personalised, efficient education are soaring.
  3. Skill Transformation: Future job markets will demand a different skill set, one that is symbiotic with AI capabilities.

Teaching: How to help students better understand generative AI — from chronicle.com by Beth McMurtrie
Beth describes ways professors have used ChatGPT to bolster critical thinking in writing-intensive courses

Kevin McCullen, an associate professor of computer science at the State University of New York at Plattsburgh, teaches a freshman seminar about AI and robotics. As part of the course, students read Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots, by John Markoff. McCullen had the students work in groups to outline and summarize the first three chapters. Then he showed them what ChatGPT had produced in an outline.

“Their version and ChatGPT’s version seemed to be from two different books,” McCullen wrote. “ChatGPT’s version was essentially a ‘laundry list’ of events. Their version was narratives of what they found interesting. The students had focused on what the story was telling them, while ChatGPT focused on who did what in what year.” The chatbot also introduced false information, such as wrong chapter names.

The students, he wrote, found the writing “soulless.”


7 Questions with Dr. Cristi Ford, VP of Academic Affairs at D2L — from campustechnology.com by Rhea Kelly

In the Wild West of generative AI, educators and institutions are working out how best to use the technology for learning. How can institutions define AI guidelines that allow for experimentation while providing students with consistent guidance on appropriate use of AI tools?

To find out, we spoke with Dr. Cristi Ford, vice president of academic affairs at D2L. With more than two decades of educational experience in nonprofit, higher education, and K-12 institutions, Ford works with D2L’s institutional partners to elevate best practices in teaching, learning, and student support. Here, she shares her advice on setting and communicating AI policies that are consistent and future-ready.


AI Platform Built by Teachers, for Teachers, Class Companion Raises $4 Million to Tap Into the Power of Practice — from prweb.com

“If we want to use AI to improve education, we need more teachers at the table,” said Avery Pan, Class Companion co-founder and CEO. “Class Companion is designed by teachers, for teachers, to harness the most sophisticated AI and improve their classroom experience. Developing technologies specifically for teachers is imperative to supporting our next generation of students and education system.”


7 Questions on Generative AI in Learning Design — from campustechnology.com by Rhea Kelly
Open LMS Adoption and Education Specialist Michael Vaughn on the challenges and possibilities of using artificial intelligence to move teaching and learning forward.

The potential for artificial intelligence tools to speed up course design could be an attractive prospect for overworked faculty and spread-thin instructional designers. Generative AI can shine, for example, in tasks such as reworking assessment question sets, writing course outlines and learning objectives, and generating subtitles for audio and video clips. The key, says Michael Vaughn, adoption and education specialist at learning platform Open LMS, is treating AI like an intern who can be guided and molded along the way, and whose work is then vetted by a human expert.

We spoke with Vaughn about how best to utilize generative AI in learning design, ethical issues to consider, and how to formulate an institution-wide policy that can guide AI use today and in the future.


10 Ways Technology Leaders Can Step Up and Into the Generative AI Discussion in Higher Ed — from er.educause.edu by Lance Eaton and Stan Waddell

  1. Offer Short Primers on Generative AI
  2. Explain How to Get Started
  3. Suggest Best Practices for Engaging with Generative AI
  4. Give Recommendations for Different Groups
  5. Recommend Tools
  6. Explain the Closed vs. Open-Source Divide
  7. Avoid Pitfalls
  8. Conduct Workshops and Events
  9. Spot the Fake
  10. Provide Proper Guidance on the Limitations of AI Detectors


 

Canva’s new AI tools automate boring, labor-intensive design tasks — from theverge.com by Jess Weatherbed
Magic Studio features like Magic Switch automatically convert your designs into blogs, social media posts, emails, and more to save time on manually editing documents.


Canva launches Magic Studio, partners with Runway ML for video — from bensbites.beehiiv.com by Ben Tossell

Here are the highlights of launched features under the new Magic Studio:

  • Magic Design – Turn ideas into designs instantly with AI-generated templates.
  • Magic Switch – Transform content into different formats and languages with one click.
  • Magic Grab – Make images editable like Canva templates for easy editing.
  • Magic Expand – Use AI to expand images beyond the original frame.
  • Magic Morph – Transform text and shapes with creative effects and prompts.
  • Magic Edit – Make complex image edits using simple text prompts.
  • Magic Media – Generate professional photos, videos and artworks from text prompts.
  • Magic Animate – Add animated transitions and motion to designs instantly.
  • Magic Write – Generate draft text and summaries powered by AI.



Adobe Firefly

Meet Adobe Firefly — Adobe is going hard with the use of AI, and this is a key product along those lines.


Addendums on 10/11/23:


Adobe Releases New AI Models Aimed at Improved Graphic Design — from bloomberg.com
New version of Firefly is bigger than initial tool, Adobe says; Illustrator and Express programs each get their own generative tools


 

Introducing Magic Studio: the power of AI, all in one place — from canva.com


Also relevant/see:

Canva’s new AI features make everyone a designer — from joinsuperhuman.ai by Zain Kahn

…here are all the cool new ways you can use Canva to create pro-grade designs for your work:

  • Magic Media: Generate photos and videos with text prompts.
  • Magic Design: Turn ideas into designs with AI-generated templates.
  • Magic Switch: Translate content into different languages and formats.
  • Magic Expand: Make images bigger with AI.
  • Magic Edit: Edit images with simple text prompts.
  • Magic Morph: Transform text and shapes with visual effects.
  • Magic Write: Generate texts and summaries with AI.

Canva also announced that they’re creating a $200 million fund to compensate creators who opt-in to train their AI models.

 

As AI Chatbots Rise, More Educators Look to Oral Exams — With High-Tech Twist — from edsurge.com by Jeffrey R. Young

To use Sherpa, an instructor first uploads the reading they’ve assigned, or they can have the student upload a paper they’ve written. Then the tool asks a series of questions about the text (either questions input by the instructor or generated by the AI) to test the student’s grasp of key concepts. The software gives the instructor the choice of whether they want the tool to record audio and video of the conversation, or just audio.

The tool then uses AI to transcribe the audio from each student’s recording and flags areas where the student answer seemed off point. Teachers can review the recording or transcript of the conversation and look at what Sherpa flagged as trouble to evaluate the student’s response.
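EdSurge doesn't describe Sherpa's internals, but the flagging step can be pictured as a similarity check between each transcribed answer and the key concepts of the assigned text. A deliberately toy sketch — the word-overlap scoring, threshold, and sample data below are my own stand-ins for illustration, not Sherpa's actual method:

```python
def keyword_overlap(answer: str, key_concepts: set[str]) -> float:
    """Fraction of the key concepts that appear in the student's answer."""
    words = set(answer.lower().split())
    return len(words & key_concepts) / len(key_concepts)

def flag_answers(transcript: dict[str, str], key_concepts: set[str],
                 threshold: float = 0.4) -> list[str]:
    """Return question IDs whose answers look off point, for human review."""
    return [qid for qid, answer in transcript.items()
            if keyword_overlap(answer, key_concepts) < threshold]

# Hypothetical transcribed answers from one student's oral exam.
transcript = {
    "q1": "the author argues robots augment rather than replace humans",
    "q2": "i liked the book a lot it was interesting",
}
concepts = {"robots", "augment", "replace", "humans", "automation"}

print(flag_answers(transcript, concepts))  # ['q2']
```

The important design point survives the simplification: the tool only *flags* weak answers, and the instructor reviews the recording and makes the actual judgment.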

 

Is Your AI Model Going Off the Rails? There May Be an Insurance Policy for That — from wsj.com by Belle Lin; via Brainyacts
As generative AI creates new risks for businesses, insurance companies sense an opportunity to cover the ways AI could go wrong

The many ways a generative artificial intelligence project can go off the rails poses an opportunity for insurance companies, even as those grim scenarios keep business technology executives up at night.

Taking a page from cybersecurity insurance, which saw an uptick in the wake of major breaches several years ago, insurance providers have started taking steps into the AI space by offering financial protection against models that fail.

Corporate technology leaders say such policies could help them address risk-management concerns from board members, chief executives and legal departments.

 

Humane’s ‘Ai Pin’ debuts on the Paris runway — from techcrunch.com by Brian Heater

“The [Ai Pin is a] connected and intelligent clothing-based wearable device uses a range of sensors that enable contextual and ambient compute interactions,” the company noted at the time. “The Ai Pin is a type of standalone device with a software platform that harnesses the power of Ai to enable innovative personal computing experiences.”


Also relevant/see:

 



AI Meets Med School — from insidehighered.com by Lauren Coffey
Adding to academia’s AI embrace, two institutions in the University of Texas system are jointly offering a medical degree paired with a master’s in artificial intelligence.

Doctor AI

The University of Texas at San Antonio has launched a dual-degree program combining medical school with a master’s in artificial intelligence.

Several universities across the nation have begun integrating AI into medical practice. Medical schools at the University of Florida, the University of Illinois, the University of Alabama at Birmingham and Stanford and Harvard Universities all offer variations of a certificate in AI in medicine that is largely geared toward existing professionals.

“I think schools are looking at, ‘How do we integrate and teach the uses of AI?’” Dr. Whelan said. “And in general, when there is an innovation, you want to integrate it into the curriculum at the right pace.”

Speaking of emerging technologies and med school, also see:


Though not necessarily edu-related, this was interesting to me and hopefully will be to some profs and/or students out there:


How to stop AI deepfakes from sinking society — and science — from nature.com by Nicola Jones; via The Neuron
Deceptive videos and images created using generative AI could sway elections, crash stock markets and ruin reputations. Researchers are developing methods to limit their harm.





Exploring the Impact of AI in Education with PowerSchool’s CEO & Chief Product Officer — from michaelbhorn.substack.com by Michael B. Horn

With just under 10 acquisitions in the last 5 years, PowerSchool has been active in transforming itself from a student information systems company to an integrated education company that works across the day and lifecycle of K–12 students and educators. What’s more, the company turned heads in June with its announcement that it was partnering with Microsoft to integrate AI into its PowerSchool Performance Matters and PowerSchool LearningNav products to empower educators in delivering transformative personalized-learning pathways for students.


AI Learning Design Workshop: The Trickiness of AI Bootcamps and the Digital Divide — from eliterate.us by Michael Feldstein

As readers of this series know, I’ve developed a six-session design/build workshop series for learning design teams to create an AI Learning Design Assistant (ALDA). In my last post in this series, I provided an elaborate ChatGPT prompt that can be used as a rapid prototype that everyone can try out and experiment with. In this post, I’d like to focus on how to address the challenges of AI literacy effectively and equitably.


Global AI Legislation Tracker — from iapp.org; via Tom Barrett

Countries worldwide are designing and implementing AI governance legislation commensurate to the velocity and variety of proliferating AI-powered technologies. Legislative efforts include the development of comprehensive legislation, focused legislation for specific use cases, and voluntary guidelines and standards.

This tracker identifies legislative policy and related developments in a subset of jurisdictions. It is not globally comprehensive, nor does it include all AI initiatives within each jurisdiction, given the rapid and widespread policymaking in this space. This tracker offers brief commentary on the wider AI context in specific jurisdictions, and lists index rankings provided by Tortoise Media, the first index to benchmark nations on their levels of investment, innovation and implementation of AI.


Diving Deep into AI: Navigating the L&D Landscape — from learningguild.com by Markus Bernhardt

The prospect of AI-powered, tailored, on-demand learning and performance support is exhilarating: It starts with traditional digital learning made into fully adaptive learning experiences, which would adjust to strengths and weaknesses for each individual learner. The possibilities extend all the way through to simulations and augmented reality, an environment to put into practice knowledge and skills, whether as individuals or working in a team simulation. The possibilities are immense.



Learning Lab | ChatGPT in Higher Education: Exploring Use Cases and Designing Prompts — from events.educause.edu; via Robert Gibson on LinkedIn

Part 1: October 16 | 3:00–4:30 p.m. ET
Part 2: October 19 | 3:00–4:30 p.m. ET
Part 3: October 26 | 3:00–4:30 p.m. ET
Part 4: October 30 | 3:00–4:30 p.m. ET


Mapping AI’s Role in Education: Pioneering the Path to the Future — from marketscale.com by Michael B. Horn, Jacob Klein, and Laurence Holt

Welcome to The Future of Education with Michael B. Horn. In this insightful episode, Michael gains perspective on mapping AI’s role in education from Jacob Klein, a Product Consultant at Oko Labs, and Laurence Holt, an Entrepreneur In Residence at the XQ Institute. Together, they peer into the burgeoning world of AI in education, analyzing its potential, risks, and roadmap for integrating it seamlessly into learning environments.


Ten Wild Ways People Are Using ChatGPT’s New Vision Feature — from newsweek.com by Meghan Roos; via Superhuman

Below are 10 creative ways ChatGPT users are making use of this new vision feature.


 



Adobe video-AI announcements for IBC — from provideocoalition.com by Rich Young

For the IBC 2023 conference, Adobe announced new AI and 3D features to Creative Cloud video tools, including Premiere Pro Enhance Speech for faster dialog cleanup, and filler word detection and removal in Text-Based Editing. There’s also new AI-based rotoscoping and a true 3D workspace in the After Effects beta, as well as new camera-to-cloud integrations and advanced storage options in Frame.io.

Though not really about AI, you might also be interested in this posting:


Airt AI Art Generator (Review) — from hongkiat.com
Turn your creative ideas into masterpieces using Airt’s AI iPad app.

The Airt AI Generator app makes it easy to create art on your iPad. You can pick an art style and a model to make your artwork. It’s simple enough for anyone to use, but it doesn’t have many options for customizing your art.

Even with these limitations, it’s a good starting point for people who want to try making art with AI. Here are the good and bad points we found.

Pros:

  • User-Friendly: The app is simple and easy to use, making it accessible for users of all skill levels.

Cons:

  • Limited Advanced Features: The app lacks options for customization, such as altering image ratios, seeds, and other settings.

 

Comparing Online and AI-Assisted Learning: A Student’s View — from educationnext.org by Daphne Goldstein
An 8th grader reviews traditional Khan Academy and its AI-powered tutor, Khanmigo

Hi everyone, I’m Daphne, a 13-year-old going into 8th grade.

I’m writing to compare “regular” Khan Academy (no AI) to Khanmigo (powered by GPT-4), using three of my own made-up criteria.

They are: efficiency, effectiveness, and enjoyability. Efficiency is how fast I am able to cover a math topic and get basic understanding. Effectiveness is my quality of understanding—the difference between basic and advanced understanding. And the final one—most important to kids and maybe least important to adults who make kids learn math—is enjoyability.


Open LMS Adoption and Education Specialist Michael Vaughn on the challenges and possibilities of using artificial intelligence to move teaching and learning forward.

The potential for artificial intelligence tools to speed up course design could be an attractive prospect for overworked faculty and spread-thin instructional designers. Generative AI can shine, for example, in tasks such as reworking assessment question sets, writing course outlines and learning objectives, and generating subtitles for audio and video clips. The key, says Michael Vaughn, adoption and education specialist at learning platform Open LMS, is treating AI like an intern who can be guided and molded along the way, and whose work is then vetted by a human expert.

We spoke with Vaughn about how best to utilize generative AI in learning design, ethical issues to consider, and how to formulate an institution-wide policy that can guide AI use today and in the future.


First Impressions with GPT-4V(ision) — from blog.roboflow.com by James Gallagher; via Donald Clark on LinkedIn

On September 25th, 2023, OpenAI announced the rollout of two new features that extend how people can interact with its recent and most advanced model, GPT-4: the ability to ask questions about images and to use speech as an input to a query.

This functionality marks GPT-4’s move into being a multimodal model. This means that the model can accept multiple “modalities” of input – text and images – and return results based on those inputs. Bing Chat, developed by Microsoft in partnership with OpenAI, and Google’s Bard model both support images as input, too. Read our comparison post to see how Bard and Bing perform with image inputs.

In this guide, we are going to share our first impressions with the GPT-4V image input feature.
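At the API level, "multiple modalities of input" means a single chat message whose content mixes text parts and image parts. A minimal sketch of the request shape OpenAI documented for GPT-4 with vision — no network call is made here, and the image URL is a placeholder:

```python
import json

# A GPT-4V-style chat request: one user message whose "content" is a list
# mixing a text part and an image part (a URL, or a base64 data URI).
payload = {
    "model": "gpt-4-vision-preview",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this image?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
    "max_tokens": 300,
}

# The payload serializes to ordinary JSON for the chat completions endpoint.
print(json.dumps(payload)[:60] + "...")
```

Note that at the time of the September announcement the feature was rolling out in ChatGPT itself; the API shape above is how the same capability was later exposed to developers.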


 

Why Shaquille O’Neal led edtech startup Edsoma’s $2.5M seed round — from techcrunch.com by Kirsten Korosec; via GSV

Edsoma is an app that uses an AI reading assistant to help people learn or improve their reading and communication.

For now, the company is targeting users in grades kindergarten to fourth grade based on the content that it has today. Wallgren noted that Edsoma’s technology will work right through into university, and he has ambitions to become the No. 1 literacy resource in the United States.


Outschool launches an AI-powered tool to help teachers write progress reports — from techcrunch.com by Lauren Forristal; via GSV

Outschool, the online learning platform that offers kid-friendly academic and interest-based classes, announced today the launch of its AI Teaching Assistant, a tool for tutors to generate progress reports for their students. The platform — mainly popular for its small group class offerings — also revealed that it’s venturing into one-on-one tutoring, putting it in direct competition with companies like Varsity Tutors, Tutor.com and Preply.

 

 

The next wave of AI will be interactive — from joinsuperhuman.ai by Zain Kahn
ALSO: AI startups raise over $500 million

Google DeepMind cofounder Mustafa Suleyman thinks that generative AI is a passing phase, and that interactive AI is the next big thing in AI. Suleyman called the transformation “a profound moment” in the history of technology.

Suleyman divided AI’s evolution into 3 waves:

  1. Classification: Training computers to classify various types of data like images and text.
  2. Generative: The current wave, which takes input data to generate new data. ChatGPT is the best example of this.
  3. Interactive: The next wave, where an AI will be capable of communicating and operating autonomously.

“Think of it as autonomous software that can talk to other apps to get things done.”

From DSC:
Though I find this a generally positive thing, the above sentence makes me exclaim, “No, nothing could possibly go wrong there.”
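Suleyman's "autonomous software that can talk to other apps" can be pictured as a dispatcher that maps a user's goal onto registered app actions. A deliberately toy sketch — the apps, the pre-made plan, and the dispatch logic are all invented for illustration; a real system would use an LLM to produce the plan:

```python
# Toy "interactive AI": a registry of app actions, plus a runner that
# chains them to satisfy a user's goal.
def search_flights(dest: str) -> str:
    return f"3 flights to {dest} found"

def book_calendar(event: str) -> str:
    return f"'{event}' added to calendar"

APPS = {"flights": search_flights, "calendar": book_calendar}

def run_plan(plan: list[tuple[str, str]]) -> list[str]:
    """Execute a plan: an ordered list of (app, argument) steps."""
    return [APPS[app](arg) for app, arg in plan]

results = run_plan([("flights", "Lisbon"), ("calendar", "Trip to Lisbon")])
print(results)
```

Even in this toy form, the risk DSC gestures at is visible: once the plan itself is generated autonomously rather than hand-written, every registered app becomes something the AI can invoke on your behalf.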


 
© 2024 | Daniel Christian