Employers Say Students Need AI Skills. What If Students Don’t Want Them? — from insidehighered.com by Ashley Mowreader
Colleges and universities are considering new ways to incorporate generative AI into teaching and learning, but not every student is on board with the tech yet. Experts weigh in on the necessity of AI in career preparation and higher education’s role in preparing students for jobs of the future.

Among the 5,025-plus survey respondents, around 2 percent (n=93) provided free responses to the question on AI policy and use in the classroom. Over half (55) of those responses were flat-out refusals to engage with AI. A few said they don’t know how to use AI or are not familiar with the tools, which limits their ability to use them appropriately in coursework.

But as generative AI becomes more ingrained in the workplace and in higher education, a growing number of professors and industry experts believe AI skills will be something all students need, in their classes and in their lives beyond academia.

From DSC:
I used to teach a Foundations of Information Technology class. Some of the students didn’t want to be there at the start, as it was a required course for non-CS majors. But after seeing what various applications and technologies could do for them, a good portion of those same folks changed their minds. But not all. Some students (2% sounds about right) asserted that they would never use technologies in their futures. Good luck with that, I thought to myself. There’s hardly a job out there that doesn’t use some sort of technology.

And I still think that today — if not more so. If students want good jobs, they will need to learn how to use AI-based tools and technologies. I’m not sure there’s much of a choice. And I don’t think there’s much of a choice for the rest of us either — whether we’re still working or not. 

So in looking at the title of the article — “Employers Say Students Need AI Skills. What If Students Don’t Want Them?” — those of us who have spent any time working within the world of business already know the answer.

#Reinvent #Skills #StayingRelevant #Surviving #Workplace + several other categories/tags apply.


For those folks who have tried AI:

Skills: However, genAI may also be helpful in building skills to retain a job or secure a new one. People who had used genAI tools were more than twice as likely to think that these tools could help them learn new skills that may be useful at work or in locating a new job. Specifically, among those who had not used genAI tools, 23 percent believed that these tools might help them learn new skills, whereas 50 percent of those who had used the tools thought they might be helpful in acquiring useful skills (a highly statistically significant difference, after controlling for demographic traits).

Source: Federal Reserve Bank of New York
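From DSC:
For readers wondering what “a highly statistically significant difference” means here, a rough sense can be had from a simple two-proportion test. To be clear, the Fed’s own analysis used a regression with demographic controls; the group sizes below are illustrative assumptions on my part, not their data:

import math

# Illustrative group sizes -- the excerpt doesn't report the actual n per group.
n_nonusers, n_users = 1000, 1000
p_nonusers, p_users = 0.23, 0.50   # shares who think genAI tools could help them learn new skills

# Two-proportion z-test (no demographic controls, unlike the Fed's regression).
p_pooled = (p_nonusers * n_nonusers + p_users * n_users) / (n_nonusers + n_users)
se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n_nonusers + 1 / n_users))
z = (p_users - p_nonusers) / se

print(f"z = {z:.1f}")  # anything above ~3 is already significant at conventional levels;
                       # a 27-point gap on groups of ~1,000 is far beyond chance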

 

AI’s Trillion-Dollar Opportunity — from bain.com by David Crawford, Jue Wang, and Roy Singh
The market for AI products and services could reach between $780 billion and $990 billion by 2027.

At a Glance

  • The big cloud providers are the largest concentration of R&D, talent, and innovation today, pushing the boundaries of large models and advanced infrastructure.
  • Innovation with smaller models (open-source and proprietary), edge infrastructure, and commercial software is reaching enterprises, sovereigns, and research institutions.
  • Commercial software vendors are rapidly expanding their feature sets to provide the best use cases and leverage their data assets.

Accelerated market growth. Nvidia’s CEO, Jensen Huang, summed up the potential in the company’s Q3 2024 earnings call: “Generative AI is the largest TAM [total addressable market] expansion of software and hardware that we’ve seen in several decades.”


And on a somewhat related note (i.e., emerging technologies), also see the following two postings:

Surgical Robots: Current Uses and Future Expectations — from medicalfuturist.com by Pranavsingh Dhunnoo
As the term implies, a surgical robot is an assistive tool for performing surgical procedures. Such manoeuvres, also called robotic surgeries or robot-assisted surgery, usually involve a human surgeon controlling mechanical arms from a control centre.

Key Takeaways

  • Robots’ potential has long been a fascination for humans and has even led to a booming field of robot-assisted surgery.
  • Surgical robots assist surgeons in performing accurate, minimally invasive procedures that are beneficial for patients’ recovery.
  • The assistance of robots extends beyond incisions and includes laparoscopies, radiosurgeries and, in the future, a combination of artificial intelligence technologies to assist surgeons in their craft.

Proto hologram tech allows cancer patients to receive specialist care without traveling large distances — from inavateonthenet.net

“Working with the team from Proto to bring to life, what several years ago would have seemed impossible, is now going to allow West Cancer Center & Research Institute to pioneer options for patients to get highly specialized care without having to travel to large metro areas,” said West Cancer’s CEO, Mitch Graves.




Clone your voice in minutes: The AI trick 95% don’t know about — from aidisruptor.ai by Alex McFarland
Warning: May cause unexpected bouts of talking to yourself

Now that you’ve got your voice clone, what can you do with it?

  1. Content Creation:
    • Podcast Production: Record episodes in half the time. Your listeners won’t know the difference, but your schedule will thank you.
    • Audiobook Narration: Always wanted to narrate your own book? Now you can, without spending weeks in a recording studio.
    • YouTube Videos: Create voiceovers for your videos in multiple languages. World domination, here you come!
  2. Business Brilliance:
    • Customer Service: Personalized automated responses that actually sound personal.
    • Training Materials: Create engaging e-learning content in your own voice, minus the hours of recording.
    • Presentations: Never worry about losing your voice before a big presentation again. Your clone’s got your back.

185 real-world gen AI use cases from the world’s leading organizations — from blog.google by Brian Hall; via Daniel Nest’s Why Try AI

In a matter of months, organizations have gone from AI helping answer questions, to AI making predictions, to generative AI agents. What makes AI agents unique is that they can take actions to achieve specific goals, whether that’s guiding a shopper to the perfect pair of shoes, helping an employee looking for the right health benefits, or supporting nursing staff with smoother patient hand-offs during shift changes.

In our work with customers, we keep hearing that their teams are increasingly focused on improving productivity, automating processes, and modernizing the customer experience. These aims are now being achieved through the AI agents they’re developing in six key areas: customer service; employee empowerment; code creation; data analysis; cybersecurity; and creative ideation and production.

Here’s a snapshot of how 185 of these industry leaders are putting AI to use today, creating real-world use cases that will transform tomorrow.


AI Data Drop: 3 Key Insights from Real-World Research on AI Usage — from microsoft.com; via Daniel Nest’s Why Try AI
One of the largest studies of Copilot usage—at nearly 60 companies—reveals how AI is changing the way we work.

  1. AI is starting to liberate people from email
  2. Meetings are becoming more about value creation
  3. People are co-creating more with AI—and with one another


***Dharmesh has been working on creating agent.ai — a professional network for AI agents.***


Speaking of agents, also see:

Onboarding the AI workforce: How digital agents will redefine work itself — from venturebeat.com by Gary Grossman

AI in 2030: A transformative force

  1. AI agents are integral team members
  2. The emergence of digital humans
  3. AI-driven speech and conversational interfaces
  4. AI-enhanced decision-making and leadership
  5. Innovation and research powered by AI
  6. The changing nature of job roles and skills

AI Video Tools You Can Use Today — from heatherbcooper.substack.com by Heather Cooper
The latest AI video models that deliver results

AI video models are improving so quickly, I can barely keep up! I wrote about unreleased Adobe Firefly Video in the last issue, and we are no closer to public access to Sora.

No worries – we do have plenty of generative AI video tools we can use right now.

  • Kling AI launched its updated v1.5 and the quality of image or text to video is impressive.
  • Hailuo MiniMax text to video remains free to use for now, and it produces natural and photorealistic results (with watermarks).
  • Runway added the option to upload portrait aspect ratio images to generate vertical videos in Gen-3 Alpha & Turbo modes.
  • …plus several more

 

Top Software Engineering Newsletters in 2024 — from ai-supremacy.com by Michael Spencer
Including a very select few ML, AI and product Newsletters into the mix for Software Engineers.

This is an article specifically for the software engineers and developers among you.

In the past year (2023-2024), professionals have found more value in Newsletters than ever before (especially on Substack).

As working from home took off, the nature of mentorship and skill acquisition has also evolved and shifted. Newsletters with pragmatic advice on our careers, it turns out, are super valuable. This article is a resource list. Are you a software developer, work with one, or know someone who is or wants to be?

 

Per the Rundown AI:

Why it matters: AI is slowly shifting from a tool we text/prompt with, to an intelligence that we collaborate, learn, and grow with. Advanced Voice Mode’s ability to understand and respond to emotions in real-time convos could also have huge use cases in everything from customer service to mental health support.

Also relevant/see:


Creators to Have Personalized AI Assistants, Meta CEO Mark Zuckerberg Tells NVIDIA CEO Jensen Huang — from blogs.nvidia.com by Brian Caulfield
Zuckerberg and Huang explore the transformative potential of open source AI, the launch of AI Studio, and exchange leather jackets at SIGGRAPH 2024.

“Every single restaurant, every single website will probably, in the future, have these AIs …” Huang said.

“…just like every business has an email address and a website and a social media account, I think, in the future, every business is going to have an AI,” Zuckerberg responded.

More broadly, the advancement of AI across a broad ecosystem promises to supercharge human productivity, for example by giving every human on earth a digital assistant — or assistants — that they can interact with quickly and fluidly, allowing people to live richer lives.

Also related/see:


From DSC:
Today was a MUCH better day for Nvidia, however (up 12.81%). But it’s been very volatile over the last several weeks — as people and institutions ask where the ROIs are going to come from.






9 compelling reasons to learn how to use AI Chatbots — from interestingengineering.com by Atharva Gosavi
AI Chatbots are conversational agents that can act on your behalf and converse with humans – a futuristic novelty that is already getting people excited about its usage in improving efficiency.

7. Accessibility and inclusivity
Chatbots can be designed to support multiple languages and accessibility needs, making services more inclusive. They can cater to users with disabilities by providing voice interaction capabilities and simplifying access to information. Understanding how to develop inclusive chatbots can help you contribute to making technology more accessible to everyone, a crucial aspect in today’s diverse society.

8. Future-proofing your skills
AI and automation are the future of work. Learning to build AI chatbots is a great way to future-proof your skills, and given the rising trajectory of AI, it will be an in-demand skill in the market in the years to come. Staying ahead of technological trends is a great way to ensure you remain relevant and competitive in the job market.


Top 7 generative AI use cases for business — from cio.com by Grant Gross
Advanced chatbots, digital assistants, and coding helpers seem to be some of the sweet spots for gen AI use so far in business.

Many AI experts say the current use cases for generative AI are just the tip of the iceberg. More use cases will present themselves as gen AIs get more powerful and users get more creative with their experiments.

However, a handful of gen AI use cases are already bubbling up. Here’s a look at the most popular and promising.

 

Introducing Eureka Labs — “We are building a new kind of school that is AI native.” — by Andrej Karpathy, Previously Director of AI @ Tesla, founding team @ OpenAI

However, with recent progress in generative AI, this learning experience feels tractable. The teacher still designs the course materials, but they are supported, leveraged and scaled with an AI Teaching Assistant who is optimized to help guide the students through them. This Teacher + AI symbiosis could run an entire curriculum of courses on a common platform. If we are successful, it will be easy for anyone to learn anything, expanding education in both reach (a large number of people learning something) and extent (any one person learning a large amount of subjects, beyond what may be possible today unassisted).


After Tesla and OpenAI, Andrej Karpathy’s startup aims to apply AI assistants to education — from techcrunch.com by Rebecca Bellan

Andrej Karpathy, former head of AI at Tesla and researcher at OpenAI, is launching Eureka Labs, an “AI native” education platform. In tech speak, that usually means built from the ground up with AI at its core. And while Eureka Labs’ AI ambitions are lofty, the company is starting with a more traditional approach to teaching.

San Francisco-based Eureka Labs, which Karpathy registered as an LLC in Delaware on June 21, aims to leverage recent progress in generative AI to create AI teaching assistants that can guide students through course materials.


What does it mean for students to be AI-ready? — from timeshighereducation.com by David Joyner
Not everyone wants to be a computer scientist, a software engineer or a machine learning developer. We owe it to our students to prepare them with a full range of AI skills for the world they will graduate into, writes David Joyner

We owe it to our students to prepare them for this full range of AI skills, not merely the end points. The best way to fulfil this responsibility is to acknowledge and examine this new category of tools. More and more tools that students use daily – word processors, email, presentation software, development environments and more – have AI-based features. Practising with these tools is a valuable exercise for students, so we should not prohibit that behaviour. But at the same time, we do not have to just shrug our shoulders and accept however much AI assistance students feel like using.


Teachers say AI usage has surged since the school year started — from eschoolnews.com by Laura Ascione
Half of teachers report an increase in the use of AI and continue to seek professional learning

Fifty percent of educators reported an increase in AI usage, by both students and teachers, over the 2023–24 school year, according to The 2024 Educator AI Report: Perceptions, Practices, and Potential, from Imagine Learning, a digital curriculum solutions provider.

The report offers insight into how teachers’ perceptions of AI use in the classroom have evolved since the start of the 2023–24 school year.


OPINION: What teachers call AI cheating, leaders in the workforce might call progress — from hechingerreport.org by C. Edward Watson and Jose Antonio Bowen
Authors of a new guide explore what AI literacy might look like in a new era

Excerpt (emphasis DSC):

But this very ease has teachers wondering how we can keep our students motivated to do the hard work when there are so many new shortcuts. Learning goals, curriculums, courses and the way we grade assignments will all need to be reevaluated.

The new realities of work also must be considered. A shift in employers’ job postings rewards those with AI skills. Many companies report already adopting generative AI tools or anticipate incorporating them into their workflow in the near future.

A core tension has emerged: Many teachers want to keep AI out of our classrooms, but also know that future workplaces may demand AI literacy.

What we call cheating, business could see as efficiency and progress.

It is increasingly likely that using AI will emerge as an essential skill for students, regardless of their career ambitions, and that action is required of educational institutions as a result.


Teaching Writing With AI Without Replacing Thinking: 4 Tips — by Erik Ofgang
AI has a lot of potential for writing students, but we can’t let it replace the thinking parts of writing, says writing professor Steve Graham

Reconciling these two goals — having AI help students learn to write more efficiently without hijacking the cognitive benefits of writing — should be a key goal of educators. Finding the ideal balance will require more work from both researchers and classroom educators, but Graham shares some initial tips for doing this currently.




Why I ban AI use for writing assignments — from timeshighereducation.com by James Stacey Taylor
Students may see handwriting essays in class as a needlessly time-consuming approach to assignments, but I want them to learn how to engage with arguments, develop their own views and convey them effectively, writes James Stacey Taylor

Could they use AI to generate objections to the arguments they read? Of course. AI does a good job of summarising objections to Singer’s view. But I don’t want students to parrot others’ objections. I want them to think of objections themselves. 

Could AI be useful for them in organising their exegesis of others’ views and their criticisms of them? Yes. But, again, part of what I want my students to learn is precisely what this outsources to the AI: how to organise their thoughts and communicate them effectively. 


How AI Will Change Education — from digitalnative.tech by Rex Woodbury
Predicting Innovation in Education, from Personalized Learning to the Downfall of College 

This week explores how AI will bleed into education, looking at three segments of education worth watching, then examining which business models will prevail.

  1. Personalized Learning and Tutoring
  2. Teacher Tools
  3. Alternatives to College
  4. Final Thoughts: Business Models and Why Education Matters

New Guidance from TeachAI and CSTA Emphasizes Computer Science Education More Important than Ever in an Age of AI — from csteachers.org by CSTA
The guidance features new survey data and insights from teachers and experts in computer science (CS) and AI, informing the future of CS education.

SEATTLE, WA – July 16, 2024 – Today, TeachAI, led by Code.org, ETS, the International Society for Technology in Education (ISTE), Khan Academy, and the World Economic Forum, launches a new initiative in partnership with the Computer Science Teachers Association (CSTA) to support and empower educators as they grapple with the growing opportunities and risks of AI in computer science (CS) education.

The briefs draw on early research and insights from CSTA members, organizations in the TeachAI advisory committee, and expert focus groups to address common misconceptions about AI and offer a balanced perspective on critical issues in CS education, including:

  • Why is it Still Important for Students to Learn to Program?
  • How Are Computer Science Educators Teaching With and About AI?
  • How Can Students Become Critical Consumers and Responsible Creators of AI?
 

Anthropic Introduces Claude 3.5 Sonnet — from anthropic.com


What’s new? 
  • Frontier intelligence
    Claude 3.5 Sonnet sets new industry benchmarks for graduate-level reasoning (GPQA), undergraduate-level knowledge (MMLU), and coding proficiency (HumanEval). It shows marked improvement in grasping nuance, humor, and complex instructions and is exceptional at writing high-quality content with a natural, relatable tone.
  • 2x speed
  • State-of-the-art vision
  • Introducing Artifacts—a new way to use Claude
    We’re also introducing Artifacts on claude.ai, a new feature that expands how you can interact with Claude. When you ask Claude to generate content like code snippets, text documents, or website designs, these Artifacts appear in a dedicated window alongside your conversation. This creates a dynamic workspace where you can see, edit, and build upon Claude’s creations in real-time, seamlessly integrating AI-generated content into your projects and workflows.

Train Students on AI with Claude 3.5 — from automatedteach.com by Graham Clay
I show how and compare it to GPT-4o.

  • If you teach computer science, user interface design, or anything involving web development, you can have students prompt Claude to produce web pages’ source code, see this code produced on the right side, preview it after it has compiled, and iterate through code+preview combinations.
  • If you teach economics, financial analysis, or accounting, you can have students prompt Claude to create analyses of markets or businesses, including interactive infographics, charts, or reports via React. Since it shows its work with Artifacts, your students can see how different prompts result in different statistical analyses, different representations of this information, and more.
  • If you teach subjects that produce purely textual outputs without a code intermediary, like philosophy, creative writing, or journalism, your students can compare prompting techniques, easily review their work, note common issues, and iterate drafts by comparing versions.

I see this as the first serious step towards improving the otherwise terrible user interfaces of LLMs for broad use. It may turn out to be a small change in the grand scheme of things, but it sure feels like a big improvement — especially in the pedagogical context.
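From DSC:
Clay’s examples above run through the claude.ai interface, but an instructor who wants to script a similar exercise can reach the same model through Anthropic’s Python SDK. Here’s a minimal sketch; the model ID string, the prompt, and the output file name are my own illustrative assumptions, not part of Clay’s post:

# pip install anthropic
import os
import anthropic

client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

# Ask Claude 3.5 Sonnet for the kind of small, self-contained web page that
# would show up as an Artifact in the claude.ai interface.
message = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # model ID current as of mid-2024; check Anthropic's docs
    max_tokens=2000,
    messages=[{
        "role": "user",
        "content": "Write a single-file HTML page with inline CSS that shows a small "
                   "made-up dataset as a bar chart. No external libraries.",
    }],
)

# Save the generated page so students can open it in a browser, tweak the prompt, and compare runs.
with open("claude_page.html", "w") as f:
    f.write(message.content[0].text)

Students can then revise the prompt and diff successive versions, which mirrors the prompt-iteration exercise Clay describes.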


And speaking of training students on AI, also see:

AI Literacy Needs to Include Preparing Students for an Unknown World — from stefanbauschard.substack.com by Stefan Bauschard
Preparing students for it is easier than educators think

Schools could enhance their curricula by incorporating debate, Model UN and mock government programs, business plan competitions, internships and apprenticeships, interdisciplinary and project-based learning initiatives, makerspaces and innovation labs, community service-learning projects, student-run businesses or non-profits, interdisciplinary problem-solving challenges, public speaking and presentation skills courses, and design thinking workshops.

These programs foster essential skills such as recognizing and addressing complex challenges, collaboration, sound judgment, and decision-making. They also enhance students’ ability to communicate with clarity and precision, while nurturing creativity and critical thinking. By providing hands-on, real-world experiences, these initiatives bridge the gap between theoretical knowledge and practical application, preparing students more effectively for the multifaceted challenges they will face in their future academic and professional lives.

 



Addendum on 6/28/24:

Collaborate with Claude on Projects — from anthropic.com

Our vision for Claude has always been to create AI systems that work alongside people and meaningfully enhance their workflows. As a step in this direction, Claude.ai Pro and Team users can now organize their chats into Projects, bringing together curated sets of knowledge and chat activity in one place—with the ability to make their best chats with Claude viewable by teammates. With this new functionality, Claude can enable idea generation, more strategic decision-making, and exceptional results.

Projects are available on Claude.ai for all Pro and Team customers, and can be powered by Claude 3.5 Sonnet, our latest release which outperforms its peers on a wide variety of benchmarks. Each project includes a 200K context window, the equivalent of a 500-page book, so users can add all of the relevant documents, code, and insights to enhance Claude’s effectiveness.
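The “500-page book” comparison checks out against common rules of thumb (roughly 0.75 English words per token and about 300 words per printed page; both are approximations on my part, not Anthropic’s published figures):

tokens = 200_000
words = tokens * 0.75   # ~0.75 English words per token (rule of thumb)
pages = words / 300     # ~300 words per printed page (rule of thumb)
print(f"{tokens:,} tokens is roughly {words:,.0f} words, or about {pages:,.0f} pages")
# 200,000 tokens -> ~150,000 words -> ~500 pages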

 

Microsoft teams with Khan Academy to make its AI tutor free for K-12 educators and will develop a Phi-3 math model — from venturebeat.com by Ken Yeung

Microsoft is partnering with Khan Academy in a multifaceted deal to demonstrate how AI can transform the way we learn. The cornerstone of today’s announcement centers on Khan Academy’s Khanmigo AI agent. Microsoft says it will migrate the bot to its Azure OpenAI Service, enabling the nonprofit educational organization to provide all U.S. K-12 educators free access to Khanmigo.

In addition, Microsoft plans to use its Phi-3 model to help Khan Academy improve math tutoring and collaborate to generate more high-quality learning content while making more courses available within Microsoft Copilot and Microsoft Teams for Education.


One-Third of Teachers Have Already Tried AI, Survey Finds — from the74million.org by Kevin Mahnken
A RAND poll released last month finds English and social studies teachers embracing tools like ChatGPT.

One in three American teachers have used artificial intelligence tools in their teaching at least once, with English and social studies teachers leading the way, according to a RAND Corporation survey released last month. While the new technology isn’t yet transforming how kids learn, both teachers and district leaders expect that it will become an increasingly common feature of school life.


Professors Try ‘Restrained AI’ Approach to Help Teach Writing — from edsurge.com by Jeffrey R. Young
Can ChatGPT make human writing more efficient, or is writing an inherently time-consuming process best handled without AI tools?

This article is part of the guide: For Education, ChatGPT Holds Promise — and Creates Problems.

When ChatGPT emerged a year and a half ago, many professors immediately worried that their students would use it as a substitute for doing their own written assignments — that they’d click a button on a chatbot instead of doing the thinking involved in responding to an essay prompt themselves.

But two English professors at Carnegie Mellon University had a different first reaction: They saw in this new technology a way to show students how to improve their writing skills.

“They start really polishing way too early,” Kaufer says. “And so what we’re trying to do is with AI, now you have a tool to rapidly prototype your language when you are prototyping the quality of your thinking.”

He says the concept is based on writing research from the 1980s that shows that experienced writers spend about 80 percent of their early writing time thinking about whole-text plans and organization and not about sentences.


On Building AI Models for Education — from aieducation.substack.com by Claire Zau
Google’s LearnLM, Khan Academy/MSFT’s Phi-3 Models, and OpenAI’s ChatGPT Edu

This piece primarily breaks down how Google’s LearnLM was built, and takes a quick look at Microsoft/Khan Academy’s Phi-3 and OpenAI’s ChatGPT Edu as alternative approaches to building an “education model” (not necessarily a new model in the latter case, but we’ll explain). Thanks to the public release of their 86-page research paper, we have the most comprehensive view into LearnLM. Our understanding of Microsoft/Khan Academy small language models and ChatGPT Edu is limited to the information provided through announcements, leaving us with less “under the hood” visibility into their development.


AI tutors are quietly changing how kids in the US study, and the leading apps are from China — from techcrunch.com by Rita Liao

Answer AI is among a handful of popular apps that are leveraging the advent of ChatGPT and other large language models to help students with everything from writing history papers to solving physics problems. Of the top 20 education apps in the U.S. App Store, five are AI agents that help students with their school assignments, including Answer AI, according to data from Data.ai on May 21.


Is your school behind on AI? If so, there are practical steps you can take for the next 12 months — from stefanbauschard.substack.com by Stefan Bauschard

If your school (district) or university has not yet made significant efforts to think about how you will prepare your students for a World of AI, I suggest the following steps:

July 24 – Administrator PD & AI Guidance
In July, administrators should receive professional development on AI, if they haven’t already. This should include…

August 24 – Professional Development for Teachers and Staff…
Fall 24 — Parents; Co-curricular; Classroom experiments…
December 24 — Revision to Policy…


New ChatGPT Version Aiming at Higher Ed — from insidehighered.com by Lauren Coffey
ChatGPT Edu, emerging after initial partnerships with several universities, is prompting both cautious optimism and worries.

OpenAI unveiled a new version of ChatGPT focused on universities on Thursday, building on work with a handful of higher education institutions that partnered with the tech giant.

The ChatGPT Edu product, expected to start rolling out this summer, is a platform for institutions intended to give students free access. OpenAI said the artificial intelligence (AI) toolset could be used for an array of education applications, including tutoring, writing grant applications and reviewing résumés.

 

A Guide to the GPT-4o ‘Omni’ Model — from aieducation.substack.com by Claire Zau
The closest thing we have to “Her” and what it means for education / workforce

Today, OpenAI introduced its new flagship model, GPT-4o, that delivers more powerful capabilities and real-time voice interactions to its users. The letter “o” in GPT-4o stands for “Omni”, referring to its enhanced multimodal capabilities. While ChatGPT has long offered a voice mode, GPT-4o is a step change in allowing users to interact with an AI assistant that can reason across voice, text, and vision in real-time.

Facilitating interaction between humans and machines (with reduced latency) represents a “small step for machine, giant leap for machine-kind” moment.

Everyone gets access to GPT-4: “the special thing about GPT-4o is it brings GPT-4 level intelligence to everyone, including our free users”, said CTO Mira Murati. Free users will also get access to custom GPTs in the GPT store, Vision and Code Interpreter. ChatGPT Plus and Team users will be able to start using GPT-4o’s text and image capabilities now

ChatGPT launched a desktop macOS app: it’s designed to integrate seamlessly into anything a user is doing on their keyboard. A PC Windows version is also in the works (notable that a Mac version is being released first given the $10B Microsoft relationship)
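From DSC:
For readers curious what “reasoning across text and vision” looks like from the developer side, here is a minimal sketch using OpenAI’s Python SDK. The image URL is a placeholder of my own, and real-time voice, as demoed, runs through a separate audio pipeline that isn’t shown here:

# pip install openai
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# One request that mixes text and an image; GPT-4o handles both in a single model.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what's on this whiteboard in one sentence."},
            {"type": "image_url", "image_url": {"url": "https://example.com/whiteboard-photo.jpg"}},
        ],
    }],
)

print(response.choices[0].message.content)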


Also relevant, see:

OpenAI Drops GPT-4 Omni, New ChatGPT Free Plan, New ChatGPT Desktop App — from theneuron.ai [podcast]

In a surprise launch, OpenAI dropped GPT-4 Omni, their new leading model. They also made a bunch of paid features in ChatGPT free and announced a new desktop app. Pete breaks down what you should know and what this says about AI.


What really matters — from theneurondaily.com

  • Free users get 16 ChatGPT-4o messages per 3 hours.
  • Plus users get 80 ChatGPT-4o messages per 3 hours.
  • Teams users get 160 ChatGPT-4o messages per 3 hours.
 

io.google/2024



How generative AI expands curiosity and understanding with LearnLM — from blog.google
LearnLM is our new family of models fine-tuned for learning, and grounded in educational research to make teaching and learning experiences more active, personal and engaging.

Generative AI is fundamentally changing how we’re approaching learning and education, enabling powerful new ways to support educators and learners. It’s taking curiosity and understanding to the next level — and we’re just at the beginning of how it can help us reimagine learning.

Today we’re introducing LearnLM: our new family of models fine-tuned for learning, based on Gemini.

On YouTube, a conversational AI tool makes it possible to figuratively “raise your hand” while watching academic videos to ask clarifying questions, get helpful explanations or take a quiz on what you’ve been learning. This even works with longer educational videos like lectures or seminars thanks to the Gemini model’s long-context capabilities. These features are already rolling out to select Android users in the U.S.

Learn About is a new Labs experience that explores how information can turn into understanding by bringing together high-quality content, learning science and chat experiences. Ask a question and it helps guide you through any topic at your own pace — through pictures, videos, webpages and activities — and you can upload files or notes and ask clarifying questions along the way.


Google I/O 2024: An I/O for a new generation — from blog.google

The Gemini era
A year ago on the I/O stage we first shared our plans for Gemini: a frontier model built to be natively multimodal from the beginning, that could reason across text, images, video, code, and more. It marks a big step in turning any input into any output — an “I/O” for a new generation.



Daily Digest: Google I/O 2024 – AI search is here. — from bensbites.beehiiv.com
PLUS: It’s got Agents, Video and more. And, Ilya leaves OpenAI

  • Google is integrating AI into all of its ecosystem: Search, Workspace, Android, etc. In true Google fashion, many features are “coming later this year”. If they ship and perform like the demos, Google will get a serious upper hand over OpenAI/Microsoft.
  • All of the AI features across Google products will be powered by Gemini 1.5 Pro. It’s Google’s best model and one of the top models. A new Gemini 1.5 Flash model is also launched, which is faster and much cheaper.
  • Google has ambitious projects in the pipeline. Those include a real-time voice assistant called Astra, a long-form video generator called Veo, plans for end-to-end agents, virtual AI teammates and more.

 



New ways to engage with Gemini for Workspace — from workspace.google.com

Today at Google I/O we’re announcing new, powerful ways to get more done in your personal and professional life with Gemini for Google Workspace. Gemini in the side panel of your favorite Workspace apps is rolling out more broadly and will use the 1.5 Pro model for answering a wider array of questions and providing more insightful responses. We’re also bringing more Gemini capabilities to your Gmail app on mobile, helping you accomplish more on the go. Lastly, we’re showcasing how Gemini will become the connective tissue across multiple applications with AI-powered workflows. And all of this comes fresh on the heels of the innovations and enhancements we announced last month at Google Cloud Next.


Google’s Gemini updates: How Project Astra is powering some of I/O’s big reveals — from techcrunch.com by Kyle Wiggers

Google is improving its AI-powered chatbot Gemini so that it can better understand the world around it — and the people conversing with it.

At the Google I/O 2024 developer conference on Tuesday, the company previewed a new experience in Gemini called Gemini Live, which lets users have “in-depth” voice chats with Gemini on their smartphones. Users can interrupt Gemini while the chatbot’s speaking to ask clarifying questions, and it’ll adapt to their speech patterns in real time. And Gemini can see and respond to users’ surroundings, either via photos or video captured by their smartphones’ cameras.


Generative AI in Search: Let Google do the searching for you — from blog.google
With expanded AI Overviews, more planning and research capabilities, and AI-organized search results, our custom Gemini model can take the legwork out of searching.


 


Information Age vs Generation Age Technologies for Learning — from opencontent.org by David Wiley

Remember (emphasis DSC)

  • the internet eliminated time and place as barriers to education, and
  • generative AI eliminates access to expertise as a barrier to education.

Just as instructional designs had to be updated to account for all the changes in affordances of online learning, they will need to be dramatically updated again to account for the new affordances of generative AI.


The Curious Educator’s Guide to AI | Strategies and Exercises for Meaningful Use in Higher Ed  — from ecampusontario.pressbooks.pub by Kyle Mackie and Erin Aspenlieder; via Stephen Downes

This guide is designed to help educators and researchers better understand the evolving role of Artificial Intelligence (AI) in higher education. This openly-licensed resource contains strategies and exercises to help foster an understanding of AI’s potential benefits and challenges. We start with a foundational approach, providing you with prompts on aligning AI with your curiosities and goals.

The middle section of this guide encourages you to explore AI tools and offers some insights into potential applications in teaching and research. Along with exposure to the tools, we’ll discuss when and how to effectively build AI into your practice.

The final section of this guide includes strategies for evaluating and reflecting on your use of AI. Throughout, we aim to promote use that is effective, responsible, and aligned with your educational objectives. We hope this resource will be a helpful guide in making informed and strategic decisions about using AI-powered tools to enhance teaching and learning and research.


Annual Provosts’ Survey Shows Need for AI Policies, Worries Over Campus Speech — from insidehighered.com by Ryan Quinn
Many institutions are not yet prepared to help their faculty members and students navigate artificial intelligence. That’s just one of multiple findings from Inside Higher Ed’s annual survey of chief academic officers.

Only about one in seven provosts said their colleges or universities had reviewed the curriculum to ensure it will prepare students for AI in their careers. Thuswaldner said that number needs to rise. “AI is here to stay, and we cannot put our heads in the sand,” he said. “Our world will be completely dominated by AI and, at this point, we ain’t seen nothing yet.”


Is GenAI in education more of a Blackberry or iPhone? — from futureofbeinghuman.com by Andrew Maynard
There’s been a rush to incorporate generative AI into every aspect of education, from K-12 to university courses. But is the technology mature enough to support the tools that rely on it?

In other words, it’s going to mean investing in concepts, not products.

This, to me, is at the heart of an “iPhone mindset” as opposed to a “Blackberry mindset” when it comes to AI in education — an approach that avoids hard wiring in constantly changing technologies, and that builds experimentation and innovation into the very DNA of learning.

For all my concerns here though, maybe there is something to being inspired by the Blackberry/iPhone analogy — not as a playbook for developing and using AI in education, but as a mindset that embraces innovation while avoiding becoming locked in to apps that are detrimentally unreliable and that ultimately lead to dead ends.


Do teachers spot AI? Evaluating the detectability of AI-generated texts among student essays — from sciencedirect.com by Johanna Fleckenstein, Jennifer Meyer, Thorben Jansen, Stefan D. Keller, Olaf Köller, and Jens Möller

Highlights

  • Randomized-controlled experiments investigating novice and experienced teachers’ ability to identify AI-generated texts.
  • Generative AI can simulate student essay writing in a way that is undetectable for teachers.
  • Teachers are overconfident in their source identification.
  • AI-generated essays tend to be assessed more positively than student-written texts.

Can Using a Grammar Checker Set Off AI-Detection Software? — from edsurge.com by Jeffrey R. Young
A college student says she was falsely accused of cheating, and her story has gone viral. Where is the line between acceptable help and cheating with AI?


Use artificial intelligence to get your students thinking critically — from timeshighereducation.com by Urbi Ghosh
When crafting online courses, teaching critical thinking skills is crucial. Urbi Ghosh shows how generative AI can shape the way educators approach this


ChatGPT shaming is a thing – and it shouldn’t be — from futureofbeinghuman.com by Andrew Maynard
There’s a growing tension between early and creative adopters of text based generative AI and those who equate its use with cheating. And when this leads to shaming, it’s a problem.

Excerpt (emphasis DSC):

This will sound familiar to anyone who’s incorporating generative AI into their professional workflows. But there are still many people who haven’t used apps like ChatGPT, are largely unaware of what they do, and are suspicious of them. And yet they’ve nevertheless developed strong opinions around how they should and should not be used.

From DSC:
Yes…that sounds like how many faculty members viewed online learning, even though they had never taught online before.

 

Are Colleges Ready For an Online-Education World Without OPMs? — from edsurge.com by Robert Ubell (Columnist)
Online Program Management companies have helped hundreds of colleges build online degree programs, but the sector is showing signs of strain.

For more than 15 years, a group of companies known as Online Program Management providers, or OPMs, have been helping colleges build online degree programs. And most of them have relied on an unusual arrangement — where the companies put up the financial backing to help colleges launch programs in exchange for a large portion of tuition revenue.

As a longtime administrator of online programs at colleges, I have mixed feelings about the idea of shutting down the model. And the question boils down to this: Are colleges ready for a world without OPMs?


Guy Raz on Podcasts and Passion: Audio’s Ability to Spark Learning — from michaelbhorn.substack.com by Michael B. Horn

This conversation went in a bunch of unexpected directions. And that’s what’s so fun about it. After all, podcasting is all about bringing audio back and turning learning into leisure. And the question Guy and his partner Mindy Thomas asked a while back was: Why not bring kids in on the fun? Guy shared how his studio, Tinkercast, is leveraging the medium to inspire and educate the next generation of problem solvers.

We discussed the power of audio to capture curiosities and foster imagination, how Tinkercast is doing that in and out of the classroom, and how it can help re-engage students in building needed skills at a critical time. Enjoy!



April 2024 Job Cuts Announced by US-Based Companies Fall; More Cuts Attributed to TX DEI Law, AI in April — from challengergray.com

Excerpt (emphasis DSC):

Education
Companies in the Education industry, which includes schools and universities, cut the second-most jobs last month with 8,092 for a total of 17,892. That is a 635% increase from the 2,435 cuts announced during the first four months of 2023.

“April is typically the time school districts are hiring and setting budgets for the next fiscal year. Certainly, there are budgetary constraints, as labor costs rise, but school systems also have a retention and recruitment issue,” said Challenger.


Lifetime college returns differ significantly by major, research finds — from highereddive.com by Lilah Burke
Engineering and computer science showed the best return out of 10 fields of study that were examined.

Dive Brief:

  • The lifetime rate of return for a college education differs significantly by major, but it also varies by a student’s gender and race or ethnicity, according to new peer-reviewed research published in the American Educational Research Journal.
  • A bachelor’s degree in general provides a roughly 9% rate of return for men, and nearly 10% for women, researchers concluded. The majors with the best returns were computer science and engineering.
  • Black, Hispanic and Asian college graduates had slightly higher rates of return than their White counterparts, the study found.
 

Are we ready to navigate the complex ethics of advanced AI assistants? — from futureofbeinghuman.com by Andrew Maynard
An important new paper lays out the importance and complexities of ensuring increasingly advanced AI-based assistants are developed and used responsibly

Last week a behemoth of a paper was released by AI researchers in academia and industry on the ethics of advanced AI assistants.

It’s one of the most comprehensive and thoughtful papers on developing transformative AI capabilities in socially responsible ways that I’ve read in a while. And it’s essential reading for anyone developing and deploying AI-based systems that act as assistants or agents — including many of the AI apps and platforms that are currently being explored in business, government, and education.

The paper — The Ethics of Advanced AI Assistants — is written by 57 co-authors representing researchers at Google DeepMind, Google Research, Jigsaw, and a number of prominent universities that include Edinburgh University, the University of Oxford, and Delft University of Technology. Coming in at 274 pages, this is a massive piece of work. And as the authors persuasively argue, it’s a critically important one at this point in AI development.

From that large paper:

Key questions for the ethical and societal analysis of advanced AI assistants include:

  1. What is an advanced AI assistant? How does an AI assistant differ from other kinds of AI technology?
  2. What capabilities would an advanced AI assistant have? How capable could these assistants be?
  3. What is a good AI assistant? Are there certain values that we want advanced AI assistants to evidence across all contexts?
  4. Are there limits on what AI assistants should be allowed to do? If so, how are these limits determined?
  5. What should an AI assistant be aligned with? With user instructions, preferences, interests, values, well-being or something else?
  6. What issues need to be addressed for AI assistants to be safe? What does safety mean for this class of technologies?
  7. What new forms of persuasion might advanced AI assistants be capable of? How can we ensure that users remain appropriately in control of the technology?
  8. How can people – especially vulnerable users – be protected from AI manipulation and unwanted disclosure of personal information?
  9. Is anthropomorphism for AI assistants morally problematic? If so, might it still be permissible under certain conditions?
 

AI RESOURCES AND TEACHING (Kent State University) — from aiadvisoryboards.wordpress.com

AI Resources and Teaching | Kent State University offers valuable resources for educators interested in incorporating artificial intelligence (AI) into their teaching practices. The university recognizes that the rapid emergence of AI tools presents both challenges and opportunities in higher education.

The AI Resources and Teaching page provides educators with information and guidance on various AI tools and their responsible use within and beyond the classroom. The page covers different areas of AI application, including language generation, visuals, videos, music, information extraction, quantitative analysis, and AI syllabus language examples.


A Cautionary AI Tale: Why IBM’s Dazzling Watson Supercomputer Made a Lousy Tutor — from the74million.org by Greg Toppo
With a new race underway to create the next teaching chatbot, IBM’s abandoned 5-year, $100M ed push offers lessons about AI’s promise and its limits.

For all its jaw-dropping power, Watson the computer overlord was a weak teacher. It couldn’t engage or motivate kids, inspire them to reach new heights or even keep them focused on the material — all qualities of the best mentors.

It’s a finding with some resonance to our current moment of AI-inspired doomscrolling about the future of humanity in a world of ascendant machines. “There are some things AI is actually very good for,” Nitta said, “but it’s not great as a replacement for humans.”

His five-year journey to essentially a dead-end could also prove instructive as ChatGPT and other programs like it fuel a renewed, multimillion-dollar experiment to, in essence, prove him wrong.

To be sure, AI can do sophisticated things such as generating quizzes from a class reading and editing student writing. But the idea that a machine or a chatbot can actually teach as a human can, he said, represents “a profound misunderstanding of what AI is actually capable of.” 

Nitta, who still holds deep respect for the Watson lab, admits, “We missed something important. At the heart of education, at the heart of any learning, is engagement. And that’s kind of the Holy Grail.”

From DSC:
This is why the vision that I’ve been tracking and working on has always said that HUMAN BEINGS will be necessary — they are key to realizing it. Along these lines, here’s a relevant quote:

Another crucial component of a new learning theory for the age of AI would be the cultivation of “blended intelligence.” This concept recognizes that the future of learning and work will involve the seamless integration of human and machine capabilities, and that learners must develop the skills and strategies needed to effectively collaborate with AI systems. Rather than viewing AI as a threat to human intelligence, a blended intelligence approach seeks to harness the complementary strengths of humans and machines, creating a symbiotic relationship that enhances the potential of both.

Per Alexander “Sasha” Sidorkin, Head of the National Institute on AI in Society at California State University Sacramento.

 

Nvidia’s AI boom is only getting started. Just ask CEO Jensen Huang — from fastcompany.com by Harry McCracken
Nvidia’s chips sparked the AI revolution. Now it’s in the business of putting the technology to work in an array of industries.

Nvidia is No. 1 on Fast Company’s list of the World’s 50 Most Innovative Companies of 2024. Explore the full list of companies that are reshaping industries and culture.

Nvidia isn’t just in the business of providing ever-more-powerful computing hardware and letting everybody else figure out what to do with it. Across an array of industries, the company’s technologies, platforms, and partnerships are doing much of the heavy lifting of putting AI to work. In a single week in January 2024, for instance, Nvidia reported that it had begun beta testing its drug discovery platform, demoed software that lets video game characters speak unscripted dialogue, announced deals with four Chinese EV manufacturers that will incorporate Nvidia technology in their vehicles, and unveiled a retail-industry partnership aimed at foiling organized shoplifting.


Johnson & Johnson MedTech Works With NVIDIA to Broaden AI’s Reach in Surgery — from blogs.nvidia.com by David Niewolny

AI — already used to connect, analyze and offer predictions based on operating room data — will be critical to the future of surgery, boosting operating room efficiency and clinical decision-making.

That’s why NVIDIA is working with Johnson & Johnson MedTech to test new AI capabilities for the company’s connected digital ecosystem for surgery. It aims to enable open innovation and accelerate the delivery of real-time insights at scale to support medical professionals before, during and after procedures.

J&J MedTech is in 80% of the world’s operating rooms and trains more than 140,000 healthcare professionals each year through its education programs.


GE and NVIDIA Join Forces to Accelerate Artificial Intelligence Adoption in Healthcare — from nvidianews.nvidia.com

  • New generation of intelligent medical devices will use world’s most advanced AI platform with the goal of improving patient care
  • GE Healthcare is the first medical device company to use the NVIDIA GPU Cloud
  • New Revolution Frontier CT, powered by NVIDIA, is two times faster for image processing, proving performance acceleration has begun

Nvidia Announces Major Deals With Healthcare Companies — from cheddar.com

At the GTC A.I. conference last week, Nvidia launched nearly two dozen new A.I.-powered, health care-focused tools and deals with companies Johnson & Johnson and GE Healthcare for surgery and medical imaging. The move into the health care space for the A.I. company is an effort that’s been under development for a decade.


Nvidia is now powering AI nurses — by Maxwell Zeff / Gizmodo; via Claire Zau
The cheap AI agents offer medical advice to patients over video calls in real-time

 
© 2024 | Daniel Christian