One thing often happens at keynotes and conferences. It surprised me… — from donaldclarkplanb.blogspot.com by Donald Clark

AI is welcomed by those with dyslexia and other learning differences, as it helps to mitigate some of the challenges associated with reading, writing, and processing information. Those who want to ban AI want to destroy the very thing that has helped most with accessibility. Here are 10 ways dyslexics, and others with challenges around text-based learning, can use AI to support their daily activities and learning.

    • Text-to-Speech & Speech-to-Text Tools…
    • Grammar and Spelling Assistants…
    • Comprehension Tools…
    • Visual and Multisensory Tools…
    • …and more

Let’s Make a Movie Teaser With AI — from whytryai.com by Daniel Nest
How to use free generative AI tools to make a teaser trailer.

Here are the steps and the free tools we can use for each.

  1. Brainstorm ideas & flesh out the concept.
    1. Claude 3.5 Sonnet
    2. Google Gemini 1.5 Pro
    3. …or any other free LLM
  2. Create starting frames for each scene.
    1. FLUX.1 Pro
    2. Ideogram
    3. …or any other free text-to-image model
  3. Bring the images to life.
    1. Kling AI
    2. Luma Dream Machine
    3. Runway Gen-2
  4. Generate the soundtrack.
    1. Udio
    2. Suno
  5. Add sound effects.
    1. ElevenLabs Sound Effects
    2. ElevenLabs VideoToSoundEffects
    3. Meta Audiobox
  6. Put everything together.
    1. Microsoft Clipchamp
    2. DaVinci Resolve
    3. …or any other free video editing tool.

Here we go.
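The final step (putting everything together) can also be handled from the command line rather than a GUI editor. Below is a minimal, hypothetical sketch using ffmpeg (which must be installed separately); the filenames are placeholders, and it muxes a single clip with a single audio track rather than performing a full multi-scene edit:

```python
# Minimal sketch: mux a generated video clip with a generated soundtrack
# using ffmpeg (must be installed separately). Filenames are placeholders.

def build_mux_command(video: str, audio: str, output: str) -> list[str]:
    """Return an ffmpeg command that keeps the video stream as-is,
    re-encodes the audio, and trims to the shorter input."""
    return [
        "ffmpeg",
        "-i", video,      # e.g., a clip from Kling, Luma, or Runway
        "-i", audio,      # e.g., a track from Udio or Suno
        "-c:v", "copy",   # copy video without re-encoding
        "-c:a", "aac",    # encode audio for broad player support
        "-shortest",      # stop at whichever input ends first
        output,
    ]

if __name__ == "__main__":
    import subprocess
    cmd = build_mux_command("scene1.mp4", "soundtrack.mp3", "teaser.mp4")
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually run ffmpeg
```

For a real teaser you would first concatenate the generated scene clips (for example, with ffmpeg's concat demuxer) before adding the soundtrack and sound effects.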


Is AI in Schools Promising or Overhyped? Potentially Both, New Reports Suggest — from the74million.org by Greg Toppo; via Claire Zau
One urges educators to prep for an artificial intelligence boom. The other warns that it could all go awry. Together, they offer a reality check.

Are U.S. public schools lagging behind other countries like Singapore and South Korea in preparing teachers and students for the boom of generative artificial intelligence? Or are our educators bumbling into AI half-blind, putting students’ learning at risk?

Or is it, perhaps, both?

Two new reports, coincidentally released on the same day last week, offer markedly different visions of the emerging field: One argues that schools need forward-thinking policies for equitable distribution of AI across urban, suburban and rural communities. The other suggests they need something more basic: a bracing primer on what AI is and isn’t, what it’s good for and how it can all go horribly wrong.


Bite-Size AI Content for Faculty and Staff — from aiedusimplified.substack.com by Lance Eaton
Another two 5-tips videos for faculty and my latest use case: creating FAQs!

I had an opportunity recently to do more of my 15-minute lightning talks. You can see my lightning talks from late winter in this post, or can see all of them on my YouTube channel. These two talks were focused on faculty in particular.


Also from Lance, see:


AI in Education: Leading a Paradigm Shift — from gettingsmart.com by Dr. Tyler Thigpen

Despite possible drawbacks, an exciting wondering has been—What if AI was a tipping point helping us finally move away from a standardized, grade-locked, ranking-forced, batch-processing learning model based on the make-believe idea of “the average man” to a learning model that meets every child where they are and helps them grow from there?

I get that change is indescribably hard and there are risks. But the integration of AI in education isn’t a trend. It’s a paradigm shift that requires careful consideration, ongoing reflection, and a commitment to one’s core values. AI presents us with an opportunity—possibly an unprecedented one—to transform teaching and learning, making it more personalized, efficient, and impactful. How might we seize the opportunity boldly?


California and NVIDIA Partner to Bring AI to Schools, Workplaces — from govtech.com by Abby Sourwine
The latest step in Gov. Gavin Newsom’s plans to integrate AI into public operations across California is a partnership with NVIDIA intended to tailor college courses and professional development to industry needs.

California Gov. Gavin Newsom and tech company NVIDIA joined forces last week to bring generative AI (GenAI) to community colleges and public agencies across the state. The California Community Colleges Chancellor’s Office (CCCCO), NVIDIA and the governor all signed a memorandum of understanding (MOU) outlining how each partner can contribute to education and workforce development, with the goal of driving innovation across industries and boosting their economic growth.


Listen to anything on the go with the highest-quality voices — from elevenlabs.io; via The Neuron
The ElevenLabs Reader App narrates articles, PDFs, ePubs, newsletters, or any other text content. Simply choose a voice from our expansive library, upload your content, and listen on the go.

Per The Neuron

Some cool use cases:

  • Judy Garland can teach you biology while walking to class.
  • James Dean can narrate your steamy romance novel.
  • Sir Laurence Olivier can read you today’s newsletter—just paste the web link and enjoy!

Why it’s important: ElevenLabs shared how major YouTubers are using its dubbing services to expand their content into new regions with voices that actually sound like them (thanks to ElevenLabs’ ability to clone voices).
Oh, and BTW, it’s estimated that up to 20% of the population may have dyslexia. So providing people an option to listen to (instead of read) content, in their own language, wherever they go online can only help increase engagement and communication.


How Generative AI Improves Parent Engagement in K–12 Schools — from edtechmagazine.com by Alexander Slagg
With its ability to automate and personalize communication, generative artificial intelligence is the ideal technological fix for strengthening parent involvement in students’ education.

As generative AI tools populate the education marketplace, the technology’s ability to automate complex, labor-intensive tasks and efficiently personalize communication may finally offer overwhelmed teachers a way to effectively improve parent engagement.

These personalized engagement activities for students and their families can include local events, certification classes and recommendations for books and videos. “Family Feed might suggest courses, such as an Adobe certification,” explains Jackson. “We have over 14,000 courses that we have vetted and can recommend. And we have books and video recommendations for students as well.”

Including personalized student information and an engagement opportunity makes it much easier for parents to directly participate in their children’s education.


Will AI Shrink Disparities in Schools, or Widen Them? — from edsurge.com by Daniel Mollenkamp
Experts predict new tools could boost teaching efficiency — or create an “underclass of students” taught largely through screens.

 

UC Berkeley Law School To Offer Advanced Law Degree Focused On AI — from forbes.com by Michael T. Nietzel; via Greg Lambert

The University of California, Berkeley School of Law has announced that it will offer what it’s calling “the first-ever law degree with a focus on artificial intelligence (AI).” The new AI-focused Master of Laws (LL.M.) program is scheduled to launch in summer 2025.

The program, which will award an AI Law and Regulation certificate for students enrolled in UC Berkeley Law’s LL.M. executive track, is designed for working professionals and can be completed over two summers or through remote study combined with one summer on campus.


Also relevant, see:

Training AI to Mentor Like a Partner: Insights from Dr. Megan Ma — from geeklawblog.com

This week on The Geek in Review, we discuss the future of legal technology with Dr. Megan Ma, a distinguished research fellow and Associate Director of the Stanford Program in Law, Science, and Technology at the Stanford Center for Legal Informatics, also known as Codex. Dr. Ma’s groundbreaking work in integrating generative AI into legal applications takes center stage as she shares her insights on translating legal knowledge into code and the implications of human-machine collaboration in the legal field.

 

College Writing Centers Worry AI Could Replace Them — from edsurge.com by Maggie Hicks
Those who run the centers argue that they could be a hub for teaching AI literacy.

But as generative AI tools like ChatGPT sweep into mainstream business tools, promising to draft properly-formatted text from simple prompts and the click of a button, new questions are rising about what role writing centers should play — or whether they will be needed in the future.

Writing centers need to find a balance between introducing AI into the writing process and keeping the human support that every writer needs, argues Anna Mills, an English instructor at the College of Marin.

AI can serve as a supplement to a human tutor, Mills says. She encourages her students to use MyEssayFeedback, an AI tool that critiques the organization of an essay, the quality of evidence a student has included to support their thesis, or the tone of the writing. Such tools can also evaluate research questions or review a student’s writing based on the rubric for the assignment, she says.

 

Augmented Course Design: Using AI to Boost Efficiency and Expand Capacity — from er.educause.edu by Berlin Fang and Kim Broussard
The emerging class of generative AI tools has the potential to significantly alter the landscape of course development.

Using generative artificial intelligence (GenAI) tools such as ChatGPT, Gemini, or CoPilot as intelligent assistants in instructional design can significantly enhance the scalability of course development. GenAI can significantly improve the efficiency with which institutions develop content that is closely aligned with the curriculum and course objectives. As a result, institutions can more effectively meet the rising demand for flexible and high-quality education, preparing a new generation of future professionals equipped with the knowledge and skills to excel in their chosen fields.1 In this article, we illustrate the uses of AI in instructional design in terms of content creation, media development, and faculty support. We also provide some suggestions on the effective and ethical uses of AI in course design and development. Our perspectives are rooted in medical education, but the principles can be applied to any learning context.

Table 1 summarizes a few low-hanging fruit for AI use in course development.

Table 1. Types of Use of GenAI in Course Development
Practical Use of AI | Use Scenarios and Examples
Inspiration
  • Exploring ideas for instructional strategies
  • Exploring ideas for assessment
  • Course mapping
  • Lesson or unit content planning
Supplementation
  • Text to audio
  • Transcription for audio
  • Alt text auto-generation
  • Design optimization (e.g., using Microsoft PPT Design)
Improvement
  • Improving learning objectives
  • Improving instructional materials
  • Improving course content writing (grammar, spelling, etc.)
Generation
  • Creating a PowerPoint draft using learning objectives
  • Creating peripheral content materials (introductions, conclusions)
  • Creating decorative images for content
Expansion
  • Creating a scenario based on learning objectives
  • Creating a draft of a case study
  • Creating a draft of a rubric



Also see:

10 Ways Artificial Intelligence Is Transforming Instructional Design — from er.educause.edu by Rob Gibson
Artificial intelligence (AI) is providing instructors and course designers with an incredible array of new tools and techniques to improve the course design and development process. However, the intersection of AI and content creation is not new.

I have been telling my graduate instructional design students that AI technology is not likely to replace them any time soon because learning and instruction are still highly personalized and humanistic experiences. However, as these students embark on their careers, they will need to understand how to appropriately identify, select, and utilize AI when developing course content. Examples abound of how instructional designers are experimenting with AI to generate and align student learning outcomes with highly individualized course activities and assessments. Instructional designers are also using AI technology to create and continuously adapt the custom code and power scripts embedded into the learning management system to execute specific learning activities.1 Other useful examples include scripting and editing videos and podcasts.

Here are a few interesting examples of how AI is shaping and influencing instructional design. Some of the tools and resources can be used to satisfy a variety of course design activities, while others are very specific.


Taking the Lead: Why Instructional Designers Should Be at the Forefront of Learning in the Age of AI — from medium.com by Rob Gibson
Education is at a critical juncture and needs to draw leaders from a broader pool, including instructional designers

The world of a medieval stone cutter and a modern instructional designer (ID) may seem separated by a great distance, but I wager any ID who upon hearing the story I just shared would experience an uneasy sense of déjà vu. Take away the outward details, and the ID would recognize many elements of the situation: the days spent in projects that fail to realize the full potential of their craft, the painful awareness that greater things can be built, but are unlikely to occur due to a poverty of imagination and lack of vision among those empowered to make decisions.

Finally, there is the issue of resources. No stone cutter could ever hope to undertake a large-scale enterprise without a multitude of skilled collaborators and abundant materials. Similarly, instructional designers are often departments of one, working in scarcity environments, with limited ability to acquire resources for ambitious projects and — just as importantly — lacking the authority or political capital needed to launch significant initiatives. For these reasons, instructional design has long been a profession caught in an uncomfortable stasis, unable to grow, evolve and achieve its full potential.

That is until generative AI appeared on the scene. While the discourse around AI in education has been almost entirely about its impact on teaching and assessment, there has been a dearth of critical analysis regarding AI’s potential for impacting instructional design.

We are at a critical juncture for AI-augmented learning. We can either stagnate, missing opportunities to support learners while educators continue to debate whether the use of generative AI tools is a good thing, or we can move forward, building a transformative model for learning akin to the industrial revolution’s impact.

Too many professional educators remain bound by traditional methods. The past two years suggest that leaders of this new learning paradigm will not emerge from conventional educational circles. This vacuum of leadership can be filled, in part, by instructional designers, who are prepared by training and experience to begin building in this new learning space.

 

Gemini makes your mobile device a powerful AI assistant — from blog.google
Gemini Live is available today to Advanced subscribers, along with conversational overlay on Android and even more connected apps.

Rolling out today: Gemini Live <– Google swoops in before OpenAI can get their Voice Mode out there
Gemini Live is a mobile conversational experience that lets you have free-flowing conversations with Gemini. Want to brainstorm potential jobs that are well-suited to your skillset or degree? Go Live with Gemini and ask about them. You can even interrupt mid-response to dive deeper on a particular point, or pause a conversation and come back to it later. It’s like having a sidekick in your pocket who you can chat with about new ideas or practice with for an important conversation.

Gemini Live is also available hands-free: You can keep talking with the Gemini app in the background or when your phone is locked, so you can carry on your conversation on the go, just like you might on a regular phone call. Gemini Live begins rolling out today in English to our Gemini Advanced subscribers on Android phones, and in the coming weeks will expand to iOS and more languages.

To make speaking to Gemini feel even more natural, we’re introducing 10 new voices to choose from, so you can pick the tone and style that works best for you.


Per the Rundown AI:
Why it matters: Real-time voice is slowly shifting AI from a tool we text/prompt with, to an intelligence that we collaborate, learn, consult, and grow with. As the world’s anticipation for OpenAI’s unreleased products grows, Google has swooped in to steal the spotlight as the first to lead widespread advanced AI voice rollouts.

Beyond Social Media: Schmidt Predicts AI’s Earth-Shaking Impact — from wallstreetpit.com
The next wave of AI is coming, and if Schmidt is correct, it will reshape our world in ways we are only beginning to imagine.

In a recent Q&A session at Stanford, Eric Schmidt, former CEO and Chairman of search giant Google, offered a compelling vision of the near future in artificial intelligence. His predictions, both exciting and sobering, paint a picture of a world on the brink of a technological revolution that could dwarf the impact of social media.

Schmidt highlighted three key advancements that he believes will converge to create this transformative wave: very large context windows, agents, and text-to-action capabilities. These developments, according to Schmidt, are not just incremental improvements but game-changers that could reshape our interaction with technology and the world at large.



The rise of multimodal AI agents — from 11onze.cat
Technology companies are investing large amounts of money in creating new multimodal artificial intelligence models and algorithms that can learn, reason and make decisions autonomously after collecting and analysing data.

The future of multimodal agents
In practical terms, a multimodal AI agent can, for example, analyse a text while processing an image, spoken language, or an audio clip to give a more complete and accurate response, both through voice and text. This opens up new possibilities in various fields: from education and healthcare to e-commerce and customer service.


AI Change Management: 41 Tactics to Use (August 2024) — from flexos.work by Daan van Rossum
Future-proof companies are investing in driving AI adoption, but many don’t know where to start. The experts recommend these 41 tips for AI change management.

As Matt Kropp told me in our interview, BCG has a 10-20-70 rule for AI at work:

  • 10% is the LLM or algorithm
  • 20% is the software layer around it (like ChatGPT)
  • 70% is the human factor

This 70% is exactly why change management is key in driving AI adoption.

But where do you start?

As I coach leaders at companies like Apple, Toyota, Amazon, L’Oréal, and Gartner in our Lead with AI program, I know that’s the question on everyone’s minds.

I don’t believe in gatekeeping this information, so here are 41 principles and tactics I share with our community members looking for winning AI change management principles.


 

How Generative AI will change what lawyers do — from jordanfurlong.substack.com by Jordan Furlong
As we enter the Age of Accessible Law, a wave of new demand is coming our way — but AI will meet most of the surge. What will be left for lawyers? Just the most valuable and irreplaceable role in law.

AI can already provide actionable professional advice; within the next ten years, if it takes that long, I believe it will offer acceptable legal advice. No one really wants “AI courts,” but soon enough, we’ll have AI-enabled mediation and arbitration, which will have a much greater impact on everyday dispute resolution.

I think it’s dangerous to assume that AI will never be able to do something that lawyers now do. “Never” is a very long time. And AI doesn’t need to replicate the complete arsenal of the most gifted lawyer out there. If a Legal AI can replicate 80% of what a middling lawyer can do, for 10% of the cost, in 1% of the time, that’s all the revolution you’ll need.

From DSC:
It is my sincere hope that AI will open up the floodgates to FAR greater Access to Justice (A2J) in the future.


It’s the Battle of the AI Legal Assistants, As LexisNexis Unveils Its New Protégé and Thomson Reuters Rolls Out CoCounsel 2.0 — from lawnext.com by Bob Ambrogi

It’s not quite BattleBots, but competitors LexisNexis and Thomson Reuters both made significant announcements today involving the development of generative AI legal assistants within their products.

Thomson Reuters, which last year acquired the CoCounsel legal assistant originally developed by Casetext, and which later announced plans to deploy it throughout its product lines, today unveiled what it says is the “supercharged” CoCounsel 2.0.

Meanwhile, LexisNexis said today it is rolling out the commercial preview version of its Protégé Legal AI Assistant, which it describes as a “substantial leap forward in personalized generative AI that will transform legal work.” It is part of the launch of the third generation of Lexis+ AI, the AI-driven legal research platform the company launched last year.


Thomson Reuters Launches CoCounsel 2.0 — from abovethelaw.com by Joe Patrice
New release promises results three times faster than the last version.

It seems like just last year we were talking about CoCounsel 1.0, the generative AI product launched by Casetext and then swiftly acquired by Thomson Reuters. That’s because it was just last year. Since then, Thomson Reuters has worked to marry Casetext’s tool with TR’s treasure trove of data.

It’s not an easy task. A lot of the legal AI conversation glosses over how constructing these tools requires a radical confrontation with the lawyer’s mind. Why do attorneys do what they do every day? Are there seemingly “inefficient” steps that actually serve a purpose? Does an AI “answer” advance the workflow or hinder the research alchemy? As recently as April, Thomson Reuters was busy hyping the fruits of its efforts to get ahead of these challenges.


Though this next item is not necessarily related to legaltech, it’s still relevant to the legal realm:

A Law Degree Is No Sure Thing — from cew.georgetown.edu
Some Law School Graduates Earn Top Dollar, but Many Do Not

Summary
Is law school worth it? A Juris Doctor (JD) offers high median earnings and a substantial earnings boost relative to a bachelor’s degree in the humanities or social sciences—two of the more common fields of study that lawyers pursue as undergraduate students. However, graduates of most law schools carry substantial student loan debt, which dims the financial returns associated with a JD.

A Law Degree Is No Sure Thing: Some Law School Graduates Earn Top Dollar, but Many Do Not finds that the return on investment (ROI) in earnings and career outcomes varies widely across law schools. The median earnings net of debt payments are $72,000 four years after graduation for all law school graduates, but exceed $200,000 at seven law schools. By comparison, graduates of 33 law schools earn less than $55,000 net of debt payments four years after graduation.

From DSC:
A former boss’ husband was starting up a local public defender’s office in Michigan and needed to hire over two dozen people. The salaries were in the $40Ks, she said. This surprised me greatly, as I thought all lawyers were bringing in the big bucks. Clearly, that is not the case. Many lawyers do not make the big bucks, as this report shows:

…graduates of 33 law schools earn less than $55,000 net of debt payments four years after graduation.


Also relevant/see:

 

From DSC:
The above item is simply excellent!!! I love it!



Also relevant/see:

3 new Chrome AI features for even more helpful browsing — from blog.google by Parisa Tabriz
See how Chrome’s new AI features, including Google Lens for desktop and Tab compare, can help you get things done more easily on the web.


On speaking to AI — from oneusefulthing.org by Ethan Mollick
Voice changes a lot of things

So, let’s talk about ChatGPT’s new Advanced Voice mode and the new AI-powered Siri. They are not just different approaches to talking to AI. In many ways, they represent the divide between two philosophies of AI – Copilots versus Agents, small models versus large ones, specialists versus generalists.


Your guide to AI – August 2024 — from nathanbenaich.substack.com by Nathan Benaich and Alex Chalmers


Microsoft says OpenAI is now a competitor in AI and search — from cnbc.com by Jordan Novet

Key Points

  • Microsoft’s annually updated list of competitors now includes OpenAI, a long-term strategic partner.
  • The change comes days after OpenAI announced a prototype of a search engine.
  • Microsoft has reportedly invested $13 billion into OpenAI.


Excerpt from Graham Clay

1. Flux, an open-source text-to-image creator that is comparable to industry leaders like Midjourney, was released by Black Forest Labs (the “original team” behind Stable Diffusion). It is capable of generating high quality text in images (there are tons of educational use cases). You can play with it on their demo page, on Poe, or by running it on your own computer (tutorial here).

Other items re: Flux:

How to FLUX  — from heatherbcooper.substack.com by Heather Cooper
Where to use FLUX online & full tutorial to create a sleek ad in minutes


Also from Heather Cooper:

Introducing FLUX: Open-source text-to-image model

FLUX… has been EVERYWHERE this week, as I’m sure you have seen. Developed by Black Forest Labs, it is an open-source image generation model that’s gaining attention for its ability to rival leading models like Midjourney, DALL·E 3, and SDXL.

What sets FLUX apart is its blend of creative freedom, precision, and accessibility—it’s available across multiple platforms and can be run locally.

Why FLUX Matters
FLUX’s open-source nature makes it accessible to a broad audience, from hobbyists to professionals.

It offers advanced multimodal and parallel diffusion transformer technology, delivering high visual quality, strong prompt adherence, and diverse outputs.

It’s available in 3 models:

  • FLUX.1 [pro]: A high-performance, commercial image synthesis model.
  • FLUX.1 [dev]: An open-weight, non-commercial variant of FLUX.1 [pro].
  • FLUX.1 [schnell]: A faster, distilled version of FLUX.1, operating up to 10x quicker.
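For a quick mental model of those three variants, here is an illustrative Python sketch. The attributes paraphrase the descriptions above (open weights, commercial use, distillation); they are simplifications, not Black Forest Labs' official license terms, so check the actual licenses before relying on them:

```python
# Illustrative only: attributes paraphrase the variant descriptions above,
# not Black Forest Labs' official license terms.

FLUX_VARIANTS = {
    "FLUX.1 [pro]":     {"open_weights": False, "commercial": True,  "distilled": False},
    "FLUX.1 [dev]":     {"open_weights": True,  "commercial": False, "distilled": False},
    "FLUX.1 [schnell]": {"open_weights": True,  "commercial": True,  "distilled": True},
}

def pick_variant(need_open_weights: bool, need_commercial: bool) -> str:
    """Return the first variant satisfying both constraints."""
    for name, attrs in FLUX_VARIANTS.items():
        if need_open_weights and not attrs["open_weights"]:
            continue
        if need_commercial and not attrs["commercial"]:
            continue
        return name
    raise ValueError("no variant fits these constraints")

print(pick_variant(need_open_weights=True, need_commercial=False))
```

Under these assumptions, asking for open weights without commercial use selects the [dev] variant, while asking for open weights plus commercial use falls through to [schnell].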

Daily Digest: Huge (in)Flux of AI videos. — from bensbites.beehiiv.com
PLUS: Review of ChatGPT’s advanced voice mode.

  1. During the weekend, image models made a comeback. Recently released Flux models can create realistic images with near-perfect text—straight from the model, without much patchwork. To get the party going, people are putting these images into video generation models to create pretty trippy videos. I can’t identify half of them as AI, and they’ll only get better. See this tutorial on how to create a video ad for your product.

 


7 cool and handy use cases for the new Claude — from techthatmatters.beehiiv.com by Harsh Makadia

  1. Data visualization
  2. Infographic
  3. Copy the UI of a website
  4. …and more

Achieving Human Level Competitive Robot Table Tennis — from sites.google.com

 


ChatGPT Voice Mode Is Here: Will It Revolutionize AI Communication?


Advanced Voice Mode – FAQ — from help.openai.com
Learn more about our Advanced Voice capabilities.

Advanced Voice Mode on ChatGPT features more natural, real-time conversations that pick up on and respond with emotion and non-verbal cues.

Advanced Voice Mode on ChatGPT is currently in a limited alpha. Please note that it may make mistakes, and access and rate limits are subject to change.


From DSC:
Think about the impacts/ramifications of global, virtual, real-time language translations!!! This type of technology will create very powerful, new affordances in our learning ecosystems — as well as in business communications, with the various governments across the globe, and more!

 

 

Colleges Race to Ready Students for the AI Workplace — from wsj.com by Milla Surjadi (behind a paywall)
Non-techie students are learning basic generative-AI skills as schools revamp their course offerings to be more job-friendly

College students are desperate to add a new skill to their résumés: artificial intelligence.

The rise of generative AI in the workplace and students’ demands for more hirable talents are driving schools to revamp courses and add specialized degrees at speeds rarely seen in higher education. Schools are even going so far as to emphasize that all undergraduates get a taste of the tech, teaching them how to use AI in a given field—as well as its failings and unethical applications.


Speaking of AI, also see Educause’s Artificial Intelligence (AI)-related resources, which includes the following excerpt:

The Basics of AI in Higher Education

 

Welcome to the Digital Writing Lab — Supporting teachers to develop and empower digitally literate citizens.

Digital Writing Lab

About this Project

The Digital Writing Lab is a key component of the Australian national Teaching Digital Writing project, which runs from 2022-2025.

This stage of the broader project involves academic and secondary English teacher collaboration to explore how teachers are conceptualising the teaching of digital writing and what further supports they may need.

Previous stages of the project included archival research reviewing materials related to digital writing in Australia’s National Textbook Collection, and a national survey of secondary English teachers. You can find out more about the whole project via the project blog.

Who runs the project?

Project Lead Lucinda McKnight is an Associate Professor and Australian Research Council (ARC) DECRA Fellow researching how English teachers can connect the teaching of writing to contemporary media and students’ lifeworlds.

She is working with Leon Furze, who holds the doctoral scholarship attached to this project, and Chris Zomer, the project Research Fellow. The project is located in the Research for Educational Impact (REDI) centre at Deakin University, Melbourne.


Teaching Digital Writing is a research project about English today.

 

Using Class Discussions as AI-Proof Assessments — from edutopia.org by Kara McPhillips
Classroom discussions are one way to ensure that students are doing their own work in the age of artificial intelligence. 

I admit it: Grading essays has never topped my list of teaching joys. Sure, the moments when a student finally nails a skill after months of hard work make me shout for joy, startling my nearby colleagues (sorry, Ms. Evans), but by and large, it’s hard work. Yet lately, as generative artificial intelligence (AI) headlines swirl in my mind, a new anxiety has crept into my grading life. I increasingly wonder, am I looking at their hard work?

Do you know when I don’t feel this way? During discussions. A ninth grader wiggling the worn corner of her text, leaning forward with excitement over what she’s cleverly noticed about Kambili, rarely makes me wonder, “Are these her ideas?”

While I’ve always thought discussion is important, AI is elevating that importance. This year, I wonder, how can I best leverage discussion in my classroom?

 

For college students—and for higher ed itself—AI is a required course — from forbes.com by Jamie Merisotis

Some of the nation’s biggest tech companies have announced efforts to reskill people to avoid job losses caused by artificial intelligence, even as they work to perfect the technology that could eliminate millions of those jobs.

It’s fair to ask, however: What should college students and prospective students, weighing their choices and possible time and financial expenses, think of this?

The news this spring was encouraging for people seeking to reinvent their careers to grab middle-class jobs and a shot at economic security.

 


Addressing Special Education Needs With Custom AI Solutions — from teachthought.com
AI can offer many opportunities to create more inclusive and effective learning experiences for students with diverse learning profiles.

For too long, students with learning disabilities have struggled to navigate a traditional education system that often fails to meet their unique needs. But what if technology could help bridge the gap, offering personalized support and unlocking the full potential of every learner?

Artificial intelligence (AI) is emerging as a powerful ally in special education, offering many opportunities to create more inclusive and effective learning experiences for students with diverse learning profiles.



11 Summer AI Developments Important to Educators — from stefanbauschard.substack.com by Stefan Bauschard
Equity demands that we help students prepare to thrive in an AI-World

*SearchGPT
*Smaller & on-device (phones, glasses) AI models
*AI TAs
*Access barriers decline, equity barriers grow
*Claude Artifacts and Projects
*Agents, and Agent Teams of a million+
*Humanoid robots & self-driving cars
*AI Curricular integration
*Huge video and video-segmentation gains
*Writing Detectors — The final blow
*AI Unemployment, Student AI anxiety, and forward-thinking approaches
*Alternative assessments


Academic Fracking: When Publishers Sell Scholars Work to AI — from aiedusimplified.substack.com by Lance Eaton
Further discussion of publisher practices selling scholars’ work to AI companies

Last week, I explored AI and academic publishing in response to an article that came out a few weeks ago about a deal Taylor & Francis made to sell their books to Microsoft and one other AI company (unnamed) for a boatload of money.

Since then, two more pieces have been widely shared, including this piece from Inside Higher Ed by Kathryn Palmer (for which I was interviewed and in which I am mentioned) and this piece from the Chronicle of Higher Ed by Christa Dutton. Both pieces try to cover the different sides: talking to authors, scanning the commentary online, finding some experts to consult, and talking to the publishers. It’s one of those topics that can feel really important, and also probably only to the very small number of folks who find themselves thinking about academic publishing, scholarly communication, and generative AI.


At the Crossroads of Innovation: Embracing AI to Foster Deep Learning in the College Classroom — from er.educause.edu by Dan Sarofian-Butin
AI is here to stay. How can we, as educators, accept this change and use it to help our students learn?

The Way Forward
So now what?

In one respect, we already have a partial answer. Over the last thirty years, there has been a dramatic shift from a teaching-centered to a learning-centered education model. High-impact practices, such as service learning, undergraduate research, and living-learning communities, are common and embraced because they help students see the real-world connections of what they are learning and make learning personal.11

Therefore, I believe we must double down on a learning-centered model in the age of AI.

The first step is to fully and enthusiastically embrace AI.

The second step is to find the “jagged technological frontier” of using AI in the college classroom.




Futures Thinking in Education — from gettingsmart.com by Getting Smart Staff

Key Points

  • Educators should leverage these tools to prepare for rapid changes driven by technology, climate, and social dynamics.
  • Cultivating empathy for future generations can help educators design more impactful and forward-thinking educational practices.
 

Per the Rundown AI:

Why it matters: AI is slowly shifting from a tool we text/prompt with, to an intelligence that we collaborate, learn, and grow with. Advanced Voice Mode’s ability to understand and respond to emotions in real-time convos could also have huge use cases in everything from customer service to mental health support.

Also relevant/see:


Creators to Have Personalized AI Assistants, Meta CEO Mark Zuckerberg Tells NVIDIA CEO Jensen Huang — from blogs.nvidia.com by Brian Caulfield
Zuckerberg and Huang explore the transformative potential of open source AI, the launch of AI Studio, and exchange leather jackets at SIGGRAPH 2024.

“Every single restaurant, every single website will probably, in the future, have these AIs …” Huang said.

“…just like every business has an email address and a website and a social media account, I think, in the future, every business is going to have an AI,” Zuckerberg responded.

More broadly, the advancement of AI across a broad ecosystem promises to supercharge human productivity, for example by giving every human on earth a digital assistant (or assistants) that they can interact with quickly and fluidly, allowing people to live richer lives.



From DSC:
Today was a MUCH better day for Nvidia, however (up 12.81%). But it’s been a very volatile stock in the last several weeks, as people and institutions ask where the ROIs are going to come from.






9 compelling reasons to learn how to use AI Chatbots — from interestingengineering.com by Atharva Gosavi
AI chatbots are conversational agents that can act on your behalf and converse with humans – a futuristic novelty that is already getting people excited about its potential to improve efficiency.

7. Accessibility and inclusivity
Chatbots can be designed to support multiple languages and accessibility needs, making services more inclusive. They can cater to users with disabilities by providing voice interaction capabilities and simplifying access to information. Understanding how to develop inclusive chatbots can help you contribute to making technology more accessible to everyone, a crucial aspect in today’s diverse society.
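The accessibility idea above can be sketched in a few lines of code. This is a minimal illustration only, not the article's implementation: the profile fields (`prefers_audio`, `simplified_language`, `lang`) and the function name are assumptions invented for the example.

```python
def render_reply(text: str, profile: dict) -> dict:
    """Adapt a chatbot reply to a user's stated accessibility preferences."""
    reply = {"text": text}
    # Users who prefer audio get a flag telling the client app to speak the
    # reply with a text-to-speech engine instead of only displaying it.
    if profile.get("prefers_audio"):
        reply["speak"] = True
    # Simplified-language mode: keep only the first two sentences so the
    # message is shorter and easier to process.
    if profile.get("simplified_language"):
        sentences = [s.strip() for s in text.split(".") if s.strip()]
        reply["text"] = ". ".join(sentences[:2]) + "."
    # Record the target language so a downstream translation step can run.
    reply["lang"] = profile.get("lang", "en")
    return reply

print(render_reply(
    "Your appointment is booked. A confirmation email is on its way. Reply HELP for options.",
    {"prefers_audio": True, "simplified_language": True, "lang": "es"},
))
```

The point of the sketch is the design choice: accessibility adaptations live in one rendering layer, so the same chatbot logic can serve voice-first, simplified-text, and multilingual users without separate bots.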

8. Future-proofing your skills
AI and automation are the future of work. Learning to build AI chatbots is a great way to future-proof your skill set, and given AI’s rising trajectory, it will be an in-demand skill in the market in the years to come. Staying ahead of technological trends is a great way to ensure you remain relevant and competitive in the job market.


Top 7 generative AI use cases for business — from cio.com by Grant Gross
Advanced chatbots, digital assistants, and coding helpers seem to be some of the sweet spots for gen AI use so far in business.

Many AI experts say the current use cases for generative AI are just the tip of the iceberg. More use cases will present themselves as gen AIs get more powerful and users get more creative with their experiments.

However, a handful of gen AI use cases are already bubbling up. Here’s a look at the most popular and promising.

 

The resistance to AI in education isn’t really about learning — from medium.com by Peter Shea


A quick comment first from DSC:
Peter Shea gives us some interesting perspectives here. His thoughts should give many of us fodder for our own further reflection.


This reaction underscores a deeper issue: the resistance to AI in education is not truly about learning. It reflects a reluctance to re-evaluate the traditional roles of educators and to embrace the opportunities AI offers to enhance the learning experience.

In order to thrive in the learning ecosystem that will evolve in the Age of AI, the teaching profession needs to do some difficult but essential re-evaluation of its role, in order to better understand where it can provide the best value to learners. This requires confronting some comforting myths and uncomfortable truths.

Problem #2: The Closed World of Academic Culture
In addition, many teachers have spent little time working in non-academic professions. This is especially true for college instructors, who must devote five to seven years to graduate education before obtaining their first full-time position, and thus have little time to explore careers outside academia. This common lack of non-academic work experience heightens the anxiety that educators feel when contemplating the potential impact of generative AI on their work lives.


Also see this related posting:

Majority of Grads Wish They’d Been Taught AI in College — from insidehighered.com by Lauren Coffey
A new survey shows 70 percent of graduates think generative AI should be incorporated into courses. More than half said they felt unprepared for the workforce.

A majority of college graduates believe generative artificial intelligence tools should be incorporated into college classrooms, with more than half saying they felt unprepared for the workforce, according to a new survey from Cengage Group, an education-technology company.

The survey, released today, found that 70 percent of graduates believe basic generative AI training should be integrated into courses; 55 percent said their degree programs did not prepare them to use the new technology tools in the workforce.

 


“Who to follow in AI” in 2024? [Part I] — from ai-supremacy.com by Michael Spencer [some of posting is behind a paywall]
#1-20 [of 150] – I combed the internet and found the best sources of AI insights, education, and articles. LinkedIn | Newsletters | X | YouTube | Substack | Threads | Podcasts



AI In Medicine: 3 Future Scenarios From Utopia To Dystopia — from medicalfuturist.com by Andrea Koncz
There’s a vast difference between baseless fantasizing and realistic forward planning. Structured methodologies help us learn how to “dream well”.

Key Takeaways

  • We’re often told that daydreaming and envisioning the future are a waste of time. But this notion is misguided.
  • We all instinctively plan for the future in small ways, like organizing a trip or preparing for a dinner party. This same principle can be applied to larger-scale issues, and smart planning does bring better results.
  • We show you a method that allows us to think “well” about the future on a larger scale so that it better meets our needs.

Adobe Unveils Powerful New Innovations in Illustrator and Photoshop Unlocking New Design Possibilities for Creative Pros — from news.adobe.com

  • Latest Illustrator and Photoshop releases accelerate creative workflows, save pros time and empower designers to realize their visions faster
  • New Firefly-enabled features like Generative Shape Fill in Illustrator along with the Dimension Tool, Mockup, Text to Pattern, the Contextual Taskbar and performance enhancement tools accelerate productivity and free up time so creative pros can dive deeper into the parts of their work they love
  • Photoshop introduces the all-new Selection Brush Tool and the general availability of Generate Image, the Adjustment Brush Tool, and other workflow enhancements, empowering creators to make complex edits and unique designs


Nike is using AI to turn athletes’ dreams into shoes — from axios.com by Ina Fried

Zoom in: Nike used genAI for ideation, including using a variety of prompts to produce images with different textures, materials and color to kick off the design process.

What they’re saying: “It’s a new way for us to work,” Nike lead footwear designer Juliana Sagat told Axios during a media tour of the showcase on Tuesday.


AI meets ‘Do no harm’: Healthcare grapples with tech promises — from finance.yahoo.com by Maya Benjamin

Major companies are moving at high speed to capture the promises of artificial intelligence in healthcare while doctors and experts attempt to integrate the technology safely into patient care.

“Healthcare is probably the most impactful utility of generative AI that there will be,” declared Kimberly Powell, vice president of healthcare at AI hardware giant Nvidia (NVDA), at the company’s AI Summit in June. Nvidia has partnered with Roche’s Genentech (RHHBY) to enhance drug discovery in the pharmaceutical industry, among other investments in healthcare companies.


Mistral reignites this week’s LLM rivalry with Large 2 (source) — from superhuman.ai

Today, we are announcing Mistral Large 2, the new generation of our flagship model. Compared to its predecessor, Mistral Large 2 is significantly more capable in code generation, mathematics, and reasoning. It also provides much stronger multilingual support and advanced function-calling capabilities.


Meta releases the biggest and best open-source AI model yet — from theverge.com by Alex Heath
Llama 3.1 outperforms OpenAI and other rivals on certain benchmarks. Now, Mark Zuckerberg expects Meta’s AI assistant to surpass ChatGPT’s usage in the coming months.

Back in April, Meta teased that it was working on a first for the AI industry: an open-source model with performance that matched the best private models from companies like OpenAI.

Today, that model has arrived. Meta is releasing Llama 3.1, the largest-ever open-source AI model, which the company claims outperforms GPT-4o and Anthropic’s Claude 3.5 Sonnet on several benchmarks. It’s also making the Llama-based Meta AI assistant available in more countries and languages while adding a feature that can generate images based on someone’s specific likeness. CEO Mark Zuckerberg now predicts that Meta AI will be the most widely used assistant by the end of this year, surpassing ChatGPT.


4 ways to boost ChatGPT — from wondertools.substack.com by Jeremy Caplan & The PyCoach
Simple tactics for getting useful responses

To help you make the most of ChatGPT, I’ve invited & edited today’s guest post from the author of a smart AI newsletter called The Artificial Corner. I appreciate how Frank Andrade pushes ChatGPT to produce better results with four simple, clever tactics. He offers practical examples to help us all use AI more effectively.

Frank Andrade: Most of us fail to make the most of ChatGPT.

  1. We omit examples in our prompts.
  2. We fail to assign roles to ChatGPT to guide its behavior.
  3. We let ChatGPT guess instead of providing it with clear guidance.

If you rely on vague prompts, learning how to create high-quality instructions will get you better results. It’s a skill often referred to as prompt engineering. Here are several techniques to get you to the next level.
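The three tactics above (adding examples, assigning roles, and giving clear guidance instead of letting the model guess) can be sketched as a chat-message list, the format most chat LLM APIs accept. This is an illustrative sketch, not Frank Andrade's own method: the function name, role text, and example strings are all invented for the demonstration.

```python
def build_messages(role, guidance, examples, task):
    """Assemble a prompt with a role, explicit guidance, and few-shot examples."""
    # Assign a role and state the instructions up front, so the model does
    # not have to guess what behavior or output format we want.
    messages = [{"role": "system", "content": f"You are {role}. {guidance}"}]
    # Few-shot examples: each (input, output) pair shows the model the exact
    # kind of response we expect.
    for user_text, assistant_text in examples:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    # Finally, the actual task.
    messages.append({"role": "user", "content": task})
    return messages

msgs = build_messages(
    role="an experienced copy editor",
    guidance="Rewrite each sentence to be concise. Return only the rewrite.",
    examples=[("It is the case that sales went up.", "Sales went up.")],
    task="There is a possibility that we might be late.",
)
```

A list like `msgs` can then be passed to whichever chat model you use; the structure (system role, worked examples, then the real request) is the same regardless of provider.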

 
© 2024 | Daniel Christian