Boosting Student Engagement with Interactive and Practical Teaching Methods — from campustechnology.com by Dr. Lucas Long

One of my biggest goals as an educator is to show students how the material they learn in class can be applied to real-world situations. In my finance courses, this often means taking what we’re learning about financial calculations and connecting it to decisions they’ll have to make as adults. For example, I’ve used real-life scenarios like buying a car with a loan, paying off student debt, saving for a wedding, or calculating mortgage payments for a future home purchase. I even use salary data to show students what they could realistically afford given average salaries after graduation, helping them relate to the financial decisions they will face after college.
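These scenarios all reduce to one piece of arithmetic. As a quick sketch of the calculation behind the car-loan and mortgage examples, the standard amortization formula gives the fixed monthly payment on a loan; the dollar amounts below are hypothetical figures chosen only for illustration:

```python
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard amortization formula: M = P * r(1+r)^n / ((1+r)^n - 1),
    where r is the monthly rate and n the number of monthly payments."""
    r = annual_rate / 12      # monthly interest rate
    n = years * 12            # total number of payments
    if r == 0:
        return principal / n  # no interest: just divide evenly
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

# Hypothetical example figures, for illustration only:
# a $25,000 car loan at 6% APR over 5 years
print(f"Car loan: ${monthly_payment(25_000, 0.06, 5):,.2f}/month")

# a $300,000 mortgage at 7% APR over 30 years
print(f"Mortgage: ${monthly_payment(300_000, 0.07, 30):,.2f}/month")
```

Running the sketch shows how sensitive the payment is to the rate and the term, which is exactly the kind of comparison students can explore with a financial calculator.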

These practical examples don’t just keep students engaged; they also demonstrate the immediate value of learning financial principles. I often hear students express frustration when they feel like they’re learning concepts that won’t apply to their lives. But when I use real scenarios and provide tools like financial calculators to show them exactly how they’ll use this knowledge in their future, their attitude changes. They become more motivated to engage with the material because they see its relevance beyond the classroom.

 

Voice and Trust in Autonomous Learning Experiences — from learningguild.com by Bill Brandon

This article seeks to apply some lessons from brand management to learning design at a high level. Throughout the rest of this article, it is essential to remember that the context is an autonomous, interactive learning experience. The experience is created adaptively by Gen AI or (soon enough) by agents, not by rigid scripts. It may be that an AI will choose to present prewritten texts or prerecorded videos from a content library according to the human users’ responses or questions. Still, the overall experience will be different for each user. It will be more like a conversation than a book.

In summary, while AI chatbots have the potential to enhance learning experiences, their acceptance and effectiveness depend on several factors, including perceived usefulness, ease of use, trust, relational factors, perceived risk, and enjoyment. 

Personalization and building trust are essential for maintaining user engagement and achieving positive learning outcomes. The right “voice” for autonomous AI or a chatbot can enhance trust by making interactions more personal, consistent, and empathetic.

 

FlexOS’ Stay Ahead Edition #43 — from flexos.work

People started discussing what they could do with NotebookLM after Google launched Audio Overview, which lets you listen to two hosts talking in depth about the documents you upload. Here's what it can do:

  • Summarization: Automatically generate summaries of uploaded documents, highlighting key topics and suggesting relevant questions.
  • Question Answering: Users can ask NotebookLM questions about their uploaded documents, and answers will be provided based on the information contained within them.
  • Idea Generation: NotebookLM can assist with brainstorming and developing new ideas.
  • Source Grounding: A big plus against AI chatbot hallucination, NotebookLM allows users to ground the responses in specific documents they choose.
  • …plus several other items

The posting also lists several ideas to try with NotebookLM, such as:

Idea 2: Study Companion

  • Upload all your course materials and ask NotebookLM to turn them into Question-and-Answer format, a glossary, or a study guide.
  • Get a breakdown of the course materials to understand them better.

Google’s NotebookLM: A Game-Changer for Education and Beyond — from ai-supremacy.com by Michael Spencer and Nick Potkalitsky
AI Tools: Breaking down Google’s latest AI tool and its implications for education.

“Google’s AI note-taking app NotebookLM can now explain complex topics to you out loud”

With more immersive text-to-video and audio products soon available, and with the rise of apps like Suno AI, how we "experience" generative AI is also changing from the chatbot of two years ago to a more multimodal educational journey. The AI tools on the research and curation side are also starting to reflect these advancements.


Meet Google NotebookLM: 10 things to know for educators — from ditchthattextbook.com by Matt Miller

1. Upload a variety of sources for NotebookLM to use. 
You can use …

  • websites
  • PDF files
  • any text you’ve copied
  • Google Docs and Slides
  • even Markdown

You can’t link it to YouTube videos, but you can copy/paste the transcript (and maybe type a little context about the YouTube video before pasting the transcript).

2. Ask it to create resources.
3. Create an audio summary.
4. Chat with your sources.
5. Save (almost) everything. 


NotebookLM summarizes my dissertation — from darcynorman.net by D’Arcy Norman, PhD

I finally tried out Google’s newly-announced NotebookLM generative AI application. It provides a set of LLM-powered tools to summarize documents. I fed it my dissertation, and I'm surprised at how useful the output is.

The most impressive tool creates a podcast episode, complete with dual hosts in conversation about the document. First – these are AI-generated hosts. Synthetic voices, speaking for synthetic hosts. And holy moly is it effective. Second – although I’d initially thought the conversational summary would be a dumb gimmick, it is surprisingly powerful.


4 Tips for Designing AI-Resistant Assessments — from techlearning.com by Steve Baule and Erin Carter
As AI continues to evolve, instructors must modify their approach by designing meaningful, rigorous assessments.

As instructors revise assessments to make them resistant to being generated by AI tools with little student input, they should consider the following principles:

  • Incorporate personal experiences and local content into assignments
  • Ask students for multi-modal deliverables
  • Assess the developmental benchmarks for assignments and transition assignments further up Bloom’s Taxonomy
  • Consider real-time and oral assignments

Google CEO Sundar Pichai announces $120M fund for global AI education — from techcrunch.com by Anthony Ha

He added that he wants to avoid a global “AI divide” and that Google is creating a $120 million Global AI Opportunity Fund through which it will “make AI education and training available in communities around the world” in partnership with local nonprofits and NGOs.


Educators discuss the state of creativity in an AI world — from gettingsmart.com by Joe & Kristin Merrill, LaKeshia Brooks, Dominique’ Harbour, Erika Sandstrom

Key Points

  • AI allows for a more personalized learning experience, enabling students to explore creative ideas without traditional classroom limitations.
  • The focus of technology integration should be on how the tool is used within lessons, not just the tool itself

Addendum on 9/27/24:

Google’s NotebookLM enhances AI note-taking with YouTube, audio file sources, sharable audio discussions — from techcrunch.com by Jagmeet Singh

Google on Thursday announced new updates to its AI note-taking and research assistant, NotebookLM, allowing users to get summaries of YouTube videos and audio files and even create sharable AI-generated audio discussions.

NotebookLM adds audio and YouTube support, plus easier sharing of Audio Overviews — from blog.google

 

Gemini makes your mobile device a powerful AI assistant — from blog.google
Gemini Live is available today to Advanced subscribers, along with conversational overlay on Android and even more connected apps.

Rolling out today: Gemini Live <— Google swoops in before OpenAI can get their Voice Mode out there
Gemini Live is a mobile conversational experience that lets you have free-flowing conversations with Gemini. Want to brainstorm potential jobs that are well-suited to your skillset or degree? Go Live with Gemini and ask about them. You can even interrupt mid-response to dive deeper on a particular point, or pause a conversation and come back to it later. It’s like having a sidekick in your pocket who you can chat with about new ideas or practice with for an important conversation.

Gemini Live is also available hands-free: You can keep talking with the Gemini app in the background or when your phone is locked, so you can carry on your conversation on the go, just like you might on a regular phone call. Gemini Live begins rolling out today in English to our Gemini Advanced subscribers on Android phones, and in the coming weeks will expand to iOS and more languages.

To make speaking to Gemini feel even more natural, we’re introducing 10 new voices to choose from, so you can pick the tone and style that works best for you.


Per the Rundown AI:
Why it matters: Real-time voice is slowly shifting AI from a tool we text/prompt with, to an intelligence that we collaborate, learn, consult, and grow with. As the world’s anticipation for OpenAI’s unreleased products grows, Google has swooped in to steal the spotlight as the first to lead widespread advanced AI voice rollouts.

Beyond Social Media: Schmidt Predicts AI’s Earth-Shaking Impact — from wallstreetpit.com
The next wave of AI is coming, and if Schmidt is correct, it will reshape our world in ways we are only beginning to imagine.

In a recent Q&A session at Stanford, Eric Schmidt, former CEO and Chairman of search giant Google, offered a compelling vision of the near future in artificial intelligence. His predictions, both exciting and sobering, paint a picture of a world on the brink of a technological revolution that could dwarf the impact of social media.

Schmidt highlighted three key advancements that he believes will converge to create this transformative wave: very large context windows, agents, and text-to-action capabilities. These developments, according to Schmidt, are not just incremental improvements but game-changers that could reshape our interaction with technology and the world at large.



The rise of multimodal AI agents — from 11onze.cat
Technology companies are investing large amounts of money in creating new multimodal artificial intelligence models and algorithms that can learn, reason and make decisions autonomously after collecting and analysing data.

The future of multimodal agents
In practical terms, a multimodal AI agent can, for example, analyse a text while processing an image, spoken language, or an audio clip to give a more complete and accurate response, both through voice and text. This opens up new possibilities in various fields: from education and healthcare to e-commerce and customer service.


AI Change Management: 41 Tactics to Use (August 2024) — from flexos.work by Daan van Rossum
Future-proof companies are investing in driving AI adoption, but many don’t know where to start. The experts recommend these 41 tips for AI change management.

As Matt Kropp told me in our interview, BCG has a 10-20-70 rule for AI at work:

  • 10% is the LLM or algorithm
  • 20% is the software layer around it (like ChatGPT)
  • 70% is the human factor

This 70% is exactly why change management is key in driving AI adoption.

But where do you start?

As I coach leaders at companies like Apple, Toyota, Amazon, L’Oréal, and Gartner in our Lead with AI program, I know that’s the question on everyone’s minds.

I don’t believe in gatekeeping this information, so here are 41 principles and tactics I share with our community members looking for winning AI change management principles.


 

What to Know About Buying A Projector for School — by Luke Edwards
Buy the right projector for school with these helpful tips and guidance.

Picking the right projector for school can be a tough decision, as the types and prices range pretty widely. From affordable options to professional-grade pricing, there are many choices, and performance varies just as widely. This guide aims to be the solution by offering all you need to know about buying the right projector for your school.

Luke covers a variety of topics including:

  • Types of projectors
  • Screen quality
  • Light type
  • Connectivity
  • Pricing

From DSC:
I posted this because Luke covered a variety of topics — and if you’re set on going with a projector, this is a solid article. But I hesitated to post this, as I’m not sure of the place that projectors will have in the future of our learning spaces. With voice-enabled apps and appliances continuing to be more prevalent — along with the presence of AI-based human-computer interactions and intelligent systems — will projectors be the way to go? Will enhanced interactive whiteboards be the way to go? Will there be new types of displays? I’m not sure. Time will tell.

 
 

Introducing Copilot+ PCs — from blogs.microsoft.com

[On May 20th], at a special event on our new Microsoft campus, we introduced the world to a new category of Windows PCs designed for AI, Copilot+ PCs.

Copilot+ PCs are the fastest, most intelligent Windows PCs ever built. With powerful new silicon capable of an incredible 40+ TOPS (trillion operations per second), all-day battery life and access to the most advanced AI models, Copilot+ PCs will enable you to do things you can’t on any other PC. Easily find and remember what you have seen in your PC with Recall, generate and refine AI images in near real-time directly on the device using Cocreator, and bridge language barriers with Live Captions, translating audio from 40+ languages into English.

From DSC:
As a first off-the-hip look, Recall could be fraught with possible security/privacy-related issues. But what do I know? The Neuron states “Microsoft assures that everything Recall sees remains private.” Ok…


From The Rundown AI concerning the above announcements:

The details:

  • A new system enables Copilot+ PCs to run AI workloads up to 20x faster and 100x more efficiently than traditional PCs.
  • Windows 11 has been rearchitected specifically for AI, integrating the Copilot assistant directly into the OS.
  • New AI experiences include a new feature called Recall, which allows users to search for anything they’ve seen on their screen with natural language.
  • Copilot’s new screen-sharing feature allows AI to watch, hear, and understand what a user is doing on their computer and answer questions in real-time.
  • Copilot+ PCs will start at $999, and ship with OpenAI’s latest GPT-4o models.

Why it matters: Tony Stark’s all-powerful JARVIS AI assistant is getting closer to reality every day. Once Copilot, ChatGPT, Project Astra, or anyone else can not only respond but start executing tasks autonomously, things will start getting really exciting — and likely initiate a whole new era of tech work.


 

Hello GPT-4o — from openai.com
We’re announcing GPT-4o, our new flagship model that can reason across audio, vision, and text in real time.

GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, image, and video and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models.

Example topics covered here:

  • Two GPT-4os interacting and singing
  • Languages/translation
  • Personalized math tutor
  • Meeting AI
  • Harmonizing and creating music
  • Providing inflection, emotions, and a human-like voice
  • Understanding what the camera is looking at and integrating it into the AI’s responses
  • Providing customer service

With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. Because GPT-4o is our first model combining all of these modalities, we are still just scratching the surface of exploring what the model can do and its limitations.










 

 

Guiding Students in Special Education to Generate Ideas for Writing — from edutopia.org by Erin Houghton
When students are stuck, breaking the brainstorming stage down into separate steps can help them get started writing.

Students who first generate ideas about a topic—access what they know about it—more easily write their outlines and drafts for the bigger-picture assignment. For Sally, brainstorming was too overwhelming as an initial step, so we started off by naming examples. I gave Sally a topic—name ways characters in Charlotte’s Web helped one another—she named examples of things (characters), and we generated a list of ways those characters helped one another.

IMPLEMENTING BRAINSTORMING AS SKILL BUILDING
This “naming” strategy is easy to implement with individual students or in groups. These are steps to get you started.

Step 1. Introduce the student to the exercise.
Step 2. Select a topic for practice.


[Opinion] It’s okay to play: How ‘play theory’ can revitalize U.S. education — from hechingerreport.org by Tyler Samstag
City planners are recognizing that play and learning are intertwined and turning public spaces into opportunities for active learning

When we’re young, playing and learning are inseparable.

Simple games like peekaboo and hide-and-seek help us learn crucial lessons about time, anticipation and cause and effect. We discover words, numbers, colors and sounds through toys, puzzles, storybooks and cartoons. Everywhere we turn, there’s something fun to do and something new to learn.

Then, somewhere around early elementary school, learning and play officially become separated for life.

Suddenly, learning becomes a task that only takes place in proper classrooms with the help of textbooks, homework and tests. Meanwhile, play becomes a distraction that we’re only allowed to indulge in during our free time, often by earning it as a reward for studying. As a result, students tend to grow up feeling as if learning is a stressful chore while playing is a reward.

Similar interactive learning experiences are popping up in urban areas from California to the East Coast, with equally promising results: art, games and music are being incorporated into green spaces, public parks, transportation stations, laundromats and more.


And on a somewhat related note, also see:


Though meant for higher ed, this is also applicable to the area of pedagogy within K12:

Space to fail. And learn — from educationalist.substack.com by Alexandra Mihai
I want to use today’s newsletter to talk about how we can help students to own their mistakes and really learn from them, so I’m sharing some thoughts, some learning design ideas and some resources…

10 ideas to make failure a learning opportunity

  • Start with yourself:
  • Admit when you don’t know something
  • Try to come up with “goal free problems”
  • Always dig deeper:
  • Encourage practice:
 


 
  1. The GPT-4 Browser That Will Change Your Search Game — from noise.beehiiv.com by Alex Banks
    Why Microsoft Has The ‘Edge’ On Google

Excerpts:

Microsoft has launched a GPT-4 enhanced Edge browser.

By integrating OpenAI’s GPT-4 technology with Microsoft Edge, you can now use ChatGPT as a copilot in your Bing browser. This delivers superior search results, generates content, and can even transform your copywriting skills (read on to find out how).

Benefits mentioned include: Better Search, Complete Answers, and Creative Spark.

The new interactive chat feature means you can get the complete answer you are looking for by refining your search by asking for more details, clarity, and ideas.

From DSC:
I have to say that since the late ’90s, I haven’t been a big fan of web browsers from Microsoft. (I don’t like how Microsoft unfairly buried Netscape Navigator and the folks who had out-innovated them during that time.) As such, I don’t use Edge, so I can’t fully comment on the above article.

But I do have to say that this is the type of thing that may make me reevaluate my stance regarding Microsoft’s browsers. Integrating GPT-4 into their search/chat functionalities seems like it would be a very solid, strategic move — at least as of late April 2023.


Speaking of new items coming from Microsoft, also see:

Microsoft makes its AI-powered Designer tool available in preview — from techcrunch.com by Kyle Wiggers

Excerpts:

[On 4/27/23], Microsoft Designer, Microsoft’s AI-powered design tool, launched in public preview with an expanded set of features.

Announced in October, Designer is a Canva-like web app that can generate designs for presentations, posters, digital postcards, invitations, graphics and more to share on social media and other channels. It leverages user-created content and DALL-E 2, OpenAI’s text-to-image AI, to ideate designs, with drop-downs and text boxes for further customization and personalization.

Designer will remain free during the preview period, Microsoft says — it’s available via the Designer website and in Microsoft’s Edge browser through the sidebar. Once the Designer app is generally available, it’ll be included in Microsoft 365 Personal and Family subscriptions and have “some” functionality free to use for non-subscribers, though Microsoft didn’t elaborate.

 

How Easy Is It/Will It Be to Use AI to Design a Course? — from wallyboston.com by Wally Boston

Excerpt:

Last week I received a text message from a friend to check out a March 29th Campus Technology article about the French AI startup Nolej. Nolej (pronounced “Knowledge”) has developed an OpenAI-based instructional content generator for educators called NolejAI.

Access to NolejAI is through a browser. Users can upload video, audio, text documents, or a website URL. NolejAI will generate an interactive micro-learning package, which is a standalone digital lesson including a content transcript, summaries, a glossary of terms, flashcards, and quizzes. All the lesson materials generated are based upon the uploaded materials.


From DSC:
I wonder if this will turn out to be the case:

I am sure it’s only a matter of time before NolejAI or another product becomes capable of generating a standard three credit hour college course. Whether that is six months or two years, it’s likely sooner than we think.


Also relevant/see:

The Ultimate 100 AI Tools (as of 4/12/23)


 

Meet MathGPT: a Chatbot Tutor Built Specific to a Math Textbook — from thejournal.com by Kristal Kuykendall

Excerpt:

Micro-tutoring platform PhotoStudy has unveiled a new chatbot built on OpenAI’s ChatGPT APIs that can teach a complete elementary algebra textbook with “extremely high accuracy,” the company said.

“Textbook publishers and teachers can now transform their textbooks and teaching with a ChatGPT-like assistant that can teach all the material in a textbook, assess student progress, provide personalized help in weaker areas, generate quizzes with support for text, images, audio, and ultimately a student customized avatar for video interaction,” PhotoStudy said in its news release.

Some sample questions the MathGPT tool can answer:

    • “I don’t know how to solve a linear equation…”
    • “I have no idea what’s going on in class but we are doing Chapter 2. Can we start at the top?”
    • “Can you help me understand how to solve this mixture of coins problem?”
    • “I need to practice for my midterm tomorrow, through Chapter 6. Help.”
 

Job Titles: It’s Not Only Instructional Design — from idolcourses.com by Ivett Csordas

Excerpt:

When I first came across the title “Instructional Designer” while looking for alternative career options, I was just as confused as anybody would be hearing about our job for the first time. I remember asking questions like: What does an Instructional Designer do? Why is it called Instructional Design? Wouldn’t a title such as Learning Experience Designer or Training Content Developer suit them better? How are their skill sets different from those of curriculum developers, like teachers? etc.

Then, the more I learnt about the different roles of Instructional Designers, and the more job interviews I had, ironically, the less clarity I had over the companies’ expectations of us.

The truth is that the role of an Instructional Designer varies from company to company. What a person hired with the title “Instructional Designer” ends up doing depends on a range of factors such as the company’s training portfolio, the profile of their learners, the size of the L&D team, the way they use technology, just to mention a few.

From DSC:
I don’t know a thing about idolcourses.com, but I really appreciated running across this posting by Ivett Csordas about the various job titles out there and the differences between some of these job titles. The posting deals with job titles associated with developers, designers, LXD, LMS roles, managers, L&D Coordinators, specialists, consultants, and strategists.

 

 

behance.net/live/   <— Check out our revamped schedule!

Join us in the morning for Adobe Express streams — If you are an aspiring creative, small business owner, or looking to kickstart a side hustle – these live streams are for you!

Then level up your skills with Creative Challenges, Bootcamps, and Pro-Tips. Get inspired by artists from all over the world during our live learning events. Tune in to connect directly with your instructors and other creatives just like you.

In the afternoon, join creatives in their own Community Streams! Laugh and create alongside other Adobe Live Community members on Behance, YouTube, and Twitch!

For weekly updates on the Adobe Live schedule, plus insight into upcoming guests and content, join our Discord communities!

Watch Adobe Live Now!

 
© 2024 | Daniel Christian