If you keep getting distracted by all of the extraneous items — such as those annoying videos and advertisements — that appear when you launch a web page, there is a solution for quickly hiding all of them. It’s called Postlight Reader. I’ve been using it for years and wanted to put this information out there for folks who might not have heard about it.
I highly recommend it if you are having trouble reading an article and processing the information that it contains. Instructional Designers will know all about Extraneous Load (one of the types of Cognitive Load) and how it negatively impacts one’s learning and the processing that really counts (i.e., the Germane Cognitive Load).
Note the differences when I used Postlight Reader on an article out at cbsnews.com:
The page appears with all kinds of ads and videos going on… I can hardly process the information in the article due to these distractions:
Then, after I enable this extension in Chrome and click on the icon for Postlight Reader, it strips away all of those items and leaves me with the article that I wanted to read:
If you aren’t using it, I highly recommend that you give it a try.
The Postlight Reader extension for Chrome removes ads and distractions, leaving only text and images for a clean and consistent reading view on every site.
Features:
Disable surrounding webpage noise and clutter with one click
Send To Kindle functionality
Adjust typeface and text size, and toggle between light or dark themes
Quick keyboard shortcut (Cmd + Esc for Mac users, Alt + ` for Windows users) to switch to Reader on any article page
From DSC: The above item is simply excellent!!! I love it!
We’re going to see a lot more of the Square, Stripe, Shopify-type startups pop up for agentic AI.
This one is like an AI-human broker.
1) Prompt an AI with a need
2) Give the AI a budget (real money)
3) AI turns need into plan with tasks
4) AI finds humans to complete the… https://t.co/UXf1bNZ4AK
3 new Chrome AI features for even more helpful browsing — from blog.google by Parisa Tabriz
See how Chrome’s new AI features, including Google Lens for desktop and Tab compare, can help you get things done more easily on the web.
On speaking to AI — from oneusefulthing.org by Ethan Mollick
Voice changes a lot of things
So, let’s talk about ChatGPT’s new Advanced Voice mode and the new AI-powered Siri. They are not just different approaches to talking to AI. In many ways, they represent the divide between two philosophies of AI – Copilots versus Agents, small models versus large ones, specialists versus generalists.
1. Flux, an open-source text-to-image creator that is comparable to industry leaders like Midjourney, was released by Black Forest Labs (the “original team” behind Stable Diffusion). It is capable of generating high-quality text in images (there are tons of educational use cases). You can play with it on their demo page, on Poe, or by running it on your own computer (tutorial here).
Other items re: Flux:
How to FLUX — from heatherbcooper.substack.com by Heather Cooper
Where to use FLUX online & full tutorial to create a sleek ad in minutes
Also from Heather Cooper:
Introducing FLUX: Open-Source text to image model
FLUX… has been EVERYWHERE this week, as I’m sure you have seen. Developed by Black Forest Labs, FLUX is an open-source image generation model that’s gaining attention for its ability to rival leading models like Midjourney, DALL·E 3, and SDXL.
What sets FLUX apart is its blend of creative freedom, precision, and accessibility—it’s available across multiple platforms and can be run locally.
Why FLUX Matters
FLUX’s open-source nature makes it accessible to a broad audience, from hobbyists to professionals.
It offers advanced multimodal and parallel diffusion transformer technology, delivering high visual quality, strong prompt adherence, and diverse outputs.
It’s available in 3 models:
FLUX.1 [pro]: A high-performance, commercial image synthesis model.
FLUX.1 [dev]: An open-weight, non-commercial variant of FLUX.1 [pro].
FLUX.1 [schnell]: A faster, distilled version of FLUX.1, operating up to 10x quicker.
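For those who want to try running FLUX on their own machine, here is a minimal sketch using Hugging Face’s diffusers library. The model ID matches the public FLUX.1 [schnell] release; the step/guidance values and the small helper function are illustrative assumptions, not an official recipe:

```python
# Minimal sketch: generating an image locally with FLUX.1 [schnell]
# via Hugging Face diffusers (pip install diffusers torch).
# Assumes a GPU with sufficient VRAM; weights download on first run.

def flux_generation_kwargs(prompt: str, steps: int = 4) -> dict:
    """Assemble call arguments; [schnell] is distilled to need only ~1-4 steps."""
    if not 1 <= steps <= 8:
        raise ValueError("FLUX.1 [schnell] is tuned for a small number of steps")
    return {
        "prompt": prompt,
        "num_inference_steps": steps,
        "guidance_scale": 0.0,  # [schnell] runs without classifier-free guidance
    }

def generate_image(prompt: str, out_path: str = "flux_ad.png") -> None:
    """Download the pipeline, render the prompt, and save the image to disk."""
    import torch
    from diffusers import FluxPipeline  # heavy imports kept local to this call

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
    )
    pipe.enable_model_cpu_offload()  # offload idle layers to CPU to reduce VRAM use
    kwargs = flux_generation_kwargs(prompt)
    kwargs["generator"] = torch.Generator("cpu").manual_seed(0)  # reproducible output
    image = pipe(**kwargs).images[0]
    image.save(out_path)
```

Usage would be along the lines of `generate_image("A sleek product ad photo, studio lighting")`, which matches the kind of quick ad mock-up Heather Cooper’s tutorial walks through.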
Over the weekend, image models made a comeback. The recently released Flux models can create realistic images with near-perfect text—straight from the model, without much patchwork. To get the party going, people are feeding these images into video generation models to create some pretty trippy videos. I can’t identify half of them as AI, and they’ll only get better. See this tutorial on how to create a video ad for your product.
Advanced Voice Mode on ChatGPT features more natural, real-time conversations that pick up on and respond with emotion and non-verbal cues.
Advanced Voice Mode on ChatGPT is currently in a limited alpha. Please note that it may make mistakes, and access and rate limits are subject to change.
From DSC: Think about the impacts/ramifications of global, virtual, real-time language translations!!! This type of technology will create very powerful, new affordances in our learning ecosystems — as well as in business communications, in dealings with the various governments across the globe, and more!
Picking the right projector for school can be a tough decision, as the types and prices range pretty widely. From affordable options to professional-grade pricing, there are many choices. The problem is that performance varies just as widely. This guide aims to be the solution by offering all you need to know about buying the right projector for your school.
Luke covers a variety of topics including:
Types of projectors
Screen quality
Light type
Connectivity
Pricing
From DSC: I posted this because Luke covered a variety of topics — and if you’re set on going with a projector, this is a solid article. But I hesitated to post this, as I’m not sure of the place that projectors will have in the future of our learning spaces. With voice-enabled apps and appliances continuing to be more prevalent — along with the presence of AI-based human-computer interactions and intelligent systems — will projectors be the way to go? Will enhanced interactive whiteboards be the way to go? Will there be new types of displays? I’m not sure. Time will tell.
From DSC: Last Thursday, I presented at the Educational Technology Organization of Michigan’s Spring 2024 Retreat. I wanted to pass along my slides to you all, in case they are helpful to you.
Apple announced “Apple Intelligence” at WWDC 2024, its name for a new suite of AI features for the iPhone, Mac, and more. Starting later this year, Apple is rolling out what it says is a more conversational Siri, custom, AI-generated “Genmoji,” and GPT-4o access that lets Siri turn to OpenAI’s chatbot when it can’t handle what you ask it for.
SAN FRANCISCO — Apple officially launched itself into the artificial intelligence arms race, announcing a deal with ChatGPT maker OpenAI to use the company’s technology in its products and showing off a slew of its own new AI features.
The announcements, made at the tech giant’s annual Worldwide Developers Conference on Monday in Cupertino, Calif., are aimed at helping the tech giant keep up with competitors such as Google and Microsoft, which have boasted in recent months that AI makes their phones, laptops and software better than Apple’s. In addition to Apple’s own homegrown AI tech, the company’s phones, computers and iPads will also have ChatGPT built in “later this year,” a huge validation of the importance of the highflying start-up’s tech.
The highly anticipated AI partnership is the first of its kind for Apple, which has been regarded by analysts as slower to adopt artificial intelligence than other technology companies such as Microsoft and Google.
The deal allows Apple’s millions of users to access technology from OpenAI, one of the highest-profile artificial intelligence companies of recent years. OpenAI has already established partnerships with a variety of technology and publishing companies, including a multibillion-dollar deal with Microsoft.
The real deal here is that Apple is literally putting AI into the hands of >1B people, most of whom will probably be using AI for the 1st time. And it’s delivering AI that’s actually useful (forget those Genmojis, we’re talking about implanting ChatGPT-4o’s brain into Apple devices).
It’s WWDC 2024 keynote time! Each year Apple kicks off its Worldwide Developers Conference with a few hours of straight announcements, like the long-awaited Apple Intelligence and a makeover for its smart AI assistant, Siri. We expected many of the announcements to revolve around the company’s artificial intelligence ambitions, and Apple didn’t disappoint. We also bring you news about Vision Pro and lots of feature refreshes.
Why Gamma is great for presentations — from Jeremy Caplan
Gamma has become one of my favorite new creativity tools. You can use it like PowerPoint or Google Slides, adding text and images to make impactful presentations. It lets you create vertical, square, or horizontal slides. You can embed online content to make your deck stand out with videos, data, or graphics. You can even use it to make quick websites.
Its best feature, though, is an easy-to-use application of AI. The AI will learn from any document you import, or you can use a text prompt to create a strong deck or site instantly.
ChatGPT has 180.5 million users, of which 100 million are active weekly.
In January 2024, ChatGPT got 2.3 billion website visits, and 2 million developers are using its API.
The highest percentage of ChatGPT users are in the USA (46.75%), followed by India (5.47%). ChatGPT is banned in 7 countries, including Russia and China.
OpenAI’s projected revenue from ChatGPT is $2 billion in 2024.
Running ChatGPT costs OpenAI around $700,000 daily.
Sam Altman is seeking $7 trillion for a global AI chip project, while OpenAI is also listed as a major shareholder in Reddit.
ChatGPT offers a free version with GPT-3.5 and a Plus version with GPT-4, which is 40% more accurate and 82% safer, costing $20 per month.
ChatGPT is being used for automation, education, coding, data analysis, writing, etc.
43% of college students and 80% of the Fortune 500 companies are using ChatGPT.
A 2023 study found 25% of US companies surveyed saved $50K-$70K using ChatGPT, while 11% saved over $100K.
Copilot+ PCs are the fastest, most intelligent Windows PCs ever built. With powerful new silicon capable of an incredible 40+ TOPS (trillion operations per second), all-day battery life and access to the most advanced AI models, Copilot+ PCs will enable you to do things you can’t on any other PC. Easily find and remember what you have seen in your PC with Recall, generate and refine AI images in near real-time directly on the device using Cocreator, and bridge language barriers with Live Captions, translating audio from 40+ languages into English.
From DSC: As a first off-the-hip reaction, Recall seems fraught with potential security/privacy-related issues. But what do I know? The Neuron states “Microsoft assures that everything Recall sees remains private.” Ok…
From The Rundown AI concerning the above announcements:
The details:
A new system enables Copilot+ PCs to run AI workloads up to 20x faster and 100x more efficiently than traditional PCs.
Windows 11 has been rearchitected specifically for AI, integrating the Copilot assistant directly into the OS.
New AI experiences include a new feature called Recall, which allows users to search for anything they’ve seen on their screen with natural language.
Copilot’s new screen-sharing feature allows AI to watch, hear, and understand what a user is doing on their computer and answer questions in real-time.
Copilot+ PCs will start at $999, and ship with OpenAI’s latest GPT-4o models.
Why it matters: Tony Stark’s all-powerful JARVIS AI assistant is getting closer to reality every day. Once Copilot, ChatGPT, Project Astra, or anyone else can not only respond but start executing tasks autonomously, things will start getting really exciting — and likely initiate a whole new era of tech work.
AI’s New Conversation Skills Eyed for Education — from insidehighered.com by Lauren Coffey
The latest ChatGPT’s more human-like verbal communication has professors pondering personalized learning, on-demand tutoring and more classroom applications.
ChatGPT’s newest version, GPT-4o (the “o” standing for “omni,” meaning “all”), has a more realistic voice and quicker verbal response time, both aiming to sound more human. The version, which should be available to free ChatGPT users in the coming weeks—a change also hailed by educators—allows people to interrupt it while it speaks, simulates more emotions with its voice and translates languages in real time. It also can understand instructions in text and images and has improved video capabilities.
…
Ajjan said she immediately thought the new vocal and video capabilities could allow GPT to serve as a personalized tutor. Personalized learning has been a focus for educators grappling with the looming enrollment cliff and for those pushing for student success.
There’s also the potential for role playing, according to Ajjan. She pointed to mock interviews students could do to prepare for job interviews, or, for example, using GPT to play the role of a buyer to help prepare students in an economics course.
Hello GPT-4o — from openai.com
We’re announcing GPT-4o, our new flagship model that can reason across audio, vision, and text in real time.
GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, image, and video and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models.
Providing inflection, emotions, and a human-like voice
Understanding what the camera is looking at and integrating it into the AI’s responses
Providing customer service
With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. Because GPT-4o is our first model combining all of these modalities, we are still just scratching the surface of exploring what the model can do and its limitations.
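To make the “any combination of inputs” idea concrete, here is a minimal sketch of sending text plus an image to GPT-4o through the OpenAI Python SDK. The message format and model name follow OpenAI’s published chat completions API; the prompt and image URL are illustrative placeholders:

```python
# Minimal sketch: a multimodal (text + image) request to GPT-4o
# via the OpenAI Python SDK (pip install openai; needs OPENAI_API_KEY set).

def build_multimodal_message(prompt: str, image_url: str) -> list:
    """Assemble one user message that combines a text part and an image part."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ]

def ask_gpt4o(prompt: str, image_url: str) -> str:
    """Send the multimodal message and return the model's text reply."""
    from openai import OpenAI  # imported here so the helper above stays dependency-free

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=build_multimodal_message(prompt, image_url),
    )
    return response.choices[0].message.content
```

A call such as `ask_gpt4o("What's in this picture?", "https://example.com/photo.jpg")` returns the model’s description of the image. Note that the real-time audio conversations described above are delivered through ChatGPT’s voice interfaces rather than this text/image endpoint.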
This demo is insane.
A student shares their iPad screen with the new ChatGPT + GPT-4o, and the AI speaks with them and helps them learn in *realtime*.
Imagine giving this to every student in the world.
I recently created an AI version of myself—REID AI—and recorded a Q&A to see how this digital twin might challenge me in new ways. The video avatar is generated by Hour One, its voice was created by Eleven Labs, and its persona—the way that REID AI formulates responses—is generated from a custom chatbot built on GPT-4 that was trained on my books, speeches, podcasts and other content that I’ve produced over the last few decades. I decided to interview it to test its capability and how closely its responses match—and test—my thinking. Then, REID AI asked me some questions on AI and technology. I thought I would hate this, but I’ve actually ended up finding the whole experience interesting and thought-provoking.
From DSC: This ability to ask questions of a digital twin is very interesting when you think about it in terms of “interviewing” a historical figure. I believe character.ai provides this kind of thing, but I haven’t used it much.
Over the past year, many excellent and resourceful books have crossed my desk or Kindle. I’m rounding them up here so you can find a few to expand your horizons. The list below is in alphabetical order by title.
Each book is unique, yet as a collection, they reflect some common themes and trends in Learning and Development: a focus on empathy and emotion, adopting best practices from other fields, using data for greater impact, aligning projects with organizational goals, and developing consultative skills. The authors listed here are optimistic and forward-thinking—they believe change is possible. I hope you enjoy the books.
Below are some items for those creatives who might be interested in telling stories, designing games, crafting audio-based experiences, composing music, developing new worlds using 3D graphics, and more.
The sounds of any game can make or break the experience for its players. Many of our favorite adventures come roaring back into our minds when we hear a familiar melody, or maybe it’s a special sound effect that reminds us of our time performing a particularly heroic feat… or the time we just caused some havoc with friends. With Lightfall sending Guardians to explore the new destination of Neomuna, there’s an entire universe hidden away within the sounds—both orchestral and diegetic—for Guardians to uncover and immerse themselves in. We recently assembled some of Destiny’s finest sound designers and composers to dive a little bit deeper into the stunning depths of Neomuna’s auditory experience.
Before diving into the interview with our incredible team, we wanted to make sure you have seen the Lightfall music documentary that went out shortly after the expansion’s release. This short video is a great introduction to how our team worked to create the music of Lightfall and is a must-see for audiophiles and Destiny fans alike.
Every game has a story to tell, a journey to take players through that — if done well — can inspire wonderful memories that last a lifetime. Unlike other storytelling mediums, the art of video games is an intricate interweaving of experiences, including psychological cues that are designed to entrance players and make them feel like they’re a part of the story. One way this is achieved is through the art of audio. And no, we aren’t just talking about the many incredible soundtracks out there, we’re talking about the oftentimes overlooked universe of audio design.
… What does an audio designer do?
“Number one? We don’t work on music. That’s a thing almost everyone thinks every audio designer does,” jokes Nyte when opening up about beginning her quest into the audio world. “That, or for a game like Destiny, people just assume we only work on weapon sounds and nothing else. Which, [Juan] Uribe does, but a lot of us don’t. There is this entire gamut of other sounds that are in-game that people don’t really notice. Some do, and that’s always cool, but audio is about all sounds coming together for a ‘whole’ audio experience.”
On the Transformation of Entertainment
What company will be the Pixar of the AI era? What talent agency will be the CAA of the AI era? How fast can the entertainment industry evolve to natively leverage AI, and what parts will be disrupted by the industry’s own ambivalence? Or are all of these questions myopic…and should we anticipate a wave of entirely new categories of entertainment?
We are starting to see material adoption of AI tools across many industries, including media and entertainment. No doubt, these tools will transform the processes behind generating content. But what entirely new genres of content might emerge? The platform shift to AI-based workflows might give rise to entirely new types of companies that transform entertainment as we know it – from actor representation, Hollywood economics, consumption devices and experiences, to the actual mediums of entertainment themselves. Let’s explore just a few of the edgier implications: