What DICE does in this posting will be available 24x7x365 in the future [Christian]

From DSC:
First of all, when you look at the following posting:


What Top Tech Skills Should You Learn for 2025? — from dice.com by Nick Kolakowski


…you will see that they outline which skills you should consider mastering in 2025 if you want to stay on top of the latest career opportunities. They then list more information about the skills, how you apply the skills, and WHERE to get those skills.

I assert that in the future, people will be able to see this information on a 24x7x365 basis.

  • Which jobs are in demand?
  • What skills do I need to do those jobs?
  • WHERE do I get/develop those skills?

And that last part (WHERE to develop those skills) will pull from many different institutions, people, companies, etc.

BUT PEOPLE are the key! Oftentimes, we need to — and prefer to — learn with others!


 

Duolingo Introduces AI-Powered Innovations at Duocon 2024 — from investors.duolingo.com; via Claire Zau

Duolingo’s new Video Call feature represents a leap forward in language practice for learners. This AI-powered tool allows Duolingo Max subscribers to engage in spontaneous, realistic conversations with Lily, one of Duolingo’s most popular characters. The technology behind Video Call is designed to simulate natural dialogue and provides a personalized, interactive practice environment. Even beginner learners can converse in a low-pressure environment because Video Call is designed to adapt to their skill level. By offering learners the opportunity to converse in real-time, Video Call builds the confidence needed to communicate effectively in real-world situations. Video Call is available for Duolingo Max subscribers learning English, Spanish, and French.


And here’s another AI-based learning item:

AI reading coach startup Ello now lets kids create their own stories — from techcrunch.com by Lauren Forristal; via Claire Zau

Ello, the AI reading companion that aims to support kids struggling to read, launched a new product on Monday that allows kids to participate in the story-creation process.

Called “Storytime,” the new AI-powered feature helps kids generate personalized stories by picking from a selection of settings, characters, and plots: for instance, a story about a hamster named Greg who performs in a talent show in outer space.

 

AI’s Trillion-Dollar Opportunity — from bain.com by David Crawford, Jue Wang, and Roy Singh
The market for AI products and services could reach between $780 billion and $990 billion by 2027.

At a Glance

  • The big cloud providers are the largest concentration of R&D, talent, and innovation today, pushing the boundaries of large models and advanced infrastructure.
  • Innovation with smaller models (open-source and proprietary), edge infrastructure, and commercial software is reaching enterprises, sovereigns, and research institutions.
  • Commercial software vendors are rapidly expanding their feature sets to provide the best use cases and leverage their data assets.

Accelerated market growth. Nvidia’s CEO, Jensen Huang, summed up the potential in the company’s Q3 2024 earnings call: “Generative AI is the largest TAM [total addressable market] expansion of software and hardware that we’ve seen in several decades.”


And on a somewhat related note (i.e., emerging technologies), also see the following two postings:

Surgical Robots: Current Uses and Future Expectations — from medicalfuturist.com by Pranavsingh Dhunnoo
As the term implies, a surgical robot is an assistive tool for performing surgical procedures. Such manoeuvres, also called robotic surgeries or robot-assisted surgery, usually involve a human surgeon controlling mechanical arms from a control centre.

Key Takeaways

  • Robots’ potential has long been a fascination for humans and has even led to a booming field of robot-assisted surgery.
  • Surgical robots assist surgeons in performing accurate, minimally invasive procedures that are beneficial for patients’ recovery.
  • The assistance of robots extends beyond incisions and includes laparoscopies, radiosurgeries and, in the future, a combination of artificial intelligence technologies to assist surgeons in their craft.

Proto hologram tech allows cancer patients to receive specialist care without traveling large distances — from inavateonthenet.net

“Working with the team from Proto to bring to life, what several years ago would have seemed impossible, is now going to allow West Cancer Center & Research Institute to pioneer options for patients to get highly specialized care without having to travel to large metro areas,” said West Cancer’s CEO, Mitch Graves.




Clone your voice in minutes: The AI trick 95% don’t know about — from aidisruptor.ai by Alex McFarland
Warning: May cause unexpected bouts of talking to yourself

Now that you’ve got your voice clone, what can you do with it?

  1. Content Creation:
    • Podcast Production: Record episodes in half the time. Your listeners won’t know the difference, but your schedule will thank you.
    • Audiobook Narration: Always wanted to narrate your own book? Now you can, without spending weeks in a recording studio.
    • YouTube Videos: Create voiceovers for your videos in multiple languages. World domination, here you come!
  2. Business Brilliance:
    • Customer Service: Personalized automated responses that actually sound personal.
    • Training Materials: Create engaging e-learning content in your own voice, minus the hours of recording.
    • Presentations: Never worry about losing your voice before a big presentation again. Your clone’s got your back.

185 real-world gen AI use cases from the world’s leading organizations — from blog.google by Brian Hall; via Daniel Nest’s Why Try AI

In a matter of months, organizations have gone from AI helping answer questions, to AI making predictions, to generative AI agents. What makes AI agents unique is that they can take actions to achieve specific goals, whether that’s guiding a shopper to the perfect pair of shoes, helping an employee looking for the right health benefits, or supporting nursing staff with smoother patient hand-offs during shift changes.

In our work with customers, we keep hearing that their teams are increasingly focused on improving productivity, automating processes, and modernizing the customer experience. These aims are now being achieved through the AI agents they’re developing in six key areas: customer service; employee empowerment; code creation; data analysis; cybersecurity; and creative ideation and production.

Here’s a snapshot of how 185 of these industry leaders are putting AI to use today, creating real-world use cases that will transform tomorrow.


AI Data Drop: 3 Key Insights from Real-World Research on AI Usage — from microsoft.com; via Daniel Nest’s Why Try AI
One of the largest studies of Copilot usage—at nearly 60 companies—reveals how AI is changing the way we work.

  1. AI is starting to liberate people from email
  2. Meetings are becoming more about value creation
  3. People are co-creating more with AI—and with one another


***Dharmesh has been working on creating agent.ai — a professional network for AI agents.***


Speaking of agents, also see:

Onboarding the AI workforce: How digital agents will redefine work itself — from venturebeat.com by Gary Grossman

AI in 2030: A transformative force

  1. AI agents are integral team members
  2. The emergence of digital humans
  3. AI-driven speech and conversational interfaces
  4. AI-enhanced decision-making and leadership
  5. Innovation and research powered by AI
  6. The changing nature of job roles and skills

AI Video Tools You Can Use Today — from heatherbcooper.substack.com by Heather Cooper
The latest AI video models that deliver results

AI video models are improving so quickly, I can barely keep up! I wrote about unreleased Adobe Firefly Video in the last issue, and we are no closer to public access to Sora.

No worries – we do have plenty of generative AI video tools we can use right now.

  • Kling AI launched its updated v1.5 and the quality of image or text to video is impressive.
  • Hailuo MiniMax text to video remains free to use for now, and it produces natural and photorealistic results (with watermarks).
  • Runway added the option to upload portrait aspect ratio images to generate vertical videos in Gen-3 Alpha & Turbo modes.
  • …plus several more

 

FlexOS’ Stay Ahead Edition #43 — from flexos.work

People started discussing what they could do with NotebookLM after Google launched the audio overview, where you can listen to two hosts talking in-depth about the documents you upload. Here’s what it can do:

  • Summarization: Automatically generate summaries of uploaded documents, highlighting key topics and suggesting relevant questions.
  • Question Answering: Users can ask NotebookLM questions about their uploaded documents, and answers will be provided based on the information contained within them.
  • Idea Generation: NotebookLM can assist with brainstorming and developing new ideas.
  • Source Grounding: A big plus against AI chatbot hallucination, NotebookLM allows users to ground the responses in specific documents they choose.
  • …plus several other items

The posting also lists several ideas to try with NotebookLM such as:

Idea 2: Study Companion

  • Upload all your course materials and ask NotebookLM to turn them into Question-and-Answer format, a glossary, or a study guide.
  • Get a breakdown of the course materials to understand them better.

Google’s NotebookLM: A Game-Changer for Education and Beyond — from ai-supremacy.com by Michael Spencer and Nick Potkalitsky
AI Tools: Breaking down Google’s latest AI tool and its implications for education.

“Google’s AI note-taking app NotebookLM can now explain complex topics to you out loud”

With more immersive text-to-video and audio products soon available and the rise of apps like Suno AI, how we “experience” Generative AI is also changing from a chatbot of 2 years ago, to a more multi-modal educational journey. The AI tools on the research and curation side are also starting to reflect these advancements.


Meet Google NotebookLM: 10 things to know for educators — from ditchthattextbook.com by Matt Miller

1. Upload a variety of sources for NotebookLM to use. 
You can use …

  • websites
  • PDF files
  • links to websites
  • any text you’ve copied
  • Google Docs and Slides
  • even Markdown

You can’t link it to YouTube videos, but you can copy/paste the transcript (and maybe type a little context about the YouTube video before pasting the transcript).

2. Ask it to create resources.
3. Create an audio summary.
4. Chat with your sources.
5. Save (almost) everything. 


NotebookLM summarizes my dissertation — from darcynorman.net by D’Arcy Norman, PhD

I finally tried out Google’s newly-announced NotebookLM generative AI application. It provides a set of LLM-powered tools to summarize documents. I fed it my dissertation, and am surprised at how useful the output would be.

The most impressive tool creates a podcast episode, complete with dual hosts in conversation about the document. First – these are AI-generated hosts. Synthetic voices, speaking for synthetic hosts. And holy moly is it effective. Second – although I’d initially thought the conversational summary would be a dumb gimmick, it is surprisingly powerful.


4 Tips for Designing AI-Resistant Assessments — from techlearning.com by Steve Baule and Erin Carter
As AI continues to evolve, instructors must modify their approach by designing meaningful, rigorous assessments.

As instructors work through revising assessments to be resistant to generation by AI tools with little student input, they should consider the following principles:

  • Incorporate personal experiences and local content into assignments
  • Ask students for multi-modal deliverables
  • Assess the developmental benchmarks for assignments and transition assignments further up Bloom’s Taxonomy
  • Consider real-time and oral assignments

Google CEO Sundar Pichai announces $120M fund for global AI education — from techcrunch.com by Anthony Ha

He added that he wants to avoid a global “AI divide” and that Google is creating a $120 million Global AI Opportunity Fund through which it will “make AI education and training available in communities around the world” in partnership with local nonprofits and NGOs.


Educators discuss the state of creativity in an AI world — from gettingsmart.com by Joe & Kristin Merrill, LaKeshia Brooks, Dominique’ Harbour, Erika Sandstrom

Key Points

  • AI allows for a more personalized learning experience, enabling students to explore creative ideas without traditional classroom limitations.
  • The focus of technology integration should be on how the tool is used within lessons, not just the tool itself

Addendum on 9/27/24:

Google’s NotebookLM enhances AI note-taking with YouTube, audio file sources, sharable audio discussions — from techcrunch.com by Jagmeet Singh

Google on Thursday announced new updates to its AI note-taking and research assistant, NotebookLM, allowing users to get summaries of YouTube videos and audio files and even create sharable AI-generated audio discussions.

NotebookLM adds audio and YouTube support, plus easier sharing of Audio Overviews — from blog.google

 

The Most Popular AI Tools for Instructional Design (September, 2024) — from drphilippahardman.substack.com by Dr. Philippa Hardman
The tools we use most, and how we use them

This week, as I kick off the 20th cohort of my AI-Learning Design bootcamp, I decided to do some analysis of the work habits of the hundreds of amazing AI-embracing instructional designers who I’ve worked with over the last year or so.

My goal was to answer the question: which AI tools do we use most in the instructional design process, and how do we use them?

Here’s where we are in September, 2024:


Developing Your Approach to Generative AI — from scholarlyteacher.com by Caitlin K. Kirby,  Min Zhuang, Imari Cheyne Tetu, & Stephen Thomas (Michigan State University)

As generative AI becomes integrated into workplaces, scholarly work, and students’ workflows, we have the opportunity to take a broad view of the role of generative AI in higher education classrooms. Our guiding questions are meant to serve as a starting point to consider, from each educator’s initial reaction and preferences around generative AI, how their discipline, course design, and assessments may be impacted, and to have a broad view of the ethics of generative AI use.



The Impact of AI in Advancing Accessibility for Learners with Disabilities — from er.educause.edu by Rob Gibson

AI technology tools hold remarkable promise for providing more accessible, equitable, and inclusive learning experiences for students with disabilities.


 

Gemini makes your mobile device a powerful AI assistant — from blog.google
Gemini Live is available today to Advanced subscribers, along with conversational overlay on Android and even more connected apps.

Rolling out today: Gemini Live <– Google swoops in before OpenAI can get their Voice Mode out there
Gemini Live is a mobile conversational experience that lets you have free-flowing conversations with Gemini. Want to brainstorm potential jobs that are well-suited to your skillset or degree? Go Live with Gemini and ask about them. You can even interrupt mid-response to dive deeper on a particular point, or pause a conversation and come back to it later. It’s like having a sidekick in your pocket who you can chat with about new ideas or practice with for an important conversation.

Gemini Live is also available hands-free: You can keep talking with the Gemini app in the background or when your phone is locked, so you can carry on your conversation on the go, just like you might on a regular phone call. Gemini Live begins rolling out today in English to our Gemini Advanced subscribers on Android phones, and in the coming weeks will expand to iOS and more languages.

To make speaking to Gemini feel even more natural, we’re introducing 10 new voices to choose from, so you can pick the tone and style that works best for you.


Per the Rundown AI:
Why it matters: Real-time voice is slowly shifting AI from a tool we text/prompt with, to an intelligence that we collaborate, learn, consult, and grow with. As the world’s anticipation for OpenAI’s unreleased products grows, Google has swooped in to steal the spotlight as the first to lead widespread advanced AI voice rollouts.

Beyond Social Media: Schmidt Predicts AI’s Earth-Shaking Impact — from wallstreetpit.com
The next wave of AI is coming, and if Schmidt is correct, it will reshape our world in ways we are only beginning to imagine.

In a recent Q&A session at Stanford, Eric Schmidt, former CEO and Chairman of search giant Google, offered a compelling vision of the near future in artificial intelligence. His predictions, both exciting and sobering, paint a picture of a world on the brink of a technological revolution that could dwarf the impact of social media.

Schmidt highlighted three key advancements that he believes will converge to create this transformative wave: very large context windows, agents, and text-to-action capabilities. These developments, according to Schmidt, are not just incremental improvements but game-changers that could reshape our interaction with technology and the world at large.



The rise of multimodal AI agents — from 11onze.cat
Technology companies are investing large amounts of money in creating new multimodal artificial intelligence models and algorithms that can learn, reason and make decisions autonomously after collecting and analysing data.

The future of multimodal agents
In practical terms, a multimodal AI agent can, for example, analyse a text while processing an image, spoken language, or an audio clip to give a more complete and accurate response, both through voice and text. This opens up new possibilities in various fields: from education and healthcare to e-commerce and customer service.


AI Change Management: 41 Tactics to Use (August 2024) — from flexos.work by Daan van Rossum
Future-proof companies are investing in driving AI adoption, but many don’t know where to start. The experts recommend these 41 tips for AI change management.

As Matt Kropp told me in our interview, BCG has a 10-20-70 rule for AI at work:

  • 10% is the LLM or algorithm
  • 20% is the software layer around it (like ChatGPT)
  • 70% is the human factor

This 70% is exactly why change management is key in driving AI adoption.

But where do you start?

As I coach leaders at companies like Apple, Toyota, Amazon, L’Oréal, and Gartner in our Lead with AI program, I know that’s the question on everyone’s minds.

I don’t believe in gatekeeping this information, so here are 41 principles and tactics I share with our community members looking for winning AI change management principles.


 

Per the Rundown AI:

Why it matters: AI is slowly shifting from a tool we text/prompt with, to an intelligence that we collaborate, learn, and grow with. Advanced Voice Mode’s ability to understand and respond to emotions in real-time convos could also have huge use cases in everything from customer service to mental health support.

Also relevant/see:


Creators to Have Personalized AI Assistants, Meta CEO Mark Zuckerberg Tells NVIDIA CEO Jensen Huang — from blogs.nvidia.com by Brian Caulfield
Zuckerberg and Huang explore the transformative potential of open source AI, the launch of AI Studio, and exchange leather jackets at SIGGRAPH 2024.

“Every single restaurant, every single website will probably, in the future, have these AIs …” Huang said.

“…just like every business has an email address and a website and a social media account, I think, in the future, every business is going to have an AI,” Zuckerberg responded.

More broadly, the advancement of AI across a broad ecosystem promises to supercharge human productivity — for example, by giving every human on earth a digital assistant, or assistants, that they can interact with quickly and fluidly, allowing people to live richer lives.

Also related/see:


From DSC:
Today was a MUCH better day for Nvidia, however (up 12.81%). But it’s been very volatile over the last several weeks — as people and institutions ask where the ROIs are going to come from.






9 compelling reasons to learn how to use AI Chatbots — from interestingengineering.com by Atharva Gosavi
AI Chatbots are conversational agents that can act on your behalf and converse with humans – a futuristic novelty that is already getting people excited about its usage in improving efficiency.

7. Accessibility and inclusivity
Chatbots can be designed to support multiple languages and accessibility needs, making services more inclusive. They can cater to users with disabilities by providing voice interaction capabilities and simplifying access to information. Understanding how to develop inclusive chatbots can help you contribute to making technology more accessible to everyone, a crucial aspect in today’s diverse society.

8. Future-proofing your skills
AI and automation are the future of work. Building AI chatbots is a great way to future-proof your skill set, and given the rising trajectory of AI, it’ll be an in-demand skill in the market in the years to come. Staying ahead of technological trends is a great way to ensure you remain relevant and competitive in the job market.


Top 7 generative AI use cases for business — from cio.com by Grant Gross
Advanced chatbots, digital assistants, and coding helpers seem to be some of the sweet spots for gen AI use so far in business.

Many AI experts say the current use cases for generative AI are just the tip of the iceberg. More use cases will present themselves as gen AIs get more powerful and users get more creative with their experiments.

However, a handful of gen AI use cases are already bubbling up. Here’s a look at the most popular and promising.

 

The Three Wave Strategy of AI Implementation — from aiczar.blogspot.com by Alexander “Sasha” Sidorkin

The First Wave: Low-Hanging Fruit

These are just examples:

  • Student services
  • Resume and Cover Letter Review (Career Services): offering individual resume critiques
  • Academic Policy Development and Enforcement (Academic Affairs)…
  • Health Education and Outreach (Health and Wellness Services) …
  • Sustainability Education and Outreach (Sustainability and Environmental Initiatives) …
  • Digital Marketing and Social Media Management (University Communications and Marketing) …
  • Grant Proposal Development and Submission (Research and Innovation) …
  • Financial Aid Counseling (Financial Aid and Scholarships) …
  • Alumni Communications (Alumni Relations and Development) …
  • Scholarly Communications (Library Services) …
  • International Student and Scholar Services (International Programs and Global Engagement)

Duolingo Max: A Paid Subscription to Learn a Language Using ChatGPT AI (Worth It?) — from theaigirl.substack.com by Diana Dovgopol (behind paywall for the most part)
The integration of AI in language learning apps could be game-changing.


Research Insights #12: Copyrights and Academia — from aiedusimplified.substack.com by Lance Eaton
Scholarly authors are not going to be happy…

A while back, I wrote about some of my thoughts on generative AI and copyright issues. Not much has changed since then, but a new article (Academic authors ‘shocked’ after Taylor & Francis sells access to their research to Microsoft AI) is definitely stirring up all sorts of concerns among academic authors. The basics of that article are that Taylor & Francis sold access to authors’ research to Microsoft for AI development without informing the authors, sparking significant concern among academics and the Society of Authors about transparency, consent, and the implications for authors’ rights and future earnings.

The stir can be seen as both valid and redundant. Two folks’ points stick out to me in this regard.

 

What aspects of teaching should remain human? — from hechingerreport.org by Chris Berdik
Even techno optimists hesitate to say teaching is best left to the bots, but there’s a debate about where to draw the line

ATLANTA — Science teacher Daniel Thompson circulated among his sixth graders at Ron Clark Academy on a recent spring morning, spot checking their work and leading them into discussions about the day’s lessons on weather and water. He had a helper: As Thompson paced around the class, peppering them with questions, he frequently turned to a voice-activated AI to summon apps and educational videos onto large-screen smartboards.

When a student asked, “Are there any animals that don’t need water?” Thompson put the question to the AI. Within seconds, an illustrated blurb about kangaroo rats appeared before the class.

Nitta said there’s something “deeply profound” about human communication that allows flesh-and-blood teachers to quickly spot and address things like confusion and flagging interest in real time.


Deep Learning: Five New Superpowers of Higher Education — from jeppestricker.substack.com by Jeppe Klitgaard Stricker
How Deep Learning is Transforming Higher Education

While the traditional model of education is entrenched, emerging technologies like deep learning promise to shake its foundations and usher in an age of personalized, adaptive, and egalitarian education. It is expected to have a significant impact across higher education in several key ways.

…deep learning introduces adaptivity into the learning process. Unlike a typical lecture, deep learning systems can observe student performance in real-time. Confusion over a concept triggers instant changes to instructional tactics. Misconceptions are identified early and remediated quickly. Students stay in their zone of proximal development, constantly challenged but never overwhelmed. This adaptivity prevents frustration and stagnation.


InstructureCon 24 Conference Notes — from onedtech.philhillaa.com by Glenda Morgan
Another solid conference from the market leader, even with unclear roadmap

The new stuff: AI
Instructure rolled out multiple updates and improvements – more than last year. These included many AI-based or focused tools and services as well as some functional improvements. I’ll describe the AI features first.

Sal Khan was a surprise visitor to the keynote stage to announce the September availability of the full suite of AI-enabled Khanmigo Teacher Tools for Canvas users. The suite includes 20 tools, such as tools to generate lesson plans and quiz questions and write letters of recommendation. Next year, they plan to roll out tools for students themselves to use.

Other AI-based features include:

    • Discussion tool summaries and AI-generated responses…
    • Translation of inbox messages and discussions…
    • Smart search …
    • Intelligent Insights…

 

 

What to Know About Buying A Projector for School — by Luke Edwards
Buy the right projector for school with these helpful tips and guidance.

Picking the right projector for school can be a tough decision as the types and prices range pretty widely. From affordable options to professional grade pricing, there are many choices. The problem is that the performance is also hugely varied. This guide aims to be the solution by offering all you need to know about buying the right projector for school where you are.

Luke covers a variety of topics including:

  • Types of projectors
  • Screen quality
  • Light type
  • Connectivity
  • Pricing

From DSC:
I posted this because Luke covered a variety of topics — and if you’re set on going with a projector, this is a solid article. But I hesitated to post this, as I’m not sure of the place that projectors will have in the future of our learning spaces. With voice-enabled apps and appliances continuing to be more prevalent — along with the presence of AI-based human-computer interactions and intelligent systems — will projectors be the way to go? Will enhanced interactive whiteboards be the way to go? Will there be new types of displays? I’m not sure. Time will tell.

 


Bill Gates Reveals Superhuman AI Prediction — from youtube.com by Rufus Griscom, Bill Gates, Andy Sack, and Adam Brotman

In this episode of the Next Big Idea podcast, host Rufus Griscom and Bill Gates are joined by Andy Sack and Adam Brotman, co-authors of an exciting new book called “AI First.” Together, they consider AI’s impact on healthcare, education, productivity, and business. They dig into the technology’s risks. And they explore its potential to cure diseases, enhance creativity, and usher in a world of abundance.

Key moments:

00:05 Bill Gates discusses AI’s transformative potential in revolutionizing technology.
02:21 Superintelligence is inevitable and marks a significant advancement in AI technology.
09:23 Future AI may integrate deeply as cognitive assistants in personal and professional life.
14:04 AI’s metacognitive advancements could revolutionize problem-solving capabilities.
21:13 AI’s next frontier lies in developing human-like metacognition for sophisticated problem-solving.
27:59 AI advancements empower both good and malicious intents, posing new security challenges.
28:57 Rapid AI development raises questions about controlling its global application.
33:31 Productivity enhancements from AI can significantly improve efficiency across industries.
35:49 AI’s future applications in consumer and industrial sectors are subjects of ongoing experimentation.
46:10 AI democratization could level the economic playing field, enhancing service quality and reducing costs.
51:46 AI plays a role in mitigating misinformation and bridging societal divides through enhanced understanding.


OpenAI Introduces CriticGPT: A New Artificial Intelligence AI Model based on GPT-4 to Catch Errors in ChatGPT’s Code Output — from marktechpost.com

The team has summarized their primary contributions as follows.

  1. The team has offered the first instance of a simple, scalable oversight technique that greatly assists humans in more thoroughly detecting problems in real-world RLHF data.
  2. Within the ChatGPT and CriticGPT training pools, the team has discovered that critiques produced by CriticGPT catch more inserted bugs and are preferred over those written by human contractors.
  3. Compared to human contractors working alone, this research indicates that teams consisting of critic models and human contractors generate more thorough critiques. When compared to reviews generated exclusively by models, this partnership lowers the incidence of hallucinations.
  4. This study provides Force Sampling Beam Search (FSBS), an inference-time sampling and scoring technique. This strategy balances the trade-off between minimizing bogus concerns and discovering genuine faults in LLM-generated critiques.

Character.AI now allows users to talk with AI avatars over calls — from techcrunch.com by Ivan Mehta

a16z-backed Character.AI said today that it is now allowing users to talk to AI characters over calls. The feature currently supports multiple languages, including English, Spanish, Portuguese, Russian, Korean, Japanese and Chinese.

The startup tested the calling feature ahead of today’s public launch. During that time, it said that more than 3 million users had made over 20 million calls. The company also noted that calls with AI characters can be useful for practicing language skills, giving mock interviews, or adding them to the gameplay of role-playing games.


Google Translate Just Added 110 More Languages — from lifehacker.com
You can now use the app to communicate in languages you’ve never even heard of.

Google Translate can come in handy when you’re traveling or communicating with someone who speaks another language, and thanks to a new update, you can now connect with some 614 million more people. Google is adding 110 new languages to its Translate tool using its AI PaLM 2 large language model (LLM), which brings the total of supported languages to nearly 250. This follows the 24 languages added in 2022, including Indigenous languages of the Americas as well as those spoken across Africa and central Asia.




Listen to your favorite books and articles voiced by Judy Garland, James Dean, Burt Reynolds and Sir Laurence Olivier — from elevenlabs.io
ElevenLabs partners with estates of iconic stars to bring their voices to the Reader App

 
 

A Guide to the GPT-4o ‘Omni’ Model — from aieducation.substack.com by Claire Zau
The closest thing we have to “Her” and what it means for education / workforce

Today, OpenAI introduced its new flagship model, GPT-4o, which delivers more powerful capabilities and real-time voice interactions to its users. The letter “o” in GPT-4o stands for “Omni”, referring to its enhanced multimodal capabilities. While ChatGPT has long offered a voice mode, GPT-4o is a step change, allowing users to interact with an AI assistant that can reason across voice, text, and vision in real time.

Facilitating interaction between humans and machines (with reduced latency) represents a “small step for machine, giant leap for machine-kind” moment.

Everyone gets access to GPT-4: “the special thing about GPT-4o is it brings GPT-4 level intelligence to everyone, including our free users”, said CTO Mira Murati. Free users will also get access to custom GPTs in the GPT store, Vision, and Code Interpreter. ChatGPT Plus and Team users will be able to start using GPT-4o’s text and image capabilities now.

ChatGPT launched a desktop macOS app: it’s designed to integrate seamlessly into anything a user is doing on their keyboard. A PC Windows version is also in the works (notable that a Mac version is being released first, given the $10B Microsoft relationship).


Also relevant, see:

OpenAI Drops GPT-4 Omni, New ChatGPT Free Plan, New ChatGPT Desktop App — from theneuron.ai [podcast]

In a surprise launch, OpenAI dropped GPT-4 Omni, their new leading model. They also made a bunch of paid features in ChatGPT free and announced a new desktop app. Pete breaks down what you should know and what this says about AI.


What really matters — from theneurondaily.com

  • Free users get 16 GPT-4o messages per 3 hours.
  • Plus users get 80 GPT-4o messages per 3 hours.
  • Team users get 160 GPT-4o messages per 3 hours.
 

Hello GPT-4o — from openai.com
We’re announcing GPT-4o, our new flagship model that can reason across audio, vision, and text in real time.

GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, image, and video and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models.

Example topics covered here:

  • Two GPT-4os interacting and singing
  • Languages/translation
  • Personalized math tutor
  • Meeting AI
  • Harmonizing and creating music
  • Providing inflection, emotions, and a human-like voice
  • Understanding what the camera is looking at and integrating it into the AI’s responses
  • Providing customer service

With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. Because GPT-4o is our first model combining all of these modalities, we are still just scratching the surface of exploring what the model can do and its limitations.





From DSC:
I like the assistive tech angle here:





 

 

Smart(er) Glasses: Introducing New Ray-Ban | Meta Styles + Expanding Access to Meta AI with Vision — from meta.com

  • Share Your View on a Video Call
  • Meta AI Makes Your Smart Glasses Smarter
  • All In On AI-Powered Hardware

New Ray-Ban | Meta Smart Glasses Styles and Meta AI Updates — from about.fb.com

Takeaways

  • We’re expanding the Ray-Ban Meta smart glasses collection with new styles.
  • We’re adding video calling with WhatsApp and Messenger to share your view on a video call.
  • We’re rolling out Meta AI with Vision, so you can ask your glasses about what you’re seeing and get helpful information — completely hands-free.

 
© 2024 | Daniel Christian