Is Your AI Model Going Off the Rails? There May Be an Insurance Policy for That — from wsj.com by Belle Lin; via Brainyacts
As generative AI creates new risks for businesses, insurance companies sense an opportunity to cover the ways AI could go wrong

The many ways a generative artificial intelligence project can go off the rails pose an opportunity for insurance companies, even as those grim scenarios keep business technology executives up at night.

Taking a page from cybersecurity insurance, which saw an uptick in the wake of major breaches several years ago, insurance providers have started taking steps into the AI space by offering financial protection against models that fail.

Corporate technology leaders say such policies could help them address risk-management concerns from board members, chief executives and legal departments.

 



AI Meets Med School — from insidehighered.com by Lauren Coffey
Adding to academia’s AI embrace, two institutions in the University of Texas system are jointly offering a medical degree paired with a master’s in artificial intelligence.

Doctor AI

The University of Texas at San Antonio has launched a dual-degree program combining medical school with a master’s in artificial intelligence.

Several universities across the nation have begun integrating AI into medical practice. Medical schools at the University of Florida, the University of Illinois, the University of Alabama at Birmingham and Stanford and Harvard Universities all offer variations of a certificate in AI in medicine that is largely geared toward existing professionals.

“I think schools are looking at, ‘How do we integrate and teach the uses of AI?’” Dr. Whelan said. “And in general, when there is an innovation, you want to integrate it into the curriculum at the right pace.”

Speaking of emerging technologies and med school, also see:


Though not necessarily edu-related, this was interesting to me and hopefully will be to some profs and/or students out there:


How to stop AI deepfakes from sinking society — and science — from nature.com by Nicola Jones; via The Neuron
Deceptive videos and images created using generative AI could sway elections, crash stock markets and ruin reputations. Researchers are developing methods to limit their harm.





Exploring the Impact of AI in Education with PowerSchool’s CEO & Chief Product Officer — from michaelbhorn.substack.com by Michael B. Horn

With just under 10 acquisitions in the last 5 years, PowerSchool has been active in transforming itself from a student information systems company to an integrated education company that works across the day and lifecycle of K–12 students and educators. What’s more, the company turned heads in June with its announcement that it was partnering with Microsoft to integrate AI into its PowerSchool Performance Matters and PowerSchool LearningNav products to empower educators in delivering transformative personalized-learning pathways for students.


AI Learning Design Workshop: The Trickiness of AI Bootcamps and the Digital Divide — from eliterate.us by Michael Feldstein

As readers of this series know, I’ve developed a six-session design/build workshop series for learning design teams to create an AI Learning Design Assistant (ALDA). In my last post in this series, I provided an elaborate ChatGPT prompt that can be used as a rapid prototype that everyone can try out and experiment with. In this post, I’d like to focus on how to address the challenges of AI literacy effectively and equitably.


Global AI Legislation Tracker — from iapp.org; via Tom Barrett

Countries worldwide are designing and implementing AI governance legislation commensurate to the velocity and variety of proliferating AI-powered technologies. Legislative efforts include the development of comprehensive legislation, focused legislation for specific use cases, and voluntary guidelines and standards.

This tracker identifies legislative policy and related developments in a subset of jurisdictions. It is not globally comprehensive, nor does it include all AI initiatives within each jurisdiction, given the rapid and widespread policymaking in this space. This tracker offers brief commentary on the wider AI context in specific jurisdictions, and lists index rankings provided by Tortoise Media, the first index to benchmark nations on their levels of investment, innovation and implementation of AI.


Diving Deep into AI: Navigating the L&D Landscape — from learningguild.com by Markus Bernhardt

The prospect of AI-powered, tailored, on-demand learning and performance support is exhilarating: It starts with traditional digital learning made into fully adaptive learning experiences, which would adjust to strengths and weaknesses for each individual learner. The possibilities extend all the way through to simulations and augmented reality, an environment to put into practice knowledge and skills, whether as individuals or working in a team simulation. The possibilities are immense.



Learning Lab | ChatGPT in Higher Education: Exploring Use Cases and Designing Prompts — from events.educause.edu; via Robert Gibson on LinkedIn

Part 1: October 16 | 3:00–4:30 p.m. ET
Part 2: October 19 | 3:00–4:30 p.m. ET
Part 3: October 26 | 3:00–4:30 p.m. ET
Part 4: October 30 | 3:00–4:30 p.m. ET


Mapping AI’s Role in Education: Pioneering the Path to the Future — from marketscale.com by Michael B. Horn, Jacob Klein, and Laurence Holt

Welcome to The Future of Education with Michael B. Horn. In this insightful episode, Michael gains perspective on mapping AI’s role in education from Jacob Klein, a Product Consultant at Oko Labs, and Laurence Holt, an Entrepreneur In Residence at the XQ Institute. Together, they peer into the burgeoning world of AI in education, analyzing its potential, risks, and roadmap for integrating it seamlessly into learning environments.


Ten Wild Ways People Are Using ChatGPT’s New Vision Feature — from newsweek.com by Meghan Roos; via Superhuman

Below are 10 creative ways ChatGPT users are making use of this new vision feature.


 

ChatGPT can now see, hear, and speak — from openai.com
We are beginning to roll out new voice and image capabilities in ChatGPT. They offer a new, more intuitive type of interface by allowing you to have a voice conversation or show ChatGPT what you’re talking about.

Voice and image give you more ways to use ChatGPT in your life. Snap a picture of a landmark while traveling and have a live conversation about what’s interesting about it. When you’re home, snap pictures of your fridge and pantry to figure out what’s for dinner (and ask follow up questions for a step by step recipe). After dinner, help your child with a math problem by taking a photo, circling the problem set, and having it share hints with both of you.

We’re rolling out voice and images in ChatGPT to Plus and Enterprise users over the next two weeks. Voice is coming on iOS and Android (opt-in in your settings) and images will be available on all platforms.





OpenAI Seeks New Valuation of Up to $90 Billion in Sale of Existing Shares — from wsj.com (behind paywall)
Potential sale would value startup at roughly triple where it was set earlier this year


The World’s First AI Cinema Experience Starring YOU Is Open In NZ And Buzzy Doesn’t Cover It — from theedge.co.nz by Seth Gupwell
Allow me to manage your expectations.

Because it’s the first-ever on Earth, it’s hard to label what kind of entertainment Hypercinema is. While it’s marketed as a “live AI experience” that blends “theatre, film and digital technology”, Dr. Gregory made it clear that it’s not here to make movies and TV extinct.

Your face and personality are how HyperCinema sets itself apart from the art forms of old. You get 15 photos of your face taken from different angles, then answer a questionnaire – mine started by asking what my fave vegetable was and ended by demanding to know what I thought the biggest threat to humanity was. Deep stuff, but the questions are always changing, cos that’s how AI rolls.

All of this information is stored on your cube – a green, glowing accessory that you carry around for the whole experience and insert into different sockets to transfer your info onto whatever screen is in front of you. Upon inserting your cube, the “live AI experience” starts.

The AI has taken your photos and superimposed your face on a variety of made-up characters in different situations.


Announcing Microsoft Copilot, your everyday AI companion — from blogs.microsoft.com by Yusuf Mehdi

We are entering a new era of AI, one that is fundamentally changing how we relate to and benefit from technology. With the convergence of chat interfaces and large language models you can now ask for what you want in natural language and the technology is smart enough to answer, create it or take action. At Microsoft, we think about this as having a copilot to help navigate any task. We have been building AI-powered copilots into our most used and loved products – making coding more efficient with GitHub, transforming productivity at work with Microsoft 365, redefining search with Bing and Edge and delivering contextual value that works across your apps and PC with Windows.

Today we take the next step to unify these capabilities into a single experience we call Microsoft Copilot, your everyday AI companion. Copilot will uniquely incorporate the context and intelligence of the web, your work data and what you are doing in the moment on your PC to provide better assistance – with your privacy and security at the forefront.


DALL·E 3 — from openai.com

DALL·E 3 understands significantly more nuance and detail than our previous systems, allowing you to easily translate your ideas into exceptionally accurate images.
DALL·E 3 is now in research preview, and will be available to ChatGPT Plus and Enterprise customers in October, via the API and in Labs later this fall.
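
DALL·E 3’s own API parameters had not been published at the time of this announcement. For a rough sense of what programmatic image generation looks like, here is a minimal sketch using the image endpoint in the existing 0.x-era openai Python package (which at this point serves DALL·E 2); the prompt and API key are placeholders.

```python
# Minimal sketch of OpenAI's image-generation endpoint via the 0.x `openai`
# package. At the time of the DALL·E 3 announcement this endpoint served
# DALL·E 2; DALL·E 3's API parameters had not yet been published.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Image.create(
    prompt="A watercolor illustration of a lighthouse at sunrise",
    n=1,
    size="1024x1024",
)

print(response["data"][0]["url"])  # URL of the generated image
```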


 

Why Shaquille O’Neal led edtech startup Edsoma’s $2.5M seed round — from techcrunch.com by Kirsten Korosec; via GSV

Edsoma is an app that uses an AI reading assistant to help people learn or improve their reading and communication.

For now, the company is targeting users in grades kindergarten to fourth grade based on the content that it has today. Wallgren noted that Edsoma’s technology will work right through into university, and he has ambitions to become the No. 1 literacy resource in the United States.


Outschool launches an AI-powered tool to help teachers write progress reports — from techcrunch.com by Lauren Forristal; via GSV

Outschool, the online learning platform that offers kid-friendly academic and interest-based classes, announced today the launch of its AI Teaching Assistant, a tool for tutors to generate progress reports for their students. The platform — mainly popular for its small group class offerings — also revealed that it’s venturing into one-on-one tutoring, putting it in direct competition with companies like Varsity Tutors, Tutor.com and Preply.

 

 

School Guide to Student Financial Literacy: What to Teach and When — from couponchief.com by Linda Phillips; with thanks to Karen Bell for this resource

It’s crucial – for individuals and the larger community – that students and young adults develop a solid foundation of personal finance knowledge, skills and habits in order to thrive. Practicing good money habits means the difference between long-term financial security and serious financial straits.

Financial literacy education is the responsibility of everyone, but most particularly parents and teachers. This guide focuses primarily on teaching financial literacy in elementary, middle and high schools. However, the concepts discussed below – and many of the resources listed – are also helpful for parents and others interested in promoting sound personal finance practices by kids and teens alike. Below you’ll find our suggestions for what concepts should be taught to kids from pre-k through grade 12, and the best times to introduce those concepts. You’ll also find an extensive list of some of the best resources – books, lesson plans, activities, videos, games and more – to supplement financial literacy education in the classroom.

 

Next, The Future of Work is… Intersections — from linkedin.com by Gary A. Bolles; via Roberto Ferraro

So much of the way that we think about education and work is organized into silos. Sure, that’s one way to ensure a depth of knowledge in a field and to encourage learners to develop mastery. But it also leads to domains with strict boundaries. Colleges are typically organized into school sub-domains, managed like fiefdoms, with strict rules for professors who can teach in different schools.

Yet it’s at the intersections of seemingly-disparate domains where breakthrough innovation can occur.

Maybe intersections bring a greater chance of future work opportunity, because that young person can increase their focus in one arena or another as they discover new options for work — and because this is what meaningful work in the future is going to look like.

From DSC:
This posting strikes me as an endorsement for interdisciplinary degrees. I agree with much of this. It’s just hard to find the right combination of disciplines. But I suppose that depends upon the individual student and what he/she is passionate or curious about.


Speaking of the future of work, also see:

Centaurs and Cyborgs on the Jagged Frontier — from oneusefulthing.org by Ethan Mollick
I think we have an answer on whether AIs will reshape work…

A lot of people have been asking if AI is really a big deal for the future of work. We have a new paper that strongly suggests the answer is YES.

Consultants using AI finished 12.2% more tasks on average, completed tasks 25.1% more quickly, and produced 40% higher quality results than those without. Those are some very big impacts. Now, let’s add in the nuance.

 

An excerpt from ‘The Magnificent Seven’ posting from Brandon Busteed on LinkedIn:

6. Create externship programs for faculty. Many college and university faculty have never worked outside of academia. Given a chance to be exposed to modern workplaces and work challenges, faculty will find innovative and creative ways to weave more work-integrated learning into their curriculum.

From DSC:
This is a great idea — thanks Brandon!

I might add another couple of thoughts here as well:

  • And/or treat your Adjunct Faculty Members much better as well!
  • And/or work with more L&D Departments at local companies (i.e., to develop closer, more beneficial/WIN-WIN collaborations).
 

Are your students prepared for active learning? You can help them! — from The Educationalist at educationalist.substack.com by Alexandra Mihai

What does active learning require from students?
It is no secret that PBL and other active learning approaches are much more demanding of students than traditional methods, mainly in terms of skills and attitudes towards learning. Here are some of the aspects where students, especially when first faced with active learning, seem to struggle:

  • Formulating own learning goals and following through with independent study. While in traditional teaching the learning goals are given to students, in PBL (or at least in some of its purest variants), they need to come up with their own, for each problem they are solving. This requires understanding the problem well but also a certain frame of mind where one can assess what is necessary to solve it and make a plan of how to go about it (independently and as a group). All these seemingly easy steps are often new to students and something they intrinsically expect from us as educators.

From DSC:
The above excerpt re: formulating one’s own learning goals reminded me of project management and learning how to be a project manager.

It reminded me of a project that I was assigned back at Kraft (actually Kraft General Foods at the time). It was an online directory of everyone in the company. When it was given to me, several questions arose in my mind:
  • Where do I start?
  • How do I even organize this project?
  • What is the list of to-do’s?
  • Who will I need to work with?

Luckily I had a mentor/guide who helped me get going and an excellent contact with the vendor who educated me and helped me get the ball rolling. 

I’ll end with another quote and a brief comment:

Not being afraid of mistakes and learning from them.
The education system, at all stages, still penalises mistakes, often with long term consequences. So it’s no wonder students are afraid of making mistakes…
From DSC:
How true.
 

The Prompt #14: Your Guide to Custom Instructions — from noisemedia.ai by Alex Banks

Whilst we typically cover a single ‘prompt’ to use with ChatGPT, today we’re exploring a new feature now available to everyone: custom instructions.

You provide specific directions for ChatGPT leading to greater control of the output. It’s all about guiding the AI to get the responses you really want.

To get started:
Log into ChatGPT → Click on your name/email in the bottom-left corner → select ‘Custom instructions’
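
Custom instructions live in the ChatGPT web interface, but if you work with the API instead, the closest equivalent is a standing system message sent with every request. Below is a minimal sketch, assuming the 0.x-era openai Python package; the instruction text and API key are purely illustrative.

```python
# Rough approximation of ChatGPT's "custom instructions" via the API:
# a standing system message that shapes every response in a conversation.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

CUSTOM_INSTRUCTIONS = (
    "About me: I'm an instructional designer in higher education. "
    "How to respond: be concise, cite sources where possible, and end "
    "with one follow-up question I should consider."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": "Draft a five-question quiz on photosynthesis."},
    ],
)

print(response["choices"][0]["message"]["content"])
```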


Meet Zoom AI Companion, your new AI assistant! Unlock the benefits with a paid Zoom account — from blog.zoom.us by Smita Hashim

We’re excited to introduce you to AI Companion (formerly Zoom IQ), your new generative AI assistant across the Zoom platform. AI Companion empowers individuals by helping them be more productive, connect and collaborate with teammates, and improve their skills.

Envision being able to interact with AI Companion through a conversational interface and ask for help on a whole range of tasks, similarly to how you would with a real assistant. You’ll be able to ask it to help prepare for your upcoming meeting, get a consolidated summary of prior Zoom meetings and relevant chat threads, and even find relevant documents and tickets from connected third-party applications with your permission.

From DSC:
“You can ask AI Companion to catch you up on what you missed during a meeting in progress.”

And what if some key details were missed? Should you rely on this? I’d treat this with care/caution myself.



A.I.’s un-learning problem: Researchers say it’s virtually impossible to make an A.I. model ‘forget’ the things it learns from private user data — from fortune.com by Stephen Pastis (behind paywall)

That’s because, as it turns out, it’s nearly impossible to remove a user’s data from a trained A.I. model without resetting the model and forfeiting the extensive money and effort put into training it. To use a human analogy, once an A.I. has “seen” something, there is no easy way to tell the model to “forget” what it saw. And deleting the model entirely is also surprisingly difficult.

This represents one of the thorniest unresolved challenges of our incipient artificial intelligence era, alongside issues like A.I. “hallucinations” and the difficulties of explaining certain A.I. outputs.


More companies see ChatGPT training as a hot job perk for office workers — from cnbc.com by Mikaela Cohen

Key points:

  • Workplaces filled with artificial intelligence are closer to becoming a reality, making it essential that workers know how to use generative AI.
  • Offering specific AI chatbot training to current employees could be your next best talent retention tactic.
  • 90% of business leaders see ChatGPT as a beneficial skill in job applicants, according to a report from career site Resume Builder.

OpenAI Plugs ChatGPT Into Canva to Sharpen Its Competitive Edge in AI — from decrypt.co by Jose Antonio Lanz
Now ChatGPT Plus users can “talk” to Canva directly from OpenAI’s bot, making their workflow easier.

This strategic move aims to make the process of creating visuals such as logos, banners, and more even simpler for businesses and entrepreneurs.

This latest integration could improve the way users generate visuals by offering a streamlined and user-friendly approach to digital design.


From DSC:
This Tweet addresses a likely component of our future learning ecosystems:


Large language models aren’t people. Let’s stop testing them as if they were. — from technologyreview.com by Will Douglas Heaven
With hopes and fears about this technology running wild, it’s time to agree on what it can and can’t do.

That’s why a growing number of researchers—computer scientists, cognitive scientists, neuroscientists, linguists—want to overhaul the way they are assessed, calling for more rigorous and exhaustive evaluation. Some think that the practice of scoring machines on human tests is wrongheaded, period, and should be ditched.

“There’s a lot of anthropomorphizing going on,” she says. “And that’s kind of coloring the way that we think about these systems and how we test them.”

“There is a long history of developing methods to test the human mind,” says Laura Weidinger, a senior research scientist at Google DeepMind. “With large language models producing text that seems so human-like, it is tempting to assume that human psychology tests will be useful for evaluating them. But that’s not true: human psychology tests rely on many assumptions that may not hold for large language models.”


We Analyzed Millions of ChatGPT User Sessions: Visits are Down 29% since May, Programming Assistance is 30% of Use — from sparktoro.com by Rand Fishkin

In concert with the fine folks at Datos, whose opt-in, anonymized panel of 20M devices (desktop and mobile, covering 200+ countries) provides outstanding insight into what real people are doing on the web, we undertook a challenging project to answer at least some of the mystery surrounding ChatGPT.



Crypto in ‘arms race’ against AI-powered scams — Quantstamp co-founder — from cointelegraph.com by Tom Mitchelhill
Quantstamp’s Richard Ma explained that the coming surge in sophisticated AI phishing scams could pose an existential threat to crypto organizations.

With the field of artificial intelligence evolving at near breakneck speed, scammers now have access to tools that can help them execute highly sophisticated attacks en masse, warns the co-founder of Web3 security firm Quantstamp.


 

Why Christians need to support diversity professionals, not demonize them — from religionnews.com by Michelle Loyd-Paige
Even among Christians, DEI leaders find themselves isolated and unsupported.

For nearly 39 years, I have taught about and advocated for diversity, equity, inclusion, anti-racism and social justice in Christian contexts. I have been sustained by the knowledge that diversity is a part of God’s good creation and is celebrated in the Bible. 

And not just diversity, but love for our neighbors, care for the immigrant, and justice for the marginalized and oppressed. In fact, the Hebrew and Greek words for justice appear in Scripture more than 1,000 times. 

It could be argued that Jesus’ ministry on earth exemplified the value of diversity, the importance of inclusion and the obligation of justice and restoration. Our ministry — in schools, churches, business, wherever we find ourselves — should reflect the same.

From DSC:
I was at Calvin (then Calvin College) when Michelle was there. I am very grateful for her work over my 10+ years there. I learned many things from her and had my “lenses” refined several times due to her presentations, questions, and the media that she showed. Thank you, Michelle, for all of your work and uphill efforts! It’s made a difference! It impacted the culture at Calvin. It impacted me.

The other thing that helped shape my background was my family’s move to a much more diverse area. And I’ve tried to continue that perspective in my own family. I don’t know half of the languages that are spoken in our neighborhood, but I love the diversity there! I believe our kids (now mostly grown) have benefited from it and are better prepared for what they will encounter in the real world.

 

Future of Work Report: AI at Work — from economicgraph.linkedin.com; via Superhuman

The intersection of AI and the world of work: Not only are job postings increasing, but we’re seeing more LinkedIn members around the globe adding AI skills to their profiles than ever before. We’ve seen a 21x increase in the share of global English-language job postings that mention new AI technologies such as GPT or ChatGPT since November 2022. In June 2023, the number of AI-skilled members was 9x larger than in January 2016, globally.

The state of play of Generative AI (GAI) in the workforce: GAI technologies, including ChatGPT, are poised to start to change the way we work. In fact, 47% of US executives believe that using generative AI will increase productivity, and 92% agree that people skills are more important than ever. This means jobs won’t necessarily go away, but they will change, as will the skills necessary to do them.

Also relevant/see:

The Working Future: More Human, Not Less — from bain.com
It’s time to change how we think about work

Contents

  • Introduction
  • Motivations for Work Are Changing.
  • Beliefs about What Makes a “Good Job” Are Diverging
  • Automation Is Helping to Rehumanize Work
  • Technological Change Is Blurring the Boundaries of the Firm
  • Young Workers Are Increasingly Overwhelmed
  • Rehumanizing Work: The Journey Ahead
 

Introductory comments from DSC:

Sometimes people and vendors write about AI’s capabilities in such a glowingly positive way. It seems like AI can do everything in the world. And while I appreciate the growing capabilities of Large Language Models (LLMs) and the like, there are some things I don’t want AI-driven apps to do.

For example, I get why AI can be helpful in correcting my misspellings, my grammatical errors, and the like. That said, I don’t want AI to write my emails for me. I want to write my own emails. I want to communicate what I want to communicate. I don’t want to outsource my communication. 

And what if an AI tool summarizes an email series in a way that I miss some key pieces of information? Hmmm…not good.

Ok, enough soapboxing. I’ll continue with some resources.


ChatGPT Enterprise

Introducing ChatGPT Enterprise — from openai.com
Get enterprise-grade security & privacy and the most powerful version of ChatGPT yet.

We’re launching ChatGPT Enterprise, which offers enterprise-grade security and privacy, unlimited higher-speed GPT-4 access, longer context windows for processing longer inputs, advanced data analysis capabilities, customization options, and much more. We believe AI can assist and elevate every aspect of our working lives and make teams more creative and productive. Today marks another step towards an AI assistant for work that helps with any task, is customized for your organization, and that protects your company data.

Enterprise-grade security & privacy and the most powerful version of ChatGPT yet. — from openai.com


NVIDIA

Nvidia’s Q2 earnings prove it’s the big winner in the generative AI boom — from techcrunch.com by Kirsten Korosec

Nvidia Quarterly Earnings Report Q2 Smashes Expectations At $13.5B — from techbusinessnews.com.au
Nvidia’s quarterly earnings report (Q2) smashed expectations, coming in at $13.5B, more than double the prior figure of $6.7B. The chipmaker also projected revenue of about $16B for the quarter ending in October.


MISC

OpenAI Passes $1 Billion Revenue Pace as Big Companies Boost AI Spending — from theinformation.com by Amir Efrati and Aaron Holmes

OpenAI is currently on pace to generate more than $1 billion in revenue over the next 12 months from the sale of artificial intelligence software and the computing capacity that powers it. That’s far ahead of revenue projections the company previously shared with its shareholders, according to a person with direct knowledge of the situation.

OpenAI’s GPTBot blocked by major websites and publishers — from the-decoder.com by Matthias Bastian
An emerging chatbot ecosystem builds on existing web content and could displace traditional websites. At the same time, licensing and financing are largely unresolved.

OpenAI offers publishers and website operators an opt-out if they prefer not to make their content available to chatbots and AI models for free. This can be done by blocking OpenAI’s web crawler “GPTBot” via the robots.txt file. The bot collects content to improve future AI models, according to OpenAI.

Major media companies including the New York Times, CNN, Reuters, Chicago Tribune, ABC, and Australian Community Media (ACM) are now blocking GPTBot. Other web-based content providers such as Amazon, Wikihow, and Quora are also blocking the OpenAI crawler.
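
For site owners, the opt-out is a standard robots.txt rule: a User-agent: GPTBot entry with a Disallow line, per OpenAI’s documentation. To check whether a given site has opted out, Python’s built-in robots.txt parser is enough; this is a small sketch using one of the publishers named above as the example.

```python
# Check whether a site's robots.txt blocks OpenAI's GPTBot crawler.
# Standard library only; the site below is just an example (the NYT is one
# of the publishers reported to block GPTBot).
from urllib import robotparser

SITE = "https://www.nytimes.com"

parser = robotparser.RobotFileParser()
parser.set_url(f"{SITE}/robots.txt")
parser.read()  # fetch and parse the live robots.txt

for agent in ("GPTBot", "*"):
    allowed = parser.can_fetch(agent, f"{SITE}/")
    print(f"{agent}: {'allowed' if allowed else 'blocked'} for {SITE}/")
```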

Introducing Code Llama, a state-of-the-art large language model for coding — from ai.meta.com

Takeaways re: Code Llama:

  • Is a state-of-the-art LLM capable of generating code, and natural language about code, from both code and natural language prompts.
  • Is free for research and commercial use.
  • Is built on top of Llama 2 and is available in three models…
  • In our own benchmark testing, Code Llama outperformed state-of-the-art publicly available LLMs on code tasks.
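
The excerpt above doesn’t include code, but the Code Llama checkpoints are also distributed through Hugging Face, so a minimal local-inference sketch might look like the following. It assumes the transformers, torch, and accelerate packages, access to the codellama/CodeLlama-7b-hf checkpoint, and enough memory for a 7B model.

```python
# Minimal sketch: code completion with Code Llama via Hugging Face transformers.
# Assumes `transformers`, `torch`, and `accelerate` are installed and that the
# codellama/CodeLlama-7b-hf weights can be downloaded (several GB).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = 'def fibonacci(n: int) -> int:\n    """Return the n-th Fibonacci number."""\n'
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```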

Key Highlights of Google Cloud Next ‘23 — from analyticsindiamag.com by Shritama Saha
Meta’s Llama 2, Anthropic’s Claude 2, and TII’s Falcon join Model Garden, expanding model variety.

AI finally beats humans at a real-life sport — drone racing — from nature.com by Dan Fox
The new system combines simulation with onboard sensing and computation.

From DSC:
This is scary — not at all comforting to me. Militaries around the world continue their jockeying to be the most dominant, powerful, and effective killers of humankind. That definitely includes the United States and China. But certainly others as well. And below is another alarming item, also pointing out the downsides of how we use technologies.

The Next Wave of Scams Will Be Deepfake Video Calls From Your Boss — from bloomberg.com by Margi Murphy; behind paywall

Cybercriminals are constantly searching for new ways to trick people. One of the more recent additions to their arsenal was voice simulation software.

10 Great Colleges For Studying Artificial Intelligence — from forbes.com by Sim Tumay

The debut of ChatGPT in November created angst for college admission officers and professors worried they would be flooded by student essays written with the undisclosed assistance of artificial intelligence. But the explosion of interest in AI has benefits for higher education, including a new generation of students interested in studying and working in the field. In response, universities are revising their curriculums to educate AI engineers.

 

A TV show with no ending — from joinsuperhuman.ai by Zain Kahn
ALSO: Turbocharged GPT is here

We’re standing on the cusp of artificially generated content that could theoretically never end. According to futurist Sinéad Bovell, “Generative artificial intelligence also means that say we don’t want a movie or a series to end. It doesn’t have to, you could use AI to continue to generate more episodes and other sequels and have this kind of ongoing storyline.”

If we take this logic further, we could also see hyper-personalized content that’s created just for us. Imagine getting an AI-generated album from your favourite artist every week. Or a brand new movie starring actors who are no longer alive, like a new romcom with Marilyn Monroe and Frank Sinatra.

While this sounds like a compelling proposition for consumers, it’s mostly bad news for actors, writers, and other professionals working in the media industry. Hollywood studios are already investing heavily in generative AI, and many professionals working in the industry are afraid to lose their jobs.



 


ElevenLabs’ AI Voice Generator Can Now Fake Your Voice in 30 Languages — from gizmodo.com by Kyle Barr
ElevenLabs said its AI voice generator is out of beta and would support video game and audiobook creators with cheap audio.

According to ElevenLabs, the new Multilingual v2 model promises it can produce “emotionally rich” audio in a total of 30 languages. The company offers two AI voice tools: one is a text-to-speech model and the other is “VoiceLab,” which lets paying users create a kind of voice clone by inputting fragments of their own (or others’) speech into the model. With the v2 model, users can get these generated voices to start speaking in Greek, Malay, or Turkish.

Since then, ElevenLabs claims it has integrated new measures to ensure users can only clone their own voice. Users need to verify their speech with a text captcha prompt, which is then compared to the original voice sample.

From DSC:
I don’t care what they say regarding safeguards/proof of identity/etc. This technology has been abused and will be abused in the future. We can count on it. The question now is, how do we deal with it?



Google, Amazon, Nvidia and other tech giants invest in AI startup Hugging Face, sending its valuation to $4.5 billion — from cnbc.com by Kif Leswing

But Hugging Face produces a platform where AI developers can share code, models, data sets, and use the company’s developer tools to get open-source artificial intelligence models running more easily. In particular, Hugging Face often hosts weights, or large files with lists of numbers, which are the heart of most modern AI models.

While Hugging Face has developed some models, like BLOOM, its primary product is its website platform, where users can upload models and their weights. It also develops a series of software tools called libraries that allow users to get models working quickly, to clean up large datasets, or to evaluate their performance. It also hosts some AI models in a web interface so end users can experiment with them.
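
For readers who haven’t used these libraries, the sketch below shows what “getting a model working quickly” looks like in practice with the transformers pipeline interface; the tasks and the facebook/bart-large-cnn checkpoint are ordinary Hub examples chosen for illustration.

```python
# Minimal sketch of the Hugging Face `transformers` pipeline API: pull a hosted
# model from the Hub and run inference in a few lines.
from transformers import pipeline

# Sentiment analysis with the library's default model for the task.
classifier = pipeline("sentiment-analysis")
print(classifier("Hugging Face makes it easy to try open-source models."))

# The same interface works for other tasks and Hub checkpoints, e.g. summarization.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
print(summarizer(
    "Hugging Face hosts model weights and datasets, and its libraries let "
    "developers download and run those models locally with a few lines of code.",
    max_length=30,
))
```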


The global semiconductor talent shortage — from www2.deloitte.com
How to solve semiconductor workforce challenges

Numerous skills are required to grow the semiconductor ecosystem over the next decade. Globally, we will need tens of thousands of skilled tradespeople to build new plants to increase and localize manufacturing capacity: electricians, pipefitters, welders; thousands more graduate electrical engineers to design chips and the tools that make the chips; more engineers of various kinds in the fabs themselves, but also operators and technicians. And if we grow the back end in Europe and the Americas, that equates to even more jobs.

Each of these job groups has distinct training and educational needs; however, the number of students in semiconductor-focused programs (for example, undergraduates in semiconductor design and fabrication) has dwindled. Skills are also evolving within these job groups, in part due to automation and increased digitization. Digital skills, such as cloud, AI, and analytics, are needed in design and manufacturing more than ever.

The chip industry has long partnered with universities and engineering schools. Going forward, they also need to work more with local tech schools, vocational schools, and community colleges; and other organizations, such as the National Science Foundation in the United States.


Our principles for partnering with the music industry on AI technology — from blog.youtube (Google) by Neal Mohan, CEO, YouTube
AI is here, and we will embrace it responsibly together with our music partners.

  • Principle #1: AI is here, and we will embrace it responsibly together with our music partners.
  • Principle #2: AI is ushering in a new age of creative expression, but it must include appropriate protections and unlock opportunities for music partners who decide to participate.
  • Principle #3: We’ve built an industry-leading trust and safety organization and content policies. We will scale those to meet the challenges of AI.

Developers are now using AI for text-to-music apps — from techcrunch.com by Ivan Mehta

Brett Bauman, the developer of PlayListAI (previously LinupSupply), launched a new app called Songburst on the App Store this week. The app doesn’t have a steep learning curve. You just have to type in a prompt like “Calming piano music to listen to while studying” or “Funky beats for a podcast intro” to let the app generate a music clip.

If you can’t think of a prompt, the app has prompts in different categories, including video, lo-fi, podcast, gaming, meditation and sample.


A Generative AI Primer — from er.educause.edu by Brian Basgen
Understanding the current state of technology requires understanding its origins. This reading list provides sources relevant to the form of generative AI that led to natural language processing (NLP) models such as ChatGPT.


Three big questions about AI and the future of work and learning — from workshift.opencampusmedia.org by Alex Swartsel
AI is set to transform education and work today and well into the future. We need to start asking tough questions right now, writes Alex Swartsel of JFF.

  1. How will AI reshape jobs, and how can we prepare all workers and learners with the skills they’ll need?
  2. How can education and workforce leaders equitably adopt AI platforms to accelerate their impact?
  3. How might we catalyze sustainable policy, practice, and investments in solutions that drive economic opportunity?

“As AI reshapes both the economy and society, we must collectively call for better data, increased accountability, and more flexible support for workers,” Swartsel writes.


The Current State of AI for Educators (August, 2023) — from drphilippahardman.substack.com by Dr. Philippa Hardman
A podcast interview with the University of Toronto on where we’re at & where we’re going.

 

This is how the billable hour dies — from jordanfurlong.substack.com by Jordan Furlong
Let me tell you a story about the AI-driven evolution of pricing in the legal market. It might not happen for many years. It might happen much sooner. But when it does, I expect it’ll look like this.

So assemble some of your most creative, forward-thinking people, and ask them: “If the firm could no longer bill our work by the hour, how could we turn a profit?” Give them this article from 2012 to get them started. Show them the firm’s financials for the last 24 months, so that they know how you’re making money now. Have them speak with clients, technology experts, and pricing consultants for insights — might as well get them to ask ChatGPT, too. The answers you get will form the basis of your future strategic plans.

 