ElevenLabs’ AI Voice Generator Can Now Fake Your Voice in 30 Languages — from gizmodo.com by Kyle Barr
ElevenLabs says its AI voice generator is out of beta and will support video game and audiobook creators with cheap audio.

According to ElevenLabs, the new Multilingual v2 model can produce “emotionally rich” audio in a total of 30 languages. The company offers two AI voice tools: a text-to-speech model, and “VoiceLab,” which lets paying users input fragments of their own (or others’) speech into the model to create a kind of voice clone. With the v2 model, users can get these generated voices to start speaking in Greek, Malay, or Turkish.
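For developers, a text-to-speech call to a service like this mostly amounts to posting a JSON body that names the text, the voice, and the model. The sketch below only builds such a request rather than sending it; the base URL and field names follow ElevenLabs’ public v1 REST documentation, but treat them as assumptions to verify against the current docs:

```python
import json

API_BASE = "https://api.elevenlabs.io/v1"  # public REST base (assumption)

def build_tts_request(text, voice_id, model_id="eleven_multilingual_v2"):
    """Build the URL and JSON body for a text-to-speech call.

    The endpoint path and field names mirror ElevenLabs' documented v1
    API, but check the current docs before relying on them.
    """
    url = f"{API_BASE}/text-to-speech/{voice_id}"
    body = {
        "text": text,
        "model_id": model_id,  # Multilingual v2 covers ~30 languages
        "voice_settings": {"stability": 0.5, "similarity_boost": 0.75},
    }
    return url, json.dumps(body)

# voice_id is a placeholder; a real one comes from the VoiceLab tool.
url, body = build_tts_request("Kalimera! A clip generated in Greek.", "VOICE_ID")
```

Note that switching languages is just a matter of the `model_id` and the input text, which is why a model upgrade like v2 reaches existing integrations with so little code change.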

Since then, ElevenLabs claims it has integrated new measures to ensure users can only clone their own voice. Users need to verify their speech with a text captcha prompt, which is then compared to the original voice sample.

From DSC:
I don’t care what they say regarding safeguards/proof of identity/etc. This technology has been abused and will be abused in the future. We can count on it. The question now is, how do we deal with it?



Google, Amazon, Nvidia and other tech giants invest in AI startup Hugging Face, sending its valuation to $4.5 billion — from cnbc.com by Kif Leswing

Hugging Face produces a platform where AI developers can share code, models, and data sets, and can use the company’s developer tools to get open-source artificial intelligence models running more easily. In particular, Hugging Face often hosts weights, or large files with lists of numbers, which are the heart of most modern AI models.

While Hugging Face has developed some models, like BLOOM, its primary product is its website platform, where users can upload models and their weights. It also develops a series of software tools called libraries that allow users to get models working quickly, to clean up large datasets, or to evaluate their performance. It also hosts some AI models in a web interface so end users can experiment with them.
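To demystify what hosting “weights” means: a checkpoint is essentially a file that maps layer names to lists of numbers. A toy round-trip in pure Python (real checkpoints use binary formats such as safetensors and run to many gigabytes, but the structure is the same idea; the layer names here are made up):

```python
import json

# A toy "checkpoint": layer names mapped to flat lists of numbers.
weights = {
    "embedding.weight": [0.01, -0.02, 0.03, 0.04],
    "attention.query.weight": [0.5, -0.5, 0.25, 0.0],
    "lm_head.bias": [0.0, 0.1],
}

serialized = json.dumps(weights)   # what gets uploaded to the hub
restored = json.loads(serialized)  # what a downloader gets back

# A model's "size" is just the total count of these numbers.
num_parameters = sum(len(v) for v in restored.values())
```

This is why a platform that reliably stores and serves such files, plus the libraries to load them, can be a product in its own right even when the models themselves are made elsewhere.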


The global semiconductor talent shortage — from www2.deloitte.com
How to solve semiconductor workforce challenges

Numerous skills are required to grow the semiconductor ecosystem over the next decade. Globally, we will need tens of thousands of skilled tradespeople to build new plants to increase and localize manufacturing capacity: electricians, pipefitters, welders; thousands more graduate electrical engineers to design chips and the tools that make the chips; more engineers of various kinds in the fabs themselves, but also operators and technicians. And if we grow the back end in Europe and the Americas, that equates to even more jobs.

Each of these job groups has distinct training and educational needs; however, the number of students in semiconductor-focused programs (for example, undergraduates in semiconductor design and fabrication) has dwindled. Skills are also evolving within these job groups, in part due to automation and increased digitization. Digital skills, such as cloud, AI, and analytics, are needed in design and manufacturing more than ever.

The chip industry has long partnered with universities and engineering schools. Going forward, it also needs to work more with local tech schools, vocational schools, and community colleges, as well as other organizations, such as the National Science Foundation in the United States.


Our principles for partnering with the music industry on AI technology — from blog.youtube (Google) by Neal Mohan, CEO, YouTube
AI is here, and we will embrace it responsibly together with our music partners.

  • Principle #1: AI is here, and we will embrace it responsibly together with our music partners.
  • Principle #2: AI is ushering in a new age of creative expression, but it must include appropriate protections and unlock opportunities for music partners who decide to participate.
  • Principle #3: We’ve built an industry-leading trust and safety organization and content policies. We will scale those to meet the challenges of AI.

Developers are now using AI for text-to-music apps — from techcrunch.com by Ivan Mehta

Brett Bauman, the developer of PlaylistAI (previously LineupSupply), launched a new app called Songburst on the App Store this week. The app doesn’t have a steep learning curve: you just type in a prompt like “Calming piano music to listen to while studying” or “Funky beats for a podcast intro” to let the app generate a music clip.

If you can’t think of a prompt, the app offers prompts in different categories, including video, lo-fi, podcast, gaming, meditation, and sample.


A Generative AI Primer — from er.educause.edu by Brian Basgen
Understanding the current state of technology requires understanding its origins. This reading list provides sources relevant to the form of generative AI that led to natural language processing (NLP) models such as ChatGPT.


Three big questions about AI and the future of work and learning — from workshift.opencampusmedia.org by Alex Swartsel
AI is set to transform education and work today and well into the future. We need to start asking tough questions right now, writes Alex Swartsel of JFF.

  1. How will AI reshape jobs, and how can we prepare all workers and learners with the skills they’ll need?
  2. How can education and workforce leaders equitably adopt AI platforms to accelerate their impact?
  3. How might we catalyze sustainable policy, practice, and investments in solutions that drive economic opportunity?

“As AI reshapes both the economy and society, we must collectively call for better data, increased accountability, and more flexible support for workers,” Swartsel writes.


The Current State of AI for Educators (August, 2023) — from drphilippahardman.substack.com by Dr. Philippa Hardman
A podcast interview with the University of Toronto on where we’re at & where we’re going.

 

From DSC:
As Rob Toews points out in his recent article at Forbes.com, we had better hope that the Taiwan Semiconductor Manufacturing Company (TSMC) builds out the capacity to make chips in various countries. Why? Because:

The following statement is utterly ludicrous. It is also true. The world’s most important advanced technology is nearly all produced in a single facility.

What’s more, that facility is located in one of the most geopolitically fraught areas on earth—an area in which many analysts believe that war is inevitable within the decade.

The future of artificial intelligence hangs in the balance.

The Taiwan Semiconductor Manufacturing Company (TSMC) makes ***all of the world’s advanced AI chips.*** Most importantly, this means Nvidia’s GPUs; it also includes the AI chips from Google, AMD, Amazon, Microsoft, Cerebras, SambaNova, Untether and every other credible competitor.

— from The Geopolitics Of AI Chips Will Define The Future Of AI at Forbes.com by Rob Toews

Little surprise, then, that Time Magazine described TSMC as “the world’s most important company that you’ve probably never heard of.”

 


From DSC:
If that facility were actually the only one and something happened to it, look at how many things would be impacted as of early May 2023!


 

Examples of generative AI models

 

Introducing Teach AI — Empowering educators to teach w/ AI & about AI [ISTE & many others]




Also relevant/see:

 

Radar Trends to Watch: May 2023 Developments in Programming, Security, Web, and More — from oreilly.com by Mike Loukides

Excerpt:

Large language models continue to colonize the technology landscape. They’ve broken out of the AI category, and now are showing up in security, programming, and even the web. That’s a natural progression, and not something we should be afraid of: they’re not coming for our jobs. But they are remaking the technology industry.

One part of this remaking is the proliferation of “small” large language models. We’ve noted the appearance of llama.cpp, Alpaca, Vicuna, Dolly 2.0, Koala, and a few others. But that’s just the tip of the iceberg. Small LLMs are appearing every day, and some will even run in a web browser. This trend promises to be even more important than the rise of the “large” LLMs, like GPT-4. Only a few organizations can build, train, and run the large LLMs. But almost anyone can train a small LLM that will run on a well-equipped laptop or desktop.
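To make the small end of that spectrum concrete, here is a deliberately tiny sketch: a character-bigram counter in pure Python. It is not an LLM, but it shows the count-and-predict core that laptop-scale language models elaborate on with neural networks and far more text:

```python
from collections import defaultdict

def train_bigram_lm(text):
    """Count character bigrams: the smallest possible 'language model'."""
    counts = defaultdict(lambda: defaultdict(int))
    for current, nxt in zip(text, text[1:]):
        counts[current][nxt] += 1
    return counts

def most_likely_next(model, ch):
    """Greedy next-character prediction from the bigram counts."""
    followers = model.get(ch)
    if not followers:
        return None
    return max(followers, key=followers.get)

# "Training" on this toy corpus takes microseconds on any laptop.
model = train_bigram_lm("the theory of the thing")
```

Scaling this idea up (longer contexts, learned weights instead of raw counts) is exactly what the small open models do, which is why they fit on consumer hardware while GPT-4-class models do not.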

 

 


The future of computer programming in prison – College Inside; written by Open Campus national reporter Charlotte West.
A biweekly newsletter about the future of postsecondary education in prisons.

Excerpt:

Participant Leonard Bishop hadn’t touched technology in the 17 years he served in the federal system prior to transferring to the D.C. Jail in 2018. When he first got a tablet, he said it took him a few days to figure out how to navigate through it, but then “I couldn’t put it down.”

Bishop said he was surprised by how easy it was to learn the skills he needed to earn the AWS certification. “It helps you transition back into society, especially for someone who has been gone so long,” he said.


Also relevant/see:

This AWS Cloud certification program opens new paths for inmates — from amazon.com; with thanks to Paul Fain for this resource
A jail-based program aims to expand career opportunities through cloud-skills training.

Excerpt:

Julian Blair knew nothing about cloud computing when he became incarcerated in a Washington, D.C. jail more than two years ago.

“I’d never done anything with a computer besides video games, typing papers in college, and downloading music on an iPad,” said Blair.

Now, after three months of work with an educational program led by APDS and Amazon Web Services (AWS) inside the jail, Blair and 10 other residents at the facility have successfully passed the AWS Certified Cloud Practitioner exam.


 

How ChatGPT3 Impacts the Future of L&D in an AI World — from learningguild.com by Markus Bernhardt and Clark Quinn

Excerpt:

Recent advances in artificial intelligence (AI) are promising great things for learning. The potential here is impressive, but there also exist many questions and insecurities around deploying AI technology for learning: What can AI do? Where is it best utilized? What are the limits? And particularly: What does that leave for the instructional designer and other human roles in learning, such as coaching and training?

We want to suggest that these developments are for the benefit of everyone—from organizational development strategy devised in the C-suite, via content creation/curation by instructional designers, right through to the learners, as well as coaches and trainers who work with the learners.

Also somewhat relevant/see:

 

“Tech predictions for 2023 and beyond” — from allthingsdistributed.com by Werner Vogels, Chief Technology Officer at Amazon

Excerpts:

  • Prediction 1: Cloud technologies will redefine sports as we know them
  • Prediction 2: Simulated worlds will reinvent the way we experiment
  • Prediction 3: A surge of innovation in smart energy
  • Prediction 4: The upcoming supply chain transformation
  • Prediction 5: Custom silicon goes mainstream
 

From Teaching to Tech – Q&A With Joanna Cappuccilli — from devlinpeck.com by Devlin Peck and Joanna Cappuccilli
Would you like to transition out of the classroom and into a corporate instructional design role?

Excerpt:

In this Q&A session, we talk with Joanna Cappuccilli about how she transitioned from full-time teacher to full-time curriculum developer at Amazon Web Services, as well as the steps she took to get there.

Joanna and I discuss how developing a portfolio, networking effectively, and preparing extensively for interviews led to her successfully landing a tech role.

 

Amazon ups its cloud training investments — from workshift.opencampusmedia.org by Elyse Ashburn
Amazon Web Services just launched a new skills center near D.C. and is expanding both its in-person and online training programs for cloud careers.

Excerpt:

The big idea: The skills center is just one part of AWS’ plan to spend hundreds of millions of dollars providing free training in cloud computing to 29 million people globally by 2025. In the past year, the company has dramatically increased its free cloud skills offerings, adding AWS Skill Builder, an online library of 500-plus self-paced courses. It’s also twice expanded re/Start, its cohort-based training program for workers who are unemployed or underemployed.

Thus far, the company has helped more than 13 million people gain cloud skills for free through its various offerings—seven million more than this time last year.

 

The Metaverse Will Reshape Our Lives. Let’s Make Sure It’s for the Better. — from time.com by Matthew Ball

Excerpts (emphasis DSC):

The metaverse, a 30-year-old term but nearly century-old idea, is forming around us. Every few decades, a platform shift occurs—such as that from mainframes to PCs and the internet, or the subsequent evolution to mobile and cloud computing. Once a new era has taken shape, it’s incredibly difficult to alter who leads it and how. But between eras, those very things usually do change. If we hope to build a better future, then we must be as aggressive about shaping it as are those who are investing to build it.

The next evolution to this trend seems likely to be a persistent and “living” virtual world that is not a window into our life (such as Instagram) nor a place where we communicate it (such as Gmail) but one in which we also exist—and in 3D (hence the focus on immersive VR headsets and avatars).

 

Inside a radical new project to democratize AI — from technologyreview.com by Melissa Heikkilä
A group of over 1,000 AI researchers has created a multilingual large language model bigger than GPT-3—and they’re giving it out for free.

Excerpt:

PARIS — This is as close as you can get to a rock concert in AI research. Inside the supercomputing center of the French National Center for Scientific Research, on the outskirts of Paris, rows and rows of what look like black fridges hum at a deafening 100 decibels.

They form part of a supercomputer that has spent 117 days gestating a new large language model (LLM) called BLOOM that its creators hope represents a radical departure from the way AI is usually developed.

Unlike other, more famous large language models such as OpenAI’s GPT-3 and Google’s LaMDA, BLOOM (which stands for BigScience Large Open-science Open-access Multilingual Language Model) is designed to be as transparent as possible, with researchers sharing details about the data it was trained on, the challenges in its development, and the way they evaluated its performance. OpenAI and Google have not shared their code or made their models available to the public, and external researchers have very little understanding of how these models are trained.

Another item re: AI:

Not my job: AI researchers building surveillance tech and deepfakes resist ethical concerns — from protocol.com by Kate Kaye
The computer vision research community is behind on AI ethics, but it’s not just a research problem. Practitioners say the ethics disconnect persists as young computer vision scientists make their way into the ranks of corporate AI.

For the first time, the Computer Vision and Pattern Recognition Conference — a global event that attracted companies including Amazon, Google, Microsoft and Tesla to recruit new AI talent this year — “strongly encouraged” researchers whose papers were accepted to the conference to include a discussion about potential negative societal impacts of their research in their submission forms.

 

Radar Trends to Watch: July 2022 — from oreilly.com
Developments in AI, Metaverse, Programming, and More

Excerpt (emphasis DSC):

The most important issue facing technology might now be the protection of privacy. While that’s not a new concern, it’s a concern that most computer users have been willing to ignore, and that most technology companies have been willing to let them ignore. New state laws that criminalize having abortions out of state and the stockpiling of location information by antiabortion groups have made privacy an issue that can’t be ignored.

Also relevant/see:

 

Radar Trends to Watch: June 2022 — from oreilly.com

Excerpt:

The explosion of large models continues. Several developments are especially noteworthy. DeepMind’s Gato model is unique in that it’s a single model that’s trained for over 600 different tasks; whether or not it’s a step towards general intelligence (the ensuing debate may be more important than the model itself), it’s an impressive achievement. Google Brain’s Imagen creates photorealistic images that are impressive, even after you’ve seen what DALL-E 2 can do. And Allen AI’s Macaw (surely an allusion to Emily Bender and Timnit Gebru’s Stochastic Parrots paper) is open source, one tenth the size of GPT-3, and claims to be more accurate. Facebook/Meta is also releasing an open source large language model, including the model’s training log, which records in detail the work required to train it.

 

 
 
© 2024 | Daniel Christian