From DSC:
Great…we have another tool called Canvas. Or did you say Canva?

Introducing canvas — from OpenAI
A new way of working with ChatGPT to write and code

We’re introducing canvas, a new interface for working with ChatGPT on writing and coding projects that go beyond simple chat. Canvas opens in a separate window, allowing you and ChatGPT to collaborate on a project. This early beta introduces a new way of working together—not just through conversation, but by creating and refining ideas side by side.

Canvas was built with GPT-4o and can be manually selected in the model picker while in beta. Starting today we’re rolling out canvas to ChatGPT Plus and Team users globally. Enterprise and Edu users will get access next week. We also plan to make canvas available to all ChatGPT Free users when it’s out of beta.


Using AI to buy your home? These companies think it’s time you should — from usatoday.com by Andrea Riquier

The way Americans buy homes is changing dramatically.

New industry rules about how home buyers’ real estate agents get paid are prompting a reckoning among housing experts and the tech sector. Many house hunters who are already stretched thin by record-high home prices and closing costs must now decide whether, and how much, to pay an agent.

A 2-3% commission on the median home price of $416,700 could be well over $10,000. In a world where consumers are accustomed to using technology for everything from taxes to tickets, many entrepreneurs see an opportunity to automate away the middleman, even as some consumer advocates say not so fast.
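The commission figures above are easy to sanity-check. A minimal sketch of the arithmetic (rates and median price as cited in the article):

```python
# Commission on the median U.S. home price cited above, at typical agent rates
median_price = 416_700  # median home price from the article

for rate in (0.02, 0.025, 0.03):
    commission = median_price * rate
    print(f"{rate:.1%} of ${median_price:,} = ${commission:,.0f}")
```

Even at the midpoint of the 2-3% range the commission tops $10,000 — the fee the automation startups are chasing.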


The State of AI Report 2024 — from nathanbenaich.substack.com by Nathan Benaich


The Great Mismatch — from the-job.beehiiv.com by Paul Fain
Artificial intelligence could threaten millions of decent-paying jobs held by women without degrees.

Women in administrative and office roles may face the biggest AI automation risk, find Brookings researchers armed with data from OpenAI. Also, why Indiana could make the Swiss apprenticeship model work in this country, and how learners get disillusioned when a certificate doesn’t immediately lead to a good job.

A major new analysis from the Brookings Institution, using OpenAI data, found that the most vulnerable workers don’t look like the rail and dockworkers who have recaptured the national spotlight. Nor are they the creatives—like Hollywood’s writers and actors—that many wealthier knowledge workers identify with. Rather, they’re predominantly women in the 19M office support and administrative jobs that make up the first rung of the middle class.

“Unfortunately the technology and automation risks facing women have been overlooked for a long time,” says Molly Kinder, a fellow at Brookings Metro and lead author of the new report. “Most of the popular and political attention to issues of automation and work centers on men in blue-collar roles. There is far less awareness about the (greater) risks to women in lower-middle-class roles.”



Is this how AI will transform the world over the next decade? — from futureofbeinghuman.com by Andrew Maynard
Anthropic’s CEO Dario Amodei has just published a radical vision of an AI-accelerated future. It’s audacious, compelling, and a must-read for anyone working at the intersection of AI and society.

But if Amodei’s essay is approached as a conversation starter rather than a manifesto — which I think it should be — it’s hard to see how it won’t lead to clearer thinking around how we successfully navigate the coming AI transition.

Given the scope of the paper, it’s hard to write a response to it that isn’t as long as or longer than the original. Because of this, I’d strongly encourage anyone who’s looking at how AI might transform society to read the original — it’s well written, and easier to navigate than its length might suggest.

That said, I did want to pull out a few things that struck me as particularly relevant and important — especially within the context of navigating advanced technology transitions.

And speaking of that essay, here’s a summary from The Rundown AI:

Anthropic CEO Dario Amodei just published a lengthy essay outlining an optimistic vision for how AI could transform society within 5-10 years of achieving human-level capabilities, touching on longevity, politics, work, the economy, and more.

The details:

  • Amodei believes that by 2026, ‘powerful AI’ smarter than a Nobel Prize winner across fields, with agentic and multimodal capabilities, will be possible.
  • He also predicted that AI could compress 100 years of scientific progress into 10 years, curing most diseases and doubling the human lifespan.
  • The essay argued AI could strengthen democracy by countering misinformation and providing tools to undermine authoritarian regimes.
  • The CEO acknowledged potential downsides, including job displacement — but believes new economic models will emerge to address this.
  • He envisions AI driving unprecedented economic growth but emphasizes ensuring AI’s benefits are broadly distributed.

Why it matters: 

  • As the CEO of what is seen as the ‘safety-focused’ AI lab, Amodei paints a utopia-level optimistic view of where AI will head over the next decade. This thought-provoking essay serves as both a roadmap for AI’s potential and a call to action to ensure the responsible development of technology.

AI in the Workplace: Answering 3 Big Questions — from gallup.com by Kate Den Houter

However, most workers remain unaware of these efforts. Only a third (33%) of all U.S. employees say their organization has begun integrating AI into their business practices, with the highest percentage in white-collar industries (44%).

White-collar workers are more likely to be using AI. White-collar workers are, by far, the most frequent users of AI in their roles. While 81% of employees in production/frontline industries say they never use AI, only 54% of white-collar workers say they never do and 15% report using AI weekly.

Most employees using AI use it for idea generation and task automation. Among employees who say they use AI, the most common uses are to generate ideas (41%), to consolidate information or data (39%), and to automate basic tasks (39%).


Nvidia Blackwell GPUs sold out for the next 12 months as AI market boom continues — from techspot.com by Skye Jacobs
Analysts expect Team Green to increase its already formidable market share

Selling like hotcakes: The extraordinary demand for Blackwell GPUs illustrates the need for robust, energy-efficient processors as companies race to implement more sophisticated AI models and applications. The coming months will be critical to Nvidia as the company works to ramp up production and meet the overwhelming requests for its latest product.


Here’s my AI toolkit — from wondertools.substack.com by Jeremy Caplan and Nikita Roy
How and why I use the AI tools I do — an audio conversation

1. What are two useful new ways to use AI?

  • AI-powered research: Type a detailed search query into Perplexity instead of Google to get a quick, actionable summary response with links to relevant information sources. Read more of my take on why Perplexity is so useful and how to use it.
  • Notes organization and analysis: Tools like NotebookLM, Claude Projects, and Mem can help you make sense of huge repositories of notes and documents. Query or summarize your own notes and surface novel connections between your ideas.
 


The race against time to reinvent lawyers — from jordanfurlong.substack.com by Jordan Furlong
Our legal education and licensing systems produce one kind of lawyer. The legal market of the near future will need another kind. If we can’t close this gap fast, we’ll have a very serious problem.

Excerpt (emphasis DSC):

Lawyers will still need competencies like legal reasoning and analysis, statutory and contractual interpretation, and a range of basic legal knowledge. But it’s unhelpful to develop these skills through activities that lawyers won’t be performing much longer, while neglecting to provide them with other skills and prepare them for other situations that they will face. Our legal education and licensing systems are turning out lawyers whose competence profiles simply won’t match up with what people will need lawyers to do.

A good illustration of what I mean can be found in an excellent recent podcast from the Practising Law Institute, “Shaping the Law Firm Associate of the Future.” Over the course of the episode, moderator Jennifer Leonard of Creative Lawyers asked Professors Alice Armitage of UC Law San Francisco and Heidi K. Brown of New York Law School to identify some of the competencies that newly called lawyers and law firm associates are going to need in future. Here’s some of what they came up with:

  • Agile, nimble, extrapolative thinking
  • Collaborative, cross-disciplinary learning
  • Entrepreneurial, end-user-focused mindsets
  • Generative AI knowledge (“Their careers will be shaped by it”)
  • Identifying your optimal individual workflow
  • Iteration, learning by doing, and openness to failure
  • Leadership and interpersonal communication skills
  • Legal business know-how, including client standards and partner expectations
  • Receiving and giving feedback to enhance effectiveness

Legal Tech for Legal Departments – What In-House Lawyers Need to Know — from legal.thomsonreuters.com by Sterling Miller

Whatever the reason, you must understand the problem inside and out. Here are the key points to understanding your use case:

  • Identify the problem.
  • What is the current manual process to solve the problem?
  • Is there technology that will replace this manual process and solve the problem?
  • What will it cost and do you have (or can you get) the budget?
  • Will the benefits of the technology outweigh the cost? And how soon will those benefits pay off the cost? In other words, what is the return on investment?
  • Do you have the support of the organization to buy it (inside the legal department and elsewhere, e.g., CFO, CTO)?

2024-05-13: Of Legal AI — from emergentbehavior.co

Long discussion with a senior partner at a major Bay Area law firm:

Takeaways

A) They expect legal AI to decimate the profession…
B) Unimpressed by most specific legal AI offerings…
C) Generative AI error rates are acceptable even at 10–20%…
D) The future of corporate law is in-house…
E) The future of law in general?…
F) Of one large legal AI player…


2024 Legal Technology Survey Results — from lexology.com

Additional findings of the annual survey include:

  • 77 percent of firms have a formal technology strategy in place
  • Interest and intentions regarding generative A.I. remain high, with almost 80 percent of participating firms expecting to leverage it within the next five years. Many have either already begun or are planning to undertake data hygiene projects as a precursor to using generative A.I. and other automation solutions. Although legal market analysts have hypothesized that proprietary building of generative A.I. solutions remains out of reach for mid-sized firms, several Meritas survey respondents are gaining traction. Many other firms are also licensing third-party generative A.I. solutions.
  • The survey showed strong technology progression among several Meritas member firms, with most adopting a tech stack of core, foundational systems of infrastructure technology and adding cloud-based practice management, document management, time, billing, and document drafting applications.
  • Most firms reported increased adoption and utilization of options already available within their current core systems, such as Microsoft Office 365 Teams, SharePoint, document automation, and other native functionalities for increasing efficiencies; these functions were used more often in place of dedicated purpose-built solutions such as comparison and proofreading tools.
  • The legal technology market serving Meritas’ member firms continues to be fractured, with very few providers emerging as market leaders.

AI Set to Save Professionals 12 Hours Per Week by 2029 — from legalitprofessionals.com

Thomson Reuters, a global content and technology company, today released its 2024 Future of Professionals report, an annual survey of more than 2,200 professionals working across legal, tax, and risk & compliance fields globally. Respondents predicted that artificial intelligence (AI) has the potential to save them 12 hours per week in the next five years, or four hours per week over the upcoming year – equating to 200 hours annually.

This timesaving potential is the equivalent productivity boost of adding an extra colleague for every 10 team members on staff. Harnessing the power of AI across various professions opens immense economic opportunities. For a U.S. lawyer, this could translate to an estimated $100,000 in additional billable hours.*
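The report’s numbers hang together arithmetically; here’s a quick check (the 50-week working year is an assumption implied by the report’s own totals):

```python
# Back-of-envelope check of the Thomson Reuters projections quoted above
hours_per_week_year1 = 4   # predicted savings over the upcoming year
working_weeks = 50         # assumed working weeks per year

annual_hours = hours_per_week_year1 * working_weeks
print(annual_hours)        # 200 hours, matching the report's annual figure

# 4 saved hours out of a standard 40-hour week is a 10% capacity gain --
# roughly one extra colleague for every 10 team members, as the report says.
print(hours_per_week_year1 / 40)
```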

 

From DSC:
I realize I lose a lot of readers on this Learning Ecosystems blog because I choose to talk about my faith and integrate scripture into these postings. So I have stayed silent on matters of politics — as I’ve been hesitant to lose even more people. But I can no longer stay silent re: Donald Trump.

I, too, fear for our democracy if Donald Trump becomes our next President. He is dangerous to our democracy.

Also, I can see now how Hitler came to power.

And look out other countries that Trump doesn’t like. He is dangerous to you as well.

He doesn’t care about the people of the United States (nor any other nation). He cares only about himself and gaining power. Look out if he becomes our next president. 


From Stefan Bauschard:

Unlimited Presidential power. According to Trump v. United States, the “President may not be prosecuted for exercising his core constitutional powers, and he is entitled to at least presumptive immunity from prosecution for his official acts.” Justice Sotomayor says this makes the President a “king.” This power + surveillance + AGI/autonomous weapons mean the President is now the most powerful king in the history of the world.

Democracy is only 200 years old.

 

AI candidate running for Parliament in the U.K. says AI can humanize politics — from nbcnews.com by Angela Yang and Daniele Hamamdjian; via The Rundown AI
Voters can talk to AI Steve, whose name will be on the ballot for the U.K.’s general election next month, to ask policy questions or raise concerns.

Commentary from The Rundown AI:

The Rundown: An AI-powered candidate named ‘AI Steve’ is running for U.K. Parliament in next month’s general election — creating polarizing questions around AI’s use in government affairs.

The details:

  • AI Steve is represented by businessman Steve Endacott and will appear as an independent candidate in the upcoming election.
  • Voters can interact with AI Steve online to ask policy questions and raise concerns or suggestions, which the AI will incorporate based on feedback.
  • If elected, Endacott will serve as AI Steve’s human proxy in Parliament, attending meetings and casting votes based on the AI’s constituent-driven platform.

Why it matters: The idea of an AI running for office might sound like a joke, but the tech behind it could actually help make our politicians more independent and (ironically) autonomous. AI-assisted governance is likely coming someday, but it’s probably still a bit too early to be taken seriously.

Also related, see:


From The Deep View:

The details: Hearing aids have employed machine learning algorithms for decades. But these algorithms historically have not been powerful enough to tackle the ‘cocktail party’ problem; they weren’t able to isolate a single voice in a loud, crowded room.

Dr. DeLiang Wang has been working on the problem for decades and has published numerous studies in recent years that explore the application of deep learning within hearing aids.

Last year, Google partnered up with a number of organizations to design personalized, AI-powered hearing aids.

Why it matters: Wang’s work has found that deep learning algorithms, running in real-time, could separate speech from background noises, “significantly” improving intelligibility in hearing-impaired people.
The tech is beginning to become publicly available, with brands like Phonak and Starkey leveraging deep learning and AI to enhance their hearing aids.

Channel 1 -- personalized global news network powered by generative AI

From DSC:
Hhhhhmmmmm……not sure yet that this is a good idea. But I doubt there’s any stopping it.

 



How AI ‘sees’ the world – what happened when we trained a deep learning model to identify poverty — from theconversation.com by Ola Hall, Hamid Sarmadi and Thorsteinn Rögnvaldsson

Recent advances in artificial intelligence (AI) have created a step change in how to measure poverty and other human development indicators. Our team has used a type of AI known as a deep convolutional neural network (DCNN) to study satellite imagery and identify some types of poverty with a level of accuracy close to that of household surveys.


E.U. reaches deal on landmark AI bill, racing ahead of U.S. — from washingtonpost.com by Anthony Faiola, Cat Zakrzewski and Beatriz Ríos (behind paywall)
The regulation paves the way for what could become a global standard to classify risk, enforce transparency and financially penalize tech companies for noncompliance.

European Union officials reached a landmark deal Friday on the world’s most ambitious law to regulate artificial intelligence, paving the way for what could become a global standard to classify risk, enforce transparency and financially penalize tech companies for noncompliance.

Along these lines, also see:


34 Big Ideas that will change our world in 2024 — from linkedin.com


Excerpts:

6. ChatGPT’s hype will fade, as a new generation of tailor-made bots rises up
11. We’ll finally turn the corner on teacher pay in 2024
21. Employers will combat job applicants’ use of AI with…more AI
31. Universities will view the creator economy as a viable career path

 

MIT Technology Review — Big problems that demand bigger energy. — from technologyreview.com by various

Technology is all about solving big thorny problems. Yet one of the hardest things about solving hard problems is knowing where to focus our efforts. There are so many urgent issues facing the world. Where should we even begin? So we asked dozens of people to identify what problem at the intersection of technology and society they think we should focus more of our energy on. We queried scientists, journalists, politicians, entrepreneurs, activists, and CEOs.

Some broad themes emerged: the climate crisis, global health, creating a just and equitable society, and AI all came up frequently. There were plenty of outliers, too, ranging from regulating social media to fighting corruption.

MIT Technology Review asked dozens of people to weigh in on the underserved issues at the intersection of technology and society.

 

10 Free AI Tools for Graphic Designing — from medium.com by Qz Ruslan

With the advancements in Artificial Intelligence (AI), designers now have access to a wide array of free AI-powered tools that streamline their creative process, enhance productivity, and add a touch of uniqueness to their designs. In this article, we will explore ten such free AI tool websites for graphic design that have revolutionized the way designers approach their craft.


Generative Art in Motion — from heatherbcooper.substack.com by Heather Cooper
Animation and video tools create an explosion of creative expression


World’s first AI cinema opening in Auckland to make all your Matrix fantasies come true — from stuff.co.nz by Jonny Mahon-Heap
Review: My HyperCinema experience was futuristic, sleek – and slightly insane as I became the star of my own show.


AI That Alters Voice and Imagery in Political Ads Will Require Disclosure on Google and YouTube — from usnews.com by Associated Press
Political ads using artificial intelligence on Google and YouTube must soon be accompanied by a prominent disclosure if imagery or sounds have been synthetically altered

Google will soon require that political ads using artificial intelligence be accompanied by a prominent disclosure if imagery or sounds have been synthetically altered.

AI-generated election ads on YouTube and other Google platforms that alter people or events must include a clear disclaimer located somewhere that users are likely to notice, the company said in an update this week to its political content policy.


 

A Guide to Finding Housing For The Previously Incarcerated — from todayshomeowner.com by Alexis Bennett & Alexis Curls

For many individuals stepping back into society after incarceration, finding a stable place to call home can be complicated. The reality is that those who have been previously incarcerated are almost 10 times more likely to face homelessness compared to the general public. With over 725,000 people leaving state and federal prisons each year, the quest for housing becomes not only a personal challenge but a broader societal concern. Stable housing is crucial for successful reintegration, providing a foundation for building a new chapter in life. In this article, we’ll shed light on the challenges and offer empowering resources for those on their journey to find housing after prison.

Table of Contents

  • Understanding the Housing Landscape
  • Utilizing Support Services
  • Creating a Housing Plan
  • Securing and Maintaining Housing
  • Continuing Personal Growth and Reintegration
  • Conclusion

From DSC:
I’m posting this in the hopes that this information may help someone out there. Also, my dad used to donate some of his time in retirement to an agency that helped people find housing. He mentioned numerous times how important it was for someone to have a safe place to stay that they could call their own.


 


ElevenLabs’ AI Voice Generator Can Now Fake Your Voice in 30 Languages — from gizmodo.com by Kyle Barr
ElevenLabs said its AI voice generator is out of beta, saying it would support video game and audiobook creators with cheap audio.

According to ElevenLabs, the new Multilingual v2 model promises it can produce “emotionally rich” audio in a total of 30 languages. The company offers two AI voice tools: one is a text-to-speech model and the other is the “VoiceLab” that lets paying users clone a voice by inputting fragments of their (or others’) speech into the model to create a kind of voice clone. With the v2 model, users can get these generated voices to start speaking in Greek, Malay, or Turkish.

Since then, ElevenLabs claims it has integrated new measures to ensure users can only clone their own voice. Users need to verify their speech with a text captcha prompt, which is then compared to the original voice sample.

From DSC:
I don’t care what they say regarding safeguards/proof of identity/etc. This technology has been abused and will be abused in the future. We can count on it. The question now is, how do we deal with it?



Google, Amazon, Nvidia and other tech giants invest in AI startup Hugging Face, sending its valuation to $4.5 billion — from cnbc.com by Kif Leswing

But Hugging Face produces a platform where AI developers can share code, models, data sets, and use the company’s developer tools to get open-source artificial intelligence models running more easily. In particular, Hugging Face often hosts weights, or large files with lists of numbers, which are the heart of most modern AI models.

While Hugging Face has developed some models, like BLOOM, its primary product is its website platform, where users can upload models and their weights. It also develops a series of software tools called libraries that allow users to get models working quickly, to clean up large datasets, or to evaluate their performance. It also hosts some AI models in a web interface so end users can experiment with them.


The global semiconductor talent shortage — from www2.deloitte.com
How to solve semiconductor workforce challenges

Numerous skills are required to grow the semiconductor ecosystem over the next decade. Globally, we will need tens of thousands of skilled tradespeople to build new plants to increase and localize manufacturing capacity: electricians, pipefitters, welders; thousands more graduate electrical engineers to design chips and the tools that make the chips; more engineers of various kinds in the fabs themselves, but also operators and technicians. And if we grow the back end in Europe and the Americas, that equates to even more jobs.

Each of these job groups has distinct training and educational needs; however, the number of students in semiconductor-focused programs (for example, undergraduates in semiconductor design and fabrication) has dwindled. Skills are also evolving within these job groups, in part due to automation and increased digitization. Digital skills, such as cloud, AI, and analytics, are needed in design and manufacturing more than ever.

The chip industry has long partnered with universities and engineering schools. Going forward, they also need to work more with local tech schools, vocational schools, and community colleges; and other organizations, such as the National Science Foundation in the United States.


Our principles for partnering with the music industry on AI technology — from blog.youtube (Google) by Neal Mohan, CEO, YouTube
AI is here, and we will embrace it responsibly together with our music partners.

  • Principle #1: AI is here, and we will embrace it responsibly together with our music partners.
  • Principle #2: AI is ushering in a new age of creative expression, but it must include appropriate protections and unlock opportunities for music partners who decide to participate.
  • Principle #3: We’ve built an industry-leading trust and safety organization and content policies. We will scale those to meet the challenges of AI.

Developers are now using AI for text-to-music apps — from techcrunch.com by Ivan Mehta

Brett Bauman, the developer of PlayListAI (previously LinupSupply), launched a new app called Songburst on the App Store this week. The app doesn’t have a steep learning curve. You just have to type in a prompt like “Calming piano music to listen to while studying” or “Funky beats for a podcast intro” to let the app generate a music clip.

If you can’t think of a prompt, the app offers prompts in different categories, including video, lo-fi, podcast, gaming, meditation and sample.


A Generative AI Primer — from er.educause.edu by Brian Basgen
Understanding the current state of technology requires understanding its origins. This reading list provides sources relevant to the form of generative AI that led to natural language processing (NLP) models such as ChatGPT.


Three big questions about AI and the future of work and learning — from workshift.opencampusmedia.org by Alex Swartsel
AI is set to transform education and work today and well into the future. We need to start asking tough questions right now, writes Alex Swartsel of JFF.

  1. How will AI reshape jobs, and how can we prepare all workers and learners with the skills they’ll need?
  2. How can education and workforce leaders equitably adopt AI platforms to accelerate their impact?
  3. How might we catalyze sustainable policy, practice, and investments in solutions that drive economic opportunity?

“As AI reshapes both the economy and society, we must collectively call for better data, increased accountability, and more flexible support for workers,” Swartsel writes.


The Current State of AI for Educators (August, 2023) — from drphilippahardman.substack.com by Dr. Philippa Hardman
A podcast interview with the University of Toronto on where we’re at & where we’re going.

 


How to spot deepfakes created by AI image generators | Can you trust your eyes? | The deepfake election — from axios.com by various; via Tom Barrett

As the 2024 campaign season begins, AI image generators have advanced from novelties to powerful tools able to generate photorealistic images, while comprehensive regulation lags behind.

Why it matters: As more fake images appear in political ads, the onus will be on the public to spot phony content.

Go deeper: Can you tell the difference between real and AI-generated images? Take our quiz:


4 Charts That Show Why AI Progress Is Unlikely to Slow Down — from time.com; with thanks to Donald Clark out on LinkedIn for this resource


The state of AI in 2023: Generative AI’s breakout year — from McKinsey.com

Table of Contents

  1. It’s early days still, but use of gen AI is already widespread
  2. Leading companies are already ahead with gen AI
  3. AI-related talent needs shift, and AI’s workforce effects are expected to be substantial
  4. With all eyes on gen AI, AI adoption and impact remain steady
  5. About the research

Top 10 Chief AI Officers — from aimagazine.com

The Chief AI Officer is a relatively new job role, yet becoming increasingly more important as businesses invest further into AI.

Now more than ever, the workplace must prepare for AI and the immense opportunities, as well as challenges, that this type of evolving technology can provide. This position makes its holder responsible for guiding companies through complex AI tools, algorithms and development. All of this works to ensure that the company stays ahead of the curve and capitalises on digital growth and transformation.


NVIDIA-related items

SIGGRAPH Special Address: NVIDIA CEO Brings Generative AI to LA Show — from blogs.nvidia.com by Brian Caulfield
Speaking to thousands of developers and graphics pros, Jensen Huang announces an updated GH200 Grace Hopper Superchip and NVIDIA AI Workbench, and updates NVIDIA Omniverse with generative AI.

The hottest commodity in AI right now isn’t ChatGPT — it’s the $40,000 chip that has sparked a frenzied spending spree — from businessinsider.com by Hasan Chowdhury

NVIDIA Releases Major Omniverse Upgrade with Generative AI and OpenUSD — from enterpriseai.news

Nvidia teams up with Hugging Face to offer cloud-based AI training — from techcrunch.com by Kyle Wiggers

Nvidia reveals new A.I. chip, says costs of running LLMs will ‘drop significantly’ — from cnbc.com by Kif Leswing

KEY POINTS

  • Nvidia announced a new chip designed to run artificial intelligence models on Tuesday.
  • Nvidia’s GH200 has the same GPU as the H100, Nvidia’s current highest-end AI chip, but pairs it with 141 gigabytes of cutting-edge memory, as well as a 72-core ARM central processor.
  • “This processor is designed for the scale-out of the world’s data centers,” Nvidia CEO Jensen Huang said Tuesday.

Nvidia Has A Monopoly On AI Chips … And It’s Only Growing — from theneurondaily.com by The Neuron

In layman’s terms: Nvidia is on fire, and they’re only turning up the heat.


AI-Powered War Machines: The Future of Warfare Is Here — from readwrite.com by Deanna Ritchie

The advancement of robotics and artificial intelligence (AI) has paved the way for a new era in warfare. Gone are the days of manned ships and traditional naval operations. Instead, the US Navy’s Task Force 59 is at the forefront of integrating AI and robotics into naval operations. With a fleet of autonomous robot ships, the Navy aims to revolutionize the way wars are fought at sea.

From DSC:
Crap. Ouch. Some things don’t seem to ever change. Few are surprised by this development…but still, this is a mess.


Sam Altman is already nervous about what AI might do in elections — from qz.com by Faustine Ngila; via Sam DeBrule
The OpenAI chief warned about the power of AI-generated media to potentially influence the vote

Altman, who has become the face of the recent hype cycle in AI development, feels that humans could be persuaded politically through conversations with chatbots or fooled by AI-generated media.


Your guide to AI: August 2023 — from nathanbenaich.substack.com by Nathan Benaich

Welcome to the latest issue of your guide to AI, an editorialized newsletter covering key developments in AI policy, research, industry, and startups. This special summer edition (while we’re producing the State of AI Report 2023!) covers our 7th annual Research and Applied AI Summit that we held in London on 23 June.

Below are some of our key takeaways from the event and all the talk videos can be found on the RAAIS YouTube channel here. If this piques your interest to join next year’s event, drop your details here.


Why generative AI is a game-changer for customer service workflows — from venturebeat.com via Superhuman

Gen AI, however, eliminates the lengthy search. It can parse a natural language query, synthesize the necessary information and serve up the answers the agent is looking for in a neatly summarized response, slashing call times dramatically.

BUT ALSO

Sam Altman: “AI Will Replace Customer Service Jobs First” — from theneurondaily.com

Excerpt:

Not only do its AI voices sound exactly like a human, but they can sound exactly like YOU.  All it takes is 6 (six!) seconds of your voice, and voila: it can replicate you saying any sentence in any tone, be it happy, sad, or angry.

The use cases are endless, but here are two immediate ones:

  1. Hyperpersonalized content.
    Imagine your favorite Netflix show but with every person hearing a slightly different script.
  2. Customer support agents. 
    We’re talking about ones that are actually helpful, a far cry from the norm!


AI has a Usability Problem — from news.theaiexchange.com
Why ChatGPT usage may actually be declining; using AI to become a spreadsheet pro

If you’re reading this and are using ChatGPT on a daily basis, congrats – you’re likely in the top few percent of users.

For everyone else – AI still has a major usability problem.

From DSC:
Agreed.



From the ‘godfathers of AI’ to newer people in the field: Here are 16 people you should know — and what they say about the possibilities and dangers of the technology. — from businessinsider.com by Lakshmi Varanasi


 

‘A second prison’: People face hidden dead ends when they pursue a range of careers post-incarceration — from hechingerreport.org by Tara Garcia Mathewson
Nearly 14,000 laws and regulations restrict people who have been convicted or even just arrested from getting professional licenses

For Wiese, it was all a big, expensive gamble — and, in one form or another, is one millions of people with criminal records take every year as they pursue education and workforce training on their way to jobs that require a license. Yet that effort might be wasted thanks to the nearly 14,000 laws and regulations that can restrict individuals with arrest and conviction histories from getting licensed in a given field.

Jesse Wiese served seven years in prison, but says that the barriers he found to working after leaving amount to a “second prison.” Credit: Noah Willman for the Hechinger Report


Also relevant/see:

We have moved from Human Teachers and Human Learners as a dyad to AI Teachers and AI Learners as a tetrad.


© 2024 | Daniel Christian