More people with disabilities are joining the U.S. workforce. Here’s why — and where they’re working. — from linkedin.com by Taylor Borden

“People with disabilities have been wanting to work, eager to work, capable of work,” he explained. “But it wasn’t until this huge change in the way we approached work that the opportunities presented themselves,” he continued.

People with disabilities have been entering the U.S. workforce at record levels over the past three years, the most recent Bureau of Labor Statistics data shows. A recent analysis from LinkedIn’s Economic Graph team suggests that more amenable company policies and working accommodations are contributing to this trend.

In May 2024, more than half (54.3%) of LinkedIn members who self-identified as having a disability applied for remote positions. What’s more? Since March 2021, members with disabilities have consistently accounted for a higher share of job applications to remote positions than members who report no disabilities.

Despite the opportunities created by the ADA — and the rise of remote work — many people with disabilities still face barriers in the workforce. LinkedIn’s data scientists and editors parsed the data to identify the most common roles for workers with disabilities, how those with disabilities are progressing in their careers and how employers can continue to support more inclusive hiring.

 

Instructure to be Acquired by KKR for $4.8 Billion — from prnewswire.com

SALT LAKE CITY, July 25, 2024 /PRNewswire/ — Instructure Holdings, Inc. (NYSE: INST) (“Instructure”), a leading learning ecosystem, today announced that it has entered into a definitive agreement to be acquired by investment funds managed by KKR, a leading global investment firm, for $23.60 per share in an all-cash transaction valued at an enterprise value of approximately $4.8 billion. The per-share purchase price represents a premium of 16 percent over Instructure’s unaffected share price of $20.27 as of May 17, 2024, the last trading day prior to media reports regarding a potential transaction. KKR, with participation from Dragoneer Investment Group, will acquire all outstanding shares, including those shares owned by Instructure’s existing majority owner, Thoma Bravo, a leading software investment firm, which took the company public in 2021.


Speaking of edtech-related vendors, also see:

 


“Who to follow in AI” in 2024? [Part I] — from ai-supremacy.com by Michael Spencer [some of this post is behind a paywall]
#1-20 [of 150] – I combed the internet and found the best sources of AI insights, education and articles. LinkedIn | Newsletters | X | YouTube | Substack | Threads | Podcasts

Also see:

Along these lines, also see:


AI In Medicine: 3 Future Scenarios From Utopia To Dystopia — from medicalfuturist.com by Andrea Koncz
There’s a vast difference between baseless fantasizing and realistic forward planning. Structured methodologies help us learn how to “dream well”.

Key Takeaways

  • We’re often told that daydreaming and envisioning the future is a waste of time. But this notion is misguided.
  • We all instinctively plan for the future in small ways, like organizing a trip or preparing for a dinner party. This same principle can be applied to larger-scale issues, and smart planning does bring better results.
  • We show you a method that allows us to think “well” about the future on a larger scale so that it better meets our needs.

Adobe Unveils Powerful New Innovations in Illustrator and Photoshop Unlocking New Design Possibilities for Creative Pros — from news.adobe.com

  • Latest Illustrator and Photoshop releases accelerate creative workflows, save pros time and empower designers to realize their visions faster
  • New Firefly-enabled features like Generative Shape Fill in Illustrator along with the Dimension Tool, Mockup, Text to Pattern, the Contextual Taskbar and performance enhancement tools accelerate productivity and free up time so creative pros can dive deeper into the parts of their work they love
  • Photoshop introduces the all-new Selection Brush Tool and the general availability of Generate Image, the Adjustment Brush Tool and other workflow enhancements, empowering creators to make complex edits and unique designs


Nike is using AI to turn athletes’ dreams into shoes — from axios.com by Ina Fried

Zoom in: Nike used genAI for ideation, including using a variety of prompts to produce images with different textures, materials and color to kick off the design process.

What they’re saying: “It’s a new way for us to work,” Nike lead footwear designer Juliana Sagat told Axios during a media tour of the showcase on Tuesday.
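Nike hasn’t published its tooling, but the ideation loop described above, sweeping prompts across textures, materials and colors to generate starting-point imagery, can be sketched with any text-to-image API. Here is a minimal, hypothetical example using the OpenAI Python SDK; the model choice, prompt wording and attribute lists are assumptions, not Nike’s workflow:

```python
# Hypothetical ideation loop: sweep texture/color variations through a
# text-to-image API to produce concept imagery. Not Nike's actual pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

textures = ["woven mesh", "molded foam"]  # assumed attribute lists
colors = ["volt yellow", "deep navy"]

for texture in textures:
    for color in colors:
        prompt = (
            f"Concept render of a running-shoe upper in {texture}, "
            f"primary color {color}, studio lighting, product-design sketch"
        )
        result = client.images.generate(
            model="dall-e-3",   # assumption: any text-to-image model would do
            prompt=prompt,
            size="1024x1024",
            n=1,                # dall-e-3 returns one image per request
        )
        print(prompt, "->", result.data[0].url)
```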


AI meets ‘Do no harm’: Healthcare grapples with tech promises — from finance.yahoo.com by Maya Benjamin

Major companies are moving at high speed to capture the promises of artificial intelligence in healthcare while doctors and experts attempt to integrate the technology safely into patient care.

“Healthcare is probably the most impactful utility of generative AI that there will be,” declared Kimberly Powell, vice president of healthcare at AI hardware giant Nvidia (NVDA), at the company’s AI Summit in June. Nvidia has partnered with Roche’s Genentech (RHHBY) to enhance drug discovery in the pharmaceutical industry, among other investments in healthcare companies.


Mistral reignites this week’s LLM rivalry with Large 2 (source) — from superhuman.ai

Today, we are announcing Mistral Large 2, the new generation of our flagship model. Compared to its predecessor, Mistral Large 2 is significantly more capable in code generation, mathematics, and reasoning. It also provides much stronger multilingual support and advanced function-calling capabilities.


Meta releases the biggest and best open-source AI model yet — from theverge.com by Alex Heath
Llama 3.1 outperforms OpenAI and other rivals on certain benchmarks. Now, Mark Zuckerberg expects Meta’s AI assistant to surpass ChatGPT’s usage in the coming months.

Back in April, Meta teased that it was working on a first for the AI industry: an open-source model with performance that matched the best private models from companies like OpenAI.

Today, that model has arrived. Meta is releasing Llama 3.1, the largest-ever open-source AI model, which the company claims outperforms GPT-4o and Anthropic’s Claude 3.5 Sonnet on several benchmarks. It’s also making the Llama-based Meta AI assistant available in more countries and languages while adding a feature that can generate images based on someone’s specific likeness. CEO Mark Zuckerberg now predicts that Meta AI will be the most widely used assistant by the end of this year, surpassing ChatGPT.


4 ways to boost ChatGPT — from wondertools.substack.com by Jeremy Caplan & The PyCoach
Simple tactics for getting useful responses

To help you make the most of ChatGPT, I’ve invited & edited today’s guest post from the author of a smart AI newsletter called The Artificial Corner. I appreciate how Frank Andrade pushes ChatGPT to produce better results with four simple, clever tactics. He offers practical examples to help us all use AI more effectively.

Frank Andrade: Most of us fail to make the most of ChatGPT.

  1. We omit examples in our prompts.
  2. We fail to assign roles to ChatGPT to guide its behavior.
  3. We let ChatGPT guess instead of providing it with clear guidance.

If you rely on vague prompts, learning how to create high-quality instructions will get you better results. It’s a skill often referred to as prompt engineering. Here are several techniques to get you to the next level.
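To make those tactics concrete, here is a minimal sketch (mine, not the newsletter’s) using the OpenAI Python SDK. The model name, role and example wording are assumptions chosen purely for illustration:

```python
# Illustrative prompt built around the tactics above: include an example,
# assign a role, and give clear guidance instead of letting the model guess.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

system_prompt = (
    "You are a senior editor at a technology newsletter. "              # tactic: assign a role
    "Rewrite the user's draft headline to be specific, concrete, and "  # tactic: clear guidance
    "under 12 words."
)

user_prompt = (
    "Example input: 'AI news today' -> example output: "
    "'OpenAI's CriticGPT catches bugs that human reviewers miss'.\n"    # tactic: provide an example
    "Now rewrite this draft: 'New AI model is really good'."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any chat-capable model works here
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
)

print(response.choices[0].message.content)
```

Sending the same request without the role, the example and the constraints is the vague-prompt failure mode the post describes.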

 

The race to deploy GenAI in the legal sector — from sifted.eu by Kai Nicol-Schwarz
LegalFly’s €15m Series A is the latest in a string of raises for European GenAI legaltech startups

Speak to any lawyer and you’ll soon discover that the job is a far cry from the fevered excitement of a courtroom drama. Behind the scenes, there’s an endless number of laborious tasks, such as drafting, reviewing and negotiating contracts and other legal documents, that have to be done manually every day.

It was this realisation that led four product managers at dating app giant Tinder, frustrated by what they saw as a lack of AI adoption at the company, to jump ship and found Belgium-based LegalFly last year. The startup is building a generative AI copilot for lawyers which eventually, it says, will be able to automate entire workflows in the legal profession.

“We were looking at what GenAI was good at, which is synthesising data and generating content,” says founder and CEO Ruben Miessen. “What industry works like that? Law, and it does it all in a very manual way.”

“The legal industry is a global behemoth that’s seen minimal innovation since the advent of Microsoft Word in the 90s,” says Carina Namih, partner at Plural. “GenAI — especially with a human in the loop to keep accuracy high — is ideally suited to drafting, editing and negotiating legal documents.”


Legal Technology Company Relativity Announces OpenAI ChatGPT Integration — from lawfuel.com

CHICAGO– July 18 – Relativity, a global legal technology company, today announced it is integrating with OpenAI’s ChatGPT Enterprise Compliance API. The integration adds ChatGPT Enterprise as a Collect in RelativityOne data source, allowing users to seamlessly collect and process human-to-AI conversational data.

“The future around human and AI interaction is changing rapidly, calling for innovative legal data management solutions to include novel data sources, such as conversations with AI agents,” said Chris Brown, Chief Product Officer at Relativity. “In answering that call, we are committed to equipping our community with the tools they need to traverse the evolving future of human-to-AI conversational data and putting users in control of this new data landscape.”

 

OpenAI illegally barred staff from airing safety risks, whistleblowers say — from washingtonpost.com by Pranshu Verma, Cat Zakrzewski, and Nitasha Tiku
In a letter exclusively obtained by The Washington Post, whistleblowers asked the SEC to probe company’s allegedly restrictive non-disclosure agreements

OpenAI whistleblowers have filed a complaint with the Securities and Exchange Commission alleging the artificial intelligence company illegally prohibited its employees from warning regulators about the grave risks its technology may pose to humanity, calling for an investigation.

The whistleblowers said OpenAI issued its employees overly restrictive employment, severance and nondisclosure agreements that could have led to penalties against workers who raised concerns about OpenAI to federal regulators, according to a seven-page letter sent to the SEC commissioner earlier this month that referred to the formal complaint. The letter was obtained exclusively by The Washington Post.

 

Bill Gates Reveals Superhuman AI Prediction — from youtube.com by Rufus Griscom, Bill Gates, Andy Sack, and Adam Brotman

In this episode of the Next Big Idea podcast, host Rufus Griscom and Bill Gates are joined by Andy Sack and Adam Brotman, co-authors of an exciting new book called “AI First.” Together, they consider AI’s impact on healthcare, education, productivity, and business. They dig into the technology’s risks. And they explore its potential to cure diseases, enhance creativity, and usher in a world of abundance.

Key moments:

00:05 Bill Gates discusses AI’s transformative potential in revolutionizing technology.
02:21 Superintelligence is inevitable and marks a significant advancement in AI technology.
09:23 Future AI may integrate deeply as cognitive assistants in personal and professional life.
14:04 AI’s metacognitive advancements could revolutionize problem-solving capabilities.
21:13 AI’s next frontier lies in developing human-like metacognition for sophisticated problem-solving.
27:59 AI advancements empower both good and malicious intents, posing new security challenges.
28:57 Rapid AI development raises questions about controlling its global application.
33:31 Productivity enhancements from AI can significantly improve efficiency across industries.
35:49 AI’s future applications in consumer and industrial sectors are subjects of ongoing experimentation.
46:10 AI democratization could level the economic playing field, enhancing service quality and reducing costs.
51:46 AI plays a role in mitigating misinformation and bridging societal divides through enhanced understanding.


OpenAI Introduces CriticGPT: A New Artificial Intelligence AI Model based on GPT-4 to Catch Errors in ChatGPT’s Code Output — from marktechpost.com

The team has summarized their primary contributions as follows.

  1. The team has offered the first instance of a simple, scalable oversight technique that greatly assists humans in more thoroughly detecting problems in real-world RLHF data.
  2. Within the ChatGPT and CriticGPT training pools, the team has discovered that critiques produced by CriticGPT catch more inserted bugs and are preferred over those written by human contractors.
  3. Compared to human contractors working alone, this research indicates that teams consisting of critic models and human contractors generate more thorough critiques. Compared to reviews generated exclusively by models, this partnership also lowers the incidence of hallucinations.
  4. This study introduces Force Sampling Beam Search (FSBS), an inference-time sampling and scoring technique that balances the trade-off between minimizing bogus concerns and discovering genuine faults in LLM-generated critiques (see the toy sketch below).
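The excerpt doesn’t spell out FSBS, but the trade-off it names, finding more genuine faults without multiplying bogus concerns, can be illustrated with a toy best-of-n selection step. Everything below (the function names, the lambda weight, the scoring rule) is a simplified assumption for illustration, not OpenAI’s implementation:

```python
# Toy illustration of the comprehensiveness-vs-precision trade-off that FSBS is
# described as balancing. All names and the scoring rule are simplified
# assumptions; this is not OpenAI's implementation.
from typing import Callable, List


def select_critique(
    sample_critique: Callable[[], str],    # draws one candidate critique from a critic model (placeholder)
    reward_score: Callable[[str], float],  # reward-model estimate of critique quality (placeholder)
    count_issues: Callable[[str], int],    # number of distinct problems the critique flags (placeholder)
    n_samples: int = 8,
    lam: float = 0.5,                      # higher values favor longer, more exhaustive critiques
) -> str:
    """Sample several candidate critiques and keep the one with the best combined score."""
    candidates: List[str] = [sample_critique() for _ in range(n_samples)]
    # Reward quality, plus a bonus for flagging more issues. Raising `lam`
    # surfaces more real bugs but risks more spurious complaints; lowering it
    # does the reverse. That is the trade-off described in the excerpt.
    return max(candidates, key=lambda c: reward_score(c) + lam * count_issues(c))
```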

Character.AI now allows users to talk with AI avatars over calls — from techcrunch.com by Ivan Mehta

a16z-backed Character.AI said today that it is now allowing users to talk to AI characters over calls. The feature currently supports multiple languages, including English, Spanish, Portuguese, Russian, Korean, Japanese and Chinese.

The startup tested the calling feature ahead of today’s public launch. During that time, it said that more than 3 million users had made over 20 million calls. The company also noted that calls with AI characters can be useful for practicing language skills, giving mock interviews, or adding them to the gameplay of role-playing games.


Google Translate Just Added 110 More Languages — from lifehacker.com by
You can now use the app to communicate in languages you’ve never even heard of.

Google Translate can come in handy when you’re traveling or communicating with someone who speaks another language, and thanks to a new update, you can now connect with some 614 million more people. Google is adding 110 new languages to its Translate tool using its AI PaLM 2 large language model (LLM), which brings the total number of supported languages to nearly 250. This follows the 24 languages added in 2022, including Indigenous languages of the Americas as well as those spoken across Africa and central Asia.




Listen to your favorite books and articles voiced by Judy Garland, James Dean, Burt Reynolds and Sir Laurence Olivier — from elevenlabs.io
ElevenLabs partners with estates of iconic stars to bring their voices to the Reader App

 

Top 10 Emerging Technologies of 2024 — from weforum.org by the World Economic Forum

The Top 10 Emerging Technologies report is a vital source of strategic intelligence. First published in 2011, it draws on insights from scientists, researchers and futurists to identify 10 technologies poised to significantly influence societies and economies. These emerging technologies are disruptive, attractive to investors and researchers, and expected to achieve considerable scale within five years. This edition expands its analysis by involving over 300 experts from the Forum’s Global Future Councils and a global network of over 2,000 chief editors worldwide from top institutions, through Frontiers, a leading publisher of academic research.

 

Latent Expertise: Everyone is in R&D — from oneusefulthing.org by Ethan Mollick
Ideas come from the edges, not the center

Excerpt (emphasis DSC):

And to understand the value of AI, they need to do R&D. Since AI doesn’t work like traditional software, but more like a person (even though it isn’t one), there is no reason to suspect that the IT department has the best AI prompters, nor that it has any particular insight into the best uses of AI inside an organization. IT certainly plays a role, but the actual use cases will come from workers and managers who find opportunities to use AI to help them with their job. In fact, for large companies, the source of any real advantage in AI will come from the expertise of their employees, which is needed to unlock the expertise latent in AI.


OpenAI’s former chief scientist is starting a new AI company — from theverge.com by Emma Roth
Ilya Sutskever is launching Safe Superintelligence Inc., an AI startup that will prioritize safety over ‘commercial pressures.’

Ilya Sutskever, OpenAI’s co-founder and former chief scientist, is starting a new AI company focused on safety. In a post on Wednesday, Sutskever revealed Safe Superintelligence Inc. (SSI), a startup with “one goal and one product:” creating a safe and powerful AI system.

Ilya Sutskever Has a New Plan for Safe Superintelligence — from bloomberg.com by Ashlee Vance (behind a paywall)
OpenAI’s co-founder discloses his plans to continue his work at a new research lab focused on artificial general intelligence.

Safe Superintelligence — from theneurondaily.com by Noah Edelman

Ilya Sutskever is kind of a big deal in AI, to put it lightly.

Part of OpenAI’s founding team, Ilya was Chief Scientist (read: genius) before being part of the coup that fired Sam Altman.

Yesterday, Ilya announced that he’s forming a new initiative called Safe Superintelligence.

If AGI = AI that can perform a wide range of tasks at our level, then Superintelligence = an even more advanced AI that surpasses human capabilities in all areas.


AI is exhausting the power grid. Tech firms are seeking a miracle solution. — from washingtonpost.com by Evan Halper and Caroline O’Donovan
As power needs of AI push emissions up and put big tech in a bind, companies put their faith in elusive — some say improbable — technologies.

As the tech giants compete in a global AI arms race, a frenzy of data center construction is sweeping the country. Some computing campuses require as much energy as a modest-sized city, turning tech firms that promised to lead the way into a clean energy future into some of the world’s most insatiable guzzlers of power. Their projected energy needs are so huge that some worry there won’t be enough electricity to meet them from any source.


Microsoft, OpenAI, Nvidia join feds for first AI attack simulation — from axios.com by Sam Sabin

Federal officials, AI model operators and cybersecurity companies ran the first joint simulation of a cyberattack involving a critical AI system last week.

Why it matters: Responding to a cyberattack on an AI-enabled system will require a different playbook than the typical hack, participants told Axios.

The big picture: Both Washington and Silicon Valley are attempting to get ahead of the unique cyber threats facing AI companies before they become more prominent.


Hot summer of AI video: Luma & Runway drop amazing new models — from heatherbcooper.substack.com by Heather Cooper
Plus an amazing FREE video to sound app from ElevenLabs

Immediately after we saw Sora-like videos from KLING, Luma AI’s Dream Machine video results overshadowed them.

Dream Machine is a next-generation AI video model that creates high-quality, realistic shots from text instructions and images.


Introducing Gen-3 Alpha — from runwayml.com by Anastasis Germanidis
A new frontier for high-fidelity, controllable video generation.


AI-Generated Movies Are Around the Corner — from news.theaiexchange.com by The AI Exchange
The future of AI in filmmaking; participate in our AI for Agencies survey

AI-Generated Feature Films Are Around the Corner.
We predict feature-length AI-generated films are coming by the end of 2025, if not sooner.

Don’t believe us? You need to check out Runway ML’s new Gen-3 model they released this week.

They’re not the only ones. We also have Pika, which just raised $80M. And Google’s Veo. And OpenAI’s Sora. (+ many others)

 

2024 Global Skills Report -- from Coursera

  • AI literacy emerges as a global imperative
  • AI readiness initiatives drive emerging skill adoption across regions
  • The digital skills gap persists in a rapidly evolving job market
  • Cybersecurity skills remain crucial amid talent shortages and evolving threats
  • Micro-credentials are a rapid pathway for learners to prepare for in-demand jobs
  • The global gender gap in online learning continues to narrow, but regional disparities persist
  • Different regions prioritize different skills, but the majority focus on emerging or foundational capabilities

You can use the Global Skills Report 2024 to:

  • Identify critical skills for your students to strengthen employability
  • Align curriculum to drive institutional advantage nationally
  • Track emerging skill trends like GenAI and cybersecurity
  • Understand entry-level and digital role skill trends across six regions
 

Daniel Christian: My slides for the Educational Technology Organization of Michigan’s Spring 2024 Retreat

From DSC:
Last Thursday, I presented at the Educational Technology Organization of Michigan’s Spring 2024 Retreat. I wanted to pass along my slides to you all, in case they are helpful to you.

Topics/agenda:

  • Topics & resources re: Artificial Intelligence (AI)
    • Top multimodal players
    • Resources for learning about AI
    • Applications of AI
    • My predictions re: AI
  • The powerful impact of pursuing a vision
  • A potential, future next-gen learning platform
  • Share some lessons from my past with pertinent questions for you all now
  • The significant impact of an organization’s culture
  • Bonus material: Some people to follow re: learning science and edtech

 

Education Technology Organization of Michigan -- ETOM -- Spring 2024 Retreat on June 6-7

PowerPoint slides of Daniel Christian's presentation at ETOM

Slides of the presentation (.PPTX)
Slides of the presentation (.PDF)

 


Plus several more slides re: this vision.

 

AI Policy 101: a Beginners’ Framework — from drphilippahardman.substack.com by Dr. Philippa Hardman
How to make a case for AI experimentation & testing in learning & development


6 AI Tools Recommended By Teachers That Aren’t ChatGPT — from forbes.com by Dan Fitzpatrick

Here are six AI tools making waves in classrooms worldwide:

  • Brisk Teaching
  • SchoolAI
  • Diffit
  • Curipod
  • Skybox by Blockade Labs in ThingLink
  • Ideogram

With insights from educators who are leveraging their potential, let’s explore them in more detail.


AI Is Speeding Up L&D But Are We Losing the Learning? — from learningguild.com by Danielle Wallace

The role of learning & development
Given these risks, what can L&D professionals do to ensure generative AI contributes to effective learning? The solution lies in embracing the role of trusted learning advisors, guiding the use of AI tools in a way that prioritizes learning outcomes over speed alone. Here are three key steps to achieve this:

1. Playtest and Learn About AI
2. Set the Direction for AI to Be Learner-Centered…
3. Become Trusted Learning Advisors…


Some other tools to explore:

Descript: If you can edit text, you can edit videos. — per Bloomberg’s Vlad Savov
Descript is the AI-powered, fully featured, end-to-end video editor that you already know how to use.

A video editor that works like docs and slides
No need to learn a new tool — Descript works like the tools you’ve already learned.

Audeze | Filter — per Bloomberg’s Vlad Savov


AI Chatbots in Schools: Findings from a Poll of K-12 Teachers, Students, Parents, and College Undergraduates — from Impact Research; via Michael Spencer and Lily Lee

Key Findings

  • In the last year, AI has become even more intertwined with our education system. More teachers, parents, and students are aware of it and have used it themselves on a regular basis. It is all over our education system today.
  • While negative views of AI have crept up over the last year, students, teachers, and parents feel very positive about it in general. On balance they see positive uses for the technology in school, especially if they have used it themselves.
  • Most K-12 teachers, parents, and students don’t think their school is doing much about AI, despite its widespread use. Most say their school has no policy on it, is doing nothing to offer desired teacher training, and isn’t meeting the demand of students who’d like a career in a job that will need AI.
  • The AI vacuum in school policy means it is currently used “unauthorized,” even though people want policies that encourage AI. Kids, parents, and teachers are figuring it out on their own, without express permission, whereas all stakeholders would rather have a policy that explicitly encourages AI from a thoughtful foundation.

The Value of AI in Today’s Classrooms — from waltonfamilyfoundation.org

There is much discourse about the rise and prevalence of AI in education and beyond. These debates often lack the perspectives of key stakeholders – parents, students and teachers.

In 2023, the Walton Family Foundation commissioned the first national survey of teacher and student attitudes toward ChatGPT. The findings showed that educators and students embrace innovation and are optimistic that AI can meaningfully support traditional instruction.

A new survey conducted May 7-15, 2024, showed that knowledge of and support for AI in education is growing among parents, students and teachers. More than 80% of each group says it has had a positive impact on education.

 

 

A Right to Warn about Advanced Artificial Intelligence — from righttowarn.ai

We are current and former employees at frontier AI companies, and we believe in the potential of AI technology to deliver unprecedented benefits to humanity.

We also understand the serious risks posed by these technologies. These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction. AI companies themselves have acknowledged these risks [1, 2, 3], as have governments across the world [4, 5, 6] and other AI experts [7, 8, 9].

We are hopeful that these risks can be adequately mitigated with sufficient guidance from the scientific community, policymakers, and the public. However, AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this.

 

Can Microsoft Copilot Replace Popular AI Tools Like ChatGPT, Gamma AI, and Midjourney? — from flexos.work by Daan van Rossum
Can Microsoft Copilot win against popular AI tools like ChatGPT, Gamma AI, and Midjourney, and which AI best fits your business?

From DSC:
The article talks about the pros and cons of Microsoft Copilot. But I really appreciated the following table/information:


Also regarding Microsoft and AI, see:

Windows Recall stores all your history UNENCRYPTED. — from bensbites.beehiiv.com by Ben Tossell

Remember Microsoft’s shiny new AI tool, “Recall”? It’s like your personal time machine, answering questions about your browsing history and laptop activity by taking screenshots every 5 seconds. Sounds cool, right? Well, it gets problematic.

What’s going on here?
Security researchers have found a potential privacy nightmare lurking within this seemingly convenient tool.

What does this mean?
Recall stores all those screenshots in an unencrypted database on your laptop. This means anyone with access to your device could potentially see everything you’ve been doing. Cybersecurity experts are already comparing it to spyware, and one ethical hacker even built a tool called “TotalRecall” (yes, like the movie) that can pull all the information Recall saves. Yikes.
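Security researchers reported that Recall’s index is an ordinary SQLite file, which is what makes “unencrypted” so consequential: any process running as the same user can read it with nothing but the standard library. Here is a minimal sketch of that kind of inspection; the filename is a hypothetical placeholder, and this is not the TotalRecall tool:

```python
# Minimal sketch: confirm a file is a plain (unencrypted) SQLite database and
# list its tables. The filename is a hypothetical placeholder, not Recall's
# actual path; this is not the TotalRecall tool.
import sqlite3
from pathlib import Path

db_path = Path("recall_copy.db")  # hypothetical: a copy of the database under inspection

# An unencrypted SQLite file starts with the 16-byte header "SQLite format 3\0".
if db_path.read_bytes()[:16].startswith(b"SQLite format 3"):
    print(f"{db_path} is an ordinary, unencrypted SQLite database.")
    with sqlite3.connect(db_path) as conn:
        tables = conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'"
        ).fetchall()
        print("Tables readable without any key or password:", tables)
```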

 
© 2024 | Daniel Christian