Technology is all about solving big thorny problems. Yet one of the hardest things about solving hard problems is knowing where to focus our efforts. There are so many urgent issues facing the world. Where should we even begin? So we asked dozens of people to identify the problem at the intersection of technology and society that they think deserves more of our energy. We queried scientists, journalists, politicians, entrepreneurs, activists, and CEOs.
Some broad themes emerged: the climate crisis, global health, creating a just and equitable society, and AI all came up frequently. There were plenty of outliers, too, ranging from regulating social media to fighting corruption.
Universities Can’t Accommodate All the Computer Science Majors — from insidehighered.com by Johanna Alonso
High interest in the field has led to overcrowded classes and other issues. Now some institutions are adding requirements to help force students out of the major.
Before this year, if you wanted to major in computer science at the University of Michigan, your only barrier was getting accepted to the university.
But a new model requires all students who want to study computer science—whether they are incoming or already enrolled—to apply for the major separately.
Michael Wellman, Michigan’s chair of computer science and engineering, said that the university has worked for years to try to accommodate everyone who wants to study the subject, hiring as many as six faculty members annually in recent years and even building a new computer science facility. The number of CS degrees awarded rose from 132 in 2012 to 600 in 2022.
While federal law mandates public schools provide an appropriate education to students with disabilities, it’s often up to parents to enforce it.
Schwarten did what few people have the resources to do: she hired a lawyer and requested a due process hearing. It’s like a court case. And it’s intended to resolve disputes between families and schools over special education services.
It’s also a traumatic and adversarial process for families and schools that can rack up hundreds of thousands of dollars in legal fees and destroy relationships between parents and district employees. And even when families win, children don’t always get the public education they deserve.
But computer science lessons like the ones at Dzantik’i Heeni Middle School are relatively rare. Despite calls from major employers and education leaders to expand K-12 computer science instruction in response to the workforce’s increasing reliance on digital technology, access to the subject remains low — particularly for Native American students.
Only 67 percent of Native American students attend a school that offers a computer science course, the lowest percentage of any demographic group, according to a new study from the nonprofit Code.org. A recent report from the Kapor Foundation and the American Indian Science and Engineering Society, or AISES, takes a deep look at why Native students’ access to computer and technology courses in K-12 is so low, and examines the consequences.
Understanding the Disconnect
We often find ourselves in professional development sessions that starkly contrast with the interactive and student-centred learning environments we create. We sit as passive recipients rather than active participants, receiving generic content that seldom addresses our unique experiences or teaching challenges.
This common scenario highlights a significant gap in professional development: the failure to apply the principles of adult learning, or andragogy, which acknowledges that educators, like their students, benefit from a learning process that is personalised, engaging, and relevant.
The irony is palpable — while we foster environments of inquiry and engagement in our classrooms, our learning experiences often lack these elements.
The disconnect prompts a vital question: If we are to cultivate a culture of lifelong learning among our students, shouldn’t we also embody this within our professional growth? It’s time for the professional development of educators to reflect the principles we hold dear in our teaching practices.
Today, we shared dozens of new additions and improvements, and reduced pricing across many parts of our platform. These include:
New GPT-4 Turbo model that is more capable, cheaper and supports a 128K context window
New Assistants API that makes it easier for developers to build their own assistive AI apps that have goals and can call models and tools
New multimodal capabilities in the platform, including vision, image creation (DALL·E 3), and text-to-speech (TTS)
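The cheaper prices and larger context window change what a single request can do. Here is a rough, illustrative cost sketch using the per-1K-token prices as announced at the time (these figures may be outdated; verify against OpenAI's current pricing page before relying on them):

```python
# Illustrative cost comparison using per-1K-token prices (input, output)
# in USD as announced at DevDay. Verify against current pricing.
PRICES = {
    "gpt-4": (0.03, 0.06),        # 8K-context GPT-4
    "gpt-4-turbo": (0.01, 0.03),  # 128K-context GPT-4 Turbo
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    in_price, out_price = PRICES[model]
    return (input_tokens / 1000) * in_price + (output_tokens / 1000) * out_price

# A 100K-token prompt fits in Turbo's 128K window but not in GPT-4's 8K one.
print(round(estimate_cost("gpt-4-turbo", 100_000, 1_000), 2))  # 1.03
```

The same request is several times cheaper on the Turbo model, which is the practical meaning of the "reduced pricing" claim above.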
Introducing GPTs — from openai.com
You can now create custom versions of ChatGPT that combine instructions, extra knowledge, and any combination of skills.
I’m genuinely blown away by this.
The leap from text descriptions straight to 3D models? It’s next-level.
Think about the possibility: a stream of prompts turns into a treasure trove of 3D pieces. Gather them, and you’ve got a full scene ready to come to life.
OpenAI’s New Groundbreaking Update — from newsletter.thedailybite.co
Everything you need to know about OpenAI’s update, what people are building, and a prompt to skim long YouTube videos…
But among all this exciting news, the announcement of user-created “GPTs” took the cake.
That’s right, your very own personalized version of ChatGPT is coming, and it’s as groundbreaking as it sounds.
OpenAI’s groundbreaking announcement isn’t just a new feature – it’s a personal AI revolution.
The upcoming customizable “GPTs” transform ChatGPT from a one-size-fits-all to a one-of-a-kind digital sidekick that is attuned to your life’s rhythm.
First, Elon Musk announced “Grok,” a ChatGPT competitor inspired by “The Hitchhiker’s Guide to the Galaxy.” Surprisingly, in just a few months, xAI has managed to surpass the capabilities of GPT-3.5, signaling their impressive speed of execution and establishing them as a formidable long-term contender.
Then, OpenAI hosted their inaugural Dev Day, unveiling “GPT-4 Turbo,” which boasts a 128K context window, API costs cut threefold, text-to-speech capabilities, auto-model switching, agents, and even their version of an app store slated for launch next month.
The Day That Changed Everything — from joinsuperhuman.ai by Zain Kahn
ALSO: Everything you need to know about yesterday’s OpenAI announcements
OpenAI DevDay Part I: Custom ChatGPTs and the App Store of AI
OpenAI DevDay Part II: GPT-4 Turbo, Assistants, APIs, and more
ChatGPT reached 100 million users faster than any other app. By February 2023, the chat.openai.com website saw an average of 25 million daily visitors. How can this rise in AI usage benefit your business?
45% of executives say the popularity of ChatGPT has led them to increase investment in AI. If executives are investing in AI personally, then how will their beliefs affect corporate investment in AI to drive automation further? Also, how will this affect the amount of workers hired to manage AI systems within companies?
eMarketer predicts that in 2024 at least 20% of Americans will use ChatGPT monthly, and that a fifth of those users will be 25-to-34-year-olds in the workforce. Does this mean that more young workers are using AI?
It turns out that Willison’s experience is far from unique. Others have been spending hours talking to ChatGPT using its voice recognition and voice synthesis features, sometimes through car connections. The realistic nature of the voice interaction feels largely effortless, but it’s not flawless. Sometimes, it has trouble in noisy environments, and there can be a pause between statements. But the way the ChatGPT voices simulate vocal tics and noises feels very human. “I’ve been using the voice function since yesterday and noticed that it makes breathing sounds when it speaks,” said one Reddit user. “It takes a deep breath before starting a sentence. And today, actually a minute ago, it coughed between words while answering my questions.”
From DSC: Hmmmmmmm….I’m not liking the sound of this on my initial take of it. But perhaps there are some real positives to this. I need to keep an open mind.
Conversational Prompting [From DSC: i.e., keep it simple]
Structured Prompting
For most people, [Conversational Prompting] is good enough to get started, and it is the technique I use most of the time when working with AI. Don’t overcomplicate things, just interact with the system and see what happens. After you have some experience, however, you may decide that you want to create prompts you can share with others, prompts that incorporate your expertise. We call this approach Structured Prompting, and, while improving AIs may make it irrelevant soon, it is currently a useful tool for helping others by encoding your knowledge into a prompt that anyone can use.
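One way to encode expertise in a reusable form is to treat the Structured Prompt as a fill-in-the-blanks template that anyone can complete and paste into an AI chat. A minimal sketch follows; the field names (role, task, output format) are illustrative choices, not a standard from the article:

```python
# A minimal sketch of a Structured Prompt: expertise encoded once as a
# reusable template, so others can fill in the blanks and use it.
# The field names here are illustrative, not an established standard.
from string import Template

STRUCTURED_PROMPT = Template("""\
Role: You are $role.
Task: $task
Constraints:
- Ask me one question at a time before answering.
- State the assumption behind each recommendation.
Output format: $output_format""")

prompt = STRUCTURED_PROMPT.substitute(
    role="an experienced instructional designer",
    task="Critique the draft lesson plan I paste below.",
    output_format="a numbered list of at most five suggestions",
)
print(prompt)
```

The point is not the code itself but the shift it represents: conversational prompting improvises each time, while a structured prompt captures your expertise once so anyone can reuse it.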
These fake images reveal how AI amplifies our worst stereotypes — from washingtonpost.com by Nitasha Tiku, Kevin Schaul, and Szu Yu Chen (behind paywall)
AI image generators like Stable Diffusion and DALL-E amplify bias in gender and race, despite efforts to detoxify the data fueling these results.
Artificial intelligence image tools have a tendency to spin up disturbing clichés: Asian women are hypersexual. Africans are primitive. Europeans are worldly. Leaders are men. Prisoners are Black.
These stereotypes don’t reflect the real world; they stem from the data that trains the technology. Grabbed from the internet, these troves can be toxic — rife with pornography, misogyny, violence and bigotry.
Abeba Birhane, senior advisor for AI accountability at the Mozilla Foundation, contends that the tools can be improved if companies work hard to improve the data — an outcome she considers unlikely. In the meantime, the impact of these stereotypes will fall most heavily on the same communities harmed during the social media era, she said, adding: “People at the margins of society are continually excluded.”
ChatGPT, the AI-powered chatbot from OpenAI, far outpaces all other AI chatbot apps on mobile devices in terms of downloads. Surprisingly, however, it’s not the top AI app by revenue — several photo AI apps and even other AI chatbots are actually making more money than ChatGPT, despite the latter having become a household name for an AI chat experience.
According to new reports, OpenAI has begun rolling out a more streamlined approach to how people use ChatGPT. The new system will allow the AI to choose a model automatically, letting you run Python code, open a web browser, or generate images with DALL-E without extra interaction. Additionally, ChatGPT will now let you upload and analyze files.
Background (emphasis DSC): 60 Minutes did an interview with ‘the Godfather of AI’, Geoffrey Hinton. In response, Gary Marcus wrote a column in which he inserted his own set of responses into the transcript, as though he were a panel participant. Neat idea. So, of course, I’m stealing it, and in what follows, I insert my own comments as I join the 60 Minutes panel with Geoffrey Hinton and Gary Marcus.
Usually I put everyone else’s text in italics, but for this post I’ll put it all in normal font, to keep the format consistent.
OpenAI, the company behind the viral conversational AI ChatGPT, is experiencing explosive revenue growth. The Information reports that CEO Sam Altman told the staff this week that OpenAI’s revenue is now crossing $1.3 billion on an annualized basis. This means the company is generating over $100 million per month—a 30% increase from just this past summer.
Since the launch of a paid version of ChatGPT in February, OpenAI’s financial growth has been nothing short of meteoric. Additionally, in August, the company announced the launch of ChatGPT Enterprise, a commercial version of its popular conversational AI chatbot aimed at business users.
For comparison, OpenAI’s total revenue for all of 2022 was just $28 million. The launch of ChatGPT has turbocharged OpenAI’s business, positioning it as a bellwether for demand for generative AI.
The State of AI Report 2023 is live!
Check it out here to learn about key AI developments in research, industry, politics, safety, and predictions for what’s next.
New ways to get inspired with generative AI in Search — from blog.google
We’re testing new ways to get more done right from Search, like the ability to generate imagery with AI or create the first draft of something you need to write.
Chris Perkins, associate partner, McKinsey: Promoting diversity in tech is more nuanced than driving traditional diversity initiatives. This is primarily because of the specialized hard and soft skills required to enter tech-oriented professions and succeed throughout their careers. Our research shows us that various actors, such as nonprofits, for-profits, government agencies, and educational institutions are approaching the problem in small pockets. Could we help catalyze an ecosystem with wraparound support across sectors?
To design this, we have to look at the full pipeline and its “leakage” points, from getting talent trained and in the door all the way up to the C-suite. These gaps are caused by lack of awareness and support in early childhood education through college, and lack of sponsorship and mentorship in early- and mid-career positions.
Next month Microsoft Corp. will start making its artificial intelligence features for Office widely available to corporate customers. Soon after, that will include the ability for it to read your emails, learn your writing style and compose messages on your behalf.
From DSC: As readers of this blog know, I’m generally pro-technology. I see most technologies as tools — which can be used for good or for ill. So I will post items both pro and con concerning AI.
But outsourcing email communications to AI isn’t on my wish list or to-do list.
AI Meets Med School — from insidehighered.com by Lauren Coffey
Adding to academia’s AI embrace, two institutions in the University of Texas system are jointly offering a medical degree paired with a master’s in artificial intelligence.
The University of Texas at San Antonio has launched a dual-degree program combining medical school with a master’s in artificial intelligence.
Several universities across the nation have begun integrating AI into medical practice. Medical schools at the University of Florida, the University of Illinois, the University of Alabama at Birmingham and Stanford and Harvard Universities all offer variations of a certificate in AI in medicine that is largely geared toward existing professionals.
“I think schools are looking at, ‘How do we integrate and teach the uses of AI?’” Dr. Whelan said. “And in general, when there is an innovation, you want to integrate it into the curriculum at the right pace.”
Speaking of emerging technologies and med school, also see:
How to stop AI deepfakes from sinking society — and science — from nature.com by Nicola Jones; via The Neuron
Deceptive videos and images created using generative AI could sway elections, crash stock markets and ruin reputations. Researchers are developing methods to limit their harm.
48+ hours since GPT-4V started rolling out for Plus and enterprise users.
With just under 10 acquisitions in the last 5 years, PowerSchool has been active in transforming itself from a student information systems company to an integrated education company that works across the day and lifecycle of K–12 students and educators. What’s more, the company turned heads in June with its announcement that it was partnering with Microsoft to integrate AI into its PowerSchool Performance Matters and PowerSchool LearningNav products to empower educators in delivering transformative personalized-learning pathways for students.
As readers of this series know, I’ve developed a six-session design/build workshop series for learning design teams to create an AI Learning Design Assistant (ALDA). In my last post in this series, I provided an elaborate ChatGPT prompt that can be used as a rapid prototype that everyone can try out and experiment with. In this post, I’d like to focus on how to address the challenges of AI literacy effectively and equitably.
Countries worldwide are designing and implementing AI governance legislation commensurate to the velocity and variety of proliferating AI-powered technologies. Legislative efforts include the development of comprehensive legislation, focused legislation for specific use cases, and voluntary guidelines and standards.
This tracker identifies legislative policy and related developments in a subset of jurisdictions. It is not globally comprehensive, nor does it include all AI initiatives within each jurisdiction, given the rapid and widespread policymaking in this space. This tracker offers brief commentary on the wider AI context in specific jurisdictions, and lists index rankings provided by Tortoise Media, the first index to benchmark nations on their levels of investment, innovation and implementation of AI.
The prospect of AI-powered, tailored, on-demand learning and performance support is exhilarating: It starts with traditional digital learning made into fully adaptive learning experiences, which would adjust to strengths and weaknesses for each individual learner. The possibilities extend all the way through to simulations and augmented reality, an environment to put into practice knowledge and skills, whether as individuals or working in a team simulation. The possibilities are immense.
“AI is real”
JPMorgan CEO Jamie Dimon says artificial intelligence will be part of “every single process,” adding it’s already “doing all the equity hedging for us” https://t.co/EtsTbiME1a pic.twitter.com/J9YD4slOpv
Part 1: October 16 | 3:00–4:30 p.m. ET
Part 2: October 19 | 3:00–4:30 p.m. ET
Part 3: October 26 | 3:00–4:30 p.m. ET
Part 4: October 30 | 3:00–4:30 p.m. ET
Welcome to The Future of Education with Michael B. Horn. In this insightful episode, Michael gains perspective on mapping AI’s role in education from Jacob Klein, a Product Consultant at Oko Labs, and Laurence Holt, an Entrepreneur In Residence at the XQ Institute. Together, they peer into the burgeoning world of AI in education, analyzing its potential, risks, and roadmap for integrating it seamlessly into learning environments.
AI-Powered Virtual Legal Assistants Transform Client Services — from abovethelaw.com by Olga V. Mack
They can respond more succinctly than ever to answer client questions, triage incoming requests, provide details, and trigger automated workflows that ensure lawyers handle legal issues efficiently and effectively.
As the 2024 campaign season begins, AI image generators have advanced from novelties to powerful tools able to generate photorealistic images, while comprehensive regulation lags behind.
Why it matters: As more fake images appear in political ads, the onus will be on the public to spot phony content.
Go deeper: Can you tell the difference between real and AI-generated images? Take our quiz:
The Chief AI Officer is a relatively new job role, yet becoming increasingly more important as businesses invest further into AI.
Now more than ever, the workplace must prepare for AI and the immense opportunities, as well as challenges, that this type of evolving technology can provide. This job position sees the employee responsible for guiding companies through complex AI tools, algorithms and development. All of this works to ensure that the company stays ahead of the curve and capitalises on digital growth and transformation.
NVIDIA-related items
SIGGRAPH Special Address: NVIDIA CEO Brings Generative AI to LA Show — from blogs.nvidia.com by Brian Caulfield
Speaking to thousands of developers and graphics pros, Jensen Huang announces updated GH200 Grace Hopper Superchip, NVIDIA AI Workbench, and updates to NVIDIA Omniverse with generative AI.
Nvidia on Tuesday announced a new chip designed to run artificial intelligence models.
Nvidia’s GH200 has the same GPU as the H100, Nvidia’s current highest-end AI chip, but pairs it with 141 gigabytes of cutting-edge memory, as well as a 72-core ARM central processor.
“This processor is designed for the scale-out of the world’s data centers,” Nvidia CEO Jensen Huang said Tuesday.
The advancement of robotics and artificial intelligence (AI) has paved the way for a new era in warfare. Gone are the days of manned ships and traditional naval operations. Instead, the US Navy’s Task Force 59 is at the forefront of integrating AI and robotics into naval operations. With a fleet of autonomous robot ships, the Navy aims to revolutionize the way wars are fought at sea.
From DSC: Crap. Ouch. Some things don’t seem to ever change. Few are surprised by this development…but still, this is a mess.
Altman, who has become the face of the recent hype cycle in AI development, feels that humans could be persuaded politically through conversations with chatbots or fooled by AI-generated media.
Welcome to the latest issue of your guide to AI, an editorialized newsletter covering key developments in AI policy, research, industry, and startups. This special summer edition (while we’re producing the State of AI Report 2023!) covers our 7th annual Research and Applied AI Summit that we held in London on 23 June.
…
Below are some of our key takeaways from the event and all the talk videos can be found on the RAAIS YouTube channel here. If this piques your interest to join next year’s event, drop your details here.
Gen AI, however, eliminates the lengthy search. It can parse a natural language query, synthesize the necessary information and serve up the answers the agent is looking for in a neatly summarized response, slashing call times dramatically.
Not only do its AI voices sound exactly like a human, but they can sound exactly like YOU. All it takes is 6 (six!) seconds of your voice, and voila: it can replicate you saying any sentence in any tone, be it happy, sad, or angry.
The use cases are endless, but here are two immediate ones:
Hyperpersonalized content.
Imagine your favorite Netflix show but with every person hearing a slightly different script.
Customer support agents.
We’re talking about ones that are actually helpful, a far cry from the norm!
[NEW] – Joshua Avatar 2.0. Both of these video clips were 100% AI-generated, featuring my own avatar and voice clone.
We’ve made massive enhancements to our life-style avatar’s video quality and fine-tuned our voice technology to mimic my unique accent and speech… pic.twitter.com/9EgxRA69dg
We’re rolling out a bunch of small updates to improve the ChatGPT experience. Shipping over the next week:
1. Prompt examples: A blank page can be intimidating. At the beginning of a new chat, you’ll now see examples to help you get started.
2. Suggested replies: Go deeper with…
The movie trailer for “Genesis,” created with AI, is so convincing it caused a stir on Twitter [on July 27]. That’s how I found out about it. Created by Nicolas Neubert, a senior product designer who works for Elli by Volkswagen in Germany, the “Genesis” trailer promotes a dystopian sci-fi epic reminiscent of the Terminator. There is no movie, of course, only the trailer exists, but this is neither a gag nor a parody. It’s in a class of its own. Eerily made by man, but not.
Trailer: Genesis (Midjourney + Runway)
We gave them everything.
Trusted them with our world.
To become enslaved – become hunted.
We have no choice.
Humanity must rise again to reclaim.
Google just published its 2023 environmental report, and one thing is for certain: The company’s water use is soaring.
The internet giant said it consumed 5.6 billion gallons of water in 2022, the equivalent of 37 golf courses. Most of that — 5.2 billion gallons — was used for the company’s data centers, a 20% increase on the amount Google reported the year prior.
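The golf-course comparison is easy to sanity-check. A quick back-of-the-envelope calculation using only the figures reported above:

```python
# Sanity check of the comparison above: 5.6 billion gallons across
# 37 golf-course-equivalents implies each course's annual water use,
# and the data-center figure implies Google's data-center share.
total_gallons = 5.6e9
golf_courses = 37
per_course = total_gallons / golf_courses
print(f"~{per_course / 1e6:.0f} million gallons per golf course per year")

datacenter_share = 5.2e9 / total_gallons
print(f"Data centers: ~{datacenter_share:.0%} of total water use")
```

That works out to roughly 150 million gallons per course per year, with data centers accounting for over nine-tenths of the company's reported consumption.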
We think prompt engineering (learning to converse with an AI) is overrated. Yup, we said it. We think the future of chat interfaces will be a combination of preloading context and then allowing AI to guide you to the information you seek.
From DSC: Agreed. I think we’ll see a lot more interface updates and changes to make things easier to use, find, develop.
Artificial Intelligence continues to dominate the news. In the past month, we’ve seen a number of major updates to language models: Claude 2, with its 100,000 token context limit; LLaMA 2, with (relatively) liberal restrictions on use; and Stable Diffusion XL, a significantly more capable version of Stable Diffusion. Does Claude 2’s huge context really change what the model can do? And what role will open access and open source language models have as commercial applications develop?
Google Lab Sessions are collaborations between “visionaries from all realms of human endeavor” and the company’s latest AI technology. [On 8/2/23], Google released TextFX as an “experiment to demonstrate how generative language technologies can empower the creativity and workflows of artists and creators” with Lupe Fiasco.
Google’s TextFX includes 10 tools and is powered by the PaLM 2 large language model via the PALM API. Meant to aid in the creative process of rappers, writers, and other wordsmiths, it is part of Google Labs.
Partnership with American Journalism Project to support local news — from openai.com; via The Rundown AI
A new $5+ million partnership aims to explore ways the development of artificial intelligence (AI) can support a thriving, innovative local news field, and ensure local news organizations shape the future of this emerging technology.
SEC’s Gensler Warns AI Risks Financial Stability — from bloomberg.com by Lydia Beyoud; via The Brainyacts
SEC on lookout for fraud, conflicts of interest, chair says | Gensler cautions companies touting AI in corporate docs
The recent petition from Kenyan workers who engage in content moderation for OpenAI’s ChatGPT, via the intermediary company Sama, has opened a new discussion in the global legal market. This dialogue surrounds the concept of “harmful and dangerous technology work” and its implications for laws and regulations within the expansive field of AI development and deployment.
The petition, asking for investigations into the working conditions and operations of big tech companies outsourcing services in Kenya, is notable not just for its immediate context but also for the broader legal issues it raises. Central among these is the notion of “harmful and dangerous technology work,” a term that encapsulates the uniquely modern form of labor involved in developing and ensuring the safety of AI systems.
The most junior data labelers, or agents, earned a basic salary of 21,000 Kenyan shillings ($170) per month, with monthly bonuses and commissions for meeting performance targets that could elevate their hourly rate to just $1.44 – a far cry from the $12.50 hourly rate that OpenAI paid Sama for their work. This discrepancy raises crucial questions about the fair distribution of economic benefits in the AI value chain.
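The discrepancy is worth quantifying. Using only the figures reported above:

```python
# Quick check of the pay gap described above: the data labelers'
# best-case hourly rate versus the hourly rate OpenAI paid Sama.
worker_hourly = 1.44  # USD, top rate after bonuses and commissions
sama_hourly = 12.50   # USD, what OpenAI reportedly paid Sama

worker_share = worker_hourly / sama_hourly
print(f"Workers received at most {worker_share:.1%} of the contract rate")
```

In other words, even the best-compensated labelers received under 12 cents of every dollar OpenAI paid for their work, which is the substance of the value-chain question the petition raises.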
It’s become so difficult to track AI tools as they are revealed. I’ve decided to create a running list of tools as I find out about them. The list is in alphabetical order even though there are classification systems that I’ve seen others use. Although it’s not good in blogging land to update posts, I’ll change the date every time that I update this list. Please feel free to respond to me with your comments about any of these as well as AI tools that you use that I do not have on the list. I’ll post your comments next to a tool when appropriate. Thanks.
Claude has surprising capabilities, including a couple you won’t find in the free version of ChatGPT.
Since this new AI bot launched on July 11, I’ve found Claude useful for summarizing long transcripts, clarifying complex writings, and generating lists of ideas and questions. It also helps me put unstructured notes into orderly tables. For some things, I prefer Claude to ChatGPT. Read on for Claude’s strengths and limitations, and ideas for using it creatively.
Claude’s free version allows you to attach documents for analysis. ChatGPT’s doesn’t.
Large language models like GPT-4 have taken the world by storm thanks to their astonishing command of natural language. Yet the most significant long-term opportunity for LLMs will entail an entirely different type of language: the language of biology.
In the near term, the most compelling opportunity to apply large language models in the life sciences is to design novel proteins.