Partnership with American Journalism Project to support local news — from openai.com; via The Rundown AI
A new $5+ million partnership aims to explore how the development of artificial intelligence (AI) can support a thriving, innovative local news field and to ensure that local news organizations shape the future of this emerging technology.
SEC’s Gensler Warns AI Risks Financial Stability — from bloomberg.com by Lydia Beyoud; via The Brainyacts
SEC on lookout for fraud, conflicts of interest, chair says | Gensler cautions companies touting AI in corporate docs
Per a recent Brainyacts posting:
The recent petition from Kenyan workers who engage in content moderation for OpenAI’s ChatGPT, via the intermediary company Sama, has opened a new discussion in the global legal market. This dialogue centers on the concept of “harmful and dangerous technology work” and its implications for laws and regulations within the expansive field of AI development and deployment.
The petition, asking for investigations into the working conditions and operations of big tech companies outsourcing services in Kenya, is notable not just for its immediate context but also for the broader legal issues it raises. Central among these is the notion of “harmful and dangerous technology work,” a term that encapsulates the uniquely modern form of labor involved in developing and ensuring the safety of AI systems.
The most junior data labelers, or agents, earned a basic salary of 21,000 Kenyan shillings ($170) per month, with monthly bonuses and commissions for meeting performance targets that could elevate their hourly rate to just $1.44 – a far cry from the $12.50 hourly rate that OpenAI paid Sama for their work. This discrepancy raises crucial questions about the fair distribution of economic benefits in the AI value chain.
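For a rough sense of the scale of that gap, here is a back-of-the-envelope calculation in Python. The 160-hour working month is an assumption for illustration, not a figure from the reporting.

```python
# Back-of-the-envelope check of the pay gap described above.
# Assumption (not from the source): a ~160-hour full-time working month.

BASE_MONTHLY_USD = 170.00    # junior data labeler's base salary
TOP_HOURLY_USD = 1.44        # best case, with bonuses and commissions
BILLED_HOURLY_USD = 12.50    # rate OpenAI reportedly paid Sama

HOURS_PER_MONTH = 160        # assumed

base_hourly = BASE_MONTHLY_USD / HOURS_PER_MONTH
worker_share = TOP_HOURLY_USD / BILLED_HOURLY_USD

print(f"Implied base hourly rate: ${base_hourly:.2f}")                   # ~$1.06
print(f"Best-case worker share of the billed rate: {worker_share:.0%}")  # ~12%
```

Even in the best case, the workers doing the labeling received roughly an eighth of what was billed for their hours, which is the discrepancy the petition puts at issue.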
How ChatGPT Code Interpreter (And Four Other AI Initiatives) Might Revolutionize Education — from edtechinsiders.substack.com by Phuong Do, Alex Sarlin, and Sarah Morin
And more on Meta’s Llama, education LLMs, the Supreme Court affirmative action ruling, and Byju’s continued unraveling
Let’s put it all together for emphasis. With Code Interpreter by ChatGPT, you can:
- Upload any file
- Tell ChatGPT what you want to do with it
- Receive your instructions translated into Python
- Execute the code
- Transform the output back into readable language (or visuals, charts, graphs, tables, etc.)
- Provide the results (and the underlying Python code)
— Daniel Christian (he/him/his) (@dchristian5) July 24, 2023
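To make the steps in that list concrete, below is a minimal sketch of the kind of Python that Code Interpreter might generate and run behind the scenes. The file name ("sales.csv") and its columns ("date", "amount") are hypothetical stand-ins for whatever you upload.

```python
# Illustrative only: code of the sort Code Interpreter might write when asked
# to "summarize this CSV and chart the monthly totals".
# "sales.csv" and its "date"/"amount" columns are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("sales.csv")    # the uploaded file
summary = df.describe()          # quick statistical summary

# Aggregate amounts by calendar month.
monthly = (
    df.assign(month=pd.to_datetime(df["date"]).dt.to_period("M"))
      .groupby("month")["amount"]
      .sum()
)

# Render the visual output alongside the printed results.
monthly.plot(kind="bar", title="Monthly totals")
plt.tight_layout()
plt.savefig("monthly_totals.png")

print(summary)   # the readable results, returned along with the code itself
```

In the chat interface all of this happens automatically; the point is simply that your plain-English instructions become ordinary pandas/matplotlib code that you can inspect, correct, and reuse.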
AI Tools and Links — from Wally Boston
It’s become difficult to keep track of AI tools as quickly as they are released, so I’ve decided to keep a running list of tools as I learn about them. The list is in alphabetical order, even though I’ve seen others use classification systems. Although updating posts is frowned upon in blogging land, I’ll change the date every time I update this list. Please feel free to send me your comments about any of these tools, as well as AI tools you use that aren’t on the list; I’ll post your comments next to a tool when appropriate. Thanks.
Meet Claude — A helpful new AI assistant — from wondertools.substack.com by Jeremy Caplan
How to make the most of ChatGPT’s new alternative
Claude has surprising capabilities, including a couple you won’t find in the free version of ChatGPT.
Since this new AI bot launched on July 11, I’ve found Claude useful for summarizing long transcripts, clarifying complex writing, and generating lists of ideas and questions. It also helps me turn unstructured notes into orderly tables. For some tasks, I prefer Claude to ChatGPT. Read on for Claude’s strengths and limitations, plus ideas for using it creatively.
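For anyone who wants the same transcript-summarizing behavior programmatically rather than in the chat interface, here is a minimal sketch using Anthropic's Python SDK as it stood in mid-2023 (the completions API with the claude-2 model). The transcript file name is hypothetical, and an ANTHROPIC_API_KEY is assumed to be set in your environment.

```python
# Minimal sketch: summarizing a long transcript with Claude via the
# mid-2023 Anthropic Python SDK (pip install anthropic).
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical input file; substitute your own transcript.
with open("meeting_transcript.txt") as f:
    transcript = f.read()

completion = client.completions.create(
    model="claude-2",
    max_tokens_to_sample=500,
    prompt=(
        f"{HUMAN_PROMPT} Summarize the key points of this transcript "
        f"as a short bulleted list:\n\n{transcript}{AI_PROMPT}"
    ),
)
print(completion.completion)
```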
The Next Frontier For Large Language Models Is Biology — from forbes.com by Rob Toews
Large language models like GPT-4 have taken the world by storm thanks to their astonishing command of natural language. Yet the most significant long-term opportunity for LLMs will entail an entirely different type of language: the language of biology.
In the near term, the most compelling opportunity to apply large language models in the life sciences is to design novel proteins.
Seven AI companies agree to safeguards in the US — from bbc.com by Shiona McCallum; via Tom Barrett
Seven leading companies in artificial intelligence have committed to managing risks posed by the tech, the White House has said.
This will include testing the security of AI systems and making the results of those tests public.
Representatives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI joined US President Joe Biden to make the announcement.