Inside a radical new project to democratize AI — by Melissa Heikkilä
A group of over 1,000 AI researchers has created a multilingual large language model bigger than GPT-3—and they’re giving it out for free.


PARIS — This is as close as you can get to a rock concert in AI research. Inside the supercomputing center of the French National Center for Scientific Research, on the outskirts of Paris, rows and rows of what look like black fridges hum at a deafening 100 decibels.

They form part of a supercomputer that has spent 117 days gestating a new large language model (LLM) called BLOOM that its creators hope represents a radical departure from the way AI is usually developed.

Unlike other, more famous large language models such as OpenAI’s GPT-3 and Google’s LaMDA, BLOOM (which stands for BigScience Large Open-science Open-access Multilingual Language Model) is designed to be as transparent as possible, with researchers sharing details about the data it was trained on, the challenges in its development, and the way they evaluated its performance. OpenAI and Google have not shared their code or made their models available to the public, and external researchers have very little understanding of how these models are trained.

Another item re: AI:

Not my job: AI researchers building surveillance tech and deepfakes resist ethical concerns — by Kate Kaye
The computer vision research community is behind on AI ethics, but it’s not just a research problem. Practitioners say the ethics disconnect persists as young computer vision scientists make their way into the ranks of corporate AI.

For the first time, the Computer Vision and Pattern Recognition Conference — a global event that attracted companies including Amazon, Google, Microsoft, and Tesla to recruit new AI talent this year — “strongly encouraged” researchers whose papers were accepted to the conference to include a discussion of the potential negative societal impacts of their research in their submission forms.