Just Because You Can Doesn’t Mean You Should: What Genetic Engineers Can Learn From ‘Jurassic World’ — from singularityhub.com by Andrew Maynard

Excerpt:

Maybe this is the abiding message of Jurassic World: Dominion—that despite incredible advances in genetic design and engineering, things can and will go wrong if we don’t embrace the development and use of the technology in socially responsible ways.

The good news is that we still have time to close the gap between “could” and “should” in how scientists redesign and reengineer genetic code. But as Jurassic World: Dominion reminds moviegoers, the future is often closer than it might appear.

 

Inside a radical new project to democratize AI — from technologyreview.com by Melissa Heikkilä
A group of over 1,000 AI researchers has created a multilingual large language model bigger than GPT-3—and they’re giving it out for free.

Excerpt:

PARIS — This is as close as you can get to a rock concert in AI research. Inside the supercomputing center of the French National Center for Scientific Research, on the outskirts of Paris, rows and rows of what look like black fridges hum at a deafening 100 decibels.

They form part of a supercomputer that has spent 117 days gestating a new large language model (LLM) called BLOOM that its creators hope represents a radical departure from the way AI is usually developed.

Unlike other, more famous large language models such as OpenAI’s GPT-3 and Google’s LaMDA, BLOOM (which stands for BigScience Large Open-science Open-access Multilingual Language Model) is designed to be as transparent as possible, with researchers sharing details about the data it was trained on, the challenges in its development, and the way they evaluated its performance. OpenAI and Google have not shared their code or made their models available to the public, and external researchers have very little understanding of how these models are trained.
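Because BLOOM's weights are openly published on the Hugging Face Hub, anyone can experiment with the model directly rather than going through a gated API. A minimal sketch, assuming the Hugging Face transformers library and the small bigscience/bloom-560m variant of the released checkpoints:

```python
# Sketch: loading an openly released BLOOM checkpoint and generating text.
# Assumes the Hugging Face `transformers` library is installed; the small
# bigscience/bloom-560m variant is used so the example runs on modest hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

# BLOOM is multilingual by design, so the prompt need not be in English.
inputs = tokenizer("La recherche ouverte en IA permet", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```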

Another item re: AI:

Not my job: AI researchers building surveillance tech and deepfakes resist ethical concerns — from protocol.com by Kate Kaye
The computer vision research community is behind on AI ethics, but it’s not just a research problem. Practitioners say the ethics disconnect persists as young computer vision scientists make their way into the ranks of corporate AI.

For the first time, the Computer Vision and Pattern Recognition Conference — a global event that attracted companies including Amazon, Google, Microsoft and Tesla to recruit new AI talent this year — “strongly encouraged” researchers whose papers were accepted to the conference to include a discussion about potential negative societal impacts of their research in their submission forms.

 

The Future of Education | By Futurist Gerd Leonhard | A Video for EduCanada — from futuristgerd.com

Per Gerd:

Recently, I was invited by the Embassy of Canada in Switzerland to create this special presentation and promotional video discussing the Future of Education and to explore how Canada might be leading the way. Here are some of the key points I spoke about in the video. Watch the whole thing here: the Future of Education.

 

…because by 2030, I believe, the traditional way of learning — just in case, you know, storing and downloading information — will be replaced by learning just in time, on demand: learning to learn, unlearning, relearning, and the importance of being the right person. Character skills, personality skills, traits: they may very well rival the value of having the right degree.

If you learn like a robot…you’ll never have a job to begin with.

Gerd Leonhard


Also relevant/see:

The Next 10 Years: Rethinking Work and Revolutionising Education (Gerd Leonhard’s keynote in Riga) — from futuristgerd.com


 
 

Will Learning Move into the Metaverse? — from learningsolutionsmag.com by Pamela Hogle

Excerpt:

In its 2022 Tech Trends report, the Future Today Institute predicts that, “The future of work will become more digitally immersive as companies deploy virtual meeting platforms, digital experiences, and mixed reality worlds.”

Learning leaders are likely to spearhead the integration of their organizations’ workers into a metaverse, whether by providing training in using the tools that make a metaverse possible or through developing training and performance support resources that learners will use in an immersive environment.

Advantages of moving some workplace collaboration and learning into a metaverse include ease of scaling and globalization. The Tech Trends report mentions personalization at scale and easy multilingual translation as advantages of “synthetic media”—algorithmically generated digital content, which could proliferate in metaverses.
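To make the “easy multilingual translation” point concrete: openly available translation models can already be run in a few lines. A minimal sketch, assuming the Hugging Face transformers library and one of the publicly available Helsinki-NLP translation models (an illustrative choice, not one named in the report):

```python
# Sketch: machine translation of learning content with an open model.
# Assumes the `transformers` library; Helsinki-NLP/opus-mt-en-fr is an
# illustrative, publicly available English-to-French model.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")
result = translator("Welcome to today's safety training module.")
print(result[0]["translation_text"])
```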

Also see:

Future Today Institute — Tech Trends 2022


Also from learningsolutionsmag.com, see:

Manage Diverse Learning Ecosystems with Federated Governance

Excerpt:

So, over time, the L&D departments eventually go back to calling their own shots.

What does this mean for the learning ecosystem? If each L&D team chooses its own learning platforms, maintenance and support will be a nightmare. Each L&D department may be happy with the autonomy, but learners have no patience for navigating multiple LMSs or going to several systems to get their training records.

Creating common infrastructure among dispersed groups
Herein lies the problem: how can groups that have no accountability to each other share a common infrastructure?

 

The Race to Hide Your Voice — from wired.com by Matt Burgess
Voice recognition—and data collection—have boomed in recent years. Researchers are figuring out how to protect your privacy.

AI: Where are we now? — from educause.edu by EDUCAUSE
Is the use of AI in higher education today invisible? dynamic? perilous? Maybe it’s all three.

What is artificial intelligence and how is it used? — from europarl.europa.eu; with thanks to Tom Barrett for this resource

 

Radar Trends to Watch: June 2022 — from oreilly.com

Excerpt:

The explosion of large models continues. Several developments are especially noteworthy. DeepMind’s Gato model is unique in that it’s a single model that’s trained for over 600 different tasks; whether or not it’s a step towards general intelligence (the ensuing debate may be more important than the model itself), it’s an impressive achievement. Google Brain’s Imagen creates photorealistic images that are impressive, even after you’ve seen what DALL-E 2 can do. And Allen AI’s Macaw (surely an allusion to Emily Bender and Timnit Gebru’s Stochastic Parrots paper) is open source, one tenth the size of GPT-3, and claims to be more accurate. Facebook/Meta is also releasing an open source large language model, including the model’s training log, which records in detail the work required to train it.

 

 

How to ensure we benefit society with the most impactful technology being developed today — from deepmind.com by Lila Ibrahim

In 2000, I took a sabbatical from my job at Intel to visit the orphanage in Lebanon where my father was raised. For two months, I worked to install 20 PCs in the orphanage’s first computer lab, and to train the students and teachers to use them. The trip started out as a way to honour my dad. But being in a place with such limited technical infrastructure also gave me a new perspective on my own work. I realised that without real effort by the technology community, many of the products I was building at Intel would be inaccessible to millions of people. I became acutely aware of how that gap in access was exacerbating inequality; even as computers solved problems and accelerated progress in some parts of the world, others were being left further behind. 

After that first trip to Lebanon, I started reevaluating my career priorities. I had always wanted to be part of building groundbreaking technology. But when I returned to the US, my focus narrowed in on helping build technology that could make a positive and lasting impact on society. That led me to a variety of roles at the intersection of education and technology, including co-founding Team4Tech, a non-profit that works to improve access to technology for students in developing countries. 


Also relevant/see:

Microsoft AI news: Making AI easier, simpler, more responsible — from venturebeat.com by Sharon Goldman

But one common theme bubbles over consistently: For AI to become more useful for business applications, it needs to be easier, simpler, more explainable, more accessible and, most of all, responsible.

 

 

The video above is from Steve Kerr’s statement on the school shooting in Texas.

From DSC:
Steve Kerr has it right. Powerful. Critically important. 

“Enough!”  “We can’t get numb to this!”

 

The Future of Work and the Jobs we might have in 2040 — from futurist.com by Nikolas Badminton & Marianne Powers

Excerpt:

So, let’s set our sights on a future horizon of 2040 and wonder what the future of work and the future of jobs for our children may be. The world may feel and look the same, but underneath we’ll need people to transition to new careers to support the hyper-fast, data-obsessed world. Let’s take a look at the Future of Work and Jobs in 2040:

  • Human-centred Designers and Ethicists
  • Artificial Intelligence Psychologists
  • Metaverse Architects
 

AI research is a dumpster fire and Google’s holding the matches — from thenextweb.com by Tristan Greene
Scientific endeavor is no match for corporate greed

Excerpts:

The world of AI research is in shambles. From the academics prioritizing easy-to-monetize schemes over breaking novel ground, to the Silicon Valley elite using the threat of job loss to encourage corporate-friendly hypotheses, the system is a broken mess.

And Google deserves a lion’s share of the blame.

Google, more than any other company, bears responsibility for the modern AI paradigm. That means we need to give big G full marks for bringing natural language processing and image recognition to the masses.

It also means we can credit Google with creating the researcher-eat-researcher environment that has some college students and their big-tech-partnered professors treating research papers as little more than bait for venture capitalists and corporate headhunters.

But the system’s set up to encourage the monetization of algorithms first, and to further the field second. In order for this to change, big tech and academia both need to commit to wholesale reform in how research is presented and reviewed.

Also relevant/see:

Every month, Essentials publishes an Industry Trend Report on AI in general and on the following related topics:

  • AI Research
  • AI Applied Use Cases
  • AI Ethics
  • AI Robotics
  • AI Marketing
  • AI Cybersecurity
  • AI Healthcare

It’s never too early to get your AI ethics right — from protocol.com by Veronica Irwin
The Ethical AI Governance Group wants to give startups a framework for avoiding scandals and blunders while deploying new technology.

Excerpt:

To solve this problem, a group of consultants, venture capitalists and executives in AI created the Ethical AI Governance Group last September. In March, it went public and published a survey-style “continuum” for investors to use in advising the startups in their portfolios.

The continuum conveys clear guidance for startups at various growth stages, recommending that startups have people in charge of AI governance and data privacy strategy, for example. EAIGG leadership argues that using the continuum will protect VC portfolios from value-destroying scandals.

 

The rise of tech ethicists shows how the industry is changing — from protocol.com by Veronica Irwin
Though the job titles are new, the ways to attract new talent are virtually the same.

Excerpt:

In 2022, “responsible tech” is a career path. Job titles range from “trust and safety officer” to “policy lead.” And several organizations and academic institutions are engaged in ecosystem-mapping projects to define which academic programs best prepare students to work in the field, how the jobs are described and what companies are pursuing ethical tech in earnest.

“There’s a lot of appetite for this, especially as the public has become very aware of highly publicized problems with technology,” Tweed, now the program director for All Tech is Human, said. “I see that continuing to grow for the foreseeable future.”


 

From DSC:
There are many things that are not right here — especially historically speaking. But this is one thing that WE who are currently living can work on resolving.

*******

The Cost of Connection — from chronicle.com by Katherine Mangan
The internet is a lifeline for students on far-flung tribal campuses. Too often, they’re priced out of learning.

Excerpt:

Affordable and reliable broadband access can be a lifeline for tribal colleges, usually located on or near Native American reservations, often in remote, rural areas across the Southwest and Midwest. Chartered by their respective tribal governments, the country’s 35 accredited tribal colleges operate in more than 75 campus sites across 16 states, serving more than 160,000 American Indians and Alaska Natives each year. They emphasize and help sustain the culture, languages, and traditions of their tribal communities and are often the only higher-education option available for Native students in some of the nation’s poorest rural regions.

Also relevant/see:

Tribal Colleges Will Continue Online, Despite Challenges — from chronicle.com by Taylor Swaak
Other institutions could learn from their calculus.

Excerpt:

Two years after tribal colleges shuttered alongside institutions nationwide, many remain largely, if not fully, online, catering to students who’ve historically faced barriers to attending in person. Adult learners — especially single mothers who may struggle to find child care, or those helping to support multigenerational households — make up the majority of students at more than half of the 32 federally recognized institutions in the Tribal Colleges and Universities Program. These colleges are also often located in low-income, rural areas, where hours of daily commute time (and the cost of gas) can prove untenable for students simultaneously working part- or full-time jobs.

Also relevant/see:

Why Tribal Colleges Struggle to Get Reliable Internet Service — from chronicle.com by Katherine Mangan and Jacquelyn Elias
For tribal colleges across the country, the pandemic magnified internet-access inequities. Often located on far-flung tribal lands, their campuses are overwhelmingly in areas with few broadband service providers, sometimes leaving them with slow speeds and spotty coverage.

“You can be driving from a nearby town, and as soon as you hit the reservation, the internet and cellphone signals drop off,” said Cheryl Crazy Bull, president of the American Indian College Fund and a member of the Sicangu Lakota Nation. “Students would be in the middle of class and their Wi-Fi access dropped off.”

Worsening matters, many students have been limited by outdated equipment. “We had students who were trying to take classes on their flip phones,” Crazy Bull said. Such stories were cropping up throughout Indian territory.

 

Announcing the 2022 AI Index Report — from hai.stanford.edu by Stanford University

Excerpt/description:

Welcome to the Fifth Edition of the AI Index

The AI Index is an independent initiative at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), led by the AI Index Steering Committee, an interdisciplinary group of experts from across academia and industry. The annual report tracks, collates, distills, and visualizes data relating to artificial intelligence, enabling decision-makers to take meaningful action to advance AI responsibly and ethically with humans in mind.

The 2022 AI Index report measures and evaluates the rapid rate of AI advancement from research and development to technical performance and ethics, the economy and education, AI policy and governance, and more. The latest edition includes data from a broad set of academic, private, and non-profit organizations as well as more self-collected data and original analysis than any previous edition.

Also relevant/see:

  • Andrew Ng predicts the next 10 years in AI — from venturebeat.com by George Anadiotis
  • Nvidia’s latest AI wizardry turns 2D photos into 3D scenes in milliseconds — from thenextweb.com by Thomas Macaulay
    The Polaroid of the future?
    Nvidia events are renowned for mixing technical bravado with splashes of showmanship — and this year’s GTC conference was no exception. The company ended a week that introduced a new enterprise GPU and an Arm-based “superchip” with a trademark flashy demo. Some 75 years after the world’s first instant photo captured the 3D world in a 2D picture…

Nvidia believes Instant NeRF could generate virtual worlds, capture video conferences in 3D, and reconstruct scenes for 3D maps.
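The core idea behind a NeRF (neural radiance field) is compact enough to sketch: a learned function maps a 3D point and viewing direction to a color and a density, and each pixel is rendered by accumulating those values along a camera ray. The toy Python sketch below illustrates only that volume-rendering step; the hard-coded radiance field stands in for the trained network, and Instant NeRF’s actual speedup (a multiresolution hash-grid encoding) is not shown.

```python
import numpy as np

# Toy stand-in for the learned radiance field. In a real NeRF this is a
# neural network trained on 2D photos of a scene; here it is a soft gray
# sphere at the origin, just to make the rendering step runnable.
def radiance_field(points, view_dir):
    sigma = np.exp(-np.sum(points**2, axis=-1))   # density per sample point
    rgb = np.full(points.shape, 0.8)              # constant gray color
    return rgb, sigma

def render_ray(origin, direction, n_samples=64, near=0.0, far=4.0):
    """Classic NeRF volume rendering: accumulate color along one camera ray."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    rgb, sigma = radiance_field(points, direction)
    delta = t[1] - t[0]                            # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)           # opacity of each segment
    # Transmittance: how much light survives to reach each sample.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    weights = trans * alpha
    return np.sum(weights[:, None] * rgb, axis=0)  # final pixel color (RGB)

pixel = render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
print(pixel)
```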

 

China Is About to Regulate AI—and the World Is Watching — from wired.com by Jennifer Conrad
Sweeping rules will cover algorithms that set prices, control search results, recommend videos, and filter content.

Excerpt:

On March 1, China will outlaw this kind of algorithmic discrimination as part of what may be the world’s most ambitious effort to regulate artificial intelligence. Under the rules, companies will be prohibited from using personal information to offer users different prices for a product or service.

The sweeping rules cover algorithms that set prices, control search results, recommend videos, and filter content. They will impose new curbs on major ride-hailing, ecommerce, streaming, and social media companies.

 