Are we ready to navigate the complex ethics of advanced AI assistants? — from futureofbeinghuman.com by Andrew Maynard
An important new paper lays out the importance and complexities of ensuring that increasingly advanced AI-based assistants are developed and used responsibly
Last week, a behemoth of a paper was released by AI researchers in academia and industry on the ethics of advanced AI assistants.
It’s one of the most comprehensive and thoughtful papers on developing transformative AI capabilities in socially responsible ways that I’ve read in a while. And it’s essential reading for anyone developing and deploying AI-based systems that act as assistants or agents — including many of the AI apps and platforms that are currently being explored in business, government, and education.
The paper — The Ethics of Advanced AI Assistants — is written by 57 co-authors representing researchers at Google DeepMind, Google Research, Jigsaw, and a number of prominent universities, including the University of Edinburgh, the University of Oxford, and Delft University of Technology. Coming in at 274 pages, this is a massive piece of work. And as the authors persuasively argue, it’s a critically important one at this point in AI development.
Key questions for the ethical and societal analysis of advanced AI assistants include:
- What is an advanced AI assistant? How does an AI assistant differ from other kinds of AI technology?
- What capabilities would an advanced AI assistant have? How capable could these assistants be?
- What is a good AI assistant? Are there certain values that we want advanced AI assistants to evidence across all contexts?
- Are there limits on what AI assistants should be allowed to do? If so, how are these limits determined?
- What should an AI assistant be aligned with? With user instructions, preferences, interests, values, well-being or something else?
- What issues need to be addressed for AI assistants to be safe? What does safety mean for this class of technologies?
- What new forms of persuasion might advanced AI assistants be capable of? How can we ensure that users remain appropriately in control of the technology?
- How can people – especially vulnerable users – be protected from AI manipulation and unwanted disclosure of personal information?
- Is anthropomorphism for AI assistants morally problematic? If so, might it still be permissible under certain conditions?
- …