The legal and ethical minefield of AI: ‘Tech has the power to do harm as well as good’ — by Joanna Goodman


Artificial intelligence and machine learning tools are already embedded in our lives, but how should businesses that use such technology manage the associated risks?

As artificial intelligence (AI) penetrates deeper into business operations and services, even supporting judicial decision-making, are we approaching a time when the greatest legal mind could be a machine? According to Prof Dame Wendy Hall, co-author of the report Growing the Artificial Intelligence Industry in the UK, we are just at the beginning of the AI journey and now is the time to set boundaries.

“All tech has the power to do harm as well as good,” Hall says. “So we have to look at regulating companies and deciding what they can and cannot do with the data now.”

AI and robotics professor Noel Sharkey highlights the “legal and moral implications of entrusting human decisions to algorithms that we cannot fully understand”. He explains that the narrow AI systems businesses currently use to draw inferences from large volumes of data apply algorithms that learn from experience, feeding on real-time and historical data. But these systems are far from perfect.

The results can include flawed outcomes or reasoning, and further difficulties arise from a lack of transparency. This supports Hall’s call for supervision and regulation. Businesses that use AI in their operations need to manage the ethical and legal risks, and the legal profession will have a major role to play in assessing and apportioning risk, responsibility and accountability.
