What will be important in the learn and work ecosystem in 2030? How do we prepare? — from evolllution.com by Holly Zanville | Senior Advisor for Credentialing and Workforce Development, Lumina Foundation

Excerpt:

These seven suggested actions—common to all scenarios—especially resonated with Lumina:

  1. Focus on learning: All learners will need a range of competencies and skills, most critically: learning how to learn; having a foundation in math, science, IT and cross-disciplines; and developing the behaviors of grit, empathy and effective communication.
  2. Prepare all “systems”: Schools will continue to be important places to teach competencies and skills. Parents will be important teachers for children. Workplaces will also be important places for learning, and many learners will need instruction on how to work effectively as part of human/machine teams.
  3. Integrate education and work: Education systems will need to be integrated with work in an education/work ecosystem. To enable movement within the ecosystem, credentials will be useful, but only if they are transparent and portable. The competencies and skills that stand behind credentials will need to be identifiable, using a common language to enable (a) credential providers to educate/train for an integrated education/work system; (b) employers to hire people and upgrade their skills; and (c) governments (federal/state/local) to incentivize and regulate programs and policies that support the education/work system.
  4. Assess learning: Assessing competencies and skills acquired in multiple settings and modes (including artificial reality and virtual reality tools), will be essential. AI will enable powerful new assessment tools to collect and analyze data about what humans know and can do.
  5. Build fair, moral AI: There will be a high priority on ensuring that AI has built-in checks and balances that reflect moral values and honor different cultural perspectives.
  6. Prepare for human/machine futures: Machines will join humans in homes, schools and workplaces. Machines will likely be viewed as citizens with rights. Humans must prepare for side-by-side “relationships” with machines, especially in situations in which machines will be managing aspects of education, work and life formerly managed by humans. Major questions will also arise about the ownership of AI structures—what ownership looks like, and who profits from ubiquitous AI structures.
  7. Build networks for readiness/innovation: Open and innovative partnerships will be needed for whatever future scenarios emerge. In a data-rich world, we won’t solve problems alone; networks, partnerships and communities will be key.

 

 

Also see:

 

 

An open letter to Microsoft and Google’s Partnership on AI — from wired.com by Gerd Leonhard
In a world where machines may have an IQ of 50,000, what will happen to the values and ethics that underpin privacy and free will?

Excerpt:

This open letter is my modest contribution to the unfolding of this new partnership. Data is the new oil – which now makes your companies the most powerful entities on the globe, way beyond oil companies and banks. The rise of ‘AI everywhere’ is certain to only accelerate this trend. Yet unlike the giants of the fossil-fuel era, there is little oversight on what exactly you can and will do with this new data-oil, and what rules you’ll need to follow once you have built that AI-in-the-sky. There appears to be very little public stewardship, while accepting responsibility for the consequences of your inventions is rather slow in surfacing.

 

In a world where machines may have an IQ of 50,000 and the Internet of Things may encompass 500 billion devices, what will happen with those important social contracts, values and ethics that underpin crucial issues such as privacy, anonymity and free will?

 

 

My book identifies what I call the “Megashifts”. They are changing society at warp speed, and your organisations are in the eye of the storm: digitization, mobilisation and screenification, automation, intelligisation, disintermediation, virtualisation and robotisation, to name the most prominent. Megashifts are not simply trends or paradigm shifts, they are complete game changers transforming multiple domains simultaneously.

 

 

If the question is no longer about if technology can do something, but why…who decides this?

Gerd Leonhard

 

 

From DSC:
Though this letter was written two years ago, back in October 2016, the messages, reflections, and questions that Gerd puts on the table are still very much relevant today. The leaders of these powerful companies have enormous power — power to do good or to do evil. Power to help or power to hurt. Power to be a positive force for societies throughout the globe and to help create dreams, or power to create dystopian societies and a future filled with nightmares. The state of the human heart is key here — though many will hate me saying that. But it's true. At the end of the day, we need to care deeply about — and be keenly aware of — the character and values of the leaders of these powerful companies.

 

 

Also relevant/see:

Spray-on antennas will revolutionize the Internet of Things — from networkworld.com by Patrick Nelson
Researchers at Drexel University have developed a method to spray on antennas that outperform traditional metal antennas, opening the door to faster and easier IoT deployments.

 From DSC:
Again, it’s not too hard to imagine in this arena that technologies can be used for good or for ill.

 

 

Evaluating the impact of artificial intelligence on human rights — from today.law.harvard.edu by Carolyn Schmitt
Report from Berkman Klein Center for Internet & Society provides new foundational framework for considering risks and benefits of AI on human rights

Excerpt:

From using artificial intelligence (AI) to determine credit scores to using AI to determine whether a defendant or criminal may offend again, AI-based tools are increasingly being used by people and organizations in positions of authority to make important, often life-altering decisions. But how do these instances impact human rights, such as the right to equality before the law, and the right to an education?

A new report from the Berkman Klein Center for Internet & Society (BKC) addresses this issue and weighs the positive and negative impacts of AI on human rights through six “use cases” of algorithmic decision-making systems, including criminal justice risk assessments and credit scores. Whereas many other reports and studies have focused on ethical issues of AI, the BKC report is one of the first efforts to analyze the impacts of AI through a human rights lens, and proposes a new framework for thinking about the impact of AI on human rights. The report was funded, in part, by the Digital Inclusion Lab at Global Affairs Canada.

“One of the things I liked a lot about this project and about a lot of the work we’re doing [in the Algorithms and Justice track of the Ethics and Governance of AI Initiative] is that it’s extremely current and tangible. There are a lot of far-off science fiction scenarios that we’re trying to think about, but there’s also stuff happening right now,” says Professor Christopher Bavitz, the WilmerHale Clinical Professor of Law, Managing Director of the Cyberlaw Clinic at BKC, and senior author on the report. Bavitz also leads the Algorithms and Justice track of the BKC project on the Ethics and Governance of AI Initiative, which developed this report.

 

 

Also see:

  • Morality in the Machines — from today.law.harvard.edu by Erick Trickey
    Researchers at Harvard’s Berkman Klein Center for Internet & Society are collaborating with MIT scholars to study driverless cars, social media feeds, and criminal justice algorithms, to make sure openness and ethics inform artificial intelligence.

 

 

 

How AI could help solve some of society’s toughest problems — from technologyreview.com by Charlotte Jee
Machine learning and game theory help Carnegie Mellon assistant professor Fei Fang predict attacks and protect people.

Excerpt:

Fei Fang has saved lives. But she isn’t a lifeguard, medical doctor, or superhero. She’s an assistant professor at Carnegie Mellon University, specializing in artificial intelligence for societal challenges.

At MIT Technology Review’s EmTech conference on Wednesday, Fang outlined recent work across academia that applies AI to protect critical national infrastructure, reduce homelessness, and even prevent suicides.

 

 

How AI can be a force for good — from science.sciencemag.org by Mariarosaria Taddeo & Luciano Floridi

Excerpts:

Invisibility and Influence
AI supports services, platforms, and devices that are ubiquitous and used on a daily basis. In 2017, the International Federation of Robotics suggested that by 2020, more than 1.7 million new AI-powered robots will be installed in factories worldwide. In the same year, the company Juniper Networks issued a report estimating that, by 2022, 55% of households worldwide will have a voice assistant, like Amazon Alexa.

As it matures and disseminates, AI blends into our lives, experiences, and environments and becomes an invisible facilitator that mediates our interactions in a convenient, barely noticeable way. While creating new opportunities, this invisible integration of AI into our environments poses further ethical issues. Some are domain-dependent. For example, trust and transparency are crucial when embedding AI solutions in homes, schools, or hospitals, whereas equality, fairness, and the protection of creativity and rights of employees are essential in the integration of AI in the workplace. But the integration of AI also poses another fundamental risk: the erosion of human self-determination due to the invisibility and influencing power of AI.

To deal with the risks posed by AI, it is imperative to identify the right set of fundamental ethical principles to inform the design, regulation, and use of AI and leverage it to benefit as well as respect individuals and societies. It is not an easy task, as ethical principles may vary depending on cultural contexts and the domain of analysis. This is a problem that the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems tackles with the aim of advancing public debate on the values and principles that should underpin ethical uses of AI.

 

 

Who’s to blame when a machine botches your surgery? — from qz.com by Robert Hart

Excerpt:

That’s all great, but even if an AI is amazing, it will still fail sometimes. When the mistake is caused by a machine or an algorithm instead of a human, who is to blame?

This is not an abstract discussion. Defining both ethical and legal responsibility in the world of medical care is vital for building patients’ trust in the profession and its standards. It’s also essential in determining how to compensate individuals who fall victim to medical errors, and ensuring high-quality care. “Liability is supposed to discourage people from doing things they shouldn’t do,” says Michael Froomkin, a law professor at the University of Miami.

 

 

Google Cloud’s new AI chief is on a task force for AI military uses and believes we could monitor ‘pretty much the whole world’ with drones — from businessinsider.in by Greg Sandoval

Excerpt:

“We could afford if we wanted to, and if we needed, to be surveilling pretty much the whole world with autonomous drones of various kinds,” Moore said. “I’m not saying we’d want to do that, but there’s not a technology gap there where I think it’s actually too difficult to do. This is now practical.”

Google’s decision to hire Moore was greeted with displeasure by at least one former Googler who objected to Project Maven.

“It’s worrisome to note after the widespread internal dissent against Maven that Google would hire Andrew Moore,” said one former Google employee. “Googlers want less alignment with the military-industrial complex, not more. This hire is like a punch in the face to the over 4,000 Googlers who signed the Cancel Maven letter.”

 

 

Organizations Are Gearing Up for More Ethical and Responsible Use of Artificial Intelligence, Finds Study — from businesswire.com
Ninety-two percent of AI leaders train their technologists in ethics; 74 percent evaluate AI outcomes weekly, says report from SAS, Accenture Applied Intelligence, Intel, and Forbes Insights

Excerpt:

AI oversight is not optional

Despite popular messages suggesting AI operates independently of human intervention, the research shows that AI leaders recognize that oversight is not optional for these technologies. Nearly three-quarters (74 percent) of AI leaders reported careful oversight with at least weekly review or evaluation of outcomes (less successful AI adopters: 33 percent). Additionally, 43 percent of AI leaders shared that their organization has a process for augmenting or overriding results deemed questionable during review (less successful AI adopters: 28 percent).
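
To make the survey finding more concrete, here is a minimal sketch of a human-in-the-loop oversight step, assuming a model whose predictions carry confidence scores; the threshold, the review queue, and the reviewer hook are illustrative assumptions, not anything specified in the report:

```python
# Illustrative only: confident predictions pass through automatically,
# questionable ones wait in a queue for a person to confirm or override.
REVIEW_THRESHOLD = 0.80  # assumed cutoff; the study does not specify one

def route_prediction(label, confidence, review_queue):
    """Accept confident predictions; queue low-confidence ones for human review."""
    if confidence >= REVIEW_THRESHOLD:
        return {"label": label, "status": "auto-accepted"}
    review_queue.append({"label": label, "confidence": confidence})
    return {"label": label, "status": "pending human review"}

def human_review(item, reviewer_decision=None):
    """A reviewer confirms the queued prediction or overrides it with a new label."""
    final = reviewer_decision if reviewer_decision is not None else item["label"]
    return {"label": final, "overridden": final != item["label"]}

queue = []
print(route_prediction("approve", 0.95, queue))             # accepted without review
print(route_prediction("deny", 0.55, queue))                # held for a person
print(human_review(queue[0], reviewer_decision="approve"))  # human overrides the model
```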

 

 

 

Do robots have rights? Here’s what 10 people and 1 robot have to say — from createdigital.org.au
When it comes to the future of technology, nothing is straightforward, and that includes the array of ethical issues that engineers encounter through their work with robots and AI.

 

 

 


Google Cloud’s new AI chief is on a task force for AI military uses and believes we could monitor ‘pretty much the whole world’ with drones — from businessinsider.in by Greg Sandoval

  • Andrew Moore, the new chief of Google Cloud AI, co-chairs a task force on AI and national security with deep defense sector ties.
  • Moore leads the task force with Robert Work, the man who reportedly helped to create Project Maven.
  • Moore has given various talks about the role of AI and defense, once noting that it was now possible to deploy drones capable of surveilling “pretty much the whole world.”
  • One former Googler told Business Insider that the hiring of Moore is a “punch in the face” to those employees.

 

 

How AI can be a force for good — from science.sciencemag.org

Excerpt:

The AI revolution is equally significant, and humanity must not make the same mistake again. It is imperative to address new questions about the nature of post-AI societies and the values that should underpin the design, regulation, and use of AI in these societies. This is why initiatives like the abovementioned AI4People and IEEE projects, the European Union (EU) strategy for AI, the EU Declaration of Cooperation on Artificial Intelligence, and the Partnership on Artificial Intelligence to Benefit People and Society are so important (see the supplementary materials for suggested further reading). A coordinated effort by civil society, politics, business, and academia will help to identify and pursue the best strategies to make AI a force for good and unlock its potential to foster human flourishing while respecting human dignity.

 

 

Ethical regulation of the design and use of AI is a complex but necessary task. The alternative may lead to devaluation of individual rights and social values, rejection of AI-based innovation, and ultimately a missed opportunity to use AI to improve individual wellbeing and social welfare.

 

 

Robot wars — from ethicaljournalismnetwork.org by James Ball
How artificial intelligence will define the future of news

Excerpt:

There are two paths ahead in the future of journalism, and both of them are shaped by artificial intelligence.

The first is a future in which newsrooms and their reporters are robust: Thanks to the use of artificial intelligence, high-quality reporting has been enhanced. Not only do AI scripts manage the writing of simple day-to-day articles such as companies’ quarterly earnings updates, they also monitor and track masses of data for outliers, flagging these to human reporters to investigate.

Beyond business journalism, comprehensive sports stats AIs keep key figures in the hands of sports journalists, letting them focus on the games and the stories around them. The automated future has worked.

The alternative is very different. In this world, AI reporters have replaced their human counterparts and left accountability journalism hollowed out. Facing financial pressure, news organizations embraced AI to handle much of their day-to-day reporting, first for their financial and sports sections, then bringing in more advanced scripts capable of reshaping wire copy to suit their outlet’s political agenda. A few banner hires remain, but there is virtually no career path for those who would hope to replace them, and stories that can’t be tackled by AI are generally missed.
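
As a rough illustration of the monitoring described in the first scenario, the sketch below flags values that deviate sharply from recent history so a human reporter can investigate; the z-score rule and the sample figures are assumptions made for this example, not anything taken from the article:

```python
# Illustrative only: flag a new data point that sits far outside its recent history,
# so a human reporter can decide whether it is worth investigating.
from statistics import mean, stdev

def is_unusual(history, latest, z_cutoff=3.0):
    """True when the latest value is more than z_cutoff standard deviations from the mean."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(latest - mu) / sigma > z_cutoff

# Hypothetical quarterly revenue figures (in $M) for one company.
history = [101, 99, 103, 98, 102, 100, 97]
latest = 240

if is_unusual(history, latest):
    print(f"Flag for reporter: latest figure {latest} is far outside the usual range")
```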

 

 


Alibaba looks to arm hotels, cities with its AI technology — from zdnet.com by Eileen Yu
Chinese internet giant is touting the use of artificial intelligence technology to arm drivers with real-time data on road conditions as well as robots in the hospitality sector, where they can deliver meals and laundry to guests.

Excerpt:

Alibaba A.I. Labs’ general manager Chen Lijuan said the new robots aimed to “bridge the gap” between guest needs and their expected response time. Describing the robot as the next evolution towards smart hotels, Chen said it tapped AI technology to address pain points in the hospitality sector, such as improving service efficiencies.

Alibaba is hoping the robot can ease hotels’ dependence on human labour by fulfilling a range of tasks, including delivering meals and taking the laundry to guests.

 

 

Accenture Introduces Ella and Ethan, AI Bots to Improve a Patient’s Health and Care Using the Accenture Intelligent Patient Platform — from marketwatch.com

Excerpt:

Accenture has enhanced the Accenture Intelligent Patient Platform with the addition of Ella and Ethan, two interactive virtual-assistant bots that use artificial intelligence (AI) to constantly learn and make intelligent recommendations for interactions between life sciences companies, patients, health care providers (HCPs) and caregivers. Designed to help improve a patient’s health and overall experience, the bots are part of Accenture’s Salesforce Fullforce Solutions powered by Salesforce Health Cloud and Einstein AI, as well as Amazon’s Alexa.

 

 

German firm’s 7 commandments for ethical AI — from france24.com

Excerpt:

FRANKFURT AM MAIN (AFP) –
German business software giant SAP published Tuesday an ethics code to govern its research into artificial intelligence (AI), aiming to prevent the technology infringing on people’s rights, displacing workers or inheriting biases from its human designers.

 

 

 

 

Luke 10:25-37 New International Version (NIV) — from biblegateway.com
The Parable of the Good Samaritan

25 On one occasion an expert in the law stood up to test Jesus. “Teacher,” he asked, “what must I do to inherit eternal life?”

26 “What is written in the Law?” he replied. “How do you read it?”

27 He answered, “‘Love the Lord your God with all your heart and with all your soul and with all your strength and with all your mind’; and, ‘Love your neighbor as yourself.’”

28 “You have answered correctly,” Jesus replied. “Do this and you will live.”

29 But he wanted to justify himself, so he asked Jesus, “And who is my neighbor?”

30 In reply Jesus said: “A man was going down from Jerusalem to Jericho, when he was attacked by robbers. They stripped him of his clothes, beat him and went away, leaving him half dead. 31 A priest happened to be going down the same road, and when he saw the man, he passed by on the other side. 32 So too, a Levite, when he came to the place and saw him, passed by on the other side. 33 But a Samaritan, as he traveled, came where the man was; and when he saw him, he took pity on him. 34 He went to him and bandaged his wounds, pouring on oil and wine. Then he put the man on his own donkey, brought him to an inn and took care of him. 35 The next day he took out two denarii and gave them to the innkeeper. ‘Look after him,’ he said, ‘and when I return, I will reimburse you for any extra expense you may have.’

36 “Which of these three do you think was a neighbor to the man who fell into the hands of robbers?”

37 The expert in the law replied, “The one who had mercy on him.”

Jesus told him, “Go and do likewise.”

 

From DSC:
The Samaritan had to sacrifice something here — time and money come to mind, but also, as our pastor said the other day, the Samaritan took an enormous risk caring for this wounded man. The Samaritan himself could have been beaten up (or worse) back in that time.

 

 

 

Why emerging technology needs to retain a human element — from forbes.com by Samantha Radocchia
Technology opens up new, unforeseen issues. And humans are necessary for solving the problems automated services can’t.

Excerpt (emphasis DSC):

With technological advancements comes change. Rather than avoiding new technology for as long as possible, and then accepting the inevitable, people need to be actively thinking about how it will change us as individuals and as a society.

Take your phone for instance. The social media, gaming and news apps are built to keep you addicted so companies can collect data on you. They’re designed to be used constantly so you come back for more the instant you feel the slightest twinge of boredom.

And yet, other apps—sometimes the same ones I just mentioned—allow you to instantly communicate with people around the world. Loved ones, colleagues, old friends—they’re all within reach now.

Make any technology decisions carefully, because their impact down the road may be tremendous.

This is part of the reason why there’s been a push lately for ethics to be a required part of any computer science or vocational training program. And it makes sense. If people want to create ethical systems, there’s a need to remember that actual humans are behind them. People make bad choices sometimes. They make mistakes. They aren’t perfect.

 

To ignore the human element in tech is to miss the larger point: Technology should be about empowering people to live their best lives, not making them fearful of the future.

 


About Law2020: The Podcast
Last month we launched the Law2020 podcast, an audio companion to Law2020, our four-part series of articles about how artificial intelligence and similar emerging technologies are reshaping the practice and profession of law. The podcast episodes and featured guests are as follows:

  1. Access to Justice: Daniel Linna, Professor of Law in Residence and the Director of LegalRnD – The Center for Legal Services Innovation at Michigan State University College of Law.
  2. Legal Ethics: Megan Zavieh, ethics and state bar defense lawyer.
  3. Legal Research: Don MacLeod, Manager of Knowledge Management at Debevoise & Plimpton and author of How To Find Out Anything and The Internet Guide for the Legal Researcher.
  4. Legal Analytics: Andy Martens, SVP & Global Head, Legal Product and Editorial at Thomson Reuters.

The podcasts are short and lively, and we hope you’ll give them a listen. And if you haven’t done so already, we invite you to read the full feature stories over at the Law2020 website. Enjoy!

Listen to Law2020 Podcast

 

Activists urge killer robot ban ‘before it is too late’ — from techxplore.com by Nina Larson

Excerpt:

Countries should quickly agree a treaty banning the use of so-called killer robots “before it is too late”, activists said Monday as talks on the issue resumed at the UN.

They say time is running out before weapons are deployed that use lethal force without a human making the final kill-order and have criticised the UN body hosting the talks—the Convention on Certain Conventional Weapons (CCW)—for moving too slowly.

“Killer robots are no longer the stuff of science fiction,” Rasha Abdul Rahim, Amnesty International’s advisor on artificial intelligence and human rights, said in a statement.

“From artificially intelligent drones to automated guns that can choose their own targets, technological advances in weaponry are far outpacing international law,” she said.

 


 

From DSC:
I’ve often considered how far out in front of society many technologies are today. It takes the rest of society some time to catch up with emerging technologies and ask whether we should be implementing technology A, B, or C. Just because we can doesn’t mean we should. A worn-out statement perhaps, but given the exponential pace of technological change, one that is highly relevant to our world today.

 

 



Addendum on 9/8/18:



 

 

Smart Machines & Human Expertise: Challenges for Higher Education — from er.educause.edu by Diana Oblinger

Excerpts:

What does this mean for higher education? One answer is that AI, robotics, and analytics become disciplines in themselves. They are emerging as majors, minors, areas of emphasis, certificate programs, and courses in many colleges and universities. But smart machines will catalyze even bigger changes in higher education. Consider the implications in three areas: data; the new division of labor; and ethics.

 

Colleges and universities are challenged to move beyond the use of technology to deliver education. Higher education leaders must consider how AI, big data, analytics, robotics, and wide-scale collaboration might change the substance of education.

 

Higher education leaders should ask questions such as the following:

  • What place does data have in our courses?
  • Do students have the appropriate mix of mathematics, statistics, and coding to understand how data is manipulated and how algorithms work?
  • Should students be required to become “data literate” (i.e., able to effectively use and critically evaluate data and its sources)?

Higher education leaders should ask questions such as the following:

  • How might problem-solving and discovery change with AI?
  • How do we optimize the division of labor and best allocate tasks between humans and machines?
  • What role do collaborative platforms and collective intelligence have in how we develop and deploy expertise?


Higher education leaders should ask questions such as the following:

  • Even though something is possible, does that mean it is morally responsible?
  • How do we achieve a balance between technological possibilities and policies that enable—or stifle—their use?
  • An algorithm may represent a “trade secret,” but it might also reinforce dangerous assumptions or result in unconscious bias. What kind of transparency should we strive for in the use of algorithms?

 

 

 

It’s time to address artificial intelligence’s ethical problems — from wired.co.uk by Abigail Beall
AI is already helping us diagnose cancer and understand climate change, but regulation and oversight are needed to stop the new technology being abused

Excerpt:

The potential for AI to do good is immense, says Taddeo. Technology using artificial intelligence will have the capability to tackle issues “from environmental disasters to financial crises, from crime, terrorism and war, to famine, poverty, ignorance, inequality, and appalling living standards,” she says.

Yet AI is not without its problems. In order to ensure it can do good, we first have to understand the risks.

The potential problems that come with artificial intelligence include a lack of transparency about what goes into the algorithms. For example, an autonomous vehicle developed by researchers at the chip maker Nvidia went on the roads in 2016, without anyone knowing how it made its driving decisions.

 

 

‘The Beginning of a Wave’: A.I. Tiptoes Into the Workplace — from nytimes.com by Steve Lohr

Excerpt:

There is no shortage of predictions about how artificial intelligence is going to reshape where, how and if people work in the future.

But the grand work-changing projects of A.I., like self-driving cars and humanoid robots, are not yet commercial products. A more humble version of the technology, instead, is making its presence felt in a less glamorous place: the back office.

New software is automating mundane office tasks in operations like accounting, billing, payments and customer service. The programs can scan documents, enter numbers into spreadsheets, check the accuracy of customer records and make payments with a few automated computer keystrokes.

The technology is still in its infancy, but it will get better, learning as it goes. So far, often in pilot projects focused on menial tasks, artificial intelligence is freeing workers from drudgery far more often than it is eliminating jobs.
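
As a toy sketch of the kind of back-office check the article describes, the example below reconciles incoming invoices against a purchase-order lookup table and routes mismatches to a person; the records, amounts, and tolerance are invented for illustration:

```python
# Illustrative only: reconcile incoming invoices against purchase-order records,
# queueing anything that doesn't match for a human to look at.
purchase_orders = {"PO-1001": 2500.00, "PO-1002": 480.75}  # assumed lookup table

invoices = [
    {"po": "PO-1001", "amount": 2500.00, "vendor": "Acme Corp"},
    {"po": "PO-1002", "amount": 520.00, "vendor": "Globex"},   # amount mismatch
    {"po": "PO-9999", "amount": 100.00, "vendor": "Initech"},  # unknown PO
]

payable, needs_review = [], []
for inv in invoices:
    expected = purchase_orders.get(inv["po"])
    if expected is not None and abs(expected - inv["amount"]) < 0.01:
        payable.append(inv)        # would trigger an automated payment
    else:
        needs_review.append(inv)   # a person resolves the discrepancy

print("Ready to pay:", [i["po"] for i in payable])
print("Escalated to a human:", [i["po"] for i in needs_review])
```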

 

 

AI for Virtual Medical Assistants – 4 Current Applications — from techemergence.com by Kumba Sennaar

Excerpt:

In an effort to reduce the administrative burden of medical transcription and clinical documentation, researchers are developing AI-driven virtual assistants for the healthcare industry.

This article will set out to determine the answers to the following questions:

  • What types of AI applications are emerging to improve management of administrative tasks, such as logging medical information and appointment notes, in the medical environment?
  • How is the healthcare market implementing these AI applications?

 

Amazon’s Facial Recognition Wrongly Identifies 28 Lawmakers, A.C.L.U. Says — from nytimes.com by Natasha Singer

Excerpt:

In the test, the Amazon technology incorrectly matched 28 members of Congress with people who had been arrested, amounting to a 5 percent error rate among legislators.

The test disproportionally misidentified African-American and Latino members of Congress as the people in mug shots.

“This test confirms that facial recognition is flawed, biased and dangerous,” said Jacob Snow, a technology and civil liberties lawyer with the A.C.L.U. of Northern California.

On Thursday afternoon, three of the misidentified legislators — Senator Edward J. Markey of Massachusetts, Representative Luis V. Gutiérrez of Illinois and Representative Mark DeSaulnier of California, all Democrats — followed up with a letter to Jeff Bezos, the chief executive of Amazon, saying there are “serious questions regarding whether Amazon should be selling its technology to law enforcement at this time.”
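
For context, the 5 percent figure follows from simple arithmetic, assuming the test covered all 535 voting members of Congress:

```python
false_matches = 28      # members of Congress wrongly matched to arrest photos
members_scanned = 535   # assumed: every voting member of the House and Senate
print(f"Error rate: {false_matches / members_scanned:.1%}")  # roughly 5.2%
```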

 

Back from January:

 

 

 

Responsibility & AI: ‘We all have a role when it comes to shaping the future’ — from re-work.co by Fiona McEvoy

Excerpt:

As we slowly begin to delegate tasks that have until now been the sole purview of human judgment, there is understandable trepidation amongst some factions. Will creators build artificially intelligent machines that act in accordance with our core human values? Do they know what these moral imperatives are and when they are relevant? Are makers thoroughly stress-testing deep learning systems to ensure ethical decision-making? Are they trying to understand how AI can challenge key principles, like dignity and respect?

All the time we are creating new dependencies, and placing increasing amounts of faith in the engineers, programmers and designers responsible for these systems and platforms.

For reasons that are somewhat understandable, at present much of this tech ethics talk happens behind closed doors, and typically only engages a handful of industry and academic voices. Currently, these elite figures are the only participants in a dialogue that will determine all of our futures. At least in part, I started YouTheData.com because I wanted to bring “ivory tower” discussions down to the level of the engaged consumer, and be part of efforts to democratize this particular consultation process. As a former campaigner, I place a lot of value in public awareness and scrutiny.

To be clear, the message I wish to convey is not a criticism of the worthy academic and advisory work being done in this field (indeed, I have some small hand in this myself). It’s about acknowledging that engineers, technologists – and now ethicists, philosophers and others – still ultimately need public assent and a level of consumer “buy in” that is only really possible when complex ideas are made more accessible.

 

 

Digital Surgery’s AI platform guides surgical teams through complex procedures — from venturebeat.com by Kyle Wiggers

Excerpt:

Digital Surgery, a health tech startup based in London, today launched what it’s calling the world’s first dynamic artificial intelligence (AI) system designed for the operating room. The reference tool helps support surgical teams through complex medical procedures — cofounder and former plastic surgeon Jean Nehme described it as a “Google Maps” for surgery.

“What we’ve done is applied artificial intelligence … to procedures … created with surgeons globally,” he told VentureBeat in a phone interview. “We’re leveraging data with machine learning to build a [predictive] system.”

 

 

Why business leaders need to embrace artificial intelligence — from thriveglobal.com by Howard Yu
How companies should work with AI—not against it.

 



Schools can not get facial recognition tech for free. Should they? — from wired.com by Issie Lapowsky

Excerpt:

Over the past two years, RealNetworks has developed a facial recognition tool that it hopes will help schools more accurately monitor who gets past their front doors. Today, the company launched a website where school administrators can download the tool, called SAFR, for free and integrate it with their own camera systems. So far, one school in Seattle, which Glaser’s kids attend, is testing the tool and the state of Wyoming is designing a pilot program that could launch later this year. “We feel like we’re hitting something there can be a social consensus around: that using facial recognition technology to make schools safer is a good thing,” Glaser says.

 

From DSC:
Personally, I’m very uncomfortable with where facial recognition is going in some societies. What starts off being sold as being helpful for this or that application, can quickly be abused and used to control its citizens. For example, look at what’s happening in China already these days!

The above article talks about these techs being used in schools. Based upon history, I seriously question whether humankind can wisely handle the power of these types of technologies.

Here in the United States, I already sense a ton of cameras watching each of us all the time when we’re out in public spaces (such as when we are in grocery stores, or gas stations, or in restaurants or malls, etc.).  What’s the unspoken message behind those cameras?  What’s being stated by their very presence around us?

No. I don’t like the idea of facial recognition being in schools. I’m not comfortable with this direction. I can see the counterargument — that this tech could help reduce school shootings. But I think that’s a weak argument, as someone mentally unbalanced enough to be involved with a school shooting likely won’t be swayed or deterred by being on camera. In fact, one could argue that in some cases, being on the national news — with their face plastered all over the nation — might even pour gas on the fire.

 

 

Glaser, for one, welcomes federal oversight of this space. He says it’s precisely because of his views on privacy that he wants to be part of what is bound to be a long conversation about the ethical deployment of facial recognition. “This isn’t just sci-fi. This is becoming something we, as a society, have to talk about,” he says. “That means the people who care about these issues need to get involved, not just as hand-wringers but as people trying to provide solutions. If the only people who are providing facial recognition are people who don’t give a &*&% about privacy, that’s bad.”

 

 

 

Per this week’s Next e-newsletter from edsurge.com

Take the University of San Francisco, which deploys facial recognition software in its dormitories. Students still use their I.D. card to swipe in, according to EdScoop, but the face of every person who enters a dorm is scanned and run through a database, and the system alerts the dorm attendant when an unknown person is detected. Online students are not immune: the technology is also used in many proctoring tools for virtual classes.

The tech raises plenty of tough issues. Facial-recognition systems have been shown to misidentify young people, people of color and women more often than white men. And then there are the privacy risks: “All collected data is at risk of breach or misuse by external and internal actors, and there are many examples of misuse of law enforcement data in other contexts,” a white paper by the Electronic Frontier Foundation reads.

It’s unclear whether such facial-scanners will become common at the gates of campus. But now that cost is no longer much of an issue for what used to be an idea found only in science fiction, it’s time to weigh the pros and cons of what such a system really means in practice.
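
A minimal sketch of the entry-check flow the excerpt describes, under the assumption that faces are compared as embedding vectors against a database of enrolled residents; the embed_face stub and the similarity threshold are placeholders for illustration, not any vendor's actual API:

```python
# Illustrative only: compare a visitor's face embedding against enrolled residents
# and alert the attendant when nobody matches closely enough.
import numpy as np

SIMILARITY_THRESHOLD = 0.85  # assumed cutoff; real systems tune this carefully

def embed_face(image_name):
    """Stand-in for a face-embedding model; returns a unit-length vector."""
    seed = hash(image_name) % (2**32)  # same image name -> same vector within a run
    vec = np.random.default_rng(seed).normal(size=128)
    return vec / np.linalg.norm(vec)

def check_entry(image_name, enrolled):
    """Return the best-matching resident, or an alert for the dorm attendant."""
    probe = embed_face(image_name)
    best_name, best_score = None, -1.0
    for name, vec in enrolled.items():
        score = float(np.dot(probe, vec))  # cosine similarity of unit vectors
        if score > best_score:
            best_name, best_score = name, score
    if best_score >= SIMILARITY_THRESHOLD:
        return f"Recognized {best_name} (similarity {best_score:.2f})"
    return "Unknown person detected: alert the dorm attendant"

enrolled = {name: embed_face(f"{name}.jpg") for name in ("resident_a", "resident_b")}
print(check_entry("resident_a.jpg", enrolled))  # matches the enrolled embedding
print(check_entry("stranger.jpg", enrolled))    # no close match, so an alert is raised
```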

 

 

Also see:

  • As facial recognition technology becomes pervasive, Microsoft (yes, Microsoft) issues a call for regulation — from techcrunch.com by Jonathan Shieber
    Excerpt:
    Technology companies have a privacy problem. They’re terribly good at invading ours and terribly negligent at protecting their own. And with the push by technologists to map, identify and index our physical as well as virtual presence with biometrics like face and fingerprint scanning, the increasing digital surveillance of our physical world is causing some of the companies that stand to benefit the most to call out to government to provide some guidelines on how they can use the incredibly powerful tools they’ve created. That’s what’s behind today’s call from Microsoft President Brad Smith for government to start thinking about how to oversee the facial recognition technology that’s now at the disposal of companies like Microsoft, Google, Apple and government security and surveillance services across the country and around the world.

 


Inside China’s Dystopian Dreams: A.I., Shame and Lots of Cameras — from nytimes.com by Paul Mozur

Excerpts:

ZHENGZHOU, China — In the Chinese city of Zhengzhou, a police officer wearing facial recognition glasses spotted a heroin smuggler at a train station.

In Qingdao, a city famous for its German colonial heritage, cameras powered by artificial intelligence helped the police snatch two dozen criminal suspects in the midst of a big annual beer festival.

In Wuhu, a fugitive murder suspect was identified by a camera as he bought food from a street vendor.

With millions of cameras and billions of lines of code, China is building a high-tech authoritarian future. Beijing is embracing technologies like facial recognition and artificial intelligence to identify and track 1.4 billion people. It wants to assemble a vast and unprecedented national surveillance system, with crucial help from its thriving technology industry.

 

In some cities, cameras scan train stations for China’s most wanted. Billboard-size displays show the faces of jaywalkers and list the names of people who don’t pay their debts. Facial recognition scanners guard the entrances to housing complexes. Already, China has an estimated 200 million surveillance cameras — four times as many as the United States.

Such efforts supplement other systems that track internet use and communications, hotel stays, train and plane trips and even car travel in some places.

 

 

A very slippery slope has now been set up in China with facial recognition infrastructure

 

From DSC:
A veeeeery slippery slope here. The usage of this technology starts out as looking for criminals, but then what’s next? Jail time for people who disagree w/ a government official’s perspective on something? Persecution for people seen coming out of a certain place of worship?  

Very troubling stuff here….

 

 

 
© 2025 | Daniel Christian