AI Now Report 2018 | December 2018  — from ainowinstitute.org

Meredith Whittaker, AI Now Institute, New York University, Google Open Research
Kate Crawford, AI Now Institute, New York University, Microsoft Research
Roel Dobbe, AI Now Institute, New York University
Genevieve Fried, AI Now Institute, New York University
Elizabeth Kaziunas, AI Now Institute, New York University
Varoon Mathur, AI Now Institute, New York University
Sarah Myers West, AI Now Institute, New York University
Rashida Richardson, AI Now Institute, New York University
Jason Schultz, AI Now Institute, New York University School of Law
Oscar Schwartz, AI Now Institute, New York University

With research assistance from Alex Campolo and Gretchen Krueger (AI Now Institute, New York University)

Excerpt (emphasis DSC):

Building on our 2016 and 2017 reports, the AI Now 2018 Report contends with this central problem, and provides 10 practical recommendations that can help create accountability frameworks capable of governing these powerful technologies.

  1. Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain.
  2. Facial recognition and affect recognition need stringent regulation to protect the public interest.
  3. The AI industry urgently needs new approaches to governance. As this report demonstrates, internal governance structures at most technology companies are failing to ensure accountability for AI systems.
  4. AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector.
  5. Technology companies should provide protections for conscientious objectors, employee organizing, and ethical whistleblowers.
  6. Consumer protection agencies should apply “truth-in-advertising” laws to AI products and services.
  7. Technology companies must go beyond the “pipeline model” and commit to addressing the practices of exclusion and discrimination in their workplaces.
  8. Fairness, accountability, and transparency in AI require a detailed account of the “full stack supply chain.”
  9. More funding and support are needed for litigation, labor organizing, and community participation on AI accountability issues.
  10. University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.

 

Also see:

After a Year of Tech Scandals, Our 10 Recommendations for AI — from medium.com by the AI Now Institute
Let’s begin with better regulation, protecting workers, and applying “truth in advertising” rules to AI

 

Also see:

Facial recognition: It’s time for action — from blogs.microsoft.com by Brad Smith

Excerpt:

As we discussed, this technology brings important and even exciting societal benefits but also the potential for abuse. We noted the need for broader study and discussion of these issues. In the ensuing months, we’ve been pursuing these issues further, talking with technologists, companies, civil society groups, academics and public officials around the world. We’ve learned more and tested new ideas. Based on this work, we believe it’s important to move beyond study and discussion. The time for action has arrived.

We believe it’s important for governments in 2019 to start adopting laws to regulate this technology. The facial recognition genie, so to speak, is just emerging from the bottle. Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up.

In particular, we don’t believe that the world will be best served by a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success. We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition. And a solid floor requires that we ensure that this technology, and the organizations that develop and use it, are governed by the rule of law.

 

From DSC:
This is a major heads-up to the American Bar Association (ABA), law schools, governments, legislatures around the country, the courts, and the corporate world, as well as to colleges, universities, and community colleges. The pace of emerging technologies is much faster than society’s ability to deal with them!

The ABA and law schools need to majorly pick up their pace — for the benefit of all within our society.


Intelligent Machines: One of the fathers of AI is worried about its future — from technologyreview.com by Will Knight
Yoshua Bengio wants to stop talk of an AI arms race and make the technology more accessible to the developing world.

Excerpts:

Yoshua Bengio is a grand master of modern artificial intelligence.

Alongside Geoff Hinton and Yann LeCun, Bengio is famous for championing a technique known as deep learning that in recent years has gone from an academic curiosity to one of the most powerful technologies on the planet.

Deep learning involves feeding data to large neural networks that crudely simulate the human brain, and it has proved incredibly powerful and effective for all sorts of practical tasks, from voice recognition and image classification to controlling self-driving cars and automating business decisions.
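
To make that one-sentence description a bit more concrete, here is a toy sketch in Python. It is purely illustrative, nowhere near the scale of real speech or vision systems: a tiny two-layer network is repeatedly fed data and its weights are nudged until its outputs match the labels.

```python
import numpy as np

# Toy "deep learning" loop: a tiny two-layer network learns XOR.
# Real systems use vastly larger networks and datasets; this only
# illustrates the feed-data / adjust-weights cycle described above.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR labels

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # layer 1 weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # layer 2 weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: weighted sums followed by nonlinearities.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: nudge every weight to reduce the prediction error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # approaches [[0], [1], [1], [0]]
```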

Bengio has resisted the lure of any big tech company. While Hinton and LeCun joined Google and Facebook, respectively, he remains a full-time professor at the University of Montreal. (He did, however, cofound Element AI in 2016, and it has built a very successful business helping big companies explore the commercial applications of AI research.)

Bengio met with MIT Technology Review’s senior editor for AI, Will Knight, at an MIT event recently.

What do you make of the idea that there’s an AI race between different countries?

I don’t like it. I don’t think it’s the right way to do it.

We could collectively participate in a race, but as a scientist and somebody who wants to think about the common good, I think we’re better off thinking about how to both build smarter machines and make sure AI is used for the well-being of as many people as possible.


10 predictions for tech in 2019 — from enterprisersproject.com by Carla Rudder
IT leaders look at the road ahead and predict what’s next for containers, security, blockchain, and more

Excerpts:

We asked IT leaders and tech experts what they see on the horizon for the future of technology. We intentionally left the question open-ended, and as a result, the answers represent a broad range of what IT professionals may expect to face in the new year. Let’s dig in…

3. Security becomes must-have developer skill.
Developers who have job interviews next year will see a new question added to the usual list.

5. Ethics take center stage with tech talent
Robert Reeves, CTO and co-founder, Datical: “More companies (prompted by their employees) will become increasingly concerned about the ethics of their technology. Microsoft is raising concerns of the dangers of facial recognition technology; Google employees are very concerned about their AI products being used by the Department of Defense. The economy is good for tech right now and the job market is becoming tighter. Thus, I expect those companies to take their employees’ concerns very seriously. Of course, all bets are off when (not if) we dip into a recession. But, for 2019, be prepared for more employees of tech giants to raise ethical concerns and for those concerns to be taken seriously and addressed.”

7. Customers expect instant satisfaction
“All customers will be the customer of ‘now,’ with expectations of immediate and personalized service; single-click approval for loans, sales quotes on the spot, and deliveries in hours instead of days. The window of opportunity for customer satisfaction will keep closing and technology will evolve to keep pace. Real-time analytics will become faster and smarter as data that is external to the organization, such as social, news and weather, will be included for more insights. The move to the cloud will accelerate with the growing adoption of open-source vendors.”

 

From DSC:
Regarding #7 above…as the years progress, how do you suppose this type of environment where people expect instant satisfaction and personalized service will impact education/training?


Is Amazon’s algorithm cashing in on the Camp Fire by raising the cost of safety equipment? — from wired.co.uk by Matthew Chapman
Sudden and repeated price increases on fire extinguishers, axes and escape ladders sold on Amazon are seemingly linked to increased demand driven by California’s Camp Fire

Excerpt:

Amazon’s algorithm has allegedly been raising the price of fire safety equipment in response to increased demand during the California wildfires. The practice, known as surge pricing, has caused products including fire extinguishers and escape ladders to fluctuate significantly on Amazon, seemingly as a result of the retailer’s pricing system responding to increased demand.

An industry source with knowledge of the firm’s operations claims a similar price surge was triggered by the Grenfell Tower fire. A number of recent price rises coincide directly with the outbreak of the Camp Fire, which has been the deadliest in California’s history and resulted in at least 83 deaths.
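
For readers wondering what a pricing system “responding to increased demand” might look like mechanically, here is a hypothetical sketch; the formula, numbers, and cap are illustrative assumptions, not Amazon’s actual algorithm.

```python
def surge_price(base_price, recent_demand, baseline_demand, cap=3.0):
    """Scale price with the ratio of recent to baseline demand, up to a cap.

    Hypothetical illustration only; this is not Amazon's algorithm.
    """
    ratio = recent_demand / max(baseline_demand, 1e-9)
    multiplier = min(max(ratio, 1.0), cap)  # never discount, never exceed cap
    return round(base_price * multiplier, 2)

# A $30 fire extinguisher when demand triples during a wildfire:
print(surge_price(30.0, recent_demand=300, baseline_demand=100))  # 90.0
```

Whatever Amazon’s system actually does, the sketch shows why a purely demand-driven rule raises prices precisely when buyers are most desperate.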

 

From DSC:
I’ve been thinking a lot more about Amazon.com and Jeff Bezos in recent months, though I’m not entirely sure why. I think part of it has to do with the goals of capitalism.

If you want to see a winner in the way America trains up students, entrepreneurs, and business people, look no further than Jeff Bezos. He is the year-in-and-year-out champion of capitalism. He is the winner. He is the Michael Jordan of business. He is the top. He gets how the game is played and he’s a master at it. By all worldly standards, Jeff Bezos is the winner.

But historically speaking, he doesn’t come across like someone such as Bill Gates — someone who has used his wealth to literally, significantly, and positively change millions of lives. (Though finally that looks to be changing a bit, with the Bezos Day 1 Families Fund; the first grants of that fund total $97 million and will be given to 24 organizations working to address family homelessness. Source.)

Along those same lines — and expanding the scope a bit — I’m struggling with what the goals of capitalism are for us today…especially in an era of AI, algorithms, robotics, automation and the like. If the goal is simply to make as much profit as possible, we could be in trouble. If the impact on people and families ranks much lower down the totem pole…what are the ramifications of that for our society? Yes, it’s a tough, cold world. But does it always have to be that way? What is the best, most excellent goal to pursue? What are we truly seeking to accomplish?

After my Uncle Chan died years ago, my Aunt Gail took over the family’s office supply business and ran it like a family. She cared about her employees and made decisions with an eye towards how things would impact her employees and their families. Yes, she had to make sound business decisions, but there was true caring in the way that she ran her business. I realize that the Amazons of the world are in a whole different league, but the values and principles involved here should not be lost just because of size.

 

To whom much is given…much is expected.


Also see:

GM to lay off 15 percent of salaried workers, halt production at five plants in U.S. and Canada — from washingtonpost.com by Taylor Telford

Wall Street applauded the news, with GM’s stock climbing more than 7 percent following the announcement.

 

From DSC:
Well, I bet those on Wall Street aren’t among the 15% of folks being impacted. The applause is not heard at all by those who are being impacted today…whose families are being impacted today…and who will be feeling the impact of these announcements for quite a while yet.


Beijing to judge every resident based on behavior by end of 2020 — from bloomberg.com

  • China capital plans ‘social credit’ system by end of 2020
  • Citizens with poor scores will be ‘unable to move’ a step

Excerpt:

China’s plan to judge each of its 1.3 billion people based on their social behavior is moving a step closer to reality, with Beijing set to adopt a lifelong points program by 2021 that assigns personalized ratings for each resident.

The capital city will pool data from several departments to reward and punish some 22 million citizens based on their actions and reputations by the end of 2020, according to a plan posted on the Beijing municipal government’s website on Monday. Those with better so-called social credit will get “green channel” benefits while those who violate laws will find life more difficult.

The Beijing project will improve blacklist systems so that those deemed untrustworthy will be “unable to move even a single step,” according to the government’s plan.

 

From DSC:
Matthew 18:21-35 comes to mind big time here! I’m glad the LORD isn’t like this…we would all be in trouble.


Mama Mia It’s Sophia: A Show Robot Or Dangerous Platform To Mislead? — from forbes.com by Noel Sharkey

Excerpts:

A collective eyebrow was raised by the AI and robotics community when the robot Sophia was given Saudi citizenship in 2017. The AI sharks were already circling as Sophia’s fame spread with worldwide media attention. Were they just jealous buzz-kills or is something deeper going on? Sophia has gripped the public imagination with its interesting and fun appearances on TV and on high-profile conference platforms.

Sophia is not the first show robot to attain celebrity status. Yet accusations of hype and deception have proliferated about the misrepresentation of AI to public and policymakers alike. In an AI-hungry world where decisions about the application of the technologies will impact significantly on our lives, Sophia’s creators may have crossed a line. What might the negative consequences be? To get answers, we need to place Sophia in the context of earlier show robots.


A dangerous path for our rights and security
For me, the biggest problem with the hype surrounding Sophia is that we have entered a critical moment in the history of AI where informed decisions need to be made. AI is sweeping through the business world and being delegated decisions that impact significantly on people’s lives, from mortgage and loan applications to job interviews, to prison sentences and bail guidance, to transport and delivery services, to medicine and care.

It is vitally important that our governments and policymakers are strongly grounded in the reality of AI at this time and are not misled by hype, speculation, and fantasy. It is not clear how much the Hanson Robotics team are aware of the dangers that they are creating by appearing on international platforms with government ministers and policymakers in the audience.


Can employees change the ethics of tech firms? — from knowledge.wharton.upenn.edu

Excerpts:

“[An] extremely important factor that tech managers now have to consider is how the ethical and moral implications of their choices affect their ability to attract and retain talent.”

“We’re in a space now where these companies are really on the hook,” said the Shorenstein Center’s Ghosh. “Regulation is coming and this whole industry is going to have to figure out a way to socialize the ideas that it has and to make decisions that are a little bit more in the public interest. That’s where this whole conversation is going. I think that they are going to have to start thinking more about what’s in it for the world, and if they don’t, other people are going to step in and decide for them.”

 

These news anchors are professional and efficient. They’re also not human. — from washingtonpost.com by Taylor Telford

Excerpt:

The new anchors at China’s state-run news agency have perfect hair and no pulse.

Xinhua News just unveiled what it is calling the world’s first news anchors powered by artificial intelligence, at the World Internet Conference on Wednesday in China’s Zhejiang province. From the outside, they are almost indistinguishable from their human counterparts, crisp-suited and even-keeled. Although Xinhua says the anchors have the “voice, facial expressions and actions of a real person,” the robotic anchors relay whatever text is fed to them in stilted speech that sounds less human than Siri or Alexa.

 

From DSC:
The question is…is this what we want our future to look like? Personally, I don’t care to watch a robotic newscaster giving me the latest “death and dying report.” It comes off bad enough — callous enough — from human beings backed up by TV networks/stations that have agendas of their own; let alone from a robot run by AI.


Should self-driving cars have ethics? — from npr.org by Laurel Wamsley

Excerpt:

In the not-too-distant future, fully autonomous vehicles will drive our streets. These cars will need to make split-second decisions to avoid endangering human lives — both inside and outside of the vehicles.

To determine attitudes toward these decisions, a group of researchers created a variation on the classic philosophical exercise known as “the trolley problem.” They posed a series of moral dilemmas involving a self-driving car with brakes that suddenly give out…
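
As a concrete illustration of the method, here is a hypothetical sketch of how such a brake-failure dilemma might be encoded and survey responses tallied. The scenarios and field names are invented for illustration and are not taken from the study.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Dilemma:
    """A brake-failure scenario: the car must pick one of two harms."""
    stay_course: str  # who is harmed if the car does nothing
    swerve: str       # who is harmed if the car swerves

# Invented survey responses: (scenario, which action the respondent chose).
responses = [
    (Dilemma("three pedestrians", "one passenger"), "swerve"),
    (Dilemma("three pedestrians", "one passenger"), "swerve"),
    (Dilemma("one elderly pedestrian", "two passengers"), "stay_course"),
]

# Tally how often respondents prefer each action per scenario.
tallies = {}
for dilemma, choice in responses:
    tallies.setdefault(dilemma, Counter())[choice] += 1

for dilemma, counts in tallies.items():
    print(f"{dilemma.stay_course} vs. {dilemma.swerve}: {dict(counts)}")
```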


Gartner: Immersive experiences among top tech trends for 2019 — from campustechnology.com by Dian Schaffhauser

Excerpt:

IT analyst firm Gartner has named its top 10 trends for 2019, and the “immersive user experience” is on the list, alongside blockchain, quantum computing and seven other drivers influencing how we interact with the world. The annual trend list covers breakout tech with broad impact and tech that could reach a tipping point in the near future.


MIT plans $1B computing college, AI research effort — from educationdive.com by James Paterson

Dive Brief (emphasis DSC):

  • The Massachusetts Institute of Technology is creating a College of Computing with the help of a $350 million gift from billionaire investor Stephen A. Schwarzman, who is the CEO and co-founder of the private equity firm Blackstone, in a move the university said is its “most significant reshaping” since 1950.
  • Featuring 50 new faculty positions and a new headquarters building, the $1 billion interdisciplinary initiative will bring together computer science, artificial intelligence (AI), data science and related programs across the institution. MIT will establish a new deanship for the college.
  • The new college…will explore and promote AI’s use in non-technology disciplines with a focus on ethical considerations, which are a growing concern as the technology becomes embedded in many fields.

 

Also see:

Alexa Sessions You Won’t Want to Miss at AWS re:Invent 2018 — from developer.amazon.com

Excerpts — with an eye towards where this might be leading in terms of learning spaces:

Alexa and AWS IoT — Voice is a natural interface to interact not just with the world around us, but also with physical assets and things, such as connected home devices, including lights, thermostats, or TVs. Learn how you can connect and control devices in your home using the AWS IoT platform and Alexa Skills Kit.

Connect Any Device to Alexa and Control Any Feature with the Updated Smart Home Skill API — Learn about the latest update to the Smart Home Skill API, featuring new capability interfaces you can use as building blocks to connect any device to Alexa, including those that fall outside of the traditional smart home categories of lighting, locks, thermostats, sensors, cameras, and audio/video gear. Start learning about how you can create a smarter home with Alexa.
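
For a sense of what building against the Smart Home Skill API involves, below is a minimal sketch of a skill’s AWS Lambda handler for power-control directives, assuming the v3 directive format. The switch_device helper and the endpoint handling are hypothetical placeholders for your own device integration, not part of the API itself.

```python
import uuid
from datetime import datetime, timezone

def lambda_handler(event, context):
    """Respond to a Smart Home Skill API v3 power-control directive."""
    directive = event["directive"]
    header = directive["header"]

    if header["namespace"] == "Alexa.PowerController":
        # "Alexa, turn on the lamp" arrives as a TurnOn directive.
        state = "ON" if header["name"] == "TurnOn" else "OFF"
        endpoint = directive["endpoint"]
        switch_device(endpoint["endpointId"], state)  # hypothetical device call

        return {
            "context": {"properties": [{
                "namespace": "Alexa.PowerController",
                "name": "powerState",
                "value": state,
                "timeOfSample": datetime.now(timezone.utc).isoformat(),
                "uncertaintyInMilliseconds": 500,
            }]},
            "event": {
                "header": {
                    "namespace": "Alexa",
                    "name": "Response",
                    "payloadVersion": "3",
                    "messageId": str(uuid.uuid4()),
                    "correlationToken": header.get("correlationToken"),
                },
                "endpoint": endpoint,
                "payload": {},
            },
        }
    raise ValueError("Unhandled namespace: " + header["namespace"])

def switch_device(endpoint_id, state):
    """Hypothetical stand-in for the call that actually toggles the device,
    e.g. publishing a command through AWS IoT."""
    print("Setting", endpoint_id, "to", state)
```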

Workshop: Build an Alexa Skill with Multiple Models — Learn how to build an Alexa skill that utilizes multiple interaction models and combines functionality into a single skill. Build an Alexa smart home skill from scratch that implements both custom interactions and smart home functionality within a single skill. Check out these resources to start learning:

 

An open letter to Microsoft and Google’s Partnership on AI — from wired.com by Gerd Leonhard
In a world where machines may have an IQ of 50,000, what will happen to the values and ethics that underpin privacy and free will?

Excerpt:

This open letter is my modest contribution to the unfolding of this new partnership. Data is the new oil – which now makes your companies the most powerful entities on the globe, way beyond oil companies and banks. The rise of ‘AI everywhere’ is certain to only accelerate this trend. Yet unlike the giants of the fossil-fuel era, there is little oversight on what exactly you can and will do with this new data-oil, and what rules you’ll need to follow once you have built that AI-in-the-sky. There appears to be very little public stewardship, while accepting responsibility for the consequences of your inventions is rather slow in surfacing.

 

In a world where machines may have an IQ of 50,000 and the Internet of Things may encompass 500 billion devices, what will happen with those important social contracts, values and ethics that underpin crucial issues such as privacy, anonymity and free will?


My book identifies what I call the “Megashifts”. They are changing society at warp speed, and your organisations are in the eye of the storm: digitization, mobilisation and screenification, automation, intelligisation, disintermediation, virtualisation and robotisation, to name the most prominent. Megashifts are not simply trends or paradigm shifts, they are complete game changers transforming multiple domains simultaneously.


If the question is no longer about if technology can do something, but why…who decides this?

Gerd Leonhard


From DSC:
Though this letter was written back in October 2016, the messages, reflections, and questions that Gerd puts on the table are still very much relevant today. The leaders of these powerful companies have enormous power — power to do good, or to do evil. Power to help or power to hurt. Power to be a positive force for societies throughout the globe and to help create dreams, or power to create dystopian societies while developing a future filled with nightmares. The state of the human heart is extremely key here — though many will hate me saying that. But it’s true. At the end of the day, we need to very much care about — and be extremely aware of — the characters and values of the leaders of these powerful companies.


Also relevant/see:

Spray-on antennas will revolutionize the Internet of Things — from networkworld.com by Patrick Nelson
Researchers at Drexel University have developed a method to spray on antennas that outperform traditional metal antennas, opening the door to faster and easier IoT deployments.

From DSC:
Again, it’s not too hard to imagine that technologies in this arena can be used for good or for ill.


Evaluating the impact of artificial intelligence on human rights — from today.law.harvard.edu by Carolyn Schmitt
Report from Berkman Klein Center for Internet & Society provides new foundational framework for considering risks and benefits of AI on human rights

Excerpt:

From using artificial intelligence (AI) to determine credit scores to using AI to determine whether a defendant or criminal may offend again, AI-based tools are increasingly being used by people and organizations in positions of authority to make important, often life-altering decisions. But how do these instances impact human rights, such as the right to equality before the law, and the right to an education?

A new report from the Berkman Klein Center for Internet & Society (BKC) addresses this issue and weighs the positive and negative impacts of AI on human rights through six “use cases” of algorithmic decision-making systems, including criminal justice risk assessments and credit scores. Whereas many other reports and studies have focused on ethical issues of AI, the BKC report is one of the first efforts to analyze the impacts of AI through a human rights lens, and proposes a new framework for thinking about the impact of AI on human rights. The report was funded, in part, by the Digital Inclusion Lab at Global Affairs Canada.

“One of the things I liked a lot about this project and about a lot of the work we’re doing [in the Algorithms and Justice track of the Ethics and Governance of AI Initiative] is that it’s extremely current and tangible. There are a lot of far-off science fiction scenarios that we’re trying to think about, but there’s also stuff happening right now,” says Professor Christopher Bavitz, the WilmerHale Clinical Professor of Law, Managing Director of the Cyberlaw Clinic at BKC, and senior author on the report. Bavitz also leads the Algorithms and Justice track of the BKC project on the Ethics and Governance of AI Initiative, which developed this report.


Also see:

  • Morality in the Machines — from today.law.harvard.edu by Erick Trickey
    Researchers at Harvard’s Berkman Klein Center for Internet & Society are collaborating with MIT scholars to study driverless cars, social media feeds, and criminal justice algorithms, to make sure openness and ethics inform artificial intelligence.


How AI could help solve some of society’s toughest problems — from technologyreview.com by Charlotte Jee
Machine learning and game theory help Carnegie Mellon assistant professor Fei Fang predict attacks and protect people.

Excerpt:

Fei Fang has saved lives. But she isn’t a lifeguard, medical doctor, or superhero. She’s an assistant professor at Carnegie Mellon University, specializing in artificial intelligence for societal challenges.

At MIT Technology Review’s EmTech conference on Wednesday, Fang outlined recent work across academia that applies AI to protect critical national infrastructure, reduce homelessness, and even prevent suicides.


How AI can be a force for good — from science.sciencemag.org by Mariarosaria Taddeo & Luciano Floridi

Excerpts:

Invisibility and Influence
AI supports services, platforms, and devices that are ubiquitous and used on a daily basis. In 2017, the International Federation of Robotics suggested that by 2020, more than 1.7 million new AI-powered robots will be installed in factories worldwide. In the same year, the company Juniper Networks issued a report estimating that, by 2022, 55% of households worldwide will have a voice assistant, like Amazon Alexa.

As it matures and disseminates, AI blends into our lives, experiences, and environments and becomes an invisible facilitator that mediates our interactions in a convenient, barely noticeable way. While creating new opportunities, this invisible integration of AI into our environments poses further ethical issues. Some are domain-dependent. For example, trust and transparency are crucial when embedding AI solutions in homes, schools, or hospitals, whereas equality, fairness, and the protection of creativity and rights of employees are essential in the integration of AI in the workplace. But the integration of AI also poses another fundamental risk: the erosion of human self-determination due to the invisibility and influencing power of AI.

To deal with the risks posed by AI, it is imperative to identify the right set of fundamental ethical principles to inform the design, regulation, and use of AI and leverage it to benefit as well as respect individuals and societies. It is not an easy task, as ethical principles may vary depending on cultural contexts and the domain of analysis. This is a problem that the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems tackles with the aim of advancing public debate on the values and principles that should underpin ethical uses of AI.


Who’s to blame when a machine botches your surgery? — from qz.com by Robert Hart

Excerpt:

That’s all great, but even if an AI is amazing, it will still fail sometimes. When the mistake is caused by a machine or an algorithm instead of a human, who is to blame?

This is not an abstract discussion. Defining both ethical and legal responsibility in the world of medical care is vital for building patients’ trust in the profession and its standards. It’s also essential in determining how to compensate individuals who fall victim to medical errors, and ensuring high-quality care. “Liability is supposed to discourage people from doing things they shouldn’t do,” says Michael Froomkin, a law professor at the University of Miami.


Google Cloud’s new AI chief is on a task force for AI military uses and believes we could monitor ‘pretty much the whole world’ with drones — from businessinsider.in by Greg Sandoval

Excerpt:

“We could afford if we wanted to, and if we needed, to be surveilling pretty much the whole world with autonomous drones of various kinds,” Moore said. “I’m not saying we’d want to do that, but there’s not a technology gap there where I think it’s actually too difficult to do. This is now practical.”

Google’s decision to hire Moore was greeted with displeasure by at least one former Googler who objected to Project Maven.

“It’s worrisome to note after the widespread internal dissent against Maven that Google would hire Andrew Moore,” said one former Google employee. “Googlers want less alignment with the military-industrial complex, not more. This hire is like a punch in the face to the over 4,000 Googlers who signed the Cancel Maven letter.”


Organizations Are Gearing Up for More Ethical and Responsible Use of Artificial Intelligence, Finds Study — from businesswire.com
Ninety-two percent of AI leaders train their technologists in ethics; 74 percent evaluate AI outcomes weekly, says report from SAS, Accenture Applied Intelligence, Intel, and Forbes Insights

Excerpt:

AI oversight is not optional

Despite popular messages suggesting AI operates independently of human intervention, the research shows that AI leaders recognize that oversight is not optional for these technologies. Nearly three-quarters (74 percent) of AI leaders reported careful oversight with at least weekly review or evaluation of outcomes (less successful AI adopters: 33 percent). Additionally, 43 percent of AI leaders shared that their organization has a process for augmenting or overriding results deemed questionable during review (less successful AI adopters: 28 percent).


Do robots have rights? Here’s what 10 people and 1 robot have to say — from createdigital.org.au
When it comes to the future of technology, nothing is straightforward, and that includes the array of ethical issues that engineers encounter through their work with robots and AI.



Google Cloud’s new AI chief is on a task force for AI military uses and believes we could monitor ‘pretty much the whole world’ with drones — from businessinsider.in by Greg Sandoval

  • Andrew Moore, the new chief of Google Cloud AI, co-chairs a task force on AI and national security with deep defense sector ties.
  • Moore leads the task force with Robert Work, the man who reportedly helped to create Project Maven.
  • Moore has given various talks about the role of AI and defense, once noting that it was now possible to deploy drones capable of surveilling “pretty much the whole world.”
  • One former Googler told Business Insider that the hiring of Moore is a “punch in the face” to those employees.


How AI can be a force for good — from science.sciencemag.org

Excerpt:

The AI revolution is equally significant, and humanity must not make the same mistake again. It is imperative to address new questions about the nature of post-AI societies and the values that should underpin the design, regulation, and use of AI in these societies. This is why initiatives like the abovementioned AI4People and IEEE projects, the European Union (EU) strategy for AI, the EU Declaration of Cooperation on Artificial Intelligence, and the Partnership on Artificial Intelligence to Benefit People and Society are so important (see the supplementary materials for suggested further reading). A coordinated effort by civil society, politics, business, and academia will help to identify and pursue the best strategies to make AI a force for good and unlock its potential to foster human flourishing while respecting human dignity.


Ethical regulation of the design and use of AI is a complex but necessary task. The alternative may lead to devaluation of individual rights and social values, rejection of AI-based innovation, and ultimately a missed opportunity to use AI to improve individual wellbeing and social welfare.


Robot wars — from ethicaljournalismnetwork.org by James Ball
How artificial intelligence will define the future of news

Excerpt:

There are two paths ahead in the future of journalism, and both of them are shaped by artificial intelligence.

The first is a future in which newsrooms and their reporters are robust: Thanks to the use of artificial intelligence, high-quality reporting has been enhanced. Not only do AI scripts manage the writing of simple day-to-day articles such as companies’ quarterly earnings updates, they also monitor and track masses of data for outliers, flagging these to human reporters to investigate.

Beyond business journalism, comprehensive sports stats AIs keep key figures in the hands of sports journalists, letting them focus on the games and the stories around them. The automated future has worked.

The alternative is very different. In this world, AI reporters have replaced their human counterparts and left accountability journalism hollowed out. Facing financial pressure, news organizations embraced AI to handle much of their day-to-day reporting, first for their financial and sports sections, then bringing in more advanced scripts capable of reshaping wire copy to suit their outlet’s political agenda. A few banner hires remain, but there is virtually no career path for those who would hope to replace them, and stories that can’t be tackled by AI are generally missed.
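
The outlier-monitoring idea in the first scenario is simple enough to sketch. Here is a minimal, hypothetical illustration (toy figures and threshold, not anything a newsroom actually runs) that scans a series of numbers and flags anything unusually far from the mean for a human reporter to investigate.

```python
import statistics

def flag_outliers(values, z_threshold=2.0):
    """Return indices of values more than z_threshold std devs from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > z_threshold]

quarterly_revenue = [102.1, 99.8, 101.5, 100.9, 187.4, 100.2]  # toy figures
for i in flag_outliers(quarterly_revenue):
    print(f"Quarter {i}: {quarterly_revenue[i]} looks anomalous; flag for a reporter.")
```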



Alibaba looks to arm hotels, cities with its AI technology — from zdnet.com by Eileen Yu
Chinese internet giant is touting the use of artificial intelligence technology to arm drivers with real-time data on road conditions, as well as robots in the hospitality sector that can deliver meals and laundry to guests.

Excerpt:

Alibaba A.I. Labs’ general manager Chen Lijuan said the new robots aimed to “bridge the gap” between guest needs and their expected response time. Describing the robot as the next evolution towards smart hotels, Chen said it tapped AI technology to address pain points in the hospitality sector, such as improving service efficiencies.

Alibaba is hoping the robot can ease hotels’ dependence on human labour by fulfilling a range of tasks, including delivering meals and taking the laundry to guests.


Accenture Introduces Ella and Ethan, AI Bots to Improve a Patient’s Health and Care Using the Accenture Intelligent Patient Platform — from marketwatch.com

Excerpt:

Accenture has enhanced the Accenture Intelligent Patient Platform with the addition of Ella and Ethan, two interactive virtual-assistant bots that use artificial intelligence (AI) to constantly learn and make intelligent recommendations for interactions between life sciences companies, patients, health care providers (HCPs) and caregivers. Designed to help improve a patient’s health and overall experience, the bots are part of Accenture’s Salesforce Fullforce Solutions powered by Salesforce Health Cloud and Einstein AI, as well as Amazon’s Alexa.


German firm’s 7 commandments for ethical AI — from france24.com

Excerpt:

FRANKFURT AM MAIN (AFP) –
German business software giant SAP published Tuesday an ethics code to govern its research into artificial intelligence (AI), aiming to prevent the technology infringing on people’s rights, displacing workers or inheriting biases from its human designers.

© 2024 | Daniel Christian