From DSC:
I’ll say it again: just because we can doesn’t mean we should.

From the article below, we can see another unintended consequence developing across society’s landscape. I really wish the 20- and 30-somethings being hired by the big tech companies — especially Amazon, Facebook, Google, Apple, and Microsoft — who are developing these things would ask themselves:

  • “Just because we can develop this system/software/application/etc., SHOULD we be developing it?”
  • What might the negative consequences be? 
  • Do the positive contributions outweigh the negative impacts…or not?

To college professors and teachers:
Please pass these thoughts on to your students now, so that these internal questions and conversations begin to take place in K-16.


Report: Colleges Must Teach ‘Algorithm Literacy’ to Help Students Navigate Internet — from edsurge.com by Rebecca Koenig

Excerpt (emphasis DSC):

If the Ancient Mariner were sailing on the internet’s open seas, he might conclude there’s information everywhere, but nary a drop to drink.

That’s how many college students feel, anyway. A new report published this week about undergraduates’ impressions of internet algorithms reveals students are skeptical of and unnerved by tools that track their digital travels and serve them personalized content like advertisements and social media posts.

And some students feel like they’ve largely been left to navigate the internet’s murky waters alone, without adequate guidance from teachers and professors.

Researchers set out to learn “how aware students are about their information being manipulated, gathered and interacted with,” said Alison Head, founder and director of Project Information Literacy, in an interview with EdSurge. “Where does that awareness drop off?”

They found that many students not only have personal concerns about how algorithms compromise their own data privacy but also recognize the broader, possibly negative implications of tools that segment and customize search results and news feeds.

 

From DSC:
Very disturbing that citizens had no say in this. Legislators, senators, representatives, lawyers, law schools, politicians, engineers, programmers, professors, teachers, and more…please reflect upon our current situation here. How can we help create the kind of future that we can hand down to our kids and rest well at night…knowing we did all that we could to provide a dream — and not a nightmare — for them?


The Secretive Company That Might End Privacy as We Know It — from nytimes.com by Kashmir Hill
A little-known start-up helps law enforcement match photos of unknown people to their online images — and “might lead to a dystopian future or something,” a backer says.

His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.
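For readers curious about the mechanics, the general pattern behind this kind of reverse face search can be sketched as an embedding lookup: a model converts each face into a numeric vector, and a query face is matched against the nearest stored vectors. The sketch below is purely illustrative; the vectors, file names, and scoring are invented, and Clearview's actual models and database are proprietary.

```python
import math

# Illustrative sketch of embedding-based face search.
# The "embeddings" below are made-up 3-number vectors; a real system
# would produce high-dimensional vectors from a neural network.

def cosine(a, b):
    # Cosine similarity: 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Pretend these came from running a face-embedding model over scraped photos.
database = {
    "photo_001.jpg": [0.9, 0.1, 0.3],
    "photo_002.jpg": [0.2, 0.8, 0.5],
    "photo_003.jpg": [0.88, 0.12, 0.31],  # near-duplicate of photo_001
}

def search(query_embedding, top_k=2):
    # Rank every stored photo by similarity to the query face.
    scored = sorted(database.items(),
                    key=lambda kv: cosine(query_embedding, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:top_k]]

print(search([0.9, 0.1, 0.3]))  # -> ['photo_001.jpg', 'photo_003.jpg']
```

The point of the sketch is scale: once faces are reduced to vectors, matching one photo against billions of scraped images is just a nearest-neighbor search.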

 

Excerpts:

“But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year…”

Clearview’s app carries extra risks because law enforcement agencies are uploading sensitive photos to the servers of a company whose ability to protect its data is untested.

 

Indian police are using facial recognition to identify protesters in Delhi — from fastcompany.com by Kristin Toussaint

Excerpt:

At Modi’s rally on December 22, Delhi police used Automated Facial Recognition System (AFRS) software—which officials there acquired in 2018 as a tool to find and identify missing children—to screen the crowd for faces that match a database of people who have attended other protests around the city, and who officials said could be disruptive.

According to the Indian Express, Delhi police have long filmed these protest events, and the department announced Monday that officials fed that footage through AFRS. Sources told the Indian news outlet that once “identifiable faces” are extracted from that footage, a dataset will point out and retain “habitual protesters” and “rowdy elements.” That dataset was put to use at Modi’s rally to keep away “miscreants who could raise slogans or banners.”

 

From DSC:
Here in the United States…are we paying attention to today’s emerging technologies and collaboratively working to create a future dream — versus a future nightmare!?!  A vendor or organization might propose a beneficial use for their product or technology — and it might even live up to the hype at times — but then come other unintended uses and consequences of that technology. For example, in the article above, what started out as a technology meant to find and identify missing children (a benefit) was later used to identify protesters (an unintended consequence, and, I might add, a nightmare in terms of such an expanded scope of use)!

Along these lines, the youth of today have every right to voice their opinions and to have a role in developing or torpedoing emerging technologies. What we build and put into place now will impact their lives big time!

 

7 Artificial Intelligence Trends to Watch in 2020 — from interestingengineering.com by Christopher McFadden

Excerpts:

Per this article, the following trends were listed:

  1. Computer Graphics will greatly benefit from AI
  2. Deepfakes will only get better, er, worse
  3. Predictive text should get better and better
  4. Ethics will become more important as time goes by
  5. Quantum computing will supercharge AI
  6. Facial recognition will appear in more places
  7. AI will help in the optimization of production pipelines

This article also listed several more trends. According to sources like The Next Web, some of the main AI trends for 2020 include:

  • The use of AI to make healthcare more accurate and less costly
  • Greater attention paid to explainability and trust
  • AI becoming less data-hungry
  • Improved accuracy and efficiency of neural networks
  • Automated AI development
  • Expanded use of AI in manufacturing
  • Geopolitical implications for the uses of AI

Artificial Intelligence offers great potential and great risks for humans in the future. While still in its infancy, it is already being employed in some interesting ways.

According to sources like Forbes, some of the next “big things” in technology include, but are not limited to:

  • Blockchain
  • Blockchain As A Service
  • AI-Led Automation
  • Machine Learning
  • Enterprise Content Management
  • AI For The Back Office
  • Quantum Computing AI Applications
  • Mainstreamed IoT

Also see:

Artificial intelligence predictions for 2020: 16 experts have their say — from verdict.co.uk by Ellen Daniel

Excerpts:

  • Organisations will build in processes and policies to prevent and address potential biases in AI
  • Deepfakes will become a serious threat to corporations
  • Candidate (and employee) care in the world of artificial intelligence
  • AI will augment humans, not replace them
  • Greater demand for AI understanding
  • Ramp up in autonomous vehicles
  • To fully take advantage of AI technologies, you’ll need to retrain your entire organisation
  • Voice technologies will infiltrate the office
  • IT will run itself while data acquires its own DNA
  • The ethics of AI
  • Health data and AI
  • AI to become an intrinsic part of robotic process automation (RPA)
  • BERT will open up a whole new world of deep learning use cases

The hottest trend in the industry right now is in Natural Language Processing (NLP). Over the past year, a new method called BERT (Bidirectional Encoder Representations from Transformers) has been developed for designing neural networks that work with text. Now, we suddenly have models that will understand the semantic meaning of what’s in text, going beyond the basics. This creates a lot more opportunity for deep learning to be used more widely.
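The “bidirectional” part of BERT is the key idea: a masked word is predicted from both its left and right context, not just the words before it. The toy below illustrates why that helps using simple counting rather than a neural network; it is a conceptual sketch, not BERT, and the corpus is invented.

```python
from collections import Counter, defaultdict

# Toy illustration of the masked-language-model idea behind BERT:
# filling in a blank is much easier with context on BOTH sides.

corpus = [
    "the bank approved the loan",
    "the bank raised the rate",
    "the river bank was muddy",
    "we sat on the river bank",
]

left_only = defaultdict(Counter)   # counts of word given previous word
both_sides = defaultdict(Counter)  # counts of word given (previous, next)

for sentence in corpus:
    tokens = sentence.split()
    for i in range(1, len(tokens) - 1):
        left_only[tokens[i - 1]][tokens[i]] += 1
        both_sides[(tokens[i - 1], tokens[i + 1])][tokens[i]] += 1

def predict(left, right=None):
    # Guess the masked word from left context alone, or from both sides.
    table = both_sides[(left, right)] if right is not None else left_only[left]
    return table.most_common(1)[0][0] if table else None

# Left context alone is ambiguous after "the" ...
print(dict(left_only["the"]))        # -> {'bank': 2, 'river': 2}
# ... but "the ___ approved" is unambiguous with the right context too.
print(predict("the", "approved"))    # -> bank
```

Real BERT replaces these counting tables with deep contextual embeddings learned over enormous corpora, which is what lets it capture semantic meaning rather than just adjacency.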

 

 

Art-filled journeys into the future — methods of futures education for children in lower stage comprehensive school — from kultus.fi by Ilpo Rybatzki and Otto Tähkäpää

Art-filled futures education

 

See this PDF file which contains the following excerpt:

In art, futures literacy plays a significant role. Art has the ability to point elsewhere; to fool and mess around with things and shake up conventions without needing to achieve measurable benefits (Varto, 2008). Art ensures a solid background for imagining alternative worlds. It is important to foster a permissive atmosphere that supports experimentation! From the perspective of art pedagogy, activities focus on the idea of the art experience as a meeting place (Pääjoki, 2004) where people can see themselves in a new light beside another person’s thoughts and imagination. Strengthening futures literacy means supporting transformative learning that aims for change. Through this type of learning, we can question norms, roles, identities and the concept of what is ‘normal’ (Lehtonen et al., 2018).

When discussing the future, we are always discussing values: what kind of future is desirable for any one person? Artistic activity can produce materials through which human meanings can be communicated from one person to another and questions about values in life can be discussed (Varto, 2008; Valkeapää, 2012). Encounters create opportunities for dialogue and enriching one’s perspectives. Important aspects include creating safe settings, the individual expression of the participants, the courage to open up and throw oneself into the centre of an experience, as well as the courage to question or even completely let go of presumptions. In the age of the environmental crisis, art has a critical role in all of society. We cannot solve difficult problems using the same kind of thinking that created the problems in the first place.

 

Don’t trust AI until we build systems that earn trust — from economist.com
Progress in artificial intelligence belies a lack of transparency that is vital for its adoption, says Gary Marcus, coauthor of “Rebooting AI”

Excerpts:

Mr Marcus argues that it would be foolish of society to put too much stock in today’s AI techniques since they are so prone to failures and lack the transparency that researchers need to understand how algorithms reached their conclusions.

As part of The Economist’s Open Future initiative, we asked Mr Marcus about why AI can’t do more, how to regulate it and what teenagers should study to remain relevant in the workplace of the future.

Trustworthy AI has to start with good engineering practices, mandated by laws and industry standards, both of which are currently largely absent. Too much of AI thus far has consisted of short-term solutions, code that gets a system to work immediately, without a critical layer of engineering guarantees that are often taken for granted in other fields. The kinds of stress tests that are standard in the development of an automobile (such as crash tests and climate challenges), for example, are rarely seen in AI. AI could learn a lot from how other engineers do business.

The assumption in AI has generally been that if it works often enough to be useful, then that’s good enough, but that casual attitude is not appropriate when the stakes are high. It’s fine if autotagging people in photos turns out to be only 90 percent reliable—if it is just about personal photos that people are posting to Instagram—but it better be much more reliable when the police start using it to find suspects in surveillance photos.
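Marcus’s call for automobile-style stress tests can be made concrete: rather than measuring only average accuracy, probe a model with perturbed inputs and count how often its decision flips. The sketch below uses an invented stand-in “model” (a simple weighted threshold rule), so the numbers are illustrative only.

```python
import random

# A sketch of a "stress test" for a decision-making model:
# perturb the input slightly, many times, and count decision flips.
# A robust decision should not flip under tiny input noise.

def toy_model(features):
    # Stand-in classifier: flags an input when a weighted score passes 0.5.
    weights = [0.4, 0.3, 0.3]
    score = sum(w * f for w, f in zip(weights, features))
    return score > 0.5

def stress_test(model, base_input, noise=0.01, trials=200, seed=0):
    # Count how many small perturbations flip the model's decision.
    rng = random.Random(seed)
    baseline = model(base_input)
    flips = 0
    for _ in range(trials):
        perturbed = [x + rng.uniform(-noise, noise) for x in base_input]
        if model(perturbed) != baseline:
            flips += 1
    return flips

# An input far from the decision boundary is stable under noise...
print(stress_test(toy_model, [0.9, 0.9, 0.9]))  # -> 0
# ...an input sitting right on the boundary is fragile (many flips).
print(stress_test(toy_model, [0.5, 0.5, 0.5]))
```

The automotive analogy maps directly: the crash test here is the perturbation loop, and a high flip count is the equivalent of a failed safety rating, something to catch before deployment rather than after.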

 

2019 AI report tracks profound growth — from ide.mit.edu by Paula Klein

Excerpt:

Until now “we’ve been sorely lacking good data about basic questions like ‘How is the technology advancing’ and ‘What is the economic impact of AI?’ ” Brynjolfsson said. The new index, which tracks three times as many data sets as last year’s report, goes a long way toward providing answers.

  1. Education
  • At the graduate level, AI has rapidly become the most popular specialization among computer science PhD students in North America. In 2018, over 21% of graduating Computer Science PhDs specialized in Artificial Intelligence/Machine Learning.
  • Industry is the largest consumer of AI talent. In 2018, over 60% of AI PhD graduates went to industry, up from 20% in 2004.
  • In the U.S., AI faculty leaving academia for industry continues to accelerate, with over 40 departures in 2018, up from 15 in 2012 and none in 2004.

 


 

Greta Thunberg is the youngest TIME Person of the Year ever. Here’s how she made history — from time.com

Excerpt:

The politics of climate action are as entrenched and complex as the phenomenon itself, and Thunberg has no magic solution. But she has succeeded in creating a global attitudinal shift, transforming millions of vague, middle-of-the-night anxieties into a worldwide movement calling for urgent change. She has offered a moral clarion call to those who are willing to act, and hurled shame on those who are not. She has persuaded leaders, from mayors to Presidents, to make commitments where they had previously fumbled: after she spoke to Parliament and demonstrated with the British environmental group Extinction Rebellion, the U.K. passed a law requiring that the country eliminate its carbon footprint. She has focused the world’s attention on environmental injustices that young indigenous activists have been protesting for years. Because of her, hundreds of thousands of teenage “Gretas,” from Lebanon to Liberia, have skipped school to lead their peers in climate strikes around the world.

 

Young people! You CAN and will make a big impact/difference!

 

Artificial Intelligence has a gender problem — why it matters for everyone — from nbcnews.com by Halley Bondy
To fight the rise of bias in AI, more representation is critical in the computing workforce, where only 26 percent of workers are women, 3 percent are African-American women, and 2 percent are Latinx.

Excerpt:

More women and minorities must work in tech, or else they risk being left behind in every industry.

This grim future was painted by Artificial Intelligence (AI) equality experts who spoke at a conference Thursday hosted by LivePerson, an AI company that connects brands and consumers.

In that future, if AI goes unchecked, workplaces will be completely homogenous, hiring only white, nondisabled men.

Guest speaker Cathy O’Neil, who authored “Weapons of Math Destruction,” explained how hiring bias works with AI: company algorithms are created by (mostly white male) data scientists, and they are based on the company’s historic wins. If a CEO is specifically looking for new hires who won’t leave the company after a year, for example, he might turn to AI to look for candidates based on his company’s retention rates. Chances are, most of his company’s historic wins only include white men, said O’Neil.
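O’Neil’s feedback loop can be sketched in a few lines: when a model scores candidates by a group’s count of historical “wins,” a group that was rarely hired has few recorded wins and gets penalized, even if its actual retention rate is similar. The data and scoring rule below are invented for illustration and do not describe any real vendor’s system.

```python
from collections import Counter

# Invented historical data: group A was hired often, group B rarely,
# so the recorded "retention wins" are dominated by group A.
past_hires = (
    [{"group": "A", "stayed": True}] * 45
    + [{"group": "A", "stayed": False}] * 15
    + [{"group": "B", "stayed": True}] * 3
    + [{"group": "B", "stayed": False}] * 2
)

hired = Counter(h["group"] for h in past_hires)                   # A: 60, B: 5
stayed = Counter(h["group"] for h in past_hires if h["stayed"])   # A: 45, B: 3

def retention_score(candidate_group, prior=10):
    # Naive model: score a candidate by their group's smoothed count of
    # historical "wins." Group B's retention RATE (3/5 = 60%) is not far
    # below A's (45/60 = 75%), but because B was rarely hired, it has few
    # recorded wins -- so the model penalizes B candidates anyway.
    return stayed[candidate_group] / (hired[candidate_group] + prior)

print(round(retention_score("A"), 2))  # noticeably higher score...
print(round(retention_score("B"), 2))  # ...than group B receives
```

This is the self-reinforcing part: the model’s low scores for group B mean fewer B hires, which means even fewer recorded wins for B in the next round of training data.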

 

The future of law and computational technologies: Two sides of the same coin — from law.mit.edu by Daniel Linna
Law and computation are often thought of as being two distinct fields. Increasingly, that is not the case. Dan Linna explores the ways a computational approach could help address some of the biggest challenges facing the legal industry.

Excerpt:

The rapid advancement of artificial intelligence (“AI”) introduces opportunities to improve legal processes and facilitate social progress. At the same time, AI presents an original set of inherent risks and potential harms. From a Law and Computational Technologies perspective, these circumstances can be broadly separated into two categories. First, we can consider the ethics, regulations, and laws that apply to technology. Second, we can consider the use of technology to improve the delivery of legal services, justice systems, and the law itself. Each category presents an unprecedented opportunity to use significant technological advancements to preserve and expand the rule of law.

For basic legal needs, access to legal services might come in the form of smartphones or other devices that are capable of providing users with an inventory of their legal rights and obligations, as well as providing insights and solutions to common legal problems. Better yet, AI and pattern matching technologies can help catalyze the development of proactive approaches to identify potential legal problems and prevent them from arising, or at least mitigate their risk.

We risk squandering abundant opportunities to improve society with computational technologies if we fail to proactively create frameworks to embed ethics, regulation, and law into our processes by design and default.

To move forward, technologists and lawyers must radically expand current notions of interdisciplinary collaboration. Lawyers must learn about technology, and technologists must learn about the law.

 

 

Considering AI in hiring? As its use grows, so do the legal implications for employers. — from forbes.com by Alonzo Martinez; with thanks to Paul Czarapata for posting this on Twitter

Excerpt:

As employers grapple with a widespread labor shortage, more are turning to artificial intelligence tools in their search for qualified candidates.

Hiring managers are using increasingly sophisticated AI solutions to streamline large parts of the hiring process. The tools scrape online job boards and evaluate applications to identify the best fits. They can even stage entire online interviews and scan everything from word choice to facial expressions before recommending the most qualified prospects.

But as the use of AI in hiring grows, so do the legal issues surrounding it. Critics are raising alarms that these platforms could lead to discriminatory hiring practices. State and federal lawmakers are passing or debating new laws to regulate them. And that means organizations that implement these AI solutions must not only stay abreast of new laws, but also look at their hiring practices to ensure they don’t run into legal trouble when they deploy them.

 

Amazon’s Ring planned neighborhood “watch lists” built on facial recognition — from theintercept.com by Sam Biddle

Excerpts (emphasis DSC):

Ring, Amazon’s crime-fighting surveillance camera division, has crafted plans to use facial recognition software and its ever-expanding network of home security cameras to create AI-enabled neighborhood “watch lists,” according to internal documents reviewed by The Intercept.

Previous reporting by The Intercept and The Information revealed that Ring has at times struggled to make facial recognition work, instead relying on remote workers from Ring’s Ukraine office to manually “tag” people and objects found in customer video feeds.

Legal scholars have long criticized the use of governmental watch lists in the United States for their potential to ensnare innocent people without due process. “When corporations create them,” said Tajsar, “the dangers are even more stark.” As difficult as it can be to obtain answers on the how and why behind a federal blacklist, American tech firms can work with even greater opacity: “Corporations often operate in an environment free from even the most basic regulation, without any transparency, with little oversight into how their products are built and used, and with no regulated mechanism to correct errors,” Tajsar said.

 

From DSC:
Those working or teaching within the legal realm — this one’s for you. But it’s also for the leadership of the C-Suites in our corporate world — as well as for all of those programmers, freelancers, engineers, and/or other employees working on AI within the corporate world.

By the way, and not to get all political here…but who’s to say what happens with our data when it’s being reviewed in Ukraine…?

 

Also see:

  • Opinion: AI for good is often bad — from wired.com by Mark Latonero
    Trying to solve poverty, crime, and disease with (often biased) technology doesn’t address their root causes.
 

Why AI is a threat to democracy – and what we can do to stop it — from asumetech.com by Lawrence Cole

Excerpts:

In the US, however, we also have a tragic lack of foresight. Instead of creating a grand strategy for AI or for our long-term futures, the federal government has cut funding for scientific and technical research. The money must therefore come from the private sector. But investors also expect a certain return. That is a problem. You cannot schedule R&D breakthroughs when working on fundamental technology and research. It would be great if the big tech companies had the luxury of working very hard without having to organize an annual conference to show off their newest and best whiz-bang thing. Instead, we now have countless examples of bad decisions made by someone in the G-MAFIA, probably because they worked quickly. We are beginning to see the negative effects of the tension between doing research that is in the interest of humanity and making investors happy.

The problem is that our technology has become increasingly sophisticated, but our thinking about what free speech is and what a free market economy looks like has not become that advanced. We tend to resort to very basic interpretations: free speech means that all speech is free, unless it conflicts with defamation laws, and that’s the end of the story. That is not the end of the story. We need to start a more sophisticated and intelligent conversation about our current laws, our emerging technology, and how we can make the two meet halfway.

 

So I absolutely believe that there is a way forward. But we have to come together and bridge the gap between Silicon Valley and DC, so that we can all steer the boat in the same direction.

— Amy Webb, futurist, NYU professor, founder of the Future Today Institute

 

Also see:

“FRONTLINE investigates the promise and perils of artificial intelligence, from fears about work and privacy to rivalry between the U.S. and China. The documentary traces a new industrial revolution that will reshape and disrupt our lives, our jobs and our world, and allow the emergence of the surveillance society.”

The film has five distinct messages about:

1. China’s AI Plan
2. The Promise of AI
3. The Future of Work
4. Surveillance Capitalism
5. The Surveillance State

 

Emerging Tech Trend: Patient-Generated Health Data — from futuretodayinstitute.com — Newsletter Issue 124

Excerpt:

Near-Futures Scenarios (2023 – 2028):

Pragmatic: Big tech continues to develop apps that are either indispensably convenient, irresistibly addictive, or both, and we pay for them not with cash but with the data we (sometimes unwittingly) let the apps capture. But for health care and medical insurance apps, the stakes could literally be life-and-death. Consumers receive discounted premiums, co-pays, diagnostics and prescription fulfillment, but the data they give up in exchange leaves them more vulnerable to manipulation and invasion of privacy.

Catastrophic: Profit-driven drug makers exploit private health profiles and begin working with the Big Nine. They use data-based targeting to overprescribe to patients, netting themselves billions of dollars. Big Pharma targets and preys on people’s addictions, mental health predispositions and more, practices that, while undetectable on an individual level, take a widespread societal toll.

Optimistic: Health data enables prescient preventative care. A.I. discerns patterns within gargantuan data sets that are otherwise virtually undetectable to humans. Accurate predictive algorithms identify complex combinations of risk factors for cancer or Parkinson’s, offer early screening and testing to high-risk patients, and encourage lifestyle shifts or treatments to eliminate or delay the onset of serious diseases. A.I. and health data create a utopia of public health. We happily relinquish our privacy for a greater societal good.

Watchlist: Amazon; Manulife Financial; GE Healthcare; Meditech; Allscripts; eClinicalWorks; Cerner; Validic; HumanAPI; Vivify; Apple; IBM; Microsoft; Qualcomm; Google; Medicare; Medicaid; national health systems; insurance companies.

 


© 2019 | Daniel Christian