30 influential AI presentations from 2019 — from re-work.co

Excerpt:

It feels as though 2019 has gone by in a flash. That said, it has been a year in which we have seen great advancement in AI application methods and technical discovery, paving the way for future development. We are incredibly grateful to have had the leading minds in AI & Deep Learning present their latest work at our summits in San Francisco, Boston, Montreal and more, so we thought we would share thirty of our highlight videos with you, as we think everybody needs to see them! (Some are hosted on our Videohub and some on our YouTube, but all are free to view!)

Example presenters:

  • Dawn Song, Professor, UC Berkeley.
  • Doina Precup, Research Team Lead, DeepMind.
  • Natalie Jakomis, Group Director of Data, goCompare.
  • Ian Goodfellow, Director, Apple.
  • Timnit Gebru, Ethical AI Team, Google.
  • Cathy Pearl, Head of Conversation Design Outreach, Google.
  • Zoya Bylinskii, Research Scientist, Adobe Research.
  • …and many others
 

FTI 2020 Trend Report for Entertainment, Media, & Technology — from futuretodayinstitute.com

Our 3rd annual industry report on emerging entertainment, media and technology trends is now available.

  • 157 trends
  • 28 optimistic, pragmatic and catastrophic scenarios
  • 10 non-technical primers and glossaries
  • Overview of what events to anticipate in 2020
  • Actionable insights to use within your organization

KEY TAKEAWAYS

  • Synthetic media offers new opportunities and challenges.
  • Authenticating content is becoming more difficult.
  • Regulation is coming.
  • We’ve entered the post-fixed screen era.
  • Voice Search Optimization (VSO) is the new Search Engine Optimization (SEO).
  • Digital subscription models aren’t working.
  • Advancements in AI will mean greater efficiencies.

 

 

Google’s war on deepfakes: As election looms, it shares a ton of AI-faked videos — from zdnet.com by Liam Tung
Google has created 3,000 videos using actors and manipulation software to help improve detection.

Excerpt:

Google has released a huge database of deepfake videos that it’s created using paid actors. It hopes the database will bolster systems designed to detect AI-generated fake videos.

With the 2020 US Presidential elections looming, the race is on to build better systems to detect deepfake videos that could be used to manipulate and divide public opinion.

Earlier this month, Facebook and Microsoft announced a $10m project to create deepfake videos to help build systems for detecting them.

 

Someone is always listening — from Future Today Institute

Excerpt:

Very Near-Futures Scenarios (2020 – 2022):

  • Optimistic: Big tech and consumer device industries agree to a single set of standards to inform people when they are being listened to. Devices now emit an audible ping and/or a visible light anytime they are actively recording sound. While they need to store data in order to improve natural language understanding and other important AI systems, consumers now have access to a portal and can see, listen to, and erase their data at any time. In addition, consumers can choose to opt out of storing their data to help improve AI systems.
  • Pragmatic: Big tech and consumer device industries preserve the status quo, which leads to more cases of machine eavesdropping and erodes public trust. Federal agencies open investigations into eavesdropping practices, which leads to a drop in share prices and a concern that more advanced biometric technologies could face debilitating regulation.
  • Catastrophic: Big tech and consumer device industries collect and store our conversations surreptitiously while developing new ways to monetize that data. They anonymize and sell it to developers wanting to create their own voice apps or to research institutions wanting to do studies using real-world conversation. Some platforms develop lucrative fee structures allowing others access to our voice data: business intelligence firms, market research agencies, polling agencies, political parties and individual law enforcement organizations. Consumers have little to no ability to see and understand how their voice data are being used and by whom. Opting out of collection systems is intentionally opaque. Trust erodes. Civil unrest grows.


Watchlist:

  • Google; Apple; Amazon; Microsoft; Salesforce; BioCatch; CrossMatch; ThreatMetrix; Electronic Frontier Foundation; World Privacy Forum; American Civil Liberties Union; IBM; Baidu; Tencent; Alibaba; Facebook; European Union; government agencies worldwide.

 

 

Microsoft President: Democracy Is At Stake. Regulate Big Tech — from npr.org by Aarti Shahani

Excerpts:

Regulate us. That’s the unexpected message from one of the country’s leading tech executives. Microsoft President Brad Smith argues that governments need to put some “guardrails” around engineers and the tech titans they serve.

If public leaders don’t, he says, the Internet giants will cannibalize the very fabric of this country.

“We need to work together; we need to work with governments to protect, frankly, something that is far more important than technology: democracy. It was here before us. It needs to be here and healthy after us,” Smith says.

“Almost no technology has gone so entirely unregulated, for so long, as digital technology,” Smith says.

 

Uh-oh: Silicon Valley is building a Chinese-style social credit system — from fastcompany.com by Mike Elgan
In China, scoring citizens’ behavior is official government policy. U.S. companies are increasingly doing something similar, outside the law.

Excerpts (emphasis DSC):

Have you heard about China’s social credit system? It’s a technology-enabled, surveillance-based nationwide program designed to nudge citizens toward better behavior. The ultimate goal is to “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step,” according to the Chinese government.

In place since 2014, the social credit system is a work in progress that could evolve by next year into a single, nationwide point system for all Chinese citizens, akin to a financial credit score. It aims to punish for transgressions that can include membership in or support for the Falun Gong or Tibetan Buddhism, failure to pay debts, excessive video gaming, criticizing the government, late payments, failing to sweep the sidewalk in front of your store or house, smoking or playing loud music on trains, jaywalking, and other actions deemed illegal or unacceptable by the Chinese government.

IT CAN HAPPEN HERE
Many Westerners are disturbed by what they read about China’s social credit system. But such systems, it turns out, are not unique to China. A parallel system is developing in the United States, in part as the result of Silicon Valley and technology-industry user policies, and in part by surveillance of social media activity by private companies.

Here are some of the elements of America’s growing social credit system.

 

If current trends hold, it’s possible that in the future a majority of misdemeanors and even some felonies will be punished not by Washington, D.C., but by Silicon Valley. It’s a slippery slope away from democracy and toward corporatocracy.

 

From DSC:
Who’s to say what gains a citizen points and what subtracts from their score? If one believes a certain thing, is that a plus or a minus? And what might be tied to someone’s score? The ability to obtain food? Medicine/healthcare? Clothing? Social Security payments? Other?

We are giving a huge amount of power to a handful of corporations…trust comes into play…at least for me. Even internally, the big tech companies seem to be struggling with the ethical ramifications of what they’re working on (in a variety of areas).

Is the stage being set for a “Person of Interest” Version 2.0?

 

FTC reportedly hits Facebook with record $5 billion settlement — from wired.com by Issie Lapowsky and Caitlin Kelly

Excerpt:

After months of negotiations, the Federal Trade Commission fined Facebook a record-setting $5 billion on Friday for privacy violations, according to multiple reports. The penalty comes after an investigation that lasted over a year, and marks the largest in the agency’s history by an order of magnitude. If approved by the Justice Department’s civil division, it will also be the first substantive punishment for Facebook in the US, where the tech industry has gone largely unregulated. But Washington has taken a harsher stance toward Silicon Valley lately, and Friday’s announcement marks its most aggressive action yet to curb its privacy overreaches.

 


 

Reflections on “Clay Shirky on Mega-Universities and Scale” [Christian]

Clay Shirky on Mega-Universities and Scale — from philonedtech.com by Clay Shirky
[This was a guest post by Clay Shirky that grew out of a conversation that Clay and Phil had about IPEDS enrollment data. Most of the graphs are provided by Phil.]

Excerpts:

Were half a dozen institutions to dominate the online learning landscape with no end to their expansion, or shift what Americans seek in a college degree, that would indeed be one of the greatest transformations in the history of American higher education. The available data, however, casts doubt on that idea.

Though much of the conversation around mega-universities is speculative, we already know what a mega-university actually looks like, one much larger than any university today. It looks like the University of Phoenix, or rather it looked like Phoenix at the beginning of this decade, when it had 470,000 students, the majority of whom took some or all of their classes online. Phoenix back then was six times the size of the next-largest school, Kaplan, with 78,000 students, and nearly five times the size of any university operating today.

From that high-water mark, Phoenix has lost an average of 40,000 students every year of this decade.

 

From DSC:
First of all, I greatly appreciate both Clay’s and Phil’s thought leadership and their respective contributions to education and learning through the years. I value their perspectives and their work.  Clay and Phil offer up a great article here — one worth your time to read.  

The article made me reflect on what I’ve been building upon and tracking for the last decade — a next generation ***PLATFORM*** that I believe will represent a powerful piece of a global learning ecosystem. I call this vision, “Learning from the Living [Class] Room.” Though the artificial intelligence-backed platform that I’m envisioning doesn’t yet fully exist — this new era and type of learning-based platform ARE coming. The emerging signs, technologies, trends — and “fingerprints” of it, if you will — are beginning to develop all over the place.

Such a platform will:

  • Be aimed at the lifelong learner.
  • Offer up major opportunities to stay relevant and up-to-date with one’s skills.
  • Offer access to the program offerings from many organizations — including the mega-universities, but also, from many other organizations that are not nearly as large as the mega-universities.
  • Be reliant upon human teachers, professors, trainers, subject matter experts, but will be backed up by powerful AI-based technologies/tools. For example, AI-based tools will pulse-check the open job descriptions and the needs of business and present the top ___ areas to go into (how long those areas/jobs last is anyone’s guess, given the exponential pace of technological change).
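As a loose illustration of that last bullet only: the "pulse-checking" of open job descriptions could start as simply as counting tracked skill mentions across postings and surfacing the top areas. Everything below — the postings, the skill list, the function name — is hypothetical sample data, a sketch rather than a real implementation:

```python
from collections import Counter

# Toy corpus standing in for scraped job descriptions (hypothetical data).
job_postings = [
    "Seeking data analyst with SQL and Python experience",
    "Cloud engineer: Python, Kubernetes, AWS",
    "Machine learning engineer - Python, TensorFlow, SQL",
]

# Skills the tool would watch for (an assumed, hand-picked list).
tracked_skills = ["python", "sql", "kubernetes", "aws", "tensorflow"]

def top_skill_areas(postings, skills, n=3):
    """Count how many postings mention each tracked skill; return the top n."""
    counts = Counter()
    for posting in postings:
        text = posting.lower()
        for skill in skills:
            if skill in text:
                counts[skill] += 1
    return counts.most_common(n)

print(top_skill_areas(job_postings, tracked_skills))
```

A real system would obviously need far more — deduplication, phrase matching, trend lines over time — but the core loop of "ingest postings, aggregate demand signals, present the top areas" is this simple at heart.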

Below are some quotes that I want to comment on:

Not nothing, but not the kind of environment that will produce an educational Amazon either, especially since the top 30 actually shrank by 0.2% a year.

 

Instead of an “Amazon vs. the rest” dynamic, online education is turning into something much more widely adopted, where the biggest schools are simply the upper end of a continuum, not so different from their competitors, and not worth treating as members of a separate category.

 

Since the founding of William and Mary, the country’s second college, higher education in the U.S. hasn’t been a winner-take-all market, and it isn’t one today. We are not entering a world where the largest university operates at outsized scale, we’re leaving that world; 

 

From DSC:
I don’t see us leaving that world at all…but that’s not my main reflection here. Instead, I’m not focusing on how large the mega-universities will become. When I speak of a forthcoming Walmart of Education or Amazon of Education, what I have in mind is a platform…not one particular organization.

Consider that the vast majority of Amazon’s revenues come from products that other organizations produce. They are a platform, if you will. And in the world of platforms (i.e., software), it IS a winner-take-all market.

Bill Gates reflects on this as well in this recent article from The Verge:

“In the software world, particularly for platforms, these are winner-take-all markets.”

So it’s all about a forthcoming platform — or platforms. (It could be more than one platform. Consider Apple. Consider Microsoft. Consider Google. Consider Facebook.)

But then the question becomes…would a large number of universities (and other types of organizations) be willing to offer up their courses on a platform? Well, consider what’s ALREADY happening with FutureLearn:

Finally…one more excerpt from Clay’s article:

Eventually the new ideas lose their power to shock, and end up being widely copied. Institutional transformation starts as heresy and ends as a section in the faculty handbook. 

From DSC:
This is a great point. Reminds me of this tweet from Fred Steube (and I added a piece about Western Telegraph):

 

Some things to reflect upon…for sure.

 

10 things we should all demand from Big Tech right now — from vox.com by Sigal Samuel
We need an algorithmic bill of rights. AI experts helped us write one.

Excerpts:

  1. Transparency: We have the right to know when an algorithm is making a decision about us, which factors are being considered by the algorithm, and how those factors are being weighted.
  2. Explanation: We have the right to be given explanations about how algorithms affect us in a specific situation, and these explanations should be clear enough that the average person will be able to understand them.
  3. Consent: We have the right to give or refuse consent for any AI application that has a material impact on our lives or uses sensitive data, such as biometric data.
  4. Freedom from bias: We have the right to evidence showing that algorithms have been tested for bias related to race, gender, and other protected characteristics — before they’re rolled out. The algorithms must meet standards of fairness and nondiscrimination and ensure just outcomes. (Inserted comment from DSC: Is this even possible? I hope so, but I have my doubts especially given the enormous lack of diversity within the large tech companies.)
  5. Feedback mechanism: We have the right to exert some degree of control over the way algorithms work.
  6. Portability: We have the right to easily transfer all our data from one provider to another.
  7. Redress: We have the right to seek redress if we believe an algorithmic system has unfairly penalized or harmed us.
  8. Algorithmic literacy: We have the right to free educational resources about algorithmic systems.
  9. Independent oversight: We have the right to expect that an independent oversight body will be appointed to conduct retrospective reviews of algorithmic systems gone wrong. The results of these investigations should be made public.
  10. Federal and global governance: We have the right to robust federal and global governance structures with human rights at their center. Algorithmic systems don’t stop at national borders, and they are increasingly used to decide who gets to cross borders, making international governance crucial.
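To make the “Explanation” item above a bit less abstract: for simple linear scoring models, an explanation can be as direct as listing each factor’s contribution to the final score, largest first. This is only a toy sketch — the weights, factor names, and applicant values are all made up — and real deployed systems are far harder to explain than this:

```python
# Weights of a hypothetical, already-trained linear scoring model.
weights = {"payment_history": 0.6, "income": 0.3, "age_of_account": 0.1}

def explain_decision(applicant):
    """Return the overall score plus each factor's contribution, ranked by size."""
    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

score, ranked = explain_decision(
    {"payment_history": 0.9, "income": 0.5, "age_of_account": 0.2}
)
print(f"score = {score:.2f}")
for factor, contribution in ranked:
    print(f"  {factor}: {contribution:+.2f}")
```

The point of the sketch is that when the model is this simple, the explanation the bill of rights asks for is nearly free; the tension arises with opaque models where per-factor contributions are not directly readable.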

 

This raises the question: Who should be tasked with enforcing these norms? Government regulators? The tech companies themselves?

 

 

From LinkedIn.com today:

 


 

From DSC:
I don’t like this at all. If this foot gets in the door, vendor after vendor will launch their own hordes of drones. In the future, where will we go if we want some peace and quiet? Will the air be filled with swarms of noisy drones? Will we be able to clearly see the sun? An exaggeration…? Maybe…maybe not.

But, now what? What recourse do citizens have? Readers of this blog know that I’m generally pro-technology. But the folks — especially the youth — working within the FAANG companies (and the like) need to do a far better job asking, “Just because we can do something, should we do it?”

As I’ve said before, we’ve turned over the keys to the $137,000 Maserati to drivers who are just getting out of driving school. Then we wonder….”How did we get to this place?” 

 

If you owned this $137,000+ car, would you turn the keys over to a 16-25 year old?!

 

As another example, just because we can…


 

…doesn’t mean we should.

 


 

 

Addendum on 4/20/19:

Amazon is now making its delivery drivers take selfies — from theverge.com by Shannon Liao
It will then use facial recognition to double-check

From DSC:
I don’t like this piece re: Amazon’s use of facial recognition at all. Some organization like Amazon asserts that it needs facial recognition to deliver services to its customers, and then, the next thing we know, facial recognition gets its foot in the door…sneaks in the back way into society’s house. By then, it’s much harder to get rid of. We end up with what’s currently happening in China. I don’t want to pay for anything with my face. Ever. As Mark Zuckerberg has demonstrated time and again, I don’t trust humankind to handle this kind of power. Plus, the surveillance states being developed by several governments are a chilling thing indeed. China is using this technology to identify/track Muslims.

China using AI to track Muslims

Can you think of some “groups” that people might be in that could be banned from receiving goods and services? I can. 

The appalling lack of privacy that’s going on in several societies throughout the globe has got to be stopped. 

 

 


Example articles from the Privacy Project:

  • James Bennet: Do You Know What You’ve Given Up?
  • A. G. Sulzberger: How The Times Thinks About Privacy
  • Samantha Irby: I Don’t Care. I Love My Phone.
  • Tim Wu: How Capitalism Betrayed Privacy

 

 

Check out the top 10:

1) Alphabet (Google); Internet
2) Facebook; Internet
3) Amazon; Internet
4) Salesforce; Internet
5) Deloitte; Management Consulting
6) Uber; Internet
7) Apple; Consumer Electronics
8) Airbnb; Internet
9) Oracle; Information Technology & Services
10) Dell Technologies; Information Technology & Services

 

Why AI is a threat to democracy — and what we can do to stop it — from technologyreview.com by Karen Hao and Amy Webb

Excerpt:

Universities must create space in their programs for hybrid degrees. They should incentivize CS students to study comparative literature, world religions, microeconomics, cultural anthropology and similar courses in other departments. They should champion dual degree programs in computer science and international relations, theology, political science, philosophy, public health, education and the like. Ethics should not be taught as a stand-alone class, something to simply check off a list. Schools must incentivize even tenured professors to weave complicated discussions of bias, risk, philosophy, religion, gender, and ethics in their courses.

One of my biggest recommendations is the formation of GAIA, what I call the Global Alliance on Intelligence Augmentation. At the moment people around the world have very different attitudes and approaches when it comes to data collection and sharing, what can and should be automated, and what a future with more generally intelligent systems might look like. So I think we should create some kind of central organization that can develop global norms and standards, some kind of guardrails to imbue not just American or Chinese ideals inside AI systems, but worldviews that are much more representative of everybody.

Most of all, we have to be willing to think about this much longer-term, not just five years from now. We need to stop saying, “Well, we can’t predict the future, so let’s not worry about it right now.” It’s true, we can’t predict the future. But we can certainly do a better job of planning for it.

 

 

 

The real reason tech struggles with algorithmic bias — from wired.com by Yael Eisenstat

Excerpts:

Are machines racist? Are algorithms and artificial intelligence inherently prejudiced? Do Facebook, Google, and Twitter have political biases? Those answers are complicated.

But if the question is whether the tech industry is doing enough to address these biases, the straightforward response is no.

Humans cannot wholly avoid bias, as countless studies and publications have shown. Insisting otherwise is an intellectually dishonest and lazy response to a very real problem.

In my six months at Facebook, where I was hired to be the head of global elections integrity ops in the company’s business integrity division, I participated in numerous discussions about the topic. I did not know anyone who intentionally wanted to incorporate bias into their work. But I also did not find anyone who actually knew what it meant to counter bias in any true and methodical way.

 

But the company has created its own sort of insular bubble in which its employees’ perception of the world is the product of a number of biases that are engrained within the Silicon Valley tech and innovation scene.

 

 

© 2019 | Daniel Christian