Someone is always listening — from Future Today Institute

Excerpt:

Very Near-Futures Scenarios (2020 – 2022):

  • Optimistic: Big tech and consumer device industries agree to a single set of standards to inform people when they are being listened to. Devices now emit an audible ping and/or a visible light anytime they are actively recording sound. While they need to store data in order to improve natural language understanding and other important AI systems, consumers now have access to a portal and can see, listen to, and erase their data at any time. In addition, consumers can choose to opt out of storing their data to help improve AI systems.
  • Pragmatic: Big tech and consumer device industries preserve the status quo, which leads to more cases of machine eavesdropping and erodes public trust. Federal agencies open investigations into eavesdropping practices, which leads to a drop in share prices and a concern that more advanced biometric technologies could face debilitating regulation.
  • Catastrophic: Big tech and consumer device industries collect and store our conversations surreptitiously while developing new ways to monetize that data. They anonymize and sell it to developers wanting to create their own voice apps or to research institutions wanting to do studies using real-world conversation. Some platforms develop lucrative fee structures allowing others access to our voice data: business intelligence firms, market research agencies, polling agencies, political parties and individual law enforcement organizations. Consumers have little to no ability to see and understand how their voice data are being used and by whom. Opting out of collection systems is intentionally opaque. Trust erodes. Civil unrest grows.


Watchlist:

  • Google; Apple; Amazon; Microsoft; Salesforce; BioCatch; CrossMatch; ThreatMetrix; Electronic Frontier Foundation; World Privacy Forum; American Civil Liberties Union; IBM; Baidu; Tencent; Alibaba; Facebook; European Union; government agencies worldwide.

 

 

Microsoft President: Democracy Is At Stake. Regulate Big Tech — from npr.org by Aarti Shahani

Excerpts:

Regulate us. That’s the unexpected message from one of the country’s leading tech executives. Microsoft President Brad Smith argues that governments need to put some “guardrails” around engineers and the tech titans they serve.

If public leaders don’t, he says, the Internet giants will cannibalize the very fabric of this country.

“We need to work together; we need to work with governments to protect, frankly, something that is far more important than technology: democracy. It was here before us. It needs to be here and healthy after us,” Smith says.

“Almost no technology has gone so entirely unregulated, for so long, as digital technology,” Smith says.

 
 

AI is in danger of becoming too male — new research — from singularityhub.com by Juan Mateos-Garcia and Joysy John

Excerpts (emphasis DSC):

But current AI systems are far from perfect. They tend to reflect the biases of the data used to train them and to break down when they face unexpected situations.

So do we really want to turn these bias-prone, brittle technologies into the foundation stones of tomorrow’s economy?

One way to minimize AI risks is to increase the diversity of the teams involved in their development. As research on collective decision-making and creativity suggests, groups that are more cognitively diverse tend to make better decisions. Unfortunately, this is a far cry from the situation in the community currently developing AI systems. And a lack of gender diversity is one important (although not the only) dimension of this.

A review published by the AI Now Institute earlier this year showed that less than 20 percent of the researchers applying to prestigious AI conferences are women, and that only a quarter of undergraduates studying AI at Stanford and the University of California at Berkeley are female.

 


From DSC:
My niece just left a very lucrative programming job and managerial role at Microsoft after working there for several years. As a single woman, she got tired of fighting the culture there. 

It was again a reminder to me that the cultures of the big tech companies have significant ramifications…especially given the power of these emerging technologies and the growing influence they have on our culture.


Addendum on 8/20/19:

  • Google’s Hate Speech Detection A.I. Has a Racial Bias Problem — from fortune.com by Jonathan Vanian
    Excerpt:
    A Google-created tool that uses artificial intelligence to police hate speech in online comments on sites like the New York Times has become racially biased, according to a new study. The tool, developed by Google and a subsidiary of its parent company, often classified comments written in the African-American vernacular as toxic, researchers from the University of Washington, Carnegie Mellon, and the Allen Institute for Artificial Intelligence said in a paper presented in early August at the Association for Computational Linguistics conference in Florence, Italy.
  • On the positive side of things:
    Number of Female Students, Students of Color Tackling Computer Science AP on the Rise — from thejournal.com
 

Is this the future of (low-cost) healthcare? — from computerworld.com by Johnny Evans
A Zipnostic pilot program in New York hints at how Apple tech could transform healthcare.

Excerpt:

The thing is, the home visit isn’t by a doctor but an onsite “care coordinator” equipped with a full set of professional testing equipment and direct video contact with the doctor.

The coordinator runs through tests using a high-resolution camera, ultrasound, EKG, glucometer, blood pressure cuff, oximeter, and other state-of-the-art equipment, all of which is controlled using Zipnostic’s own apps.

Test data is made available to the real doctor at the end of the camera, who can take control of the testing procedure and provide an on-the-spot medical diagnosis based on real data.

The idea is that a diagnosis can be provided at around a fifteenth of the cost of a visit to the ER, and that the data driving the diagnosis can be much more accurate than you get from, say, a video chat using an app.

 

 

Reflections on “Clay Shirky on Mega-Universities and Scale” [Christian]

Clay Shirky on Mega-Universities and Scale — from philonedtech.com by Clay Shirky
[This was a guest post by Clay Shirky that grew out of a conversation that Clay and Phil had about IPEDS enrollment data. Most of the graphs are provided by Phil.]

Excerpts:

Were half a dozen institutions to dominate the online learning landscape with no end to their expansion, or shift what Americans seek in a college degree, that would indeed be one of the greatest transformations in the history of American higher education. The available data, however, casts doubt on that idea.

Though much of the conversation around mega-universities is speculative, we already know what a mega-university actually looks like, one much larger than any university today. It looks like the University of Phoenix, or rather it looked like Phoenix at the beginning of this decade, when it had 470,000 students, the majority of whom took some or all of their classes online. Phoenix back then was six times the size of the next-largest school, Kaplan, with 78,000 students, and nearly five times the size of any university operating today.

From that high-water mark, Phoenix has lost an average of 40,000 students every year of this decade.

 

From DSC:
First of all, I greatly appreciate both Clay’s and Phil’s thought leadership and their respective contributions to education and learning through the years. I value their perspectives and their work.  Clay and Phil offer up a great article here — one worth your time to read.  

The article made me reflect on what I’ve been building upon and tracking for the last decade — a next generation ***PLATFORM*** that I believe will represent a powerful piece of a global learning ecosystem. I call this vision, “Learning from the Living [Class] Room.” Though the artificial intelligence-backed platform that I’m envisioning doesn’t yet fully exist — this new era and type of learning-based platform ARE coming. The emerging signs, technologies, trends — and “fingerprints” of it, if you will — are beginning to develop all over the place.

Such a platform will:

  • Be aimed at the lifelong learner.
  • Offer up major opportunities to stay relevant and up-to-date with one’s skills.
  • Offer access to the program offerings from many organizations — including the mega-universities, but also, from many other organizations that are not nearly as large as the mega-universities.
  • Be reliant upon human teachers, professors, trainers, subject matter experts, but will be backed up by powerful AI-based technologies/tools. For example, AI-based tools will pulse-check the open job descriptions and the needs of business and present the top ___ areas to go into (how long those areas/jobs last is anyone’s guess, given the exponential pace of technological change).

Below are some quotes that I want to comment on:

Not nothing, but not the kind of environment that will produce an educational Amazon either, especially since the top 30 actually shrank by 0.2% a year.

 

Instead of an “Amazon vs. the rest” dynamic, online education is turning into something much more widely adopted, where the biggest schools are simply the upper end of a continuum, not so different from their competitors, and not worth treating as members of a separate category.

 

Since the founding of William and Mary, the country’s second college, higher education in the U.S. hasn’t been a winner-take-all market, and it isn’t one today. We are not entering a world where the largest university operates at outsized scale, we’re leaving that world; 

 

From DSC:
I don’t see us leaving that world at all…but that’s not my main reflection here. My focus isn’t on how large the mega-universities will become. When I speak of a forthcoming Walmart of Education or Amazon of Education, what I have in mind is a platform…not one particular organization.

Consider that the vast majority of Amazon’s revenues come from products that other organizations produce. They are a platform, if you will. And in the world of platforms (i.e., software), it IS a winner-take-all market.

Bill Gates reflects on this as well in this recent article from The Verge:

“In the software world, particularly for platforms, these are winner-take-all markets.”

So it’s all about a forthcoming platform — or platforms. (It could be more than one platform. Consider Apple. Consider Microsoft. Consider Google. Consider Facebook.)

But then the question becomes…would a large number of universities (and other types of organizations) be willing to offer up their courses on a platform? Well, consider what’s ALREADY happening with FutureLearn:

Finally…one more excerpt from Clay’s article:

Eventually the new ideas lose their power to shock, and end up being widely copied. Institutional transformation starts as heresy and ends as a section in the faculty handbook. 

From DSC:
This is a great point. Reminds me of this tweet from Fred Steube (and I added a piece about Western Telegraph):

 

Some things to reflect upon…for sure.

 
 

Is your college future-ready? — from jisc.ac.uk by Robin Ghurbhurun

Excerpt:

Artificial intelligence (AI) is increasingly becoming science fact rather than science fiction. Alexa is everywhere from the house to the car, Siri is in the palm of your hand and students and the wider community can now get instant responses to their queries. We as educators have a duty to make sense of the information out there, working alongside AI to facilitate students’ curiosities.

Instead of banning mobile phones on campus, let’s manage our learning environments differently

We need to plan strategically to avoid a future where only the wealthy have access to human teachers, whilst others are taught with AI. We want all students to benefit from both. We should have teacher-approved content from VLEs and AI assistants supporting learning and discussion, everywhere from the classroom to the workplace. Let’s learn from the domestic market; witness the rise of co-bot workers coming to an office near you.

 

 

Stanford team aims at Alexa and Siri with a privacy-minded alternative — from nytimes.com by John Markoff

Excerpt:

Now computer scientists at Stanford University are warning about the consequences of a race to control what they believe will be the next key consumer technology market — virtual assistants like Amazon’s Alexa and Google Assistant.

The group at Stanford, led by Monica Lam, a computer systems designer, last month received a $3 million grant from the National Science Foundation. The grant is for an internet service they hope will serve as a Switzerland of sorts for systems that use human language to control computers, smartphones and internet devices in homes and offices.

The researchers’ biggest concern is that virtual assistants, as they are designed today, could have a far greater impact on consumer information than today’s websites and apps. Putting that information in the hands of one big company or a tiny clique, they say, could erase what is left of online privacy.

 

Amazon sends Alexa developers on quest for ‘holy grail of voice science’ — from venturebeat.com by Khari Johnson

Excerpt:

At Amazon’s re:Mars conference last week, the company rolled out Alexa Conversations in preview. Conversations is a module within the Alexa Skills Kit that stitches together Alexa voice apps into experiences that help you accomplish complex tasks.

Alexa Conversations may be Amazon’s most intriguing and substantial pitch to voice developers in years. Conversations will make creating skills possible with fewer lines of code. It will also do away with the need to understand the many different ways a person can ask to complete an action, as a recurrent neural network will automatically generate dialogue flow.

For users, Alexa Conversations will make it easier to complete tasks that require the incorporation of multiple skills and will cut down on the number of interactions needed to do things like reserve a movie ticket or order food.
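To see what Conversations is automating away, consider what developers have had to do by hand: hard-code the dialogue flow that collects each piece of information a task needs. The sketch below is a deliberately simplified, hypothetical illustration in plain Python — not the actual Alexa Skills Kit API — of that kind of hand-written flow for a movie-ticket task.

```python
# Hypothetical illustration (not the Alexa Skills Kit API): a hand-written
# dialogue flow of the sort Alexa Conversations aims to generate
# automatically. Each required slot is one piece of information the
# assistant still needs before it can complete the task.

REQUIRED_SLOTS = ["movie", "showtime", "seats"]

def next_prompt(collected):
    """Return the next question to ask, or a confirmation when done."""
    for slot in REQUIRED_SLOTS:
        if slot not in collected:
            return f"Which {slot} would you like?"
    return (f"Booking {collected['seats']} seat(s) for "
            f"{collected['movie']} at {collected['showtime']}. Confirm?")

def handle_turn(collected, slot, value):
    """Fold one user answer into the dialogue state; return state and prompt."""
    collected = {**collected, slot: value}
    return collected, next_prompt(collected)
```

Every branch here — what to ask next, in what order, how to confirm — is exactly the sort of logic Amazon says its recurrent neural network will generate from sample dialogues instead.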

 

 

10 things we should all demand from Big Tech right now — from vox.com by Sigal Samuel
We need an algorithmic bill of rights. AI experts helped us write one.


Excerpts:

  1. Transparency: We have the right to know when an algorithm is making a decision about us, which factors are being considered by the algorithm, and how those factors are being weighted.
  2. Explanation: We have the right to be given explanations about how algorithms affect us in a specific situation, and these explanations should be clear enough that the average person will be able to understand them.
  3. Consent: We have the right to give or refuse consent for any AI application that has a material impact on our lives or uses sensitive data, such as biometric data.
  4. Freedom from bias: We have the right to evidence showing that algorithms have been tested for bias related to race, gender, and other protected characteristics — before they’re rolled out. The algorithms must meet standards of fairness and nondiscrimination and ensure just outcomes. (Inserted comment from DSC: Is this even possible? I hope so, but I have my doubts especially given the enormous lack of diversity within the large tech companies.)
  5. Feedback mechanism: We have the right to exert some degree of control over the way algorithms work.
  6. Portability: We have the right to easily transfer all our data from one provider to another.
  7. Redress: We have the right to seek redress if we believe an algorithmic system has unfairly penalized or harmed us.
  8. Algorithmic literacy: We have the right to free educational resources about algorithmic systems.
  9. Independent oversight: We have the right to expect that an independent oversight body will be appointed to conduct retrospective reviews of algorithmic systems gone wrong. The results of these investigations should be made public.
  10. Federal and global governance: We have the right to robust federal and global governance structures with human rights at their center. Algorithmic systems don’t stop at national borders, and they are increasingly used to decide who gets to cross borders, making international governance crucial.
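The first two rights above — transparency and explanation — are technically concrete even for simple algorithms. The sketch below (the factors and weights are invented for illustration, not drawn from the article) shows one way a linear scoring decision could ship with its own per-factor explanation.

```python
# Illustrative sketch only (hypothetical factors and weights): a decision
# that carries its own explanation. Every score is returned together with
# the factors considered and how each one was weighted.

WEIGHTS = {"income": 0.5, "payment_history": 0.4, "account_age": 0.1}

def score_with_explanation(factors):
    """Score an applicant and return (total, per-factor contributions)."""
    contributions = {
        name: WEIGHTS[name] * factors[name] for name in WEIGHTS
    }
    return sum(contributions.values()), contributions
```

For opaque models the mechanics are much harder, but the principle is the same: the decision and the weighting of its inputs travel together.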

 

This raises the question: Who should be tasked with enforcing these norms? Government regulators? The tech companies themselves?

 

 

Augmented Reality and Virtual Reality | The Future of Healthcare — from creativitism.blog, with thanks to Woontack Woo for this resource

Excerpt:

When we talk about virtual reality, most people think about its advancement in the gaming industry. But now Virtual Reality (VR) and Augmented Reality (AR) are being introduced in other sectors as well. A great example is the use of VR in the medical sector: this latest technology has entered the field of healthcare and can make a great difference, both in training and in the practice of medical activities.

In fact, the medical sector is one of the main fields of action of Virtual Reality. There are so many applications to help both doctors and patients. The advantages of Virtual Reality are now applied in surgeries, in patients with disorders and phobias, in the treatment of diseases, and especially in medical training. Frequently, people have disorders such as tachycardia, panic attacks, antisocial behavior, and anxiety, as well as psychological trauma after violence, traffic accidents, etc. Using VR/AR applications, patients receive a course of rehabilitation therapy.

 

AR and VR -- the future of healthcare

 

Also see:

 

 

After nearly a decade of Augmented World Expo (AWE), founder Ori Inbar unpacks the past, present, & future of augmented reality — from next.reality.news by Adario Strange

Excerpts:

I think right now it’s almost a waste of time to talk about a hybrid device because it’s not relevant. It’s two different devices and two different use cases. But like you said, sometime in the future, 15, 20, 50 years, I imagine a point where you could open your eyes to do AR, and close your eyes to do VR.

I think there’s always room for innovation, especially with spatial computing where we’re in the very early stages. We have to develop a new visual approach that I don’t think we have yet. What does it mean to interact in a world where everything is visual and around you, and not on a two-dimensional screen? So there’s a lot to do there.

 

A big part of mainstream adoption is education. Until you get into AR and VR, you don’t really know what you’re missing. You can’t really learn about it from videos. And that education takes time. So the education, plus the understanding of the need, will create a demand.

— Ori Inbar

 

 

From LinkedIn.com today:

 


Also see:


 

From DSC:
I don’t like this at all. If this foot gets in the door, vendor after vendor will launch their own hordes of drones. In the future, where will we go if we want some peace and quiet? Will the air be filled with swarms of noisy drones? Will we be able to clearly see the sun? An exaggeration…? Maybe…maybe not.

But, now what? What recourse do citizens have? Readers of this blog know that I’m generally pro-technology. But the folks — especially the youth — working within the FAANG companies (and the like) need to do a far better job asking, “Just because we can do something, should we do it?”

As I’ve said before, we’ve turned over the keys to the $137,000 Maserati to drivers who are just getting out of driving school. Then we wonder….”How did we get to this place?” 

 

If you owned this $137,000+ car, would you turn the keys over to your 16-25 year old?!

 

As another example, just because we can…

just because we can does not mean we should

 

…doesn’t mean we should.

 


 

We Built an ‘Unbelievable’ (but Legal) Facial Recognition Machine — from nytimes.com by Sahil Chinoy

“The future of human flourishing depends upon facial recognition technology being banned,” wrote Woodrow Hartzog, a professor of law and computer science at Northeastern, and Evan Selinger, a professor of philosophy at the Rochester Institute of Technology, last year. ‘Otherwise, people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited.’ Facial recognition is categorically different from other forms of surveillance, Mr. Hartzog said, and uniquely dangerous. Faces are hard to hide and can be observed from far away, unlike a fingerprint. Name and face databases of law-abiding citizens, like driver’s license records, already exist. And for the most part, facial recognition surveillance can be set up using cameras already on the streets.” — Sahil Chinoy; per a weekly e-newsletter from Sam DeBrule at Machine Learnings in Berkeley, CA

Excerpt:

Most people pass through some type of public space in their daily routine — sidewalks, roads, train stations. Thousands walk through Bryant Park every day. But we generally think that a detailed log of our location, and a list of the people we’re with, is private. Facial recognition, applied to the web of cameras that already exists in most cities, is a threat to that privacy.

To demonstrate how easy it is to track people without their knowledge, we collected public images of people who worked near Bryant Park (available on their employers’ websites, for the most part) and ran one day of footage through Amazon’s commercial facial recognition service. Our system detected 2,750 faces from a nine-hour period (not necessarily unique people, since a person could be captured in multiple frames). It returned several possible identifications, including one frame matched to a head shot of Richard Madonna, a professor at the SUNY College of Optometry, with an 89 percent similarity score. The total cost: about $60.
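The Times doesn’t publish its pipeline, but the core step it describes — checking known head shots against faces detected in park footage — corresponds to Amazon Rekognition’s CompareFaces operation. A minimal sketch of that step follows; the helper function, the 80 percent default threshold, and the function names are my own assumptions, not details from the article.

```python
# Sketch of the face-matching step described above, using the boto3 client
# for Amazon Rekognition's CompareFaces API. Requires boto3 and AWS
# credentials; create the client with: client = boto3.client("rekognition")

def best_match(face_matches, threshold=80.0):
    """Pick the highest-similarity match at or above `threshold`, else None.

    `face_matches` has the shape of the FaceMatches list in a CompareFaces
    response: each item carries a "Similarity" percentage.
    """
    hits = [m for m in face_matches if m["Similarity"] >= threshold]
    return max(hits, key=lambda m: m["Similarity"]) if hits else None

def match_headshot_to_frame(client, headshot_bytes, frame_bytes, threshold=80.0):
    """Compare one known head shot against all faces in one video frame."""
    response = client.compare_faces(
        SourceImage={"Bytes": headshot_bytes},
        TargetImage={"Bytes": frame_bytes},
        SimilarityThreshold=threshold,
    )
    return best_match(response["FaceMatches"], threshold)
```

An 89 percent similarity score, like the Richard Madonna match in the excerpt, would clear this threshold — which underlines the article’s point about how little effort and money the whole exercise takes.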

 

 

 

 

From DSC:
What do you think about this emerging technology and its potential impact on our society — and on other societies like China? Again I ask…what kind of future do we want?

As for me, my face is against the use of facial recognition technology in the United States — as I don’t trust where this could lead.

This wild, wild west situation continues to develop. For example, note how AI and facial recognition get their foot in the door via technologies installed years ago:

The cameras in Bryant Park were installed more than a decade ago so that people could see whether the lawn was open for sunbathing, for example, or check how busy the ice skating rink was in the winter. They are not intended to be a security device, according to the corporation that runs the park.

So Amazon’s use of facial recognition is but another foot in the door. 

This needs to be stopped. Now.

 

Facial recognition technology is a menace disguised as a gift. It’s an irresistible tool for oppression that’s perfectly suited for governments to display unprecedented authoritarian control and an all-out privacy-eviscerating machine.

We should keep this Trojan horse outside of the city. (source)

 


Example articles from the Privacy Project:

  • James Bennet: Do You Know What You’ve Given Up?
  • A. G. Sulzberger: How The Times Thinks About Privacy
  • Samantha Irby: I Don’t Care. I Love My Phone.
  • Tim Wu: How Capitalism Betrayed Privacy

 

 
© 2024 | Daniel Christian