From DSC:
Are you kidding me!? Geo-fencing technology or not, I don’t trust this for one second.


Amazon patents ‘surveillance as a service’ tech for its delivery drones — from theverge.com by Jon Porter
Including technology that cuts out footage of your neighbor’s house

Excerpt:

The patent gives a few hints how the surveillance service could work. It says customers would be allowed to pay for visits on an hourly, daily, or weekly basis, and that drones could be equipped with night vision cameras and microphones to expand their sensing capabilities.

 

Amazon launches Personalize, a fully managed AI-powered recommendation service — from venturebeat.com by Kyle Wiggers

Excerpt:

Amazon [on 6/10/19] announced the general availability of Amazon Personalize, an AWS service that facilitates the development of websites, mobile apps, and content management and email marketing systems that suggest products, provide tailored search results, and customize funnels on the fly.
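
For readers wondering what this looks like in practice, below is a minimal sketch of requesting recommendations from a Personalize campaign with boto3. It assumes a campaign has already been trained and deployed in your AWS account; the campaign ARN and user ID shown are hypothetical placeholders.

```python
# Minimal sketch: query an Amazon Personalize campaign for recommendations.
# The campaign ARN and user ID are hypothetical placeholders; a trained,
# deployed campaign must already exist in the account.
import boto3

personalize_runtime = boto3.client("personalize-runtime", region_name="us-east-1")

response = personalize_runtime.get_recommendations(
    campaignArn="arn:aws:personalize:us-east-1:123456789012:campaign/demo-campaign",  # hypothetical
    userId="user-42",   # the visitor to generate suggestions for
    numResults=10,      # how many recommended items to return
)

# Each entry in itemList identifies one recommended item (e.g., a product ID).
for item in response["itemList"]:
    print(item["itemId"])
```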

 

 

 

How surveillance cameras could be weaponized with A.I. — from nytimes.com by Niraj Chokshi
Advances in artificial intelligence could supercharge surveillance cameras, allowing footage to be constantly monitored and instantly analyzed, the A.C.L.U. warned in a new report.

Excerpt:

In the report, the organization imagined a handful of dystopian uses for the technology. In one, a politician requests footage of his enemies kissing in public, along with the identities of all involved. In another, a life insurance company offers rates based on how fast people run while exercising. And in another, a sheriff receives a daily list of people who appeared to be intoxicated in public, based on changes to their gait, speech or other patterns.

Analysts have valued the market for video analytics at as much as $3 billion, with the expectation that it will grow exponentially in the years to come. The important players include smaller businesses as well as household names such as Amazon, Cisco, Honeywell, IBM and Microsoft.

 

From DSC:
We can no longer let a handful of companies tell the rest of us how our society will be formed/shaped and how it will act. 

For example, Amazon should NOT be able to just send its robots/drones to deliver packages — that type of decision is NOT up to them. I suspect that Amazon cares more about earning larger profits and pleasing Wall Street than about our society at large. If Amazon is able to introduce its robots all over the place, what’s to keep any and every company from introducing its own army of robots or drones? If we allow this to occur, it won’t be long before our streets, sidewalks, and airspace are filled with noise and clutter.

So…a question for representatives, senators, legislators, mayors, judges, lawyers, etc.:

  • What should we be building in order to better allow citizens to weigh in on emerging technologies and whether any given emerging technology — or a specific product/service — should be rolled out…or not? 

 

 

Is your college future-ready? — from jisc.ac.uk by Robin Ghurbhurun

Excerpt:

Artificial intelligence (AI) is increasingly becoming science fact rather than science fiction. Alexa is everywhere from the house to the car, Siri is in the palm of your hand and students and the wider community can now get instant responses to their queries. We as educators have a duty to make sense of the information out there, working alongside AI to facilitate students’ curiosities.

Instead of banning mobile phones on campus, let’s manage our learning environments differently

We need to plan strategically to avoid a future where only the wealthy have access to human teachers, whilst others are taught with AI. We want all students to benefit from both. We should have teacher-approved content from VLEs and AI assistants supporting learning and discussion, everywhere from the classroom to the workplace. Let’s learn from the domestic market; witness the increasing rise of co-bot workers coming to an office near you.

 

 

Stanford team aims at Alexa and Siri with a privacy-minded alternative — from nytimes.com by John Markoff

Excerpt:

Now computer scientists at Stanford University are warning about the consequences of a race to control what they believe will be the next key consumer technology market — virtual assistants like Amazon’s Alexa and Google Assistant.

The group at Stanford, led by Monica Lam, a computer systems designer, last month received a $3 million grant from the National Science Foundation. The grant is for an internet service they hope will serve as a Switzerland of sorts for systems that use human language to control computers, smartphones and internet devices in homes and offices.

The researchers’ biggest concern is that virtual assistants, as they are designed today, could have a far greater impact on consumer information than today’s websites and apps. Putting that information in the hands of one big company or a tiny clique, they say, could erase what is left of online privacy.

 

Amazon sends Alexa developers on quest for ‘holy grail of voice science’ — from venturebeat.com by Khari Johnson

Excerpt:

At Amazon’s re:Mars conference last week, the company rolled out Alexa Conversations in preview. Conversations is a module within the Alexa Skills Kit that stitches together Alexa voice apps into experiences that help you accomplish complex tasks.

Alexa Conversations may be Amazon’s most intriguing and substantial pitch to voice developers in years. Conversations will make creating skills possible with fewer lines of code. It will also do away with the need to understand the many different ways a person can ask to complete an action, as a recurrent neural network will automatically generate dialogue flow.

For users, Alexa Conversations will make it easier to complete tasks that require the incorporation of multiple skills and will cut down on the number of interactions needed to do things like reserve a movie ticket or order food.
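
For context on what Conversations is meant to simplify, here is a rough sketch of the conventional Alexa Skills Kit approach, in which a developer hand-writes a handler for each intent using the ASK SDK for Python. The "OrderFoodIntent" name is a hypothetical example defined in a skill's interaction model, not one of Amazon's published intents.

```python
# A rough sketch of the conventional, hand-coded Alexa Skills Kit pattern
# that Alexa Conversations aims to reduce: one handler per intent.
# "OrderFoodIntent" is a hypothetical intent name for illustration.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name


class OrderFoodIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        # Route here only when the utterance resolves to OrderFoodIntent.
        return is_intent_name("OrderFoodIntent")(handler_input)

    def handle(self, handler_input):
        speech = "Okay, what would you like to order?"
        # Speak a prompt and keep the session open for the user's reply.
        return handler_input.response_builder.speak(speech).ask(speech).response


sb = SkillBuilder()
sb.add_request_handler(OrderFoodIntentHandler())

# Entry point for the AWS Lambda function backing the skill.
lambda_handler = sb.lambda_handler()
```

Per the announcement above, Conversations is pitched as generating much of this dialogue-management plumbing automatically, so developers would not have to enumerate every way a person might phrase a request.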

 

 

10 things we should all demand from Big Tech right now — from vox.com by Sigal Samuel
We need an algorithmic bill of rights. AI experts helped us write one.


Excerpts:

  1. Transparency: We have the right to know when an algorithm is making a decision about us, which factors are being considered by the algorithm, and how those factors are being weighted.
  2. Explanation: We have the right to be given explanations about how algorithms affect us in a specific situation, and these explanations should be clear enough that the average person will be able to understand them.
  3. Consent: We have the right to give or refuse consent for any AI application that has a material impact on our lives or uses sensitive data, such as biometric data.
  4. Freedom from bias: We have the right to evidence showing that algorithms have been tested for bias related to race, gender, and other protected characteristics — before they’re rolled out. The algorithms must meet standards of fairness and nondiscrimination and ensure just outcomes. (Inserted comment from DSC: Is this even possible? I hope so, but I have my doubts, especially given the enormous lack of diversity within the large tech companies. For one simple flavor of such testing, see the sketch just after this list.)
  5. Feedback mechanism: We have the right to exert some degree of control over the way algorithms work.
  6. Portability: We have the right to easily transfer all our data from one provider to another.
  7. Redress: We have the right to seek redress if we believe an algorithmic system has unfairly penalized or harmed us.
  8. Algorithmic literacy: We have the right to free educational resources about algorithmic systems.
  9. Independent oversight: We have the right to expect that an independent oversight body will be appointed to conduct retrospective reviews of algorithmic systems gone wrong. The results of these investigations should be made public.
  10. Federal and global governance: We have the right to robust federal and global governance structures with human rights at their center. Algorithmic systems don’t stop at national borders, and they are increasingly used to decide who gets to cross borders, making international governance crucial.
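
To make item 4 a bit more concrete, here is an illustrative sketch of one very simple pre-rollout bias test: comparing the rate of favorable decisions an algorithm produces across demographic groups (a demographic-parity check). The records, group labels, and the 0.8 threshold are hypothetical, and real fairness audits involve far more than this single metric.

```python
# Illustrative demographic-parity check: compare favorable-decision rates
# across groups before rollout. The records, group labels, and the 0.8
# ("four-fifths rule") threshold are hypothetical placeholders.
from collections import defaultdict

decisions = [  # (protected_group, algorithm_approved)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

approved = defaultdict(int)
total = defaultdict(int)
for group, ok in decisions:
    total[group] += 1
    approved[group] += int(ok)

rates = {g: approved[g] / total[g] for g in total}
worst, best = min(rates.values()), max(rates.values())

print("Approval rates by group:", rates)
if best > 0 and worst / best < 0.8:
    print("Warning: disparate impact ratio below 0.8 -- investigate before rollout.")
```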

 

This raises the question: Who should be tasked with enforcing these norms? Government regulators? The tech companies themselves?

 

 

San Francisco becomes first city to bar police from using facial recognition — from cnet.com by Laura Hautala
It won’t be the last city to consider a similar law.


Excerpt:

The city of San Francisco approved an ordinance on Tuesday [5/14/19] barring the police department and other city agencies from using facial recognition technology on residents. It’s the first such ban of the technology in the country.

The ordinance, which passed by a vote of 8 to 1, also creates a process for the police department to disclose what surveillance technology they use, such as license plate readers and cell-site simulators that can track residents’ movements over time. But it singles out facial recognition as too harmful to residents’ civil liberties to even consider using.

“Facial surveillance technology is a huge legal and civil liberties risk now due to its significant error rate, and it will be worse when it becomes perfectly accurate mass surveillance tracking us as we move about our daily lives,” said Brian Hofer, the executive director of privacy advocacy group Secure Justice.

For example, Microsoft asked the federal government in July to regulate facial recognition technology before it gets more widespread, and said it declined to sell the technology to law enforcement. As it is, the technology is on track to become pervasive in airports and shopping centers, and other tech companies, like Amazon, are selling it to police departments.

 

 

From DSC:
I don’t like this at all. If this foot gets in the door, vendor after vendor will launch their own hordes of drones. In the future, where will we go if we want some peace and quiet? Will the air be filled with swarms of noisy drones? Will we be able to clearly see the sun? An exaggeration…? Maybe…maybe not.

But, now what? What recourse do citizens have? Readers of this blog know that I’m generally pro-technology. But the folks — especially the youth — working within the FAANG companies (and the like) need to do a far better job asking, “Just because we can do something, should we do it?”

As I’ve said before, we’ve turned over the keys to the $137,000 Maserati to drivers who are just getting out of driving school. Then we wonder…“How did we get to this place?”

 

If you owned this $137,000+ car, would you turn the keys over to your 16-to-25-year-old?!

 

As another example, just because we can…


 

…doesn’t mean we should.

 


 

We Built an ‘Unbelievable’ (but Legal) Facial Recognition Machine — from nytimes.com by Sahil Chinoy

“‘The future of human flourishing depends upon facial recognition technology being banned,’ wrote Woodrow Hartzog, a professor of law and computer science at Northeastern, and Evan Selinger, a professor of philosophy at the Rochester Institute of Technology, last year. ‘Otherwise, people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited.’ Facial recognition is categorically different from other forms of surveillance, Mr. Hartzog said, and uniquely dangerous. Faces are hard to hide and can be observed from far away, unlike a fingerprint. Name and face databases of law-abiding citizens, like driver’s license records, already exist. And for the most part, facial recognition surveillance can be set up using cameras already on the streets.” — Sahil Chinoy; per a weekly e-newsletter from Sam DeBrule at Machine Learnings in Berkeley, CA

Excerpt:

Most people pass through some type of public space in their daily routine — sidewalks, roads, train stations. Thousands walk through Bryant Park every day. But we generally think that a detailed log of our location, and a list of the people we’re with, is private. Facial recognition, applied to the web of cameras that already exists in most cities, is a threat to that privacy.

To demonstrate how easy it is to track people without their knowledge, we collected public images of people who worked near Bryant Park (available on their employers’ websites, for the most part) and ran one day of footage through Amazon’s commercial facial recognition service. Our system detected 2,750 faces from a nine-hour period (not necessarily unique people, since a person could be captured in multiple frames). It returned several possible identifications, including one frame matched to a head shot of Richard Madonna, a professor at the SUNY College of Optometry, with an 89 percent similarity score. The total cost: about $60.
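
To give a sense of how little engineering such an experiment requires, here is a rough sketch of comparing one frame of footage against a known head shot using Amazon Rekognition (Amazon’s commercial facial recognition service) via boto3. The file names and the 80 percent similarity threshold are hypothetical placeholders, not details taken from the Times’ experiment.

```python
# Rough sketch: match a face in a video frame against a known head shot
# with Amazon Rekognition. File names and the similarity threshold are
# hypothetical placeholders, not details from the Times' pipeline.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("headshot.jpg", "rb") as source, open("park_frame.jpg", "rb") as target:
    response = rekognition.compare_faces(
        SourceImage={"Bytes": source.read()},   # the known head shot
        TargetImage={"Bytes": target.read()},   # one frame of park footage
        SimilarityThreshold=80,                 # only report matches above 80%
    )

for match in response["FaceMatches"]:
    print(f"Possible match with {match['Similarity']:.1f}% similarity")
```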

 

 

 

 

From DSC:
What do you think about this emerging technology and its potential impact on our society — and on other societies, such as China’s? Again I ask…what kind of future do we want?

As for me, my face is against the use of facial recognition technology in the United States — as I don’t trust where this could lead.

This wild, wild west situation continues to develop. For example, note how AI and facial recognition get their foot in the door via technologies installed years ago:

The cameras in Bryant Park were installed more than a decade ago so that people could see whether the lawn was open for sunbathing, for example, or check how busy the ice skating rink was in the winter. They are not intended to be a security device, according to the corporation that runs the park.

So Amazon’s use of facial recognition is but another foot in the door. 

This needs to be stopped. Now.

 

Facial recognition technology is a menace disguised as a gift. It’s an irresistible tool for oppression that’s perfectly suited for governments to display unprecedented authoritarian control and an all-out privacy-eviscerating machine.

We should keep this Trojan horse outside of the city. (source)

 


Example articles from the Privacy Project:

  • James Bennet: Do You Know What You’ve Given Up?
  • A. G. Sulzberger: How The Times Thinks About Privacy
  • Samantha Irby: I Don’t Care. I Love My Phone.
  • Tim Wu: How Capitalism Betrayed Privacy

 

 

Microsoft rolls out healthcare bot: How it will change healthcare industry — from yourtechdiet.com by Brian Curtis

Excerpt:

AI and the Healthcare Industry
This technology is evidently the game changer in the healthcare industry. According to the reports by Frost & Sullivan, the AI market for healthcare is likely to experience a CAGR of 40% by 2021, and has the potential to change industry outcomes by 30-40%, while cutting treatment costs in half.

In the words of Satya Nadella, “AI is the runtime that is going to shape all of what we do going forward in terms of the applications as well as the platform advances”.

Here are a few ways Microsoft’s Healthcare Bot will shape the Healthcare Industry…

 

Also see:

  • Why AI will make healthcare personal — from weforum.org by Peter Schwartz
    Excerpt:
    Digital assistants to provide a 24/7 helping hand
    The digital assistants of the future will be full-time healthcare companions, able to monitor a patient’s condition, transmit results to healthcare providers, and arrange virtual and face-to-face appointments. They will help manage the frequency and dosage of medication, and provide reliable medical advice around the clock. They will remind doctors of patients’ details, ranging from previous illnesses to past drug reactions. And they will assist older people to access the care they need as they age, including hospice care, and help to mitigate the fear and loneliness many elderly people feel.

 

  • Introducing New Alexa Healthcare Skills — from developer.amazon.com by Rachel Jiang
    Excerpts:
    The new healthcare skills that launched today are:
    Express Scripts (a leading Pharmacy Services Organization)
    Cigna Health Today (by Cigna, the global health service company)
    My Children’s Enhanced Recovery After Surgery (ERAS) (by Boston Children’s Hospital, a leading children’s hospital)
    Swedish Health Connect (by Providence St. Joseph Health, a healthcare system with 51 hospitals across 7 states and 829 clinics)
    Atrium Health (a healthcare system with more than 40 hospitals and 900 care locations throughout North and South Carolina and Georgia)
    Livongo (a leading consumer digital health company that creates new and different experiences for people with chronic conditions)

Voice as the Next Frontier for Conveniently Accessing Healthcare Services

 

  • Got health care skills? Big Tech wants to hire you — from linkedin.com by Jaimy Lee
    Excerpt:
    As tech giants like Amazon, Apple and Google place bigger and bigger bets on the U.S. health care system, it should come as no surprise that the rate at which they are hiring workers with health care skills is booming. We took a deep dive into the big tech companies on this year’s LinkedIn Top Companies list in the U.S., uncovering the most popular health care skills among their workers — and what that says about the future of health care in America.
 

Check out the top 10:

1) Alphabet (Google); Internet
2) Facebook; Internet
3) Amazon; Internet
4) Salesforce; Internet
5) Deloitte; Management Consulting
6) Uber; Internet
7) Apple; Consumer Electronics
8) Airbnb; Internet
9) Oracle; Information Technology & Services
10) Dell Technologies; Information Technology & Services

 

Is Thomas Frey right? “…by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet.”

From a fairly recent e-newsletter from edsurge.com — though I don’t recall the exact date (emphasis DSC):

New England is home to some of the most famous universities in the world. But the region has also become ground zero for the demographic shifts that promise to disrupt higher education.

This week saw two developments that fit the narrative. On Monday, Southern Vermont College announced that it would shut its doors, becoming the latest small rural private college to do so. Later that same day, the University of Massachusetts said it would start a new online college aimed at a national audience, noting that it expects campus enrollments to erode as the number of traditional college-age students declines in the coming years.

“Make no mistake—this is an existential threat to entire sectors of higher education,” said UMass president Marty Meehan in announcing the online effort.

The approach seems to parallel the U.S. retail sector, where, as a New York Times piece outlines this week, stores like Target and WalMart have thrived by building online strategies aimed at competing with Amazon, while stores like Gap and Payless, which did little to move online, are closing stores. Of course, college is not like any other product or service, and plenty of campuses are touting the richness of the experience that students get by actually coming to a campus. And it’s not clear how many colleges can grow online to a scale that makes their investments pay off.

 

“It’s predicted that over the next several years, four to five major national players with strong regional footholds will be established. We intend to be one of them.”

University of Massachusetts President Marty Meehan

 

 

From DSC:
That last quote from UMass President Marty Meehan made me reflect upon the idea of having one or more enormous entities that will provide “higher education” in the future. I wonder whether we’ll instead end up with more lifelong learning providers and platforms — with the 60-year curriculum being an interesting idea that may come to fruition.

Long have I predicted that such an enormous entity would come to pass. Back in 2008, I named it the Forthcoming Walmart of Education. But then as the years went by, I got bummed out about some things that Walmart was doing, and re-branded it the Forthcoming Amazon.com of Higher Education. We’ll see how long that updated title lasts — but you get the point. In fact, the point aligns very nicely with what futurist Thomas Frey has been predicting for years as well:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider. (source)

I realize that education doesn’t always scale well…but I’m thinking that how people learn in the future may be different from how we did things in the past…communities of practice come to mind…as do new forms of credentialing…as do cloud-based learner profiles…as does the need for highly efficient, cost-effective, and constant opportunities/means to reinvent oneself.

 

Addendum:

74% of consumers go to Amazon when they’re ready to buy something. That should be keeping retailers up at night. — from cnbc.com

Key points (emphasis DSC)

  • Amazon remains a looming threat for some of the biggest retailers in the country — like Walmart, Target and Macy’s.
  • When consumers are ready to buy a specific product, nearly three-quarters of them, or 74 percent, are going straight to Amazon to do it, according to a new study by Feedvisor.
  • By the end of this year, Amazon is expected to account for 52.4 percent of the e-commerce market in the U.S., up from 48 percent in 2018.

 

“In New England, there will be between 32,000 and 54,000 fewer college-aged students just seven years from now,” Meehan said. “That means colleges and universities will have too much capacity and not enough demand at a time when the economic model in higher education is already straining under its own weight.” (Marty Meehan at WBUR)

 

 

A prediction for blockchain transformation in higher education — from blockchain.capitalmarketsciooutlook.com by Michael Mathews

Excerpt:

Ironically, blockchain entered the scene in a very neutral way, while Bitcoin created all the noise, simply because it used an aspect of blockchain. Bitcoin, cyber coins, and/or token concepts will come and go, just as the various forms of web browsers did. However, just as the Internet lives on, so will blockchain. In fact, blockchain may very well become the best of the Internet and IoT merged with the trust factor of the ISBN/MARC code concept. As history unveils itself blockchain will stand the test of time and become a form of a future generation of the Internet (i.e. Internet 4.0) without the need for cyber security.

With a positive prediction on blockchain for the future, coupled with lessons learned from the Internet, blockchain will become the single largest influencer on education. I have only gone on record predicting two shifts in technology over a 5-10 year period of time, and both have come to pass. This is my third prediction, with the greatest potential for transformation.

 

 

What I did not know until last year was that a neutral technology called blockchain would show up in the history of the world; and that, at the same time, Amazon would start designing blockchain templates to reduce all the processes to allow educational decisions to become as easy as ordering and receiving Amazon products.

 

 

Why AI is a threat to democracy — and what we can do to stop it — from technologyreview.com by Karen Hao and Amy Webb

Excerpt:

Universities must create space in their programs for hybrid degrees. They should incentivize CS students to study comparative literature, world religions, microeconomics, cultural anthropology and similar courses in other departments. They should champion dual degree programs in computer science and international relations, theology, political science, philosophy, public health, education and the like. Ethics should not be taught as a stand-alone class, something to simply check off a list. Schools must incentivize even tenured professors to weave complicated discussions of bias, risk, philosophy, religion, gender, and ethics in their courses.

One of my biggest recommendations is the formation of GAIA, what I call the Global Alliance on Intelligence Augmentation. At the moment people around the world have very different attitudes and approaches when it comes to data collection and sharing, what can and should be automated, and what a future with more generally intelligent systems might look like. So I think we should create some kind of central organization that can develop global norms and standards, some kind of guardrails to imbue not just American or Chinese ideals inside AI systems, but worldviews that are much more representative of everybody.

Most of all, we have to be willing to think about this much longer-term, not just five years from now. We need to stop saying, “Well, we can’t predict the future, so let’s not worry about it right now.” It’s true, we can’t predict the future. But we can certainly do a better job of planning for it.

 

 

 