Report: Smart-city IoT isn’t smart enough yet — from networkworld.com by Jon Gold
A report from Forrester Research details vulnerabilities affecting smart-city internet of things (IoT) infrastructure and offers some methods of mitigation.

 

Governments take first, tentative steps at regulating AI — from heraldnet.com by James McCusker
Can we control artificial intelligence’s potential for disrupting markets? Time will tell.

Excerpt:

State legislatures in New York and New Jersey have proposed legislation that represents the first, tentative steps at regulation. While the two proposed laws are different, they both have elements of information gathering about the risks to such things as privacy, security and economic fairness.

 

 

You’re already being watched by facial recognition tech. This map shows where — from fastcompany.com by Katharine Schwab
Digital rights nonprofit Fight for the Future has mapped out the physical footprint of the controversial technology, which is in use in cities across the country.

 

 

The coming deepfakes threat to businesses — from axios.com by Kaveh Waddell and Jennifer Kingson

Excerpt:

In the first signs of a mounting threat, criminals are starting to use deepfakes — starting with AI-generated audio — to impersonate CEOs and steal millions from companies, which are largely unprepared to combat them.

Why it matters: Nightmare scenarios abound. As deepfakes grow more sophisticated, a convincing forgery could send a company’s stock plummeting (or soaring) in order to extract money or ruin its reputation in a viral instant.

  • Imagine a convincing fake video or audio clip of Elon Musk, say, disclosing a massive defect the day before a big Tesla launch — the company’s share price would crumple.

What’s happening: For all the talk about fake videos, it’s deepfake audio that has emerged as the first real threat to the private sector.

 

From DSC…along these same lines see:

 

I opted out of facial recognition at the airport — it wasn’t easy — from wired.com by Allie Funk

Excerpt (emphasis DSC):

As a privacy-conscious person, I was uncomfortable boarding this way. I also knew I could opt out. Presumably, most of my fellow fliers did not: I didn’t hear a single announcement alerting passengers how to avoid the face scanners.

As I watched traveler after traveler stand in front of a facial scanner before boarding our flight, I had an eerie vision of a new privacy-invasive status quo. With our faces becoming yet another form of data to be collected, stored, and used, it seems we’re sleepwalking toward a hyper-surveilled environment, mollified by assurances that the process is undertaken in the name of security and convenience. I began to wonder: Will we only wake up once we no longer have the choice to opt out?

Until we have evidence that facial recognition is accurate and reliable—as opposed to simply convenient—travelers should avoid the technology where they can.

 

To figure out how to do so, I had to leave the boarding line, speak with a Delta representative at their information desk, get back in line, then request a passport scan when it was my turn to board. 

 

From DSC:
Readers of this blog will know that I am generally a pro-technology person. That said, there are times when I don’t trust humankind to use the power of some of these emerging technologies appropriately and ethically. Along these lines, I don’t like where facial recognition could be heading…and citizens don’t seem to have effective ways to quickly weigh in on this emerging technology. I find this to be a very troubling situation. How about you?

 

Daniel Christian -- A technology is meant to be a tool; it is not meant to rule.

 

 

From DSC:
A couple of somewhat scary excerpts from Meet Hemingway: The Artificial Intelligence Robot That Can Copy Your Handwriting (from forbes.com by Bernard Marr):

The Handwriting Company now has a robot that can create beautifully handwritten communication that mimics the style of an individual’s handwriting, while a robot from Brown University can replicate handwriting in a variety of languages even though it was trained only on Japanese characters.

Hemingway is The Handwriting Company’s robot that can mimic anyone’s style of handwriting. All that Hemingway’s algorithm needs to mimic an individual’s handwriting is a sample of handwriting from that person.

 

From DSC:
So now there are folks out there who can generate realistic “fakes” using video, handwriting, audio, and more. Super. Without technologies to detect such fakes, things could get ugly…especially as we approach a presidential election next year. I’m trying not to be negative, but it’s hard when the existence of fakes is such a serious topic and problem these days.

 

Addendum on 7/5/19:
AI poised to ruin Internet using “massive tsunami” of fake news — from futurism.com

“Because [AI systems] enable content creation at essentially unlimited scale, and content that humans and search engines alike will have difficulty discerning… we feel it is an incredibly important topic with far too little discussion currently,” Tynski told The Verge.

 

From DSC:
Are you kidding me!? Geo-fencing technology or not, I don’t trust this for one second.


Amazon patents ‘surveillance as a service’ tech for its delivery drones — from theverge.com by Jon Porter
Including technology that cuts out footage of your neighbor’s house

Excerpt:

The patent gives a few hints how the surveillance service could work. It says customers would be allowed to pay for visits on an hourly, daily, or weekly basis, and that drones could be equipped with night vision cameras and microphones to expand their sensing capabilities.

 

From DSC:
I just ran across this recently…what do you think of it?!

 

 

From DSC:
For me, this is extremely disturbing. And if I were a betting man, I’d wager that numerous nations/governments around the world — most certainly including the U.S. — have for years been developing new weapons of warfare based on artificial intelligence, robotics, automation, etc.

The question is, now what do we do?

There are some very hard questions that numerous engineers and programmers need to be asking themselves these days…

By the way, the background audio on the clip above should either be non-existent or far more ominous — this stuff is NOT a joke.

Also see this recent posting. >>

 

Addendum on 6/26/19:

 

Experts in machine learning and military technology say it would be technologically straightforward to build robots that make decisions about whom to target and kill without a “human in the loop” — that is, with no person involved at any point between identifying a target and killing them. And as facial recognition and decision-making algorithms become more powerful, it will only get easier.

 

 

Russian hackers behind ‘world’s most murderous malware’ probing U.S. power grid — from digitaltrends.com by Georgina Torbet

 

U.S. Escalates Online Attacks on Russia’s Power Grid — from nytimes.com by David Sanger and Nicole Perlroth

 

From DSC:
As often happens with humankind’s use of technologies, there is some good and some bad here. Exciting. Troubling. Incredible. Alarming.

Companies, please make sure you’re not handing the keys to a powerful $137,000 Maserati to your “16-year-olds.”

Just because we can…

And to you “16-year-olds” out there…ask for and seek wisdom. Ask yourself whether you should be developing what you are developing. Is it helpful or hurtful to society? Don’t just collect the paycheck. You have a responsibility to humankind.

To whom much is given…

 

How surveillance cameras could be weaponized with A.I. — from nytimes.com by Niraj Chokshi
Advances in artificial intelligence could supercharge surveillance cameras, allowing footage to be constantly monitored and instantly analyzed, the A.C.L.U. warned in a new report.

Excerpt:

In the report, the organization imagined a handful of dystopian uses for the technology. In one, a politician requests footage of his enemies kissing in public, along with the identities of all involved. In another, a life insurance company offers rates based on how fast people run while exercising. And in another, a sheriff receives a daily list of people who appeared to be intoxicated in public, based on changes to their gait, speech or other patterns.

Analysts have valued the market for video analytics at as much as $3 billion, with the expectation that it will grow exponentially in the years to come. The important players include smaller businesses as well as household names such as Amazon, Cisco, Honeywell, IBM and Microsoft.

 

From DSC:
We can no longer let a handful of companies tell the rest of us how our society will be formed, shaped, and run.

For example, Amazon should NOT be able to simply send its robots/drones to deliver packages — that type of decision is NOT theirs alone to make. I have a suspicion that Amazon cares more about earning larger profits and pleasing Wall Street than about our society at large. If Amazon is able to introduce its robots all over the place, what’s to keep any and every company from introducing its own army of robots or drones? If we allow this to occur, it won’t be long before our streets, sidewalks, and airspace are filled with noise and clutter.

So…a question for representatives, senators, legislators, mayors, judges, lawyers, etc.:

  • What should we be building in order to better allow citizens to weigh in on emerging technologies and whether any given emerging technology — or a specific product/service — should be rolled out…or not? 

 

 

Stanford engineers make editing video as easy as editing text — from news.stanford.edu by Andrew Myers
A new algorithm allows video editors to modify talking head videos as if they were editing text – copying, pasting, or adding and deleting words.

Excerpts:

In television and film, actors often flub small bits of otherwise flawless performances. Other times they leave out a critical word. For editors, the only solution so far is to accept the flaws or fix them with expensive reshoots.

Imagine, however, if that editor could modify video using a text transcript. Much like word processing, the editor could easily add new words, delete unwanted ones or completely rearrange the pieces by dragging and dropping them as needed to assemble a finished video that looks almost flawless to the untrained eye.

The work could be a boon for video editors and producers but does raise concerns as people increasingly question the validity of images and videos online, the authors said. However, they propose some guidelines for using these tools that would alert viewers and performers that the video has been manipulated.

 

Addendum on 6/13/19:

 

An image created from a fake video of former president Barack Obama displays elements of facial mapping used in new technology that allows users to create convincing fabricated footage of real people, known as “deepfakes.” (AP)

 

 

Facial recognition smart glasses could make public surveillance discreet and ubiquitous — from theverge.com by James Vincent; with thanks to Mr. Paul Czarapata, Ed.D. out on Twitter for this resource
A new product from UAE firm NNTC shows where this tech is headed next. <– From DSC: though hopefully not!!!

Excerpt:

From train stations and concert halls to sports stadiums and airports, facial recognition is slowly becoming the norm in public spaces. But new hardware formats like these facial recognition-enabled smart glasses could make the technology truly ubiquitous, able to be deployed by law enforcement and private security any time and any place.

The glasses themselves are made by American company Vuzix, while Dubai-based firm NNTC is providing the facial recognition algorithms and packaging the final product.

 

From DSC…I commented out on Twitter:

Thanks, Paul, for this posting – though I find it very troubling. Emerging technologies race out ahead of society. I would be interested in knowing the age of the people developing these technologies and whether they care about asking the tough questions…like “Just because we can, should we be doing this?”

 

Addendum on 6/12/19:

 

State Attempts to Nix Public School’s Facial Recognition Plans — from futurism.com by Kristin Houser
But it might not have the authority to actually stop an upcoming trial.

Excerpt (emphasis DSC):

Chaos Reigns
New York’s Lockport City School District (CSD) was all set to become the first public school district in the U.S. to test facial recognition on its students and staff. But just two days after the school district’s superintendent announced the project’s June 3 start date, the New York State Education Department (NYSED) attempted to put a stop to the trial, citing concerns for students’ privacy. Still, it’s not clear whether the department has the authority to actually put the project on hold — *****the latest sign that the U.S. is in desperate need of clear-cut facial recognition legislation.*****

 

San Francisco becomes first city to bar police from using facial recognition — from cnet.com by Laura Hautala
It won’t be the last city to consider a similar law.


Excerpt:

The city of San Francisco approved an ordinance on Tuesday [5/14/19] barring the police department and other city agencies from using facial recognition technology on residents. It’s the first such ban of the technology in the country.

The ordinance, which passed by a vote of 8 to 1, also creates a process for the police department to disclose what surveillance technology they use, such as license plate readers and cell-site simulators that can track residents’ movements over time. But it singles out facial recognition as too harmful to residents’ civil liberties to even consider using.

“Facial surveillance technology is a huge legal and civil liberties risk now due to its significant error rate, and it will be worse when it becomes perfectly accurate mass surveillance tracking us as we move about our daily lives,” said Brian Hofer, the executive director of privacy advocacy group Secure Justice.

For example, Microsoft asked the federal government in July to regulate facial recognition technology before it becomes more widespread, and said it declined to sell the technology to law enforcement. As it is, the technology is on track to become pervasive in airports and shopping centers, and other tech companies like Amazon are selling it to police departments.

 


 