From Google: New AR features in Search rolling out later this month.

 

 

Along these lines, see:

 

 
 

From LinkedIn.com today:

 


Also see:


 

From DSC:
I don’t like this at all. If this foot gets in the door, vendor after vendor will launch their own hordes of drones. In the future, where will we go if we want some peace and quiet? Will the air be filled with swarms of noisy drones? Will we still be able to see the sun clearly? An exaggeration? Maybe…maybe not.

But, now what? What recourse do citizens have? Readers of this blog know that I’m generally pro-technology. But the folks — especially the youth — working within the FAANG companies (and the like) need to do a far better job asking, “Just because we can do something, should we do it?”

As I’ve said before, we’ve turned over the keys to the $137,000 Maserati to drivers who are just getting out of driving school. Then we wonder, “How did we get to this place?”

 

If you owned this $137,000+ car, would you turn its keys over to your 16- to 25-year-old?!

 

As another example, just because we can…


 

…doesn’t mean we should.

 


 

We Built an ‘Unbelievable’ (but Legal) Facial Recognition Machine — from nytimes.com by Sahil Chinoy

“‘The future of human flourishing depends upon facial recognition technology being banned,’ wrote Woodrow Hartzog, a professor of law and computer science at Northeastern, and Evan Selinger, a professor of philosophy at the Rochester Institute of Technology, last year. ‘Otherwise, people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited.’ Facial recognition is categorically different from other forms of surveillance, Mr. Hartzog said, and uniquely dangerous. Faces are hard to hide and can be observed from far away, unlike a fingerprint. Name and face databases of law-abiding citizens, like driver’s license records, already exist. And for the most part, facial recognition surveillance can be set up using cameras already on the streets.” — Sahil Chinoy; per a weekly e-newsletter from Sam DeBrule at Machine Learnings in Berkeley, CA

Excerpt:

Most people pass through some type of public space in their daily routine — sidewalks, roads, train stations. Thousands walk through Bryant Park every day. But we generally think that a detailed log of our location, and a list of the people we’re with, is private. Facial recognition, applied to the web of cameras that already exists in most cities, is a threat to that privacy.

To demonstrate how easy it is to track people without their knowledge, we collected public images of people who worked near Bryant Park (available on their employers’ websites, for the most part) and ran one day of footage through Amazon’s commercial facial recognition service. Our system detected 2,750 faces from a nine-hour period (not necessarily unique people, since a person could be captured in multiple frames). It returned several possible identifications, including one frame matched to a head shot of Richard Madonna, a professor at the SUNY College of Optometry, with an 89 percent similarity score. The total cost: about $60.
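The Times piece doesn’t publish its code, but the workflow it describes maps closely onto Amazon Rekognition’s face-collection APIs (the same commercial service named above). A minimal sketch of that flow using boto3 follows; the collection name, file paths, person IDs, and the 80% threshold are all illustrative assumptions on my part, not details from the article:

```python
# Sketch only: index known headshots into a Rekognition collection, then
# search a single extracted video frame against it. The collection name,
# file paths, IDs, and threshold are illustrative assumptions.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")
COLLECTION = "bryant-park-demo"  # hypothetical collection ID

rekognition.create_collection(CollectionId=COLLECTION)

# 1) Index publicly available headshots (e.g., from employer websites).
for person_id, path in [("person_001", "headshots/person_001.jpg")]:
    with open(path, "rb") as f:
        rekognition.index_faces(
            CollectionId=COLLECTION,
            Image={"Bytes": f.read()},
            ExternalImageId=person_id,
        )

# 2) Search one video frame for matches above a similarity threshold.
with open("frames/frame_0001.jpg", "rb") as f:
    resp = rekognition.search_faces_by_image(
        CollectionId=COLLECTION,
        Image={"Bytes": f.read()},
        FaceMatchThreshold=80,
        MaxFaces=5,
    )

for match in resp["FaceMatches"]:
    print(match["Face"]["ExternalImageId"], round(match["Similarity"], 1))
```

Rekognition bills per image analyzed, which helps explain how running a day’s worth of extracted frames could come in around the roughly $60 total the authors report.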

 

 

 

 

From DSC:
What do you think about this emerging technology and its potential impact on our society — and on other societies, such as China’s? Again I ask: What kind of future do we want?

As for me, my face is against the use of facial recognition technology in the United States — as I don’t trust where this could lead.

This wild, wild west situation continues to develop. For example, note how AI and facial recognition get their foot in the door via technologies installed years ago:

The cameras in Bryant Park were installed more than a decade ago so that people could see whether the lawn was open for sunbathing, for example, or check how busy the ice skating rink was in the winter. They are not intended to be a security device, according to the corporation that runs the park.

So Amazon’s use of facial recognition is but another foot in the door. 

This needs to be stopped. Now.

 

Facial recognition technology is a menace disguised as a gift. It’s an irresistible tool for oppression that’s perfectly suited for governments to display unprecedented authoritarian control and an all-out privacy-eviscerating machine.

We should keep this Trojan horse outside of the city. (source)

 


Example articles from the Privacy Project:

  • James Bennet: Do You Know What You’ve Given Up?
  • A. G. Sulzberger: How The Times Thinks About Privacy
  • Samantha Irby: I Don’t Care. I Love My Phone.
  • Tim Wu: How Capitalism Betrayed Privacy

 

 

26 incredibly useful things you didn’t know Google Calendar could do — from fastcompany.com by JR Raphael
Upgrade your agenda with this cornucopia of advanced options, shortcuts, and features for Google Calendar.

Excerpt:

If you rely on Google Calendar like I do—or even if you just use it casually to keep track of occasional appointments—you’ll get more out of it once you’ve discovered all of its advanced tricks and time-saving possibilities. And if you’re too busy to tackle this right now, no worries: I happen to know a spectacular tool for setting reminders and making sure you never forget anything on your agenda.

(Unless otherwise noted, all the instructions mentioned below are specific to Calendar’s web version.)

 

27 Incredibly Useful Things You Didn’t Know Chrome Could Do — from fastcompany.com by JR Raphael
Give your internet experience a jolt of fresh energy with these easily overlooked features, options, and shortcuts for Google’s browser.

 

Microsoft rolls out healthcare bot: How it will change healthcare industry — from yourtechdiet.com by Brian Curtis

Excerpt:

AI and the Healthcare Industry
This technology is evidently the game changer in the healthcare industry. According to the reports by Frost & Sullivan, the AI market for healthcare is likely to experience a CAGR of 40% by 2021, and has the potential to change industry outcomes by 30-40%, while cutting treatment costs in half.

In the words of Satya Nadella, “AI is the runtime that is going to shape all of what we do going forward in terms of the applications as well as the platform advances”.

Here are a few ways Microsoft’s Healthcare Bot will shape the Healthcare Industry…
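For a sense of scale on that Frost & Sullivan projection (the arithmetic below is mine, not the article’s): a 40% compound annual growth rate multiplies the market by 1.4 each year, so the market nearly triples over three years if the projection holds.

```latex
V_n = V_0\,(1 + r)^n
\quad\Rightarrow\quad
V_3 = V_0\,(1.40)^3 \approx 2.74\,V_0
```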

 

Also see:

  • Why AI will make healthcare personal — from weforum.org by Peter Schwartz
    Excerpt:
    Digital assistants to provide a 24/7 helping hand
    The digital assistants of the future will be full-time healthcare companions, able to monitor a patient’s condition, transmit results to healthcare providers, and arrange virtual and face-to-face appointments. They will help manage the frequency and dosage of medication, and provide reliable medical advice around the clock. They will remind doctors of patients’ details, ranging from previous illnesses to past drug reactions. And they will assist older people to access the care they need as they age, including hospice care, and help to mitigate the fear and loneliness many elderly people feel.

 

  • Introducing New Alexa Healthcare Skills — from developer.amazon.com by Rachel Jiang
    Excerpts:
    The new healthcare skills that launched today are:
    • Express Scripts (a leading Pharmacy Services Organization)
    • Cigna Health Today (by Cigna, the global health service company)
    • My Children’s Enhanced Recovery After Surgery (ERAS) (by Boston Children’s Hospital, a leading children’s hospital)
    • Swedish Health Connect (by Providence St. Joseph Health, a healthcare system with 51 hospitals across 7 states and 829 clinics)
    • Atrium Health (a healthcare system with more than 40 hospitals and 900 care locations throughout North and South Carolina and Georgia)
    • Livongo (a leading consumer digital health company that creates new and different experiences for people with chronic conditions)

Voice as the Next Frontier for Conveniently Accessing Healthcare Services
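The announcement above doesn’t include any of the partners’ code, but under the hood each of these offerings is a standard Alexa skill: a cloud endpoint that receives intents and returns speech. A purely hypothetical sketch using Amazon’s ask-sdk-core library for Python (the intent name and reply text are invented for illustration):

```python
# Hypothetical Alexa skill handler; the intent name and reply are invented.
# A real HIPAA-eligible healthcare skill would call a secured backend and
# handle account linking rather than return canned text.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.utils import is_intent_name

sb = SkillBuilder()

@sb.request_handler(can_handle_func=is_intent_name("NextDoseIntent"))
def next_dose_handler(handler_input):
    # Respond to a request like "ask my pharmacy when my next dose is."
    speech = "Your next scheduled dose is at 8 PM."
    return handler_input.response_builder.speak(speech).response

# Entry point when the skill's backend is hosted as an AWS Lambda function.
lambda_handler = sb.lambda_handler()
```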

 

  • Got health care skills? Big Tech wants to hire you — from linkedin.com by Jaimy Lee
    Excerpt:
    As tech giants like Amazon, Apple and Google place bigger and bigger bets on the U.S. health care system, it should come as no surprise that the rate at which they are hiring workers with health care skills is booming. We took a deep dive into the big tech companies on this year’s LinkedIn Top Companies list in the U.S., uncovering the most popular health care skills among their workers — and what that says about the future of health care in America.
 

Check out the top 10:

1) Alphabet (Google); Internet
2) Facebook; Internet
3) Amazon; Internet
4) Salesforce; Internet
5) Deloitte; Management Consulting
6) Uber; Internet
7) Apple; Consumer Electronics
8) Airbnb; Internet
9) Oracle; Information Technology & Services
10) Dell Technologies; Information Technology & Services

 

The growing marketplace for AI ethics — from forbes.com by Forbes Insights with Intel AI

Excerpt:

As companies have raced to adopt artificial intelligence (AI) systems at scale, they have also sped through, and sometimes spun out, in the ethical obstacle course AI often presents.

AI-powered loan and credit approval processes have been marred by unforeseen bias. Same with recruiting tools. Smart speakers have secretly turned on and recorded thousands of minutes of audio of their owners.

Unfortunately, there’s no industry-standard, best-practices handbook on AI ethics for companies to follow*—at least not yet. Some large companies, including Microsoft and Google, are developing their own internal ethical frameworks.

A number of think tanks, research organizations, and advocacy groups, meanwhile, have been developing a wide variety of ethical frameworks and guidelines for AI.

 

*Insert from DSC:
Read this as a very powerful, chaotic, massive WILD, WILD WEST. Can law schools, legislatures, governments, businesses, and more keep up with this new pace of technological change?

 

Also see:

 
 

Why AI is a threat to democracy — and what we can do to stop it — from technologyreview.com by Karen Hao and Amy Webb

Excerpt:

Universities must create space in their programs for hybrid degrees. They should incentivize CS students to study comparative literature, world religions, microeconomics, cultural anthropology and similar courses in other departments. They should champion dual degree programs in computer science and international relations, theology, political science, philosophy, public health, education and the like. Ethics should not be taught as a stand-alone class, something to simply check off a list. Schools must incentivize even tenured professors to weave complicated discussions of bias, risk, philosophy, religion, gender, and ethics in their courses.

One of my biggest recommendations is the formation of GAIA, what I call the Global Alliance on Intelligence Augmentation. At the moment people around the world have very different attitudes and approaches when it comes to data collection and sharing, what can and should be automated, and what a future with more generally intelligent systems might look like. So I think we should create some kind of central organization that can develop global norms and standards, some kind of guardrails to imbue not just American or Chinese ideals inside AI systems, but worldviews that are much more representative of everybody.

Most of all, we have to be willing to think about this much longer-term, not just five years from now. We need to stop saying, “Well, we can’t predict the future, so let’s not worry about it right now.” It’s true, we can’t predict the future. But we can certainly do a better job of planning for it.

 

 

 

Google and Microsoft warn that AI may do dumb things — from wired.com by Tom Simonite

Excerpt:

Alphabet likes to position itself as a leader in AI research, but it was six months behind rival Microsoft in warning investors about the technology’s ethical risks. The AI disclosure in Google’s latest filing reads like a trimmed down version of much fuller language Microsoft put in its most recent annual SEC report, filed last August:

“AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions. These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm.”

 

Chinese company leaves Muslim-tracking facial recognition database exposed online — from zdnet.com by Catalin Cimpanu
Researcher finds one of the databases used to track Uyghur Muslim population in Xinjiang.

Excerpt:

One of the facial recognition databases that the Chinese government is using to track the Uyghur Muslim population in the Xinjiang region has been left open on the internet for months, a Dutch security researcher told ZDNet.

The database belongs to a Chinese company named SenseNets, which according to its website provides video-based crowd analysis and facial recognition technology.

The user data wasn’t just benign usernames, but highly detailed and highly sensitive information that someone would usually find on an ID card, Gevers said. The researcher saw user profiles with information such as names, ID card numbers, ID card issue date, ID card expiration date, sex, nationality, home addresses, dates of birth, photos, and employer.

Some of the descriptive names associated with the “trackers” contained terms such as “mosque,” “hotel,” “police station,” “internet cafe,” “restaurant,” and other places where public cameras would normally be found.
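The excerpt doesn’t say which database technology SenseNets was running, but exposures like this typically come down to a data store listening on the public internet with authentication switched off. As a purely defensive illustration, and assuming MongoDB and the pymongo client (both assumptions on my part, with a placeholder hostname), here is how one might verify that one’s own instance refuses anonymous access:

```python
# Defensive check only: can an anonymous client enumerate databases on
# this server? MongoDB/pymongo and the hostname are assumptions, not
# details from the article.
from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

client = MongoClient("mongodb://db.example.internal:27017/",
                     serverSelectionTimeoutMS=3000)

try:
    names = client.list_database_names()  # succeeds only if auth is not enforced
    print("EXPOSED: anonymous client can list databases:", names)
except OperationFailure:
    print("OK: the server requires authentication.")
except ServerSelectionTimeoutError:
    print("OK: the server is not reachable from this network.")
```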

 

From DSC:
Readers of this blog will know that I’m generally pro-technology. But focusing especially on that last article: to me, privacy is key here. Which group of people, from which nation, is next? Will Country A next be tracking Christians? Will Country B be tracking people of a given sexual orientation? Will Country C be tracking people with some other characteristic?

Where does it end? Who gets to decide? What will be the costs of being tracked, or of having whatever characteristic one’s government happens to be tracking? What forums are there for combating technologies, or features of technologies, that we don’t like or want?

We need forums/channels for raising awareness of and voting on these emerging technologies. We need informed legislators, senators, lawyers, and citizens…and we need new laws here…asap.

 

 

 

The real reason tech struggles with algorithmic bias — from wired.com by Yael Eisenstat

Excerpts:

ARE MACHINES RACIST? Are algorithms and artificial intelligence inherently prejudiced? Do Facebook, Google, and Twitter have political biases? Those answers are complicated.

But if the question is whether the tech industry is doing enough to address these biases, the straightforward response is no.

Humans cannot wholly avoid bias, as countless studies and publications have shown. Insisting otherwise is an intellectually dishonest and lazy response to a very real problem.

In my six months at Facebook, where I was hired to be the head of global elections integrity ops in the company’s business integrity division, I participated in numerous discussions about the topic. I did not know anyone who intentionally wanted to incorporate bias into their work. But I also did not find anyone who actually knew what it meant to counter bias in any true and methodical way.

 

But the company has created its own sort of insular bubble in which its employees’ perception of the world is the product of a number of biases that are engrained within the Silicon Valley tech and innovation scene.
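Part of what it could mean to counter bias “in any true and methodical way” is simply to measure it routinely. As one small, toy illustration (the data and group labels below are fabricated), a few lines of Python are enough to compare a model’s positive-prediction rates across demographic groups (a crude demographic-parity check, not a full fairness audit):

```python
# Toy demographic-parity check: compare positive-prediction rates by group.
# All data here is fabricated purely for illustration.
import numpy as np

groups      = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])
predictions = np.array([ 1,   0,   1,   0,   0,   1,   0,   1 ])  # model outputs

rates = {g: float(predictions[groups == g].mean()) for g in np.unique(groups)}
print("Positive-prediction rate by group:", rates)
print("Demographic-parity gap:", abs(rates["A"] - rates["B"]))
```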

 

 

AR will spark the next big tech platform — call it Mirrorworld — from wired.com by Kevin Kelly

Excerpt:

It is already under construction. Deep in the research labs of tech companies around the world, scientists and engineers are racing to construct virtual places that overlay actual places. Crucially, these emerging digital landscapes will feel real; they’ll exhibit what landscape architects call placeness. The Street View images in Google Maps are just facades, flat images hinged together. But in the mirrorworld, a virtual building will have volume, a virtual chair will exhibit chairness, and a virtual street will have layers of textures, gaps, and intrusions that all convey a sense of “street.”

The mirrorworld—a term first popularized by Yale computer scientist David Gelernter—will reflect not just what something looks like but its context, meaning, and function. We will interact with it, manipulate it, and experience it like we do the real world.

 

Also see:
Google Maps in augmented reality points you in the right direction — from mashable.com by Sasha Lekach

 

 

Bobst launches augmented reality helpline — from proprint.com.au by Sheree Young

Excerpt:

Swiss packaging and label equipment supplier Bobst has launched a new augmented reality smart headset to help answer customer questions.

Rapid problem solving thanks to a new augmented reality helpline service introduced by Swiss packaging and label equipment supplier Bobst stands to save printers time and money, the company says.

The Helpline Plus AR innovation provides a remote assistance service to Bobst’s customers using a smart headset with augmented reality glasses. The technology is being gradually rolled out globally, Bobst says.

Customers can use the headset to contact technical experts and iron out any issues they may be having as well as receive real time advice and support.

 

 

 

Making New Drugs With a Dose of Artificial Intelligence — from nytimes.com by Cade Metz

Excerpt:

DeepMind specializes in “deep learning,” a type of artificial intelligence that is rapidly changing drug discovery science. A growing number of companies are applying similar methods to other parts of the long, enormously complex process that produces new medicines. These A.I. techniques can speed up many aspects of drug discovery and, in some cases, perform tasks typically handled by scientists.

“It is not that machines are going to replace chemists,” said Derek Lowe, a longtime drug discovery researcher and the author of In the Pipeline, a widely read blog dedicated to drug discovery. “It’s that the chemists who use machines will replace those that don’t.”

 

 

 
