Emerging Tech Trend: Patient-Generated Health Data — from futuretodayinstitute.com — Newsletter Issue 124

Excerpt:

Near-Futures Scenarios (2023 – 2028):

Pragmatic: Big tech continues to develop apps that are either indispensably convenient, irresistibly addictive, or both, and we pay for them not with cash but with the data we (sometimes unwittingly) let the apps capture. For health care and medical insurance apps, though, the stakes could literally be life-and-death. Consumers receive discounted premiums, co-pays, diagnostics and prescription fulfillment, but the data they give up in exchange leaves them more vulnerable to manipulation and invasion of privacy.

Catastrophic: Profit-driven drug makers exploit private health profiles and begin working with the Big Nine. They use data-based targeting to overprescribe to patients, netting themselves billions of dollars. Big Pharma targets and preys on people’s addictions, mental health predispositions and more, which, while undetectable on an individual level, take a widespread societal toll.

Optimistic: Health data enables prescient preventative care. A.I. discerns patterns within gargantuan data sets that are otherwise virtually undetectable to humans. Accurate predictive algorithms identify complex combinations of risk factors for cancer or Parkinson’s, offer early screening and testing to high-risk patients and encourage lifestyle shifts or treatments to eliminate or delay the onset of serious diseases. A.I. and health data create a utopia of public health. We happily relinquish our privacy for a greater societal good.

Watchlist: Amazon; Manulife Financial; GE Healthcare; Meditech; Allscripts; eClinicalWorks; Cerner; Validic; HumanAPI; Vivify; Apple; IBM; Microsoft; Qualcomm; Google; Medicare; Medicaid; national health systems; insurance companies.

 

A face-scanning algorithm increasingly decides whether you deserve the job — from washingtonpost.com by Drew Harwell
HireVue claims it uses artificial intelligence to decide who’s best for a job. Outside experts call it ‘profoundly disturbing.’

Excerpt:

An artificial intelligence hiring system has become a powerful gatekeeper for some of America’s most prominent employers, reshaping how companies assess their workforce — and how prospective employees prove their worth.

Designed by the recruiting-technology firm HireVue, the system uses candidates’ computer or cellphone cameras to analyze their facial movements, word choice and speaking voice before ranking them against other applicants based on an automatically generated “employability” score.

 

The system, they argue, will assume a critical role in helping decide a person’s career. But they doubt it even knows what it’s looking for: Just what does the perfect employee look and sound like, anyway?

“It’s a profoundly disturbing development that we have proprietary technology that claims to differentiate between a productive worker and a worker who isn’t fit, based on their facial movements, their tone of voice, their mannerisms,” said Meredith Whittaker, a co-founder of the AI Now Institute, a research center in New York.

 

From DSC:
If you haven’t been screened out by an algorithm from an Applicant Tracking System recently, then you haven’t been looking for a job in the last few years. If that’s the case:

  • You might not be very interested in this posting.
  • You will be very surprised in the future, when you do need to search for a new job.

Because the truth is, it’s very difficult to get a human being to even look at your resume and/or to meet you in person. The above posting/article should disturb you even more. I don’t think that the programmers have programmed everything inside an experienced HR professional’s mind.

 

Also see:

  • In case after case, courts reshape the rules around AI — from muckrock.com
    AI Now Institute recommends improvements and highlights key AI litigation
    Excerpt:
    When undercover officers with the Jacksonville Sheriff’s Office bought crack cocaine from someone in 2015, they couldn’t actually identify the seller. Less than a year later, though, Willie Allen Lynch was sentenced to 8 years in prison, picked through a facial recognition system. He’s still fighting in court over how the technology was used, and his case and others like it could ultimately shape the use of algorithms going forward, according to a new report.
 

Deepfakes: When a picture is worth nothing at all — from law.com by Katherine Forrest

Excerpt:

“Deepfakes” is the name for highly realistic, falsified imagery and sound recordings; they are digitized and personalized impersonations. Deepfakes are made by using AI-based facial and audio recognition and reconstruction technology; AI algorithms are used to predict facial movements as well as vocal sounds. In her Artificial Intelligence column, Katherine B. Forrest explores the legal issues likely to arise as deepfakes become more prevalent.

 

YouTube’s algorithm hacked a human vulnerability, setting a dangerous precedent — from which-50.com by Andrew Birmingham

Excerpt (emphasis DSC):

Even as YouTube’s recommendation algorithm was rolled out with great fanfare, the fuse was already burning. A project of The Google Brain and designed to optimise engagement, it did something unforeseen — and potentially dangerous.

Today, we are all living with the consequences.

As Zeynep Tufekci, an associate professor at the University of North Carolina, explained to attendees of Hitachi Vantara’s Next 2019 conference in Las Vegas this week, “What the developers did not understand at the time is that YouTube’s algorithm had discovered a human vulnerability. And it was using this [vulnerability] at scale to increase YouTube’s engagement time — without a single engineer thinking, ‘is this what we should be doing?’”

 

The consequence of the vulnerability — a natural human tendency to engage with edgier ideas — led to YouTube’s users being exposed to increasingly extreme content, irrespective of their preferred areas of interest.

“What they had done was use machine learning to increase watch time. But what the machine learning system had done was to discover a human vulnerability. And that human vulnerability is that things that are slightly edgier are more attractive and more interesting.”
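The dynamic described above — an optimizer told only to maximize watch time “discovering” that edgier content holds attention — can be sketched with a toy simulation. Everything here is invented (the user model, the numbers, the catalog); it only illustrates how a naive engagement maximizer drifts toward the extreme end of a catalog without anyone programming that goal:

```python
import random

random.seed(0)

# Toy catalog: each video has an "edginess" score in [0, 1].
videos = [{"id": i, "edginess": i / 9} for i in range(10)]

def simulated_watch_time(video):
    """Hypothetical user model: edgier content holds attention slightly longer."""
    return 1.0 + 2.0 * video["edginess"] + random.gauss(0, 0.1)

# A naive epsilon-greedy optimizer whose only objective is average watch time.
estimates = {v["id"]: 0.0 for v in videos}
counts = {v["id"]: 0 for v in videos}

for _ in range(2000):
    # Explore occasionally; otherwise recommend the current best estimate.
    if random.random() < 0.1:
        v = random.choice(videos)
    else:
        v = max(videos, key=lambda x: estimates[x["id"]])
    w = simulated_watch_time(v)
    counts[v["id"]] += 1
    estimates[v["id"]] += (w - estimates[v["id"]]) / counts[v["id"]]  # running mean

best = max(videos, key=lambda x: estimates[x["id"]])
print(best["edginess"])  # the optimizer converges on the edgiest content
```

The loop never sees an “edginess” objective; it simply learns that the edgiest video yields the longest watch times — which is exactly the vulnerability Tufekci describes being exploited at scale.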

 

From DSC:
Just because we can…

 

 

Can you make AI fairer than a judge? Play our courtroom algorithm game — from technologyreview.com by Karen Hao and Jonathan Stray
The US criminal legal system uses predictive algorithms to try to make the judicial process less biased. But there’s a deeper problem.

Excerpt:

As a child, you develop a sense of what “fairness” means. It’s a concept that you learn early on as you come to terms with the world around you. Something either feels fair or it doesn’t.

But increasingly, algorithms have begun to arbitrate fairness for us. They decide who sees housing ads, who gets hired or fired, and even who gets sent to jail. Consequently, the people who create them—software engineers—are being asked to articulate what it means to be fair in their code. This is why regulators around the world are now grappling with a question: How can you mathematically quantify fairness? 

This story attempts to offer an answer. And to do so, we need your help. We’re going to walk through a real algorithm, one used to decide who gets sent to jail, and ask you to tweak its various parameters to make its outcomes more fair. (Don’t worry—this won’t involve looking at code!)

The algorithm we’re examining is known as COMPAS, and it’s one of several different “risk assessment” tools used in the US criminal legal system.
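One way to see why “mathematically quantify fairness” is a hard question: common fairness metrics can conflict with each other. The sketch below uses a dozen invented records (not the real COMPAS data) and a simple score threshold, and shows that applying the same threshold to both groups still yields different false-positive rates when the groups’ base rates differ:

```python
# Hypothetical defendants: (group, risk_score, reoffended).
# Group B has a higher base rate in this invented data.
data = [
    ("A", 2, False), ("A", 3, False), ("A", 5, False), ("A", 6, True),
    ("A", 7, True), ("A", 8, False), ("B", 3, False), ("B", 5, True),
    ("B", 6, True), ("B", 7, False), ("B", 8, True), ("B", 9, True),
]

def rates(group, threshold):
    """Detention rate and false-positive rate for one group at a given threshold."""
    rows = [d for d in data if d[0] == group]
    detained = [d for d in rows if d[1] >= threshold]
    false_pos = [d for d in detained if not d[2]]   # detained but did not reoffend
    negatives = [d for d in rows if not d[2]]
    return len(detained) / len(rows), len(false_pos) / len(negatives)

for g in ("A", "B"):
    det, fpr = rates(g, threshold=6)
    print(f"group {g}: detention rate {det:.2f}, false-positive rate {fpr:.2f}")
```

This mirrors the well-known result that when two groups have different base rates, a risk tool generally cannot equalize false-positive rates and predictive parity at the same time — someone writing the code has to choose which kind of “fair” wins.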

 

But whether algorithms should be used to arbitrate fairness in the first place is a complicated question. Machine-learning algorithms are trained on “data produced through histories of exclusion and discrimination,” writes Ruha Benjamin, an associate professor at Princeton University, in her book Race After Technology. Risk assessment tools are no different. The greater question about using them—or any algorithms used to rank people—is whether they reduce existing inequities or make them worse.

 

You can also see change in these articles:

 

 

Google’s war on deepfakes: As election looms, it shares a ton of AI-faked videos — from zdnet.com by Liam Tung
Google has created 3,000 videos using actors and manipulation software to help improve detection.

Excerpt:

Google has released a huge database of deepfake videos that it’s created using paid actors. It hopes the database will bolster systems designed to detect AI-generated fake videos.

With the 2020 US Presidential elections looming, the race is on to build better systems to detect deepfake videos that could be used to manipulate and divide public opinion.

Earlier this month, Facebook and Microsoft announced a $10m project to create deepfake videos to help build systems for detecting them.

 

From DSC:
The two postings below show the need for more collaboration and the use of teams:


 

The future of law and computational technologies: Two sides of the same coin — from legaltechlever.com by Daniel Linna Jr.

Excerpt (emphasis DSC):

An increasing number of lawyers today work with allied professionals to improve processes, better manage projects, embrace data-driven methods, and leverage technology to improve legal services and systems. Legal-services and lawyer regulations are evolving. And basic technologies and AI are slowly making their way into the legal industry, from legal aid organizations and courts to large law firms, corporate legal departments, and governments.

If we are to realize the potential to improve society with computational technologies, law, regulation, and ethical principles must be front and center at every stage, from problem definition, design, data collection, and data cleaning to training, deployment, and monitoring and maintenance of products and systems. To achieve this, technologists and lawyers must collaborate and share a common vocabulary. Lawyers must learn about technology, and technologists must learn about law. Multidisciplinary teams with a shared commitment to law, regulation, and ethics can proactively address today’s AI challenges, and advance our collaborative problem-solving capabilities to address tomorrow’s increasingly complex problems. Lawyers and technologists must work together to create a better future for everyone.

 

From DSC:
As with higher education in general, we need more team-based efforts in the legal realm as well as more TrimTab Groups.

 

 

Excerpts:

Why does this distinction matter? Because law—like so many industries—is undergoing a tectonic shift. It is morphing from a lawyer dominated, practice-centric, labor-intensive guild to a tech-enabled, process and data-driven, multi-disciplinary global industry. The career paths, skills, and expectations of lawyers are changing. So too are how, when, and on what financial terms they are engaged; with whom and from what delivery models they work; their performance metrics, and the resources—human and machine—they collaborate with.  Legal practice is shrinking and the business of delivering legal services is expanding rapidly.

Law is no longer the exclusive province of lawyers. Legal knowledge is not the sole element of legal delivery—business and technological competencies are equally important. It’s a new ballgame—one that most lawyers are unprepared for.

How did we get here and are legal careers for most a dead end? Spoiler alert: there’s tremendous opportunity in the legal industry. The caveat: all lawyers must have basic business and technological competency whether they pursue practice careers or leverage their legal knowledge as a skill in legal delivery and/or allied professional careers.

Upskilling the legal profession is already a key issue, a requisite for career success. Lawyers must learn new skills like project management, data analytics, deployment of technology, and process design to leverage their legal knowledge. Simply knowing the law will not cut it anymore.

 

From DSC:
I really appreciate the work of the above two men whose articles I’m highlighting here. I continue to learn a lot from them and am grateful for their work.

That said, just like it’s a lot to expect a faculty member (in higher ed) who teaches online to be not only a subject matter expert, but also skilled in teaching, web design, graphic design, navigation design, information design, audio design, video editing, etc., it’s a lot to expect a lawyer to be a skilled lawyer, business person, and technician. I realize that Mark was only calling for a basic level of competency…but even that can be difficult to achieve at times. Why? Because people have different skillsets, passions, and interests. One might be a good lawyer, but not a solid technician…or vice versa. One might be a solid professor, but not very good with graphic design.

 

Top jobs in 2040 will involve virtual reality, artificial intelligence & robotics — from themanufacturer.com by Jonny Williamson
Emerging technologies such as virtual reality (VR), artificial intelligence (AI) and robotics will strongly influence the careers we do in the future, according to new research from BAE Systems.

Excerpt:

  • Almost half of young people (47%) aged 16-24 believe that one day they will work in a role that doesn’t exist yet, but only one in five (18%) think they are equipped with the skills required to future-proof their careers.

  • Three-quarters (74%) also feel that they are not getting enough information about careers that will be available in the future.

 

 

A momentous change in the legal industry garnering little attention — from forbes.com by Hendrik Pretorius

Excerpt:

The needed evolution in legal service delivery may receive a big push in the near future. Surprisingly, this issue seems to be flying under the radar for many in the legal industry.

The California Bar, through its Task Force on Access Through Innovation of Legal Services, created in 2018, seeks to “identify possible regulatory changes to enhance the delivery of, and access to, legal services through the use of technology, including artificial intelligence and online legal service delivery models.”

A report commissioned by this task force stated that “[m]odifying the ethics rules to facilitate greater collaboration across law and other disciplines will (1) drive down costs; (2) improve access; (3) increase predictability and transparency of legal services; (4) aid the growth of new businesses; and (5) elevate the reputation of the legal profession.”

 

Herein lies one of the fundamental challenges within the legal industry: viewing the law as the delivery of a legal product, and understanding that this delivery needs to revolve around the user, not the lawyer. There is a real and growing divide between the current model of legal service delivery put forth by a traditional law firm model and what the public wants. Consumers have raised the bar based on what they are experiencing in interacting with other businesses in other industries.

I love what many of these legal tech companies are doing: They are applying standards from outside the entrenched legal industry and changing entire delivery models. This should be a real wake-up call. But how can law firms truly compete and play a role?

 

Uh-oh: Silicon Valley is building a Chinese-style social credit system — from fastcompany.com by Mike Elgan
In China, scoring citizens’ behavior is official government policy. U.S. companies are increasingly doing something similar, outside the law.

Excerpts (emphasis DSC):

Have you heard about China’s social credit system? It’s a technology-enabled, surveillance-based nationwide program designed to nudge citizens toward better behavior. The ultimate goal is to “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step,” according to the Chinese government.

In place since 2014, the social credit system is a work in progress that could evolve by next year into a single, nationwide point system for all Chinese citizens, akin to a financial credit score. It aims to punish for transgressions that can include membership in or support for the Falun Gong or Tibetan Buddhism, failure to pay debts, excessive video gaming, criticizing the government, late payments, failing to sweep the sidewalk in front of your store or house, smoking or playing loud music on trains, jaywalking, and other actions deemed illegal or unacceptable by the Chinese government.

IT CAN HAPPEN HERE
Many Westerners are disturbed by what they read about China’s social credit system. But such systems, it turns out, are not unique to China. A parallel system is developing in the United States, in part as the result of Silicon Valley and technology-industry user policies, and in part by surveillance of social media activity by private companies.

Here are some of the elements of America’s growing social credit system.

 

If current trends hold, it’s possible that in the future a majority of misdemeanors and even some felonies will be punished not by Washington, D.C., but by Silicon Valley. It’s a slippery slope away from democracy and toward corporatocracy.

 

From DSC:
Who’s to say what gains a citizen points and what subtracts from their score? If one believes a certain thing, is that a plus or a minus? And what might be tied to someone’s score? The ability to obtain food? Medicine/healthcare? Clothing? Social Security payments? Other?
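To make the question concrete: a social credit score is, at bottom, a weighted sum, and every weight is someone’s policy choice. The sketch below uses entirely invented behaviors and point values — the point is how arbitrary those choices would be, not how any real system works:

```python
# Invented weights. Who decides that gaming costs more points than jaywalking —
# and that criticizing the scorer costs the most of all?
WEIGHTS = {
    "paid_debt_on_time": +10,
    "jaywalking": -5,
    "excessive_gaming": -8,
    "criticized_platform": -20,
}

def social_score(events, base=1000):
    """Sum the (arbitrary) weights of observed behaviors onto a base score."""
    return base + sum(WEIGHTS.get(e, 0) for e in events)

score = social_score(["paid_debt_on_time", "jaywalking", "criticized_platform"])
print(score)  # 1000 + 10 - 5 - 20 = 985
```

Ten lines of code, and every number in it is a value judgment — which is precisely why who controls the weights, and what the score gates access to, matters so much.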

We are giving a huge amount of power to a handful of corporations…trust comes into play…at least for me. Even internally, the big tech companies seem to be struggling with the ethical ramifications of what they’re working on (in a variety of areas).

Is the stage being set for a “Person of Interest” Version 2.0?

 

Amazon, Microsoft, ‘putting world at risk of killer AI’: study — from news.yahoo.com by Issam Ahmed

Excerpt:

Washington (AFP) – Amazon, Microsoft and Intel are among leading tech companies putting the world at risk through killer robot development, according to a report that surveyed major players from the sector about their stance on lethal autonomous weapons.

Dutch NGO Pax ranked 50 companies by three criteria: whether they were developing technology that could be relevant to deadly AI, whether they were working on related military projects, and if they had committed to abstaining from contributing in the future.

“Why are companies like Microsoft and Amazon not denying that they’re currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?” said Frank Slijper, lead author of the report published this week.

Addendum on 8/23/19:

 

Why AIoT Is Emerging As The Future Of Industry 4.0 — from forbes.com by Janakiram MSV

Excerpts:

“By combining AI with industrial IoT, we add an important ability to connected systems – Act.”

AI goes beyond the visualizations by acting on the patterns and correlations from the telemetry data. It plugs the critical gap by taking appropriate actions based on the data. Instead of just presenting the facts to humans to enable them to act, AI closes the loop by automatically taking an action. It essentially becomes the brain of the connected systems.
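A rough sketch of what “closing the loop” means in code (hypothetical device names and thresholds, not taken from the article): instead of only surfacing telemetry to a dashboard for a human to read, the system maps readings directly to actions:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    device: str
    temperature_c: float

def act_on_telemetry(reading, shutdown):
    """Close the loop: act on the data instead of just reporting it."""
    if reading.temperature_c > 90.0:      # hypothetical safety threshold
        shutdown(reading.device)          # automatic action, no human in the loop
        return "shutdown"
    elif reading.temperature_c > 75.0:
        return "alert"                    # escalates to a human, keeps running
    return "ok"

stopped = []
result = act_on_telemetry(Reading("press-7", 93.5), stopped.append)
print(result, stopped)  # shutdown ['press-7']
```

In a real deployment the decision step would be a trained model rather than a fixed threshold, but the architectural point is the same: the “Act” step is wired in, which is what makes the connected system more than a visualization layer.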

 

 

The future of industrial automation lies in the convergence of AI and IoT. Artificial Intelligence of Things will impact almost every industry vertical including automotive, aviation, finance, healthcare, manufacturing and supply chain.

 


 

From DSC:
I’ve often wondered which emerging technologies will be combined with each other to produce something powerful. According to the article referenced above, AI + IoT = AIoT is something to put on the radar.  I’m not at all crazy about the word “lethal” being used in this article/context though — I certainly hope that’s not the case.

 


 

Also relevant/see:

 

Artificial intelligence (AI) has, of late, been the subject of so many announcements, proclamations, predictions and premonitions that it could occupy its own 24-hour cable news channel. In technology circles, it has become a kind of holy grail, akin to fire, the wheel or the steam engine in terms of world-changing potential. Whether these forecasts come to pass is still an open question. What is less in doubt are the vast ethical ramifications of AI development and use, and the need to address them before AI becomes a part of everyday life.

 

Israeli tech co. uses virtual & augmented reality tech to help Christians engage with the Bible — with thanks to Heidi McDow for the resource
Compedia Partners with U.S. Clients to Utilize Company’s Biblical Knowledge and Technological Expertise

TEL AVIV, Israel, Aug. 7, 2019 – Compedia, an Israel-based business-to-business tech company, is using virtual reality technology to service Christian clients with products that help users engage with the Bible in a meaningful way.

Compedia partnered with The Museum of the Bible in Washington, D.C., which attracted more than 1 million visitors during its first year of operation, to help bring the museum’s exhibits to life. With the help of Compedia’s innovation, visitors to the museum can immerse themselves in 34 different biblical sites through augmented reality tours, allowing them to soar across the Sea of Galilee, climb the stairs of the Temple Mount, explore the Holy Sepulchre and so much more. In addition to creating on-site attractions for The Museum of the Bible, Compedia also created a Bible curriculum for high-school students that includes interactive maps, 3-D guides, quizzes, trivia and more.

“Many people are dubious of augmented and virtual reality, but we see how they can be used for God’s glory,” said Illutowich. “When clients recognize how attentive users are to the Bible message when it’s presented through augmented and virtual reality, they see the power of it, too.”

In addition to their passion for furthering Bible education, Compedia is committed to developing products that help educators engage students of all types. The company is currently in partnership with a number of educational institutions and schools around the U.S. to utilize its interactive technology both in the classroom and in the online learning space. Other client collaborations include Siemens, Sony and Intel, to name a few.

About Compedia
Compedia uses cutting-edge technology to help students succeed by making education more fun, engaging, and meaningful. With over 30 years of experience in developing advanced learning solutions for millions of people in 50 countries and 35 languages, Compedia offers expertise in visual computing, augmented reality, virtual reality and advanced systems, as well as instructional design and UX.

 

A handful of US cities have banned government use of facial recognition technology due to concerns over its accuracy and privacy. WIRED’s Tom Simonite talks with computer vision scientist and lawyer Gretchen Greene about the controversy surrounding the use of this technology.

 

 

The coming deepfakes threat to businesses — from axios.com by Kaveh Waddell and Jennifer Kingson

Excerpt:

In the first signs of a mounting threat, criminals are starting to use deepfakes — starting with AI-generated audio — to impersonate CEOs and steal millions from companies, which are largely unprepared to combat them.

Why it matters: Nightmare scenarios abound. As deepfakes grow more sophisticated, a convincing forgery could send a company’s stock plummeting (or soaring), extract money or ruin its reputation in a viral instant.

  • Imagine a convincing fake video or audio clip of Elon Musk, say, disclosing a massive defect the day before a big Tesla launch — the company’s share price would crumple.

What’s happening: For all the talk about fake videos, it’s deepfake audio that has emerged as the first real threat to the private sector.

 

From DSC…along these same lines see:

 

© 2019 | Daniel Christian