Emerging Tech Trend: Patient-Generated Health Data — from futuretodayinstitute.com — Newsletter Issue 124

Excerpt:

Near-Futures Scenarios (2023 – 2028):

Pragmatic: Big tech continues to develop apps that are either indispensably convenient, irresistibly addictive, or both, and we pay for them not with cash but with the data we (sometimes unwittingly) let the apps capture. But for health care and medical insurance apps, the stakes could literally be life-and-death. Consumers receive discounted premiums, co-pays, diagnostics, and prescription fulfillment, but the data they give up in exchange leaves them more vulnerable to manipulation and invasions of privacy.

Catastrophic: Profit-driven drug makers exploit private health profiles and begin working with the Big Nine. They use data-based targeting to overprescribe to patients, netting themselves billions of dollars. Big Pharma targets and preys on people’s addictions, mental-health predispositions, and more, practices that, while undetectable at the individual level, take a widespread societal toll.

Optimistic: Health data enables prescient preventative care. A.I. discerns patterns within gargantuan data sets that are otherwise virtually undetectable to humans. Accurate predictive algorithms identify complex combinations of risk factors for cancer or Parkinson’s, offer early screening and testing to high-risk patients, and encourage lifestyle shifts or treatments to eliminate or delay the onset of serious diseases. A.I. and health data create a utopia of public health. We happily relinquish our privacy for a greater societal good.
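
The optimistic scenario hinges on predictive models that flag high-risk patients from combinations of risk factors. As a purely illustrative, minimal sketch of that idea, the following trains a logistic-regression risk model on synthetic patient-generated data. Every feature, number, and threshold here is hypothetical, not from the Future Today Institute report:

```python
# Minimal sketch of the kind of predictive risk model the optimistic
# scenario imagines. All data and feature names are synthetic and
# hypothetical; a real clinical model would require validated data,
# rigorous evaluation, and regulatory review.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical patient-generated features: age, resting heart rate,
# daily step count, and a family-history flag.
X = np.column_stack([
    rng.normal(55, 12, n),        # age
    rng.normal(70, 10, n),        # resting heart rate
    rng.normal(6_000, 2_500, n),  # daily steps
    rng.integers(0, 2, n),        # family history (0/1)
])

# Synthetic outcome: risk rises with age, heart rate, and family
# history, and falls with activity. This is a toy generative rule.
logit = (0.04 * (X[:, 0] - 55) + 0.05 * (X[:, 1] - 70)
         - 0.0002 * (X[:, 2] - 6_000) + 1.2 * X[:, 3] - 1.0)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Flag the highest-risk patients for early screening.
risk = model.predict_proba(X_test)[:, 1]
high_risk = risk > 0.8
print(f"Flagged {high_risk.sum()} of {len(risk)} patients for screening")
```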

Watchlist: Amazon; Manulife Financial; GE Healthcare; Meditech; Allscripts; eClinicalWorks; Cerner; Validic; HumanAPI; Vivify; Apple; IBM; Microsoft; Qualcomm; Google; Medicare; Medicaid; national health systems; insurance companies.

 

How AI can help you be a better litigator — from law.com by Susan L. Shin
Litigators should be aware of some of the powerful AI and machine-learning tools that can quickly access and analyze large amounts of data, helping us make better-informed strategic decisions and improve the quality of our advocacy.

Excerpt:

Although artificial intelligence (AI) has been used in the e-discovery space for more than 10 years, AI is now capable of more complex litigation tasks, such as legal research, drafting pleadings, and predicting judicial decisions, in a fraction of the time it would take a human lawyer to do the same tasks. If AI can help lawyers and law firms more quickly process and analyze large amounts of data, and in turn, make the litigation process less expensive, faster and more efficient, why have litigators been so slow to adopt the newest technologies and capabilities? Understanding and demystifying what AI can and cannot do (i.e., it can help automate the more mundane, repetitive legal tasks and analyze large amounts of data, but it cannot negotiate, advocate, or provide sophisticated legal advice) might help litigators not fear, but rather, embrace AI as a way to access larger pools of data, make more informed strategic choices in their advocacy, and provide better and more efficient legal services to clients.

 

AI hiring could mean robot discrimination will head to courts — from news.bloomberglaw.com by Chris Opfer

  • Algorithm vendors, employers grappling with liability issues
  • EEOC already looking at artificial intelligence cases

Excerpt:

As companies turn to artificial intelligence for help making hiring and promotion decisions, contract negotiations between employers and vendors selling algorithms are being dominated by an untested legal question: Who’s liable when a robot discriminates?

The predictive strength of any algorithm is based at least in part on the information it is fed by human sources. That comes with concerns the technology could perpetuate existing biases, whether it is against people applying for jobs, home loans, or unemployment insurance.

From DSC:
Are law schools and their faculty/students keeping up with these kinds of issues? Are lawyers, judges, attorneys general, and others informed about these emerging technologies?

 

A face-scanning algorithm increasingly decides whether you deserve the job — from washingtonpost.com by Drew Harwell
HireVue claims it uses artificial intelligence to decide who’s best for a job. Outside experts call it ‘profoundly disturbing.’

Excerpt:

An artificial intelligence hiring system has become a powerful gatekeeper for some of America’s most prominent employers, reshaping how companies assess their workforce — and how prospective employees prove their worth.

Designed by the recruiting-technology firm HireVue, the system uses candidates’ computer or cellphone cameras to analyze their facial movements, word choice and speaking voice before ranking them against other applicants based on an automatically generated “employability” score.
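
To make the pipeline the article describes concrete, here is a toy sketch of the general pattern: features extracted from a recorded interview are combined into a single “employability” score used to rank candidates against one another. HireVue’s actual model is proprietary; every feature name and weight below is invented for illustration.

```python
# Toy illustration of the general pattern described in the article.
# The features and weights are entirely hypothetical; a vendor's model
# would be learned, opaque, and far more complex.
from dataclasses import dataclass

@dataclass
class InterviewFeatures:
    words_per_minute: float      # from speech-to-text
    positive_word_ratio: float   # fraction of "positive" vocabulary
    smile_ratio: float           # fraction of frames classified as smiling
    eye_contact_ratio: float     # fraction of frames facing the camera

WEIGHTS = {
    "words_per_minute": 0.002,
    "positive_word_ratio": 0.4,
    "smile_ratio": 0.3,
    "eye_contact_ratio": 0.3,
}

def employability_score(f: InterviewFeatures) -> float:
    return (WEIGHTS["words_per_minute"] * f.words_per_minute
            + WEIGHTS["positive_word_ratio"] * f.positive_word_ratio
            + WEIGHTS["smile_ratio"] * f.smile_ratio
            + WEIGHTS["eye_contact_ratio"] * f.eye_contact_ratio)

candidates = {
    "A": InterviewFeatures(150, 0.30, 0.45, 0.80),
    "B": InterviewFeatures(120, 0.25, 0.60, 0.70),
}
# Candidates are ranked against one another, as the article notes.
ranking = sorted(candidates,
                 key=lambda c: employability_score(candidates[c]),
                 reverse=True)
print(ranking)
```

Even this toy version makes the criticism quoted below concrete: whoever sets (or trains) the weights is encoding assumptions about what a “good” employee looks and sounds like.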

 

The system, they argue, will assume a critical role in helping decide a person’s career. But they doubt it even knows what it’s looking for: Just what does the perfect employee look and sound like, anyway?

“It’s a profoundly disturbing development that we have proprietary technology that claims to differentiate between a productive worker and a worker who isn’t fit, based on their facial movements, their tone of voice, their mannerisms,” said Meredith Whittaker, a co-founder of the AI Now Institute, a research center in New York.

 

From DSC:
If you haven’t been screened out by an Applicant Tracking System’s algorithm recently, then you haven’t been looking for a job in the last few years. If that’s the case:

  • You might not be very interested in this posting.
  • You will be very surprised in the future, when you do need to search for a new job.

Because the truth is, it’s very difficult to get a human being’s eyes on your resume, let alone to meet someone in person. The above article should disturb you even more. I don’t think the programmers have captured everything inside an experienced HR professional’s mind.
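
For readers who have never seen one, here is a hypothetical sketch of the kind of crude keyword screen an Applicant Tracking System might run before a human ever sees a resume. Real ATS products vary widely and are typically more sophisticated; this is only illustrative:

```python
# Hypothetical keyword screen of the sort many ATS products are
# reported to perform. Keywords and threshold are invented examples.
REQUIRED_KEYWORDS = {"python", "sql", "agile"}  # drawn from the job posting
THRESHOLD = 2  # minimum keyword hits to pass to a human reviewer

def passes_screen(resume_text: str) -> bool:
    words = set(resume_text.lower().split())
    return len(REQUIRED_KEYWORDS & words) >= THRESHOLD

resume = "Experienced analyst skilled in SQL reporting and agile teams."
print(passes_screen(resume))  # True: 2 of 3 keywords matched
```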

 

Also see:

  • In case after case, courts reshape the rules around AI — from muckrock.com
    AI Now Institute recommends improvements and highlights key AI litigation
    Excerpt:
    When undercover officers with the Jacksonville Sheriff’s Office bought crack cocaine from someone in 2015, they couldn’t actually identify the seller. Less than a year later, though, Willie Allen Lynch was sentenced to eight years in prison, identified through a facial recognition system. He’s still fighting in court over how the technology was used, and his case and others like it could ultimately shape the use of algorithms going forward, according to a new report.
 

Deepfakes: When a picture is worth nothing at all — from law.com by Katherine Forrest

Excerpt:

“Deepfakes” is the name for highly realistic, falsified imagery and sound recordings; they are digitized and personalized impersonations. Deepfakes are made by using AI-based facial and audio recognition and reconstruction technology; AI algorithms are used to predict facial movements as well as vocal sounds. In her Artificial Intelligence column, Katherine B. Forrest explores the legal issues likely to arise as deepfakes become more prevalent.
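
As background for the legal issues Forrest explores, here is a minimal PyTorch sketch of the classic face-swap autoencoder idea behind many deepfakes: a shared encoder learns a common facial representation, one decoder per identity learns to reconstruct that person, and swapping decoders at inference time renders person B’s face with person A’s expression. The architecture and dimensions below are illustrative only, not any specific production system:

```python
# Minimal face-swap autoencoder sketch. Dimensions are illustrative;
# real deepfake pipelines use convolutional networks, alignment, and
# large training sets.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 512), nn.ReLU(),
            nn.Linear(512, 128),  # shared latent "expression" code
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, 64 * 64 * 3), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()
# Training (not shown) reconstructs person A through decoder_a and
# person B through decoder_b, both via the same shared encoder.

face_a = torch.rand(1, 3, 64, 64)   # stand-in for a frame of person A
fake = decoder_b(encoder(face_a))   # person B's face, person A's expression
print(fake.shape)                   # torch.Size([1, 3, 64, 64])
```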

 

2020 Top 10 Issues: Simplify, Sustain, Innovate: The Drive to Digital Transformation Begins — from Educause by Susan Grajek

  1. Information security strategy
  2. Privacy
  3. Sustainable funding
  4. Digital integrations
  5. Student retention and completion
  6. Student-centric higher education
  7. Improved enrollment
  8. Higher education affordability
  9. Administrative simplification
  10. The integrative CIO

Also see:

The 30th National Survey of eLearning and Information Technology in US Higher Education — from campuscomputing.net
Hiring and Retaining Campus IT Talent Are Challenges; Many Campus Leaders Are Not Well-Informed About nor Engaged with Digital Issues

Excerpt:

New data from the fall 2019 Campus Computing Survey highlight the challenges that IT leaders across all sectors of US higher education confront in hiring and retaining IT talent. More than three-fourths (77 percent) of the CIOs and senior campus officials participating in the 2019 survey cite “hiring and retaining IT talent” as a top institutional IT priority. Similarly, 78 percent point to uncompetitive campus salaries and benefits as a major problem in the quest to hire and retain IT talent. And reflecting the campus financial challenges that affect hiring and staff retention efforts, fully two-thirds (67 percent) agree/strongly agree that institutional IT funding “has not recovered from the budget cuts” experienced by colleges and universities across all sectors of higher education since the “Great Recession of 2008.”

 

Law librarians & the future of law firms — from aallnet.org by Jordan Furlong

Excerpt:

Law firms that want to win the highest-value, most complex work from clients will need more than just smart lawyers. They will need powerful knowledge engines to augment and amplify the skills of those lawyers, while also constituting capital assets that accrue in size and value every year. Law libraries and legal information professionals hold the key to assembling and growing such engines, and they are, therefore, the key to the future sustainability and competitiveness of the firms themselves.

 

Are smart cities the pathway to blockchain and cryptocurrency adoption? — from forbes.com by Chrissa McFarlane

Excerpts:

At the recent Blockchain LIVE 2019, hosted annually in London, I had the pleasure of giving a talk on Next Generation Infrastructure: Building a Future for Smart Cities. What exactly is a “smart city”? The term refers to an overall blueprint for city designs of the future. Already half the world’s population lives in a city, a share expected to grow to sixty-five percent in the next five years. Tackling that growth takes more than just simple urban planning. The goal of smart cities is to incorporate technology as an infrastructure to alleviate many of these complexities. Green energy, forms of transportation, water and pollution management, universal identification (ID), wireless Internet systems, and promotion of local commerce are examples of current smart city initiatives.

What’s most important to a smart city, however, is integration. None of the services mentioned above exist in a vacuum; they need to be put into a single system. Blockchain provides the technology to unite them into a single system that can track all of these aspects together.
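
As a minimal sketch of that “single system” idea, the following builds a toy hash-chained ledger in which records from different city services share one tamper-evident log. The services and events are hypothetical; a real smart-city deployment would add consensus, access control, and much more:

```python
# Toy hash-chained ledger: each block commits to the previous block's
# hash, so altering any past record invalidates everything after it.
import hashlib, json, time

def make_block(prev_hash: str, payload: dict) -> dict:
    block = {"prev_hash": prev_hash, "time": time.time(), "payload": payload}
    serialized = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(serialized).hexdigest()
    return block

chain = [make_block("0" * 64, {"service": "genesis"})]
for payload in (
    {"service": "transit", "event": "bus 12 departed"},
    {"service": "water", "event": "meter 381 read: 14.2 m3"},
):
    chain.append(make_block(chain[-1]["hash"], payload))

def verify(chain: list) -> bool:
    """Recompute each hash and check every back-link."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

print(verify(chain))  # True until any past record is altered
```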

 

From DSC:
There are many examples in the above article of efforts to create smart cities throughout the globe. Also see the article below.

 

There are major issues with AI. This article shows how far behind the legal realm is in wrestling with emerging technologies.

What happens when employers can read your facial expressions? — from nytimes.com by Evan Selinger and Woodrow Hartzog
The benefits do not come close to outweighing the risks.

Excerpts:

The essential and unavoidable risks of deploying these tools are becoming apparent. A majority of Americans have functionally been put in a perpetual police lineup simply for getting a driver’s license: Their D.M.V. images are turned into faceprints for government tracking with few limits. Immigration and Customs Enforcement officials are using facial recognition technology to scan state driver’s license databases without citizens’ knowing. Detroit aspires to use facial recognition for round-the-clock monitoring. Americans are losing due-process protections, and even law-abiding citizens cannot confidently engage in free association, free movement and free speech without fear of being tracked.

 “Notice and choice” has been an abysmal failure. Social media companies, airlines and retailers overhype the short-term benefits of facial recognition while using unreadable privacy policies and vague disclaimers that make it hard to understand how the technology endangers users’ privacy and freedom.

 

From DSC:
This article illustrates how far behind the legal realm in the United States is when it comes to wrestling with emerging technologies. Dealing with this relatively new *exponential* pace of change is very difficult for many of our institutions (higher education and the legal realm come to mind here).

 

Announcing AI Business School for Education for leaders, BDMs and students — from educationblog.microsoft.com by Anthony Salcito

Excerpt:

Microsoft’s AI Business School now offers a learning path for education. Designed for education leaders, decision-makers and even students, the Microsoft AI Business School for Education helps learners understand how AI can enhance the learning environment for all students—from innovations in the way we teach and assess, to supporting accessibility and inclusion for all students, to institutional effectiveness and efficiency with the use of AI tools. The course is designed to empower learners to gain specific, practical knowledge to define and implement an AI strategy. Industry experts share insights on how to foster an AI-ready culture and teach them how to use AI responsibly and with confidence. The learning path is available on Microsoft Learn, a free platform to support learners of all ages and experience levels via interactive, online, self-paced learning.

 

The impact of voice search on firms — from lawtechnologytoday.org

Excerpts:

“Alexa, where can I find an attorney near me who specializes in…?”

“What is my liability if a tree in my yard falls on my neighbor’s house because of a storm?”

“…voice-activated legal searches are coming, and probably faster than you expect.”

 

Exploring Artificial Intelligence and the Law — a presentation/video by Avi Brudner, from Blue J Legal

 

3 reasons KM and learning systems will soon be amazing — from blog.feathercap.net by Feathercap staff; with thanks to Mr. Tim Seager for this resource

Excerpt:

We’re at an amazing time today, as all manner of learning vendors and knowledge management systems are going through a renaissance. Vendors have understood that no one has time to learn required job skills as a separate learning event; people must gain the skills they need in real time as they perform their jobs. A big driver is technological change, such as the availability of AI approaches, which is accelerating this trend.

From knowledge management (KM) providers to learning management systems (LMSs), we’re seeing big improvements. For over a decade, LMSs in their present form have tracked and delivered on-demand learning and classroom training. Then came microlearning vendors, focused on bite-size training (10 minutes or less), while knowledge management tools and systems grew at the same time. KM systems were built to make findable the institutional knowledge each person in an organization needs to do their job. Finally, we have learning experience platforms (LXPs), which focus on delivering and recommending both micro and macro learning content (macro meaning longer than 10 minutes to consume) at the moment of need. There has been a downside to all of these approaches, however: they all require the workforce, SMEs, and content authors to manicure this content to ensure it is both fresh and useful. Here are the three reasons all of these approaches will soon be amazing…
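
As a hedged sketch of the “moment of need” delivery the excerpt describes, the following matches the task a worker is doing right now against a catalog of micro and macro learning content by text similarity. The catalog entries, durations, and matching approach are invented for illustration; a real LXP would use richer signals:

```python
# Toy "moment of need" recommender: rank catalog items against the
# worker's current task using TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalog = [
    ("Resetting a customer password", 5),       # micro: 5 minutes
    ("Handling an escalated support call", 8),  # micro
    ("Quarterly compliance overview", 45),      # macro: > 10 minutes
]

current_task = "customer cannot log in, needs password reset"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform([title for title, _ in catalog])
task_vector = vectorizer.transform([current_task])

scores = cosine_similarity(task_vector, doc_vectors)[0]
best = max(range(len(catalog)), key=scores.__getitem__)
title, minutes = catalog[best]
print(f"Recommend: {title} ({minutes} min)")  # surfaced in the flow of work
```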

 

 

Can you make AI fairer than a judge? Play our courtroom algorithm game — from technologyreview.com by Karen Hao and Jonathan Stray
The US criminal legal system uses predictive algorithms to try to make the judicial process less biased. But there’s a deeper problem.

Excerpt:

As a child, you develop a sense of what “fairness” means. It’s a concept that you learn early on as you come to terms with the world around you. Something either feels fair or it doesn’t.

But increasingly, algorithms have begun to arbitrate fairness for us. They decide who sees housing ads, who gets hired or fired, and even who gets sent to jail. Consequently, the people who create them—software engineers—are being asked to articulate what it means to be fair in their code. This is why regulators around the world are now grappling with a question: How can you mathematically quantify fairness? 

This story attempts to offer an answer. And to do so, we need your help. We’re going to walk through a real algorithm, one used to decide who gets sent to jail, and ask you to tweak its various parameters to make its outcomes more fair. (Don’t worry—this won’t involve looking at code!)

The algorithm we’re examining is known as COMPAS, and it’s one of several different “risk assessment” tools used in the US criminal legal system.
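
To see the kind of trade-off the game walks you through, here is a small sketch that sweeps a risk-score threshold and compares false-positive rates across two groups. The scores and labels below are synthetic, not COMPAS data:

```python
# Synthetic demonstration of one fairness metric (false-positive rate)
# diverging across groups as a detention threshold moves.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)        # two demographic groups
reoffended = rng.random(n) < 0.3     # synthetic ground truth

# Synthetic risk scores that, like real-world data shaped by biased
# inputs, skew higher for group 1 at the same underlying behavior.
score = rng.random(n) + 0.25 * reoffended + 0.10 * group

def false_positive_rate(g: int, threshold: float) -> float:
    detained = score > threshold
    fp = (detained & ~reoffended & (group == g)).sum()
    negatives = (~reoffended & (group == g)).sum()
    return fp / negatives

for threshold in (0.6, 0.8, 1.0):
    fpr0 = false_positive_rate(0, threshold)
    fpr1 = false_positive_rate(1, threshold)
    print(f"threshold {threshold:.1f}: "
          f"FPR group0={fpr0:.2f}, group1={fpr1:.2f}")
```

A central point of the piece is that equalizing one such metric at a given threshold generally breaks other fairness definitions, which is why “fairness” cannot be settled by the math alone.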

 

But whether algorithms should be used to arbitrate fairness in the first place is a complicated question. Machine-learning algorithms are trained on “data produced through histories of exclusion and discrimination,” writes Ruha Benjamin, an associate professor at Princeton University, in her book Race After Technology. Risk assessment tools are no different. The greater question about using them—or any algorithms used to rank people—is whether they reduce existing inequities or make them worse.

 

You can also see change in these articles:

 

 

DC: In the future…will there be a “JustWatch” or a “Suppose” for learning-related content?


 