A momentous change in the legal industry garnering little attention — from forbes.com by Hendrik Pretorius

Excerpt:

The needed evolution in legal service delivery may receive a big push in the near future. Surprisingly, this issue seems to be flying under the radar for many in the legal industry.

The California Bar, through its Task Force on Access Through Innovation of Legal Services, created in 2018, seeks to “identify possible regulatory changes to enhance the delivery of, and access to, legal services through the use of technology, including artificial intelligence and online legal service delivery models.”

A report commissioned by this task force stated that “[m]odifying the ethics rules to facilitate greater collaboration across law and other disciplines will (1) drive down costs; (2) improve access; (3) increase predictability and transparency of legal services; (4) aid the growth of new businesses; and (5) elevate the reputation of the legal profession.”

 

Herein lies one of the fundamental challenges within the legal industry: viewing the law as the delivery of a legal product, and understanding that this delivery needs to revolve around the user, not the lawyer. There is a real and growing divide between the current model of legal service delivery put forth by a traditional law firm model and what the public wants. Consumers have raised the bar based on what they are experiencing in interacting with other businesses in other industries.

I love what many of these legal tech companies are doing: They are applying standards from outside the entrenched legal industry and changing entire delivery models. This should be a real wake-up call. But how can law firms truly compete and play a role?

 

Uh-oh: Silicon Valley is building a Chinese-style social credit system — from fastcompany.com by Mike Elgan
In China, scoring citizens’ behavior is official government policy. U.S. companies are increasingly doing something similar, outside the law.

Excerpts (emphasis DSC):

Have you heard about China’s social credit system? It’s a technology-enabled, surveillance-based nationwide program designed to nudge citizens toward better behavior. The ultimate goal is to “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step,” according to the Chinese government.

In place since 2014, the social credit system is a work in progress that could evolve by next year into a single, nationwide point system for all Chinese citizens, akin to a financial credit score. It aims to punish for transgressions that can include membership in or support for the Falun Gong or Tibetan Buddhism, failure to pay debts, excessive video gaming, criticizing the government, late payments, failing to sweep the sidewalk in front of your store or house, smoking or playing loud music on trains, jaywalking, and other actions deemed illegal or unacceptable by the Chinese government.

IT CAN HAPPEN HERE
Many Westerners are disturbed by what they read about China’s social credit system. But such systems, it turns out, are not unique to China. A parallel system is developing in the United States, in part as the result of Silicon Valley and technology-industry user policies, and in part by surveillance of social media activity by private companies.

Here are some of the elements of America’s growing social credit system.

 

If current trends hold, it’s possible that in the future a majority of misdemeanors and even some felonies will be punished not by Washington, D.C., but by Silicon Valley. It’s a slippery slope away from democracy and toward corporatocracy.

 

From DSC:
Who’s to say what gains a citizen points and what subtracts from their score? If one believes a certain thing, is that a plus or a minus? And what might be tied to someone’s score? The ability to obtain food? Medicine/healthcare? Clothing? Social Security payments? Other?

We are giving a huge amount of power to a handful of corporations…trust comes into play…at least for me. Even internally, the big tech co’s seem to be struggling with the ethical ramifications of what they’re working on (in a variety of areas).

Is the stage being set for a “Person of Interest” Version 2.0?

 

Amazon, Microsoft, ‘putting world at risk of killer AI’: study — from news.yahoo.com by Issam Ahmed

Excerpt:

Washington (AFP) – Amazon, Microsoft and Intel are among leading tech companies putting the world at risk through killer robot development, according to a report that surveyed major players from the sector about their stance on lethal autonomous weapons.

Dutch NGO Pax ranked 50 companies by three criteria: whether they were developing technology that could be relevant to deadly AI, whether they were working on related military projects, and if they had committed to abstaining from contributing in the future.
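The report’s three-criteria methodology could be sketched as a simple decision rule. To be clear, this is only a rough illustration: the tier names and the exact way Pax combined the criteria below are assumptions on my part, not details taken from the article.

```python
# Hypothetical sketch of a three-criteria ranking like the one the Pax report describes.
# The tier names and combination logic are assumptions, not Pax's actual method.
from dataclasses import dataclass

@dataclass
class Company:
    name: str
    relevant_tech: bool       # developing tech relevant to lethal autonomous weapons
    military_projects: bool   # working on related military projects
    committed_to_abstain: bool  # pledged not to contribute in the future

def concern_tier(c: Company) -> str:
    """Map the three survey criteria onto a concern tier."""
    if c.committed_to_abstain:
        return "best practice"
    if c.relevant_tech and c.military_projects:
        return "high concern"
    return "medium concern"

print(concern_tier(Company("ExampleCorp", True, True, False)))  # high concern
```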

“Why are companies like Microsoft and Amazon not denying that they’re currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?” said Frank Slijper, lead author of the report published this week.

Addendum on 8/23/19:

 

Why AIoT Is Emerging As The Future Of Industry 4.0 — from forbes.com by Janakiram MSV

Excerpts:

“By combining AI with industrial IoT, we add an important ability to connected systems – Act.”

AI goes beyond the visualizations by acting on the patterns and correlations from the telemetry data. It plugs the critical gap by taking appropriate actions based on the data. Instead of just presenting the facts to humans to enable them to act, AI closes the loop by automatically taking an action. It essentially becomes the brain of the connected systems.
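The closed loop the article describes — telemetry in, inference on the patterns, automatic action out — could be sketched in a few lines. Everything here is hypothetical for illustration (the sensor values, the anomaly rule, and the action names are my own, not from the article):

```python
# Hypothetical closed-loop AIoT sketch: telemetry -> analysis -> automatic action.
from statistics import mean, stdev

def detect_anomaly(readings, threshold=3.0):
    """Flag the latest reading if it deviates > threshold sigmas from history."""
    history, latest = readings[:-1], readings[-1]
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(latest - mu) > threshold * sigma

def act_on(readings):
    """Close the loop: act on the pattern instead of just visualizing it."""
    if detect_anomaly(readings):
        return "shutdown_motor"   # automatic corrective action, no human in the loop
    return "continue"             # normal operation

temps = [70.1, 70.4, 69.9, 70.2, 70.0, 98.6]  # simulated motor temperatures
print(act_on(temps))  # the anomalous spike triggers the action
```

The point of the sketch is the last step: instead of rendering the telemetry on a dashboard for a human, the system itself decides and acts.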

 

 

The future of industrial automation lies in the convergence of AI and IoT. Artificial Intelligence of Things will impact almost every industry vertical including automotive, aviation, finance, healthcare, manufacturing and supply chain.

 


 

From DSC:
I’ve often wondered which emerging technologies will be combined with each other to produce something powerful. According to the article referenced above, AI + IoT = AIoT is something to put on the radar.  I’m not at all crazy about the word “lethal” being used in this article/context though — I certainly hope that’s not the case.

 


 

Also relevant/see:

 

Artificial intelligence (AI) has, of late, been the subject of so many announcements, proclamations, predictions and premonitions that it could occupy its own 24-hour cable news channel. In technology circles, it has become a kind of holy grail, akin to fire, the wheel or the steam engine in terms of world-changing potential. Whether these forecasts come to pass is still an open question. What is less in doubt are the vast ethical ramifications of AI development and use, and the need to address them before AI becomes a part of everyday life.

 

Israeli tech co. uses virtual & augmented reality tech to help Christians engage with the Bible — with thanks to Heidi McDow for the resource
Compedia Partners with U.S. Clients to Utilize Company’s Biblical Knowledge and Technological Expertise

TEL AVIV, Israel, Aug. 7, 2019 – Compedia, an Israel-based business-to-business tech company, is using virtual reality technology to service Christian clients with products that help users engage with the Bible in a meaningful way.

Compedia partnered with The Museum of the Bible in Washington, D.C., which attracted more than 1 million visitors during its first year of operation, to help bring the museum’s exhibits to life. With the help of Compedia’s innovation, visitors to the museum can immerse themselves in 34 different biblical sites through augmented reality tours, allowing them to soar across the Sea of Galilee, climb the stairs of the Temple Mount, explore the Holy Sepulchre and so much more. In addition to creating on-site attractions for The Museum of the Bible, Compedia also created a Bible curriculum for high-school students that includes interactive maps, 3-D guides, quizzes, trivia and more.

“Many people are dubious of augmented and virtual reality, but we see how they can be used for God’s glory,” said Illutowich. “When clients recognize how attentive users are to the Bible message when it’s presented through augmented and virtual reality, they see the power of it, too.”

In addition to their passion for furthering Bible education, Compedia is committed to developing products that help educators engage students of all types. The company is currently in partnership with a number of educational institutions and schools around the U.S. to utilize its interactive technology both in the classroom and in the online learning space. Other client collaborations include Siemens, Sony and Intel, to name a few.

About Compedia
Compedia uses cutting-edge technology to help students succeed by making education more fun, engaging, and meaningful. With over 30 years of experience in developing advanced learning solutions for millions of people in 50 countries and 35 languages, Compedia offers expertise in visual computing, augmented reality, virtual reality and advanced systems, as well as instructional design and UX.

A handful of US cities have banned government use of facial recognition technology due to concerns over its accuracy and privacy. WIRED’s Tom Simonite talks with computer vision scientist and lawyer Gretchen Greene about the controversy surrounding the use of this technology.

 

 

The coming deepfakes threat to businesses — from axios.com by Kaveh Waddell and Jennifer Kingson

Excerpt:

In the first signs of a mounting threat, criminals are starting to use deepfakes — starting with AI-generated audio — to impersonate CEOs and steal millions from companies, which are largely unprepared to combat them.

Why it matters: Nightmare scenarios abound. As deepfakes grow more sophisticated, a convincing forgery could send a company’s stock plummeting (or soaring), to extract money or to ruin its reputation in a viral instant.

  • Imagine a convincing fake video or audio clip of Elon Musk, say, disclosing a massive defect the day before a big Tesla launch — the company’s share price would crumple.

What’s happening: For all the talk about fake videos, it’s deepfake audio that has emerged as the first real threat to the private sector.

 

From DSC…along these same lines see:

 

I opted out of facial recognition at the airport — it wasn’t easy — from wired.com by Allie Funk

Excerpt (emphasis DSC):

As a privacy-conscious person, I was uncomfortable boarding this way. I also knew I could opt out. Presumably, most of my fellow fliers did not: I didn’t hear a single announcement alerting passengers how to avoid the face scanners.

As I watched traveler after traveler stand in front of a facial scanner before boarding our flight, I had an eerie vision of a new privacy-invasive status quo. With our faces becoming yet another form of data to be collected, stored, and used, it seems we’re sleepwalking toward a hyper-surveilled environment, mollified by assurances that the process is undertaken in the name of security and convenience. I began to wonder: Will we only wake up once we no longer have the choice to opt out?

Until we have evidence that facial recognition is accurate and reliable—as opposed to simply convenient—travelers should avoid the technology where they can.

 

To figure out how to do so, I had to leave the boarding line, speak with a Delta representative at their information desk, get back in line, then request a passport scan when it was my turn to board. 

 

From DSC:
Readers of this blog will know that I am generally a pro-technology person. That said, there are times when I don’t trust humankind to use the power of some of these emerging technologies appropriately and ethically. Along these lines, I don’t like where facial recognition could be heading…and citizens don’t seem to have effective ways to quickly weigh in on this emerging technology. I find this to be a very troubling situation. How about you?

 

Daniel Christian -- A technology is meant to be a tool; it is not meant to rule.

 

 

Russian hackers behind ‘world’s most murderous malware’ probing U.S. power grid — from digitaltrends.com by Georgina Torbet

 

U.S. Escalates Online Attacks on Russia’s Power Grid — from nytimes.com by David Sanger and Nicole Perlroth

 
From DSC:
As so often happens with humankind’s use of technologies, there is some good and some bad here. Exciting. Troubling. Incredible. Alarming.

Companies, please make sure you’re not giving the keys to a powerful, $137,000 Maserati to your “16-year-olds.”

Just because we can…

And to you “16-year-olds” out there…ask for / seek wisdom. Ask yourself whether you should be developing what you are developing. Is it helpful or hurtful to society? Don’t just collect the paycheck. You have a responsibility to humankind.

To whom much is given…

 

Facial recognition smart glasses could make public surveillance discreet and ubiquitous — from theverge.com by James Vincent; with thanks to Mr. Paul Czarapata, Ed.D. out on Twitter for this resource
A new product from UAE firm NNTC shows where this tech is headed next. <– From DSC: though hopefully not!!!

Excerpt:

From train stations and concert halls to sports stadiums and airports, facial recognition is slowly becoming the norm in public spaces. But new hardware formats like these facial recognition-enabled smart glasses could make the technology truly ubiquitous, able to be deployed by law enforcement and private security any time and any place.

The glasses themselves are made by American company Vuzix, while Dubai-based firm NNTC is providing the facial recognition algorithms and packaging the final product.

 

From DSC…I commented out on Twitter:

Thanks Paul for this posting – though I find it very troubling. Emerging technologies race out ahead of society. I would be interested in knowing the age of the people developing these technologies, and whether they care about asking the tough questions…like “Just because we can, should we be doing this?”

 

Addendum on 6/12/19:

 

‘Robots’ Are Not ‘Coming for Your Job’—Management Is — from gizmodo.com by Brian Merchant; with a special thanks going out to Keesa Johnson for her posting this out on LinkedIn

A robot is not ‘coming for’, or ‘stealing’ or ‘killing’ or ‘threatening’ to take away your job. Management is.

Excerpt (emphasis DSC):

At first glance, this might seem like a nitpicky semantic complaint, but I assure you it’s not—this phrasing helps, and has historically helped, mask the agency behind the *decision* to automate jobs. And this decision is not made by ‘robots,’ but management. It is a decision most often made with the intention of saving a company or institution money by reducing human labor costs (though it is also made in the interests of bolstering efficiency and improving operations and safety). It is a human decision that ultimately eliminates the job.

 

From DSC:
I’ve often said that if all the C-Suite cares about is maximizing profits — instead of thinking about their fellow humankind and society as a whole —  we’re in big trouble.

If the thinking goes, “Heh — it’s just business!”…then, again, we’re in big trouble.

Just because we can, should we? Many people should be reflecting upon this question…and not just members of the C-Suite.

 

 

 

State Attempts to Nix Public School’s Facial Recognition Plans — from futurism.com by Kristin Houser
But it might not have the authority to actually stop an upcoming trial.

Excerpt (emphasis DSC):

Chaos Reigns
New York’s Lockport City School District (CSD) was all set to become the first public school district in the U.S. to test facial recognition on its students and staff. But just two days after the school district’s superintendent announced the project’s June 3 start date, the New York State Education Department (NYSED) attempted to put a stop to the trial, citing concerns for students’ privacy. Still, it’s not clear whether the department has the authority to actually put the project on hold — *****the latest sign that the U.S. is in desperate need of clear-cut facial recognition legislation.*****

 

San Francisco becomes first city to bar police from using facial recognition — from cnet.com by Laura Hautala
It won’t be the last city to consider a similar law.

Excerpt:

The city of San Francisco approved an ordinance on Tuesday [5/14/19] barring the police department and other city agencies from using facial recognition technology on residents. It’s the first such ban of the technology in the country.

The ordinance, which passed by a vote of 8 to 1, also creates a process for the police department to disclose what surveillance technology they use, such as license plate readers and cell-site simulators that can track residents’ movements over time. But it singles out facial recognition as too harmful to residents’ civil liberties to even consider using.

“Facial surveillance technology is a huge legal and civil liberties risk now due to its significant error rate, and it will be worse when it becomes perfectly accurate mass surveillance tracking us as we move about our daily lives,” said Brian Hofer, the executive director of privacy advocacy group Secure Justice.

For example, Microsoft asked the federal government in July to regulate facial recognition technology before it gets more widespread, and said it declined to sell the technology to law enforcement. As it is, the technology is on track to become pervasive in airports and shopping centers and other tech companies like Amazon are selling the technology to police departments.

 

Also see:

 

People, Power and Technology: The Tech Workers’ View — from doteveryone.org.uk

Excerpt:

People, Power and Technology: The Tech Workers’ View is the first in-depth research into the attitudes of the people who design and build digital technologies in the UK. It shows that workers are calling for an end to the era of moving fast and breaking things.

Significant numbers of highly skilled people are voting with their feet and leaving jobs they feel could have negative consequences for people and society. This is heightening the UK’s tech talent crisis and running up employers’ recruitment and retention bills. Organisations and teams that can understand and meet their teams’ demands to work responsibly will have a new competitive advantage.

While Silicon Valley CEOs have tried to reverse the “techlash” by showing their responsible credentials in the media, this research shows that workers:

    • need guidance and skills to help navigate new dilemmas
    • have an appetite for more responsible leadership
    • want clear government regulation so they can innovate with awareness

Also see:

  • U.K. Tech Staff Quit Over Work On ‘Harmful’ AI Projects — from forbes.com by Sam Shead
    Excerpt:
    An alarming number of technology workers operating in the rapidly advancing field of artificial intelligence say they are concerned about the products they’re building. Some 59% of U.K. tech workers focusing on AI have experience of working on products that they felt might be harmful for society, according to a report published on Monday by Doteveryone, the think tank set up by lastminute.com cofounder and Twitter board member Martha Lane Fox.

 

 


© 2019 | Daniel Christian