Law schools escalate their focus on digital skills — from edtechmagazine.com by Eli Zimmerman
Coding, data analytics and device integration give students the tools to become more efficient lawyers.

Excerpt:

Participants learned to use analytics programs and artificial intelligence to complete work in a fraction of the time it usually takes.

For example, students analyzed contracts using AI programs to find errors and areas for improvement across various legal jurisdictions. In another exercise, students learned to use data programs to draft nondisclosure agreements in less than half an hour.

By learning analytics models, students will graduate with the skills to make them more effective — and more employable — professionals.

“As advancing technology and massive data sets enable lawyers to answer complex legal questions with greater speed and efficiency, courses like Legal Analytics will help KU Law students be better advocates for tomorrow’s clients and more competitive for tomorrow’s jobs,” Stephen Mazza, dean of the University of Kansas School of Law, tells Legaltech News.

 

Reflecting that shift, the Law School Admission Council, which organizes and distributes the Law School Admission Test, will be offering the test exclusively on Microsoft Surface Go tablets starting in July 2019.

 

From DSC:
I appreciate the article, thanks Eli. From one of the articles that was linked to, it appears that, “To facilitate the transition to the Digital LSAT starting July 2019, LSAC is procuring thousands of Surface Go tablets that will be loaded with custom software and locked down to ensure the integrity of the exam process and security of the test results.”


Why AI is a threat to democracy — and what we can do to stop it — from technologyreview.com by Karen Hao and Amy Webb

Excerpt:

Universities must create space in their programs for hybrid degrees. They should incentivize CS students to study comparative literature, world religions, microeconomics, cultural anthropology and similar courses in other departments. They should champion dual degree programs in computer science and international relations, theology, political science, philosophy, public health, education and the like. Ethics should not be taught as a stand-alone class, something to simply check off a list. Schools must incentivize even tenured professors to weave complicated discussions of bias, risk, philosophy, religion, gender, and ethics in their courses.

One of my biggest recommendations is the formation of GAIA, what I call the Global Alliance on Intelligence Augmentation. At the moment people around the world have very different attitudes and approaches when it comes to data collection and sharing, what can and should be automated, and what a future with more generally intelligent systems might look like. So I think we should create some kind of central organization that can develop global norms and standards, some kind of guardrails to imbue not just American or Chinese ideals inside AI systems, but worldviews that are much more representative of everybody.

Most of all, we have to be willing to think about this much longer-term, not just five years from now. We need to stop saying, “Well, we can’t predict the future, so let’s not worry about it right now.” It’s true, we can’t predict the future. But we can certainly do a better job of planning for it.


From DSC:
Our family uses AT&T for our smartphones and for our Internet access. What I would really like from AT&T is the ability to speak to my router — either through an app on a smartphone or by having their routers morph into Alexa-type devices — and tell it what I want it to do:

“Turn off Internet access tonight from 9pm until 6am tomorrow morning.”
“Only allow Internet access for parents’ accounts.”
“Upgrade my bandwidth for the next 2 hours.”

Upon startup, the app would ask whether I wanted to set up any “admin” types of accounts…and, if so, would recognize that voice/those voices as having authority and control over the device.

Would you use this type of interface? I know I would!
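
From a developer's perspective, the heavy lifting in such an interface is mapping free-form utterances onto a small set of router actions. Here is a minimal, illustrative sketch. The action names, fields, and phrasing rules are all my own assumptions; no AT&T router exposes such an API today:

```python
import re

def to_24h(hour, suffix):
    """Convert e.g. (9, "pm") to 21."""
    hour = int(hour) % 12
    return hour + 12 if suffix == "pm" else hour

def parse_command(utterance):
    """Map a spoken request onto a hypothetical router action.

    Returns a dict describing the action, or None if unrecognized.
    The action names and fields are illustrative only.
    """
    text = utterance.lower()

    # "Turn off Internet access tonight from 9pm until 6am tomorrow morning."
    m = re.search(r"turn off internet access .*?(\d{1,2})(am|pm) until (\d{1,2})(am|pm)", text)
    if m:
        return {"action": "schedule_block",
                "start_hour": to_24h(m.group(1), m.group(2)),
                "end_hour": to_24h(m.group(3), m.group(4))}

    # "Only allow Internet access for parents' accounts."
    m = re.search(r"only allow internet access for (.+?) accounts", text)
    if m:
        return {"action": "restrict_access",
                "allowed_group": m.group(1).strip("’' ")}

    # "Upgrade my bandwidth for the next 2 hours."
    m = re.search(r"upgrade my bandwidth for the next (\d+) hours?", text)
    if m:
        return {"action": "boost_bandwidth",
                "duration_hours": int(m.group(1))}

    return None
```

The speech-to-text step itself would be handled by the phone or smart-speaker platform; a layer like this only interprets the transcribed text and hands a structured request to the router.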

P.S. I’d like to be able to speak to our thermostat in that sort of way as well.

 
 

The 10+ best real-world examples of augmented reality — from forbes.com by Bernard Marr

Excerpt:

Augmented reality (AR) can add value, solve problems and enhance the user experience in nearly every industry. Businesses are catching on and increasing investments to drive the growth of augmented reality, which makes it a crucial part of the tech economy.

 

As referenced by Bernard in his above article:

 

 

From DSC:
Along these lines, I really appreciate the “translate” feature within Twitter. It helps open up whole new avenues of learning for me from people across the globe. A very cool, practical, positive, beneficial feature/tool!!!

 

 

81% of legal departments aren’t ready for digitization: Gartner — from mitratech.com by The Mitratech Team

Excerpt:

Despite the efforts of Legal Operations legal tech adopters and advocates, and the many expert voices raised about the need to evolve the legal industry, a Gartner, Inc. report finds the vast majority of in-house legal departments are unprepared for digital transformation.

In compiling the report, Gartner reviewed the roles of legal departments in no less than 1,715 digital business projects. They also conducted interviews with over 100 general counsel and privacy officers, and another 100 legal stakeholders at large companies.

The reveal? That 81% of legal departments weren’t prepared for the oncoming tide of digitization at their companies. That leaves them at a disadvantage when one considers the results of Gartner’s CEO Survey.  Two-thirds of its CEO respondents predicted their business models would change in the next three years, with digitization as a major factor.

 

Also relevant here/see:
AI Pre-Screening Technology: A New Era for Contracts? — by Tim Pullan, CEO and Founder, ThoughtRiver

Excerpt:

However, enterprises are beginning to understand the tangible value that can be delivered by automated contract pre-screening solutions. Such technology can ask thousands of questions defined by legal experts, and within minutes deliver an output weighing up the risks and advising next steps. Legal resources are then only required to follow up on these recommendations, whether they be a change to a clause, removing common bottlenecks altogether, or acting quickly to monetise a business opportunity.

There are clear benefits for both the legal team and the business. The GC’s team spends more time on enterprise-wide strategy and supporting other departments, while the business can move at pace and gain considerable competitive advantage.
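
To make the mechanism concrete, each expert-defined “question” can be modeled as a check over the contract text that either passes or yields a recommended next step. This is a toy, rule-based sketch — not ThoughtRiver’s actual technology — and the three rules and recommendations below are invented for illustration:

```python
import re

# Each "question" pairs a pattern to look for with a recommended next step
# when the pattern is missing. Real systems ask thousands of such questions,
# defined by legal experts; these three are invented for illustration.
RULES = [
    ("Is there a governing-law clause?",
     r"governing law",
     "Add a governing-law clause specifying jurisdiction."),
    ("Is liability limited or capped?",
     r"limitation of liability",
     "Negotiate a limitation-of-liability clause."),
    ("Is there a confidentiality clause?",
     r"confidential",
     "Add a confidentiality clause."),
]

def pre_screen(contract_text):
    """Return a (question, passed, recommendation) triple for each rule."""
    report = []
    for question, pattern, recommendation in RULES:
        passed = re.search(pattern, contract_text, re.IGNORECASE) is not None
        report.append((question, passed, None if passed else recommendation))
    return report
```

Legal resources then only follow up on the failed checks, which is where the time savings described above come from.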

 

 

Horizon Report Preview 2019 — from library.educause.edu
Analytics, Artificial Intelligence (AI), Badges and Credentialing, Blended Learning, Blockchain, Digital Learning, Digital Literacy, Extended Reality (XR), Instructional Design, Instructional Technologies, Learning Analytics, Learning Space, Mobile Learning, Student Learning Support, Teaching and Learning

Abstract
The EDUCAUSE Horizon Report Preview provides summaries of each of the upcoming edition’s trends, challenges, and important developments in educational technology, which were ranked most highly by the expert panel. This year’s trends include modularized and disaggregated degrees, the advancing of digital equity, and blockchain.

For more than a decade, EDUCAUSE has partnered with the New Media Consortium (NMC) to publish the annual Horizon Report – Higher Education Edition. In 2018, EDUCAUSE acquired the rights to the NMC Horizon project.


Philips, Microsoft Unveil Augmented Reality Concept for Operating Room of the Future — from hitconsultant.net by Fred Pennic

Excerpt:

Health technology company Philips unveiled a unique mixed reality concept developed together with Microsoft Corp. for the operating room of the future. Based on the state-of-the-art technologies of Philips’ Azurion image-guided therapy platform and Microsoft’s HoloLens 2 holographic computing platform, the companies will showcase novel augmented reality applications for image-guided minimally invasive therapies.

 

 

 

Police across the US are training crime-predicting AIs on falsified data — from technologyreview.com by Karen Hao
A new report shows how supposedly objective systems can perpetuate corrupt policing practices.

Excerpts (emphasis DSC):

Despite the disturbing findings, the city entered a secret partnership only a year later with data-mining firm Palantir to deploy a predictive policing system. The system used historical data, including arrest records and electronic police reports, to forecast crime and help shape public safety strategies, according to company and city government materials. At no point did those materials suggest any effort to clean or amend the data to address the violations revealed by the DOJ. In all likelihood, the corrupted data was fed directly into the system, reinforcing the department’s discriminatory practices.


But new research suggests it’s not just New Orleans that has trained these systems with “dirty data.” In a paper released today, to be published in the NYU Law Review, researchers at the AI Now Institute, a research center that studies the social impact of artificial intelligence, found the problem to be pervasive among the jurisdictions it studied. This has significant implications for the efficacy of predictive policing and other algorithms used in the criminal justice system.

“Your system is only as good as the data that you use to train it on,” says Kate Crawford, cofounder and co-director of AI Now and an author on the study.
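
Kate Crawford’s point can be made concrete in a few lines of code. In this deliberately simplified sketch — not the Palantir system, and with district names and numbers invented — patrols are allocated in proportion to historical arrest counts, so a district that was over-policed in the past automatically attracts more patrols, and therefore more recorded arrests, in the future:

```python
def allocate_patrols(arrest_history, total_patrols):
    """Assign patrols to districts in proportion to recorded arrests.

    If the historical record is skewed by over-policing, that skew is
    reproduced, not corrected: the model treats dirty data as ground truth.
    """
    total = sum(arrest_history.values())
    return {district: round(total_patrols * count / total)
            for district, count in arrest_history.items()}

# District A's higher count may reflect where officers patrolled,
# not where crime occurred -- yet it receives most of the patrols,
# which in turn generates still more recorded arrests there.
allocation = allocate_patrols({"District A": 80, "District B": 20}, total_patrols=10)
```

Real predictive-policing systems are far more elaborate, but the feedback loop is the same: biased inputs become self-reinforcing outputs.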

 

How AI is enhancing wearables — from techopedia.com by Claudio Butticev
Takeaway: Wearable devices have been helping people for years now, but the addition of AI to these wearables is giving them capabilities beyond anything seen before.

Excerpt:

Restoring Lost Sight and Hearing – Is That Really Possible?
People with sight or hearing loss must face a lot of challenges every day to perform many basic activities. From crossing the street to ordering food on the phone, even the simplest chore can quickly become a struggle. Things may change for those struggling with sight or hearing loss, however, as some companies have started developing machine learning-based systems to help the blind and visually impaired find their way across cities, and the deaf and hearing impaired enjoy some good music.

German AI company AiServe combined computer vision and wearable hardware (camera, microphone and earphones) with AI and location services to design a system that is able to acquire data over time to help people navigate through neighborhoods and city blocks. Sort of like a car navigation system, but in a much more adaptable form which can “learn how to walk like a human” by identifying all the visual cues needed to avoid common obstacles such as light posts, curbs, benches and parked cars.

 

From DSC:
So once again we see the pluses and minuses of a given emerging technology. In fact, most technologies can be used for good or for ill. But I’m left with asking the following questions:

  • As citizens, what do we do if we don’t like a direction that’s being taken on a given technology or on a given set of technologies? Or on a particular feature, use, process, or development involved with an emerging technology?

One other reflection here…it will be really interesting to see what happens when some of these emerging technologies are combined…again, for good or for ill.

The question is:
How can we weigh in?

 

Also relevant/see:

AI Now Report 2018 — from ainowinstitute.org, December 2018

Excerpt:

University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.

 

Furthermore, it is long overdue for technology companies to directly address the cultures of exclusion and discrimination in the workplace. The lack of diversity and ongoing tactics of harassment, exclusion, and unequal pay are not only deeply harmful to employees in these companies but also impacts the AI products they release, producing tools that perpetuate bias and discrimination.

The current structure within which AI development and deployment occurs works against meaningfully addressing these pressing issues. Those in a position to profit are incentivized to accelerate the development and application of systems without taking the time to build diverse teams, create safety guardrails, or test for disparate impacts. Those most exposed to harm from these systems commonly lack the financial means and access to accountability mechanisms that would allow for redress or legal appeals. This is why we are arguing for greater funding for public litigation, labor organizing, and community participation as more AI and algorithmic systems shift the balance of power across many institutions and workplaces.

 

Also relevant/see:

 

 

Getting smart about the future of AI — from technologyreview.com by MIT Technology Review Insights
Artificial intelligence is a primary driver of possibilities and promise as the Fourth Industrial Revolution unfolds.

Excerpts:

The Industrial Revolution conjures up images of steam engines, textile mills, and iron workers. This was a defining period during the late 18th and early 19th centuries, as society shifted from primarily agrarian to factory-based work. A second phase of rapid industrialization occurred just before World War I, driven by growth in steel and oil production, and the emergence of electricity.

Fast-forward to the 1980s, when digital electronics started having a deep impact on society—the dawning Digital Revolution. Building on that era is what’s called the Fourth Industrial Revolution. Like its predecessors, it is centered on technological advancements—this time it’s artificial intelligence (AI), autonomous machines, and the internet of things—but now the focus is on how technology will affect society and humanity’s ability to communicate and remain connected.

 

That’s what AI technologies represent in the current period of technological change. It is now critical to carefully consider the future of AI, what it will look like, the effect it will have on human life, and what challenges and opportunities will arise as it evolves.

 

 

See the full report here >>

 

 

Also see:

  • Where Next for AI In Business? An overview for C-level executives — from techrevolution.asia by Bernard Marr
    Excerpt:
    The AI revolution is now well underway. In finance, marketing, medicine and manufacturing, machines are learning to monitor and adapt to real-world inputs in order to operate more efficiently, without human intervention. In our everyday lives, AI kicks in whenever we search the internet, shop online or settle down on the sofa to watch Netflix or listen to Spotify. At this point, it’s safe to say that AI is no longer the preserve of science fiction, but has already changed our world in a huge number of different ways.

    So: what next? Well, the revolution is showing no signs of slowing down. Research indicates that businesses, encouraged by the initial results they have seen, are now planning on stepping up investment and deployment of AI.

    One of the most noticeable advances will be the ongoing “democratization” of AI. What this means, put simply, is that AI-enabled business tools will increasingly become available to all of us, no matter what jobs we do.

 

You’ll no longer need to be an expert in computer science to use AI to do your job efficiently – this is the “democratization” of AI and it’s a trend which will impact more and more businesses going forward.

 

 

Google and Microsoft warn that AI may do dumb things — from wired.com by Tom Simonite

Excerpt:

Alphabet likes to position itself as a leader in AI research, but it was six months behind rival Microsoft in warning investors about the technology’s ethical risks. The AI disclosure in Google’s latest filing reads like a trimmed down version of much fuller language Microsoft put in its most recent annual SEC report, filed last August:

“AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions. These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm.”

 

Chinese company leaves Muslim-tracking facial recognition database exposed online — from zdnet.com by Catalin Cimpanu
Researcher finds one of the databases used to track Uyghur Muslim population in Xinjiang.

Excerpt:

One of the facial recognition databases that the Chinese government is using to track the Uyghur Muslim population in the Xinjiang region has been left open on the internet for months, a Dutch security researcher told ZDNet.

The database belongs to a Chinese company named SenseNets, which according to its website provides video-based crowd analysis and facial recognition technology.

The user data wasn’t just benign usernames, but highly detailed and highly sensitive information that someone would usually find on an ID card, Gevers said. The researcher saw user profiles with information such as names, ID card numbers, ID card issue date, ID card expiration date, sex, nationality, home addresses, dates of birth, photos, and employer.

Some of the descriptive names associated with the “trackers” contained terms such as “mosque,” “hotel,” “police station,” “internet cafe,” “restaurant,” and other places where public cameras would normally be found.

 

From DSC:
Readers of this blog will know that I’m generally pro-technology. But especially focusing in on that last article, to me, privacy is key here. Which group of people, from which nation, is next? Will Country A next be tracking Christians? Will Country B be tracking people of a given sexual orientation? Will Country C be tracking people with some other characteristic?

Where does it end? Who gets to decide? What will be the costs of being tracked or being a person with whatever certain characteristic one’s government is tracking? What forums are there for combating technologies or features of technologies that we don’t like or want?

We need forums/channels for raising awareness and voting on these emerging technologies. We need informed legislators, senators, lawyers, citizens…we need new laws here…asap.


The real reason tech struggles with algorithmic bias — from wired.com by Yael Eisenstat

Excerpts:

ARE MACHINES RACIST? Are algorithms and artificial intelligence inherently prejudiced? Do Facebook, Google, and Twitter have political biases? Those answers are complicated.

But if the question is whether the tech industry is doing enough to address these biases, the straightforward response is no.

Humans cannot wholly avoid bias, as countless studies and publications have shown. Insisting otherwise is an intellectually dishonest and lazy response to a very real problem.

In my six months at Facebook, where I was hired to be the head of global elections integrity ops in the company’s business integrity division, I participated in numerous discussions about the topic. I did not know anyone who intentionally wanted to incorporate bias into their work. But I also did not find anyone who actually knew what it meant to counter bias in any true and methodical way.

 

But the company has created its own sort of insular bubble in which its employees’ perception of the world is the product of a number of biases that are engrained within the Silicon Valley tech and innovation scene.

 

 

Making New Drugs With a Dose of Artificial Intelligence — from nytimes.com by Cade Metz

Excerpt:

DeepMind specializes in “deep learning,” a type of artificial intelligence that is rapidly changing drug discovery science. A growing number of companies are applying similar methods to other parts of the long, enormously complex process that produces new medicines. These A.I. techniques can speed up many aspects of drug discovery and, in some cases, perform tasks typically handled by scientists.

“It is not that machines are going to replace chemists,” said Derek Lowe, a longtime drug discovery researcher and the author of In the Pipeline, a widely read blog dedicated to drug discovery. “It’s that the chemists who use machines will replace those that don’t.”

 

 

 

135 Million Reasons To Believe In A Blockchain Miracle — from forbes.com by Mike Maddock

Excerpts:

Which brings us to the latest headlines about a cryptocurrency entrepreneur’s passing—taking with him the passcode to unlock C$180 million (about $135 million U.S.) in investor currency—which is now reportedly gone forever. Why? Because apparently, the promise of blockchain is true: It cannot be hacked. It is absolutely trustworthy.

Gerald Cotten, the CEO of a crypto company, reportedly passed away recently while building an orphanage in India. Unfortunately, he was the only person who knew the passcode to access the millions his investors had entrusted to him.

This is how we get the transition to Web 3.0.

Some questions to consider:

  • Who will build an easy-to-use “wallet” of the future?
  • Are we responsible enough to handle that much power?

Perhaps the most important question of all is: What role do our “trusted” experts play in this future?

 


From DSC:
I’d like to add another question to Mike’s article:

  • How should law schools, law firms, legislative bodies, governments, etc. deal with the new, exponential pace of change and with the power of emerging technologies?

© 2024 | Daniel Christian