Gartner: Top 10 Strategic Technologies Impacting Higher Ed in 2019 — from campustechnology.com by Rhea Kelly

Excerpt:

  • Artificial intelligence conversational interfaces. Gartner defines these as “a subset of conversational user interfaces (CUIs), in which user and machine interactions occur in the user’s spoken or written natural language.” The benefit for higher ed institutions: “CUIs place responsibility on the machine interface to learn what the user wants, rather than the user having to learn the software, saving user time, increasing student satisfaction, and being available to use 24/7.”
  • Smart campus. This is “a physical or digital environment in which humans and technology-enabled systems interact to create more immersive and automated experiences for university stakeholders.” While smart campus initiatives are still in the early stages, there has been a rising interest across higher ed institutions, according to Gartner. “The smart campus will drive growth in markets like robotic process automation solutions and augmented and virtual reality in the higher education space. Campus efficiency will be enhanced and student learning will be enriched with the new capabilities they bring. It’s a win all-around, except for the data security implications that come with most technology initiatives today,” said Morgan.
  • Digital credentialing technologies. “Students, faculty and the higher education institutions they are a part of are starting to expect the ability to quickly and freely exchange credentials to enhance the verification and recruitment process,” noted Gartner. Technologies such as blockchain and data encryption are driving change in this area. “In many ways, credentials issued by an education institution are the only tangible evidence of higher education. They should be considered the currency of the education ecosystem,” said Morgan. “These technologies really enable universities to leverage technology to improve the student experience by giving them more control over their information. The only hurdle is a general lack of understanding of digital credentialing technologies and risk-averseness in the high-stakes nature of the higher education market.”


Map of fundamental technologies in legal services — from remakinglawfirms.com by Michelle Mahoney

Excerpt:
The Map is designed to help make sense of the trends we are observing:

  • an increasing number of legal technology offerings;
  • the increasing effectiveness of legal technologies;
  • emerging new categories of legal technology;
  • the layering and combining of fundamental technology capabilities; and
  • the maturation of machine learning, natural language processing and deep learning artificial intelligence.

Given the exponential nature of the technologies, the Fundamental Technologies Map can only depict the landscape at the current point in time.


Information processing in legal services (PDF file)


Also see:
Delta Model Update: The Most Important Area of Lawyer Competency — Personal Effectiveness Skills — from legalexecutiveinstitute.com by Natalie Runyon

Excerpt:

Many legal experts say the legal industry is at an inflection point because the pace of change is being driven by many factors — technology, client demand, disaggregation of matter workflow, the rise of Millennials approaching mid-career status, and the faster pace of business in general.

That technology spend by law firms continues to be a primary area of investment underscores that the pace of change is continuing to accelerate, with the ongoing rise of big data and workflow technologies greatly influencing how lawyering gets done. Moreover, combined with big unstructured data, artificial intelligence (AI) is creating opportunities to analyze siloed data sets to gain insights in numerous new ways.


A Chinese subway is experimenting with facial recognition to pay for fares — from theverge.com by Shannon Liao

Excerpt:

Scanning your face on a screen to get into the subway might not be that far off in the future. In China’s tech capital, Shenzhen, a local subway operator is testing facial recognition subway access, powered by a 5G network, as spotted by the South China Morning Post.

The trial is limited to a single station thus far, and it’s not immediately clear how this will work for twins or lookalikes. People entering the station can scan their faces on the screen where they would normally have tapped their phones or subway cards. Their fare then gets automatically deducted from their linked accounts. They will need to have registered their facial data beforehand and linked a payment method to their subway account.
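The flow described above boils down to three steps: riders enroll facial data and a linked payment method ahead of time, a face is matched at the gate, and the fare is then deducted automatically. Below is a minimal sketch of that flow; the class names and the naive matching step are assumptions made purely for illustration, since the operator's actual system is not public.

```python
# Illustrative sketch of the fare flow described above: prior facial
# registration plus a linked payment account, then automatic deduction at the
# gate. Names and the exact-match step are hypothetical, not the real system.

from dataclasses import dataclass

@dataclass
class Rider:
    rider_id: str
    face_template: bytes   # enrolled facial data (format left unspecified here)
    balance: float         # balance on the linked payment account

class FareGate:
    def __init__(self, fare: float):
        self.fare = fare
        self.riders: dict[str, Rider] = {}

    def register(self, rider: Rider) -> None:
        """Riders enroll facial data and link a payment method beforehand."""
        self.riders[rider.rider_id] = rider

    def match_face(self, scan: bytes) -> Rider | None:
        """Placeholder for the face-matching step, the hard and error-prone part."""
        for rider in self.riders.values():
            if rider.face_template == scan:   # real systems compare similarity scores
                return rider
        return None

    def enter(self, scan: bytes) -> bool:
        """Scan at the gate: on a match, deduct the fare automatically."""
        rider = self.match_face(scan)
        if rider is None or rider.balance < self.fare:
            return False                      # fall back to tapping a phone or card
        rider.balance -= self.fare
        return True

gate = FareGate(fare=2.0)
gate.register(Rider("alice", b"alice-face", balance=10.0))
print(gate.enter(b"alice-face"))   # True; balance drops to 8.0
```

In practice, the matching step would return a similarity score rather than an exact comparison, which is exactly where the question about twins and lookalikes comes in.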


From DSC:
I don’t want this type of thing here in the United States. But…now what do I do? What about you? What can we do? What paths are open to us to stop this?

I would argue that the new, developing, technological “Wild Wests” in many societies throughout the globe could be dangerous to our futures. Why? Because the pace of change has changed. And these new Wild Wests now have emerging, powerful, ever-more invasive (i.e., privacy-stealing) technologies to deal with — the likes of which the world has never seen or encountered before. With this new, rapid pace of change, societies aren’t able to keep up.

And who is going to use the data? Governments? Large tech companies? Other?

Don’t get me wrong, I’m generally pro-technology. But this new pace of change could wreak havoc on us. We need time to weigh in on these emerging techs.


Addendum on 3/20/19:

  • Chinese Facial Recognition Database Exposes 2.5 Million People — from futurumresearch.com by Shelly Kramer
    Excerpt:
    An artificial intelligence company operating a facial recognition system in China recently left its database exposed online, leaving the personal information of some 2.5 million Chinese citizens vulnerable. Considering how much the Chinese government relies on facial recognition technology, this is a big deal—for both the Chinese government and Chinese citizens.


From DSC:
Our family uses AT&T for our smartphones and for our Internet access. What I would really like from AT&T is the ability to speak to an app (either on a smartphone, or via routers that morph into Alexa-type devices) and tell my router what I want it to do:

“Turn off Internet access tonight from 9pm until 6am tomorrow morning.”
“Only allow Internet access for parents’ accounts.”
“Upgrade my bandwidth for the next 2 hours.”

Upon startup, the app would ask whether I wanted to set up any “admin” accounts…and, if so, would recognize that voice/those voices as having authority and control over the device.
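To make the idea concrete, here is a minimal sketch of how spoken requests like the ones above might be mapped to router actions once they have been transcribed to text. The command patterns, the RouterAPI methods, and the admin-voice check are all assumptions for illustration; no actual AT&T app or API is implied.

```python
# Hypothetical sketch: map an already-transcribed spoken request to a router
# action, honoring the "admin voice" idea described above. The RouterAPI
# methods and command patterns are illustrative assumptions only.

import re

class RouterAPI:
    """Stand-in for a hypothetical router control API (not a real AT&T interface)."""
    def schedule_internet_off(self, start: str, end: str) -> None:
        print(f"Internet access off from {start} until {end}")

    def restrict_to_accounts(self, accounts: list[str]) -> None:
        print(f"Internet access limited to: {', '.join(accounts)}")

    def boost_bandwidth(self, hours: int) -> None:
        print(f"Bandwidth upgraded for the next {hours} hour(s)")

def handle_command(text: str, speaker_is_admin: bool, router: RouterAPI) -> str:
    """Map a transcribed spoken request to a router action, admin voices only."""
    if not speaker_is_admin:
        return "Sorry, only a registered admin voice can change router settings."

    text = text.lower()
    if m := re.search(r"turn off internet .*?from (\S+) until (\S+)", text):
        router.schedule_internet_off(m.group(1), m.group(2))
    elif "only allow internet access for" in text:
        router.restrict_to_accounts(["parents"])
    elif m := re.search(r"upgrade my bandwidth for the next (\d+) hours?", text):
        router.boost_bandwidth(int(m.group(1)))
    else:
        return "Sorry, I didn't catch that."
    return "Done."

router = RouterAPI()
print(handle_command("Turn off Internet access tonight from 9pm until 6am tomorrow morning.", True, router))
```

The interesting design question is the “admin voice” piece: the app would need speaker recognition good enough to distinguish a parent’s voice from a child’s before honoring a command like the first one.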

Would you use this type of interface? I know I would!

P.S. I’d like to be able to speak to our thermostat in that sort of way as well.


Is this South Africa’s best legal online platform using blockchain? — from techfinancials.co.za
The winners of the event, Kagiso, will progress to the second round of the contest, in which a panel of international judges will decide who attends a grand final in New York.

Excerpt:

The Hague Institute for Innovation of Law (HiiL) and leading global law firm Baker McKenzie have announced the winners of the South African leg of Global Legal Hackathon 2019 (GLH2019).

First prize went to Kagiso, an online mediation platform that provides a cost-effective and fast alternative to lengthy court processes for civil disputes.

Kagiso uses machine learning to match cases with professional mediators who have the most relevant skill sets to be effective – such as subject matter experience or knowledge of local languages – and stores records using blockchain technology.

The second prize was awarded to Bua, a voice-recognition system that allows victims of crime to record their own statements in their own language in a private “safe space” such as a kiosk or on their own phone.

The majority of crimes in South Africa go unreported or prosecutions fail, and a leading reason is that victims don’t feel comfortable giving statements in open police stations, and statements are often badly or wilfully mistranslated.
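The excerpt says Kagiso uses machine learning to pair cases with mediators. As a much simpler illustration of the underlying matching idea, the sketch below just scores each mediator by how many of a case’s needs (subject matter, language) they cover; it is not Kagiso’s actual implementation.

```python
# Illustrative sketch of skills-based matching between a civil dispute and
# professional mediators, scoring each mediator by overlap with the case's
# needs. A simple heuristic stand-in for the ML matching described above.

from dataclasses import dataclass

@dataclass
class Mediator:
    name: str
    skills: set[str]   # e.g. {"tenant disputes", "isiZulu"}

def rank_mediators(case_needs: set[str], mediators: list[Mediator]) -> list[Mediator]:
    """Order mediators by how many of the case's needs (skills, languages) they cover."""
    return sorted(mediators, key=lambda m: len(m.skills & case_needs), reverse=True)

case_needs = {"tenant disputes", "isiZulu"}
pool = [
    Mediator("Mediator A", {"tenant disputes", "English"}),
    Mediator("Mediator B", {"tenant disputes", "isiZulu"}),
    Mediator("Mediator C", {"commercial contracts"}),
]
print([m.name for m in rank_mediators(case_needs, pool)])   # ['Mediator B', 'Mediator A', 'Mediator C']
```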


The Global Legal Hackathon is a non-profit organization that organizes law schools, law firms and in-house departments, legal technology companies, governments, and service providers to drive innovation in the legal industry – across the globe. It brings together the best thinkers, doers and practitioners in law in support of a unified vision: rapid development of solutions to improve the legal industry, worldwide.


From DSC:
Glancing through the awards likely shows where the future of the legal field is going…at least in part.


Also see:


Police across the US are training crime-predicting AIs on falsified data — from technologyreview.com by Karen Hao
A new report shows how supposedly objective systems can perpetuate corrupt policing practices.

Excerpts (emphasis DSC):

Despite the disturbing findings, the city entered a secret partnership only a year later with data-mining firm Palantir to deploy a predictive policing system. The system used historical data, including arrest records and electronic police reports, to forecast crime and help shape public safety strategies, according to company and city government materials. At no point did those materials suggest any effort to clean or amend the data to address the violations revealed by the DOJ. In all likelihood, the corrupted data was fed directly into the system, reinforcing the department’s discriminatory practices.


But new research suggests it’s not just New Orleans that has trained these systems with “dirty data.” In a paper released today, to be published in the NYU Law Review, researchers at the AI Now Institute, a research center that studies the social impact of artificial intelligence, found the problem to be pervasive among the jurisdictions it studied. This has significant implications for the efficacy of predictive policing and other algorithms used in the criminal justice system.

“Your system is only as good as the data that you use to train it on,” says Kate Crawford, cofounder and co-director of AI Now and an author on the study.


How AI is enhancing wearables — from techopedia.com by Claudio Butticev
Takeaway: Wearable devices have been helping people for years now, but the addition of AI to these wearables is giving them capabilities beyond anything seen before.

Excerpt:

Restoring Lost Sight and Hearing – Is That Really Possible?
People with sight or hearing loss must face a lot of challenges every day to perform many basic activities. From crossing the street to ordering food on the phone, even the simplest chore can quickly become a struggle. Things may change for those struggling with sight or hearing loss, however, as some companies have started developing machine learning-based systems to help the blind and visually impaired find their way across cities, and the deaf and hearing impaired enjoy some good music.

German AI company AiServe combined computer vision and wearable hardware (camera, microphone and earphones) with AI and location services to design a system that is able to acquire data over time to help people navigate through neighborhoods and city blocks. Sort of like a car navigation system, but in a much more adaptable form which can “learn how to walk like a human” by identifying all the visual cues needed to avoid common obstacles such as light posts, curbs, benches and parked cars.


From DSC:
So once again we see the pluses and minuses of a given emerging technology. In fact, most technologies can be used for good or for ill. But I’m left asking the following questions:

  • As citizens, what do we do if we don’t like a direction that’s being taken on a given technology or on a given set of technologies? Or on a particular feature, use, process, or development involved with an emerging technology?

One other reflection here…it will be really interesting to see what happens when some of these emerging technologies are combined…again, for good or for ill.

The question is:
How can we weigh in?


Also relevant/see:

AI Now Report 2018 — from ainowinstitute.org, December 2018

Excerpt:

University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.


Furthermore, it is long overdue for technology companies to directly address the cultures of exclusion and discrimination in the workplace. The lack of diversity and ongoing tactics of harassment, exclusion, and unequal pay are not only deeply harmful to employees in these companies but also impact the AI products they release, producing tools that perpetuate bias and discrimination.

The current structure within which AI development and deployment occurs works against meaningfully addressing these pressing issues. Those in a position to profit are incentivized to accelerate the development and application of systems without taking the time to build diverse teams, create safety guardrails, or test for disparate impacts. Those most exposed to harm from these systems commonly lack the financial means and access to accountability mechanisms that would allow for redress or legal appeals. This is why we are arguing for greater funding for public litigation, labor organizing, and community participation as more AI and algorithmic systems shift the balance of power across many institutions and workplaces.


Also relevant/see:


Google and Microsoft warn that AI may do dumb things — from wired.com by Tom Simonite

Excerpt:

Alphabet likes to position itself as a leader in AI research, but it was six months behind rival Microsoft in warning investors about the technology’s ethical risks. The AI disclosure in Google’s latest filing reads like a trimmed down version of much fuller language Microsoft put in its most recent annual SEC report, filed last August:

“AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions. These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm.”


Chinese company leaves Muslim-tracking facial recognition database exposed online — from zdnet.com by Catalin Cimpanu
Researcher finds one of the databases used to track Uyghur Muslim population in Xinjiang.

Excerpt:

One of the facial recognition databases that the Chinese government is using to track the Uyghur Muslim population in the Xinjiang region has been left open on the internet for months, a Dutch security researcher told ZDNet.

The database belongs to a Chinese company named SenseNets, which according to its website provides video-based crowd analysis and facial recognition technology.

The user data wasn’t just benign usernames, but highly detailed and highly sensitive information that someone would usually find on an ID card, Gevers said. The researcher saw user profiles with information such as names, ID card numbers, ID card issue date, ID card expiration date, sex, nationality, home addresses, dates of birth, photos, and employer.

Some of the descriptive names associated with the “trackers” contained terms such as “mosque,” “hotel,” “police station,” “internet cafe,” “restaurant,” and other places where public cameras would normally be found.


From DSC:
Readers of this blog will know that I’m generally pro-technology. But especially focusing in on that last article, to me, privacy is key here. Which group of people from which nation is next? Will Country A next be tracking Christians? Will Country B be tracking people of a given sexual orientation? Will Country C be tracking people with some other characteristic?

Where does it end? Who gets to decide? What will be the costs of being tracked or being a person with whatever certain characteristic one’s government is tracking? What forums are there for combating technologies or features of technologies that we don’t like or want?

We need forums/channels for raising awareness and voting on these emerging technologies. We need informed legislators, senators, lawyers, citizens…we need new laws here…asap.


C-Level View | Feature:
Technology Change: Closing the Knowledge Gap — A Q&A with Mary Grush & Daniel Christian

Excerpts:

Technology changes quickly. People change slowly. The rate of technology change often outpaces our ability to understand it.

It has caused a gap between what’s possible and what’s legal. For example, facial recognition seems to be starting to show up all around us — that’s what’s possible. But what’s legal?

The overarching questions are: What do we really want from these technologies? What kind of future do we want to live in?

Those law schools that expand their understanding of emerging technologies and lead the field in the exploration of related legal issues will achieve greater national prominence.

Daniel Christian


For a next gen learning platform: A Netflix-like interface to check out potential functionalities / educationally-related “apps” [Christian]

From DSC:
In a next generation learning system, it would be sharp/beneficial to have a Netflix-like interface to check out potential functionalities that you could turn on and off (at will) — as one component of your learning ecosystem that could feature a setup located in your living room or office.

For example, put a Netflix-like interface over the apps at eduappcenter.com (i.e., using a rolling interface at first, then going to a static page/listing of apps…again, similar to Netflix).
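As a rough sketch of what might sit underneath such an interface, consider a catalog of educationally related “apps” grouped into category rows that a learner could browse and toggle on or off. The data shape and method names below are assumptions for illustration only, not a description of any existing platform.

```python
# Minimal sketch of a browsable catalog of learning "apps" that a user can
# toggle on or off for their own learning environment, grouped into rows the
# way a Netflix-style interface would present them. Purely illustrative.

from dataclasses import dataclass, field

@dataclass
class LearningApp:
    name: str
    category: str        # e.g. "Assessment", "Collaboration"
    enabled: bool = False

@dataclass
class LearnerSetup:
    catalog: list[LearningApp] = field(default_factory=list)

    def rows(self) -> dict[str, list[LearningApp]]:
        """Group apps by category: one row per category, as a carousel view would show them."""
        grouped: dict[str, list[LearningApp]] = {}
        for app in self.catalog:
            grouped.setdefault(app.category, []).append(app)
        return grouped

    def toggle(self, name: str) -> None:
        """Turn a given functionality on or off at will."""
        for app in self.catalog:
            if app.name == name:
                app.enabled = not app.enabled

setup = LearnerSetup([
    LearningApp("Quiz Builder", "Assessment"),
    LearningApp("Peer Review", "Collaboration"),
])
setup.toggle("Quiz Builder")
print({a.name: a.enabled for a in setup.catalog})   # {'Quiz Builder': True, 'Peer Review': False}
```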


A Netflix-like interface to check out potential functionalities / educationally-related apps


Accenture Technology Vision 2019: The post-digital era is upon us — from accenture.com

In brief

  • Digital transformation grants companies exceptional capabilities. But it also creates enormous expectations.
  • Amid these rising expectations, every business is investing in digital technologies, raising the question of how leaders will set themselves apart.
  • Companies looking to differentiate themselves must be aware of five distinct trends that will characterize the “post-digital” future.


Here is the link for the report.


Emerging technology trends can seem both elusive and ephemeral, but some become integral to business and IT strategies—and form the backbone of tomorrow’s technology innovation. The eight chapters of Tech Trends 2019 look to guide CIOs through today’s most promising trends, with an eye toward innovation and growth and a spotlight on emerging trends that may well offer new avenues for pursuing strategic ambitions.


Amazon is pushing facial technology that a study says could be biased — from nytimes.com by Natasha Singer
In new tests, Amazon’s system had more difficulty identifying the gender of female and darker-skinned faces than similar services from IBM and Microsoft.

Excerpt:

Over the last two years, Amazon has aggressively marketed its facial recognition technology to police departments and federal agencies as a service to help law enforcement identify suspects more quickly. It has done so as another tech giant, Microsoft, has called on Congress to regulate the technology, arguing that it is too risky for companies to oversee on their own.

Now a new study from researchers at the M.I.T. Media Lab has found that Amazon’s system, Rekognition, had much more difficulty in telling the gender of female faces and of darker-skinned faces in photos than similar services from IBM and Microsoft. The results raise questions about potential bias that could hamper Amazon’s drive to popularize the technology.



UIX: When the future comes to West Michigan, will we be ready? — from rapidgrowthmedia.com by Matthew Russell

Excerpts (emphasis DSC):

“Here in the United States, if we were to personify things a bit, it’s almost like society is anxiously calling out to an older sibling (i.e., emerging technologies), ‘Heh! Wait up!!!'” Christian says. “This trend has numerous ramifications.”

Out of those ramifications, Christian names three main points that society will have to address to fully understand, make use of, and make practical, future technologies.

  1. The need for the legal/legislative side of the world to close the gap between what’s possible and what’s legal
  2. The need for lifelong learning and to reinvent oneself
  3. The need to make pulse-checking/futurism an essential tool in the toolbox of every member of the workforce today and in the future


When the future comes to West Michigan, will we be ready?

Photos by Adam Bird


From DSC:
The key thing that I was trying to relay in my contribution to Matthew’s helpful article was that we are now on an exponential trajectory of technological change. This trend has ramifications for numerous societies around the globe, and it involves the legal realm as well. Hopefully, all of us in the workforce are coming to realize our need to be constantly pulse-checking the relevant landscapes around us. To help make that happen, each of us needs to be tapping into the appropriate “streams of content” that are relevant to our careers so that our knowledge bases are as up-to-date as possible. We’re all into lifelong learning now, right?

Along these lines, increasingly there is a need for futurism to hit the mainstream. That is, when the world is moving at 120+mph, the skills and methods that futurists follow must be better taught and understood, or many people will be broadsided by the changes brought about by emerging technologies. We need to better pulse-check the relevant landscapes, anticipate the oncoming changes, develop potential scenarios, and then design the strategies to respond to those potential scenarios.


Towards a Reskilling Revolution: Industry-Led Action for the Future of Work — from weforum.org

As the Fourth Industrial Revolution impacts skills, tasks and jobs, there is growing concern that both job displacement and talent shortages will impact business dynamism and societal cohesion. A proactive and strategic effort is needed on the part of all relevant stakeholders to manage reskilling and upskilling to mitigate against both job losses and talent shortages.

Through the Preparing for the Future of Work project, the World Economic Forum provides a platform for designing and implementing intra-industry collaboration on the future of work, working closely with the public sector, unions and educators. The output of the project’s first phase of work, Towards a Reskilling Revolution: A Future of Jobs for All, highlighted an innovative method to identify viable and desirable job transition pathways for disrupted workers. This second report, Towards a Reskilling Revolution: Industry-Led Action for the Future of Work extends our previous research to assess the business case for reskilling and establish its magnitude for different stakeholders. It also outlines a roadmap for selected industries to address specific challenges and opportunities related to the transformation of their workforce.


See the PDF file / report here.


What does it say when a legal blockchain eBook has 1.7M views? — from legalmosaic.com by Mark A. Cohen

Excerpts (emphasis DSC):

“Blockchain For Lawyers,” a recently released eBook by Australian legal tech company Legaler, drew 1.7M views in two weeks. What does that staggering number say about blockchain, legal technology, and the legal industry? Clearly, blockchain is a hot legal topic, along with artificial intelligence (AI) and legal tech generally.

Legal practice and delivery are each changing. New practice areas like cryptocurrency, cybersecurity, and Internet law are emerging as law struggles to keep pace with the speed of business change in the digital age. Concurrently, several staples of traditional practice–research, document review, etc.– are becoming automated and/or no longer performed by law firm associates. There is more “turnover” of practice tasks, more reliance on machines and non-licensed attorneys to mine data and provide domain expertise used by lawyers, and more collaboration than ever before. The emergence of new industries demands that lawyers not only provide legal expertise in support of new areas but also that they possess intellectual agility to master them quickly. Many practice areas law students will encounter have yet to be created. That means that all lawyers will be required to be more agile than their predecessors and engage in ongoing training.

© 2025 | Daniel Christian