From DSC:
The following article made me wonder, once again, about the place of technology in the access-to-justice realm.


Legal Tech Startup Lexion Tasks GPT-3 to Help Draft Contracts in Microsoft Word — from voicebot.ai by Eric Hal Schwartz

Excerpt:

Lawyers can ask GPT-3 to help write contracts in Microsoft Word thanks to legal tech startup Lexion’s new AI Contract Assist Word plugin. The new tool offers assistance in drafting and negotiating terms, as well as summarizing the contract for those not versed in legal language, and it marks the growing interest in applying generative AI within the legal profession.

Lexion’s AI Contract Assistant is designed to compose, adjust, and explain contracts with an eye toward streamlining their creation and approval. Lawyers with the Word plugin can write a prompt describing the goal of a contract clause, and the AI will generate one with appropriate language. 
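
From DSC:
To make that workflow a bit more concrete, here is a minimal sketch of how a GPT-3-backed assistant might turn a plain-language prompt into a draft clause. This is not Lexion’s actual code; the model choice, prompt wording, and parameters below are illustrative assumptions using OpenAI’s (legacy) completion API.

```python
# Minimal sketch (not Lexion's implementation): ask a GPT-3 completion model
# to draft a contract clause from a plain-language description of its goal.
import os

import openai  # legacy openai-python (0.x) Completion API

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes an API key is configured

goal = "a mutual confidentiality clause that survives termination for three years"

response = openai.Completion.create(
    model="text-davinci-003",  # GPT-3-era model; illustrative choice
    prompt=(
        "You are assisting a lawyer drafting a contract in Microsoft Word.\n"
        f"Draft a contract clause that accomplishes the following goal: {goal}\n"
        "Use precise legal language and return only the clause text."
    ),
    max_tokens=400,
    temperature=0.2,  # keep the drafting conservative
)

draft_clause = response["choices"][0]["text"].strip()
print(draft_clause)  # the lawyer still reviews and edits the generated clause
```

The plugin presumably layers Word integration and an approval workflow on top of something like this; the key point is that the model drafts while the lawyer remains the editor of record.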

 

ChatGPT, Chatbots and Artificial Intelligence in Education — from ditchthattextbook.com by Matt Miller
AI just stormed into the classroom with the emergence of ChatGPT. How do we teach now that it exists? How can we use it? Here are some ideas.

Excerpt:
Now, we’re wondering …

  • What is ChatGPT? And, more broadly, what are chatbots and AI?
  • How is this going to impact education?
  • How can I teach tomorrow knowing that this exists?
  • Can I use this as a tool for teaching and learning?
  • Should we block it through the school internet filter — or try to ban it?

Also relevant/see:

We gave ChatGPT a college-level microbiology quiz. It blew the quiz away. — from bigthink.com by Dr. Alex Berezow
ChatGPT’s capabilities are astonishing.

Key takeaways:

  • The tech world is abuzz over ChatGPT, a chatbot that is said to be the most advanced ever made.
  • It can create poems, songs, and even computer code. It convincingly constructed a passage of text on how to remove a peanut butter sandwich from a VCR, in the voice of the King James Bible.
  • As a PhD microbiologist, I devised a 10-question quiz that would be appropriate as a final exam for college-level microbiology students. ChatGPT blew it away.

ChatGPT Is Dumber Than You Think — from theatlantic.com by Ian Bogost
Treat it like a toy, not a tool.

Excerpt:

On the one hand, yes, ChatGPT is capable of producing prose that looks convincing. But on the other hand, what it means to be convincing depends on context. The kind of prose you might find engaging and even startling in the context of a generative encounter with an AI suddenly seems just terrible in the context of a professional essay published in a magazine such as The Atlantic. And, as Warner’s comments clarify, the writing you might find persuasive as a teacher (or marketing manager or lawyer or journalist or whatever else) might have been so by virtue of position rather than meaning: The essay was extant and competent; the report was in your inbox on time; the newspaper article communicated apparent facts that you were able to accept or reject.

I Would Have Cheated in College Using ChatGPT — from eliterate.us by Michael Feldstein

Excerpt:

These lines of demarcation—the lines between when a tool can do all of a job, some of it, or none of it—are both constantly moving and critical to watch. Because they define knowledge work and point to the future of work. We need to be teaching people how to do the kinds of knowledge work that computers can’t do well and are not likely to be able to do well in the near future. Much has been written about the economic implications of the AI revolution, some of which are problematic for the employment market. But we can put too much emphasis on that part. Learning about artificial intelligence can be a means for exploring, appreciating, and refining natural intelligence. These tools are fun. I learn from using them. Those two statements are connected.

Google to Rival OpenAI’s ChatGPT? New AI Bot for Chats in 2023, CEO Claims to Use it for Search — from techtimes.com by Isaiah Richard
Google to expand its Search engine with AI for 2023, but not in a creepy way.

Excerpt:

Google is planning to create a new AI feature for its Search engine, one that would rival the recently released and controversial ChatGPT from OpenAI. The company revealed this after a recent executive meeting, involving CEO Sundar Pichai and AI head Jeff Dean, that discussed the technology the internet company already has and could soon put into development.

Employees of the Mountain View giant were concerned that the company had fallen behind the likes of OpenAI on current AI trends, despite already having similar technology lying around.


And more focused on the business/vocational/corporate training worlds:

Sana raises $34M for its AI-based knowledge management and learning platform for workplaces — from techcrunch.com by Ingrid Lunden

There are a lot of knowledge management, enterprise learning and enterprise search products on the market today, but what Sana believes it has struck on uniquely is a platform that combines all three to work together: a knowledge management-meets-enterprise-search-meets-e-learning platform.

Exclusive: ChatGPT owner OpenAI projects $1 billion in revenue by 2024 — from reuters.com by Jeffrey Dastin, Krystal Hu, and Paresh Dave

Excerpt:

Three sources briefed on OpenAI’s recent pitch to investors said the organization expects $200 million in revenue next year and $1 billion by 2024.

The forecast, first reported by Reuters, represents how some in Silicon Valley are betting the underlying technology will go far beyond splashy and sometimes flawed public demos.

“We’re going to see advances in 2023 that people two years ago would have expected in 2033. It’s going to be extremely important not just for Microsoft’s future, but for everyone’s future,” he said in an interview this week.


Addendum on 12/21/22:

ChatGPT and higher education: last week and this week — from bryanalexander.org by Bryan Alexander

 

AI bot ChatGPT stuns academics with essay-writing skills and usability — from theguardian.com by Alex Hern
Latest chatbot from Elon Musk-founded OpenAI can identify incorrect premises and refuse to answer inappropriate requests

Excerpt:

Professors, programmers and journalists could all be out of a job in just a few years, after the latest chatbot from the Elon Musk-founded OpenAI foundation stunned onlookers with its writing ability, proficiency at complex tasks, and ease of use.

The system, called ChatGPT, is the latest evolution of the GPT family of text-generating AIs. Two years ago, the team’s previous AI, GPT3, was able to generate an opinion piece for the Guardian, and ChatGPT has significant further capabilities.

In the days since it was released, academics have generated responses to exam queries that they say would result in full marks if submitted by an undergraduate, and programmers have used the tool to solve coding challenges in obscure programming languages in a matter of seconds – before writing limericks explaining the functionality.

 


Also related/see:


AI and the future of undergraduate writing — from chronicle.com by Beth McMurtrie

Excerpts:

Is the college essay dead? Are hordes of students going to use artificial intelligence to cheat on their writing assignments? Has machine learning reached the point where auto-generated text looks like what a typical first-year student might produce?

And what does it mean for professors if the answer to those questions is “yes”?

Scholars of teaching, writing, and digital literacy say there’s no doubt that tools like ChatGPT will, in some shape or form, become part of everyday writing, the way calculators and computers have become integral to math and science. It is critical, they say, to begin conversations with students and colleagues about how to shape and harness these AI tools as an aide, rather than a substitute, for learning.

“Academia really has to look at itself in the mirror and decide what it’s going to be,” said Josh Eyler, director of the Center for Excellence in Teaching and Learning at the University of Mississippi, who has criticized the “moral panic” he has seen in response to ChatGPT. “Is it going to be more concerned with compliance and policing behaviors and trying to get out in front of cheating, without any evidence to support whether or not that’s actually going to happen? Or does it want to think about trust in students as its first reaction and building that trust into its response and its pedagogy?”

 

 

 

ChatGPT Could Be AI’s iPhone Moment — from bloomberg.com by Vlad Savov; with thanks to Dany DeGrave for his Tweet on this

Excerpt:

The thing is, a good toy has a huge advantage: People love to play with it, and the more they do, the quicker its designers can make it into something more. People are documenting their experiences with ChatGPT on Twitter, looking like giddy kids experimenting with something they’re not even sure they should be allowed to have. There’s humor, discovery and a game of figuring out the limitations of the system.

 


And on the legal side of things:


 

The Top 10 Digital Health Stories Of 2022 — from medicalfuturist.com by Dr. Bertalan Mesko

Excerpt:

Edging towards the end of the year, it is time for a summary of how digital health progressed in 2022. It is easy to get lost in the noise – I myself shared well over a thousand articles, studies and news items between January and the end of November 2022. Thus, just like in 2021, 2020 (and so on), I picked the 10 topics I believe will have the most significance in the future of healthcare.

9. Smart TVs Becoming A Remote Care Platform
The concept of turning one’s TV into a remote care hub isn’t new. Back in 2012, researchers designed a remote health assistance system for the elderly to use through a TV set. But we are exploring this idea now as a major tech company has recently pushed for telehealth through TVs. In early 2022, electronics giant LG announced that its smart TVs will be equipped with the remote health platform Independa. 

And in just a few months (late November) came a follow-up: a product called Carepoint TV Kit 200L, in beta testing now. Powered by Amwell’s Converge platform, the product is aimed at helping clinicians more easily engage with patients amid healthcare’s workforce shortage crisis.

Also relevant/see:

Asynchronous Telemedicine Is Coming And Here Is Why It’s The Future Of Remote Care — from medicalfuturist.com by Dr. Bertalan Mesko

Excerpt:

Asynchronous telemedicine is one of those terms we will need to get used to in the coming years. Although it may sound alien, chances are you have been using some form of it for a while.

With the progress of digital health, especially due to the pandemic’s impact, remote care has become a popular approach in the healthcare setting. It can come in two forms: synchronous telemedicine and asynchronous telemedicine.

 

10 Must Read Books for Learning Designers — from linkedin.com by Amit Garg

Excerpt:

From the 45+ #books that I’ve read in the last 2 years, here are my top 10 recommendations for #learningdesigners or anyone in #learninganddevelopment

Speaking of recommended books (but from a more technical perspective this time), also see:

10 must-read tech books for 2023 — from enterprisersproject.com by Katie Sanders (Editorial Team)
Get new thinking on the technologies of tomorrow – from AI to cloud and edge – and the related challenges for leaders


 

NVIDIA Teams With Microsoft to Build Massive Cloud AI Computer — from nvidianews.nvidia.com
Tens of Thousands of NVIDIA GPUs, NVIDIA Quantum-2 InfiniBand and Full Stack of NVIDIA AI Software Coming to Azure; NVIDIA, Microsoft and Global Enterprises to Use Platform for Rapid, Cost-Effective AI Development and Deployment

Excerpt:

NVIDIA announced [on 11/16/22] a multi-year collaboration with Microsoft to build one of the most powerful AI supercomputers in the world, powered by Microsoft Azure’s advanced supercomputing infrastructure combined with NVIDIA GPUs, networking and full stack of AI software to help enterprises train, deploy and scale AI, including large, state-of-the-art models.

Addendum on 11/20/22:

 
 

Clio’s 2022 Legal Trends Report
Learn how lawyers are balancing the flexibility of hybrid work, and what clients look for when hiring a lawyer.

Also relevant/see:

Clio’s 2022 Legal Trends Report Finds Lawyers’ Business Growing But Fees Fail to Keep Pace — from lawnext.com by Bob Ambrogi

Top law schools have been slow to add women faculty members, research finds — from highereddive.com

Excerpt:

Law schools have increasingly sorted along gender lines, and the makeup of faculties has become a reflection of schools’ student population, according to preprint research published on the SSRN, an open access platform for early-stage research.

Five digital trends to watch in the legal tech sector — from information-age.com by Leanne Aldrich

Excerpt:

Technology is changing the legal sector. The UK government has recently announced that it is investing £4m to modernise the UK legal industry through its LawTechUK programme. The initiative is part of a drive to keep the UK at the global forefront of legal services.

ILTA’s Annual Technology Survey: Highlights — from legaltechnology.com

What does it take to be a legal technologist? — from legalfutures.co.uk

Excerpt:

When the first seeds of the legal technologist role were planted in the early 2010s, they took some time to germinate. A decade later, after a seemingly slow start, there has been an explosion of investment, awareness and new job opportunities in legal technology.

But as this new strand of the legal profession sets its roots deeper in the industry, what exactly does it take to be a legal technologist?

Shearman & Sterling Launches Legal Ops Service In Sign of the Times — from artificiallawyer.com

Excerpt:

In another sign of the changing times we are in, leading New York law firm Shearman & Sterling is formally launching a Legal Operations capability. The move follows fellow elite rival Cleary Gottlieb launching Cleary X, its innovation-focused legal delivery arm.

A decade ago many would not have expected New York’s top firms to be that bothered with anything other than high-end legal advisory and disputes work, but the legal world is evolving.

‘Legal Operations by Shearman’ will offer a range of services including legal tech help, data analytics, and in-house department design, but may work with ALSPs and other groups when it comes to CLM onboarding, with these other providers handling actual implementation and with Shearman focused on the bigger legal ops picture.

#legal #trends #legaltech #lawyers #law #lawschools

 

 

The real strength of weak ties — from news.stanford.edu; with thanks to Roberto Ferraro for this resource
A team of Stanford, MIT, and Harvard scientists finds “weaker ties” are more beneficial for job seekers on LinkedIn.

Excerpt:

A team of researchers from Stanford, MIT, Harvard, and LinkedIn recently conducted the largest experimental study to date on the impact of digital job sites on the labor market and found that weaker social connections have a greater beneficial effect on job mobility than stronger ties.

“A practical implication of the research is that it’s helpful to reach out to people beyond your immediate friends and colleagues when looking for a new job,” explained Erik Brynjolfsson, who is the Jerry Yang and Akiko Yamazaki Professor at Stanford University. “People with whom you have weaker ties are more likely to have information or connections that are useful and relevant.”

 

We must end ‘productivity paranoia’ on working from home says Microsoft — from inavateonthenet.net

Excerpt:

As part of a survey on hybrid working patterns of more than 20,000 people in 11 countries, Microsoft has called for an end to ‘productivity paranoia’, with 85% of business leaders still saying they find it difficult to have confidence in staff productivity when employees work remotely.

“Closing the feedback loop is key to retaining talent. Employees who feel their companies use employee feedback to drive change are more satisfied (90% vs. 69%) and engaged (89% vs. 73%) compared to those who believe their companies don’t drive change. And the employees who don’t think their companies drive change based on feedback? They’re more than twice as likely to consider leaving in the next year (16% vs. 7%) compared to those who do. And it’s not a one-way street. To build trust and participation in feedback systems, leaders should regularly share what they’re hearing, how they’re responding, and why.”

From DSC:
It seems to me that trust and motivation are highly involved here. Trust in one’s employees to do their jobs. And employees who aren’t producing and have low motivation levels should consider changing jobs/industries to find something that’s much more intrinsically motivating to them. Find a cause/organization that’s worth working for.

 

Top 5 Developments in Web 3.0 We Will See in the Next Five Years — from intelligenthq.com

Excerpt:

Today, websites have become highly engaging, and the internet is full of exciting experiences. Yet web 3.0 is coming with noteworthy trends and things to look out for.

Here are the top 5 developments in web 3.0 expected in the coming five years.

 

European telco giants collaborate on 5G-powered holographic videocalls — from inavateonthenet.net

Excerpt:

Some of Europe’s biggest telecoms operators have joined forces for a pilot project that aims to make holographic calls as simple and straightforward as a phone call.

Deutsche Telekom, Orange, Telefónica and Vodafone are working with holographic presence company Matsuko to develop an easy-to-use platform for immersive 3D experiences that could transform communications and the virtual events market.

Advances in connectivity, thanks to 5G and edge computing technology, allow smooth and natural movement of holograms and make the possibility of easy-to-access holographic calls a reality.

Top XR Vendors Majoring in Education for 2022 — from xrtoday.com

Excerpt:

Few things are more important than delivering the right education to individuals around the globe. Whether enlightening a new generation of young students, or empowering professionals in a complex business environment, learning is the key to building a better future.

In recent years, we’ve discovered just how powerful technology can be in delivering information to those who need it most. The cloud has paved the way for a new era of collaborative remote learning, while AI tools and automated systems are assisting educators in their tasks. XR has the potential to be one of the most disruptive new technologies in the educational space.

With Extended Reality technology, training professionals can deliver incredible experiences to students all over the globe, without the risks or resource requirements of traditional education. Today, we’re looking at just some of the major vendors leading the way to a future of immersive learning.

 


 

Top tools for learning 2022 — from toptools4learning.com by Jane Hart

Excerpt:

In fact, it has become clear that whilst 2021 was the year of experimentation – with an explosion of tools being used as people tried out new things – 2022 has been the year of consolidation, with people reverting to their trusty old favourites. Many of the tools that were knocked off their perches in 2021 have now recovered their lost ground this year.


Also somewhat relevant/see:


 

Radar Trends to Watch: August 2022 — from oreilly.com by Mike Loukides
Developments in Security, Quantum Computing, Energy, and More

Excerpt:

The large model train keeps rolling on. This month, we’ve seen the release of Bloom, an open, large language model developed by the BigScience collaboration, the first public access to DALL-E (along with a guide to prompt engineering), a Copilot-like model for generating regular expressions from English-language prompts, and Simon Willison’s experiments using GPT-3 to explain JavaScript code.

On other fronts, NIST has released the first proposed standard for post-quantum cryptography (i.e., cryptography that can’t be broken by quantum computers). CRISPR has been used in human trials to re-engineer a patient’s DNA to reduce cholesterol. And a surprising number of cities are paying high tech remote workers to move there.
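
From DSC:
The “English prompt in, regular expression out” item above is a nice example of how these generative tools get wired into ordinary code. Here is a rough, generic sketch (using GPT-3’s completion API rather than the specific Copilot-like model referenced): the model proposes a pattern, and plain Python then compiles and spot-checks it, because a generated regex should never be trusted blindly.

```python
# Illustrative sketch only: generate a regular expression from an English
# description with a completion model, then verify it with Python's re module.
import os
import re

import openai  # legacy openai-python (0.x) Completion API

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes an API key is configured

description = "match an ISO date such as 2022-12-21"

response = openai.Completion.create(
    model="text-davinci-003",  # illustrative model choice
    prompt=f"Write a single Python regular expression (pattern only) to {description}:",
    max_tokens=60,
    temperature=0,
)

pattern = response["choices"][0]["text"].strip()
print("proposed pattern:", pattern)

# Compile and test the generated pattern before using it anywhere.
try:
    rx = re.compile(pattern)
    print(bool(rx.search("Released on 2022-12-21")))  # expected: True
    print(bool(rx.search("no date here")))            # expected: False
except re.error as exc:
    print("model returned an invalid pattern:", exc)
```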

 

The Metaverse Will Reshape Our Lives. Let’s Make Sure It’s for the Better. — from time.com by Matthew Ball

Excerpts (emphasis DSC):

The metaverse, a 30-year-old term but nearly century-old idea, is forming around us. Every few decades, a platform shift occurs—such as that from mainframes to PCs and the internet, or the subsequent evolution to mobile and cloud computing. Once a new era has taken shape, it’s incredibly difficult to alter who leads it and how. But between eras, those very things usually do change. If we hope to build a better future, then we must be as aggressive about shaping it as are those who are investing to build it.

The next evolution to this trend seems likely to be a persistent and “living” virtual world that is not a window into our life (such as Instagram) nor a place where we communicate it (such as Gmail) but one in which we also exist—and in 3D (hence the focus on immersive VR headsets and avatars).

 

Inside a radical new project to democratize AI — from technologyreview.com by Melissa Heikkilä
A group of over 1,000 AI researchers has created a multilingual large language model bigger than GPT-3—and they’re giving it out for free.

Excerpt:

PARIS — This is as close as you can get to a rock concert in AI research. Inside the supercomputing center of the French National Center for Scientific Research, on the outskirts of Paris, rows and rows of what look like black fridges hum at a deafening 100 decibels.

They form part of a supercomputer that has spent 117 days gestating a new large language model (LLM) called BLOOM that its creators hope represents a radical departure from the way AI is usually developed.

Unlike other, more famous large language models such as OpenAI’s GPT-3 and Google’s LaMDA, BLOOM (which stands for BigScience Large Open-science Open-access Multilingual Language Model) is designed to be as transparent as possible, with researchers sharing details about the data it was trained on, the challenges in its development, and the way they evaluated its performance. OpenAI and Google have not shared their code or made their models available to the public, and external researchers have very little understanding of how these models are trained.
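
From DSC:
One practical consequence of that openness is that the BLOOM weights are published on the Hugging Face Hub, so anyone can download and query them. A minimal sketch, assuming the transformers library (with PyTorch) is installed and using one of the smaller published checkpoints – the full 176B-parameter bigscience/bloom model needs far more hardware:

```python
# Minimal sketch: run a small published BLOOM checkpoint locally with
# Hugging Face transformers. bloom-560m is a smaller sibling of the full
# 176B-parameter bigscience/bloom model and fits on an ordinary machine.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"  # one of the openly released checkpoints
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "BigScience's BLOOM model is designed to be"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```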

Another item re: AI:

Not my job: AI researchers building surveillance tech and deepfakes resist ethical concerns — from protocol.com by Kate Kaye
The computer vision research community is behind on AI ethics, but it’s not just a research problem. Practitioners say the ethics disconnect persists as young computer vision scientists make their way into the ranks of corporate AI.

For the first time, the Computer Vision and Pattern Recognition Conference — a global event that attracted companies including Amazon, Google, Microsoft and Tesla to recruit new AI talent this year — “strongly encouraged” researchers whose papers were accepted to the conference to include a discussion about potential negative societal impacts of their research in their submission forms.

 