Just 10% of law firms have a GenAI policy, new Thomson Reuters report shows — from legaltechnology.com by Caroline Hill

Just 10% of law firms and 21% of corporate legal teams have now implemented policies to guide their organisation’s use of generative AI, according to a report out today (2 December) from Thomson Reuters.

While Thomson Reuters’ 2024 Generative AI in Professional Services report shows that AI views among legal professionals are rapidly shifting (85% of law firms and corporate legal teams now think AI can be applied to their work), the report also shows that legal organisations have a way to go in setting the ground rules for AI use. Just 8% said that GenAI is covered under their existing technology policy, while 75% of firms said they don’t have a policy and 7% said they don’t know.


Artificial Lawyer’s 2025 Predictions – Part One — from artificiallawyer.com

I expect to see a lot more agentic AI systems in law: services that break complex tasks into component parts or checklists, complete them with a mix of software, lawyers, and allied legal professionals, then reassemble them into complex first drafts of legal work.

Ed Walters

Integrated Legal Ecosystems
The silos that have historically divided in-house legal teams, external counsel, and other stakeholders will evolve, thanks to the rise of integrated legal ecosystems. End-to-end platforms will provide seamless collaboration, real-time updates, and shared knowledge, enabling all parties to work more effectively toward common goals. Legal departments will turn to these unified solutions to streamline legal requests, offer self-service options, and improve service delivery.

Sacha Kirk

2025 will be the year of agentic AI. It will be the year that end users of legal services finally get legal work resolved for them autonomously.

Richard Mabey


AI Enhances the Human Art of In-House Counsel Leadership — from news.bloomberglaw.com by Eric Dodson Greenberg (behind a paywall)

Transactional lawyers stored prized forms in file drawers or consulted the law library’s bound volumes of forms. The arrival of databases revolutionized access to precedent, enabling faster and broader access to legal resources.

There is little nostalgia for the labor-intensive methods of the past and no argument that some arcane legal skill was lost in the transition. What we gained was increased capacity to focus on higher-order skills and more sophisticated, value-driven legal work. AI offers a similar leap, automating repetitive or foundational tasks to free lawyers for more sophisticated work.

 

US College Closures Are Expected to Soar, Fed Research Says — from bloomberg.com

  • Fed research created predictive model of college stress
  • Worst-case scenario forecasts 80 additional closures

The number of colleges that close each year is poised to significantly increase as schools contend with a slowdown in prospective students.

That’s the finding of a new working paper published by the Federal Reserve Bank of Philadelphia, where researchers created predictive models of schools’ financial distress using metrics like enrollment and staffing patterns, sources of revenue, and liquidity data. They overlaid those models with simulations to estimate the likely increase in future closures.

Excerpt from the working paper:

We document a high degree of missing data among colleges that eventually close and show that this is a key impediment to identifying at-risk institutions. We then show that modern machine learning techniques, combined with richer data, are far more effective at predicting college closures than linear probability models, and considerably more effective than existing accountability metrics. Our preferred model, which combines an off-the-shelf machine learning algorithm with the richest set of explanatory variables, can significantly improve predictive accuracy even for institutions with complete data, but is particularly helpful for predicting instances of financial distress for institutions with spotty data.


From DSC:
Questions that come to my mind here include:

  • Shouldn’t the public — especially the relevant parents and students — be made more aware of these types of papers and reports?
  • How would any of us like finishing up 1-3 years of school and then being told that our college or university was closing, effective immediately? (This has happened many times already.) And with the demographic cliff starting to hit higher education, it will happen even more often now.
    Adding insult to injury, when we transfer to different institutions, we’re often told that many of our prior credits don’t transfer, thus adding a significant amount to the overall cost of obtaining our degrees.
  • Wouldn’t we be absolutely furious to receive such communications from our prior — and new — colleges and universities?
  • Will all of these closures move more people to this vision here?

Relevant excerpts from Ray Schroeder’s recent articles out at insidehighered.com:

Winds of Change in Higher Ed to Become a Hurricane in 2025

A number of factors are converging to create a huge storm. Generative AI advances, massive federal policy shifts, broad societal and economic changes, and the demographic cliff combine to create uncertainty today and change tomorrow.

Higher Education in 2025: AGI Agents to Displace People

The anticipated enrollment cliff, reductions in federal and state funding, increased inflation, and dwindling public support for tuition increases will combine to put even greater pressure on university budgets.


On the positive side, completion rates have been improving:

National college completion rate ticks up to 61.1% — from highereddive.com by Natalie Schwartz
Those who started at two-year public colleges helped drive the overall increase in students completing a credential.

Dive Brief:

  • Completion rates ticked up to 61.1% for students who entered college in fall 2018, a 0.5 percentage-point increase compared to the previous cohort, according to data released Wednesday by the National Student Clearinghouse Research Center.
  • The increase marks the highest six-year completion rate since 2007, when the clearinghouse began tracking the data. The growth was driven by fewer students stopping out of college, as well as by completion gains among students who started at public two-year colleges.
  • “Higher completion rates are welcome news for colleges and universities still struggling to regain enrollment levels from before the pandemic,” Doug Shapiro, the research center’s executive director, said in a statement dated Wednesday.

Addendum:

Attention Please: Professors Struggle With Student Disengagement — from edsurge.com

The stakes are huge, because the concern is that maybe the social contract between students and professors is kind of breaking down. Do students believe that all this college lecturing is worth hearing? Or, will this moment force a change in the way college teaching is done?

 

How to Secure Your 2025 Legal Tech — from americanbar.org by Rachel Bailey

Summary

  • With firms increasingly open to AI tools, now is an exciting time to do some blue-sky thinking about your firm’s technology as a whole.
  • This is a chance for teams to envision the future of their firm’s technology landscape and make bold choices that align with long-term goals.
  • Learn six tips that will improve your odds of approval for your legal tech budget.

Also relevant, see:


Why Technology-Driven Law Firms Are Poised For Long-Term Success — from forbes.com by Daniel Farrar

Client expectations have shifted significantly in today’s technology-driven world. Quick communication and greater transparency are now a priority for clients throughout the entire case life cycle. This growing demand for tech-enhanced processes comes not only from clients but also from staff, and is set to rise even further as more advances become available.

I see the shift to cloud-based digital systems, especially for small and midsized law firms, as leveling the playing field by providing access to robust tools that can aid legal services. Here are some examples of how legal professionals are leveraging tech every day…




AI & Law Symposium: Students Exploring Innovation, Challenges, and Legal Implications of a Technological Revolution — from allard.ubc.ca

Artificial Intelligence (AI) has been rapidly deployed around the world in a growing number of sectors, offering unprecedented opportunities while raising profound legal and ethical questions. This symposium will explore the transformative power of AI, focusing on its benefits, limitations, and the legal challenges it poses.

AI’s ability to revolutionize sectors such as healthcare, law, and business holds immense potential, from improving efficiency and access to services, to providing new tools for analysis and decision-making. However, the deployment of AI also introduces significant risks, including bias, privacy concerns, and ethical dilemmas that challenge existing legal and regulatory frameworks. As AI technologies continue to evolve, it is crucial to assess their implications critically to ensure responsible and equitable development.


The role of legal teams in creating AI ethics guardrails — from legaldive.com by Catherine Dawson
For organizations to balance the benefits of artificial intelligence with its risk, it’s important for counsel to develop policy on data governance and privacy.


How Legal Aid and Tech Collaboration Can Bridge the Justice Gap — from law.com by Kelli Raker and Maya Markovich
“Technology, when thoughtfully developed and implemented, has the potential to expand access to legal services significantly,” write Kelli Raker and Maya Markovich.

Challenges and Concerns
Despite the potential benefits, legal aid organizations face several hurdles in working with new technologies:

1. Funding and incentives: Most funding for legal aid is tied to direct legal representation, leaving little room for investment in general case management or exploration of innovative service delivery methods to exponentially scale impact.

2. Jurisdictional inconsistency: The lack of a unified court system or standardized forms across regions makes it challenging to develop accurate and widely applicable tech solutions in certain types of matters.

3. Organizational capacity: Many legal aid organizations lack the time and resources to thoroughly evaluate new tech offerings or collaboration opportunities or identify internal workflows and areas of unmet need with the highest chance for impact.

4. Data privacy and security: Legal aid providers need assurance that tech protects client data and avoids misuse of sensitive information.

5. Ethical considerations: There’s significant concern about the accuracy of information produced by consumer-facing technology and the potential for inadvertent unauthorized practice of law.

 

Adapting to the future | Educause

Institutions are balancing capacity issues and rapid technological advancements—including artificial intelligence—while addressing a loss of trust in higher education.

To adapt to the future, technology and data leaders must work strategically to restore trust, prepare for policy updates, and plan for online education growth.



 

The Burden of Misunderstanding — from onedtech.philhillaa.com by Phil Hill
How ED’s outdated consumer-protection view of online education could lead to bureaucratic burden on every online course in US higher ed

Time to Comment
There are plenty of other points to be made on this proposed rule:

  • the lack of evidence supporting the treatment of online ed differently than f2f or hybrid;
  • the redefinition of regular and substantive interaction;
  • the impact of this simplification rule actually complicating matters for compliance; and
  • the risk of auto-withdrawal for 14-day inactivity periods, etc.

For now, I wanted to be more precise about what I believe is a misunderstood compliance burden in ED’s proposed rule, and about ED’s unwillingness to listen to feedback from the colleges and universities, and the associations representing them. While the details of this proposed rule might seem arcane, it will have a major impact across higher ed.

It is very important to note that we are in the middle of the public comment period for these proposed rules, and that ED should hear directly from colleges and universities about the impact of the proposed rules. You can comment here through next Friday (August 23rd).


From DSC:
Phil brings up numerous excellent points in the above posting. If the Department of Education’s (ED’s) proposed rules on online attendance taking are finalized, the impacts could be huge, and negative/costly in several areas. Faculty members, directors and staff of teaching and learning centers, directors of online programs, provosts and other members of administrations, plus other relevant staff should comment now, before the comment period ends next Friday (August 23rd).


 

What Students Want When It Comes To AI — from onedtech.philhillaa.com by Glenda Morgan
The Digital Education Council Global AI Student Survey 2024

The Digital Education Council (DEC) this week released the results of a global survey of student opinions on AI. It’s a large survey with nearly 4,000 respondents conducted across 16 countries, but more importantly, it asks some interesting questions. There are many surveys about AI out there right now, but this one stands out. I’m going to go into some depth here, as the entire survey report is worth reading.


AI is forcing a teaching and learning evolution — from eschoolnews.com by Laura Ascione
AI and technology tools are leading to innovative student learning, along with classroom, school, and district efficiency

Key findings from the 2024 K-12 Educator + AI Survey, which was conducted by Hanover Research, include:

  • Teachers are using AI to personalize and improve student learning, not just to run classrooms more efficiently, though challenges remain
  • While post-pandemic challenges persist, the increased use of technology is viewed positively by most teachers and administrators
  • …and more

From DSC:
I wonder…how will the use of AI in education square with the issues of using smartphones/laptops within the classrooms? See:

  • Why Schools Are Racing to Ban Student Phones — from nytimes.com by Natasha Singer; via GSV
    As the new school year starts, a wave of new laws that aim to curb distracted learning is taking effect in Indiana, Louisiana and other states.

A three-part series from Dr. Phillippa Hardman:

Part 1: Writing Learning Objectives  
The Results Part 1: Writing Learning Objectives

In this week’s post I will dive into the results from task 1: writing learning objectives. Stay tuned over the next two weeks to see all of the results.

Part 2: Selecting Instructional Strategies
The Results Part 2: Selecting an Instructional Strategy

Welcome back to our three-part series exploring the impact of AI on instructional design.

This week, we’re tackling a second task and a crucial aspect of instructional design: selecting instructional strategies. The ability to select appropriate instructional strategies to achieve intended objectives is a mission-critical skill for any instructional designer. So, can AI help us do a good job of it? Let’s find out!

Part 3: How Close is AI to Replacing Instructional Designers?
The Results Part 3: Creating a Course Outline

Today, we’re diving into what many consider to be the role-defining task of the instructional designer: creating a course design outline.


ChatGPT Cheat Sheet for Instructional Designers! — from Alexandra Choy Youatt EdD

Instructional Designers!
Whether you’re new to the field or a seasoned expert, this comprehensive guide will help you leverage AI to create more engaging and effective learning experiences.

What’s Inside?
Roles and Tasks: Tailored prompts for various instructional design roles and tasks.
Formats: Different formats to present your work, from training plans to rubrics.
Learning Models: Guidance on using the ADDIE model and various pedagogical strategies.
Engagement Tips: Techniques for online engagement and collaboration.
Specific Tips: Industry certifications, work-based learning, safety protocols, and more.

Who Can Benefit?
Corporate Trainers
Curriculum Developers
E-Learning Specialists
Instructional Technologists
Learning Experience Designers
And many more!

ChatGPT Cheat Sheet | Instructional Designer


5 AI Tools I Use Every Day (as a Busy Student) — from theaigirl.substack.com by Diana Dovgopol
AI tools that I use every day to boost my productivity.
#1 Gamma
#2 Perplexity
#3 Cockatoo

I use this AI tool almost every day as well. Since I’m still a master’s student at university, I have to attend lectures and seminars, which are always in English or German, neither of which is my native language. With the help of Cockatoo, I create transcripts of the lectures and/or translations into my language. This means I don’t have to take notes in class and then manually translate them afterward. All I need to do is record the lecture audio on any device or directly in Cockatoo and upload it, and the audio and text are ready for me.

…and more


Students Worry Overemphasis on AI Could Devalue Education — from insidehighered.com by Juliette Rowsell
Report stresses that AI is “new standard” and universities need to better communicate policies to learners.

Rising use of AI in higher education could cause students to question the quality and value of education they receive, a report warns.

This year’s Digital Education Council Global AI Student Survey, of more than 3,800 students from 16 countries, found that more than half (55 percent) believed overuse of AI within teaching devalued education, and 52 percent said it negatively impacted their academic performance.

Despite this, significant numbers of students admitted to using such technology. Some 86 percent said they “regularly” used programs such as ChatGPT in their studies, 54 percent said they used it on a weekly basis, and 24 percent said they used it to write a first draft of a submission.

Higher Ed Leadership Is Excited About AI – But Investment Is Lacking — from forbes.com by Vinay Bhaskara

As corporate America races to integrate AI into its core operations, higher education finds itself in a precarious position. I conducted a survey of 63 university leaders, which revealed that while higher ed leaders recognize AI’s transformative potential, they’re struggling to turn that recognition into action.

This struggle is familiar for higher education — gifted with the mission of educating America’s youth but plagued with a myriad of operational and financial struggles, higher ed institutions often lag behind their corporate peers in technology adoption. In recent years, this gap has become threateningly large. In an era of declining enrollments and shifting demographics, closing this gap could be key to institutional survival and success.

The survey results paint a clear picture: 86% of higher ed leaders see AI as a “massive opportunity,” yet only 21% believe their institutions are prepared for it. This disconnect isn’t a minor inconsistency; it’s a strategic vulnerability in an era of declining enrollments and shifting demographics.


(Generative) AI Isn’t Going Anywhere but Up — from stefanbauschard.substack.com by Stefan Bauschard
“Hype” claims are nonsense.

There has been a lot of talk recently about an “AI Bubble.” Supposedly, the industry, or at least the generative AI subset of it, will collapse; this is known as the “Generative AI Bubble.” A bubble — a broad one or a generative one — is nonsense. These are the reasons we will continue to see massive growth in AI.


AI Readiness: Prepare Your Workforce to Embrace the Future — from learningguild.com by Danielle Wallace

Artificial Intelligence (AI) is revolutionizing industries, enhancing efficiency, and unlocking new opportunities. To thrive in this landscape, organizations need to be ready to embrace AI not just technologically but also culturally.

Learning leaders play a crucial role in preparing employees to adapt and excel in an AI-driven workplace. Transforming into an AI-empowered organization requires more than just technological adoption; it demands a shift in organizational mindset. This guide delves into how learning leaders can support this transition by fostering the right mindset attributes in employees.


Claude AI for eLearning Developers — from learningguild.com by Bill Brandon

Claude is fast, produces grammatically correct text, and outputs easy-to-read articles, emails, blog posts, summaries, and analyses. Take some time to try it out. If you worry about plagiarism and text scraping, put the results through Grammarly’s plagiarism checker (I did not use Claude for this article, but I did send the text through Grammarly).


Survey: Top Teacher Uses of AI in the Classroom — from thejournal.com by Rhea Kelly

A new report from Cambium Learning Group outlines the top ways educators are using artificial intelligence to manage their classrooms and support student learning. Conducted by Hanover Research, the 2024 K-12 Educator + AI Survey polled 482 teachers and administrators at schools and districts that are actively using AI in the classroom.

More than half of survey respondents (56%) reported that they are leveraging AI to create personalized learning experiences for students. Other uses included providing real-time performance tracking and feedback (cited by 52% of respondents), helping students with critical thinking skills (50%), proofreading writing (47%), and lesson planning (44%).

On the administrator side, top uses of AI included interpreting/analyzing student data (61%), managing student records (56%), and managing professional development (56%).


Addendum on 8/14/24:

 


The race against time to reinvent lawyers — from jordanfurlong.substack.com by Jordan Furlong
Our legal education and licensing systems produce one kind of lawyer. The legal market of the near future will need another kind. If we can’t close this gap fast, we’ll have a very serious problem.

Excerpt (emphasis DSC):

Lawyers will still need competencies like legal reasoning and analysis, statutory and contractual interpretation, and a range of basic legal knowledge. But it’s unhelpful to develop these skills through activities that lawyers won’t be performing much longer, while neglecting to provide them with other skills and prepare them for other situations that they will face. Our legal education and licensing systems are turning out lawyers whose competence profiles simply won’t match up with what people will need lawyers to do.

A good illustration of what I mean can be found in an excellent recent podcast from the Practising Law Institute, “Shaping the Law Firm Associate of the Future.” Over the course of the episode, moderator Jennifer Leonard of Creative Lawyers asked Professors Alice Armitage of UC Law San Francisco and Heidi K. Brown of New York Law School to identify some of the competencies that newly called lawyers and law firm associates are going to need in future. Here’s some of what they came up with:

  • Agile, nimble, extrapolative thinking
  • Collaborative, cross-disciplinary learning
  • Entrepreneurial, end-user-focused mindsets
  • Generative AI knowledge (“Their careers will be shaped by it”)
  • Identifying your optimal individual workflow
  • Iteration, learning by doing, and openness to failure
  • Leadership and interpersonal communication skills
  • Legal business know-how, including client standards and partner expectations
  • Receiving and giving feedback to enhance effectiveness

Legal Tech for Legal Departments – What In-House Lawyers Need to Know — from legal.thomsonreuters.com by Sterling Miller

Whatever the reason, you must understand the problem inside and out. Here are the key points to understanding your use case:

  • Identify the problem.
  • What is the current manual process to solve the problem?
  • Is there technology that will replace this manual process and solve the problem?
  • What will it cost and do you have (or can you get) the budget?
  • Will the benefits of the technology outweigh the cost? And how soon will those benefits pay off the cost? In other words, what is the return on investment?
  • Do you have the support of the organization to buy it (inside the legal department and elsewhere, e.g., CFO, CTO)?
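The cost and ROI questions in that checklist come down to simple arithmetic: annualize the hours the tool saves, price them at a loaded staff rate, and compare against the license cost. A minimal sketch, with every figure hypothetical rather than drawn from the article:

```python
# Hypothetical figures for illustration only.
annual_license_cost = 30_000   # subscription + implementation, per year
hours_saved_per_week = 10      # manual work the tool would eliminate
blended_hourly_rate = 75       # loaded cost of the staff doing that work
weeks_per_year = 48

# Annual value of the time saved.
annual_benefit = hours_saved_per_week * blended_hourly_rate * weeks_per_year

# Return on investment and months until the tool pays for itself.
roi = (annual_benefit - annual_license_cost) / annual_license_cost
payback_months = annual_license_cost / (annual_benefit / 12)

print(f"Annual benefit: ${annual_benefit:,}")   # $36,000
print(f"ROI: {roi:.0%}")                        # 20%
print(f"Payback: {payback_months:.1f} months")  # 10.0 months
```

If the payback period runs past the contract term, or the ROI depends on every saved hour being redeployed to billable work, that weakens the case you bring to the CFO.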

2024-05-13: Of Legal AI — from emergentbehavior.co

Long discussion with a senior partner at a major Bay Area law firm:

Takeaways

A) They expect legal AI to decimate the profession…
B) Unimpressed by most specific legal AI offerings…
C) Generative AI error rates are acceptable even at 10–20%…
D) The future of corporate law is in-house…
E) The future of law in general?…
F) Of one large legal AI player…


2024 Legal Technology Survey Results — from lexology.com

Additional findings of the annual survey include:

  • 77 percent of firms have a formal technology strategy in place
  • Interest and intentions regarding generative A.I. remain high, with almost 80 percent of participating firms expecting to leverage it within the next five years. Many have either already begun or are planning to undertake data hygiene projects as a precursor to using generative A.I. and other automation solutions. Although legal market analysts have hypothesized that proprietary development of generative A.I. solutions remains out of reach for mid-sized firms, several Meritas survey respondents are gaining traction. Many other firms are also licensing third-party generative A.I. solutions.
  • The survey showed strong technology progression among several Meritas member firms, with most adopting a tech stack of core, foundational systems of infrastructure technology and adding cloud-based practice management, document management, time, billing, and document drafting applications.
  • Most firms reported increased adoption and utilization of options already available within their current core systems, such as Microsoft Office 365 Teams, SharePoint, document automation, and other native functionalities for increasing efficiencies; these functions were used more often in place of dedicated purpose-built solutions such as comparison and proofreading tools.
  • The legal technology market serving Meritas’ member firms continues to be fractured, with very few providers emerging as market leaders.

AI Set to Save Professionals 12 Hours Per Week by 2029 — from legalitprofessionals.com

Thomson Reuters, a global content and technology company, today released its 2024 Future of Professionals report, an annual survey of more than 2,200 professionals working across legal, tax, and risk & compliance fields globally. Respondents predicted that artificial intelligence (AI) has the potential to save them 12 hours per week in the next five years, or four hours per week over the upcoming year, equating to 200 hours annually.

This timesaving potential is the equivalent productivity boost of adding an extra colleague for every 10 team members on staff. Harnessing the power of AI across various professions opens immense economic opportunities. For a U.S. lawyer, this could translate to an estimated $100,000 in additional billable hours.*
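The report’s headline numbers can be sanity-checked with quick arithmetic; a sketch, where the $500/hour blended billable rate is my assumption for illustration, not a figure stated in the excerpt:

```python
# Report figure: 4 hours saved per person per week in the upcoming year.
hours_saved_per_week = 4
weeks_per_year = 50
annual_hours_saved = hours_saved_per_week * weeks_per_year  # 200 hours/year

# "An extra colleague for every 10 team members": 4 hrs x 10 people
# is roughly one full-time workweek recovered.
team_size = 10
weekly_team_hours_saved = hours_saved_per_week * team_size  # 40 hours

# Hypothetical blended billable rate for a U.S. lawyer.
assumed_rate = 500  # $/hour
added_billables = annual_hours_saved * assumed_rate

print(annual_hours_saved)       # 200
print(weekly_team_hours_saved)  # 40
print(f"${added_billables:,}")  # $100,000
```

The $100,000 figure only holds if all 200 recovered hours convert to billable work at that rate, which is the optimistic end of the assumption.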

 

From DSC:
I realize I lose a lot of readers on this Learning Ecosystems blog because I choose to talk about my faith and integrate scripture into these postings. So I have stayed silent on matters of politics — as I’ve been hesitant to lose even more people. But I can no longer stay silent re: Donald Trump.

I, too, fear for our democracy if Donald Trump becomes our next President. He is dangerous to our democracy.

Also, I can see now how Hitler came to power.

And look out other countries that Trump doesn’t like. He is dangerous to you as well.

He doesn’t care about the people of the United States (nor any other nation). He cares only about himself and gaining power. Look out if he becomes our next president. 


From Stefan Bauschard:

Unlimited Presidential power. According to Trump vs the US, the “President may not be prosecuted for exercising his core constitutional powers, and he is entitled to at least presumptive immunity from prosecution for his official acts.” Justice Sotomayor says this makes the President a “king.” This power + surveillance + AGI/autonomous weapons mean the President is now the most powerful king in the history of the world.

Democracy is only 200 years old.

 

6 Ways State Policymakers Can Build More Future-Focused Education Systems — from gettingsmart.com by Jennifer Kabaker

Key Points

  • Guided by a vision – often captured as a Portrait of a Graduate – co-constructed with local leaders, community members, students, and families, state policymakers can develop policies that equitably and effectively support students and educators in transforming learning experiences.
  • The Aurora Institute highlights the importance of collaborative efforts in creating education systems that truly meet the diverse needs of every student.

The Aurora Institute has spent years working with states looking to advance competency-based systems, and has identified a set of key state policy levers that policymakers can put into action to build more personalized and competency-based systems. These shifts should be guided by a vision–co-constructed with local leaders, community members, students, and families–for what students need to know and be able to do upon graduating.


Career Pathways In A Rapidly Changing World: US Career Pathways Story — from gettingsmart.com by Paul Herdman

Key Points

  • There has been a move away from the traditional “Bachelor’s or Bust” mentality towards recognizing the value of diverse career pathways that may not necessarily require a four-year degree.
  • Local entities such as states, school districts, and private organizations have played a crucial role in implementing and scaling up career pathways programs.

While much has been written on this topic (see resources below), this post, in the context of our OECD study of five Anglophone countries, will attempt to provide a backdrop on what was happening at the federal level in the U.S. over the last several decades to help catalyze this shift in career pathways, and offer a snapshot of how this work is evolving in two very different states: Delaware and Texas.


U.S. public, private and charter schools in 5 charts — from pewresearch.org by Katherine Schaeffer

 


Information Age vs Generation Age Technologies for Learning — from opencontent.org by David Wiley

Remember (emphasis DSC)

  • the internet eliminated time and place as barriers to education, and
  • generative AI eliminates access to expertise as a barrier to education.

Just as instructional designs had to be updated to account for all the changes in affordances of online learning, they will need to be dramatically updated again to account for the new affordances of generative AI.


The Curious Educator’s Guide to AI | Strategies and Exercises for Meaningful Use in Higher Ed  — from ecampusontario.pressbooks.pub by Kyle Mackie and Erin Aspenlieder; via Stephen Downes

This guide is designed to help educators and researchers better understand the evolving role of Artificial Intelligence (AI) in higher education. This openly-licensed resource contains strategies and exercises to help foster an understanding of AI’s potential benefits and challenges. We start with a foundational approach, providing you with prompts on aligning AI with your curiosities and goals.

The middle section of this guide encourages you to explore AI tools and offers some insights into potential applications in teaching and research. Along with exposure to the tools, we’ll discuss when and how to effectively build AI into your practice.

The final section of this guide includes strategies for evaluating and reflecting on your use of AI. Throughout, we aim to promote use that is effective, responsible, and aligned with your educational objectives. We hope this resource will be a helpful guide in making informed and strategic decisions about using AI-powered tools to enhance teaching and learning and research.


Annual Provosts’ Survey Shows Need for AI Policies, Worries Over Campus Speech — from insidehighered.com by Ryan Quinn
Many institutions are not yet prepared to help their faculty members and students navigate artificial intelligence. That’s just one of multiple findings from Inside Higher Ed’s annual survey of chief academic officers.

Only about one in seven provosts said their colleges or universities had reviewed the curriculum to ensure it will prepare students for AI in their careers. Thuswaldner said that number needs to rise. “AI is here to stay, and we cannot put our heads in the sand,” he said. “Our world will be completely dominated by AI and, at this point, we ain’t seen nothing yet.”


Is GenAI in education more of a Blackberry or iPhone? — from futureofbeinghuman.com by Andrew Maynard
There’s been a rush to incorporate generative AI into every aspect of education, from K-12 to university courses. But is the technology mature enough to support the tools that rely on it?

In other words, it’s going to mean investing in concepts, not products.

This, to me, is at the heart of an “iPhone mindset” as opposed to a “Blackberry mindset” when it comes to AI in education — an approach that avoids hard wiring in constantly changing technologies, and that builds experimentation and innovation into the very DNA of learning.

For all my concerns here though, maybe there is something to being inspired by the Blackberry/iPhone analogy — not as a playbook for developing and using AI in education, but as a mindset that embraces innovation while avoiding becoming locked in to apps that are detrimentally unreliable and that ultimately lead to dead ends.


Do teachers spot AI? Evaluating the detectability of AI-generated texts among student essays — from sciencedirect.com by Johanna Fleckenstein, Jennifer Meyer, Thorben Jansen, Stefan D. Keller, Olaf Köller, and Jens Möller

Highlights

  • Randomized-controlled experiments investigating novice and experienced teachers’ ability to identify AI-generated texts.
  • Generative AI can simulate student essay writing in a way that is undetectable for teachers.
  • Teachers are overconfident in their source identification.
  • AI-generated essays tend to be assessed more positively than student-written texts.

Can Using a Grammar Checker Set Off AI-Detection Software? — from edsurge.com by Jeffrey R. Young
A college student says she was falsely accused of cheating, and her story has gone viral. Where is the line between acceptable help and cheating with AI?


Use artificial intelligence to get your students thinking critically — from timeshighereducation.com by Urbi Ghosh
When crafting online courses, teaching critical thinking skills is crucial. Urbi Ghosh shows how generative AI can shape the way educators approach this


ChatGPT shaming is a thing – and it shouldn’t be — from futureofbeinghuman.com by Andrew Maynard
There’s a growing tension between early and creative adopters of text based generative AI and those who equate its use with cheating. And when this leads to shaming, it’s a problem.

Excerpt (emphasis DSC):

This will sound familiar to anyone who’s incorporating generative AI into their professional workflows. But there are still many people who haven’t used apps like ChatGPT, are largely unaware of what they do, and are suspicious of them. And yet they’ve nevertheless developed strong opinions around how they should and should not be used.

From DSC:
Yes…that sounds like how many faculty members viewed online learning, even though they had never taught online before.


Shares of two big online education stocks tank more than 10% as students use ChatGPT — from cnbc.com by Michelle Fox; via Robert Gibson on LinkedIn

The rapid rise of artificial intelligence appears to be taking a toll on the shares of online education companies Chegg and Coursera.

Both stocks sank by more than 10% on Tuesday after issuing disappointing guidance in part because of students using AI tools such as ChatGPT from OpenAI.



Synthetic Video & AI Professors — from drphilippahardman.substack.com by Dr. Philippa Hardman
Are we witnessing the emergence of a new, post-AI model of async online learning?

TLDR: by effectively tailoring the learning experience to the learner’s comprehension levels and preferred learning modes, AI can enhance the overall learning experience, leading to increased “stickiness” and higher rates of performance in assessments.

TLDR: AI enables us to scale responsive, personalised “always on” feedback and support in a way that might help to solve one of the most wicked problems of online async learning – isolation and, as a result, disengagement.

In the last year we have also seen the rise of an unprecedented number of “always on” AI tutors, built to provide coaching and feedback when and how learners need it.

Perhaps the most well-known example is Khan Academy’s Khanmigo and its GPT sidekick Tutor Me. We’re also seeing similar tools emerge in K12 and Higher Ed where AI is being used to extend the support and feedback provided for students beyond the physical classroom.


Our Guidance on School AI Guidance document has been updated — from stefanbauschard.substack.com by Stefan Bauschard

We’ve updated the free 72-page document we wrote to help schools design their own AI guidance policies.

There are a few key updates.

  1. Inclusion of Oklahoma and significant updates from North Carolina and Washington.
  2. More specifics on implementation — thanks NC and WA!
  3. A bit more on instructional redesign. Thanks to NC for getting this party started!

Creating a Culture Around AI: Thoughts and Decision-Making — from er.educause.edu by Courtney Plotts and Lorna Gonzalez

Given the potential ramifications of artificial intelligence (AI) diffusion on matters of diversity, equity, inclusion, and accessibility, now is the time for higher education institutions to adopt culturally aware, analytical decision-making processes, policies, and practices around AI tools selection and use.


Are we ready to navigate the complex ethics of advanced AI assistants? — from futureofbeinghuman.com by Andrew Maynard
An important new paper lays out the importance and complexities of ensuring increasingly advanced AI-based assistants are developed and used responsibly

Last week a behemoth of a paper was released by AI researchers in academia and industry on the ethics of advanced AI assistants.

It’s one of the most comprehensive and thoughtful papers on developing transformative AI capabilities in socially responsible ways that I’ve read in a while. And it’s essential reading for anyone developing and deploying AI-based systems that act as assistants or agents — including many of the AI apps and platforms that are currently being explored in business, government, and education.

The paper — The Ethics of Advanced AI Assistants — is written by 57 co-authors representing researchers at Google DeepMind, Google Research, Jigsaw, and a number of prominent universities, including the University of Edinburgh, the University of Oxford, and Delft University of Technology. Coming in at 274 pages, this is a massive piece of work. And as the authors persuasively argue, it’s a critically important one at this point in AI development.

From that large paper:

Key questions for the ethical and societal analysis of advanced AI assistants include:

  1. What is an advanced AI assistant? How does an AI assistant differ from other kinds of AI technology?
  2. What capabilities would an advanced AI assistant have? How capable could these assistants be?
  3. What is a good AI assistant? Are there certain values that we want advanced AI assistants to evidence across all contexts?
  4. Are there limits on what AI assistants should be allowed to do? If so, how are these limits determined?
  5. What should an AI assistant be aligned with? With user instructions, preferences, interests, values, well-being or something else?
  6. What issues need to be addressed for AI assistants to be safe? What does safety mean for this class of technologies?
  7. What new forms of persuasion might advanced AI assistants be capable of? How can we ensure that users remain appropriately in control of the technology?
  8. How can people – especially vulnerable users – be protected from AI manipulation and unwanted disclosure of personal information?
  9. Is anthropomorphism for AI assistants morally problematic? If so, might it still be permissible under certain conditions?

Beyond the Hype: Taking a 50 Year Lens to the Impact of AI on Learning — from nafez.substack.com by Nafez Dakkak and Chris Dede
How do we make sure LLMs are not “digital duct tape”?

[Per Chris Dede] We often think of the product of teaching as the outcome (e.g. an essay, a drawing, etc.). The essence of education, in my view, lies not in the products or outcomes of learning but in the journey itself. The artifact is just a symbol that you’ve taken the journey.

The process of learning — the exploration, challenges, and personal growth that occur along the way — is where the real value lies. For instance, the act of writing an essay is valuable not merely for the final product but for the intellectual journey it represents. It forces you to improve and organize your thinking on a subject.

This distinction becomes important with the rise of generative AI, because it uniquely allows us to produce these artifacts without taking the journey.

As I’ve argued previously, I am worried that all this hype around LLMs renders them a “type of digital duct-tape to hold together an obsolete industrial-era educational system”. 


Speaking of AI in our learning ecosystems, also see:


On Building an AI Policy for Teaching & Learning — by Lance Eaton
How students drove the development of a policy for students and faculty

Well, last month, the policy was finally approved by our Faculty Curriculum Committee and we can finally share the final version: AI Usage Policy. College Unbound also created (all-human, no AI used) a press release with the policy and some of the details.

To ensure you see this:

  • Usage Guidelines for AI Generative Tools at College Unbound
    These guidelines were created and reviewed by College Unbound students in Spring 2023 with the support of Lance Eaton, Director of Faculty Development & Innovation.  The students include S. Fast, K. Linder-Bey, Veronica Machado, Erica Maddox, Suleima L., Lora Roy.

ChatGPT hallucinates fake but plausible scientific citations at a staggering rate, study finds — from psypost.org by Eric W. Dolan

A recent study has found that scientific citations generated by ChatGPT often do not correspond to real academic work. The study, published in the Canadian Psychological Association’s Mind Pad, found that “false citation rates” across various psychology subfields ranged from 6% to 60%. Surprisingly, these fabricated citations feature elements such as legitimate researchers’ names and properly formatted digital object identifiers (DOIs), which could easily mislead both students and researchers.

MacDonald found that a total of 32.3% of the 300 citations generated by ChatGPT were hallucinated. Despite being fabricated, these hallucinated citations were constructed with elements that appeared legitimate — such as real authors who are recognized in their respective fields, properly formatted DOIs, and references to legitimate peer-reviewed journals.


Addressing equity and ethics in artificial intelligence — from apa.org by Zara Abrams
Algorithms and humans both contribute to bias in AI, but AI may also hold the power to correct or reverse inequities among humans

“The conversation about AI bias is broadening,” said psychologist Tara Behrend, PhD, a professor at Michigan State University’s School of Human Resources and Labor Relations who studies human-technology interaction and spoke at CES about AI and privacy. “Agencies and various academic stakeholders are really taking the role of psychology seriously.”


NY State Bar Association Joins Florida and California on AI Ethics Guidance – Suggests Some Surprising Implications — from natlawreview.com by James G. Gatto

The NY State Bar Association (NYSBA) Task Force on Artificial Intelligence has issued a nearly 80-page report (Report) and recommendations on the legal, social, and ethical impact of artificial intelligence (AI) and generative AI on the legal profession. The Report reviews AI-based software, generative AI technology, and other machine learning tools that may enhance the profession but also pose risks: individual attorneys may not fully understand new, unfamiliar technology, and courts have raised concerns about the integrity of the judicial process. It also makes recommendations for NYSBA adoption, including proposed guidelines for responsible AI use. This is perhaps the most comprehensive report to date by a state bar association, and it is likely to stimulate much discussion.

For those of you who want the “Cliff Notes” version of this report, here is a table that summarizes by topic the various rules mentioned and a concise summary of the associated guidance.

The Report includes four primary recommendations:



The University Student’s Guide To Ethical AI Use — from studocu.com; with thanks to Jervise Penton at 6XD Media Group for this resource

This comprehensive guide offers:

  • Up-to-date statistics on the current state of AI in universities, how institutions and students are currently using artificial intelligence
  • An overview of popular AI tools used in universities and their limitations as study tools
  • Tips on how to ethically use AI and how to maximize its capabilities for students
  • Existing punishments and penalties for cheating with AI
  • A checklist of questions to ask yourself, before, during, and after an assignment to ensure ethical use

Some of the key facts you might find interesting are:

  • The total value of AI in education is estimated to reach $53.68 billion by the end of 2032.
  • 68% of students say using AI has impacted their academic performance positively.
  • Educators using AI tools say the technology helps speed up their grading process by as much as 75%.
© 2024 | Daniel Christian