Students can use AI on applications, Arizona State law school says — from reuters.com by Sara Merken

July 28 (Reuters) – A week after The University of Michigan Law School banned the use of popular artificial intelligence tools like ChatGPT on student applications, at least one school is going in the other direction.

The Sandra Day O’Connor College of Law at Arizona State University said on Thursday that prospective students are explicitly allowed to use generative artificial intelligence tools to help draft their applications.

 

The future of learning and skilling with AI in the picture — from chieflearningofficer.com by Janice Burns
Janice Burns, chief transformation officer at Degreed, looks at how AI is impacting the future of learning and skilling.

Sections include:

  • Saving L&D time
  • Recommending and personalizing
  • ‘As you need it’ learning 
  • A career coach for everyone?
  • More advances coming
  • Be mindful of the limitations
  • Remain open to the changes coming

Also relevant/see:


Who Will Train Digital (Legal) Talent At Scale? — from forbes.com by Mark A. Cohen

Excerpt (emphasis DSC):

The urgency to fill existing and prospective positions with digital talent and to upskill those already in the workforce is among the reasons why leading companies have boldly assessed and transformed their enterprise talent management strategies. Some key initiatives leading companies are undertaking include:

  • Direct involvement by the C-Suite in the formulation of the enterprise talent strategy and lifecycle;
  • A paradigmatic hiring shift from diplomas to skills;
  • Increased investment in upskilling and career advancement to promote retention and to identify high performers early on;
  • Targeted collaboration with universities focused on training in areas of existing and projected talent supply and demand;
  • Promoting a learning-for-life mindset, encouraging creative thinking and cross-cultural collaboration, and forging a culture that values these and other humanistic values;
  • Collaborating with other companies to create joint solutions for fulfilling skill demand.

Practical, powerful employee education: How interactivity supports greater learning online — from chieflearningofficer.com by Natasha Nicholson

Consider this comparison: In more passive online learning, a participant will learn primarily by listening, watching and observing. Conversely, in an interactive model, the participant will be expected to engage with a story or situation by being asked to make choices that will show potential consequences.

Here are some of the elements that, when combined, make interactive learning especially effective:

 

Navigating the Future of Learning in a Digitally-Disrupted World — from thinklearningstudio.org by Russell Cailey

Are we on the frontier of unveiling an unseen revolution in education? The hypothesis is that this quiet upheaval’s importance is far more significant than we imagine. As our world adjusts, restructures, and emerges from a year which launched an era of mass AI, so too does a new academic year dawn for many – with hope and enthusiasm about new roles, titles, or simply just a new mindset. Concealed from sight, however, I believe a significant transformative wave has started and will begin to reshape our education systems and push us into a new stage of innovative teaching practice whether we desire it or not. The risk and hope is that the quiet revolution remains outside the regulator’s and ministries’ purview, which could risk a dangerous fragmentation of education policy and practice, divorced from the actualities of the world ‘in and outside school’.

“This goal can be achieved through continued support for introducing more new areas of study, such as ‘foresight and futures’, in the high school classroom.”


Four directions for assessment redesign in the age of generative AI — from timeshighereducation.com by Julia Chen
The rise of generative AI has led universities to rethink how learning is quantified. Julia Chen offers four options for assessment redesign that can be applied across disciplines

Direction 1: From written description to multimodal explanation and application

Direction 2: From literature review alone to referencing lectures

Direction 3: From presentation of ideas to defence of views

Direction 4: From working alone to student-staff partnership




15 Inspirational Voices in the Space Between AI and Education — from jeppestricker.substack.com by Jeppe Klitgaard Stricker
Get Inspired for AI and The Future of Education.

If you are just back from vacation and still not quite sure what to do about AI, let me assure you that you are not the only one. My advice for you today is this: fill your LinkedIn feed and/or inbox with ideas, inspirational writing and commentary on AI. This will get you up to speed quickly and is a great way to stay informed on the newest developments you need to be aware of.

My personal recommendation for you is to check out these bright people who are all very active on LinkedIn and/or have a newsletter worth paying attention to. I have kept the list fairly short – only 15 people – in order to make it as easy as possible for you to begin exploring.


Universities say AI cheats can’t be beaten, moving away from attempts to block AI (Australia) — from abc.net.au by Jake Evans

Key points:

  • Universities have warned against banning AI technologies in academia
  • Several say AI cheating in tests will be too difficult to stop, and it is more practical to change assessment methods
  • The sector says the entire nature of teaching will have to change to ensure students continue to effectively learn

aieducator.tools


Navigating A World of Generative AI: Suggestions for Educators — from nextlevellab.gse.harvard.edu by Lydia Cao and Chris Dede

Understanding the nature of generative AI is crucial for educators to navigate the evolving landscape of teaching and learning. In a new report from the Next Level Lab, Lydia Cao and Chris Dede reflect on the role of generative AI in learning and how this pushes us to reconceptualize our visions of effective education. Though there are concerns of plagiarism and replacement of human jobs, Cao and Dede argue that a more productive way forward is for educators to focus on demystifying AI, emphasizing the learning process over the final product, honoring learner agency, orchestrating multiple sources of motivation, cultivating skills that AI cannot easily replicate, and fostering intelligence augmentation (IA) through building human-AI partnerships.



20 CHATGPT PROMPTS FOR ELA TEACHERS — from classtechtips.com by Dr. Monica Burns

Have you used chatbots to save time this school year? ChatGPT and generative artificial intelligence (AI) have changed the way I think about instructional planning. Today on the blog, I have a selection of ChatGPT prompts for ELA teachers.

You can use chatbots to tackle tedious tasks, gather ideas, and even support your work to meet the needs of every student. In my recent quick reference guide published by ISTE and ASCD, Using AI Chatbots to Enhance Planning and Instruction, I explore this topic. You can also find 50 more prompts for educators in this free ebook.


Professors Craft Courses on ChatGPT With ChatGPT — from insidehighered.com by Lauren Coffey
While some institutions are banning the use of the new AI tool, others are leaning into its use and offering courses dedicated solely to navigating the new technology.

Maynard, along with Jules White at Vanderbilt University, is among a small number of professors launching courses focused solely on teaching students across disciplines to better navigate AI and ChatGPT.

The offerings go beyond institutions flexing their innovation skills—the faculty behind these courses view them as imperative to ensure students are prepared for ever-changing workforce needs.


GPT-4 can already pass freshman year at Harvard | professors need to adapt to their students’ new reality — fast — from chronicle.com by Maya Bodnick (an undergraduate at Harvard University, studying government)

A. A. A-. B. B-. Pass.

That’s a solid report card for a freshman in college, a respectable 3.57 GPA. I recently finished my freshman year at Harvard, but those grades aren’t mine — they’re GPT-4’s.

Three weeks ago, I asked seven Harvard professors and teaching assistants to grade essays written by GPT-4 in response to a prompt assigned in their class. Most of these essays were major assignments which counted for about one-quarter to one-third of students’ grades in the class. (I’ve listed the professors or preceptors for all of these classes, but some of the essays were graded by TAs.)

Here are the prompts with links to the essays, the names of instructors, and the grades each essay received…

The impact that AI is having on liberal-arts homework is indicative of the AI threat to the career fields that liberal-arts majors tend to enter. So maybe what we should really be focused on isn’t, “How do we make liberal-arts homework better?” but rather, “What are jobs going to look like over the next 10–20 years, and how do we prepare students to succeed in that world?”



The great assessment rethink — from timeshighereducation.com
How to measure learning and protect academic integrity in the age of ChatGPT

Items from Times Higher Education re: redesigning assessment

 

Disconnect grows between law firm service and client expectation, survey finds — from legaltechnology.com by Caroline Hill

Over 90% of in-house counsel and three quarters of private practice lawyers said that the legal sector is slow to embrace data, technology and new delivery models – a significant increase on the 64% who felt the same way last year.

In terms of the disconnect, 96% of in-house counsel agreed with the statement that what law firms provide is out of kilter with what clients expect. The majority (78%) of private practice lawyers also largely agreed.


ndaOK Unleashes Next-Level Efficiency in Legal Tech with GPT-4 Powered NDA Review — from legaldive.com by Christina Pennell
AI-driven legal tool promises to reduce NDA review times by over 90%

AUSTIN, Tex. —  ndaOK, an innovator in AI-powered legal technology, today announces the launch of its next-generation non-disclosure agreement (NDA) review system. This advanced solution leverages OpenAI’s GPT-4 large multimodal model, a first among legal technology companies, offering unprecedented performance and efficiency in reviewing NDAs.

Capitalizing on the computational power and versatility of GPT-4, ndaOK accurately reviews and edits documents based on a user’s pre-determined requirements without the need for human assistance or input. This unique capability makes ndaOK faster and easier to deploy than any other AI-based contract review solution.


And here are two relevant postings that I missed a while back:

ANALYSIS: Meet the Law Schools Leading the Way in Innovation — from news.bloomberglaw.com by Francis Boustany

As law firms, businesses, and their clients adapt to the new realities of the legal and business worlds, law schools must prepare students in new ways—beyond traditional law school curricula and teaching methods—to give students an experience and education that better prepares them for their post-graduation careers.

Bloomberg Law launched its inaugural Law School Innovation Program as a means of promoting, acknowledging, and connecting the law schools that are innovating in the legal education space and providing their students with new ways of learning the law.

In this reimagined version of law school, students are taught to have ‘an entrepreneur’s mind’ — from fastcompany.com by Grace Buono
The University of Richmond program asks students to think not only like lawyers but also like entrepreneurs.

The demand for good lawyers is nothing new and, each year, law schools churn out graduates equipped with virtually the same skills as decades of law students before them. But one school is trying to change that.

The University of Richmond’s Legal Business Design Hub has students focus on the strategy, design, and operations of legal services in addition to their regular coursework, applying what the program calls “an entrepreneur’s mind” to their studies.

 

Generative AI and the future of work in America — from mckinsey.com by Kweilin Ellingrud, Saurabh Sanghvi, Gurneet Singh Dandona, Anu Madgavkar, Michael Chui, Olivia White, and Paige Hasebe

At a glance

  • During the pandemic (2019–22), the US labor market saw 8.6 million occupational shifts, 50 percent more than in the previous three-year period.
  • By 2030, activities that account for up to 30 percent of hours currently worked across the US economy could be automated—a trend accelerated by generative AI.
  • Federal investment to address climate and infrastructure, as well as structural shifts, will also alter labor demand.
  • An additional 12 million occupational transitions may be needed by 2030.
  • The United States will need workforce development on a far larger scale as well as more expansive hiring approaches from employers.

Employers will need to hire for skills and competencies rather than credentials, recruit from overlooked populations (such as rural workers and people with disabilities), and deliver training that keeps pace with their evolving needs.


The AI-Powered, Totally Autonomous Future of War Is Here — from wired.com by Will Knight
Ships without crews. Self-directed drone swarms. How a US Navy task force is using off-the-shelf robotics and artificial intelligence to prepare for the next age of conflict.

From DSC:
Hhhhmmmmm…..not good. Is anyone surprised by this? No, I didn’t think so either. That’s why the United States and China are so heated up about semiconductor chips.


AI puts glitch in graduates’ employment plans — from hrdive.com by Ginger Christ
Recent grads are worried how AI will affect their career prospects, a new survey found.

Excerpt:

  • The proliferation of new technologies like generative artificial intelligence is making recent graduates uneasy, a new study released Thursday found. A third of the 1,000 people who graduated in the past year said they are second-guessing their career choice, while roughly half reported questioning their workforce preparedness and feeling threatened by AI, according to the 2023 Employability Report by Cengage Group, a global education technology company.

“The workplace has changed rapidly in the last few years, and now we are witnessing a new shift as AI begins to reshape worker productivity, job requirements, hiring habits and even entire industries,” Michael Hansen, Cengage Group CEO, said in a news release. 

Along these lines, also see:

AI Boom Creates Concerns for Recent Graduates — from insidehighered.com by  Lauren Coffey

More than half of recent graduates question whether they are properly prepared for the workforce in light of the rise of artificial intelligence, a survey finds.

There is also more of a preference for skills training credentials. Among employers, nearly 40 percent said skills training credentials are most important, while only 19 percent ranked a college degree as most important.

However, recent graduates did cite an issue with most higher education institutions’ ability to teach employability skills. In 2023, 43 percent of students said their degree program taught them the necessary skills for their first job, down 20 percentage points from 2022.


Instructure, Khan Academy Announce Major Partnership On AI Tutoring, Teaching — from forbes.com by Derek Newton

The news is that Instructure, one of the few public education companies and the market leader in learning management with their signature product Canvas, struck a partnership with Khan Academy to create an AI-powered tutoring and teaching assistant tool – merging Khan’s innovative instructional content and Instructure’s significant reach, scale, and data insights. The partnership and related tools will be known as Khanmigo, according to the announcement.

On brand names alone, this is a big deal. On potential impact, it could be even bigger.


How To Use AI to Write Scenarios — from christytuckerlearning.com by Christy Tucker
How can you use AI to write scenarios for learning? Read this example with prompts and results using ChatGPT and Bard.

Excerpts:

So far, I have found these tools helpful in generating ideas, writing first drafts, and summarizing. They work better for general knowledge tasks than really specific topics unless I provide more details to them, which makes sense.

This post isn’t going to give you “5 magical prompts to instantly write scenarios for you” or anything like that. Instead, this is a “working out loud” post where I’ll share some prompts I have used.

Christy’s posting includes:

  1. “The Meeting from Hell”
  2. “The Backstabbing Coworker”
  3. “The Boss from Hell”
  4. “The Office Romance Gone Wrong”
  5. “The New Hire with Attitude”

Some potential tools for you to check out:



The Rise of the Talent Economy — from drphilippahardman.substack.com by Dr. Philippa Hardman
How Education & Training Will Dictate the Future & Impact of AI

“Talent, more than capital, will represent the critical factor of production.”

In short, the demand for AI skills requires a significant transformation in training and education models. To bridge the global skills gap, educational institutions, online learning providers, and employers must design and deliver training programs that cater to the rapidly evolving AI-driven labor market. 


How ChatGPT killed my discussion boards and prompted new prompts — from timeshighereducation.com by Sara Cline; per Robert Gibson on LinkedIn
Advice on learning and discussion prompts that require students to think beyond the remit of AI responses

Excerpts:

To combat this problem, we modified some of our prompts this summer to try to prevent students from using AI to avoid learning. I’m sharing some of our strategies in the hope that they help you out as you adapt your course to a world of generative AI.

  1. Use prompts that force a personal opinion.
  2. Have students include their source(s) as an attachment.
  3. Use current or local events.
  4. Have them take and caption a photo.
  5. Draw a diagram or chart.
  6. Build and explain a 3D model.
  7. Include timestamps from lecture videos.
  8. Scrap the discussion boards.

Dark web ChatGPT is here… — from therundown.ai

The Rundown: A new cybercrime generative AI tool called FraudGPT is being advertised on the Dark web and Telegram channels, offering offensive capabilities like crafting spear-phishing emails and creating undetectable malware.

Why it matters: Scammers can now look more realistic than ever before and at a larger scale. The sad truth is that the emergence of cybercrime AI tools like FraudGPT is just beginning.


From DSC:
If true and if it could help build and/or contribute to cloud-based learner profiles,  this could be huge.


Wayfair’s AI tool can redraw your living room and sell you furniture — from theverge.com by Wes Davis
The home decoration company’s new Decorify AI remodeling tool is clumsy but could be effective for visualization while remodeling.

A living room -- Wayfair is experimenting with using AI technologies to help people envision interior design moves

 

22 Classroom-Focused Resources on AI from Teachers Everywhere — from coolcatteacher.com by Vicki Davis; via GSV



Back to School Survey: 44% of Teens “Likely” to Use AI To Do Their Schoolwork for Them This School Year — from prnewswire.com by Junior Achievement
Research by Junior Achievement Shows 60% of Teens Consider the Use of AI to Do Their Schoolwork for Them as “Cheating”

Excerpt:

COLORADO SPRINGS, Colo., July 26, 2023 /PRNewswire/ — A new survey of teens conducted for Junior Achievement by the research firm Big Village shows that nearly half of teens (44%) are “likely” to use AI to do their schoolwork instead of doing it themselves this coming school year. However, most teens (60%) consider using AI in this way as “cheating.” The survey of 1,006 13- to 17-year-olds was conducted by Big Village from July 6 through 11, 2023.

From DSC:
In a competitive society as we have in the U.S. and when many of our K-12 learning ecosystems are designed to create game players, we shouldn’t be surprised to see a significant amount of our students using AI to “win”/game the system.

As it becomes appropriate for each student, offering more choice and control should help to allow more students to pursue what they want to learn about. They won’t be as interested in gaming the system if they truly want to learn about something.

 

McKinsey Technology Trends Outlook 2023 — from mckinsey.com

Excerpt:

Which technology trends have the most momentum in an accelerating world? We ranked the top cross-industry trends that matter most for companies and executives.


 

Partnership with American Journalism Project to support local news — from openai.com; via The Rundown AI
A new $5+ million partnership aims to explore ways the development of artificial intelligence (AI) can support a thriving, innovative local news field, and ensure local news organizations shape the future of this emerging technology.


SEC’s Gensler Warns AI Risks Financial Stability — from bloomberg.com by Lydia Beyoud; via The Brainyacts
SEC on lookout for fraud, conflicts of interest, chair says | Gensler cautions companies touting AI in corporate docs


Per a recent Brainyacts posting:

The recent petition from Kenyan workers who engage in content moderation for OpenAI’s ChatGPT, via the intermediary company Sama, has opened a new discussion in the global legal market. This dialogue surrounds the concept of “harmful and dangerous technology work” and its implications for laws and regulations within the expansive field of AI development and deployment.

The petition, asking for investigations into the working conditions and operations of big tech companies outsourcing services in Kenya, is notable not just for its immediate context but also for the broader legal issues it raises. Central among these is the notion of “harmful and dangerous technology work,” a term that encapsulates the uniquely modern form of labor involved in developing and ensuring the safety of AI systems.

The most junior data labelers, or agents, earned a basic salary of 21,000 Kenyan shillings ($170) per month, with monthly bonuses and commissions for meeting performance targets that could elevate their hourly rate to just $1.44 – a far cry from the $12.50 hourly rate that OpenAI paid Sama for their work. This discrepancy raises crucial questions about the fair distribution of economic benefits in the AI value chain.


How ChatGPT Code Interpreter (And Four Other AI Initiatives) Might Revolutionize Education — from edtechinsiders.substack.com by Phuong Do, Alex Sarlin, and Sarah Morin
And more on Meta’s Llama, education LLMs, the Supreme Court affirmative action ruling, and Byju’s continued unraveling

Let’s put it all together for emphasis. With Code Interpreter by ChatGPT, you can:

  1. Upload any file
  2. Tell ChatGPT what you want to do with it
  3. Receive your instructions translated into Python
  4. Execute the code
  5. Transform the output back into readable language (or visuals, charts, graphs, tables, etc.)
  6. Provide the results (and the underlying Python code)
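As a rough illustration of steps 3–5, the Python that Code Interpreter generates for a request like "summarize this sales file" might look something like the sketch below. The file contents and column names here are invented for the example; in practice the code runs against whatever file the user actually uploaded.

```python
# Hypothetical sketch of Code Interpreter-style generated Python:
# read an uploaded CSV, compute a summary, and report it back.
import csv
import io
import statistics

# Stand-in for the uploaded file (step 1); Code Interpreter would
# read the real file from its sandboxed filesystem instead.
uploaded = io.StringIO(
    "region,units\n"
    "North,120\n"
    "South,95\n"
    "East,143\n"
    "West,88\n"
)

rows = list(csv.DictReader(uploaded))
units = [int(r["units"]) for r in rows]

# Step 4: execute the analysis.
summary = {
    "rows": len(rows),
    "total_units": sum(units),
    "mean_units": statistics.mean(units),
    "top_region": max(rows, key=lambda r: int(r["units"]))["region"],
}

# Step 5: the model would translate this output back into plain language.
print(summary)
```

The point of the feature is that the user never writes or reads this code unless they want to; they see only the natural-language request going in and the narrated results (plus optional charts) coming out.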


AI Tools and Links — from Wally Boston

It’s become so difficult to track AI tools as they are revealed. I’ve decided to create a running list of tools as I find out about them.  The list is in alphabetical order even though there are classification systems that I’ve seen others use. Although it’s not good in blogging land to update posts, I’ll change the date every time that I update this list. Please feel free to respond to me with your comments about any of these as well as AI tools that you use that I do not have on the list. I’ll post your comments next to a tool when appropriate. Thanks.


Meet Claude — A helpful new AI assistant — from wondertools.substack.com by Jeremy Caplan
How to make the most of ChatGPT’s new alternative

Claude has surprising capabilities, including a couple you won’t find in the free version of ChatGPT.

Since this new AI bot launched on July 11, I’ve found Claude useful for summarizing long transcripts, clarifying complex writings, and generating lists of ideas and questions. It also helps me put unstructured notes into orderly tables. For some things, I prefer Claude to ChatGPT. Read on for Claude’s strengths and limitations, and ideas for using it creatively.

Claude’s free version allows you to attach documents for analysis. ChatGPT’s doesn’t.


The Next Frontier For Large Language Models Is Biology — from forbes.com by Rob Toews

Large language models like GPT-4 have taken the world by storm thanks to their astonishing command of natural language. Yet the most significant long-term opportunity for LLMs will entail an entirely different type of language: the language of biology.

In the near term, the most compelling opportunity to apply large language models in the life sciences is to design novel proteins.



Seven AI companies agree to safeguards in the US — from bbc.com by Shiona McCallum; via Tom Barrett

Seven leading companies in artificial intelligence have committed to managing risks posed by the tech, the White House has said.

This will include testing the security of AI, and making the results of those tests public.

Representatives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI joined US President Joe Biden to make the announcement.

 

New York Times sues AI — theneurondaily.com


In Hollywood, writers and actors/actresses are on strike due to AI-related items.


From DSC:
And while some are asking about other industries’/individuals’ data, Bryan Alexander asks this about academics:

While I’m here, also see Bryan’s posting –> How colleges and universities are responding to AI now


AI’s Coming Constitutional Convention — from thebrainyacts.beehiiv.com

For centuries, constitutional conventions have been pivotal moments in history for codifying the rights and responsibilities that shape civilized societies. As artificial intelligence rapidly grows more powerful, the AI community faces a similar historic inflection point. The time has come to draft a “Constitutional Convention for AI” – to proactively encode principles that will steer these transformative technologies toward justice, empowerment, and human flourishing.

AI promises immense benefits, from curing diseases to unlocking clean energy. But uncontrolled, it poses existential dangers that could undermine human autonomy and dignity. Lawyers understand well that mere regulations or oversight are no match for a determined bad actor. Fundamental principles must be woven into the very fabric of the system.

 

The invisible cost of resisting AI in higher education — from blogs.lse.ac.uk by Dr. Philippa Hardman

Excerpt (emphasis DSC):

The implications of this development are perhaps more significant than we realise. There has been much discussion in recent months about the risks associated with the rise of generative AI for higher education, with most of the discussion centring around the challenge that ChatGPT poses to academic integrity.

However, much less work has been done on exploring the negative – even existential – consequences that might stem from not embracing AI in higher education. Are these new principles enough to reverse the risk of irrelevance?

What if we reimagine “learning” in higher education as something more than the recall and restructuring of existing information? What if instead of lectures, essays and exams we shifted to a model of problem sets, projects and portfolios?

I am often asked what this could look like in practice. If we turn to tried and tested instructional strategies which optimise for learner motivation and mastery, it would look something like this…

Also relevant/see:

Do or Die? — from drphilippahardman.substack.com by Dr. Philippa Hardman
The invisible cost of resisting AI in higher education

Excerpt:

  • Embracing AI in the higher education sector prepares students for the increasingly technology-driven job market and promotes more active, participatory learning experiences which we know lead to better outcomes for both students and employers.
  • With the rising popularity of alternative education routes such as bootcamps and apprenticeships, it’s crucial for traditional higher education to engage positively with AI in order to maintain its competitiveness and relevance.

For example, a teacher crafting a lesson plan no longer has to repeat that they’re teaching 3rd grade science. A developer preferring efficient code in a language that’s not Python – they can say it once, and it’s understood. Grocery shopping for a big family becomes easier, with the model accounting for 6 servings in the grocery list.


This is the worst AI will ever be, so focused are educators on the present they can’t see the future — from donaldclarkplanb.blogspot.com by Donald Clark

Teaching technology
There is also the misconception around the word ‘generative’, the assumption that all it does is create blocks of predictable text. Wrong. Many of its best uses in learning are its ability to summarise, outline, and provide guidance, support and many other pedagogic features that can be built into the software. This works and will mean tutors, teachers, teaching support, note-taking support, coaches and many other services will emerge that aid both teaching and learning. They are being developed in their hundreds as we speak.

This simple fact, that this is the first technology to ‘learn’ and learn fast, on scale, continuously, across a range of media and tasks, is what makes it extraordinary.


On holding back the strange AI tide — from oneusefulthing.org by Ethan Mollick
There is no way to stop the disruption. We need to channel it instead

And empowering workers is not going to be possible with a top-down solution alone. Instead, consider:

  • Radical incentives to ensure that workers are willing to share what they learn. If they are worried about being punished, they won’t share. If they are worried they won’t be rewarded, they won’t share. If they are worried that the AI tools that they develop might replace them, or their coworkers, they won’t share. Corporate leaders need to figure out a way to reassure and reward workers, something they are not used to doing.
  • Empowering user-to-user innovation. Build prompt libraries that help workers develop and share prompts with other people inside the organization. Open up tools broadly to workers to use (while still setting policies around proprietary information), and see what they come up with. Create slack time for workers to develop, and discuss, AI approaches.
  • Don’t rely on outside providers or your existing R&D groups to tell you the answer. We are in the very early days of a new technology. Nobody really knows anything about the best ways to use AI, and they certainly don’t know the best ways to use it in your company. Only by diving in, responsibly, can you hope to figure out the best use cases.
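The "prompt libraries" idea in the second bullet can start very small. A minimal sketch (the names and templates below are invented for illustration, not taken from any product) is just a shared collection of named templates with placeholders that workers fill in and reuse:

```python
# A hypothetical internal prompt library: named templates with
# placeholders that colleagues can share, reuse, and refine.
from string import Template

PROMPT_LIBRARY = {
    "summarize_meeting": Template(
        "Summarize the following meeting notes in $n bullet points, "
        "highlighting decisions and action items:\n$notes"
    ),
    "draft_reply": Template(
        "Draft a $tone reply to this customer message:\n$message"
    ),
}

def render(name: str, **fields) -> str:
    """Look up a shared template by name and fill in the caller's fields."""
    return PROMPT_LIBRARY[name].substitute(**fields)

prompt = render(
    "summarize_meeting",
    n=3,
    notes="Q3 budget approved; hire two analysts.",
)
print(prompt)
```

Even a structure this simple captures the point Mollick is making: the value is not in any one template but in giving workers a shared place to contribute what they learn, so good prompts spread peer-to-peer instead of staying siloed.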

Teaching: Preparing yourself for AI in the classroom — from chronicle.com by Beth McMurtrie

Auburn’s modules cover the following questions:

  • What do I need to know about AI?
  • What are the ethical considerations in a higher-ed context?
  • How will AI tools affect the courses I teach?
  • How are students using AI tools, and how can I partner with my students?
  • How do I need to rethink exams, papers, and projects I assign?
  • How do I redesign my courses in the wake of AI disruption?
  • What other AI tools or capabilities are coming, and how can I design for them?
  • What conversations need to happen in my department or discipline, and what is my role?

Transforming Higher Education: AI as an Assistive Technology for Inclusive Learning — from fenews.co.uk by Gain Hoole

In recent years, I have witnessed the transformative power of technology in higher education. One particular innovation that has captured my attention is Artificial Intelligence (AI). AI holds tremendous potential as an assistive technology for students with reasonable adjustments in further education (FE) and higher education (HE).

In this comprehensive blog post, I will delve into the multifaceted aspects of AI as an assistive technology, exploring its benefits, considerations, challenges, and the future it holds for transforming higher education.

The integration of AI as an assistive technology can create an inclusive educational environment where all students, regardless of disabilities or specific learning needs, have equal access to educational resources. Real-time transcription services, text-to-speech capabilities, and personalized learning experiences empower students like me to engage with course content in various formats and at our own pace (Fenews, 2023). This not only removes barriers but also fosters a more inclusive and diverse academic community.


5 Ways to Ease Students Off the Lecture and Into Active Learning — from chronicle.com by Jeremy T. Murphy
Lecturing endures in college classrooms in part because students prefer that style of teaching. How can we shift that preference?

What can we do? Here are five considerations I’ll be following this coming fall in response to that nagging “less discussion, more instruction” evaluation.

  • Lecture … sparingly. 
  • Routinely ask how the course is going.
  • Be transparent.
  • …and more

A three-part series re: courseware out at The Chronicle of Higher Education:

  1. Millions of Students a Year Are Required to Buy Courseware. Often, It Replaces the Professor. — from chronicle.com by Taylor Swaak
  2. Courseware Can Be Integral to a Course. Why, Then, Are Students Footing the Bill for It? — from chronicle.com by Taylor Swaak
    The Homework Tax | For students already struggling to afford college, courseware can add to the burden
    Their argument is multifold: For one, they say, products like these — which often deliver key elements of a course that an instructor would typically be responsible for, like homework, assessments, and grading — should not be the student’s burden. At least one student advocate said colleges, rather, should cover or subsidize the cost, as they do with software like learning-management systems, if they’re allowing faculty free rein to adopt the products.

    And the fact that students’ access to these products expires — sometimes after just a semester — rubs salt in the wound, and risks further disadvantaging students.
  3. Bots Are Grabbing Students’ Personal Data When They Complete Assignments — from chronicle.com by Taylor Swaak
    When students use courseware, how much personal data is it collecting?

Institutions aren’t “letting the wolf into the henhouse”; instead, “we’re letting the hens out into a forest of wolves,” said Billy Meinke, an open educational resources technologist with the Outreach College at the University of Hawaii-Manoa who’s done research on publisher misuse of student data.


Here are five reading challenges to learn about learning this summer — from retrievalpractice.org by Pooja K. Agarwal, Ph.D.

Excerpt (emphasis DSC):

Here are five summer reading challenges to learn about the science of learning.

Important: make sure you remember what you learn! Engage yourself in retrieval practice and retrieve two things after each book, practice guide, and research article you read. Share your two things with our communities on Twitter and Facebook, make a list of what you’ve learned to boost your long-term learning,…


Assignment Makeovers in the AI Age: Essay Edition — from derekbruff.org by Derek Bruff

Last week, I explored some ways an instructor might want to (or need to) redesign a reading response assignment for the fall, given the many AI text generation tools now available to students. This week, I want to continue that thread with another assignment makeover. Reading response assignments were just the warm up; now we’re tackling the essay assignment.


Here are ways professional education leaders can prepare students for the rise of AI — from highereddive.com by A. Benjamin Spencer
Institutions must adapt their curricula to incorporate artificial intelligence-related topics, the dean of William & Mary Law School argues.

First, they need to understand that the technological side of AI can no longer be simply left to the information technology experts. Regardless of the professional domain, understanding what AI is, how it works, how the underlying code and algorithms are designed, and what assumptions lie behind the computer code are important components to being able to use and consume the products of AI tools appropriately. 

 

It’s time for a Legal Moonshot — from jordanfurlong.substack.com by Jordan Furlong
All the challenges facing the legal sector today are systemic and entrenched. To solve them, we have to make a radical commitment to accomplish what we once believed impossible.

Here are three Legal Moonshots that the legal profession could take the lead on.

  1. Establish universal access to justice.
    Someday, this will be reality. Everyone will know their basic legal rights and can easily exercise them. Legal remedies will be free or extremely low-cost. Courts will be integrated into communities with simple entry and guided assistance, delivering clear and swift justice. AI-driven online services will render business agreements and settle everyday disputes. Everyone will have a last will and testament. Nobody will have to represent themselves. Justice will be real. That is all possible, and lawyers can lead the way there. It’s our Holy Grail. Let’s make it actually happen.
  2. Eliminate violence against women.  
  3. Root out public and private corruption.

The Tech Stack Law Firms and Legal Professionals Need to Succeed (Adriana Linares – LawTech Partners) — from tlpodcast.com with Adriana Linares

Adriana explains the differences between case management software, document management platforms, and practice management software. She also touches on the importance of document assembly software and how to maximize the use of data captured during the various stages of a legal matter. She closes out the discussion by explaining why many in legal are missing out when they don’t use CRMs (Client and Customer Relationship Management platforms).


How To Use AI in Your Firm (with examples!) — a 1.5 hour webinar recording from clio.com; via The Brainyacts
You know your firm could benefit from AI—now, see how.

In this webinar recording you’ll learn about:

  • Practical use cases for AI in law firms—from legal research to practice area-specific prompts.
  • Popular AI tools and how to choose ones that work with your firm’s budget and goals.
  • The limitations, risks, and ethical considerations of AI for legal professionals.

Virtual Law Firms: Reinventing The Legal Profession With Technology — from forbes.com by Mohaimina Haque

Given these intrinsic advantages, it should come as no surprise that virtual law firms are on the rise. The shutdown and disruptions caused by Covid have provided a further impetus to this trend. For example, it used to take an attorney a whole day to drive to the courthouse, park, wait for the judge to call their case, argue the matter, and then drive back to the office. Now the same matter can be handled over Zoom, and court filings can be submitted online. Since these measures were instituted, I’ve seen opposition to them erode as the real savings in time and money for all parties concerned have become very clear.


Legal Soft Revolutionizes Legal Training with AI-Powered Platform — from globenewswire.com

LOS ANGELES, July 21, 2023 (GLOBE NEWSWIRE) — Legal Soft, a pioneering company in legal technology, is leveraging the power of artificial intelligence (AI) to revolutionize the development of training materials for law firms. Committed to advancing legal education and professional development, Legal Soft’s innovative AI-driven platform is transforming the training landscape for legal professionals.


AI Legal Case Analysis: The Next Frontier in Legal Technology — from dtgreviews.com

AI legal case analysis refers to the use of AI algorithms to analyze legal cases, identify patterns, predict outcomes, and provide insights that can aid in legal decision-making. This technology has the potential to revolutionize the way lawyers approach case strategy, conduct legal research, and even interact with clients.

One of the most significant benefits of AI legal case analysis is its ability to process vast amounts of data quickly and accurately. Traditional legal research is a time-consuming process that involves sifting through hundreds, if not thousands, of cases to find relevant precedents. AI can automate this process, analyzing thousands of cases in a fraction of the time it would take a human. This not only saves time but also increases the accuracy of the research, reducing the risk of human error.

Moreover, AI can identify patterns and trends in case law that might be overlooked by human researchers. For instance, AI can analyze the decisions of a particular judge or court to determine how they typically rule on certain issues. This information can be invaluable in formulating a case strategy or predicting the outcome of a case.
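As a toy illustration of the pattern-finding idea (the case data and field names below are invented for the example), even a simple tally of how often each judge rules for the plaintiff, broken down by issue, is the kind of descriptive baseline such tools build on before any machine learning is involved:

```python
# Toy example: tally how often each judge rules for the plaintiff,
# broken down by legal issue. The data is invented for illustration.
from collections import defaultdict

# (judge, issue, outcome) triples
cases = [
    ("Judge A", "contract", "plaintiff"),
    ("Judge A", "contract", "plaintiff"),
    ("Judge A", "tort", "defendant"),
    ("Judge B", "contract", "defendant"),
    ("Judge B", "contract", "defendant"),
    ("Judge B", "tort", "plaintiff"),
]

def ruling_rates(cases):
    """Fraction of cases each (judge, issue) pair was decided for the plaintiff."""
    totals = defaultdict(int)
    plaintiff_wins = defaultdict(int)
    for judge, issue, outcome in cases:
        totals[(judge, issue)] += 1
        if outcome == "plaintiff":
            plaintiff_wins[(judge, issue)] += 1
    return {key: plaintiff_wins[key] / totals[key] for key in totals}

rates = ruling_rates(cases)
```

On this invented data, `rates[("Judge A", "contract")]` comes out to 1.0 and `rates[("Judge B", "contract")]` to 0.0: exactly the sort of judge-level tendency the article describes, at miniature scale.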

From DSC:
I’m not sure how I feel about this yet…but I have to admit that I’m very tentative and a bit suspicious at this point.

 

‘Future lawyers need to be versatile and willing to innovate’ — from legalcheek.com

Lawyers’ jobs will be different, and new roles have emerged and will continue to emerge, such as the legal technologist. I believe further momentum will come with the growth of future lawyers.

I would like to see AI fill this void, perhaps via a chatbot that offers reliable and accurate guidance to individuals and services, helping to mitigate the pain relating to basic but fundamental legal matters. Issues such as homelessness, debt, asylum and slavery would be examples of areas to prioritise.

To Future-Proof Their Firms, Attorneys Must Embrace AI — from forbes.com by Daniel Farrar

Law firms that want to use AI-powered legal tech should first adopt the mindset that AI is there to supplement their attorneys’ roles, not replace those roles. They must come to view AI-driven legal tech as a means to complete their jobs more efficiently and better address their clients’ needs.

Generative AI and the future of the legal profession: LexisNexis UK report canvases in-house and law firm views  — from legaltechnology.com by Caroline Hill

LexisNexis Legal & Professional (on 13 July) released a new report entitled “Generative AI and the future of the legal profession”, which highlights the at times surprising expectations of in-house counsel and law firms when it comes to generative AI adoption.

Forty-nine percent of in-house counsel expect their law firms to be using generative AI in the next 12 months, including 11% who say they expect firms to be already using the technology. Only 8% didn’t want AI used on their work. In contrast, 24% of firms believe their clients would not want them to use AI.

The survey, conducted among 1,175 UK legal professionals from May to June 2023, finds 87% of legal professionals are aware of generative AI tools and, of that group, 95% agree these tools will have an impact on the practice of law (38% said it will have a significant impact, 11% said it will be transformative and 46% thought it would have “some impact”).
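A quick arithmetic check on those nested figures: the 95% is a share of the aware group, not of all respondents, so the overall share expecting an impact is the product of the two percentages, and the three impact levels quoted should sum back to that 95%:

```python
# The 95% "expect an impact" figure applies to the aware group only,
# so the share of ALL respondents expecting an impact is the product.
aware = 0.87               # aware of generative AI tools
impact_given_aware = 0.95  # of the aware group, expect an impact on practice

overall = aware * impact_given_aware  # roughly 0.83 of all respondents

# The quoted impact levels should sum back to the 95% figure.
breakdown = 38 + 11 + 46  # significant + transformative + "some impact"
assert breakdown == 95
```

So roughly 83% of all respondents both know of the tools and expect them to change legal practice.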

5 Considerations Before Starting With New Legal Tech  — from jdsupra.com

Suddenly the most urgent in-house legal conversations centered on which new technology to install next. And while I’m glad to see more legal departments shift their mindsets, I still preach caution. Budgets are tight, change is difficult and selecting the right tool at the wrong moment can prove counterproductive.

So before starting the search for innovative or new legal tech, it’s always best to assess your readiness with a few quick criteria.

 

Teaching Assistants that Actually Assist Instructors with Teaching — from opencontent.org by David Wiley

“…what if generative AI could provide every instructor with a genuine teaching assistant – a teaching assistant that actually assisted instructors with their teaching?”

Assignment Makeovers in the AI Age: Reading Response Edition — from derekbruff.org by Derek Bruff

For my cryptography course, Mollick’s first option would probably mean throwing out all my existing reading questions. My intent with these reading questions was noble, that is, to guide students to the big questions and debates in the field, but those are exactly the kinds of questions for which AI can write decent answers. Maybe the AI tools would fare worse in a more advanced course with very specialized readings, but in my intro to cryptography course, they can handle my existing reading questions with ease.

What about option two? I think one version of this would be to do away with the reading response assignment altogether.

4 Steps to Help You Plan for ChatGPT in Your Classroom — from chronicle.com by Flower Darby
Why you should understand how to teach with AI tools — even if you have no plans to actually use them.


Some items re: AI in other areas:

15 Generative AI Tools A billion+ people will be collectively using very soon. I use most of them every day — from stefanbauschard.substack.com by Stefan Bauschard
ChatGPT, Bing, Office Suite, Google Docs, Claude, Perplexity.ai, Plug-Ins, MidJourney, Pi, Runway, Bard, Bing, Synthesia, D-ID

The Future of AI in Video: a look forward — from provideocoalition.com by Iain Anderson

Actors say Hollywood studios want their AI replicas — for free, forever — from theverge.com by Andrew Webster; resource from Tom Barrett

Along these lines of Hollywood and AI, see this Tweet:

Claude 2: ChatGPT rival launches chatbot that can summarise a novel — from theguardian.com by Dan Milmo; resource from Tom Barrett
Anthropic releases chatbot able to process large blocks of text and make judgments on what it is producing

Generative AI imagines new protein structures — from news.mit.edu by Rachel Gordon; resource from Sunday Signal
MIT researchers develop “FrameDiff,” a computational tool that uses generative AI to craft new protein structures, with the aim of accelerating drug development and improving gene therapy.

Google’s medical AI chatbot is already being tested in hospitals — from theverge.com by Wes Davis; resource via GSV

Ready to Sing Elvis Karaoke … as Elvis? The Weird Rise of AI Music — from rollingstone.com by Brian Hiatt; resource from Misha da Vinci
From voice-cloning wars to looming copyright disputes to a potential flood of nonhuman music on streaming, AI is already a musical battleground

 
 

Actors say Hollywood studios want their AI replicas — for free, forever — from theverge.com by Andrew Webster
The reveal came as SAG-AFTRA actors confirmed they were going on strike.

When asked about the proposal during the press conference, Crabtree-Ireland said that “This ‘groundbreaking’ AI proposal that they gave us yesterday, they proposed that our background performers should be able to be scanned, get one day’s pay, and their companies should own that scan, their image, their likeness and should be able to use it for the rest of eternity on any project they want, with no consent and no compensation. So if you think that’s a groundbreaking proposal, I suggest you think again.”

 
© 2025 | Daniel Christian