The resistance to AI in education isn’t really about learning — from medium.com by Peter Shea


A quick comment first from DSC:
Peter Shea gives us some interesting perspectives here. His thoughts should give many of us fodder for our own further reflection.


This reaction underscores a deeper issue: the resistance to AI in education is not truly about learning. It reflects a reluctance to re-evaluate the traditional roles of educators and to embrace the opportunities AI offers to enhance the learning experience.

In order to thrive in the learning ecosystem that will evolve in the Age of AI, the teaching profession needs to undertake a difficult but essential re-evaluation of its role, in order to better understand where it can provide the best value to learners. This requires confronting some comforting myths and uncomfortable truths.

Problem #2: The Closed World of Academic Culture
In addition, many teachers have spent little time working in non-academic professions. This is especially true for college instructors, who must devote five to seven years to graduate education before obtaining their first full-time position, and thus have little time to explore careers outside academia. This common lack of non-academic work experience heightens the anxiety that educators feel when contemplating the potential impact of generative AI on their work lives.


Also see this related posting:

Majority of Grads Wish They’d Been Taught AI in College — from insidehighered.com by Lauren Coffey
A new survey shows 70 percent of graduates think generative AI should be incorporated into courses. More than half said they felt unprepared for the workforce.

A majority of college graduates believe generative artificial intelligence tools should be incorporated into college classrooms, with more than half saying they felt unprepared for the workforce, according to a new survey from Cengage Group, an education-technology company.

The survey, released today, found that 70 percent of graduates believe basic generative AI training should be integrated into courses; 55 percent said their degree programs did not prepare them to use the new technology tools in the workforce.

 


“Who to follow in AI” in 2024? [Part I] — from ai-supremacy.com by Michael Spencer [some of posting is behind a paywall]
#1-20 [of 150] – I combed the internet and found the best sources of AI insights, education and articles. LinkedIn | Newsletters | X | YouTube | Substack | Threads | Podcasts

Along these lines, also see:


AI In Medicine: 3 Future Scenarios From Utopia To Dystopia — from medicalfuturist.com by Andrea Koncz
There’s a vast difference between baseless fantasizing and realistic forward planning. Structured methodologies help us learn how to “dream well”.

Key Takeaways

  • We’re often told that daydreaming and envisioning the future is a waste of time. But this notion is misguided.
  • We all instinctively plan for the future in small ways, like organizing a trip or preparing for a dinner party. This same principle can be applied to larger-scale issues, and smart planning does bring better results.
  • We show you a method that allows us to think “well” about the future on a larger scale so that it better meets our needs.

Adobe Unveils Powerful New Innovations in Illustrator and Photoshop Unlocking New Design Possibilities for Creative Pros — from news.adobe.com

  • Latest Illustrator and Photoshop releases accelerate creative workflows, save pros time and empower designers to realize their visions faster
  • New Firefly-enabled features like Generative Shape Fill in Illustrator along with the Dimension Tool, Mockup, Text to Pattern, the Contextual Taskbar and performance enhancement tools accelerate productivity and free up time so creative pros can dive deeper into the parts of their work they love
  • Photoshop introduces all-new Selection Brush Tool and the general availability of Generate Image, Adjustment Brush Tool and other workflow enhancements empowering creators to make complex edits and unique designs


Nike is using AI to turn athletes’ dreams into shoes — from axios.com by Ina Fried

Zoom in: Nike used genAI for ideation, including using a variety of prompts to produce images with different textures, materials and color to kick off the design process.

What they’re saying: “It’s a new way for us to work,” Nike lead footwear designer Juliana Sagat told Axios during a media tour of the showcase on Tuesday.


AI meets ‘Do no harm’: Healthcare grapples with tech promises — from finance.yahoo.com by Maya Benjamin

Major companies are moving at high speed to capture the promises of artificial intelligence in healthcare while doctors and experts attempt to integrate the technology safely into patient care.

“Healthcare is probably the most impactful utility of generative AI that there will be,” Kimberly Powell, vice president of healthcare at AI hardware giant Nvidia (NVDA), which has partnered with Roche’s Genentech (RHHBY) to enhance drug discovery in the pharmaceutical industry, among other investments in healthcare companies, declared at the company’s AI Summit in June.


Mistral reignites this week’s LLM rivalry with Large 2 (source) — from superhuman.ai

Today, we are announcing Mistral Large 2, the new generation of our flagship model. Compared to its predecessor, Mistral Large 2 is significantly more capable in code generation, mathematics, and reasoning. It also provides a much stronger multilingual support, and advanced function calling capabilities.


Meta releases the biggest and best open-source AI model yet — from theverge.com by Alex Heath
Llama 3.1 outperforms OpenAI and other rivals on certain benchmarks. Now, Mark Zuckerberg expects Meta’s AI assistant to surpass ChatGPT’s usage in the coming months.

Back in April, Meta teased that it was working on a first for the AI industry: an open-source model with performance that matched the best private models from companies like OpenAI.

Today, that model has arrived. Meta is releasing Llama 3.1, the largest-ever open-source AI model, which the company claims outperforms GPT-4o and Anthropic’s Claude 3.5 Sonnet on several benchmarks. It’s also making the Llama-based Meta AI assistant available in more countries and languages while adding a feature that can generate images based on someone’s specific likeness. CEO Mark Zuckerberg now predicts that Meta AI will be the most widely used assistant by the end of this year, surpassing ChatGPT.


4 ways to boost ChatGPT — from wondertools.substack.com by Jeremy Caplan & The PyCoach
Simple tactics for getting useful responses

To help you make the most of ChatGPT, I’ve invited & edited today’s guest post from the author of a smart AI newsletter called The Artificial Corner. I appreciate how Frank Andrade pushes ChatGPT to produce better results with four simple, clever tactics. He offers practical examples to help us all use AI more effectively.

Frank Andrade: Most of us fail to make the most of ChatGPT.

  1. We omit examples in our prompts.
  2. We fail to assign roles to ChatGPT to guide its behavior.
  3. We let ChatGPT guess instead of providing it with clear guidance.

If you rely on vague prompts, learning how to create high-quality instructions will get you better results. It’s a skill often referred to as prompt engineering. Here are several techniques to get you to the next level.
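As a rough illustration of how those three tactics combine, here is a minimal sketch of assembling a chat-style prompt. The task and examples are hypothetical; real chat APIs (such as OpenAI's) accept a list of role/content messages like these.

```python
# Sketch: encoding the three prompt tactics above as a chat-style prompt.
# The editing task and examples are made up for illustration.

def build_prompt(task: str, examples: list[tuple[str, str]]) -> list[dict]:
    """Assemble a prompt that assigns a role, supplies few-shot
    examples, and gives explicit guidance instead of letting the
    model guess."""
    messages = [
        # Tactic 2: assign a role to guide the model's behavior.
        {"role": "system",
         "content": "You are a concise technical editor. "
                    # Tactic 3: clear guidance, not guesswork.
                    "Rewrite each sentence in plain active voice, "
                    "under 20 words, keeping the original meaning."},
    ]
    # Tactic 1: include examples (few-shot) rather than omitting them.
    for before, after in examples:
        messages.append({"role": "user", "content": before})
        messages.append({"role": "assistant", "content": after})
    messages.append({"role": "user", "content": task})
    return messages

examples = [
    ("The report was written by the team over several weeks.",
     "The team wrote the report over several weeks."),
]
prompt = build_prompt("Mistakes were made by the vendor.", examples)
print(len(prompt))  # system message + 2 example turns + final user turn
```

The resulting message list would then be sent to whichever chat model you use; the structure, not the specific wording, is the point.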

 

The Three Wave Strategy of AI Implementation — from aiczar.blogspot.com by Alexander “Sasha” Sidorkin

The First Wave: Low-Hanging Fruit

These are just examples:

  • Student services
  • Resume and Cover Letter Review (Career Services): offering individual resume critiques
  • Academic Policy Development and Enforcement (Academic Affairs)…
  • Health Education and Outreach (Health and Wellness Services) …
  • Sustainability Education and Outreach (Sustainability and Environmental Initiatives) …
  • Digital Marketing and Social Media Management (University Communications and Marketing) …
  • Grant Proposal Development and Submission (Research and Innovation) …
  • Financial Aid Counseling (Financial Aid and Scholarships) …
  • Alumni Communications (Alumni Relations and Development) …
  • Scholarly Communications (Library Services) …
  • International Student and Scholar Services (International Programs and Global Engagement)

Duolingo Max: A Paid Subscription to Learn a Language Using ChatGPT AI (Worth It?) — from theaigirl.substack.com by Diana Dovgopol (behind paywall for the most part)
The integration of AI in language learning apps could be game-changing.


Research Insights #12: Copyrights and Academia — from aiedusimplified.substack.com by Lance Eaton
Scholarly authors are not going to be happy…

A while back, I wrote about some of my thoughts on generative AI around the copyright issues. Not much has changed since then, but a new article (Academic authors ‘shocked’ after Taylor & Francis sells access to their research to Microsoft AI) is definitely stirring up all sorts of concerns by academic authors. The basics of that article are that Taylor & Francis sold access to authors’ research to Microsoft for AI development without informing the authors, sparking significant concern among academics and the Society of Authors about transparency, consent, and the implications for authors’ rights and future earnings.

The stir can be seen as both valid and redundant. Two folks’ points stick out to me in this regard.

 

AI Tools for Legal Work: Claude, Gemini, Copilot, and More — from americanbar.org

Summary

  • Uncover four AI tools that can aid legal professionals, including Claude, Gemini, Copilot and ChatGPT.
  • Discover the capabilities of these tools and how their functionality can streamline a law firm’s workflow, improving efficiency.
  • Learn how much these AI tools cost to incorporate into your tech stack, and determine if there are any security risks to be aware of.

Also relevant, see:

 

 

How to use Perplexity in your daily workflow — from ai-supremacy.com by Michael Spencer and Alex McFarland
“I barely use Google anymore (for anything)” says today’s guest author.

Make Perplexity your go-to research companion with these strategies:

  • Morning briefings: Start your day by asking Perplexity for the latest news in your field. (I personally like to use Perplexity to curate the top AI news of the day to consider writing about for Unite AI and Techopedia.)
  • Fact-checking: Use it to quickly verify information before including it in your work.
  • Brainstorming: Generate ideas for projects or content by asking open-ended questions.
  • Learning new concepts: When you encounter an unfamiliar term or idea, turn to Perplexity for a quick, comprehensive explanation.
  • Writing assistance: Use it to find relevant statistics, examples, or counterarguments for your content.

From DSC:
So I used it to ask “What is Elon Musk’s Memphis Supercluster?” (which went online yesterday I believe) — which resulted in this reply:

Elon Musk’s Memphis Supercluster is a newly activated AI training cluster that is claimed to be the most powerful in the world. Here are the key details about this supercomputer:

1. Location: The supercluster is located in Memphis, Tennessee[1][2].

2. Hardware: It consists of 100,000 liquid-cooled Nvidia H100 GPUs connected through a single RDMA (Remote Direct Memory Access) fabric[1][3].

3. Purpose: The supercluster is designed for training large language models (LLMs) and other advanced AI technologies for Musk’s xAI company[1][2].

4. Activation: The Memphis Supercluster began training at approximately 4:20 AM local time on July 22, 2024[1][3].

5. Collaboration: The project is a result of collaboration between xAI, X (formerly Twitter), Nvidia, and other supporting companies[1][2].

6. Investment: With each H100 GPU estimated to cost between $30,000 to $40,000, the total investment in GPUs alone is estimated to be between $3 billion to $4 billion[5].

7. Goals: Musk claims that this supercluster will be used to develop “the world’s most powerful AI by every measure” by December 2024[1].

8. Comparison: The Memphis Supercluster’s 100,000 H100 GPUs significantly outclass other supercomputers in terms of GPU horsepower, such as Frontier (37,888 AMD GPUs) and Microsoft Eagle (14,400 Nvidia H100 GPUs)[3].

9. Infrastructure: The project required significant infrastructure development, including fiber optic networking[5].

While Musk’s claims about the supercluster’s capabilities are ambitious, it remains to be seen how it will perform in practice and whether it will meet the stated goals within the given timeframe[1].

Citations:
[1] https://venturebeat.com/ai/elon-musk-announces-most-powerful-ai-training-cluster-in-the-world/
[2] https://www.tuaw.com/2024/07/22/elon-musks-xai-unveils-worlds-most-powerful-ai-training-cluster/
[3] https://www.tomshardware.com/pc-components/gpus/elon-musk-fires-up-the-most-powerful-ai-training-cluster-in-the-world-uses-100000-nvidia-h100-gpus-on-a-single-fabric
[4] https://www.thestack.technology/xai-elon-musk-ai-supercomputer/
[5] https://www.benzinga.com/news/24/07/39881748/elon-musks-xai-flips-the-switch-on-100k-nvidia-h100-gpus-worth-up-to-4b-in-memphis-supercluster-most
[6] https://www.youtube.com/watch?v=0JhtJa05C2Y

—–
Elon’s AI empire expands — from theneurondaily.com by Grant Harvey
Elon Musk’s team at xAI just powered on the “World’s Most Powerful AI Training Cluster.”
If you don’t know what a supercluster is, it’s basically a massive network of Nvidia GPUs (computer chips) working together as a single unit to solve “super” complex calculations at unprecedented speeds.

And this Memphis Supercluster is the most “super” supercluster we’ve ever seen. The new facility, dubbed the “Gigafactory of Compute”, is a beast:

  • 100,000 liquid-cooled Nvidia H100 GPUs on a single RDMA fabric (for context, Google snagged only 50,000 H100 GPUs last year).
  • Up to 150 megawatts of electricity draw, enough to power roughly 100,000 homes.
  • At least one million gallons of water per day to keep cool!
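The homes figure quoted above roughly checks out with simple arithmetic. The household-draw number below is my own assumption, not the article's: an average US household draws on the order of 1.2 kW (about 10,500 kWh per year).

```python
# Sanity check of the "150 megawatts is enough for ~100K homes" claim.
# Assumption (mine, not the article's): an average US household draws
# roughly 1.2 kW on average.

AVG_HOME_KW = 1.2    # assumed average household draw, in kW
cluster_mw = 150     # power figure quoted for the Memphis cluster

homes_powered = cluster_mw * 1_000 / AVG_HOME_KW
print(f"~{homes_powered:,.0f} homes")  # on the order of 100K homes
```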

What to expect: Better models, more frequently. That’s been the trend, at least—look at how the last few model releases have become more squished together. 


OpenAI to make GPT-4o Advanced Voice available by the end of the month to select group of users — from tomsguide.com by Ryan Morrison

GPT-4o Advanced Voice is an entirely new type of voice assistant, similar to but larger than the recently unveiled French model Moshi, which argued with me over a story.

In demos of the model, we’ve seen GPT-4o Advanced Voice create custom character voices, generate sound effects while telling a story and even act as a live translator.

This native speech ability is a significant step in creating more natural AI assistants. In the future, it will also come with live vision abilities, allowing the AI to see what you see.


Could AGI break the world? — from theneurondaily.com by Noah Edelman

“Biggest IT outage in history” proves we’re not ready for AGI.

Here’s the TL;DR: a faulty software update from cybersecurity firm CrowdStrike made this happen:

  • Grounded 5,000+ flights around the world.
  • Slowed healthcare across the UK.
  • Forced retailers to revert to cash-only transactions in Australia (what is this, the stone ages?!).


Here’s where AI comes in: Imagine today’s AI as a new operating system. In 5-10 years, it’ll likely be as integrated into our economy as Microsoft’s cloud servers are now. This isn’t that far-fetched—Microsoft is already planning to embed AI into all its programs.

So what if a CrowdStrike-like incident happens with a more powerful AI system? Some experts predict an AI-powered IT outage could be 10x worse than Friday’s fiasco.


The CrowdStrike outage and global software’s single-point failure problem — from cnbc.com by Kaya Ginsky

KEY POINTS

  • The CrowdStrike software bug that took down global IT infrastructure exposed a single-point-of-failure risk unrelated to malicious cyberattack.
  • National and cybersecurity experts say the risk of this kind of technical outage is increasing alongside the risk of hacks, and the market will need to adopt better competitive practices.
  • Government is also likely to look at new regulations related to software updates and patches.

The “largest IT outage in history,” briefly explained — from vox.com by Li Zhou
Airlines, banks, and hospitals saw computer systems go down because of a CrowdStrike software glitch.

 

The race to deploy GenAI in the legal sector — from sifted.eu by Kai Nicol-Schwarz
LegalFly’s €15m Series A is the latest in a string of raises for European GenAI legaltech startups

Speak to any lawyer and you’ll soon discover that the job is a far cry from the fevered excitement of a courtroom drama. Behind the scenes, there’s an endless stream of laborious tasks, like drafting, reviewing and negotiating contracts and other legal documents, that have to be done manually every day.

It was this realisation that led four product managers at dating app giant Tinder, frustrated by what they saw as a lack of AI adoption at the company, to jump ship and found Belgium-based LegalFly last year. The startup is building a generative AI copilot for lawyers which eventually, it says, will be able to automate entire workflows in the legal profession.

“We were looking at what GenAI was good at, which is synthesising data and generating content,” says founder and CEO Ruben Miessen. “What industry works like that? Law, and it does it all in a very manual way.”

“The legal industry is a global behemoth that’s seen minimal innovation since the advent of Microsoft Word in the 90s,” says Carina Namih, partner at Plural. “GenAI — especially with a human in the loop to keep accuracy high — is ideally suited to drafting, editing and negotiating legal documents.”


Legal Technology Company Relativity Announces OpenAI ChatGPT Integration — from lawfuel.com

CHICAGO– July 18 – Relativity, a global legal technology company, today announced it is integrating with OpenAI’s ChatGPT Enterprise Compliance API. The integration adds ChatGPT Enterprise as a Collect in RelativityOne data source, allowing users to seamlessly collect and process human-to-AI conversational data.

“The future around human and AI interaction is changing rapidly, calling for innovative legal data management solutions to include novel data sources, such as conversations with AI agents,” said Chris Brown, Chief Product Officer at Relativity. “In answering that call, we are committed to equipping our community with the tools they need to traverse the evolving future of human-to-AI conversational data and putting users in control of this new data landscape.”

 

AI-assisted job fraud is spiking — from thedeepview.co by Ian Krietzberg

A recent report published by the Identity Theft Resource Center (ITRC) found that data from 2023 shows “an environment where bad actors are more effective, efficient and successful in launching attacks. The result is fewer victims (or at least fewer victim reports), but the impact on individuals and businesses is arguably more damaging.”

One of these attacks involves fake job postings.

The details: The ITRC said that victim reports of job and employment scams spiked some 118% in 2023. These scams were primarily carried out through LinkedIn and other job search platforms.

    • The bad actors here would either create fake (but professional-looking) job postings, profiles and websites or impersonate legitimate companies, all with the hopes of landing victims to move onto the interview process.
    • These actors would then move the conversation onto a third-party messaging platform, and ask for identity verification information (driver’s licenses, social security numbers, direct deposit information, etc.).

Hypernatural — AI videos you can actually use. — via Jeremy Caplan’s Wonder Tools

Hypernatural is an AI video platform that makes it easy to create beautiful, ready-to-share videos from anything. Stop settling for glitchy three-second generated videos and boring stock footage. Turn your ideas, scripts, podcasts and more into incredible short-form videos in minutes.


GPT-4o mini: advancing cost-efficient intelligence — from openai.com
Introducing our most cost-efficient small model

OpenAI is committed to making intelligence as broadly accessible as possible. Today, we’re announcing GPT-4o mini, our most cost-efficient small model. We expect GPT-4o mini will significantly expand the range of applications built with AI by making intelligence much more affordable. GPT-4o mini scores 82% on MMLU and currently outperforms GPT-4 on chat preferences in the LMSYS leaderboard. It is priced at 15 cents per million input tokens and 60 cents per million output tokens, an order of magnitude more affordable than previous frontier models and more than 60% cheaper than GPT-3.5 Turbo.

GPT-4o mini enables a broad range of tasks with its low cost and latency, such as applications that chain or parallelize multiple model calls (e.g., calling multiple APIs), pass a large volume of context to the model (e.g., full code base or conversation history), or interact with customers through fast, real-time text responses (e.g., customer support chatbots).
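At the per-token rates quoted above, per-call costs are straightforward to estimate. The workload in this sketch (a chatbot turn with 2,000 input and 500 output tokens) is a made-up example, not a figure from OpenAI.

```python
# Back-of-envelope cost check using the prices quoted above:
# $0.15 per million input tokens, $0.60 per million output tokens.
# The example workload below is hypothetical.

INPUT_PER_M = 0.15   # USD per 1M input tokens
OUTPUT_PER_M = 0.60  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one API call at GPT-4o mini's quoted rates."""
    return (input_tokens / 1e6) * INPUT_PER_M \
         + (output_tokens / 1e6) * OUTPUT_PER_M

# e.g. a chatbot turn with 2,000 input tokens and 500 output tokens:
cost = request_cost(2_000, 500)
print(f"${cost:.6f} per call")  # fractions of a tenth of a cent
```

Numbers like these are why the cheap-small-model tier matters for high-volume uses such as support chatbots.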

Also see what this means from Ben’s Bites, The Neuron, and as The Rundown AI asserts:

Why it matters: While it’s not GPT-5, the price and capabilities of this mini-release significantly lower the barrier to entry for AI integrations — and marks a massive leap over GPT-3.5 Turbo. With models getting cheaper, faster, and more intelligent with each release, the perfect storm for AI acceleration is forming.


Nvidia: More AI Waves Are Taking Shape — from seekingalpha.com by Eric Sprague

Summary

  • Nvidia Corporation is transitioning from a GPU designer to an AI factory builder.
  • AI spending will continue to grow in healthcare, government, and robotics.
  • CEO Jensen Huang says the AI robot industry could be bigger than the auto and consumer electronics industries combined.

Byte-Sized Courses: NVIDIA Offers Self-Paced Career Development in AI and Data Science — from blogs.nvidia.com by Andy Bui
Industry experts gather to share advice on starting a career in AI, highlighting technical training and certifications for career growth.

 

What aspects of teaching should remain human? — from hechingerreport.org by Chris Berdik
Even techno-optimists hesitate to say teaching is best left to the bots, but there’s a debate about where to draw the line

ATLANTA — Science teacher Daniel Thompson circulated among his sixth graders at Ron Clark Academy on a recent spring morning, spot checking their work and leading them into discussions about the day’s lessons on weather and water. He had a helper: As Thompson paced around the class, peppering them with questions, he frequently turned to a voice-activated AI to summon apps and educational videos onto large-screen smartboards.

When a student asked, “Are there any animals that don’t need water?” Thompson put the question to the AI. Within seconds, an illustrated blurb about kangaroo rats appeared before the class.

Nitta said there’s something “deeply profound” about human communication that allows flesh-and-blood teachers to quickly spot and address things like confusion and flagging interest in real time.


Deep Learning: Five New Superpowers of Higher Education — from jeppestricker.substack.com by Jeppe Klitgaard Stricker
How Deep Learning is Transforming Higher Education

While the traditional model of education is entrenched, emerging technologies like deep learning promise to shake its foundations and usher in an age of personalized, adaptive, and egalitarian education. It is expected to have a significant impact across higher education in several key ways.

…deep learning introduces adaptivity into the learning process. Unlike a typical lecture, deep learning systems can observe student performance in real-time. Confusion over a concept triggers instant changes to instructional tactics. Misconceptions are identified early and remediated quickly. Students stay in their zone of proximal development, constantly challenged but never overwhelmed. This adaptivity prevents frustration and stagnation.


InstructureCon 24 Conference Notes — from onedtech.philhillaa.com by Glenda Morgan
Another solid conference from the market leader, even with unclear roadmap

The new stuff: AI
Instructure rolled out multiple updates and improvements – more than last year. These included many AI-based or focused tools and services as well as some functional improvements. I’ll describe the AI features first.

Sal Khan was a surprise visitor to the keynote stage to announce the September availability of the full suite of AI-enabled Khanmigo Teacher Tools for Canvas users. The suite includes 20 tools, such as tools to generate lesson plans and quiz questions and write letters of recommendation. Next year, they plan to roll out tools for students themselves to use.

Other AI-based features include:

    • Discussion tool summaries and AI-generated responses…
    • Translation of inbox messages and discussions…
    • Smart search …
    • Intelligent Insights…

 

 

School 3.0: Reimagining Education in 2026, 2029, and 2034 — from davidborish.com by David Borish

The landscape of education is on the brink of a profound transformation, driven by rapid advancements in artificial intelligence. This shift was highlighted recently by Andrej Karpathy’s announcement of Eureka Labs, a venture aimed at creating an “AI-native” school. As we look ahead, it’s clear that the integration of AI in education will reshape how we learn, teach, and think about schooling altogether.

Traditional textbooks will begin to be replaced by interactive, AI-powered learning materials that adapt in real-time to a student’s progress.

As we approach 2029, the line between physical and virtual learning environments will blur significantly.

Curriculum design will become more flexible and personalized, with AI systems suggesting learning pathways based on each student’s interests, strengths, and career aspirations.

The boundaries between formal education and professional development will blur, creating a continuous learning ecosystem.

 


And while we’re having some fun, also see the following items:


 

Introducing Eureka Labs — “We are building a new kind of school that is AI native.” — by Andrej Karpathy, Previously Director of AI @ Tesla, founding team @ OpenAI

However, with recent progress in generative AI, this learning experience feels tractable. The teacher still designs the course materials, but they are supported, leveraged and scaled with an AI Teaching Assistant who is optimized to help guide the students through them. This Teacher + AI symbiosis could run an entire curriculum of courses on a common platform. If we are successful, it will be easy for anyone to learn anything, expanding education in both reach (a large number of people learning something) and extent (any one person learning a large amount of subjects, beyond what may be possible today unassisted).


After Tesla and OpenAI, Andrej Karpathy’s startup aims to apply AI assistants to education — from techcrunch.com by Rebecca Bellan

Andrej Karpathy, former head of AI at Tesla and researcher at OpenAI, is launching Eureka Labs, an “AI native” education platform. In tech speak, that usually means built from the ground up with AI at its core. And while Eureka Labs’ AI ambitions are lofty, the company is starting with a more traditional approach to teaching.

San Francisco-based Eureka Labs, which Karpathy registered as an LLC in Delaware on June 21, aims to leverage recent progress in generative AI to create AI teaching assistants that can guide students through course materials.


What does it mean for students to be AI-ready? — from timeshighereducation.com by David Joyner
Not everyone wants to be a computer scientist, a software engineer or a machine learning developer. We owe it to our students to prepare them with a full range of AI skills for the world they will graduate into, writes David Joyner

We owe it to our students to prepare them for this full range of AI skills, not merely the end points. The best way to fulfil this responsibility is to acknowledge and examine this new category of tools. More and more tools that students use daily – word processors, email, presentation software, development environments and more – have AI-based features. Practising with these tools is a valuable exercise for students, so we should not prohibit that behaviour. But at the same time, we do not have to just shrug our shoulders and accept however much AI assistance students feel like using.


Teachers say AI usage has surged since the school year started — from eschoolnews.com by Laura Ascione
Half of teachers report an increase in the use of AI and continue to seek professional learning

Fifty percent of educators reported an increase in AI usage, by both students and teachers, over the 2023–24 school year, according to The 2024 Educator AI Report: Perceptions, Practices, and Potential, from Imagine Learning, a digital curriculum solutions provider.

The report offers insight into how teachers’ perceptions of AI use in the classroom have evolved since the start of the 2023–24 school year.


OPINION: What teachers call AI cheating, leaders in the workforce might call progress — from hechingerreport.org by C. Edward Waston and Jose Antonio Bowen
Authors of a new guide explore what AI literacy might look like in a new era

Excerpt (emphasis DSC):

But this very ease has teachers wondering how we can keep our students motivated to do the hard work when there are so many new shortcuts. Learning goals, curriculums, courses and the way we grade assignments will all need to be reevaluated.

The new realities of work also must be considered. A shift in employers’ job postings rewards those with AI skills. Many companies report already adopting generative AI tools or anticipate incorporating them into their workflow in the near future.

A core tension has emerged: Many teachers want to keep AI out of our classrooms, but also know that future workplaces may demand AI literacy.

What we call cheating, business could see as efficiency and progress.

It is increasingly likely that using AI will emerge as an essential skill for students, regardless of their career ambitions, and that action is required of educational institutions as a result.


Teaching Writing With AI Without Replacing Thinking: 4 Tips — by Erik Ofgang
AI has a lot of potential for writing students, but we can’t let it replace the thinking parts of writing, says writing professor Steve Graham

Reconciling these two goals — having AI help students learn to write more efficiently without hijacking the cognitive benefits of writing — should be a key goal of educators. Finding the ideal balance will require more work from both researchers and classroom educators, but Graham shares some initial tips for doing this currently.




Why I ban AI use for writing assignments — from timeshighereducation.com by James Stacey Taylor
Students may see handwriting essays in class as a needlessly time-consuming approach to assignments, but I want them to learn how to engage with arguments, develop their own views and convey them effectively, writes James Stacey Taylor

Could they use AI to generate objections to the arguments they read? Of course. AI does a good job of summarising objections to Singer’s view. But I don’t want students to parrot others’ objections. I want them to think of objections themselves. 

Could AI be useful for them in organising their exegesis of others’ views and their criticisms of them? Yes. But, again, part of what I want my students to learn is precisely what this outsources to the AI: how to organise their thoughts and communicate them effectively. 


How AI Will Change Education — from digitalnative.tech by Rex Woodbury
Predicting Innovation in Education, from Personalized Learning to the Downfall of College 

This week explores how AI will bleed into education, looking at three segments of education worth watching, then examining which business models will prevail.

  1. Personalized Learning and Tutoring
  2. Teacher Tools
  3. Alternatives to College
  4. Final Thoughts: Business Models and Why Education Matters

New Guidance from TeachAI and CSTA Emphasizes Computer Science Education More Important than Ever in an Age of AI — from csteachers.org by CSTA
The guidance features new survey data and insights from teachers and experts in computer science (CS) and AI, informing the future of CS education.

SEATTLE, WA – July 16, 2024 – Today, TeachAI, led by Code.org, ETS, the International Society of Technology in Education (ISTE), Khan Academy, and the World Economic Forum, launches a new initiative in partnership with the Computer Science Teachers Association (CSTA) to support and empower educators as they grapple with the growing opportunities and risks of AI in computer science (CS) education.

The briefs draw on early research and insights from CSTA members, organizations in the TeachAI advisory committee, and expert focus groups to address common misconceptions about AI and offer a balanced perspective on critical issues in CS education, including:

  • Why is it Still Important for Students to Learn to Program?
  • How Are Computer Science Educators Teaching With and About AI?
  • How Can Students Become Critical Consumers and Responsible Creators of AI?
 

OpenAI illegally barred staff from airing safety risks, whistleblowers say — from washingtonpost.com by Pranshu Verma, Cat Zakrzewski, and Nitasha Tiku
In a letter exclusively obtained by The Washington Post, whistleblowers asked the SEC to probe company’s allegedly restrictive non-disclosure agreements

OpenAI whistleblowers have filed a complaint with the Securities and Exchange Commission alleging the artificial intelligence company illegally prohibited its employees from warning regulators about the grave risks its technology may pose to humanity, calling for an investigation.

The whistleblowers said OpenAI issued its employees overly restrictive employment, severance and nondisclosure agreements that could have led to penalties against workers who raised concerns about OpenAI to federal regulators, according to a seven-page letter sent to the SEC commissioner earlier this month that referred to the formal complaint. The letter was obtained exclusively by The Washington Post.

 

What to Know About Buying a Projector for School — by Luke Edwards
Buy the right projector for school with these helpful tips and guidance.

Picking the right projector for school can be a tough decision, as the types and prices range widely. From affordable options to professional-grade equipment, there are many choices, and performance varies just as much. This guide aims to help by covering everything you need to know about buying the right projector for your school.

Luke covers a variety of topics including:

  • Types of projectors
  • Screen quality
  • Light type
  • Connectivity
  • Pricing

From DSC:
I posted this because Luke covered a variety of topics — and if you’re set on going with a projector, this is a solid article. But I hesitated to post it, as I’m not sure what place projectors will have in the future of our learning spaces. With voice-enabled apps and appliances becoming more prevalent — along with AI-based human-computer interactions and intelligent systems — will projectors be the way to go? Will enhanced interactive whiteboards be the way to go? Will there be new types of displays? I’m not sure. Time will tell.

 


The race against time to reinvent lawyers — from jordanfurlong.substack.com by Jordan Furlong
Our legal education and licensing systems produce one kind of lawyer. The legal market of the near future will need another kind. If we can’t close this gap fast, we’ll have a very serious problem.

Excerpt (emphasis DSC):

Lawyers will still need competencies like legal reasoning and analysis, statutory and contractual interpretation, and a range of basic legal knowledge. But it’s unhelpful to develop these skills through activities that lawyers won’t be performing much longer, while neglecting to provide them with other skills and prepare them for other situations that they will face. Our legal education and licensing systems are turning out lawyers whose competence profiles simply won’t match up with what people will need lawyers to do.

A good illustration of what I mean can be found in an excellent recent podcast from the Practising Law Institute, “Shaping the Law Firm Associate of the Future.” Over the course of the episode, moderator Jennifer Leonard of Creative Lawyers asked Professors Alice Armitage of UC Law San Francisco and Heidi K. Brown of New York Law School to identify some of the competencies that newly called lawyers and law firm associates are going to need in future. Here’s some of what they came up with:

  • Agile, nimble, extrapolative thinking
  • Collaborative, cross-disciplinary learning
  • Entrepreneurial, end-user-focused mindsets
  • Generative AI knowledge (“Their careers will be shaped by it”)
  • Identifying your optimal individual workflow
  • Iteration, learning by doing, and openness to failure
  • Leadership and interpersonal communication skills
  • Legal business know-how, including client standards and partner expectations
  • Receiving and giving feedback to enhance effectiveness

Legal Tech for Legal Departments – What In-House Lawyers Need to Know — from legal.thomsonreuters.com by Sterling Miller

Whatever the reason, you must understand the problem inside and out. Here are the key points to understanding your use case:

  • Identify the problem.
  • What is the current manual process to solve the problem?
  • Is there technology that will replace this manual process and solve the problem?
  • What will it cost and do you have (or can you get) the budget?
  • Will the benefits of the technology outweigh the cost? And how soon will those benefits pay off the cost? In other words, what is the return on investment?
  • Do you have the support of the organization to buy it (inside the legal department and elsewhere, e.g., CFO, CTO)?
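The cost-benefit questions in the checklist above reduce to a simple payback calculation. Here is a minimal sketch of that arithmetic; all figures (license cost, hours saved, labor rate) are hypothetical illustrations, not numbers from the article:

```python
def payback_months(annual_license_cost: float,
                   hours_saved_per_month: float,
                   hourly_cost_of_labor: float) -> float:
    """Months until cumulative labor savings cover one year's license cost."""
    monthly_savings = hours_saved_per_month * hourly_cost_of_labor
    if monthly_savings <= 0:
        raise ValueError("the tool must save time to pay for itself")
    return annual_license_cost / monthly_savings

# Hypothetical example: a $24,000/year contract-review tool that saves
# a paralegal 40 hours per month at a fully loaded cost of $100/hour.
months = payback_months(24_000, 40, 100)
print(f"Payback period: {months:.1f} months")  # Payback period: 6.0 months
```

If the payback period is longer than the license term, the benefits do not outweigh the cost, which is exactly the question the checklist asks you to answer before seeking CFO/CTO support.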

2024-05-13: Of Legal AI — from emergentbehavior.co

Long discussion with a senior partner at a major Bay Area law firm:

Takeaways

A) They expect legal AI to decimate the profession…
B) Unimpressed by most specific legal AI offerings…
C) Generative AI error rates are acceptable even at 10–20%…
D) The future of corporate law is in-house…
E) The future of law in general?…
F) Of one large legal AI player…


2024 Legal Technology Survey Results — from lexology.com

Additional findings of the annual survey include:

  • 77 percent of firms have a formal technology strategy in place
  • Interest and intentions regarding generative A.I. remain high, with almost 80 percent of participating firms expecting to leverage it within the next five years. Many have either already begun or are planning to undertake data hygiene projects as a precursor to using generative A.I. and other automation solutions. Although legal market analysts have hypothesized that proprietary building of generative A.I. solutions remains out of reach for mid-sized firms, several Meritas survey respondents are gaining traction. Many other firms are also licensing third-party generative A.I. solutions.
  • The survey showed strong technology progression among several Meritas member firms, with most adopting a tech stack of core, foundational systems of infrastructure technology and adding cloud-based practice management, document management, time, billing, and document drafting applications.
  • Most firms reported increased adoption and utilization of options already available within their current core systems, such as Microsoft Office 365 Teams, SharePoint, document automation, and other native functionalities for increasing efficiencies; these functions were used more often in place of dedicated purpose-built solutions such as comparison and proofreading tools.
  • The legal technology market serving Meritas’ member firms continues to be fractured, with very few providers emerging as market leaders.

AI Set to Save Professionals 12 Hours Per Week by 2029 — from legalitprofessionals.com

Thomson Reuters, a global content and technology company, today released its 2024 Future of Professionals report, an annual survey of more than 2,200 professionals working across legal, tax, and risk & compliance fields globally. Respondents predicted that artificial intelligence (AI) has the potential to save them 12 hours per week in the next five years, or four hours per week over the upcoming year – equating to 200 hours annually.

This timesaving potential is the equivalent productivity boost of adding an extra colleague for every 10 team members on staff. Harnessing the power of AI across various professions opens immense economic opportunities. For a U.S. lawyer, this could translate to an estimated $100,000 in additional billable hours.*
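The report’s figures are internally consistent under standard assumptions. Roughly 50 working weeks per year turns 4 hours/week into the quoted 200 hours annually, and the $100,000 billable-hours estimate implies a rate of about $500/hour (the rate is my inference from the excerpt’s numbers, not a figure the report states):

```python
WORK_WEEKS_PER_YEAR = 50  # common industry assumption

# 4 hours saved per week over the upcoming year
hours_saved_next_year = 4 * WORK_WEEKS_PER_YEAR
print(hours_saved_next_year)  # 200, matching the report's annual figure

# The $100,000 billable-hours estimate implies this hourly rate
implied_rate = 100_000 / hours_saved_next_year
print(implied_rate)  # 500.0
```

The "extra colleague for every 10 team members" claim follows the same logic: 4 hours is roughly a tenth of a 40-hour work week.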

 


Higher Education Has Not Been Forgotten by Generative AI — from insidehighered.com by Ray Schroeder
The generative AI (GenAI) revolution has not ignored higher education; a whole host of tools are available now and more revolutionary tools are on the way.

Some of the apps that have been developed for general use can be customized for specific topical areas in higher ed. For example, I created a version of GPT, “Ray’s EduAI Advisor,” that builds on the current GPT-4o version with specific updates and perspectives on AI in higher education. It is freely available to users. With a few tools and no programming knowledge, anyone can build their own GPT to supplement information for their classes or interest groups.

Excerpts from Ray’s EduAI Advisor bot:

AI’s global impact on higher education, particularly in at-scale classes and degree programs, is multifaceted, encompassing several key areas:
1. Personalized Learning…
2. Intelligent Tutoring Systems…
3. Automated Assessment…
4. Enhanced Accessibility…
5. Predictive Analytics…
6. Scalable Virtual Classrooms…
7. Administrative Efficiency…
8. Continuous Improvement…

Instructure and Khan Academy Announce Partnership to Enhance Teaching and Learning With Khanmigo, the AI Tool for Education — from instructure.com
Shiren Vijiasingam and Jody Sailor make an exciting announcement about a new partnership sure to make a difference in education everywhere.

 
© 2024 | Daniel Christian