EdTech Is Going Crazy For AI — from joshbersin.com by Josh Bersin

Excerpts:

This week I spent a few days at the ASU/GSV conference and ran into 7,000 educators, entrepreneurs, and corporate training people who had gone CRAZY for AI.

No, I’m not kidding. This community, which is made up of people like training managers, community college leaders, educators, and policymakers, is absolutely freaked out about ChatGPT, Large Language Models, and all sorts of issues with AI. Now don’t get me wrong: I’m a huge fan of this. But the frenzy is unprecedented: this is bigger than the excitement at the launch of the iPhone.

Second, the L&D market is about to get disrupted like never before. I had two interactive sessions with about 200 L&D leaders and I essentially heard the same thing over and over. What is going to happen to our jobs when these Generative AI tools start automatically building content, assessments, teaching guides, rubrics, videos, and simulations in seconds?

The answer is pretty clear: you’re going to get disrupted. I’m not saying that L&D teams need to worry about their careers, but it’s very clear to me that they’re going to have to swim upstream in a big hurry. As with all new technologies, it’s time for learning leaders to get to know these tools, understand how they work, and start experimenting with them as fast as they can.


Speaking of the ASU+GSV Summit, see this posting from Michael Moe:

EIEIO…Brave New World
By: Michael Moe, CFA, Brent Peus, Owen Ritz

Excerpt:

Last week, the 14th annual ASU+GSV Summit hosted over 7,000 leaders from 70+ countries, as well as over 900 of the world’s most innovative EdTech companies. Below are some of our favorite speeches from this year’s Summit…

***

Also see:

Imagining what’s possible in lifelong learning: Six insights from Stanford scholars at ASU+GSV — from acceleratelearning.stanford.edu by Isabel Sacks

Excerpt:

High-quality tutoring is one of the most effective educational interventions we have – but we need both humans and technology for it to work. In a standing-room-only session, GSE Professor Susanna Loeb, a faculty lead at the Stanford Accelerator for Learning, spoke alongside school district superintendents on the value of high-impact tutoring. The most important factors in effective tutoring, she said, are (1) the tutor has data on specific areas where the student needs support, (2) the tutor has high-quality materials and training, and (3) there is a positive, trusting relationship between the tutor and student. New technologies, including AI, can make the first and second elements much easier – but they will never be able to replace human adults in the relational piece, which is crucial to student engagement and motivation.



A guide to prompting AI (for what it is worth) — from oneusefulthing.org by Ethan Mollick
A little bit of magic, but mostly just practice

Excerpt (emphasis DSC):

Being “good at prompting” is a temporary state of affairs. The current AI systems are already very good at figuring out your intent, and they are getting better. Prompting is not going to be that important for that much longer. In fact, it already isn’t in GPT-4 and Bing. If you want to do something with AI, just ask it to help you do the thing. “I want to write a novel, what do you need to know to help me?” will get you surprisingly far.

The best way to use AI systems is not to craft the perfect prompt, but rather to use them interactively. Try asking for something. Then ask the AI to modify or adjust its output. Work with the AI, rather than trying to issue a single command that does everything you want. The more you experiment, the better off you are. Just use the AI a lot, and it will make a big difference – a lesson my class learned as they worked with the AI to create essays.

From DSC:
Agreed –> “Being ‘good at prompting’ is a temporary state of affairs.” The user interfaces that are appearing (and will keep appearing) should help greatly in this regard.
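
For what it’s worth, here is a minimal sketch of the iterative, conversational approach Mollick describes, just to make the “ask, then ask it to adjust” pattern concrete. It assumes the OpenAI Python client; the model name and the prompts are placeholders for illustration, not anything taken from the article:

    # A rough sketch of iterative prompting (assumes the OpenAI Python client;
    # the model name and the prompts below are placeholders).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    messages = []

    turns = [
        "I want to write a novel. What do you need to know to help me?",
        "Great. Make the outline darker in tone.",
        "Now draft the opening paragraph of chapter one.",
    ]

    for turn in turns:
        messages.append({"role": "user", "content": turn})
        reply = client.chat.completions.create(model="gpt-4", messages=messages)
        answer = reply.choices[0].message.content
        print(answer)
        # keep the AI's answer in the conversation so the next request refines it
        messages.append({"role": "assistant", "content": answer})

The specific API matters less than the pattern: keep the whole conversation in context and refine the output over several turns rather than hunting for one perfect prompt.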


From DSC:
Bizarre…at least for me in late April of 2023:


Excerpt from Lore Issue #28: Drake, Grimes, and The Future of AI Music — from lore.com

Here’s a summary of what you need to know:

  • The rise of AI-generated music has ignited legal and ethical debates, with record labels invoking copyright law to remove AI-generated songs from platforms like YouTube.
  • Tech companies like Google face a conundrum: should they take down AI-generated content, and if so, on what grounds?
  • Some artists, like Grimes, are embracing the change, proposing new revenue-sharing models and utilizing blockchain-based smart contracts for royalties.
  • The future of AI-generated music presents both challenges and opportunities, with the potential to create new platforms and genres, democratize the industry, and redefine artist compensation.

The Need for AI PD — from techlearning.com by Erik Ofgang
Educators need training on how to effectively incorporate artificial intelligence into their teaching practice, says Lance Key, an award-winning educator.

“School never was fun for me,” he says, hoping that as an educator he could change that for his students. “I wanted to make learning fun.” This “learning should be fun” philosophy is at the heart of the approach he advises educators to take when it comes to AI.


Coursera Adds ChatGPT-Powered Learning Tools — from campustechnology.com by Kate Lucariello

Excerpt:

At its 11th annual conference in 2023, educational company Coursera announced it is adding ChatGPT-powered interactive ed tech tools to its learning platform, including a generative AI coach for students and an AI course-building tool for teachers. It will also add machine learning-powered translation, expanded VR immersive learning experiences, and more.

Coursera Coach will give learners a ChatGPT virtual coach to answer questions, give feedback, summarize video lectures and other materials, give career advice, and prepare them for job interviews. This feature will be available in the coming months.

From DSC:
Yes…it will be very interesting to see how tools and platforms interact from this time forth. The term “integration” will take a massive step forward, at least in my mind.


 

From DSC:
Regarding the core curricula of colleges and universities…

For decades now, faculty members have taught what they wanted to teach and what interested them. They taught what they wanted to research vs. what the wider marketplace/workplace needed. They were not responsive to the needs of the workplace — nor to the needs of their students!

And this situation has been all the more compounded by the increasing costs of obtaining a degree plus the exponential pace of change. We weren’t doing a good job before this exponential pace of change started taking place — and now it’s (almost?) impossible to keep up.

The bottom line on the article below: ***It’s sales.***

Therefore, it’s about what you are selling — and at what price. The story hasn’t changed much. The narrative (i.e., the curricula and more) is pretty much the same thing that’s been sold for years.

But the days of faculty members teaching whatever they wanted to are over, or significantly waning.

Faculty members, faculty senates, provosts, presidents, and accreditors are reaping what they’ve sown.

The questions are now:

  • Will new seeds be sown?
  • Will new crops arise in the future?
  • Will there be new narratives?
  • Will institutions be able to reinvent themselves (one potential example here)? Or will their cultures not allow such significant change to take place? Will alternatives to institutions of traditional higher education continue to pick up steam?

A Profession on the Edge — from chronicle.com by Eric Hoover
Why enrollment leaders are wearing down, burning out, and leaving jobs they once loved.

Excerpts:

Similar stories are echoing throughout the hallways of higher education. Vice presidents for enrollment, as well as admissions deans and directors, are wearing down, burning out, and leaving jobs they once loved. Though there’s no way to compile a chart quantifying the churn, industry insiders describe it as significant. “We’re at an inflection point,” says Rick Clark, executive director of undergraduate admission at Georgia Tech. “There have always been people leaving the field, but not in the numbers we’re seeing now.”

Some are being shoved out the door by presidents and boards. Some are resigning out of exhaustion, frustration, and disillusionment. And some who once sought top-level positions are rethinking their ambitions. “The pressures have ratcheted up tenfold,” says Angel B. Pérez, chief executive of the National Association for College Admission Counseling, known as NACAC. “I talk with someone each week who’s either leaving the field or considering leaving.”


From DSC:
This quote points to what I’m trying to address here:

Dahlstrom and other veterans of the field say they’ve experienced something especially disquieting: an erosion of faith in the transformational power of higher education. Though she sought a career in admissions to help students, her disillusionment grew after taking on a leadership role. She became less confident that she was equipped to effect positive changes, at her institution or beyond, especially when it came to the challenge of expanding college access in a nation of socioeconomic disparities: “I felt like a cog in a huge machine that’s not working, yet continues to grind while only small, temporary fixes are made.”

 

From DSC:
Before we get to Scott Belsky’s article, here’s an interesting/related item from Tobi Lutke:


Our World Shaken, Not Stirred: Synthetic entertainment, hybrid social experiences, syncing ourselves with apps, and more. — from implications.com by Scott Belsky
Things will get weird. And exciting.

Excerpts:

Recent advances in technology will shake the pot of culture and our day-to-day experiences. Examples? A new era of synthetic entertainment will emerge, online social dynamics will become “hybrid experiences” where AI personas are equal players, and we will sync ourselves with applications as opposed to using applications.

A new era of synthetic entertainment will emerge as the world’s video archives – as well as actors’ bodies and voices – will be used to train models. Expect sequels made without actor participation, a new era of AI-outfitted creative economy participants, a deluge of imaginative media that would have been cost-prohibitive, and copyright wars and legislation.

Unauthorized sequels, spin-offs, some amazing stuff, and a legal dumpster fire: Now let’s shift beyond Hollywood to the fast-growing long tail of prosumer-made entertainment. This is where entirely new genres of entertainment will emerge, including the unauthorized sequels and spin-offs that I expect we will start seeing.


Also relevant/see:

Digital storytelling with generative AI: notes on the appearance of #AICinema — from bryanalexander.org by Bryan Alexander

Excerpt:

This is how I viewed a fascinating article about the so-called #AICinema movement.  Benj Edwards describes this nascent current and interviews one of its practitioners, Julie Wieland.  It’s a great example of people creating small stories using tech – in this case, generative AI, specifically the image creator Midjourney.

Bryan links to:

Artists astound with AI-generated film stills from a parallel universe — from arstechnica.com by Benj Edwards
A Q&A with “synthographer” Julie Wieland on the #aicinema movement.

An AI-generated image from an #aicinema still series called Vinyl Vengeance by Julie Wieland, created using Midjourney.


From DSC:
How will text-to-video impact the Learning and Development world? Teaching and learning? Those people communicating within communities of practice? Those creating presentations and/or offering webinars?

Hmmm…should be interesting!


 

ANALYSIS: ‘Microcredentials’ poised to disrupt higher ed as degrees lose relevance to employers — from campusreform.org by Shelby Kearns; with thanks to Ray Schroeder on LinkedIn for this resource

Key points:

  • Survey respondents are demonstrating confidence in microcredentials (online training programs that take no more than six months to complete), as four-year degree programs often overlook job training.
  • ‘Grade inflation and efforts to help everyone … attend college make it harder for employers to differentiate among applicants.’
 


Also from Julie Sobowale, see:

  • Law’s AI revolution is here — from nationalmagazine.ca
    At least this much we know. Firms need to develop a strategy around language models.

Also re: legaltech, see:

  • Pioneers and Pathfinders: Richard Susskind — from seyfarth.com by J. Stephen Poor
    In our conversation, Richard discusses the ways we should all be thinking about legal innovation, the challenges of training lawyers for the future, and the qualifications of those likely to develop breakthrough technologies in law, as well as his own journey and how he became interested in AI as an undergraduate student.

Also re: legaltech, see:

There is an elephant in the room that is rarely discussed. Who owns the IP of AI-generated content?

 

Law has a magic wand now — from jordanfurlong.substack.com by Jordan Furlong
Some people think Large Language Models will transform the practice of law. I think it’s bigger than that.

Excerpts:

ChatGPT4 can also do things that only lawyers (used to be able to) do. It can look up and summarize a court decision, analyze and apply sections of copyright law, and generate a statement of claim for breach of contract.

What happens when you introduce a magic wand into the legal market? On the buyer side, you reduce by a staggering degree the volume of tasks that you need to pay lawyers (whether in-house or outside counsel) to perform. It won’t happen overnight: Developing, testing, revising, approving, and installing these sorts of systems in corporations will take time. But once that’s done, the beauty of LLMs like ChatGPT4 is that they are not expert systems. Anyone can use them. Anyone will.

But I can’t shake the feeling that someday, we’ll divide the history of legal services into “Before GPT4” and “After GPT4.” I think it’s that big.


From DSC:
Jordan mentions: “Some people think Large Language Models will transform the practice of law. I think it’s bigger than that.”

I agree with Jordan. It most assuredly IS bigger than that. AI will profoundly impact many industries/disciplines. The legal sector is but one of them. Education is another. People’s expectations are now changing — and the “ramification wheels” are now in motion.

I take the position that many others have taken as well (at least as of this point in time): that AI will supplement humans’ capabilities and activities. But those who know how to use AI-driven apps will outcompete those who don’t.

 

ChatGPT is Everywhere — from chronicle.com by Beth McMurtrie
Love it or hate it, academics can’t ignore the already pervasive technology.

Excerpt:

Many academics see these tools as a danger to authentic learning, fearing that students will take shortcuts to avoid the difficulty of coming up with original ideas, organizing their thoughts, or demonstrating their knowledge. Ask ChatGPT to write a few paragraphs, for example, on how Jean Piaget’s theories on childhood development apply to our age of anxiety and it can do that.

Other professors are enthusiastic, or at least intrigued, by the possibility of incorporating generative AI into academic life. Those same tools can help students — and professors — brainstorm, kick-start an essay, explain a confusing idea, and smooth out awkward first drafts. Equally important, these faculty members argue, is their responsibility to prepare students for a world in which these technologies will be incorporated into everyday life, helping to produce everything from a professional email to a legal contract.

“Artificial-intelligence tools present the greatest creative disruption to learning that we’ve seen in my lifetime.”

Sarah Eaton, associate professor of education at the University of Calgary



Artificial intelligence and academic integrity, post-plagiarism — from universityworldnews.com by Sarah Elaine Eaton; with thanks to Robert Gibson out on LinkedIn for the resource

Excerpt:

The use of artificial intelligence tools does not automatically constitute academic dishonesty. It depends how the tools are used. For example, apps such as ChatGPT can be used to help reluctant writers generate a rough draft that they can then revise and update.

Used in this way, the technology can help students learn. The text can also be used to help students learn the skills of fact-checking and critical thinking, since the outputs from ChatGPT often contain factual errors.

When students use tools or other people to complete homework on their behalf, that is considered a form of academic dishonesty because the students are no longer learning the material themselves. The key point is that it is the students, not the technology, who are to blame when students choose to have someone – or something – do their homework for them.

There is a difference between using technology to help students learn or to help them cheat. The same technology can be used for both purposes.

From DSC:
These couple of sentences…

In the age of post-plagiarism, humans use artificial intelligence apps to enhance and elevate creative outputs as a normal part of everyday life. We will soon be unable to detect where the human written text ends and where the robot writing begins, as the outputs of both become intertwined and indistinguishable.

…reminded me of what’s been happening within the filmmaking world for years (i.e., such as in Star Wars, Jurassic Park, and many others). It’s often hard to tell what’s real and what’s been generated by a computer.
 

A quick and sobering guide to cloning yourself — from oneusefulthing.substack.com by Professor Ethan Mollick
It took me a few minutes to create a fake me giving a fake lecture.

Excerpt:

I think a lot of people do not realize how rapidly the multiple strands of generative AI (audio, text, images, and video) are advancing, and what that means for the future.

With just a photograph and 60 seconds of audio, you can now create a deepfake of yourself in just a matter of minutes by combining a few cheap AI tools. I’ve tried it myself, and the results are mind-blowing, even if they’re not completely convincing. Just a few months ago, this was impossible. Now, it’s a reality.

To start, you should probably watch the short video of Virtual Me and Real Me giving the same talk about entrepreneurship. Nothing about the Virtual Me part of the video is real, even the script was completely AI-generated.



From DSC:
Also, I wanted to post the resource below just because I think it’s an excellent question!

If ChatGPT Can Disrupt Google In 2023, What About Your Company? — from forbes.com by Glenn Gow

Excerpts:

Board members and corporate execs don’t need AI to decode the lessons to be learned from this. The lessons should be loud and clear: If even the mighty Google can be potentially overthrown by AI disruption, you should be concerned about what this may mean for your company.

Professions that will be disrupted by generative AI include marketing, copywriting, illustration and design, sales, customer support, software coding, video editing, film-making, 3D modeling, architecture, engineering, gaming, music production, legal contracts, and even scientific research. Software applications will soon emerge that will make it easy and intuitive for anyone to use generative AI for those fields and more.


 

ChatGPT sets record for fastest-growing user base – analyst note — from reuters.com by Krystal Hu

Excerpt (emphasis DSC):

Feb 1 (Reuters) – ChatGPT, the popular chatbot from OpenAI, is estimated to have reached 100 million monthly active users in January, just two months after launch, making it the fastest-growing consumer application in history, according to a UBS study on Wednesday.

The report, citing data from analytics firm Similarweb, said an average of about 13 million unique visitors had used ChatGPT per day in January, more than double the levels of December.

“In 20 years following the internet space, we cannot recall a faster ramp in a consumer internet app,” UBS analysts wrote in the note.


From DSC:
This reminds me of the current exponential pace of change that we are experiencing…

…and how we struggle with that kind of pace.

 

ChatGPT Creator Is Talking to Investors About Selling Shares at $29 Billion Valuation — from wsj.com by Berber Jin and Miles Kruppa
Tender offer at that valuation would make OpenAI one of the most valuable U.S. startups

Here’s how Microsoft could use ChatGPT — from The Algorithm by Melissa Heikkilä

Excerpt (emphasis DSC):

Microsoft is reportedly eyeing a $10 billion investment in OpenAI, the startup that created the viral chatbot ChatGPT, and is planning to integrate it into Office products and Bing search. The tech giant has already invested at least $1 billion into OpenAI. Some of these features might be rolling out as early as March, according to The Information.

This is a big deal. If successful, it will bring powerful AI tools to the masses. So what would ChatGPT-powered Microsoft products look like? We asked Microsoft and OpenAI. Neither was willing to answer our questions on how they plan to integrate AI-powered products into Microsoft’s tools, even though work must be well underway to do so. However, we do know enough to make some informed, intelligent guesses. Hint: it’s probably good news if, like me, you find creating PowerPoint presentations and answering emails boring.

And speaking of Microsoft and AI, also see:

I have maintained for several years, including a book ‘AI for Learning’, that AI is the technology of the age and will change everything. This is unfolding as we speak but it is interesting to ask who the winners are likely to be.

Donald Clark

The Expanding Dark Forest and Generative AI — from maggieappleton.com by Maggie Appleton
Proving you’re a human on a web flooded with generative AI content

Assumed audience:

People who have heard of GPT-3 / ChatGPT, and are vaguely following the advances in machine learning, large language models, and image generators. Also people who care about making the web a flourishing social and intellectual space.

That dark forest is about to expand. Large Language Models (LLMs) that can instantly generate coherent swaths of human-like text have just joined the party.

 

DeepMind CEO Demis Hassabis Urges Caution on AI — from time.com by Billy Perrigo

It is in this uncertain climate that Hassabis agrees to a rare interview, to issue a stark warning about his growing concerns. “I would advocate not moving fast and breaking things.”

“When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful,” he says. “Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.” Worse still, Hassabis points out, we are the guinea pigs.

Demis Hassabis 

Excerpt (emphasis DSC):

Hassabis says these efforts are just the beginning. He and his colleagues have been working toward a much grander ambition: creating artificial general intelligence, or AGI, by building machines that can think, learn, and be set to solve humanity’s toughest problems. Today’s AI is narrow, brittle, and often not very intelligent at all. But AGI, Hassabis believes, will be an “epoch-defining” technology—like the harnessing of electricity—that will change the very fabric of human life. If he’s right, it could earn him a place in history that would relegate the namesakes of his meeting rooms to mere footnotes.

But with AI’s promise also comes peril. In recent months, researchers building an AI system to design new drugs revealed that their tool could be easily repurposed to make deadly new chemicals. A separate AI model trained to spew out toxic hate speech went viral, exemplifying the risk to vulnerable communities online. And inside AI labs around the world, policy experts were grappling with near-term questions like what to do when an AI has the potential to be commandeered by rogue states to mount widespread hacking campaigns or infer state-level nuclear secrets.

AI-assisted plagiarism? ChatGPT bot says it has an answer for that — from theguardian.com by Alex Hern
Silicon Valley firm insists its new text generator, which writes human-sounding essays, can overcome fears over cheating

Excerpt:

Headteachers and university lecturers have expressed concerns that ChatGPT, which can provide convincing human-sounding answers to exam questions, could spark a wave of cheating in homework and exam coursework.

Now, the bot’s makers, San Francisco-based OpenAI, are trying to counter the risk by “watermarking” the bot’s output and making plagiarism easier to spot.

Schools Shouldn’t Ban Access to ChatGPT — from time.com by Joanne Lipman and Rebecca Distler

Excerpt (emphasis DSC):

Students need now, more than ever, to understand how to navigate a world in which artificial intelligence is increasingly woven into everyday life. It’s a world that they, ultimately, will shape.

We hail from two professional fields that have an outsize interest in this debate. Joanne is a veteran journalist and editor deeply concerned about the potential for plagiarism and misinformation. Rebecca is a public health expert focused on artificial intelligence, who champions equitable adoption of new technologies.

We are also mother and daughter. Our dinner-table conversations have become a microcosm of the argument around ChatGPT, weighing its very real dangers against its equally real promise. Yet we both firmly believe that a blanket ban is a missed opportunity.

ChatGPT: Threat or Menace? — from insidehighered.com by Steven Mintz
Are fears about generative AI warranted?

And see Joshua Kim’s A Friendly Attempt to Balance Steve Mintz’s Piece on Higher Ed Hard Truths out at insidehighered.com | Comparing the health care and higher ed systems.

 



What Leaders Should Know About Emerging Technologies — from forbes.com by Benjamin Laker

Excerpt (emphasis DSC):

The rapid pace of change is driven by a “perfect storm” of factors, including the falling cost of computing power, the rise of data-driven decision-making, and the increasing availability of new technologies. “The speed of current breakthroughs has no historical precedent,” concluded Andrew Doxsey, co-founder of Libra Incentix, in an interview. “Unlike previous technological revolutions, the Fourth Industrial Revolution is evolving exponentially rather than linearly. Furthermore, it disrupts almost every industry worldwide.”

I asked ChatGPT to write my cover letters. 2 hiring managers said they would have given me an interview but the letters lacked personality. — from businessinsider.com by Beatrice Nolan

Key points:

  • An updated version of the AI chatbot ChatGPT was recently released to the public.
  • I got the chatbot to write cover letters for real jobs and asked hiring managers what they thought.
  • The managers said they would’ve given me a call but that the letters lacked personality.




 
 

A bot that watched 70,000 hours of Minecraft could unlock AI’s next big thing — from technologyreview.com by Will Douglas Heaven
Online videos are a vast and untapped source of training data—and OpenAI says it has a new way to use it.

Excerpt:

OpenAI has built the best Minecraft-playing bot yet by making it watch 70,000 hours of video of people playing the popular computer game. It showcases a powerful new technique that could be used to train machines to carry out a wide range of tasks by binging on sites like YouTube, a vast and untapped source of training data.

The Minecraft AI learned to perform complicated sequences of keyboard and mouse clicks to complete tasks in the game, such as chopping down trees and crafting tools. It’s the first bot that can craft so-called diamond tools, a task that typically takes good human players 20 minutes of high-speed clicking—or around 24,000 actions.

The result is a breakthrough for a technique known as imitation learning, in which neural networks are trained to perform tasks by watching humans do them.

The team’s approach, called Video Pre-Training (VPT), gets around the bottleneck in imitation learning by training another neural network to label videos automatically.
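
From DSC:
To make the mechanics a bit more concrete, here is a minimal, schematic sketch of that two-stage idea: train a labeling network on a small hand-labeled set, use it to pseudo-label the large unlabeled video corpus, then behavior-clone a policy on those pseudo-labels. The shapes, model sizes, and random stand-in data below are purely illustrative and are not OpenAI’s actual VPT implementation:

    # Schematic sketch of a VPT-style pipeline (illustrative stand-ins only):
    #   1) train a labeler on a small, hand-labeled set of frames
    #   2) auto-label a large pile of unlabeled video frames
    #   3) behavior-clone a policy on the pseudo-labeled data
    import torch
    import torch.nn as nn

    N_ACTIONS = 10
    frames_small = torch.randn(64, 128)              # small labeled set
    actions_small = torch.randint(0, N_ACTIONS, (64,))
    frames_web = torch.randn(1024, 128)              # stand-in for unlabeled web video

    labeler = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
    policy = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
    loss_fn = nn.CrossEntropyLoss()

    # 1) supervised training of the labeler on the small labeled set
    opt = torch.optim.Adam(labeler.parameters(), lr=1e-3)
    for _ in range(200):
        opt.zero_grad()
        loss_fn(labeler(frames_small), actions_small).backward()
        opt.step()

    # 2) pseudo-label the large unlabeled corpus
    with torch.no_grad():
        pseudo_actions = labeler(frames_web).argmax(dim=1)

    # 3) behavior cloning: train the policy to imitate the pseudo-labels
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    for _ in range(200):
        opt.zero_grad()
        loss_fn(policy(frames_web), pseudo_actions).backward()
        opt.step()

In the actual VPT work, the labeling network is an inverse dynamics model that can look at frames both before and after a given moment, which makes its labeling task much easier than the policy’s task of acting from past frames alone.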

Speak lands investment from OpenAI to expand its language learning platform — from techcrunch.com by Kyle Wiggers

Excerpts:

“Most language learning software can help with the beginning part of learning basic vocabulary and grammar, but gaining any degree of fluency requires speaking out loud in an interactive environment,” Zwick told TechCrunch in an email interview. “To date, the only way people can get that sort of practice is through human tutors, which can also be expensive, difficult and intimidating.”

Speak’s solution is a collection of interactive speaking experiences that allow learners to practice conversing in English. Through the platform, users can hold open-ended conversations with an “AI tutor” on a range of topics while receiving feedback on their pronunciation, grammar and vocabulary.

It’s one of the top education apps in Korea on the iOS App Store, with over 15 million lessons started annually, 100,000 active subscribers and “double-digit million” annual recurring revenue.

 

 

Is AI Generated Art Really Coming for Your Job? — from edugeekjournal.com by Matt Crosslin

Excerpt:

So, is this a cool development that will become a fun tool for many of us to play around with in the future? Sure. Will people use this in their work? Possibly. Will it disrupt artists across the board? Unlikely. There might be a few places where really generic artwork is the norm and the people that were paid very little to crank them out will be paid very little to input prompts. Look, PhotoShop and asset libraries made creating company logos very, very easy a long time ago. But people still don’t want to take the 30 minutes it takes to put one together, because thinking through all the options is not their thing. You still have to think through those options to enter an AI prompt. And people just want to leave that part to the artists. The same thing was true about the printing press. Hundreds of years of innovation has taught us that the hard part of the creation of art is the human coming up with the ideas, not the tools that create the art.

A quick comment from DSC:
Possibly, at least in some cases. But I’ve seen enough home-grown, poorly-designed graphics and logos to make me wonder if that will be the case.

 

How to Teach With Deep Fake Technology — from techlearning.com by Erik Ofgang
Despite the scary headlines, deep fake technology can be a powerful teaching tool

Excerpt:

The very concept of teaching with deep fake technology may be unsettling to some. After all, deep fake technology, which utilizes AI and machine learning and can alter videos and animate photographs in a manner that appears realistic, has frequently been covered in a negative light. The technology can be used to violate privacy and create fake videos of real people.

However, while these potential abuses of the technology are real and concerning, that doesn’t mean we should turn a blind eye to the technology’s potential when it is used responsibly, says Jaime Donally, a well-known immersive learning expert.

From DSC:
I’m still not sure about this one…but I’ll try to be open to the possibilities here.

 

Educators Are Taking Action in AI Education to Make Future-Ready Communities — from edsurge.com by Annie Ning

Excerpt:

AI Explorations and Their Practical Use in School Environments is an ISTE initiative funded by General Motors. The program provides professional learning opportunities for educators, with the goal of preparing all students for careers with AI.

Recently, we spoke with three more participants of the AI Explorations program to learn about its ongoing impact in K-12 classrooms. Here, they share how the program is helping their districts implement AI curriculum with an eye toward equity in the classroom.

 

Stealth Legal AI Startup Harvey Raises $5M in Round Led By OpenAI — from lawnext.com by Bob Ambrogi

Excerpt (emphasis DSC):

A hitherto stealth legal AI startup emerged from the shadows today with news via TechCrunch that it has raised $5 million in funding led by the startup fund of OpenAI, the company that developed advanced neural network AI systems such as GPT-3 and DALL-E 2.

The startup, called Harvey, will build on the GPT-3 technology to enable lawyers to create legal documents or perform legal research by providing simple instructions using natural language.

The company was founded by Winston Weinberg, formerly an associate at law firm O’Melveny & Myers, and Gabriel Pereyra, formerly a research scientist at DeepMind and most recently a machine learning engineer at Meta AI.

 

Higher Education in Motion: The Digital and Cultural Transformations Ahead — from er.educause.edu by John O’Brien

Excerpts (emphasis DSC):

In 2015 when Janet Napolitano, then president of the University of California, responded to what she saw as a steadily growing “chorus of doom” predicting the demise of higher education, she did so with a turn of phrase that captured my imagination and still does. She said that higher education is not in crisis. “Instead, it is in motion, and it always has been.”

A brief insert by DSC:
Yes. In other words, it’s a learning ecosystem — with constant morphing & changing going on.

“We insisted then, and we continue to insist now, that digital transformation amounts to deep and coordinated change that substantially reshapes the operations, strategic directions, and value propositions of colleges and universities and that this change is enabled by culture, workforce, and technology shifts.

The tidal movement to digital transformation is linked to a demonstrably broader recognition of the strategic role and value of technology professionals and leaders on campus, another area of long-standing EDUCAUSE advocacy. For longer than we have talked about digital transformation, we have insisted that technology must be understood as a strategic asset, not a utility, and that senior IT leaders must be part of the campus strategic decision-making. But the idea of a strategic role for technology had disappointing traction among senior campus leaders before 2020.

From DSC:
The Presidents, Provosts, CIOs, board members, influential faculty members, and other members of institutions’ key leadership positions who didn’t move powerfully forward with online-based learning over the last two+ decades missed the biggest thing to hit societies’ ability to learn in 500+ years — the Internet. Not since the invention of the printing press has learning had such an incredible gust of wind put in its sails. The affordances have been staggering, with millions of people now being educated in much less expensive ways (MOOCs, YouTube, LinkedIn Learning, and others). Those who didn’t move forward with online-based learning in the past are currently scrambling to even survive. We’ll see how many close their doors as the number of effective alternatives increases.

Instead of functioning as a one-time fix during the pandemic, technology has become ubiquitous and relied upon to an ever-increasing degree across campus and across the student experience.

Moving forward, best of luck to those organizations that don’t have their CIOs at the decision-making table and reporting directly to the Presidents — and hopefully those CIOs are innovative and visionary to begin with. Best of luck to those institutions that refuse to look up and around to see that the world has significantly changed from the time they got their degrees.

The current mix of new realities creates an opportunity for an evolution and, ideally, a synchronized reimagination of higher education overall. This will be driven by technology innovation and technology professionals—and will be made even more enduring by a campus culture of care for students, faculty, and staff.

Time will tell if the current cultures within many traditional institutions of higher education will allow them to adapt/change…or not.


Along the lines of transformations in our learning ecosystems, also see:


OPINION: Let’s use the pandemic as a dress-rehearsal for much-needed digital transformation — from hechingerreport.org by Jean-Claude Brizard
Schools must get ready for the next disruption and make high-quality learning available to all

Excerpts:

We should use this moment to catalyze a digital transformation of education that will prepare schools for our uncertain future.

What should come next is an examination of how schools can more deeply and deliberately harness technology to make high-quality learning accessible to every learner, even in the wake of a crisis. That means a digital transformation, with three key levers for change: in the classroom, in schools and at the systems level.

Platforms like these help improve student outcomes by enhancing teachers’ ability to meet individual students’ needs. They also allow learners to master new skills at their own pace, in their own way.

As Digital Transformation in Schools Continues, the Need for Enterprising IT Leaders Grows — from edtechmagazine.com by Ryan Petersen

K-12 IT leaders move beyond silos to make a meaningful impact inside and outside their schools.

According to Korn Ferry’s research on enterprise leadership, “Enterprise leaders envision and grow; scale and create. They go beyond by going across the enterprise, optimizing the whole organization and its entire ecosystem by leading outside what they can control. These are leaders who see their role as being a participant in diverse and dynamic communities.”

 

 

OpenAI Says DALL-E Is Generating Over 2 Million Images a Day—and That’s Just Table Stakes — from singularityhub.com by Jason Dorrier

Excerpt:

The venerable stock image site, Getty, boasts a catalog of 80 million images. Shutterstock, a rival of Getty, offers 415 million images. It took a few decades to build up these prodigious libraries.

Now, it seems we’ll have to redefine prodigious. In a blog post last week, OpenAI said its machine learning algorithm, DALL-E 2, is generating over two million images a day. At that pace, its output would equal Getty and Shutterstock combined in eight months. The algorithm is producing almost as many images daily as the entire collection of free image site Unsplash.

And that was before OpenAI opened DALL-E 2 to everyone.
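
From DSC:
The “eight months” figure checks out as rough arithmetic, using the numbers in the excerpt above:

    # Back-of-the-envelope check of the "eight months" claim
    getty, shutterstock = 80_000_000, 415_000_000
    images_per_day = 2_000_000
    days = (getty + shutterstock) / images_per_day   # 247.5 days
    print(days / 30)                                 # ~8.25 months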

 


From DSC:
Further on down that Tweet is this example image — wow!

A photo of a quaint flower shop storefront with a pastel green and clean white facade and open door and big window



On the video side of things, also relevant/see:

Meta’s new text-to-video AI generator is like DALL-E for video — from theverge.com by James Vincent
Just type a description and the AI generates matching footage

A sample video generated by Meta’s new AI text-to-video model, Make-A-Video. The text prompt used to create the video was “a teddy bear painting a portrait.” Image: Meta


From DSC:
Hmmm…I wonder…how might these emerging technologies impact copyrights, intellectual property, and/or other types of legal matters and areas?


 

From DSC:
Now you’re talking! A team-based effort to deliver an Associate’s Degree for 1/3 of the price! Plus a job-ready certificate from Google, IBM, or Salesforce. Nice. 

Check these items out!


We started Outlier because we believe that students deserve better. So we worked from the ground up to create the best online college courses in the world, just for curious-minded learners like you.

The brightest instructors, available on-demand. Interactive materials backed by cognitive science. Flexible timing. And that’s just the beginning.

Outlier.org

MasterClass’s Co-Founder Takes on the Community-College Degree — from wsj.com by Lindsay Ellis
A new, online-only education model promises associate degrees via prerecorded lectures from experts at Yale, NASA and other prestigious institutions

Excerpts (emphasis DSC):

One of the founders of the celebrity-fueled, e-learning platform MasterClass is applying the same approach to the humble community-college degree—one based on virtual, highly produced lectures from experts at prestigious institutions around the country.

The two-year degrees—offered in applied computing, liberal studies or business administration—will be issued by Golden Gate University, a nonprofit institution in San Francisco. Golden Gate faculty and staff, not the lecturers, will be the ones to hold office hours, moderate virtual discussions and grade homework, said Outlier, which is announcing the program Wednesday and plans to start courses in the spring.

Golden Gate University and Outlier.org Reinvent Affordable College with Degrees+ — from prnewswire.com

Excerpt:

For less than one-third the price of the national average college tuition, students will earn an associate degree plus a job-ready certificate from Google, IBM, or Salesforce

NEW YORK, Sept. 7, 2022 /PRNewswire/ — Golden Gate University is launching Degrees+, powered by Outlier.org, with three associate degrees that reimagine the two-year degree for a rising generation of students that demand high quality education without the crushing cost. For annual tuition of $4,470 all-inclusive, students will earn a two-year degree that uniquely brings together the best of a college education with a career-relevant industry certificate.

Beginning today, students can apply to be part of the first class, which starts in Spring 2023.

“Imagine if everyone had the option to go to college with top instructors from Harvard, Yale, Google, and NASA via the highest-quality online classes. By upgrading the two-year degree, we can massively reduce student debt and set students up for success, whether that’s transferring into a four-year degree or going straight into their careers.”

Aaron Rasmussen, CEO and founder of Outlier.org
and co-founder of MasterClass

Outlier.org & Universities Call for Greater Credit Transfer Transparency — from articles.outlier.org

Excerpt:

“Outlier.org is working with leading institutions across the country to build a new kind of on-ramp to higher education,” said Aaron Rasmussen, CEO and Founder of Outlier.org. “By partnering with schools to build bridges from our courses into their degree programs, we can help students reduce the cost of their education and graduate faster.”


From DSC:
All of this reminds me of a vision I put out on my Calvin-based website at the time (To His Glory! was the name of the website.) The vision was originally called “The Forthcoming Walmart of Education” — which I renamed to “EduMart Education.”

By the way…because I’m not crazy about Walmart, I’m not crazy about that name. In today’s terms, it might be better called the new “Amazon.com of Higher Education” or something along those lines. But you get the idea. Lower prices due to new business models.



 
© 2024 | Daniel Christian