From DSC: I don’t think all students hate AI. My guess is that a lot of them like AI and are very intrigued by it. The next generation is starting to see its potential — for good and/or for ill.
One of the comments (from the above item) said to check out the following video. I saw one (or both?) of these people on a recent 60 Minutes piece as well.
This week I spent a few days at the ASU/GSV conference and ran into 7,000 educators, entrepreneurs, and corporate training people who had gone CRAZY for AI.
No, I’m not kidding. This community, which is made up of people like training managers, community college leaders, educators, and policymakers, is absolutely freaked out about ChatGPT, Large Language Models, and all sorts of issues with AI. Now don’t get me wrong: I’m a huge fan of this. But the frenzy is unprecedented: this is bigger than the excitement at the launch of the iPhone.
Second, the L&D market is about to get disrupted like never before. I had two interactive sessions with about 200 L&D leaders and I essentially heard the same thing over and over. What is going to happen to our jobs when these Generative AI tools start automatically building content, assessments, teaching guides, rubrics, videos, and simulations in seconds?
The answer is pretty clear: you’re going to get disrupted. I’m not saying that L&D teams need to worry about their careers, but it’s very clear to me they’re going to have to swim upstream in a big hurry. As with all new technologies, it’s time for learning leaders to get to know these tools, understand how they work, and start to experiment with them as fast as they can.
Speaking of the ASU+GSV Summit, see this posting from Michael Moe:
Last week, the 14th annual ASU+GSV Summit hosted over 7,000 leaders from 70+ countries as well as over 900 of the world’s most innovative EdTech companies. Below are some of our favorite speeches from this year’s Summit…
High-quality tutoring is one of the most effective educational interventions we have – but we need both humans and technology for it to work. In a standing-room-only session, GSE Professor Susanna Loeb, a faculty lead at the Stanford Accelerator for Learning, spoke alongside school district superintendents on the value of high-impact tutoring. The most important factors in effective tutoring, she said, are (1) the tutor has data on specific areas where the student needs support, (2) the tutor has high-quality materials and training, and (3) there is a positive, trusting relationship between the tutor and student. New technologies, including AI, can make the first and second elements much easier – but they will never be able to replace human adults in the relational piece, which is crucial to student engagement and motivation.
ChatGPT, Bing Chat, Google’s Bard—AI is infiltrating the lives of billions.
The 1% who understand it will run the world.
Here’s a list of key terms to jumpstart your learning:
Being “good at prompting” is a temporary state of affairs. The current AI systems are already very good at figuring out your intent, and they are getting better. Prompting is not going to be that important for that much longer. In fact, it already isn’t in GPT-4 and Bing. If you want to do something with AI, just ask it to help you do the thing. “I want to write a novel, what do you need to know to help me?” will get you surprisingly far.
…
The best way to use AI systems is not to craft the perfect prompt, but rather to use it interactively. Try asking for something. Then ask the AI to modify or adjust its output. Work with the AI, rather than trying to issue a single command that does everything you want. The more you experiment, the better off you are. Just use the AI a lot, and it will make a big difference – a lesson my class learned as they worked with the AI to create essays.
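The iterative workflow described above can be sketched in code. This is a minimal illustration only: `ask_model` is a hypothetical stand-in for whatever chat API you use (here it just echoes, so the example is self-contained). The point is that each follow-up request carries the full conversation, so the AI can revise its own earlier output.

```python
def ask_model(history):
    """Hypothetical model call: takes the whole conversation, returns a reply."""
    last = history[-1]["content"]
    return f"(model reply to: {last})"

def converse(turns):
    """Carry the full history forward so each request can refine the last output."""
    history = []
    for user_msg in turns:
        history.append({"role": "user", "content": user_msg})
        reply = ask_model(history)
        history.append({"role": "assistant", "content": reply})
    return history

# Work with the AI interactively: ask, then ask for adjustments.
history = converse([
    "Draft a one-paragraph course description for Intro to AI.",
    "Make it friendlier and aimed at first-year students.",
    "Now shorten it to two sentences.",
])
```

In a real chat interface this loop is handled for you; the sketch simply makes the "ask, then adjust" pattern explicit.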
From DSC: Agreed –> “Being ‘good at prompting’ is a temporary state of affairs.” The User Interfaces that are/will be appearing will help greatly in this regard.
From DSC: Bizarre…at least for me in late April of 2023:
FaceTiming live with AI… This app came across the @ElunaAI Discord and I was very impressed with its responsiveness, natural expression and language, etc…
Feels like the beginning of another massive wave in consumer AI products.
The rise of AI-generated music has ignited legal and ethical debates, with record labels invoking copyright law to remove AI-generated songs from platforms like YouTube.
Tech companies like Google face a conundrum: should they take down AI-generated content, and if so, on what grounds?
Some artists, like Grimes, are embracing the change, proposing new revenue-sharing models and utilizing blockchain-based smart contracts for royalties.
The future of AI-generated music presents both challenges and opportunities, with the potential to create new platforms and genres, democratize the industry, and redefine artist compensation.
The Need for AI PD — from techlearning.com by Erik Ofgang Educators need training on how to effectively incorporate artificial intelligence into their teaching practice, says Lance Key, an award-winning educator.
“School never was fun for me,” he says, hoping that as an educator he could change that with his students. “I wanted to make learning fun.” This ‘learning should be fun’ philosophy is at the heart of the approach he advises educators take when it comes to AI.
At its 11th annual conference in 2023, educational company Coursera announced it is adding ChatGPT-powered interactive ed tech tools to its learning platform, including a generative AI coach for students and an AI course-building tool for teachers. It will also add machine learning-powered translation, expanded VR immersive learning experiences, and more.
Coursera Coach will give learners a ChatGPT virtual coach to answer questions, give feedback, summarize video lectures and other materials, give career advice, and prepare them for job interviews. This feature will be available in the coming months.
From DSC: Yes…it will be very interesting to see how tools and platforms interact from this time forth. The term “integration” will take a massive step forward, at least in my mind.
From DSC: Regarding the core curricula of colleges and universities…
For decades now, faculty members have taught what they wanted to teach and what interested them. They taught what they wanted to research vs. what the wider marketplace/workplace needed. They were not responsive to the needs of the workplace — nor to the needs of their students!
And this situation has been all the more compounded by the increasing costs of obtaining a degree plus the exponential pace of change. We weren’t doing a good job before this exponential pace of change started taking place — and now it’s (almost?) impossible to keep up.
The bottom line on the article below: ***It’s sales.***
Therefore, it’s about what you are selling — and at what price. The story hasn’t changed much. The narrative (i.e., the curricula and more) is pretty much the same thing that’s been sold for years.
But the days of faculty members teaching whatever they wanted to are over, or significantly waning.
Faculty members, faculty senates, provosts, presidents, and accreditors are reaping what they’ve sown.
The questions are now:
Will new seeds be sown?
Will new crops arise in the future?
Will there be new narratives?
Will institutions be able to reinvent themselves (one potential example here)? Or will their cultures not allow such significant change to take place? Will alternatives to institutions of traditional higher education continue to pick up steam?
A Profession on the Edge — from chronicle.com by Eric Hoover Why enrollment leaders are wearing down, burning out, and leaving jobs they once loved.
Excerpts:
Similar stories are echoing throughout the hallways of higher education. Vice presidents for enrollment, as well as admissions deans and directors, are wearing down, burning out, and leaving jobs they once loved. Though there’s no way to compile a chart quantifying the churn, industry insiders describe it as significant. “We’re at an inflection point,” says Rick Clark, executive director of undergraduate admission at Georgia Tech. “There have always been people leaving the field, but not in the numbers we’re seeing now.”
Some are being shoved out the door by presidents and boards. Some are resigning out of exhaustion, frustration, and disillusionment. And some who once sought top-level positions are rethinking their ambitions. “The pressures have ratcheted up tenfold,” says Angel B. Pérez, chief executive of the National Association for College Admission Counseling, known as NACAC. “I talk with someone each week who’s either leaving the field or considering leaving.”
From DSC: This quote points to what I’m trying to address here:
Dahlstrom and other veterans of the field say they’ve experienced something especially disquieting: an erosion of faith in the transformational power of higher education. Though she sought a career in admissions to help students, her disillusionment grew after taking on a leadership role. She became less confident that she was equipped to effect positive changes, at her institution or beyond, especially when it came to the challenge of expanding college access in a nation of socioeconomic disparities: “I felt like a cog in a huge machine that’s not working, yet continues to grind while only small, temporary fixes are made.”
From DSC: Before we get to Scott Belsky’s article, here’s an interesting/related item from Tobi Lutke:
I just clued in how insane text2vid will get soon. As crazy as this sounds, we will be able to generate movies from just minor prompts and the path there is pretty clear.
Recent advances in technology will shake the pot of culture and our day-to-day experiences. Examples? A new era of synthetic entertainment will emerge, online social dynamics will become “hybrid experiences” where AI personas are equal players, and we will sync ourselves with applications as opposed to using applications.
A new era of synthetic entertainment will emerge as the world’s video archives – as well as actors’ bodies and voices – will be used to train models. Expect sequels made without actor participation, a new era of AI-outfitted creative economy participants, a deluge of imaginative media that would have been cost prohibitive, and copyright wars and legislation.
Unauthorized sequels, spin-offs, some amazing stuff, and a legal dumpster fire: Now let’s shift beyond Hollywood to the fast-growing long tail of prosumer-made entertainment. This is where entirely new genres of entertainment will emerge, including the unauthorized sequels and spinoffs that I expect we will start seeing.
This is how I viewed a fascinating article about the so-called #AICinema movement. Benj Edwards describes this nascent current and interviews one of its practitioners, Julie Wieland. It’s a great example of people creating small stories using tech – in this case, generative AI, specifically the image creator Midjourney.
From DSC: How will text-to-video impact the Learning and Development world? Teaching and learning? Those people communicating within communities of practice? Those creating presentations and/or offering webinars?
Survey respondents are demonstrating confidence in microcredentials–online training programs that take no more than six months to complete–as four-year degree programs often overlook job training.
‘Grade inflation and efforts to help everyone … attend college make it harder for employers to differentiate among applicants.’
Law’s AI revolution is here — from nationalmagazine.ca At least this much we know. Firms need to develop a strategy around language models.
Also re: legaltech, see:
Pioneers and Pathfinders: Richard Susskind — from seyfarth.com by J. Stephen Poor In our conversation, Richard discusses the ways we should all be thinking about legal innovation, the challenges of training lawyers for the future, and the qualifications of those likely to develop breakthrough technologies in law, as well as his own journey and how he became interested in AI as an undergraduate student.
Law has a magic wand now— from jordanfurlong.substack.com by Jordan Furlong Some people think Large Language Models will transform the practice of law. I think it’s bigger than that.
Excerpts:
ChatGPT4 can also do things that only lawyers (used to be able to) do. It can look up and summarize a court decision, analyze and apply sections of copyright law, and generate a statement of claim for breach of contract.
…
What happens when you introduce a magic wand into the legal market? On the buyer side, you reduce by a staggering degree the volume of tasks that you need to pay lawyers (whether in-house or outside counsel) to perform. It won’t happen overnight: Developing, testing, revising, approving, and installing these sorts of systems in corporations will take time. But once that’s done, the beauty of LLMs like ChatGPT4 is that they are not expert systems. Anyone can use them. Anyone will.
But I can’t shake the feeling that someday, we’ll divide the history of legal services into “Before GPT4” and “After GPT4.” I think it’s that big.
From DSC: Jordan mentions: “Some people think Large Language Models will transform the practice of law. I think it’s bigger than that.”
I agree with Jordan. It most assuredly IS bigger than that. AI will profoundly impact many industries/disciplines. The legal sector is but one of them. Education is another. People’s expectations are now changing — and the “ramification wheels” are now in motion.
I take the position, as many others do (at least as of this point in time), that AI will supplement humans’ capabilities and activities. But those who know AI-driven apps will outcompete those who don’t.
ChatGPT is Everywhere — from chronicle.com by Beth McMurtrie Love it or hate it, academics can’t ignore the already pervasive technology.
Excerpt:
Many academics see these tools as a danger to authentic learning, fearing that students will take shortcuts to avoid the difficulty of coming up with original ideas, organizing their thoughts, or demonstrating their knowledge. Ask ChatGPT to write a few paragraphs, for example, on how Jean Piaget’s theories on childhood development apply to our age of anxiety and it can do that.
Other professors are enthusiastic, or at least intrigued, by the possibility of incorporating generative AI into academic life. Those same tools can help students — and professors — brainstorm, kick-start an essay, explain a confusing idea, and smooth out awkward first drafts. Equally important, these faculty members argue, is their responsibility to prepare students for a world in which these technologies will be incorporated into everyday life, helping to produce everything from a professional email to a legal contract.
“Artificial-intelligence tools present the greatest creative disruption to learning that we’ve seen in my lifetime.”
Sarah Eaton, associate professor of education at the University of Calgary
The use of artificial intelligence tools does not automatically constitute academic dishonesty. It depends how the tools are used. For example, apps such as ChatGPT can be used to help reluctant writers generate a rough draft that they can then revise and update.
Used in this way, the technology can help students learn. The text can also be used to help students learn the skills of fact-checking and critical thinking, since the outputs from ChatGPT often contain factual errors.
When students use tools or other people to complete homework on their behalf, that is considered a form of academic dishonesty because the students are no longer learning the material themselves. The key point is that it is the students, and not the technology, that is to blame when students choose to have someone – or something – do their homework for them.
There is a difference between using technology to help students learn or to help them cheat. The same technology can be used for both purposes.
From DSC: These couple of sentences…
In the age of post-plagiarism, humans use artificial intelligence apps to enhance and elevate creative outputs as a normal part of everyday life. We will soon be unable to detect where the human written text ends and where the robot writing begins, as the outputs of both become intertwined and indistinguishable.
…reminded me of what’s been happening within the filmmaking world for years (i.e., such as in Star Wars, Jurassic Park, and many others). It’s often hard to tell what’s real and what’s been generated by a computer.
I think a lot of people do not realize how rapidly the multiple strands of generative AI (audio, text, images, and video) are advancing, and what that means for the future.
With just a photograph and 60 seconds of audio, you can now create a deepfake of yourself in just a matter of minutes by combining a few cheap AI tools. I’ve tried it myself, and the results are mind-blowing, even if they’re not completely convincing. Just a few months ago, this was impossible. Now, it’s a reality.
Board members and corporate execs don’t need AI to decode the lessons to be learned from this. The lessons should be loud and clear: If even the mighty Google can be potentially overthrown by AI disruption, you should be concerned about what this may mean for your company. …
Professions that will be disrupted by generative AI include marketing, copywriting, illustration and design, sales, customer support, software coding, video editing, film-making, 3D modeling, architecture, engineering, gaming, music production, legal contracts, and even scientific research. Software applications will soon emerge that will make it easy and intuitive for anyone to use generative AI for those fields and more.
Feb 1 (Reuters) – ChatGPT, the popular chatbot from OpenAI, is estimated to have reached 100 million monthly active users in January, just two months after launch, making it the fastest-growing consumer application in history, according to a UBS study on Wednesday.
The report, citing data from analytics firm Similarweb, said an average of about 13 million unique visitors had used ChatGPT per day in January, more than double the levels of December.
“In 20 years following the internet space, we cannot recall a faster ramp in a consumer internet app,” UBS analysts wrote in the note.
From DSC: This reminds me of the current exponential pace of change that we are experiencing…
Here’s the list of sources: https://t.co/fJd4rh8kLy. The larger resource area at https://t.co/bN7CReGIEC has sample ChatGPT essays, strategies for mitigating harm, and questions for teachers to ask as well as a listserv.
— Anna Mills, amills@mastodon.oeru.org, she/her (@EnglishOER) January 11, 2023
Microsoft is reportedly eyeing a $10 billion investment in OpenAI, the startup that created the viral chatbot ChatGPT, and is planning to integrate it into Office products and Bing search. The tech giant has already invested at least $1 billion into OpenAI. Some of these features might be rolling out as early as March, according to The Information.
This is a big deal. If successful, it will bring powerful AI tools to the masses. So what would ChatGPT-powered Microsoft products look like? We asked Microsoft and OpenAI. Neither was willing to answer our questions on how they plan to integrate AI-powered products into Microsoft’s tools, even though work must be well underway to do so. However, we do know enough to make some informed, intelligent guesses. Hint: it’s probably good news if, like me, you find creating PowerPoint presentations and answering emails boring.
I have maintained for several years, including in my book ‘AI for Learning’, that AI is the technology of the age and will change everything. This is unfolding as we speak, but it is interesting to ask who the winners are likely to be.
People who have heard of GPT-3 / ChatGPT, and are vaguely following the advances in machine learning, large language models, and image generators. Also people who care about making the web a flourishing social and intellectual space.
That dark forest is about to expand. Large Language Models (LLMs) that can instantly generate coherent swaths of human-like text have just joined the party.
It is in this uncertain climate that Hassabis agrees to a rare interview, to issue a stark warning about his growing concerns. “I would advocate not moving fast and breaking things.”
…
“When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful,” he says. “Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.” Worse still, Hassabis points out, we are the guinea pigs.
Demis Hassabis
Excerpt (emphasis DSC):
Hassabis says these efforts are just the beginning. He and his colleagues have been working toward a much grander ambition: creating artificial general intelligence, or AGI, by building machines that can think, learn, and be set to solve humanity’s toughest problems. Today’s AI is narrow, brittle, and often not very intelligent at all. But AGI, Hassabis believes, will be an “epoch-defining” technology—like the harnessing of electricity—that will change the very fabric of human life. If he’s right, it could earn him a place in history that would relegate the namesakes of his meeting rooms to mere footnotes.
But with AI’s promise also comes peril. In recent months, researchers building an AI system to design new drugs revealed that their tool could be easily repurposed to make deadly new chemicals. A separate AI model trained to spew out toxic hate speech went viral, exemplifying the risk to vulnerable communities online. And inside AI labs around the world, policy experts were grappling with near-term questions like what to do when an AI has the potential to be commandeered by rogue states to mount widespread hacking campaigns or infer state-level nuclear secrets.
Headteachers and university lecturers have expressed concerns that ChatGPT, which can provide convincing human-sounding answers to exam questions, could spark a wave of cheating in homework and exam coursework.
Now, the bot’s makers, San Francisco-based OpenAI, are trying to counter the risk by “watermarking” the bot’s output and making plagiarism easier to spot.
Students need now, more than ever, to understand how to navigate a world in which artificial intelligence is increasingly woven into everyday life. It’s a world that they, ultimately, will shape.
We hail from two professional fields that have an outsize interest in this debate. Joanne is a veteran journalist and editor deeply concerned about the potential for plagiarism and misinformation. Rebecca is a public health expert focused on artificial intelligence, who champions equitable adoption of new technologies.
We are also mother and daughter. Our dinner-table conversations have become a microcosm of the argument around ChatGPT, weighing its very real dangers against its equally real promise. Yet we both firmly believe that a blanket ban is a missed opportunity.
ChatGPT: Threat or Menace? — from insidehighered.com by Steven Mintz Are fears about generative AI warranted?
The rapid pace of change is driven by a “perfect storm” of factors, including the falling cost of computing power, the rise of data-driven decision-making, and the increasing availability of new technologies. “The speed of current breakthroughs has no historical precedent,” concluded Andrew Doxsey, co-founder of Libra Incentix, in an interview. “Unlike previous technological revolutions, the Fourth Industrial Revolution is evolving exponentially rather than linearly. Furthermore, it disrupts almost every industry worldwide.”
An updated version of the AI chatbot ChatGPT was recently released to the public.
I got the chatbot to write cover letters for real jobs and asked hiring managers what they thought.
The managers said they would’ve given me a call but that the letters lacked personality.
I mentor a young lad with poor literacy skills who is starting a landscaping business. He struggles to communicate with clients in a professional manner.
I created a GPT3-powered Gmail account to which he sends a message. It responds with the text to send to the client. pic.twitter.com/nlFX9Yx6wR
OpenAI has built the best Minecraft-playing bot yet by making it watch 70,000 hours of video of people playing the popular computer game. It showcases a powerful new technique that could be used to train machines to carry out a wide range of tasks by binging on sites like YouTube, a vast and untapped source of training data.
The Minecraft AI learned to perform complicated sequences of keyboard and mouse clicks to complete tasks in the game, such as chopping down trees and crafting tools. It’s the first bot that can craft so-called diamond tools, a task that typically takes good human players 20 minutes of high-speed clicking—or around 24,000 actions.
The result is a breakthrough for a technique known as imitation learning, in which neural networks are trained to perform tasks by watching humans do them.
…
The team’s approach, called Video Pre-Training (VPT), gets around the bottleneck in imitation learning by training another neural network to label videos automatically.
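The core idea behind imitation learning is plain supervised learning: given observations labeled with the action a human took, fit a policy that predicts the action from the observation. Below is a toy sketch of that idea using a linear softmax policy on synthetic data. To be clear, this is not OpenAI's VPT model (which trains a large neural network on labeled video frames); it only illustrates behavioral cloning, the technique VPT builds on.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, dim = 3, 8

# Synthetic "labeled video": each action corresponds to a cluster of observations.
centers = rng.normal(size=(n_actions, dim))
actions = rng.integers(0, n_actions, size=300)          # human-labeled actions
obs = centers[actions] + 0.1 * rng.normal(size=(300, dim))

# Behavioral cloning: fit a linear softmax policy by gradient descent
# on cross-entropy between predicted and demonstrated actions.
W = np.zeros((dim, n_actions))
for _ in range(200):
    logits = obs @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(actions)), actions] -= 1.0          # dLoss/dlogits
    W -= 0.1 * obs.T @ p / len(actions)

# How often the learned policy reproduces the demonstrated action.
accuracy = ((obs @ W).argmax(axis=1) == actions).mean()
```

VPT's contribution, as described above, is getting the action labels cheaply: a smaller "inverse dynamics" network labels unlabeled video automatically, and the policy is then trained exactly as in this supervised setup.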
“Most language learning software can help with the beginning part of learning basic vocabulary and grammar, but gaining any degree of fluency requires speaking out loud in an interactive environment,” Zwick told TechCrunch in an email interview. “To date, the only way people can get that sort of practice is through human tutors, which can also be expensive, difficult and intimidating.”
Speak’s solution is a collection of interactive speaking experiences that allow learners to practice conversing in English. Through the platform, users can hold open-ended conversations with an “AI tutor” on a range of topics while receiving feedback on their pronunciation, grammar and vocabulary.
It’s one of the top education apps in Korea on the iOS App Store, with over 15 million lessons started annually, 100,000 active subscribers and “double-digit million” annual recurring revenue.
If you last checked in on AI image makers a month ago & thought “that is a fun toy, but is far from useful…” Well, in just the last week or so two of the major AI systems updated.
You can now generate a solid image in one try. For example, “otter on a plane using wifi” 1st try: pic.twitter.com/DhiYeVMEEV
So, is this a cool development that will become a fun tool for many of us to play around with in the future? Sure. Will people use this in their work? Possibly. Will it disrupt artists across the board? Unlikely. There might be a few places where really generic artwork is the norm and the people that were paid very little to crank them out will be paid very little to input prompts. Look, PhotoShop and asset libraries made creating company logos very, very easy a long time ago. But people still don’t want to take the 30 minutes it takes to put one together, because thinking through all the options is not their thing. You still have to think through those options to enter an AI prompt. And people just want to leave that part to the artists. The same thing was true about the printing press. Hundreds of years of innovation have taught us that the hard part of the creation of art is the human coming up with the ideas, not the tools that create the art.
A quick comment from DSC: Possibly, at least in some cases. But I’ve seen enough home-grown, poorly-designed graphics and logos to make me wonder if that will be the case.
How to Teach With Deep Fake Technology — from techlearning.com by Erik Ofgang Despite the scary headlines, deep fake technology can be a powerful teaching tool
Excerpt:
The very concept of teaching with deep fake technology may be unsettling to some. After all, deep fake technology, which utilizes AI and machine learning and can alter videos and animate photographs in a manner that appears realistic, has frequently been covered in a negative light. The technology can be used to violate privacy and create fake videos of real people.
However, while these potential abuses of the technology are real and concerning, that doesn’t mean we should turn a blind eye to the technology’s potential when using it responsibly, says Jaime Donally, a well-known immersive learning expert.
From DSC: I’m still not sure about this one…but I’ll try to be open to the possibilities here.
Recently, we spoke with three more participants of the AI Explorations program to learn about its ongoing impact in K-12 classrooms. Here, they share how the program is helping their districts implement AI curriculum with an eye toward equity in the classroom.
A hitherto stealth legal AI startup emerged from the shadows today with news via TechCrunch that it has raised $5 million in funding led by the startup fund of OpenAI, the company that developed advanced neural network AI systems such as GPT-3 and DALL-E 2.
The startup, called Harvey, will build on the GPT-3 technology to enable lawyers to create legal documents or perform legal research by providing simple instructions using natural language.
The company was founded by Winston Weinberg, formerly an associate at law firm O’Melveny & Myers, and Gabriel Pereyra, formerly a research scientist at DeepMind and most recently a machine learning engineer at Meta AI.
In 2015 when Janet Napolitano, then president of the University of California, responded to what she saw as a steadily growing “chorus of doom” predicting the demise of higher education, she did so with a turn of phrase that captured my imagination and still does. She said that higher education is not in crisis. “Instead, it is in motion, and it always has been.”
A brief insert by DSC: Yes. In other words, it’s a learning ecosystem — with constant morphing & changing going on.
“We insisted then, and we continue to insist now, that digital transformation amounts to deep and coordinated change that substantially reshapes the operations, strategic directions, and value propositions of colleges and universities and that this change is enabled by culture, workforce, and technology shifts.
…
The tidal movement to digital transformation is linked to a demonstrably broader recognition of the strategic role and value of technology professionals and leaders on campus, another area of long-standing EDUCAUSE advocacy. For longer than we have talked about digital transformation, we have insisted that technology must be understood as a strategic asset, not a utility, and that senior IT leaders must be part of the campus strategic decision-making. But the idea of a strategic role for technology had disappointing traction among senior campus leaders before 2020.
From DSC: The Presidents, Provosts, CIOs, board members, influential faculty members, and other members of institutions’ key leadership positions who didn’t move powerfully forward with online-based learning over the last two+ decades missed the biggest thing to hit societies’ ability to learn in 500+ years — the Internet. Not since the invention of the printing press has learning had such an incredible gust of wind put in its sails. The affordances have been staggering, with millions of people now being educated in much less expensive ways (MOOCs, YouTube, LinkedIn Learning, and others). Those who didn’t move forward with online-based learning in the past are currently scrambling to even survive. We’ll see how many close their doors as the number of effective alternatives increases.
Instead of functioning as a one-time fix during the pandemic, technology has become ubiquitous and relied upon to an ever-increasing degree across campus and across the student experience.
Moving forward, best of luck to those organizations who don’t have their CIOs at the decision-making table and reporting directly to the Presidents — and hopefully those CIOs are innovative and visionary to begin with. Best of luck to those institutions who refuse to look up and around to see that the world has significantly changed from the time they got their degrees.
The current mix of new realities creates an opportunity for an evolution and, ideally, a synchronized reimagination of higher education overall. This will be driven by technology innovation and technology professionals—and will be made even more enduring by a campus culture of care for students, faculty, and staff.
Time will tell if the current cultures within many traditional institutions of higher education will allow them to adapt/change…or not.
Along the lines of transformations in our learning ecosystems, also see:
We should use this moment to catalyze a digital transformation of education that will prepare schools for our uncertain future.
What should come next is an examination of how schools can more deeply and deliberately harness technology to make high-quality learning accessible to every learner, even in the wake of a crisis. That means a digital transformation, with three key levers for change: in the classroom, in schools and at the systems level.
…
Platforms like these help improve student outcomes by enhancing teachers’ ability to meet individual students’ needs. They also allow learners to master new skills at their own pace, in their own way.
K-12 IT leaders move beyond silos to make a meaningful impact inside and outside their schools. According to Korn Ferry’s research on enterprise leadership, “Enterprise leaders envision and grow; scale and create. They go beyond by going across the enterprise, optimizing the whole organization and its entire ecosystem by leading outside what they can control. These are leaders who see their role as being a participant in diverse and dynamic communities.”
The venerable stock image site, Getty, boasts a catalog of 80 million images. Shutterstock, a rival of Getty, offers 415 million images. It took a few decades to build up these prodigious libraries.
Now, it seems we’ll have to redefine prodigious. In a blog post last week, OpenAI said its machine learning algorithm, DALL-E 2, is generating over two million images a day. At that pace, its output would equal Getty and Shutterstock combined in eight months. The algorithm is producing almost as many images daily as the entire collection of free image site Unsplash.
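The rough arithmetic behind the eight-month figure checks out, using the catalog sizes quoted above:

```python
# Catalog sizes and output rate quoted above.
getty = 80_000_000          # Getty images
shutterstock = 415_000_000  # Shutterstock images
dalle_per_day = 2_000_000   # DALL-E 2 images generated per day

days = (getty + shutterstock) / dalle_per_day  # days to match both catalogs
months = days / 30                             # roughly eight months
```

About 247 days at that pace, i.e. a little over eight months to match two catalogs that took decades to assemble.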
And that was before OpenAI opened DALL-E 2 to everyone.
A sample video generated by Meta’s new AI text-to-video model, Make-A-Video. The text prompt used to create the video was “a teddy bear painting a portrait.” Image: Meta
From DSC: Hmmm…I wonder…how might these emerging technologies impact copyrights, intellectual property, and/or other types of legal matters and areas?