AI’s New Conversation Skills Eyed for Education — from insidehighered.com by Lauren Coffey
The latest ChatGPT’s more human-like verbal communication has professors pondering personalized learning, on-demand tutoring and more classroom applications.

ChatGPT’s newest version, GPT-4o (the “o” standing for “omni,” meaning “all”), has a more realistic voice and quicker verbal response time, both aimed at sounding more human. The version, which should be available to free ChatGPT users in the coming weeks—a change also hailed by educators—allows people to interrupt it while it speaks, simulates more emotions with its voice and translates languages in real time. It also can understand instructions in text and images and has improved video capabilities.

Ajjan said she immediately thought the new vocal and video capabilities could allow GPT to serve as a personalized tutor. Personalized learning has been a focus for educators grappling with the looming enrollment cliff and for those pushing for student success.

There’s also the potential for role playing, according to Ajjan. She pointed to mock interviews students could do to prepare for job interviews, or, for example, using GPT to play the role of a buyer to help prepare students in an economics course.

 

 

io.google/2024



How generative AI expands curiosity and understanding with LearnLM — from blog.google
LearnLM is our new family of models fine-tuned for learning, and grounded in educational research to make teaching and learning experiences more active, personal and engaging.

Generative AI is fundamentally changing how we’re approaching learning and education, enabling powerful new ways to support educators and learners. It’s taking curiosity and understanding to the next level — and we’re just at the beginning of how it can help us reimagine learning.

Today we’re introducing LearnLM: our new family of models fine-tuned for learning, based on Gemini.

On YouTube, a conversational AI tool makes it possible to figuratively “raise your hand” while watching academic videos to ask clarifying questions, get helpful explanations or take a quiz on what you’ve been learning. This even works with longer educational videos like lectures or seminars thanks to the Gemini model’s long-context capabilities. These features are already rolling out to select Android users in the U.S.

Learn About is a new Labs experience that explores how information can turn into understanding by bringing together high-quality content, learning science and chat experiences. Ask a question and it helps guide you through any topic at your own pace — through pictures, videos, webpages and activities — and you can upload files or notes and ask clarifying questions along the way.


Google I/O 2024: An I/O for a new generation — from blog.google

The Gemini era
A year ago on the I/O stage we first shared our plans for Gemini: a frontier model built to be natively multimodal from the beginning, one that could reason across text, images, video, code, and more. It marks a big step in turning any input into any output — an “I/O” for a new generation.


Daily Digest: Google I/O 2024 – AI search is here. — from bensbites.beehiiv.com
PLUS: It’s got Agents, Video and more. And, Ilya leaves OpenAI

  • Google is integrating AI across its entire ecosystem: Search, Workspace, Android, etc. In true Google fashion, many features are “coming later this year.” If they ship and perform like the demos, Google will gain a serious upper hand over OpenAI/Microsoft.
  • All of the AI features across Google products will be powered by Gemini 1.5 Pro, Google’s best model and one of the top models available. Google also launched a new Gemini 1.5 Flash model, which is faster and much cheaper.
  • Google has ambitious projects in the pipeline. Those include a real-time voice assistant called Astra, a long-form video generator called Veo, plans for end-to-end agents, virtual AI teammates and more.

 



New ways to engage with Gemini for Workspace — from workspace.google.com

Today at Google I/O we’re announcing new, powerful ways to get more done in your personal and professional life with Gemini for Google Workspace. Gemini in the side panel of your favorite Workspace apps is rolling out more broadly and will use the 1.5 Pro model for answering a wider array of questions and providing more insightful responses. We’re also bringing more Gemini capabilities to your Gmail app on mobile, helping you accomplish more on the go. Lastly, we’re showcasing how Gemini will become the connective tissue across multiple applications with AI-powered workflows. And all of this comes fresh on the heels of the innovations and enhancements we announced last month at Google Cloud Next.


Google’s Gemini updates: How Project Astra is powering some of I/O’s big reveals — from techcrunch.com by Kyle Wiggers

Google is improving its AI-powered chatbot Gemini so that it can better understand the world around it — and the people conversing with it.

At the Google I/O 2024 developer conference on Tuesday, the company previewed a new experience in Gemini called Gemini Live, which lets users have “in-depth” voice chats with Gemini on their smartphones. Users can interrupt Gemini while the chatbot’s speaking to ask clarifying questions, and it’ll adapt to their speech patterns in real time. And Gemini can see and respond to users’ surroundings, either via photos or video captured by their smartphones’ cameras.


Generative AI in Search: Let Google do the searching for you — from blog.google
With expanded AI Overviews, more planning and research capabilities, and AI-organized search results, our custom Gemini model can take the legwork out of searching.


 

Voice Banks (preserving our voices for AI) — from thebrainyacts.beehiiv.com by Josh Kubicki

The Ethical and Emotional Implications of AI Voice Preservation

Legal Considerations and Voice Rights
From a legal perspective, the burgeoning use of AI in voice cloning also introduces a complex web of rights and permissions. The recent passage of Tennessee’s ELVIS Act, which allows legal action against unauthorized recreations of an artist’s voice, underscores the necessity for robust legal frameworks to manage these technologies. For non-celebrities, the idea of a personal voice bank brings about its own set of legal challenges. How do we regulate the use of an individual’s voice after their death? Who holds the rights to control and consent to the usage of these digital artifacts?

To safeguard against misuse, any system of voice banking would need stringent controls over who can access and utilize these voices. The creation of such banks would necessitate clear guidelines and perhaps even contractual agreements stipulating the terms under which these voices may be used posthumously.

Should we all consider creating voice banks to preserve our voices, allowing future generations the chance to interact with us even after we are gone?

 

 

 

The Verge | What’s Next With AI | February 2024 | Consumer Survey

 

 

 

 

 

 




Microsoft AI creates talking deepfakes from single photo — from inavateonthenet.net


The Great Hall – where now with AI? It is not ‘Human Connection V Innovative Technology’ but ‘Human Connection + Innovative Technology’ — from donaldclarkplanb.blogspot.com by Donald Clark

The theme of the day was Human Connection V Innovative Technology. I see this a lot at conferences, setting up the human connection (social) against the machine (AI). I think this is ALL wrong. It is, and has always been, a dialectic: human connection (social) PLUS the machine. Everyone has a smartphone; most use it for work, comms and social media. The binary between human and tech has long disappeared.


Techno-Social Engineering: Why the Future May Not Be Human, TikTok’s Powerful ForYou Algorithm, & More — by Misha Da Vinci

Things to consider as you dive into this edition:

  • As we increasingly depend on technology, how is it changing us?
  • In the interaction between humans and technology, who is adapting to whom?
  • Is the technology being built for humans, or are we being changed to fit into tech systems?
  • As time passes, will we become more like robots or the AI models we use?
  • Over the next 30 years, as we increasingly interact with technology, who or what will we become?

 

Are we ready to navigate the complex ethics of advanced AI assistants? — from futureofbeinghuman.com by Andrew Maynard
An important new paper lays out the importance and complexities of ensuring increasingly advanced AI-based assistants are developed and used responsibly

Last week a behemoth of a paper was released by AI researchers in academia and industry on the ethics of advanced AI assistants.

It’s one of the most comprehensive and thoughtful papers on developing transformative AI capabilities in socially responsible ways that I’ve read in a while. And it’s essential reading for anyone developing and deploying AI-based systems that act as assistants or agents — including many of the AI apps and platforms that are currently being explored in business, government, and education.

The paper — The Ethics of Advanced AI Assistants — is written by 57 co-authors representing researchers at Google DeepMind, Google Research, Jigsaw, and a number of prominent universities, including the University of Edinburgh, the University of Oxford, and Delft University of Technology. Coming in at 274 pages, this is a massive piece of work. And as the authors persuasively argue, it’s a critically important one at this point in AI development.

From that large paper:

Key questions for the ethical and societal analysis of advanced AI assistants include:

  1. What is an advanced AI assistant? How does an AI assistant differ from other kinds of AI technology?
  2. What capabilities would an advanced AI assistant have? How capable could these assistants be?
  3. What is a good AI assistant? Are there certain values that we want advanced AI assistants to evidence across all contexts?
  4. Are there limits on what AI assistants should be allowed to do? If so, how are these limits determined?
  5. What should an AI assistant be aligned with? With user instructions, preferences, interests, values, well-being or something else?
  6. What issues need to be addressed for AI assistants to be safe? What does safety mean for this class of technologies?
  7. What new forms of persuasion might advanced AI assistants be capable of? How can we ensure that users remain appropriately in control of the technology?
  8. How can people – especially vulnerable users – be protected from AI manipulation and unwanted disclosure of personal information?
  9. Is anthropomorphism for AI assistants morally problematic? If so, might it still be permissible under certain conditions?
 

Beyond the Hype: Taking a 50 Year Lens to the Impact of AI on Learning — from nafez.substack.com by Nafez Dakkak and Chris Dede
How do we make sure LLMs are not “digital duct tape”?

[Per Chris Dede] We often think of the product of teaching as the outcome (e.g. an essay, a drawing, etc.). The essence of education, in my view, lies not in the products or outcomes of learning but in the journey itself. The artifact is just a symbol that you’ve taken the journey.

The process of learning — the exploration, challenges, and personal growth that occur along the way — is where the real value lies. For instance, the act of writing an essay is valuable not merely for the final product but for the intellectual journey it represents. It forces you to improve and organize your thinking on a subject.

This distinction becomes important with the rise of generative AI, because it uniquely allows us to produce these artifacts without taking the journey.

As I’ve argued previously, I am worried that all this hype around LLMs renders them a “type of digital duct-tape to hold together an obsolete industrial-era educational system”. 


Speaking of AI in our learning ecosystems, also see:


On Building an AI Policy for Teaching & Learning — by Lance Eaton
How students drove the development of a policy for students and faculty

Well, last month the policy was finally approved by our Faculty Curriculum Committee, and we can now share the final version: AI Usage Policy. College Unbound also created (all-human, no AI used) a press release with the policy and some of the details.

To ensure you see this:

  • Usage Guidelines for AI Generative Tools at College Unbound
    These guidelines were created and reviewed by College Unbound students in Spring 2023 with the support of Lance Eaton, Director of Faculty Development & Innovation.  The students include S. Fast, K. Linder-Bey, Veronica Machado, Erica Maddox, Suleima L., Lora Roy.

ChatGPT hallucinates fake but plausible scientific citations at a staggering rate, study finds — from psypost.org by Eric W. Dolan

A recent study has found that scientific citations generated by ChatGPT often do not correspond to real academic work. The study, published in the Canadian Psychological Association’s Mind Pad, found that “false citation rates” across various psychology subfields ranged from 6% to 60%. Surprisingly, these fabricated citations feature elements such as legitimate researchers’ names and properly formatted digital object identifiers (DOIs), which could easily mislead both students and researchers.

MacDonald found that a total of 32.3% of the 300 citations generated by ChatGPT were hallucinated. Despite being fabricated, these hallucinated citations were constructed with elements that appeared legitimate — such as real authors who are recognized in their respective fields, properly formatted DOIs, and references to legitimate peer-reviewed journals.
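The study's finding — that fabricated citations often pass a surface glance because they pair real author names with properly formatted DOIs — suggests that checking citation syntax alone is not enough. As a purely illustrative sketch (not from the study; the function and pattern here are my own), a first-pass filter can at least flag malformed DOIs, while anything that passes still needs to be resolved against a registry such as doi.org to confirm it exists:

```python
import re

# A DOI has the shape "10.<registrant>/<suffix>", where the registrant
# code is a string of 4-9 digits. This checks syntax only.
DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$")

def plausible_doi(doi: str) -> bool:
    """First-pass syntax check. A well-formed DOI can still be
    fabricated, so a real audit must also try to resolve it
    (e.g. via https://doi.org/), which is omitted here."""
    return bool(DOI_RE.fullmatch(doi.strip()))

citations = [
    "10.1037/0003-066X.59.1.29",  # well-formed (may or may not exist)
    "10.99/x",                    # malformed: registrant too short
    "not-a-doi",                  # malformed
]
checks = {d: plausible_doi(d) for d in citations}
```

The point of the sketch is the limitation it encodes: hallucinated references in the study had *valid-looking* DOIs, so only the (omitted) resolution step can actually catch them.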

 

Forbes 2024 AI 50 List: Top Artificial Intelligence Startups  — from forbes.com by Kenrick Cai

The artificial intelligence sector has never been more competitive. Forbes received some 1,900 submissions this year, more than double last year’s count. Applicants do not pay a fee to be considered and are judged for their business promise and technical usage of AI through a quantitative algorithm and qualitative judging panels. Companies are encouraged to share data on diversity, and our list aims to promote a more equitable startup ecosystem. But disparities remain sharp in the industry. Only 12 companies have women cofounders, five of whom serve as CEO, the same count as last year. For more, see our full package of coverage, including a detailed explanation of the list methodology, videos and analyses on trends in AI.


Adobe Previews Breakthrough AI Innovations to Advance Professional Video Workflows Within Adobe Premiere Pro — from news.adobe.com

  • New Generative AI video tools coming to Premiere Pro this year will streamline workflows and unlock new creative possibilities, from extending a shot to adding or removing objects in a scene
  • Adobe is developing a video model for Firefly, which will power video and audio editing workflows in Premiere Pro and enable anyone to create and ideate
  • Adobe previews early explorations of bringing third-party generative AI models from OpenAI, Pika Labs and Runway directly into Premiere Pro, making it easy for customers to draw on the strengths of different models within the powerful workflows they use every day
  • AI-powered audio workflows in Premiere Pro are now generally available, making audio editing faster, easier and more intuitive

Also relevant see:




 

AI for the physical world — from superhuman.ai by Zain Kahn

Excerpt: (emphasis DSC)

A new company called Archetype is trying to tackle that problem: It wants to make AI useful for more than just interacting with and understanding the digital realm. The startup just unveiled Newton — “the first foundation model that understands the physical world.”

What’s it for?
A warehouse or factory might have 100 different sensors that have to be analyzed separately to figure out whether the entire system is working as intended. Newton can understand and interpret all of the sensors at the same time, giving a better overview of how everything’s working together. Another benefit: You can ask Newton questions in plain English without needing much technical expertise.

How does it work?

  • Newton collects data from radar, motion sensors, and chemical and environmental trackers
  • It uses an LLM to combine each of those data streams into a cohesive package
  • It translates that data into text, visualizations, or code so it’s easy to understand
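The three steps above amount to flattening heterogeneous sensor streams into one representation a language model can reason over. A minimal sketch of that first step — hypothetical, since Archetype has not published Newton’s API, and every name below is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    sensor: str   # e.g. "radar", "motion", "co2"
    value: float
    unit: str

def build_prompt(readings: list[SensorReading], question: str) -> str:
    """Flatten heterogeneous sensor streams into a single plain-English
    prompt that an LLM could answer operational questions against."""
    lines = [f"- {r.sensor}: {r.value} {r.unit}" for r in readings]
    return ("Current sensor state:\n"
            + "\n".join(lines)
            + f"\n\nQuestion: {question}")

readings = [
    SensorReading("radar", 3.2, "m (nearest object)"),
    SensorReading("motion", 1.0, "events/min"),
    SensorReading("co2", 412.0, "ppm"),
]
prompt = build_prompt(readings, "Is the assembly line operating normally?")
```

In a real system the resulting prompt would be sent to the model, which then produces the text, visualizations, or code described in the third bullet; the value claimed for Newton is doing this across a hundred sensors at once rather than analyzing each separately.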

Apple’s $25-50 million Shutterstock deal highlights fierce competition for AI training data — from venturebeat.com by Michael Nuñez; via Tom Barrett’s Promptcraft e-newsletter

Apple has entered into a significant agreement with stock photography provider Shutterstock to license millions of images for training its artificial intelligence models. According to a Reuters report, the deal is estimated to be worth between $25 million and $50 million, placing Apple among several tech giants racing to secure vast troves of data to power their AI systems.


 

 

The University Student’s Guide To Ethical AI Use  — from studocu.com; with thanks to Jervise Penton at 6XD Media Group for this resource

This comprehensive guide offers:

  • Up-to-date statistics on the current state of AI in universities, how institutions and students are currently using artificial intelligence
  • An overview of popular AI tools used in universities and their limitations as study tools
  • Tips on how to ethically use AI and how to maximize its capabilities for students
  • Current punishments and penalties for cheating with AI
  • A checklist of questions to ask yourself, before, during, and after an assignment to ensure ethical use

Some of the key facts you might find interesting are:

  • The total value of AI in education is estimated to reach $53.68 billion by the end of 2032.
  • 68% of students say using AI has impacted their academic performance positively.
  • Educators using AI tools say the technology helps speed up their grading process by as much as 75%.
 

[Report] The Top 100 AI for Work – April 2024 — from flexos.work; with thanks to Daan van Rossum for this resource
AI is helping us work up to 41% more effectively, according to recent Bain research. We review the platforms to consider for ourselves and our teams.

Following our AI Top 150, we spent the past few weeks analyzing data on the top AI platforms for work. This report shares key insights, including the AI tools you should consider adopting to work smarter, not harder.

While there is understandable concern about AI in the work context, the platforms in this list paint a different picture. They show a future of work where people can do what humans are best suited for while offloading repetitive, digital tasks to AI.

This will fuel the notion that it’s not AI that takes your job but a supercharged human with an army of AI tools and agents. This should be a call to action for every working person and business leader reading this.

 

Dr Abigail Rekas, Lawyer & Lecturer at the School of Law, University of Galway

Abigail is a lecturer on two of the Law micro-credentials at University of Galway – Lawyering Technology & Innovation and Law & Analytics. Micro-credentials are short, flexible courses designed to fit around your busy life! They are designed in collaboration with industry to meet specific skills needs and are accredited by leading Irish universities.

Visit: universityofgalway.ie/courses/micro-credentials/


The Implications of Generative AI: From the Delivery of Legal Services to the Delivery of Justice — from iaals.du.edu

The potential for AI’s impact is broad, as it has the ability to impact every aspect of human life, from home to work. It will impact our relationships to everything and everyone in our world. The implications for generative AI on the legal system, from how we deliver legal services to how we deliver justice, will be just as far reaching.

[N]ow we face the latest technological frontier: artificial intelligence (AI).… Law professors report with both awe and angst that AI apparently can earn Bs on law school assignments and even pass the bar exam. Legal research may soon be unimaginable without it. AI obviously has great potential to dramatically increase access to key information for lawyers and non-lawyers alike. But just as obviously it risks invading privacy interests and dehumanizing the law.

When you can no longer sell the time it takes to achieve a client’s outcome, then you must sell the outcome itself and the client’s experience of getting there. That completely changes the dynamics of what law firms are all about.


Preparing the Next Generation of Tech-Ready Lawyers — from news.gsu.edu
Legal Analytics and Innovation Initiative Gives Students a Competitive Advantage

Georgia State University College of Law faculty understand this need and designed the Legal Analytics & Innovation Initiative (LAII) to equip students with the competitive skills desired by law firms and other companies that align with the emerging technological environment.

“As faculty, we realized we need to be forward-thinking about incorporating technology into our curriculum. Students must understand new areas of law that arise from or are significantly altered by technological advances, like cybersecurity, privacy and AI. They also must understand how these advances change the practice of law,” said Kris Niedringhaus, associate dean for Law Library, Information Services, Legal Technology & Innovation.


The Imperative Of Identifying Use Cases In Legal Tech: A Guiding Light For Innovation In The Age Of AI — from abovethelaw.com by Olga V. Mack
In the quest to integrate AI and legal technology into legal practice, use cases are not just important but indispensable.

As the legal profession continues to navigate the waters of digital transformation, the importance of use cases stands as a beacon guiding the journey. They are the litmus test for the practical value of technology, ensuring that innovations not only dazzle with potential but also deliver tangible benefits. In the quest to integrate AI and legal technology into legal practice, use cases are not just important but indispensable.

The future of legal tech is not about technology for technology’s sake. It’s about thoughtful, purpose-driven innovation that enhances the practice of law, improves client outcomes, and upholds the principles of justice. Use cases are the roadmap for this future, charting a course for technology that is meaningful, impactful, and aligned with the noble pursuit of law.

 

How Generative AI Owns Higher Education. Now What? — from forbes.com by Steve Andriole

Excerpt (emphasis DSC):

What about course videos? Professors can create them (by lecturing into a camera for several hours, hopefully in different clothes) from the readings, from their interpretations of the readings, from their own case experiences – from anything they like. But now professors can direct the creation of the videos by talking – actually describing – to a CustomGPT about what they’d like the video to communicate, with their own or another image. Wait. What? They can make a video by talking to a CustomGPT and even select the image they want the “actor” to use? Yes. They can also add a British accent and insert some (GenAI-developed) jokes into the videos if they like. All this and much more is now possible. This means that a professor can specify how long the video should be, what sources should be consulted, and describe the demeanor the professor wants the video to project.

From DSC:
Though I wasn’t crazy about the clickbait type of title here, I still thought that the article was solid and thought-provoking. It contained several good ideas for using AI.


Excerpt from a recent EdSurge Higher Ed newsletter:


There are darker metaphors though — ones that focus on the hazards for humanity of the tech. Some professors worry that AI bots are simply replacing hired essay-writers for many students, doing work for a student that they can then pass off as their own (and doing it for free).

From DSC:
Hmmm…the use of essay writers was around long before AI became mainstream within higher education. So we already had a serious problem where students didn’t see the why in what they were being asked to do. Some students still aren’t sold on the why of the work in the first place. The situation seems to involve ethics, yes, but it also seems to say that we haven’t sold students on the benefits of putting in the work. Students seem to be saying I don’t care about this stuff…I just need the degree so I can exit stage left.

My main point: The issue didn’t start with AI…it started long before that.

And somewhat relevant here, also see:

I Have Bigger Fish to Fry: Why K12 Education is Not Thinking About AI — from medium.com by Maurie Beasley, M.Ed. (Edited by Jim Beasley)

This financial stagnation is occurring as we face a multitude of escalating challenges. These challenges include, but are in no way limited to, chronic absenteeism, widespread student mental health issues, critical staff shortages, rampant classroom behavior issues, a palpable sense of apathy for education in students, and even, I dare say, hatred towards education among parents and policymakers.

Our current focus is on keeping our heads above water, ensuring our students’ safety and mental well-being, and simply keeping our schools staffed and our doors open.


Meet Ed: Ed is an educational friend designed to help students reach their limitless potential. — from lausd.org (Los Angeles School District, the second largest in the U.S.)

What is Ed?
An easy-to-understand learning platform designed by Los Angeles Unified to increase student achievement. It offers personalized guidance and resources to students and families 24/7 in over 100 languages.

Also relevant/see:

  • Los Angeles Unified Bets Big on ‘Ed,’ an AI Tool for Students — by Lauraine Langreo
    The Los Angeles Unified School District has launched an AI-powered learning tool that will serve as a “personal assistant” to students and their parents. The tool, named “Ed,” can provide students from the nation’s second-largest district information about their grades, attendance, upcoming tests, and suggested resources to help them improve their academic skills on their own time, Superintendent Alberto Carvalho announced March 20. Students can also use the app to find social-emotional-learning resources, see what’s for lunch, and determine when their bus will arrive.

Could OpenAI’s Sora be a big deal for elementary school kids? — from futureofbeinghuman.com by Andrew Maynard
Despite all the challenges it comes with, AI-generated video could unleash the creativity of young children and provide insights into their inner worlds – if it’s developed and used responsibly

Like many others, I’m concerned about the challenges that come with hyper-realistic AI-generated video. From deep fakes and disinformation to blurring the lines between fact and fiction, generative AI video is calling into question what we can trust, and what we cannot.

And yet despite all the issues the technology is raising, it also holds quite incredible potential, including as a learning and development tool — as long as we develop and use it responsibly.

I was reminded of this a few days back while watching the latest videos from OpenAI created by their AI video engine Sora — including the one below, generated from the prompt “an elephant made of leaves running in the jungle.”

What struck me while watching this — perhaps more than any of the other videos OpenAI has been posting on its TikTok channel — is the potential Sora has for translating the incredibly creative but often hard to articulate ideas someone may have in their head, into something others can experience.


Can AI Aid the Early Education Workforce? — from edsurge.com by Emily Tate Sullivan
During a panel at SXSW EDU 2024, early education leaders discussed the potential of AI to support and empower the adults who help our nation’s youngest children.

While the vast majority of the conversations about AI in education have centered on K-12 and higher education, few have considered the potential of this innovation in early care and education settings.

At the conference, a panel of early education leaders gathered to do just that, in a session exploring the potential of AI to support and empower the adults who help our nation’s youngest children, titled, “ChatECE: How AI Could Aid the Early Educator Workforce.”

Hau shared that K-12 educators are using the technology to improve efficiency in a number of ways, including to draft individualized education programs (IEPs), create templates for communicating with parents and administrators, and in some cases, to support building lesson plans.


From EIEIO…Seasons Of Change

Again, we’ve never seen change happen as fast as it’s happening.


Enhancing World Language Instruction With AI Image Generators — from edutopia.org by Rachel Paparone
By crafting an AI prompt in the target language to create an image, students can get immediate feedback on their communication skills.

Educators are, perhaps rightfully so, cautious about incorporating AI in their classrooms. With thoughtful implementation, however, AI image generators, with their ability to use any language, can provide powerful ways for students to engage with the target language and increase their proficiency.


AI in the Classroom: A Teacher’s Toolkit for Transformation — from esheninger.blogspot.com by Eric Sheninger

While AI offers numerous benefits, it’s crucial to remember that it is a tool to empower educators, not replace them. The human connection between teacher and student remains central to fostering creativity, critical thinking, and social-emotional development. The role of teachers will shift towards becoming facilitators, curators, and mentors who guide students through personalized learning journeys. By harnessing the power of AI, educators can create dynamic and effective classrooms that cater to each student’s individual needs. This paves the way for a more engaging and enriching learning experience that empowers students to thrive.


Teachers Are Using AI to Create New Worlds, Help Students with Homework, and Teach English — from themarkup.org by Ross Teixeira; via Matthew Tower
Around the world, these seven teachers are making AI work for them and their students

In this article, seven teachers across the world share their insights on AI tools for educators. You will hear a host of varied opinions and perspectives on everything from whether AI could hasten the decline of learning foreign languages to whether AI-generated lesson plans are an infringement on teachers’ rights. A common theme emerged from those we spoke with: just as the internet changed education, AI tools are here to stay, and it is prudent for teachers to adapt.


Teachers Desperately Need AI Training. How Many Are Getting It? — from edweek.org by Lauraine Langreo

Even though it’s been more than a year since ChatGPT made a big splash in the K-12 world, many teachers say they are still not receiving any training on using artificial intelligence tools in the classroom.

More than 7 in 10 teachers said they haven’t received any professional development on using AI in the classroom, according to a nationally representative EdWeek Research Center survey of 953 educators, including 553 teachers, conducted between Jan. 31 and March 4.

From DSC:
This article mentioned the following resource:

Artificial Intelligence Explorations for Educators — from iste.org


 

The $340 Billion Corporate Learning Industry Is Poised For Disruption — from joshbersin.com by Josh Bersin

What if, for example, the corporate learning system knew who you were and you could simply ask it a question and it would generate an answer, a series of resources, and a dynamic set of learning objects for you to consume? In some cases you’ll take the answer and run. In other cases you’ll pore through the content. And in other cases you’ll browse through the course and take the time to learn what you need.

And suppose all this happened in a totally personalized way. So you didn’t see a “standard course” but a special course based on your level of existing knowledge?

This is what AI is going to bring us. And yes, it’s already happening today.

 
© 2024 | Daniel Christian