The L&D Global Sentiment Survey, now in its 12th year, once again asked two key questions of L&D professionals worldwide:
What will be hot in workplace learning in 2025?
What are your L&D challenges in 2025?
For the compulsory question on what they considered ‘hot’ topics, respondents voted for one to three of 15 suggested options, plus a free-text ‘Other’ option. Over 3,000 voters participated from nearly 100 countries, and 85% also shared their challenges for 2025.
The results show more interest in AI, a renewed focus on showing the value of L&D, and some signs of greater maturity around our understanding of AI in L&D.
The higher education community continues to grapple with questions related to using artificial intelligence (AI) in learning and work. In support of these efforts, we present the 2025 EDUCAUSE AI Landscape Study, summarizing our community’s sentiments and experiences related to strategy and leadership, policies and guidelines, use cases, the higher education workforce, and the institutional digital divide.
The California State University system has partnered with OpenAI to launch the largest deployment of AI in higher education to date.
The CSU system, which serves nearly 500,000 students across 23 campuses, has announced plans to integrate ChatGPT Edu, an education-focused version of OpenAI’s chatbot, into its curriculum and operations. The rollout, which includes tens of thousands of faculty and staff, represents the most significant AI deployment within a single educational institution globally.
“We’re still in the early stages of AI adoption in education, and it is critical that the entire ecosystem—education systems, technologists, educators, and governments—work together to ensure that all students globally have access to AI and develop the skills to use it responsibly,” said Leah Belsky, VP and general manager of education at OpenAI.
As you read through these use cases, you’ll notice that each one addresses multiple tasks from our list above.
1. Researching a topic for a lesson
2. Creating tasks for practice
3. Creating sample answers
4. Generating ideas
5. Designing lesson plans
6. Creating tests
7. Using AI in virtual classrooms
8. Creating images
9. Creating worksheets
10. Correcting work and giving feedback
The most revolutionary aspect of DeepSeek for education isn’t just its cost—it’s the combination of open-source accessibility and local deployment capabilities. As Azeem Azhar notes, “R-1 is open-source. Anyone can download and run it on their own hardware. I have R1-8b (the second smallest model) running on my Mac Mini at home.”
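For readers who want to try this themselves, here is a minimal sketch of running the distilled 8B model locally through Ollama’s Python client. It assumes Ollama is installed, the server is running, and the model has been pulled; the example prompt is our own, not from the article.

```python
# Minimal local-inference sketch using the Ollama Python client.
# Assumes: `pip install ollama`, the Ollama server is running, and
# the model was fetched first with `ollama pull deepseek-r1:8b`.
import ollama

response = ollama.chat(
    model="deepseek-r1:8b",  # the distilled 8B DeepSeek R1 variant
    messages=[
        {"role": "user", "content": "Explain photosynthesis at a 9th-grade level."}
    ],
)

# The reply (including the model's visible reasoning) comes back as plain text.
print(response["message"]["content"])
```

Everything here runs on local hardware, which is exactly the point: no per-query cost and no student data leaving the building.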
…
Real-time Learning Enhancement
AI tutoring networks that collaborate to optimize individual learning paths
Immediate, multi-perspective feedback on student work
Continuous assessment and curriculum adaptation
The question isn’t whether this technology will transform education—it’s how quickly institutions can adapt to a world where advanced AI capabilities are finally within reach of every classroom.
I know through your feedback on my social media and blog posts that several of you have legitimate concerns about the impact of AI in education, especially concerns related to data privacy, academic dishonesty, AI dependence, the loss of creativity and critical thinking, and plagiarism, to name a few. While these concerns are valid and deserve careful consideration, it’s also important to explore the potential benefits AI can bring when used thoughtfully.
Tools such as ChatGPT and Claude are like smart research assistants that are available 24/7 to support you with all kinds of tasks from drafting detailed lesson plans, creating differentiated materials, generating classroom activities, to summarizing and simplifying complex topics. Likewise, students can use them to enhance their learning by, for instance, brainstorming ideas for research projects, generating constructive feedback on assignments, practicing problem-solving in a guided way, and much more.
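To make one of these tasks concrete, here is a minimal sketch of scripting a lesson-plan draft against the OpenAI API. The model name, system role, and lesson topic are illustrative choices of ours, not a prescribed setup.

```python
# A minimal sketch: drafting a lesson plan via the OpenAI Python SDK.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works here
    messages=[
        {"role": "system", "content": "You are an experienced middle school science teacher."},
        {"role": "user", "content": "Draft a 45-minute lesson plan on the water cycle, "
                                    "with one differentiated activity for advanced students."},
    ],
)

# Print the drafted plan for the teacher to review and edit.
print(completion.choices[0].message.content)
```

The same pattern works for the student-facing uses above; only the system role and prompt change.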
The point here is that AI is here to stay and expand, and we had better learn to use it thoughtfully and responsibly rather than avoid it out of fear or skepticism.
As part of our updates to the Edtech Insiders Generative AI Map, we’re excited to release a new mini market map and article deep dive on Generative AI tools that are specifically designed for Instructional Materials use cases.
In our database, the Instructional Materials use case category encompasses tools that:
Assist educators by streamlining lesson planning, curriculum development, and content customization
Enable educators or students to transform materials into alternative formats, such as videos, podcasts, or other interactive media, in addition to leveraging gaming principles or immersive VR to enhance engagement
Empower educators or students to transform text, video, slides or other source material into study aids like study guides, flashcards, practice tests, or graphic organizers
Engage students through interactive lessons featuring historical figures, authors, or fictional characters
Customize curriculum to individual needs or pedagogical approaches
Empower educators or students to quickly create online learning assets and courses
With that out of the way, I prefer Claude.ai for writing. For larger projects like a book, create a Claude Project to keep all context in one place.
Copy [the following] prompts into a document
Use them in sequence as you write
Adjust the word counts and specifics as needed
Keep your responses for reference
Use the same prompt template for similar sections to maintain consistency
Each prompt builds on the previous one, creating a systematic approach to helping you write your book.
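If you prefer to script the same sequential workflow against Anthropic’s API rather than the Claude.ai interface, a rough sketch might look like the following. The helper name, model alias, and placeholder prompts are our own illustrative choices.

```python
# A rough sketch of the "each prompt builds on the previous one" workflow
# via the Anthropic Python SDK. Assumes `pip install anthropic` and an
# ANTHROPIC_API_KEY in the environment. The prompts are placeholders.
import anthropic

client = anthropic.Anthropic()
history = []  # accumulated conversation, so later prompts see earlier output

def run_prompt(prompt: str) -> str:
    """Send one prompt in sequence, keeping all prior context."""
    history.append({"role": "user", "content": prompt})
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=2048,
        messages=history,
    )
    text = response.content[0].text
    history.append({"role": "assistant", "content": text})
    return text

outline = run_prompt("Draft a chapter outline for a book on [your topic].")
section = run_prompt("Expand section 1 of that outline to about 800 words.")
```

Keeping the whole history in one conversation plays the same role as a Claude Project: each new prompt can reference everything written so far.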
Using NotebookLM to Boost College Reading Comprehension — from michellekassorla.substack.com by Michelle Kassorla and Eugenia Novokshanova
This semester, we are using NotebookLM to help our students comprehend and engage with scholarly texts.
We were looking hard for a new tool when Google released NotebookLM. Not only does Google allow unfettered use of this amazing tool, it is also a much better tool for the work we require in our courses. So, this semester, we have scrapped our “old” tools and added NotebookLM as the primary tool for our English Composition II courses (and we hope, fervently, that Google won’t decide to severely limit its free tier before this semester ends!)
If you know next-to-nothing about NotebookLM, that’s OK. What follows is the specific lesson we present to our students. We hope this will help you understand all you need to know about NotebookLM, and how to successfully integrate the tool into your own teaching this semester.
After two years of working closely with leadership in multiple institutions, and delivering countless workshops, I’ve seen one thing repeatedly: the biggest challenge isn’t the technology itself, but how we lead through it. Here is some of my best advice to help you navigate generative AI with clarity and confidence:
Break your own AI policies before you implement them. …
Fund your failures. …
Resist the pilot program. …
Host Anti-Tech Tech Talks …
…+ several more tips
While generative AI in higher education obviously involves new technology, it’s much more about adopting a curious and human-centric approach in your institution and communities. It’s about empowering learners in new, human-oriented and innovative ways. It is, in a nutshell, about people adapting to new ways of doing things.
Maria Anderson responded to Clay’s posting with this idea:
Here’s an idea: […] the teacher can use the [most advanced] AI tool to generate a complete solution to “the problem” — whatever that is — and demonstrate how to do that in class. Give all the students access to the document with the results.
And then grade the students on a comprehensive followup activity / presentation of executing that solution (no notes, no more than 10 words on a slide). So the students all have access to the same deep AI result, but have to show they comprehend and can iterate on that result.
In this age of distrust, misinformation, and skepticism, you may wonder how to demonstrate your sources within a Google Document. Did you type it yourself, copy and paste it from a browser-based source, copy and paste it from an unknown source, or did it come from generative AI?
You may not think this is an important clarification, but if writing is a critical part of your livelihood or life, you will definitely want to demonstrate your sources.
That’s where the new Grammarly feature comes in.
The new feature is called Authorship, and according to Grammarly, “Grammarly Authorship is a set of features that helps users demonstrate their sources of text in a Google doc. When you activate Authorship within Google Docs, it proactively tracks the writing process as you write.”
AI Agents Are Coming to Higher Education — from govtech.com
AI agents are customizable tools with more decision-making power than chatbots. They have the potential to automate more tasks, and some schools have implemented them for administrative and educational purposes.
Custom GPTs are on the rise in education. Google’s equivalent, Gemini Gems, includes a premade gem called Learning Coach, and Microsoft announced last week a new agent addition to Copilot featuring use cases at educational institutions.
Generative Artificial Intelligence and Education: A Brief Ethical Reflection on Autonomy — from er.educause.edu by Vicki Strunk and James Willis
Given the widespread impacts of generative AI, looking at this technology through the lens of autonomy can help equip students for the workplaces of the present and of the future, while ensuring academic integrity for both students and instructors.
The principle of autonomy stresses that we should be free agents who can govern ourselves and who are able to make our own choices. This principle applies to AI in higher education because it raises serious questions about how, when, and whether AI should be used in varying contexts. Although we have only begun asking questions related to autonomy and many more remain to be asked, we hope that this serves as a starting place to consider the uses of AI in higher education.
Now is the time for visionary leadership in education. The era of artificial intelligence is reshaping the demands on education systems. Rigid policies, outdated curricula, and reliance on obsolete metrics are failing students. A recent survey from Resume Genius found that graduates lack skills in communication, collaboration, and critical thinking. Consequently, there is a growing trend of companies hiring candidates based on skills instead of traditional education or work experience. This underscores the urgent need for educational leaders to prioritize adaptability and innovation in their systems. Educational leaders must embrace a transformative approach to keep pace.
…
[Heretical leaders] bring courage, empathy, and strategic thinking to reimagine education’s potential. Here are their defining characteristics:
Visionary Thinking: They identify bold, innovative paths to progress.
Courage to Act: These leaders take calculated risks to overcome resistance and inertia.
Relentless Curiosity: They challenge assumptions and seek better alternatives.
Empathy for Stakeholders: Understanding the personal impact of change allows them to lead with compassion.
Strategic Disruption: Their deliberate actions ensure systemic improvements.
These qualities enable Heretical leaders to reframe challenges as opportunities and drive meaningful change.
From DSC: Readers of this blog will recognize that I believe visionary leadership is extremely important — in all areas of our society, but especially within our learning ecosystems. Vision trumps data, at least in my mind. There are times when data can be used to support a vision, but having a powerful vision is more lasting and impactful than relying on data to drive the organization.
So while I’d vote for a different term other than “heretical leaders,” I get what Dan is saying and I agree with him. Such leaders are going against the grain. They are swimming upstream. They are espousing perspectives that others often don’t buy into (at least initially or for some time).
Such were the leaders who introduced online learning into the K-16 educational systems back in the late ’90s and into the next two+ decades. The growth of online-based learning continues and has helped educate millions of people. Those leaders and the people who worked for such endeavors were going against the grain.
We haven’t seen the end point of online-based learning. I think it will become even more powerful and impactful when AI is used to determine which jobs are opening up and which skills those jobs require, and then to provide a list of sources where one can obtain that knowledge and develop those skills. People will be key in this vision. But so will AI and personalized learning. It will be a collaborative effort.
By the way, I am NOT advocating for using AI to outsource our thinking. Also, having basic facts and background knowledge in a domain is critically important, especially to use AI effectively. But we should be teaching students about AI (as we learn more about it ourselves). We should be working collaboratively with our students to understand how best to use AI. It’s their futures at stake.
At the end of 2024 and the start of 2025, we’ve witnessed some fascinating developments in the world of AI and education, from India’s emergence as a leader in AI education and Nvidia’s plans to build an AI school in Indonesia to Stanford’s Tutor CoPilot improving outcomes for underserved students.
Other highlights include Carnegie Learning partnering with AI for Education to train K-12 teachers, early adopters of AI sharing lessons about implementation challenges, and AI super users reshaping workplace practices through enhanced productivity and creativity.
India Emerges as Global Leader in AI Education: Bosch Tech Compass 2025 — from medianews4u.com
57% of Indians receive employer-provided AI training, surpassing Germany and other European nations.
Bengaluru: India is emerging as a global leader in artificial intelligence (AI) education, with over 50% of its population actively self-educating in AI-related skills, according to Bosch’s fourth annual Tech Compass Survey. The report highlights India’s readiness to embrace AI in work, education, and daily life, positioning the nation as a frontrunner in the AI revolution.
AI for Education reviewed the ElevenLabs AI Voice Tool through an educator lens, digging into the new autonomous voice agent functionality that facilitates interactive user engagement. We showcase the creation of a customized vocabulary bot, which defines words at a 9th-grade level and includes options for uploading supplementary material. The demo includes real-time testing of the bot’s capabilities in defining terms and quizzing users.
The discussion also explored the AI tool’s potential for aiding language learners and neurodivergent individuals, and Mandy presented a phone conversation coach bot to help her 13-year-old son, highlighting the tool’s ability to provide patient, repetitive practice opportunities.
While acknowledging the technology’s potential, particularly in accessibility and language learning, we also want to emphasize the importance of supervised use and privacy considerations. The tool is currently free, but that likely won’t always be the case, so we encourage everyone to explore and test it now as it continues to develop.
Why Combine Them?
Faster Onboarding: Start broad with Deep Research, then refine and clarify concepts through Learn About. Finally, use NotebookLM to synthesize everything into a cohesive understanding.
Deeper Clarity: Unsure about a concept uncovered by Deep Research? Head to Learn About for a primer. Want to revisit key points later? Store them in NotebookLM and generate quick summaries on demand.
Adaptive Exploration: Create a feedback loop. Let new terms or angles from Learn About guide more targeted Deep Research queries. Then, compile all findings in NotebookLM for future reference.
Several challenges make institutions hesitant to produce policy, or delay their ability to do so. Policy (as opposed to guidance) is much more likely to involve a mixture of IT, HR, and legal services. This means each of those entities has to wrap its head around GenAI—not just for its own area but for the other relevant areas, such as teaching & learning, research, and student support. This process can definitely extend the time it takes to figure out the right policy.
That’s naturally true of every policy: it rarely comes fast enough and is often more reactive than proactive.
Still, in my conversations and observations, the delay derives from three additional intersecting elements that feel like they all need to be in lockstep in order to actually take advantage of whatever possibilities GenAI has to offer.
Which Tool(s) To Use
Training, Support, & Guidance, Oh My!
Strategy: Setting a Direction…
Prophecies of the Flood — from oneusefulthing.org by Ethan Mollick
What to make of the statements of the AI labs?
What concerns me most isn’t whether the labs are right about this timeline – it’s that we’re not adequately preparing for what even current levels of AI can do, let alone the chance that they might be correct. While AI researchers are focused on alignment, ensuring AI systems act ethically and responsibly, far fewer voices are trying to envision and articulate what a world awash in artificial intelligence might actually look like. This isn’t just about the technology itself; it’s about how we choose to shape and deploy it. These aren’t questions that AI developers alone can or should answer. They’re questions that demand attention from organizational leaders who will need to navigate this transition, from employees whose work lives may transform, and from stakeholders whose futures may depend on these decisions. The flood of intelligence that may be coming isn’t inherently good or bad – but how we prepare for it, how we adapt to it, and most importantly, how we choose to use it, will determine whether it becomes a force for progress or disruption. The time to start having these conversations isn’t after the water starts rising – it’s now.
Ever since a revolutionary new version of ChatGPT became publicly available in late 2022, educators have faced several complex challenges as they learn how to navigate artificial intelligence systems.
…
Education Week produced a significant amount of coverage in 2024 exploring these and other critical questions involving the understanding and use of AI.
Here are the five most popular stories that Education Week published in 2024 about AI in schools.
Dr. Lodge said there are five key areas the higher education sector needs to address to adapt to the use of AI:
1. Teach ‘people’ skills as well as tech skills
2. Help all students use new tech
3. Prepare students for the jobs of the future
4. Learn to make sense of complex information
5. Universities to lead the tech change
“I mean, that’s what I’ll always want for my own children and, frankly, for anyone’s children,” Khan said. “And the hope here is that we can use artificial intelligence and other technologies to amplify what a teacher can do so they can spend more time standing next to a student, figuring them out, having a person-to-person connection.”
…
“After a week you start to realize, like, how you can use it,” Brockman said. “That’s been one of the really important things about working with Sal and his team, to really figure out what’s the right way to sort of bring this to parents and to teachers and to classrooms and to do that in a way…so that the students really learn and aren’t just, you know, asking for the answers and that the parents can have oversight and the teachers can be involved in that process.”
More than 100 colleges and high schools are turning to a new AI tool called Nectir, which lets teachers create a personalized learning partner trained on their syllabi, textbooks, and assignments to help students with anything from coursework questions to essay-writing assistance and even future career guidance.
…
With Nectir, teachers can create an AI assistant tailored to their specific needs, whether for a single class, a department, or the entire campus. Various personalization options let teachers establish clear boundaries for the AI’s interactions, such as programming the assistant to help only with certain subjects or to respond in a way that aligns with their teaching style.
“It’ll really be that customized learning partner. Every single conversation that a student has with any of their assistants will then be fed into that student profile for them to be able to see based on what the AI thinks, what should I be doing next, not only in my educational journey, but in my career journey,” Ghai said.
How Will AI Influence Higher Ed in 2025? — from insidehighered.com by Kathryn Palmer
No one knows for sure, but Inside Higher Ed asked seven experts for their predictions.
As the technology continues to evolve at a rapid pace, no one knows for sure how AI will influence higher education in 2025. But several experts offered Inside Higher Ed their predictions—and some guidance—for how colleges and universities will have to navigate AI’s potential in the new year.
In the short term, A.I. will help teachers create lesson plans, find illustrative examples and generate quizzes tailored to each student. Customized problem sets will serve as tools to combat cheating while A.I. provides instant feedback.
…
In the longer term, it’s possible to imagine a world where A.I. can ingest rich learner data and create personalized learning paths for students, all within a curriculum established by the teacher. Teachers can continue to be deeply involved in fostering student discussions, guiding group projects and engaging their students, while A.I. handles grading and uses the Socratic method to help students discover answers on their own. Teachers provide encouragement and one-on-one support when needed, using their newfound availability to give students some extra care.
Let’s be clear: A.I. will never replace the human touch that is so vital to education. No algorithm can replicate the empathy, creativity and passion a teacher brings to the classroom. But A.I. can certainly amplify those qualities. It can be our co-pilot, our chief of staff helping us extend our reach and improve our effectiveness.
Today, I want to reflect on two recent OpenAI developments that highlight this evolution: their belated publication of advice for students on integrating AI into writing workflows, and last week’s launch of the full o1 Pro version. When OpenAI released their student writing guide, there were plenty of snarky comments about how this guidance arrives almost a year after they thoroughly disrupted the educational landscape. Fair enough – I took my own side swipes initially. But let’s look at what they’re actually advising, because the details matter more than the timing.
Tutoring programs exploded in the last five years as states and school districts searched for ways to counter plummeting achievement during COVID. But the cost of providing supplemental instruction to tens of millions of students can be eye-watering, even as the results seem to taper off as programs serve more students.
That’s where artificial intelligence could prove a decisive advantage. A report circulated in October by the National Student Support Accelerator found that an AI-powered tutoring assistant significantly improved the performance of hundreds of tutors by prompting them with new ways to explain concepts to students. With the help of the tool, dubbed Tutor CoPilot, students assigned to the weakest tutors began posting academic results nearly equal to those assigned to the strongest. And the cost to run the program was just $20 per pupil.
Faculty must have the time and support necessary to come to terms with this new technology, and that requires us to change how we view professional development in higher education and K-12. We cannot treat generative AI as a one-off problem that can be solved by a workshop, an invited talk, or a course policy discussion. Generative AI in education has to be viewed as a continuum. Faculty need a myriad of support options each semester:
Course buyouts
Fellowships
Learning communities
Reading groups
AI Institutes and workshops
Funding to explore the scholarship of teaching and learning around generative AI
Education leaders should focus on integrating AI literacy, civic education, and work-based learning to equip students for future challenges and opportunities.
Building social capital and personalized learning environments will be crucial for student success in a world increasingly influenced by AI and decentralized power structures.
Institutions are balancing capacity issues and rapid technological advancements—including artificial intelligence—while addressing a loss of trust in higher education.
To adapt to the future, technology and data leaders must work strategically to restore trust, prepare for policy updates, and plan for online education growth.
Risks on the Horizon: ASL Levels
The two key risks Dario is concerned about are:
a) cyber, bio, radiological, nuclear (CBRN)
b) model autonomy
These risks are captured in Anthropic’s framework for understanding AI Safety Levels (ASL):
1. ASL-1: Narrow-task AI like Deep Blue (no autonomy, minimal risk).
2. ASL-2: Current systems like ChatGPT/Claude, which lack autonomy and don’t pose significant risks beyond information already accessible via search engines.
3. ASL-3: Agents arriving soon (potentially next year) that can meaningfully assist non-state actors in dangerous activities like cyber or CBRN (chemical, biological, radiological, nuclear) attacks. Security and filtering are critical at this stage to prevent misuse.
4. ASL-4: AI smart enough to evade detection, deceive testers, and assist state actors with dangerous projects. At this level, the model would be capable enough to be the tool of choice for anything dangerous. Mechanistic interpretability becomes crucial for verifying AI behavior.
5. ASL-5: AGI surpassing human intelligence in all domains, posing unprecedented challenges.
Anthropic’s if/then framework ensures proactive responses: if a model demonstrates danger, the team clamps down hard, enforcing strict controls.
Should You Still Learn to Code in an A.I. World? — from nytimes.com
Coding boot camps once looked like the golden ticket to an economically secure future. But as that promise fades, what should you do? Keep learning, until further notice.
Compared with five years ago, the number of active job postings for software developers has dropped 56 percent, according to data compiled by CompTIA. For inexperienced developers, the plunge is an even worse 67 percent.
“I would say this is the worst environment for entry-level jobs in tech, period, that I’ve seen in 25 years,” said Venky Ganesan, a partner at the venture capital firm Menlo Ventures.
For years, the career advice from everyone who mattered — the Apple chief executive Tim Cook, your mother — was “learn to code.” It felt like an immutable equation: Coding skills + hard work = job.
There’s a new coding startup in town, and it just MIGHT have everybody else shaking in their boots (we’ll qualify that in a sec, don’t worry).
It’s called Lovable, the “world’s first AI fullstack engineer.”
… Lovable does all of that by itself. Tell it what you want to build in plain English, and it creates everything you need. Want users to be able to log in? One click. Need to store data? One click. Want to accept payments? You get the idea.
Early users are backing up these claims. One person even launched a startup that made Product Hunt’s top 10 using just Lovable.
As for us, we made a Wordle clone in 2 minutes with one prompt. Only edit needed? More words in the dictionary. It’s like, really easy y’all.
From DSC: I have to admit I’m a bit suspicious here, as the “conversation practice” product seems a bit too scripted at times, but I post it because the idea of using AI to practice soft skills makes a great deal of sense:
This is mind-blowing!
NVIDIA has introduced Edify 3D, a 3D AI generator that lets us create high-quality 3D scenes using just a simple prompt. And all the assets are fully editable!
From DSC: I’m not trying to gossip here. I post this because Sam Altman is the head of arguably one of the most powerful companies in the world today — at least in terms of introducing change to a variety of societies throughout the globe (both positive and negative). So now that we’ve seen almost the entire leadership team head out the door, this certainly gives me major pause. I don’t like it. Items like the ones below begin to capture some of why I’m troubled and suspicious about these moves.
OpenAI to Become For-Profit Company — from wsj.com by Deepa Seetharaman, Berber Jin, and Tom Dotan (behind paywall)
Planned restructuring comes amid personnel upheaval including resignation of chief technology officer.
AI researcher Jim Fan has had a charmed career. He was OpenAI’s first intern before he did his PhD at Stanford with “godmother of AI,” Fei-Fei Li. He graduated into a research scientist position at Nvidia and now leads its Embodied AI “GEAR” group. The lab’s current work spans foundation models for humanoid robots to agents for virtual worlds. Jim describes a three-pronged data strategy for robotics, combining internet-scale data, simulation data and real world robot data. He believes that in the next few years it will be possible to create a “foundation agent” that can generalize across skills, embodiments and realities—both physical and virtual. He also supports Jensen Huang’s idea that “Everything that moves will eventually be autonomous.”
Runway Partners with Lionsgate — from runwayml.com via The Rundown AI
Runway and Lionsgate are partnering to explore the use of AI in film production.
Lionsgate and Runway have entered into a first-of-its-kind partnership centered around the creation and training of a new AI model, customized on Lionsgate’s proprietary catalog. Fundamentally designed to help Lionsgate Studios, its filmmakers, directors and other creative talent augment their work, the model generates cinematic video that can be further iterated using Runway’s suite of controllable tools.
Per The Rundown: Lionsgate, the film company behind The Hunger Games, John Wick, and Saw, teamed up with AI video generation company Runway to create a custom AI model trained on Lionsgate’s film catalogue.
The details:
The partnership will develop an AI model specifically trained on Lionsgate’s proprietary content library, designed to generate cinematic video that filmmakers can further manipulate using Runway’s tools.
Lionsgate sees AI as a tool to augment and enhance its current operations, streamlining both pre-production and post-production processes.
Runway is considering ways to offer similar custom-trained models as templates for individual creators, expanding access to AI-powered filmmaking tools beyond major studios.
Why it matters: As many writers, actors, and filmmakers strike against the use of AI, Lionsgate is diving head-first into the world of generative AI through its partnership with Runway. This is one of the first major collabs between an AI startup and a major Hollywood company — and its success or failure could set a precedent for years to come.
Each prompt on ChatGPT flows through a server that runs thousands of calculations to determine the best words to use in a response.
In completing those calculations, these servers, typically housed in data centers, generate heat. Often, water systems are used to cool the equipment and keep it functioning. Water transports the heat generated in the data centers into cooling towers to help it escape the building, similar to how the human body uses sweat to keep cool, according to Shaolei Ren, an associate professor at UC Riverside.
Where electricity is cheaper, or water comparatively scarce, electricity is often used to cool these warehouses with large units resembling air-conditioners, he said. That means the amount of water and electricity an individual query requires can depend on a data center’s location and vary widely.
AI, Humans and Work: 10 Thoughts. — from rishad.substack.com by Rishad Tobaccowala
The Future Does Not Fit in the Containers of the Past. Edition 215.
10 thoughts about AI, Humans and Work in 10 minutes:
AI is still under-hyped.
AI itself will be like electricity and is unlikely to be a differentiator for most firms.
AI is not alive but can be thought of as a new species.
Knowledge will be free, and every knowledge worker’s job will change in 2025.
The key about AI is not to ask what AI will do to us but what AI can do for us.
Today the workforce is getting older, and the number of younger workers in positions of senior management is growing. These two developments might appear to spell trouble, in that they seem to set the generations against one another, but the author of this article argues that in fact they represent an important opportunity: If companies can figure out how to enable the intergenerational transfer of the wisdom that comes with age and experience, they can strengthen themselves — and the workplace as a whole.
It also allowed us to develop a list of the character qualities that most commonly defined our best informal mentors, among them: less ego and more collaboration skills, a knack for asking generative questions, and an ability to offer unvarnished insight that feels like a gift as opposed to judgment.
…
It’s time that we invest as much energy in helping older workers distill their wisdom as we do in helping younger workers accumulate their knowledge.
From DSC: I think Chip hits on many important and valuable insights in this article. His messages apply to all kinds of organizations. Still, they are especially relevant to the Magnificent Seven (i.e., Google parent Alphabet, Meta Platforms, Amazon.com, Tesla, Apple, Microsoft, and Nvidia) and other tech-related companies, which often move forward with designing and producing things without ever thinking about whether they SHOULD be producing those things. What are the positive and negative ramifications of this technology on society? THAT’s a wise question.