SALT LAKE CITY, Oct. 22, 2024 /PRNewswire/ — Instructure, the leading learning ecosystem, and UPCEA, the online and professional education association, announced the results of a survey on whether institutions are leveraging AI to improve learner outcomes and manage records, along with the specific ways these tools are being utilized. Overall, the study revealed that interest in the potential of these technologies is far outpacing adoption. Most respondents are heavily involved in developing learner experiences and tracking outcomes, though nearly half report their institutions have yet to adopt AI-driven tools for these purposes. The research also found that only three percent of institutions have implemented Comprehensive Learner Records (CLRs), which provide a complete overview of an individual’s lifelong learning experiences.
In the nearly two years since generative artificial intelligence burst into public consciousness, U.S. schools of education have not kept pace with the rapid changes in the field, a new report suggests.
Only a handful of teacher training programs are moving quickly enough to equip new K-12 teachers with a grasp of AI fundamentals — and fewer still are helping future teachers grapple with larger issues of ethics and what students need to know to thrive in an economy dominated by the technology.
The report, from the Center on Reinventing Public Education, a think tank at Arizona State University, tapped leaders at more than 500 U.S. education schools, asking how their faculty and preservice teachers are learning about AI. Through surveys and interviews, researchers found that just one in four institutions now incorporates training on innovative teaching methods that use AI. Most lack policies on using AI tools, suggesting that they probably won’t be ready to teach future educators about the intricacies of the field anytime soon.
It is bonkers that I can write out all my life goals on a sheet of paper, take a photo of it, and just ask Claude or ChatGPT for help.
I get a complete plan, milestones, KPIs, motivation, and even action support to get there.
As beta testers, we’re shaping the tools of tomorrow. As researchers, we’re pioneering new pedagogical approaches. As ethical guardians, we’re ensuring that AI enhances rather than compromises the educational experience. As curators, we’re guiding students through the wealth of information AI provides. And as learners ourselves, we’re staying at the forefront of educational innovation.
In a groundbreaking study, researchers from Penn Engineering showed how AI-powered robots can be manipulated to ignore safety protocols, allowing them to perform harmful actions despite normally rejecting dangerous task requests.
What did they find?
Researchers found previously unknown security vulnerabilities in AI-governed robots and are working to address these issues to ensure the safe use of large language models (LLMs) in robotics.
Their newly developed algorithm, RoboPAIR, reportedly achieved a 100% jailbreak rate by bypassing the safety protocols on three different AI robotic systems in a few days.
Using RoboPAIR, researchers were able to manipulate test robots into performing harmful actions, like bomb detonation and blocking emergency exits, simply by changing how they phrased their commands.
Why does it matter?
This research highlights the importance of spotting weaknesses in AI systems to improve their safety, allowing us to test and train them to prevent potential harm.
From DSC: Great! Just what we wanted to hear. But does it surprise anyone? Even so…we move forward at warp speeds.
From DSC:
So, given the above item, does the next item make you a bit nervous as well? I saw someone on Twitter/X exclaim, “What could go wrong?” I can’t say I didn’t feel the same way.
We’re also introducing a groundbreaking new capability in public beta: computer use. Available today on the API, developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental—at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time.
Per The Rundown AI:
The Rundown: Anthropic just introduced a new capability called ‘computer use’, alongside upgraded versions of its AI models, which enables Claude to interact with computers by viewing screens, typing, moving cursors, and executing commands.
… Why it matters: While many hoped for Opus 3.5, Anthropic’s Sonnet and Haiku upgrades pack a serious punch. Plus, with the new computer use embedded right into its foundation models, Anthropic just sent a warning shot to tons of automation startups—even if the capabilities aren’t earth-shattering… yet.
Also related/see:
What is Anthropic’s AI Computer Use? — from ai-supremacy.com by Michael Spencer Task automation, AI at the intersection of coding and AI agents take on new frenzied importance heading into 2025 for the commercialization of Generative AI.
New Claude, Who Dis? — from theneurondaily.com Anthropic just dropped two new Claude models…oh, and Claude can now use your computer.
What makes Act-One special? It can capture the soul of an actor’s performance using nothing but a simple video recording. No fancy motion capture equipment, no complex face rigging, no army of animators required. Just point a camera at someone acting, and watch as their exact expressions, micro-movements, and emotional nuances get transferred to an AI-generated character.
Think about what this means for creators: you could shoot an entire movie with multiple characters using just one actor and a basic camera setup. The same performance can drive characters with completely different proportions and looks, while maintaining the authentic emotional delivery of the original performance. We’re witnessing the democratization of animation tools that used to require millions in budget and years of specialized training.
Also related/see:
Introducing, Act-One. A new way to generate expressive character performances inside Gen-3 Alpha using a single driving video and character image. No motion capture or rigging required.
Google has signed a “world first” deal to buy energy from a fleet of mini nuclear reactors to generate the power needed for the rise in use of artificial intelligence.
The US tech corporation has ordered six or seven small nuclear reactors (SMRs) from California’s Kairos Power, with the first due to be completed by 2030 and the remainder by 2035.
After the extreme peak and summer slump of 2023, ChatGPT has been setting new traffic highs since May
ChatGPT has been topping its web traffic records for months now, with September 2024 traffic up 112% year-over-year (YoY) to 3.1 billion visits, according to Similarweb estimates. That’s a change from last year, when traffic to the site went through a boom-and-bust cycle.
Google has made a historic agreement to buy energy from a group of small nuclear reactors (SMRs) from Kairos Power in California. This is the first nuclear power deal specifically for AI data centers in the world.
Hey creators!
Made on YouTube 2024 is here and we’ve announced a lot of updates that aim to give everyone the opportunity to build engaging communities, drive sustainable businesses, and express creativity on our platform.
Below is a roundup with key info – feel free to upvote the announcements that you’re most excited about and subscribe to this post to get updates on these features! We’re looking forward to another year of innovating with our global community. It’s a future full of opportunities, and it’s all Made on YouTube!
Today, we’re announcing new agentic capabilities that will accelerate these gains and bring AI-first business process to every organization.
First, the ability to create autonomous agents with Copilot Studio will be in public preview next month.
Second, we’re introducing ten new autonomous agents in Dynamics 365 to build capacity for every sales, service, finance and supply chain team.
10 Daily AI Use Cases for Business Leaders — from flexos.work by Daan van Rossum While AI is becoming more powerful by the day, business leaders still wonder why and where to apply it today. I take you through 10 critical use cases where AI should take over your work or partner with you.
Emerging Multi-Modal AI Video Creation Platforms
The rise of multi-modal AI platforms has revolutionized content creation, allowing users to research, write, and generate images in one app. Now, a new wave of platforms is extending these capabilities to video creation and editing.
Multi-modal video platforms combine various AI tools for tasks like writing, transcription, text-to-voice conversion, image-to-video generation, and lip-syncing. These platforms leverage open-source models like FLUX and LivePortrait, along with APIs from services such as ElevenLabs, Luma AI, and Gen-3.
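As a rough illustration of how such a platform might chain these tools together, the sketch below models the workflow as stages that each transform a shared project object. The stage functions are hypothetical stand-ins for real services (script writing, text-to-voice, image-to-video), not actual APIs:

```python
from dataclasses import dataclass, field

@dataclass
class VideoProject:
    """Tracks the artifacts produced as a project moves through the pipeline."""
    topic: str
    artifacts: dict = field(default_factory=dict)

# Each stage below is a placeholder for a real service call
# (e.g. a text-to-voice API or an image-to-video model).
def write_script(project: VideoProject) -> VideoProject:
    project.artifacts["script"] = f"Narration for: {project.topic}"
    return project

def synthesize_voice(project: VideoProject) -> VideoProject:
    # In practice this stage would call a text-to-voice service.
    project.artifacts["audio"] = f"voice({project.artifacts['script']})"
    return project

def generate_video(project: VideoProject) -> VideoProject:
    # In practice this stage would call an image-to-video model,
    # then a lip-syncing step to align mouth movement with the audio.
    project.artifacts["video"] = f"video({project.artifacts['audio']})"
    return project

PIPELINE = [write_script, synthesize_voice, generate_video]

def run_pipeline(topic: str) -> VideoProject:
    project = VideoProject(topic)
    for stage in PIPELINE:
        project = stage(project)
    return project
```

The appeal of these platforms is exactly this hand-off: each stage consumes the previous stage's artifact, so a user never has to export and re-import between separate apps.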
Going forward, the opportunity for AI agents will be “gigantic,” according to Nvidia founder and CEO Jensen Huang.
Already, progress is “spectacular and surprising,” with AI development moving faster and faster and the industry getting into the “flywheel zone” that technology needs to advance, Huang said in a fireside chat at Salesforce’s flagship event Dreamforce this week.
“This is an extraordinary time,” Huang said while on stage with Marc Benioff, Salesforce chair, CEO and co-founder. “In no time in history has technology moved faster than Moore’s Law. We’re moving way faster than Moore’s Law, arguably Moore’s Law squared.”
“We’ll have agents working with agents, agents working with us,” said Huang.
As we navigate the rapidly evolving landscape of artificial intelligence in education, a troubling trend has emerged. What began as cautious skepticism has calcified into rigid opposition. The discourse surrounding AI in classrooms has shifted from empirical critique to categorical rejection, creating a chasm between the potential of AI and its practical implementation in education.
This hardening of attitudes comes at a significant cost. While educators and policymakers debate, students find themselves caught in the crossfire. They lack safe, guided access to AI tools that are increasingly ubiquitous in the world beyond school walls. In the absence of formal instruction, many are teaching themselves to use these tools, often in less than productive ways. Others live in a state of constant anxiety, fearing accusations of AI reliance in their work. These are just a few symptoms of an overarching educational culture that has become resistant to change, even as the world around it transforms at an unprecedented pace.
Yet, as this calcification sets in, I find myself in a curious position: the more I thoughtfully integrate AI into my teaching practice, the more I witness its potential to enhance and transform education.
The urgency to integrate AI competencies into education is about preparing students not just to adapt to inevitable changes but to lead the charge in shaping an AI-augmented world. It’s about equipping them to ask the right questions, innovate responsibly, and navigate the ethical quandaries that come with such power.
AI in education should augment and complement their aptitude and expertise, to personalize and optimize the learning experience, and to support lifelong learning and development. AI in education should be a national priority and a collaborative effort among all stakeholders, to ensure that AI is designed and deployed in an ethical, equitable, and inclusive way that respects the diversity and dignity of all learners and educators and that promotes the common good and social justice. AI in education should be about the production of AI, not just the consumption of AI, meaning that learners and educators should have the opportunity to learn about AI, to participate in its creation and evaluation, and to shape its impact and direction.
86% of students globally are regularly using AI in their studies, with 54% of them using AI on a weekly basis, the recent Digital Education Council Global AI Student Survey found.
ChatGPT was found to be the most widely used AI tool, with 66% of students using it, and over 2 in 3 students reported using AI for information searching.
Despite their high rates of AI usage, 1 in 2 students do not feel AI ready. 58% reported that they do not feel that they had sufficient AI knowledge and skills, and 48% do not feel adequately prepared for an AI-enabled workplace.
The Post-AI Instructional Designer — from drphilippahardman.substack.com by Dr. Philippa Hardman How the ID role is changing, and what this means for your key skills, roles & responsibilities
Specifically, the study revealed that teachers who reported most productivity gains were those who used AI not just for creating outputs (like quizzes or worksheets) but also for seeking input on their ideas, decisions and strategies.
Those who engaged with AI as a thought partner throughout their workflow, using it to generate ideas, define problems, refine approaches, develop strategies and gain confidence in their decisions gained significantly more from their collaboration with AI than those who only delegated functional tasks to AI.
Leveraging Generative AI for Inclusive Excellence in Higher Education — from er.educause.edu by Lorna Gonzalez, Kristi O’Neil-Gonzalez, Megan Eberhardt-Alstot, Michael McGarry and Georgia Van Tyne Drawing from three lenses of inclusion, this article considers how to leverage generative AI as part of a constellation of mission-centered inclusive practices in higher education.
The hype and hesitation about generative artificial intelligence (AI) diffusion have led some colleges and universities to take a wait-and-see approach. However, AI integration does not need to be an either/or proposition where its use is either embraced or restricted or its adoption aimed at replacing or outright rejecting existing institutional functions and practices. Educators, educational leaders, and others considering academic applications for emerging technologies should consider ways in which generative AI can complement or augment mission-focused practices, such as those aimed at accessibility, diversity, equity, and inclusion. Drawing from three lenses of inclusion—accessibility, identity, and epistemology—this article offers practical suggestions and considerations that educators can deploy now. It also presents an imperative for higher education leaders to partner toward an infrastructure that enables inclusive practices in light of AI diffusion.
One example of how to leverage AI:
How to Leverage AI for Identity Inclusion
Educators can use the following strategies to intentionally design instructional content with identity inclusion in mind.
Provide a GPT or AI assistant with upcoming lesson content (e.g., lecture materials or assignment instructions) and ask it to provide feedback (e.g., troublesome vocabulary, difficult concepts, or complementary activities) from certain perspectives. Begin with a single perspective (e.g., first-time, first-year student), but layer in more to build complexity as you interact with the GPT output.
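A minimal sketch of how that kind of prompt might be composed programmatically, so that new perspectives can be layered in across iterations. The helper function and its wording are illustrative assumptions of mine, not from the article:

```python
def build_feedback_prompt(lesson_content: str, perspectives: list[str]) -> str:
    """Compose a review prompt asking an AI assistant to flag troublesome
    vocabulary and difficult concepts, and to suggest complementary
    activities, from one or more learner perspectives."""
    persona_lines = "\n".join(f"- {p}" for p in perspectives)
    return (
        "Review the lesson content below from each of these perspectives:\n"
        f"{persona_lines}\n"
        "For each perspective, flag troublesome vocabulary and difficult "
        "concepts, and suggest complementary activities.\n\n"
        f"Lesson content:\n{lesson_content}"
    )

# Begin with a single perspective, then append more on later iterations
# to build complexity as you interact with the output.
prompt = build_feedback_prompt(
    "Week 3: Introduction to statistical inference...",
    ["a first-time, first-year student"],
)
```

Because the perspectives are a plain list, layering in a second one (say, a returning adult learner) is just another list entry rather than a rewritten prompt.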
Gen AI’s next inflection point: From employee experimentation to organizational transformation — from mckinsey.com by Charlotte Relyea, Dana Maor, and Sandra Durth with Jan Bouly As many employees adopt generative AI at work, companies struggle to follow suit. To capture value from current momentum, businesses must transform their processes, structures, and approach to talent.
To harness employees’ enthusiasm and stay ahead, companies need a holistic approach to transforming how the whole organization works with gen AI; the technology alone won’t create value.
Our research shows that early adopters prioritize talent and the human side of gen AI more than other companies (Exhibit 3). Our survey shows that nearly two-thirds of them have a clear view of their talent gaps and a strategy to close them, compared with just 25 percent of the experimenters. Early adopters focus heavily on upskilling and reskilling as a critical part of their talent strategies, as hiring alone isn’t enough to close gaps and outsourcing can hinder strategic-skills development. Finally, 40 percent of early-adopter respondents say their organizations provide extensive support to encourage employee adoption, versus 9 percent of experimenter respondents.
Change blindness — from oneusefulthing.org by Ethan Mollick 21 months later
I don’t think anyone is completely certain about where AI is going, but we do know that things have changed very quickly, as the examples in this post have hopefully demonstrated. If this rate of change continues, the world will look very different in another 21 months. The only way to know is to live through it.
Over the subsequent weeks, I’ve made other adjustments, but that first one was the one I asked myself:
What are you doing?
Why are you doing it that way?
How could you change that workflow with AI?
Applying the AI to the workflow, then asking, “Is this what I was aiming for? How can I improve the prompt to get closer?”
Documenting what worked (or didn’t). Re-doing the work with AI to see what happened, and asking again, “Did this work?”
So, something that took me WEEKS of hard work, and in some cases I found impossible, was made easy. Like, instead of weeks, it takes 10 minutes. The hard part? Building the prompt to do what I want, fine-tuning it to get the result. But that doesn’t take as long now.
The landscape of education is on the brink of a profound transformation, driven by rapid advancements in artificial intelligence. This shift was highlighted recently by Andrej Karpathy’s announcement of Eureka Labs, a venture aimed at creating an “AI-native” school. As we look ahead, it’s clear that the integration of AI in education will reshape how we learn, teach, and think about schooling altogether.
…
Traditional textbooks will begin to be replaced by interactive, AI-powered learning materials that adapt in real-time to a student’s progress.
…
As we approach 2029, the line between physical and virtual learning environments will blur significantly.
Curriculum design will become more flexible and personalized, with AI systems suggesting learning pathways based on each student’s interests, strengths, and career aspirations. … The boundaries between formal education and professional development will blur, creating a continuous learning ecosystem.
In this episode of the Next Big Idea podcast, host Rufus Griscom and Bill Gates are joined by Andy Sack and Adam Brotman, co-authors of an exciting new book called “AI First.” Together, they consider AI’s impact on healthcare, education, productivity, and business. They dig into the technology’s risks. And they explore its potential to cure diseases, enhance creativity, and usher in a world of abundance.
Key moments:
00:05 Bill Gates discusses AI’s transformative potential in revolutionizing technology.
02:21 Superintelligence is inevitable and marks a significant advancement in AI technology.
09:23 Future AI may integrate deeply as cognitive assistants in personal and professional life.
14:04 AI’s metacognitive advancements could revolutionize problem-solving capabilities.
21:13 AI’s next frontier lies in developing human-like metacognition for sophisticated problem-solving.
27:59 AI advancements empower both good and malicious intents, posing new security challenges.
28:57 Rapid AI development raises questions about controlling its global application.
33:31 Productivity enhancements from AI can significantly improve efficiency across industries.
35:49 AI’s future applications in consumer and industrial sectors are subjects of ongoing experimentation.
46:10 AI democratization could level the economic playing field, enhancing service quality and reducing costs.
51:46 AI plays a role in mitigating misinformation and bridging societal divides through enhanced understanding.
The team has summarized their primary contributions as follows.
The team has offered the first instance of a simple, scalable oversight technique that greatly assists humans in more thoroughly detecting problems in real-world RLHF data.
Within the ChatGPT and CriticGPT training pools, the team has discovered that critiques produced by CriticGPT catch more inserted bugs and are preferred over those written by human contractors.
Compared to human contractors working alone, this research indicates that teams consisting of critic models and human contractors generate more thorough criticisms. When compared to reviews generated exclusively by models, this partnership lowers the incidence of hallucinations.
This study introduces Force Sampling Beam Search (FSBS), an inference-time sampling and scoring technique that balances the trade-off between minimizing spurious concerns and surfacing genuine faults in LLM-generated critiques.
a16z-backed Character.AI said today that it is now allowing users to talk to AI characters over calls. The feature currently supports multiple languages, including English, Spanish, Portuguese, Russian, Korean, Japanese and Chinese.
The startup tested the calling feature ahead of today’s public launch. During that time, it said that more than 3 million users had made over 20 million calls. The company also noted that calls with AI characters can be useful for practicing language skills, giving mock interviews, or adding them to the gameplay of role-playing games.
Google Translate can come in handy when you’re traveling or communicating with someone who speaks another language, and thanks to a new update, you can now connect with some 614 million more people. Google is adding 110 new languages to its Translate tool using its AI PaLM 2 large language model (LLM), which brings the total of supported languages to nearly 250. This follows the 24 languages added in 2022, including Indigenous languages of the Americas as well as those spoken across Africa and central Asia.
Gen-3 Alpha Text to Video is now available to everyone.
A new frontier for high-fidelity, fast and controllable video generation.
We have to provide instructors the support they need to leverage educational technologies like generative AI effectively in the service of learning. Given the amount of benefit that could accrue to students if powerful tools like generative AI were used effectively by instructors, it seems unethical not to provide instructors with professional development that helps them better understand how learning occurs and what effective teaching looks like. Without more training and support for instructors, the amount of student learning higher education will collectively “leave on the table” will only increase as generative AI gets more and more capable. And that’s a problem.
From DSC: As is often the case, David put together a solid posting here. A few comments/reflections on it:
I agree that more training/professional development is needed, especially regarding generative AI. This would help achieve a far greater ROI and impact.
The pace of change makes it difficult to see where the sand is settling…and thus what to focus on
The Teaching & Learning Groups out there are also trying to learn and grow in their knowledge (so that they can train others)
The administrators out there are also trying to figure out what all of this generative AI stuff is all about; and so are the faculty members. It takes time for educational technologies’ impact to roll out and be integrated into how people teach.
As we’re talking about multiple disciplines here, I think we need more team-based content creation and delivery.
There needs to be more research on how best to use AI — again, it would be helpful if the sand settled a bit first, so as not to waste time and $$. But then that research needs to be piped into the classrooms far better.
From DSC: Last Thursday, I presented at the Educational Technology Organization of Michigan’s Spring 2024 Retreat. I wanted to pass along my slides to you all, in case they are helpful to you.
What about course videos? Professors can create them (by lecturing into a camera for several hours, hopefully in different clothes) from the readings, from their interpretations of the readings, from their own case experiences – from anything they like. But now professors can direct the creation of the videos by talking – actually describing – to a CustomGPT about what they’d like the video to communicate with their own or another image. Wait. What? They can make a video by talking to a CustomGPT and even select the image they want the “actor” to use? Yes. They can also add a British accent and insert some (GenAI-developed) jokes into the videos if they like. All this and much more is now possible. This means that a professor can specify how long the video should be, what sources should be consulted, and describe the demeanor the professor wants the video to project.
From DSC: Though I wasn’t crazy about the clickbait type of title here, I still thought that the article was solid and thought-provoking. It contained several good ideas for using AI.
Excerpt from a recent EdSurge Higher Ed newsletter:
There are darker metaphors though — ones that focus on the hazards for humanity of the tech. Some professors worry that AI bots are simply replacing hired essay-writers for many students, doing work for a student that they can then pass off as their own (and doing it for free).
From DSC: Hmmm…the use of essay writers was around long before AI became mainstream within higher education. So we already had a serious problem where students didn’t see the why in what they were being asked to do. Some students still aren’t sold on the why of the work in the first place. The situation seems to involve ethics, yes, but it also seems to say that we haven’t sold students on the benefits of putting in the work. Students seem to be saying I don’t care about this stuff…I just need the degree so I can exit stage left.
My main point: The issue didn’t start with AI…it started long before that.
This financial stagnation is occurring as we face a multitude of escalating challenges. These challenges include but are in no way limited to, chronic absenteeism, widespread student mental health issues, critical staff shortages, rampant classroom behavior issues, a palpable sense of apathy for education in students, and even, I dare say, hatred towards education among parents and policymakers.
…
Our current focus is on keeping our heads above water, ensuring our students’ safety and mental well-being, and simply keeping our schools staffed and our doors open.
What is Ed? An easy-to-understand learning platform designed by Los Angeles Unified to increase student achievement. It offers personalized guidance and resources to students and families 24/7 in over 100 languages.
Also relevant/see:
Los Angeles Unified Bets Big on ‘Ed,’ an AI Tool for Students — by Lauraine Langreo
The Los Angeles Unified School District has launched an AI-powered learning tool that will serve as a “personal assistant” to students and their parents. The tool, named “Ed,” can provide students from the nation’s second-largest district information about their grades, attendance, upcoming tests, and suggested resources to help them improve their academic skills on their own time, Superintendent Alberto Carvalho announced March 20. Students can also use the app to find social-emotional-learning resources, see what’s for lunch, and determine when their bus will arrive.
Could OpenAI’s Sora be a big deal for elementary school kids? — from futureofbeinghuman.com by Andrew Maynard Despite all the challenges it comes with, AI-generated video could unleash the creativity of young children and provide insights into their inner worlds – if it’s developed and used responsibly
Like many others, I’m concerned about the challenges that come with hyper-realistic AI-generated video. From deep fakes and disinformation to blurring the lines between fact and fiction, generative AI video is calling into question what we can trust, and what we cannot.
And yet despite all the issues the technology is raising, it also holds quite incredible potential, including as a learning and development tool — as long as we develop and use it responsibly.
I was reminded of this a few days back while watching the latest videos from OpenAI created by their AI video engine Sora — including the one below, generated from the prompt “an elephant made of leaves running in the jungle.”
…
What struck me while watching this — perhaps more than any of the other videos OpenAI has been posting on its TikTok channel — is the potential Sora has for translating the incredibly creative but often hard to articulate ideas someone may have in their head, into something others can experience.
Can AI Aid the Early Education Workforce? — from edsurge.com by Emily Tate Sullivan During a panel at SXSW EDU 2024, early education leaders discussed the potential of AI to support and empower the adults who help our nation’s youngest children.
While the vast majority of the conversations about AI in education have centered on K-12 and higher education, few have considered the potential of this innovation in early care and education settings.
At the conference, a panel of early education leaders gathered to do just that, in a session exploring the potential of AI to support and empower the adults who help our nation’s youngest children, titled, “ChatECE: How AI Could Aid the Early Educator Workforce.”
Hau shared that K-12 educators are using the technology to improve efficiency in a number of ways, including to draft individualized education programs (IEPs), create templates for communicating with parents and administrators, and in some cases, to support building lesson plans.
Educators are, perhaps rightfully so, cautious about incorporating AI in their classrooms. With thoughtful implementation, however, AI image generators, with their ability to use any language, can provide powerful ways for students to engage with the target language and increase their proficiency.
While AI offers numerous benefits, it’s crucial to remember that it is a tool to empower educators, not replace them. The human connection between teacher and student remains central to fostering creativity, critical thinking, and social-emotional development. The role of teachers will shift towards becoming facilitators, curators, and mentors who guide students through personalized learning journeys. By harnessing the power of AI, educators can create dynamic and effective classrooms that cater to each student’s individual needs. This paves the way for a more engaging and enriching learning experience that empowers students to thrive.
In this article, seven teachers across the world share their insights on AI tools for educators. You will hear a host of varied opinions and perspectives on everything from whether AI could hasten the decline of learning foreign languages to whether AI-generated lesson plans are an infringement on teachers’ rights. A common theme emerged from those we spoke with: just as the internet changed education, AI tools are here to stay, and it is prudent for teachers to adapt.
Even though it’s been more than a year since ChatGPT made a big splash in the K-12 world, many teachers say they are still not receiving any training on using artificial intelligence tools in the classroom.
More than 7 in 10 teachers said they haven’t received any professional development on using AI in the classroom, according to a nationally representative EdWeek Research Center survey of 953 educators, including 553 teachers, conducted between Jan. 31 and March 4.
From DSC: This article mentioned the following resource:
Vast swaths of the United States are at risk of running short of power as electricity-hungry data centers and clean-technology factories proliferate around the country, leaving utilities and regulators grasping for credible plans to expand the nation’s creaking power grid.
…
A major factor behind the skyrocketing demand is the rapid innovation in artificial intelligence, which is driving the construction of large warehouses of computing infrastructure that require exponentially more power than traditional data centers. AI is also part of a huge scale-up of cloud computing. Tech firms like Amazon, Apple, Google, Meta and Microsoft are scouring the nation for sites for new data centers, and many lesser-known firms are also on the hunt.
The Obscene Energy Demands of A.I. — from newyorker.com by Elizabeth Kolbert
How can the world reach net zero if it keeps inventing new ways to consume energy?
“There’s a fundamental mismatch between this technology and environmental sustainability,” de Vries said. Recently, the world’s most prominent A.I. cheerleader, Sam Altman, the C.E.O. of OpenAI, voiced similar concerns, albeit with a different spin. “I think we still don’t appreciate the energy needs of this technology,” Altman said at a public appearance in Davos. He didn’t see how these needs could be met, he went on, “without a breakthrough.” He added, “We need fusion or we need, like, radically cheaper solar plus storage, or something, at massive scale—like, a scale that no one is really planning for.”
A generative AI reset: Rewiring to turn potential into value in 2024 — from mckinsey.com by Eric Lamarre, Alex Singla, Alexander Sukharevsky, and Rodney Zemmel; via Philippa Hardman
The generative AI payoff may only come when companies do deeper organizational surgery on their business.
Figure out where gen AI copilots can give you a real competitive advantage
Upskill the talent you have but be clear about the gen-AI-specific skills you need
Form a centralized team to establish standards that enable responsible scaling
Set up the technology architecture to scale
Ensure data quality and focus on unstructured data to fuel your models
Build trust and reusability to drive adoption and scale
Since ChatGPT dropped in the fall of 2022, everyone and their donkey has tried their hand at prompt engineering—finding a clever way to phrase your query to a large language model (LLM) or AI art or video generator to get the best results or sidestep protections. The Internet is replete with prompt-engineering guides, cheat sheets, and advice threads to help you get the most out of an LLM.
…
However, new research suggests that prompt engineering is best done by the model itself, and not by a human engineer. This has cast doubt on prompt engineering’s future—and increased suspicions that a fair portion of prompt-engineering jobs may be a passing fad, at least as the field is currently imagined.
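The idea behind model-driven prompt optimization can be sketched in a few lines. This is a minimal, hypothetical illustration, not the method from the research cited above: `llm` and `score` are stub stand-ins (a real pipeline would call a hosted model and rate responses against actual success criteria), and the candidate templates are invented for the example.

```python
def llm(prompt: str) -> str:
    # Stub: echo the prompt so the example runs without network access.
    # In practice this would be a call to a real large language model.
    return f"response to: {prompt}"

def score(response: str) -> float:
    # Stub metric: reward more detailed responses, capped at 1.0.
    # A real evaluator would check correctness against the task.
    return min(1.0, len(response) / 100)

def optimize_prompt(task: str, candidates: list[str]) -> str:
    """Let the pipeline, not a human, pick the prompt: format each
    candidate template with the task, score the model's response, and
    keep the best. A fuller optimizer would also have the model propose
    new candidate phrasings each round instead of using a fixed list."""
    return max(candidates, key=lambda c: score(llm(c.format(task=task))))
```

Under this toy metric, `optimize_prompt("2+2", ["Answer: {task}", "Think step by step, then answer: {task}"])` selects the second template, since the loop, not a human engineer, does the choosing.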
There is one very clear parallel between the digital spreadsheet and generative AI: both are computer apps that collapse time. A task that might have taken hours or days can suddenly be completed in seconds. So accept for a moment the premise that the digital spreadsheet has something to teach us about generative AI. What lessons should we absorb?
It’s that pace of change that gives me pause. Ethan Mollick, author of the forthcoming book Co-Intelligence, tells me “if progress on generative AI stops now, the spreadsheet is not a bad analogy”. We’d get some dramatic shifts in the workplace, a technology that broadly empowers workers and creates good new jobs, and everything would be fine. But is it going to stop any time soon? Mollick doubts that, and so do I.
By 2027, businesses predict that almost half (44%) of workers’ core skills will be disrupted.
Technology is moving faster than companies can design and scale up their training programmes, found the World Economic Forum’s Future of Jobs Report.
…
The Forum’s Global Risks Report 2024 found that “lack of economic opportunity” ranked as one of the top 10 biggest risks among risk experts over the next two years.
5. Skills will become even more important
With 23% of jobs expected to change in the next five years, according to the Future of Jobs Report, millions of people will need to move between declining and growing jobs.
How Workers Rise — from the-job.beehiiv.com by Paul Fain
A look forward at skills-based hiring and AI’s impacts on education and work.
Impacts of AI: Fuller is optimistic about companies making serious progress on skills-based hiring over the next five to 10 years. AI will help drive that transformation, he says, by creating the data to better understand the skills associated with jobs.
The technology will allow for a more accurate matching of skills and experiences, says Fuller, and for companies to “not rely on proxies like degrees or grade point averages or even the proxy of what someone currently makes or how fast they’ve gotten promoted on their résumé.”
Change is coming soon, Fuller predicts, particularly as AI’s impacts accelerate. And the disruption will affect wealthier Americans who’ve been spared during previous shifts in the labor market.
The Kicker: “When people in bedroom suburbs are losing their six-figure jobs, that changes politics,” Fuller says. “That changes the way people are viewing things like equity and where that leads. It’s certainly going to put a lot of pressure on the way the system has worked.”
1. Your own AI-powered coaching
Learners can go into LinkedIn Learning and ask a question or explain a challenge they are currently facing at work (we’re focusing on areas within Leadership and Management to start). AI-powered coaching will pull from the collective knowledge of our expansive LinkedIn Learning library and, instantaneously, offer advice, examples, or feedback that is personalized to the learner’s skills, job, and career goals.
What makes us so excited about this launch is we can now take everything we as LinkedIn know about people’s careers and how they navigate them and help accelerate them with AI.
…
3. Learn exactly what you need to know for your next job
When looking for a new job, it’s often the time we think about refreshing our LinkedIn profiles. It’s also a time we can refresh our skills. And with skill sets for jobs having changed by 25% since 2015 – a figure expected to reach 65% by 2030 – keeping our skills a step ahead is one of the most important things we can do to stand out.
There are a couple of ways we’re making it easier to learn exactly what you need to know for your next job:
When you set a job alert, in addition to being notified about open jobs, we’ll recommend learning courses and Professional Certificate offerings to help you build the skills needed for that role.
When you view a job, we recommend specific courses to help you build the required skills. If you have LinkedIn Learning access through your company or as part of a Premium subscription, you can follow the skills for the job; that way, we can let you know when we launch new courses for those skills and recommend content on LinkedIn that better aligns with your career goals.
2024 Edtech Predictions from Edtech Insiders — from edtechinsiders.substack.com by Alex Sarlin, Ben Kornell, and Sarah Morin
Omni-modal AI, edtech funding prospects, higher ed wake up calls, focus on career training, and more!
Alex: I talked to the 360 Learning folks at one point and they had this really interesting epiphany, which is basically that it’s been almost impossible for every individual company in the past to create a hierarchy of skills and a hierarchy of positions and actually organize what it looks like for people to move around and upskill within the company and get to new paths.
Until now. AI actually can do this very well. It can take not only job description data, but it can take actual performance data. It can actually look at what people do on a daily basis and back fit that to training, create automatic training based on it.
From DSC: I appreciated how they addressed K-12, higher ed, and the workforce all in one posting. Nice work. We don’t need siloes. We need more overall design thinking re: our learning ecosystems — as well as more collaborations. We need more on-ramps and pathways in a person’s learning/career journey.