Here are some striking numbers from Mary Meeker’s AI Trends report, which show how quickly artificial intelligence is being adopted compared with any prior technology.
AI took only three years to reach 50% user adoption in the US; mobile internet took six years, desktop internet took 12 years, and PCs took 20 years.
ChatGPT reached 100 million users in only two months and 800 million in 17 months, versus Netflix’s 100 million (10 years), Instagram’s (2.5 years), and TikTok’s (nine months).
ChatGPT hit 365 billion annual searches within two years (by 2024), a milestone that took Google 11 years (until 2009), making ChatGPT roughly 5.5x faster to reach it.
Above via Mary Meeker’s AI Trend-Analysis — from getsuperintel.com by Kim “Chubby” Isenberg: How AI’s rapid rise, efficiency race, and talent shifts are reshaping the future.
The TLDR
Mary Meeker’s new AI trends report highlights an explosive rise in global AI usage, surging model efficiency, and mounting pressure on infrastructure and talent. The shift is clear: AI is no longer experimental—it’s becoming foundational, and those who optimize for speed, scale, and specialization will lead the next wave of innovation.
The Rundown: Meta aims to release tools that eliminate humans from the advertising process by 2026, according to a report from the WSJ — developing an AI that can create ads for Facebook and Instagram using just a product image and budget.
The details:
Companies would submit product images and budgets, letting AI craft the text and visuals, select target audiences, and manage campaign placement.
The system will be able to create personalized ads that can adapt in real-time, like a car spot featuring mountains vs. an urban street based on user location.
The push would target smaller companies lacking dedicated marketing staff, promising professional-grade advertising without agency fees or specialized skills.
Advertising is a core part of Mark Zuckerberg’s AI strategy and already accounts for 97% of Meta’s annual revenue.
Why it matters: We’re already seeing AI transform advertising through image, video, and text, but Zuck’s vision takes the process entirely out of human hands. With so much marketing flowing through FB and IG, a successful system would be a major disruptor — particularly for small brands that just want results without the hassle.
How To Get Hired During the AI Apocalypse — from kathleendelaski.substack.com by Kathleen deLaski: And other discussions to have with your kids on the way to college graduation
A less temporary, more existential threat to the four-year degree: AI could hollow out the entry-level job market for knowledge workers (i.e., new college grads). And if 56% of families were saying college “wasn’t worth it” in 2023 (WSJ), what will that number look like in 2026 or beyond? The one of my kids who went to college ended up working in a bike shop for a year-ish after graduation. No regrets, but it came as a shock to them that they weren’t more employable with their neuroscience degree.
A colleague provided a great example: Her son, newly graduated, went for a job interview as an entry level writer last month and he was asked, as a test, to produce a story with AI and then use that story to write a better one by himself. He would presumably be judged on his ability to prompt AI and then improve upon its product. Is that learning how to DO? I think so. It’s using AI tools to accomplish a workplace task.
Also relevant in terms of the job search, see the following gifted article:
‘We Are the Most Rejected Generation’ — from nytimes.com by David Brooks; gifted article. David discusses admissions rates at selective colleges, ultra-hard-to-get summer internships, tough entry into student clubs, and the job market.
Things get even worse when students leave school and enter the job market. They enter what I’ve come to think of as the seventh circle of Indeed hell. Applying for jobs online is easy, so you have millions of people sending hundreds of applications each into the great miasma of the internet, and God knows which impersonal algorithm is reading them. I keep hearing and reading stories about young people who applied to 400 jobs and got rejected by all of them.
It seems we’ve created a vast multilayered system that evaluates the worth of millions of young adults and, most of the time, tells them they are not up to snuff.
Many administrators and faculty members I’ve spoken to are mystified that students would create such an unforgiving set of status competitions. But the world of competitive exclusion is the world they know, so of course they are going to replicate it.
And in this column I’m not even trying to cover the rejections experienced by the 94 percent of American students who don’t go to elite schools and don’t apply for internships at Goldman Sachs. By middle school, the system has told them that because they don’t do well on academic tests, they are not smart, not winners. That’s among the most brutal rejections our society has to offer.
A wide range of roles can or will quickly be replaced with AI, including inside sales representatives, customer service representatives, junior lawyers, junior accountants, and physicians whose focus is diagnosis.
Dario Amodei — CEO of Anthropic, one of the world’s most powerful creators of artificial intelligence — has a blunt, scary warning for the U.S. government and all of us:
AI could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10-20% in the next one to five years, Amodei told us in an interview from his San Francisco office.
Amodei said AI companies and government need to stop “sugar-coating” what’s coming: the possible mass elimination of jobs across technology, finance, law, consulting and other white-collar professions, especially entry-level gigs.
Why it matters: Amodei, 42, who’s building the very technology he predicts could reorder society overnight, said he’s speaking out in hopes of jarring government and fellow AI companies into preparing — and protecting — the nation.
Anthropic’s “Prompt Engineering Overview” is a free masterclass that’s worth its weight in gold. Their “constitutional AI prompting” section helped us create a content filter that actually works—unlike the one that kept flagging our coffee bean reviews as “inappropriate.” Apparently “rich body” triggered something…
OpenAI’s “Cookbook” is like having a Michelin-star chef explain cooking—simple for beginners, but packed with pro techniques. Their JSON formatting examples saved us 3 hours of debugging last week…
Google’s “Prompt Design Strategies” breaks down complex concepts with clear examples. Their before/after gallery showing how slight prompt tweaks improve results made us rethink everything we knew about getting quality outputs.
Pro tip: Save these guides as PDFs before they disappear behind paywalls. The best AI users keep libraries of these resources for quick reference.
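On the JSON point above: here is a minimal sketch of asking a chat model for strictly structured JSON output, assuming the OpenAI Python SDK and an OPENAI_API_KEY; the model name, schema keys, and the coffee-review text are placeholders of mine, not examples taken from the Cookbook.

```python
# Minimal sketch: request JSON-only output and parse it.
# Assumes the OpenAI Python SDK (pip install openai); model name and keys are placeholders.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "Reply with valid JSON only, no prose."},
        {
            "role": "user",
            "content": "Summarize this review as JSON with keys 'sentiment' and "
                       "'keywords': 'Rich body, smooth finish, hints of cocoa.'",
        },
    ],
    response_format={"type": "json_object"},  # JSON mode: the API enforces valid JSON
)

data = json.loads(response.choices[0].message.content)
print(data["sentiment"], data["keywords"])
```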
“To address this, organizations should consider building a sustainable AI governance model, prioritizing transparency, and tackling the complex challenge of AI-fueled imposter syndrome through reinvention. Employers who fail to approach innovation with empathy and provide employees with autonomy run the risk of losing valuable staff and negatively impacting employee productivity.”
Key findings from the report include the following:
Employees are keeping their productivity gains a secret from their employers. …
In-office employees may still log in remotely after hours. …
Younger workers are more likely to switch jobs to gain more flexibility.
AI discovers new math algorithms — by Zach Mink & Rowan Cheung. PLUS: Anthropic reportedly set to launch new Sonnet, Opus models
The Rundown: Google just debuted AlphaEvolve, a coding agent that harnesses Gemini and evolutionary strategies to craft algorithms for scientific and computational challenges — driving efficiency inside Google and solving historic math problems.
… Why it matters: Yesterday, we had OpenAI’s Jakub Pachocki saying AI has shown “significant evidence” of being capable of novel insights, and today Google has taken that a step further. Math plays a role in nearly every aspect of life, and AI’s pattern and algorithmic strengths look ready to uncover a whole new world of scientific discovery.
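To make the “evolutionary strategies” part concrete, here is a generic sketch of that kind of loop; it is my own toy illustration of the idea, not AlphaEvolve’s code. In the real system, an LLM proposes code edits and an automatic evaluator scores the resulting programs.

```python
# Toy sketch of an evolutionary search loop (not AlphaEvolve itself):
# keep a small population of candidates, mutate them, score each one with an
# automatic evaluator, and carry the fittest forward to the next generation.
import random

def evaluate(candidate: list[float]) -> float:
    """Toy objective: negative squared distance to a hidden optimum."""
    target = [3.0, -1.5, 2.0]
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def mutate(candidate: list[float]) -> list[float]:
    """Toy mutation: random perturbation; AlphaEvolve instead uses LLM-proposed code edits."""
    return [c + random.gauss(0, 0.3) for c in candidate]

population = [[0.0, 0.0, 0.0] for _ in range(8)]
for _ in range(200):
    population.sort(key=evaluate, reverse=True)
    parents = population[:4]                                  # keep the fittest
    children = [mutate(random.choice(parents)) for _ in range(4)]
    population = parents + children

best = max(population, key=evaluate)
print("Best candidate:", [round(x, 2) for x in best], "score:", round(evaluate(best), 4))
```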
At the recent HR Executive and Future Talent Council event at Bentley University near Boston, I talked with Top 100 HR Tech Influencer Joey Price about what he’s hearing from HR leaders. Price is president and CEO of Jumpstart HR and executive analyst at Aspect43, Jumpstart HR’s HR-tech research division, and author of a valuable new book, The Power of HR: How to Make an Organizational Impact as a People Professional.
This puts him solidly at the center of HR’s most relevant conversations. Price described the curiosity he’s hearing from many HR leaders about AI agents, which have become increasingly prominent in recent months.
DC: THIS could unfortunately be the ROI companies will get from large investments in #AI — reduced headcount/employees/contract workers. https://t.co/zEWlqCSWzI
Duolingo will “gradually stop using contractors to do work that AI can handle,” according to an all-hands email sent by cofounder and CEO Luis von Ahn announcing that the company will be “AI-first.” The email was posted on Duolingo’s LinkedIn account.
According to von Ahn, being “AI-first” means the company will “need to rethink much of how we work” and that “making minor tweaks to systems designed for humans won’t get us there.” As part of the shift, the company will roll out “a few constructive constraints,” including the changes to how it works with contractors, looking for AI use in hiring and in performance reviews, and that “headcount will only be given if a team cannot automate more of their work.”
Something strange, and potentially alarming, is happening to the job market for young, educated workers.
According to the New York Federal Reserve, labor conditions for recent college graduates have “deteriorated noticeably” in the past few months, and the unemployment rate now stands at an unusually high 5.8 percent. Even newly minted M.B.A.s from elite programs are struggling to find work. Meanwhile, law-school applications are surging—an ominous echo of when young people used graduate school to bunker down during the great financial crisis.
What’s going on? I see three plausible explanations, and each might be a little bit true.
The new workplace trend is not employee friendly. Artificial intelligence and automation technologies are advancing at blazing speed. A growing number of companies are using AI to streamline operations, cut costs, and boost productivity. Consequently, human workers are facing layoffs, replaced by AI. Like it or not, companies need to make tough decisions, including layoffs, to remain competitive.
Corporations including Klarna, UPS, Duolingo, Intuit and Cisco are replacing laid-off workers with AI and automation. While these technologies enhance productivity, they raise serious concerns about future job security. For many workers, there is a big concern over whether or not their jobs will be impacted.
Key takeaway: Career navigation has remained largely unchanged for decades, relying on personal networks and static job boards. The advent of AI is changing this, offering personalised career pathways, better job matching, democratised job application support, democratised access to career advice/coaching, and tailored skill development to help you get to where you need to be. With hundreds of millions of people starting new jobs every year, this transformation opens up a multi-billion-dollar opportunity for innovation in the global career navigation market.
…
A.4 How will AI disrupt this segment? Personalised recommendations: AI can consume a vast amount of information (skills, education, career history, even YouTube history and X/Twitter feeds), standardise this data at scale, and then use data models to match candidate characteristics to relevant careers and jobs. In theory, solutions could then go layers deeper, helping you position yourself for those future roles. Currently based in Amsterdam, working in Strategy at Uber, and want to work in a Product role in the future? Here are X, Y, Z specific things YOU can do in your role today to align yourself perfectly. E.g., find opportunities to manage cross-functional projects in your current remit, reach out to Joe Bloggs, also at Uber in Amsterdam, who did Strategy and moved to Product, etc.
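As a toy illustration of that matching idea (mine, not from the report), the sketch below scores roles by skill overlap and surfaces the gap as the “specific things you can do today”; a real product would use far richer profiles and learned models rather than a simple overlap score.

```python
# Minimal sketch of "match candidate characteristics to roles" using a simple
# skill-overlap score (Jaccard similarity). All names and data are made up.
from typing import Dict, Set

def jaccard(a: Set[str], b: Set[str]) -> float:
    """Overlap between two skill sets, from 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

candidate_skills = {"strategy", "stakeholder management", "sql", "experimentation"}

roles: Dict[str, Set[str]] = {
    "Product Manager": {"experimentation", "roadmapping", "stakeholder management", "sql"},
    "Data Analyst": {"sql", "python", "dashboards", "experimentation"},
    "Strategy Lead": {"strategy", "forecasting", "stakeholder management"},
}

# Rank roles by overlap with the candidate's current skills, and surface the
# missing skills as the upskilling gap to close.
for role, required in sorted(roles.items(), key=lambda kv: -jaccard(candidate_skills, kv[1])):
    gap = sorted(required - candidate_skills)
    print(f"{role}: fit={jaccard(candidate_skills, required):.2f}, skills to build={gap}")
```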
No matter the school, no matter the location, when I deliver an AI workshop to a group of teachers, there are always at least a few colleagues thinking (and sometimes voicing), “Do I really need to use AI?”
Nearly three years after ChatGPT 3.5 landed in our lives and disrupted workflows in ways we’re still unpacking, most schools are swiftly catching up. Training sessions, like the ones I lead, are springing up everywhere, with principals and administrators trying to answer the same questions: Which tools should we use? How do we use them responsibly? How do we design learning in this new landscape?
But here’s what surprises me most: despite all the advances in AI technology, the questions and concerns from teachers remain strikingly consistent.
…
In this article, I want to pull back the curtain on those conversations. These concerns aren’t signs of reluctance – they reflect sincere feelings. And they deserve thoughtful, honest answers.
This week, in advance of major announcements from us and other vendors, I give you a good overview of the AI Agent market, and discuss the new role of AI governance platforms, AI agent development tools, AI agent vendors, and how AI agents will actually manifest and redefine what we call an “application.”
I discuss ServiceNow, Microsoft, SAP, Workday, Paradox, Maki People, and other vendors. My goal today is to “demystify” this space and explain the market, the trends, and why and how your IT department is going to be building a lot of the agents you need. And prepare for our announcements next week!
DeepSeek has quietly launched Prover V2, an open-source model built to solve math problems using the Lean 4 proof assistant, which ensures every step of a proof is rigorously verified.
What’s impressive about it?
Massive scale: Based on DeepSeek-V3 with 671B parameters using a mixture-of-experts (MoE) architecture, which activates only parts of the model at a time to reduce compute costs.
Theorem solving: Uses long context windows (32K+ tokens) to generate detailed, step-by-step formal proofs for a wide range of math problems — from basic algebra to advanced calculus theorems.
Research grade: Assists mathematicians in testing new theorems automatically and helps students understand formal logic by generating both Lean 4 code and readable explanations.
New benchmark: Introduces ProverBench, a new 325-question benchmark set featuring problems from recent AIME exams and curated academic sources to evaluate mathematical reasoning.
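To make “rigorously verified” concrete, here are two tiny Lean 4 theorems of my own (not Prover V2 output); the point is that the proof assistant checks every step, so a flawed proof simply fails to compile.

```lean
-- Tiny illustrations of machine-checked proofs in Lean 4 (not Prover V2 output).
-- If any step were wrong, Lean would reject the file instead of accepting it.

theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

theorem succ_le_succ_example (n m : Nat) (h : n ≤ m) : n + 1 ≤ m + 1 :=
  Nat.succ_le_succ h
```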
The need for deep student engagement became clear at Dartmouth Geisel School of Medicine when a potential academic-integrity issue revealed gaps in its initial approach to artificial intelligence use in the classroom, leading to significant revisions to ensure equitable learning and assessment.
From George Siemens’ “SAIL: Transmutation, Assessment, Robots” e-newsletter on 5/2/25
All indications are that AI, even if it stops advancing, has the capacity to dramatically change knowledge work. Knowing things matters less than being able to navigate and make sense of complex environments. Put another way, sensemaking, meaningmaking, and wayfinding (with their yet to be defined subelements) will be the foundation for being knowledgeable going forward.
That will require being able to personalize learning to each individual learner so that who they are (not what our content is) forms the pedagogical entry point to learning. (DSC: And I would add WHAT THEY WANT to ACHIEVE.) LLMs are particularly good at transmutation. Want to explain AI to a farmer? A sentence or two in a system prompt achieves that. Know that a learner has ADHD? A few small prompt changes and it’s reflected in the way the LLM engages with learning. Talk like a pirate. Speak in the language of Shakespeare. Language changes. All a matter of a small meta comment sent to the LLM. I’m convinced that this capability to change, to transmute, information will become a central part of how LLMs and AI are adopted in education.
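Here is a minimal sketch of that transmutation idea, assuming an OpenAI-style chat-completions client; the model name, lesson text, and learner profiles are placeholders of mine, not Siemens’ examples. The lesson stays the same; only a short system prompt changes.

```python
# Minimal sketch: the same lesson, "transmuted" for different learners purely by
# swapping a short system prompt. Assumes the OpenAI Python SDK and an
# OPENAI_API_KEY env var; model name and profiles are illustrative only.
from openai import OpenAI

client = OpenAI()

LESSON = "Explain how gradient descent finds a minimum of a loss function."

learner_profiles = {
    "farmer": "Explain using farming analogies and plain language.",
    "adhd_learner": "Use short sentences, bullet points, and frequent summaries.",
    "pirate": "Answer in the voice of a pirate.",
}

for name, style in learner_profiles.items():
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": f"You are a patient tutor. {style}"},
            {"role": "user", "content": LESSON},
        ],
    )
    print(f"--- {name} ---\n{reply.choices[0].message.content}\n")
```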
… Speaking of Duolingo: it took them 12 years to develop 100 courses. In the last year, they developed an additional 148. AI is an accelerant with an impact on education that is hard to overstate. “Instead of taking years to build a single course with humans the company now builds a base course and uses AI to quickly customize it for dozens of different languages.”
FutureHouse is launching our platform, bringing the first publicly available superintelligent scientific agents to scientists everywhere via a web interface and API. Try it out for free at https://platform.futurehouse.org.
We are entering a new reality—one in which AI can reason and solve problems in remarkable ways. This intelligence on tap will rewrite the rules of business and transform knowledge work as we know it. Organizations today must navigate the challenge of preparing for an AI-enhanced future, where AI agents will gain increasing levels of capability over time that humans will need to harness as they redesign their business. Human ambition, creativity, and ingenuity will continue to create new economic value and opportunity as we redefine work and workflows.
As a result, a new organizational blueprint is emerging, one that blends machine intelligence with human judgment, building systems that are AI-operated but human-led. Like the Industrial Revolution and the internet era, this transformation will take decades to reach its full promise and involve broad technological, societal, and economic change.
To help leaders understand how knowledge work will evolve, Microsoft analyzed survey data from 31,000 workers across 31 countries, LinkedIn labor market trends, and trillions of Microsoft 365 productivity signals. We also spoke with AI-native startups, academics, economists, scientists, and thought leaders to explore what work could become. The data and insights point to the emergence of an entirely new organization, a Frontier Firm that looks markedly different from those we know today. Structured around on-demand intelligence and powered by “hybrid” teams of humans + agents, these companies scale rapidly, operate with agility, and generate value faster.
Frontier Firms are already taking shape, and within the next 2–5 years we expect that every organization will be on their journey to becoming one. 82% of leaders say this is a pivotal year to rethink key aspects of strategy and operations, and 81% say they expect agents to be moderately or extensively integrated into their company’s AI strategy in the next 12–18 months. Adoption is accelerating: 24% of leaders say their companies have already deployed AI organization-wide, while just 12% remain in pilot mode.
The time to act is now. The question for every leader and employee is: how will you adapt?
Anthropic expects AI-powered virtual employees to begin roaming corporate networks in the next year, the company’s top security leader told Axios in an interview this week.
Why it matters: Managing those AI identities will require companies to reassess their cybersecurity strategies or risk exposing their networks to major security breaches.
The big picture: Virtual employees could be the next AI innovation hotbed, Jason Clinton, the company’s chief information security officer, told Axios.
Over the course of the next 10 years, AI-powered institutions will rise in the rankings. US News & World Report will factor a college’s AI capabilities into its calculations. Accrediting agencies will assess the degree of AI integration into pedagogy, research, and student life. Corporations will want to partner with universities that have demonstrated AI prowess. In short, we will see the emergence of the AI haves and have-nots.
…
What’s happening in higher education today has a name: creative destruction. The economist Joseph Schumpeter coined the term in 1942 to describe how innovation can transform industries. That typically happens when an industry has both a dysfunctional cost structure and a declining value proposition. Both are true of higher education.
…
Out of the gate, professors will work with technologists to get AI up to speed on specific disciplines and pedagogy. For example, AI could be “fed” course material on Greek history or finance; then, guided by human professors as they sort through the material, the AI could learn the structure of the discipline and go on to develop lectures, videos, supporting documentation, and assessments.
…
In the near future, if a student misses class, they will be able to watch a recording that an AI bot captured. Or the AI bot will find a similar lecture from another professor at another accredited university. If you need tutoring, an AI bot will be ready to help any time, day or night. Similarly, a student going on a trip who wishes to take an exam on the plane will be able to log on and complete the AI-designed and AI-administered exam. Students will no longer be bound by a rigid class schedule. Instead, they will set the schedule that works for them.
Early and mid-career professors who hope to survive will need to adapt and learn how to work with AI. They will need to immerse themselves in research on AI and pedagogy and understand its effect on the classroom.
From DSC: I had a very difficult time deciding which excerpts to include. There were so many more excerpts for us to think about with this solid article. While I don’t agree with several things in it, EVERY professor, president, dean, and administrator working within higher education today needs to read this article and seriously consider what Scott Latham is saying.
Change is already here, but according to Scott, we haven’t seen anything yet. I agree with him; as a futurist, one has to consider the potential scenarios that Scott lays out for AI’s creative destruction of higher education. Scott asserts that some significant and upcoming impacts will be experienced by faculty members, doctoral students, and graduate/teaching assistants (and Teaching & Learning Centers and IT Departments, I would add). But he doesn’t stop there. He brings in presidents, deans, and other members of the leadership teams out there.
There are a few places where Scott and I differ.
The foremost one is the importance of the human element — i.e., the human faculty member and students’ learning preferences. I think many (most?) students and lifelong learners will want to learn from a human being. IBM abandoned their 5-year, $100M ed push last year and one of the key conclusions was that people want to learn from — and with — other people:
To be sure, AI can do sophisticated things such as generating quizzes from a class reading and editing student writing. But the idea that a machine or a chatbot can actually teach as a human can, he said, represents “a profound misunderstanding of what AI is actually capable of.”
Nitta, who still holds deep respect for the Watson lab, admits, “We missed something important. At the heart of education, at the heart of any learning, is engagement. And that’s kind of the Holy Grail.”
— Satya Nitta, a longtime computer researcher at IBM’s Watson Research Center in Yorktown Heights, NY.
By the way, it isn’t easy for me to write this, as I wanted AI and other related technologies to be able to do just what IBM was hoping they would be able to do.
Also, I would use the term learning preferences where Scott uses the term learning styles.
Scott also mentions:
“In addition, faculty members will need to become technologists as much as scholars. They will need to train AI in how to help them build lectures, assessments, and fine-tune their classroom materials. Further training will be needed when AI first delivers a course.”
It has been my experience from working with faculty members for over 20 years that not all faculty members want to become technologists. They may not have the time, interest, and/or aptitude to become one (and vice versa for technologists who likely won’t become faculty members).
That all said, Scott relays many things that I have reflected upon and relayed for years now via this Learning Ecosystems blog and also via The Learning from the Living [AI-Based Class] Room vision — the use of AI to offer personalized and job-relevant learning, the rising costs of higher education, the development of new learning-related offerings and credentials at far less expensive prices, the need to provide new business models and emerging technologies that are devoted more to lifelong learning, plus several other things.
So this article is definitely worth your time to read, especially if you are working in higher education or are considering a career therein!
Google Public Sector and the University of Michigan’s Ross School of Business have launched an advanced Virtual Teaching Assistant pilot program aimed at improving personalized learning and enlightening educators on artificial intelligence in the classroom.
The AI technology, aided by Google’s Gemini chatbot, provides students with all-hours access to support and self-directed learning. The Virtual TA represents the next generation of educational chatbots, serving as a sophisticated AI learning assistant that instructors can use to modify their specific lessons and teaching styles.
The Virtual TA facilitates self-paced learning for students, provides on-demand explanations of complex course concepts, guides them through problem-solving, and acts as a practice partner. It’s designed to foster critical thinking by never giving away answers, ensuring students actively work toward solutions.
Blind Spot on AI — from the-job.beehiiv.com by Paul Fain: Office tasks are being automated now, but nobody has answers on how education and worker upskilling should change.
Students and workers will need help adjusting to a labor market that appears to be on the verge of a historic disruption as many business processes are automated. Yet job projections and policy ideas are sorely lacking.
…
The benefits of agentic AI are already clear for a wide range of organizations, including small nonprofits like CareerVillage. But the ability to automate a broad range of business processes means that education programs and skills training for knowledge workers will need to change. And as Chung writes in a must-read essay, we have a blind spot with predicting the impacts of agentic AI on the labor market.
“Without robust projections,” he writes, “policymakers, businesses, and educators won’t be able to come to terms with how rapidly we need to start this upskilling.”
We introduce AI co-scientist, a multi-agent AI system built with Gemini 2.0 as a virtual scientific collaborator to help scientists generate novel hypotheses and research proposals, and to accelerate the clock speed of scientific and biomedical discoveries.
There is a speed limit. GenAI technology continues to advance at incredible speed. However, most organizations are moving at the speed of organizations, not at the speed of technology. No matter how quickly the technology advances—or how hard the companies producing GenAI technology push—organizational change in an enterprise can only happen so fast.
Barriers are evolving. Significant barriers to scaling and value creation are still widespread across key areas. And, over the past year, regulatory uncertainty and risk management have risen on organizations’ lists of concerns to address. Also, levels of trust in GenAI are still moderate for the majority of organizations. Even so, with increased customization and accuracy of models, combined with a focus on better governance, adoption of GenAI is becoming more established.
Some uses are outpacing others. Application of GenAI is further along in some business areas than in others in terms of integration, return on investment (ROI) and expectations. The IT function is most mature; cybersecurity, operations, marketing and customer service are also showing strong adoption and results. Organizations reporting higher ROI for their most scaled initiatives are broadly further along in their GenAI journeys.
This company’s experience offers three crucial lessons for other organizational leaders who may be contemplating cutting or reducing talent development investments in their 2025 budgets to focus on “growth.”
Leadership development isn’t a luxury – it’s a strategic imperative…
Succession planning must be an ongoing process, not a reactive measure…
The cost of developing leaders is far less than the cost of not having them when you need them most…
Lou Gerstner, writing about his legendary turnaround at IBM, said, “Culture isn’t just one aspect of the game, it is the game. In the end, an organization is nothing more than the collective capacity of its people to create value… What does the culture reward and punish – individual achievement or team play, risk taking or consensus building?”
Most business gurus would readily agree with that, but if you’d ask them what culture actually is they would be hard pressed to give a coherent answer. Anthropologists, on the other hand, are much more rigorous in their approach and most would agree that three essential elements of a culture are norms, rituals and behaviors.
In a positive organizational culture, norms and rituals support behaviors that honor the mission of the enterprise. Negative cultures undermine that mission. A common problem with many transformation initiatives is that they focus on designing incentives to alter behaviors. Unfortunately, unless you can shift norms and rituals, nothing is likely to change.
Vast swaths of the United States are at risk of running short of power as electricity-hungry data centers and clean-technology factories proliferate around the country, leaving utilities and regulators grasping for credible plans to expand the nation’s creaking power grid.
…
A major factor behind the skyrocketing demand is the rapid innovation in artificial intelligence, which is driving the construction of large warehouses of computing infrastructure that require exponentially more power than traditional data centers. AI is also part of a huge scale-up of cloud computing. Tech firms like Amazon, Apple, Google, Meta and Microsoft are scouring the nation for sites for new data centers, and many lesser-known firms are also on the hunt.
The Obscene Energy Demands of A.I. — from newyorker.com by Elizabeth Kolbert: How can the world reach net zero if it keeps inventing new ways to consume energy?
“There’s a fundamental mismatch between this technology and environmental sustainability,” de Vries said. Recently, the world’s most prominent A.I. cheerleader, Sam Altman, the C.E.O. of OpenAI, voiced similar concerns, albeit with a different spin. “I think we still don’t appreciate the energy needs of this technology,” Altman said at a public appearance in Davos. He didn’t see how these needs could be met, he went on, “without a breakthrough.” He added, “We need fusion or we need, like, radically cheaper solar plus storage, or something, at massive scale—like, a scale that no one is really planning for.”
A generative AI reset: Rewiring to turn potential into value in 2024 — from mckinsey.com by Eric Lamarre, Alex Singla, Alexander Sukharevsky, and Rodney Zemmel; via Philippa Hardman: The generative AI payoff may only come when companies do deeper organizational surgery on their business.
Figure out where gen AI copilots can give you a real competitive advantage
Upskill the talent you have but be clear about the gen-AI-specific skills you need
Form a centralized team to establish standards that enable responsible scaling
Set up the technology architecture to scale
Ensure data quality and focus on unstructured data to fuel your models
Build trust and reusability to drive adoption and scale
Since ChatGPT dropped in the fall of 2022, everyone and their donkey has tried their hand at prompt engineering—finding a clever way to phrase your query to a large language model (LLM) or AI art or video generator to get the best results or sidestep protections. The Internet is replete with prompt-engineering guides, cheat sheets, and advice threads to help you get the most out of an LLM.
…
However, new research suggests that prompt engineering is best done by the model itself, and not by a human engineer. This has cast doubt on prompt engineering’s future—and increased suspicions that a fair portion of prompt-engineering jobs may be a passing fad, at least as the field is currently imagined.
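For a concrete sense of what “the model doing its own prompt engineering” can look like, here is a minimal sketch of one common pattern (my own illustration of the general idea, not the specific method from the research): candidate prompts are proposed, each is scored on a small labeled eval set, and the best one is kept.

```python
# Minimal sketch of automatic prompt optimization: candidate prompts are
# proposed (in the research, by the model itself), each candidate is scored on
# a tiny labeled eval set, and the best one is kept. `call_model` is a stub
# standing in for any chat-model API so the example runs offline.
from typing import List, Tuple

def call_model(prompt: str, text: str) -> str:
    """Stub LLM call; swap in a real API client in practice."""
    return "positive" if "great" in text.lower() else "negative"

def propose_candidates(task: str) -> List[str]:
    """In a real loop the model would generate these; hard-coded here for clarity."""
    return [
        f"{task} Answer with one word.",
        f"{task} Think step by step, then answer with one word.",
        f"You are a precise classifier. {task} Reply 'positive' or 'negative' only.",
    ]

def score(prompt: str, eval_set: List[Tuple[str, str]]) -> float:
    """Fraction of eval examples the prompt gets right."""
    hits = sum(call_model(prompt, text) == label for text, label in eval_set)
    return hits / len(eval_set)

eval_set = [("This product is great", "positive"), ("Terrible experience", "negative")]
candidates = propose_candidates("Classify the sentiment of the user's text.")
best = max(candidates, key=lambda p: score(p, eval_set))
print("Best prompt:", best)
```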
There is one very clear parallel between the digital spreadsheet and generative AI: both are computer apps that collapse time. A task that might have taken hours or days can suddenly be completed in seconds. So accept for a moment the premise that the digital spreadsheet has something to teach us about generative AI. What lessons should we absorb?
It’s that pace of change that gives me pause. Ethan Mollick, author of the forthcoming book Co-Intelligence, tells me “if progress on generative AI stops now, the spreadsheet is not a bad analogy”. We’d get some dramatic shifts in the workplace, a technology that broadly empowers workers and creates good new jobs, and everything would be fine. But is it going to stop any time soon? Mollick doubts that, and so do I.
By 2027, businesses predict that almost half (44%) of workers’ core skills will be disrupted.
Technology is moving faster than companies can design and scale up their training programmes, found the World Economic Forum’s Future of Jobs Report.
…
The Forum’s Global Risks Report 2024 found that “lack of economic opportunity” ranked as one of the top 10 biggest risks among risk experts over the next two years.
5. Skills will become even more important
With 23% of jobs expected to change in the next five years, according to the Future of Jobs Report, millions of people will need to move between declining and growing jobs.
1. Your own AI-powered coaching
Learners can go into LinkedIn Learning and ask a question or explain a challenge they are currently facing at work (we’re focusing on areas within Leadership and Management to start). AI-powered coaching will pull from the collective knowledge of our expansive LinkedIn Learning library and, instantaneously, offer advice, examples, or feedback that is personalized to the learner’s skills, job, and career goals.
What makes us so excited about this launch is we can now take everything we as LinkedIn know about people’s careers and how they navigate them and help accelerate them with AI.
…
3. Learn exactly what you need to know for your next job
When looking for a new job, it’s often the time we think about refreshing our LinkedIn profiles. It’s also a time we can refresh our skills. And with skill sets for jobs having changed by 25% since 2015 – with the number expected to increase by 65% by 2030 – keeping our skills a step ahead is one of the most important things we can do to stand out.
There are a couple of ways we’re making it easier to learn exactly what you need to know for your next job:
When you set a job alert, in addition to being notified about open jobs, we’ll recommend learning courses and Professional Certificate offerings to help you build the skills needed for that role.
When you view a job, we recommend specific courses to help you build the required skills. If you have LinkedIn Learning access through your company or as part of a Premium subscription, you can follow the skills for the job; that way, we can let you know when we launch new courses for those skills and recommend content on LinkedIn that better aligns with your career goals.
2024 Edtech Predictions from Edtech Insiders — from edtechinsiders.substack.com by Alex Sarlin, Ben Kornell, and Sarah Morin: Omni-modal AI, edtech funding prospects, higher ed wake-up calls, focus on career training, and more!
Alex: I talked to the 360 Learning folks at one point and they had this really interesting epiphany, which is basically that it’s been almost impossible for every individual company in the past to create a hierarchy of skills and a hierarchy of positions and actually organize what it looks like for people to move around and upskill within the company and get to new paths.
Until now. AI actually can do this very well. It can take not only job description data, but it can take actual performance data. It can actually look at what people do on a daily basis and back fit that to training, create automatic training based on it.
From DSC: I appreciated how they addressed K-12, higher ed, and the workforce all in one posting. Nice work. We don’t need siloes. We need more overall design thinking re: our learning ecosystems — as well as more collaborations. We need more on-ramps and pathways in a person’s learning/career journey.
What: We’re taking the first steps in Bard’s ability to understand YouTube videos. For example, if you’re looking for videos on how to make olive oil cake, you can now also ask how many eggs the recipe in the first video requires.
Why: We’ve heard you want deeper engagement with YouTube videos. So we’re expanding the YouTube Extension to understand some video content so you can have a richer conversation with Bard about it.
I am not sure who said it first, but there are only two ways to react to exponential change: too early or too late. Today’s AIs are flawed and limited in many ways. While that restricts what AI can do, the capabilities of AI are increasing exponentially, both in terms of the models themselves and the tools these models can use. It might seem too early to consider changing an organization to accommodate AI, but I think that there is a strong possibility that it will quickly become too late.
From DSC: Readers of this blog have seen the following graphic for several years now, but there is no question that we are in a time of exponential change. One would have had an increasingly hard time arguing the opposite of this perspective during that time.
Nvidia’s results surpassed analysts’ projections for revenue and income in the fiscal fourth quarter.
Demand for Nvidia’s graphics processing units has been exceeding supply, thanks to the rise of generative artificial intelligence.
Nvidia announced the GH200 GPU during the quarter.
Here’s how the company did, compared to the consensus among analysts surveyed by LSEG, formerly known as Refinitiv:
Earnings: $4.02 per share, adjusted, vs. $3.37 per share expected
Revenue: $18.12 billion, vs. $16.18 billion expected
Nvidia’s revenue grew 206% year over year during the quarter ending Oct. 29, according to a statement. Net income, at $9.24 billion, or $3.71 per share, was up from $680 million, or 27 cents per share, in the same quarter a year ago.
DC: Anyone surprised? This is why the U.S. doesn’t want high-powered chips going to China. History repeats itself…again. The ways of the world/power continue on.
Pentagon’s AI initiatives accelerate hard decisions on lethal autonomous weapons https://t.co/PTDmJugiE2
Generative AI will require skills upgrades for workers, according to a report from IBM based on a survey of executives from around the world. One finding: Business leaders say 40% of their workforces will need to reskill as AI and automation are implemented over the next three years. That could translate to 1.4 billion people in the global workforce who require upskilling, according to the company.