Introducing Gemini: our largest and most capable AI model — from blog.google by Sundar Pichai and Demis Hassabis
Making AI more helpful for everyone

Today, we’re a step closer to this vision as we introduce Gemini, the most capable and general model we’ve ever built.

Gemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research. It was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image and video.



One year in: from ChatGPT 3.5 to a whole new world — from stefanbauschard.substack.com by Stefan Bauschard
Happy Birthday to ChatGPT 3.5+. You’re growing up so fast!

So, in many ways, ChatGPT and its friends are not nearly as intelligent as a human; they do not have “general” intelligence (AGI).

But this will not last for long. The debate about Project Q* aside, AIs with the ability to engage in high-level reasoning, to plan, and to maintain long-term memory are expected in the next 2–3 years. We are already seeing AI agents that are developing the ability to act autonomously and collaborate to a degree. Once AIs can reason and plan, acting autonomously and collaborating will not be a challenge.


ChatGPT is winning the future — but what future is that? — from theverge.com by David Pierce
OpenAI didn’t mean to kickstart a generational shift in the technology industry. But it did. Now all we have to decide is where to go from here.

We don’t know yet if AI will ultimately change the world the way the internet, social media, and the smartphone did. Those things weren’t just technological leaps — they actually reorganized our lives in fundamental and irreversible ways. If the final form of AI is “my computer writes some of my emails for me,” AI won’t make that list. But there are a lot of smart people and trillions of dollars betting that’s the beginning of the AI story, not the end. If they’re right, the day OpenAI launched its “research preview” of ChatGPT will be much more than a product launch for the ages. It’ll be the day the world changed, and we didn’t even see it coming.


“AI is overhyped” — from theneurondaily.com by Pete Huang & Noah Edelman

If you’re feeling like AI is the future, but you’re not sure where to start, here’s our advice for 2024 based on our convos with business leaders:

  1. Start with problems – Map out where your business is spending time and money, then ask if AI can help. Don’t do AI to say you’re doing AI.
  2. Model the behavior – Teams do better in making use of new tools when their leadership buys in. Show them your support.
  3. Do what you can, wait for the rest – With AI evolving so fast, “do nothing for now” is totally valid. Start with what you can do today (accelerating individual employee output) and keep up-to-date on the rest.

Google says new AI model Gemini outperforms ChatGPT in most tests — from theguardian.com by Dan Milmo
Gemini is being released in form of upgrade to Google’s chatbot Bard, but not yet in UK or EU

Google has unveiled a new artificial intelligence model that it claims outperforms ChatGPT in most tests and displays “advanced reasoning” across multiple formats, including an ability to view and mark a student’s physics homework.

The model, called Gemini, is the first to be announced since last month’s global AI safety summit, at which tech firms agreed to collaborate with governments on testing advanced systems before and after their release. Google said it was in discussions with the UK’s newly formed AI Safety Institute over testing Gemini’s most powerful version, which will be released next year.

 

Learning and employment record use cases — from the National Governors Association

LERs Are Hot. What Are States Going To Do With Them?

Governors and state leaders are concerned about the current labor shortage, occurring during a time when many skilled workers are underemployed or even unemployed. Skills-based approaches to hiring and recruiting can shift that dynamic—making pathways to good careers accessible to a wider segment of the workforce and opening up new pools of talent for employers. They do so by focusing on what workers know and can do, not on the degrees or credentials they’ve earned.

That’s the theory. But a lot hinges on how things actually play out on the ground.

Technology will play a key role, and many states have zeroed in on learning and employment records—essentially digital resumes with verified records of people’s skills, educational experiences, and work histories—as an essential tool. A lot of important work is going into the technical design and specifications.

This project, on the other hand, aims to take a step back and look at the current state of play when it comes to the use cases for LERs. Just a few of the key questions:

  • How might employers, education providers, government agencies, and workers themselves actually use them? Will they?
  • In what areas do state policymakers have the most influence over key stakeholders and the most responsibility to invest?
  • What actions are needed now to ensure that LERs, and skills-based hiring more broadly, actually widen access to good jobs—rather than setting up a parallel system that perpetuates many of today’s inequities?
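At bottom, the LERs described above are structured, verifiable data records. As a rough illustration, here is a minimal sketch in Python, loosely modeled on the W3C Verifiable Credentials format; every field value and identifier below is hypothetical, not drawn from any state's actual implementation.

```python
# A minimal, hypothetical sketch of a learning and employment record (LER),
# loosely modeled on W3C Verifiable Credentials. Field names are illustrative
# only and are not taken verbatim from any official specification.
ler_credential = {
    "type": ["VerifiableCredential", "LearningAndEmploymentRecord"],
    "issuer": "did:example:state-community-college",   # hypothetical issuer ID
    "credentialSubject": {
        "id": "did:example:worker-123",                # hypothetical worker ID
        "skills": ["welding", "blueprint reading"],
        "education": [{"program": "Certificate in Manufacturing", "year": 2023}],
        "workHistory": [{"employer": "Acme Fabrication", "role": "Technician"}],
    },
    "proof": {"type": "Ed25519Signature2020"},  # placeholder for a digital signature
}

def skills_of(record: dict) -> list[str]:
    """Return the skills listed in an LER-style record."""
    return record["credentialSubject"]["skills"]

print(skills_of(ler_credential))
```

The point of such a structure is that an employer could machine-read and cryptographically verify the claims rather than trusting a free-text resume.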
 

34 Big Ideas that will change our world in 2024 — from linkedin.com


Excerpts:

6. ChatGPT’s hype will fade, as a new generation of tailor-made bots rises up
11. We’ll finally turn the corner on teacher pay in 2024
21. Employers will combat job applicants’ use of AI with…more AI
31. Universities will view the creator economy as a viable career path

 

When Educators and Employers Work Together, Students Succeed — from hbsp.harvard.edu by Joseph Fuller and Manjari Raman

(Emphasis below from DSC)

Last year, in “The Partnership Imperative,” we put forth a set of more than 40 best practices that employers and educators can use to develop a close collaboration. As part of that effort, we identified three main goals and laid out strategies for achieving each.

  1. Partner with each other to offer training and education that is aligned with industry needs. (DSC: Similar to how Instructional Designers want alignment with learning objectives, learning activities, and assessments of learning.)
  2. Establish relationships with each other that result in the recruitment and hiring of students and graduates.
  3. Make supply-and-demand decisions that are informed by the latest data and trends.

From DSC:
Under #1, their strategies include:

  • Cocreate and regularly update college curriculums so that they reflect relevant technical and foundational skills based on industry needs.
  • Codesign programs that fit with students’ lives and industry hiring cycles.
  • Incorporate classroom experiences that simulate real-world settings and scenarios.

I see AI being able to identify what those changing, currently sought-after, and foundational skills are, based on industry needs (which shouldn’t be hard; vendors like Microsoft are already doing this by combing through the job descriptions posted on their platforms). These findings/results will help build regularly updated learning playlists and should provide guidance to learning-related organizations/groups/individuals/teams on what content to develop and offer (i.e., courses, learning modules, micro-learning-based streams of content, and the like).
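As a toy illustration of the skill-mining idea just described (counting skill mentions across posted job descriptions to decide what learning content to prioritize), consider the following sketch; the skill list and job postings are entirely made up.

```python
from collections import Counter

# Toy sketch: scan posted job descriptions for mentions of known skills,
# then rank them to suggest what learning content to build next.
# All skills and postings below are hypothetical.
KNOWN_SKILLS = ["python", "sql", "prompt engineering", "data analysis"]

job_postings = [
    "Seeking analyst with SQL and data analysis experience",
    "Python developer; prompt engineering a plus",
    "Data analysis role; SQL required",
]

def rank_skills(postings: list[str]) -> list[tuple[str, int]]:
    """Count how often each known skill appears across postings, most common first."""
    counts = Counter()
    for post in postings:
        text = post.lower()
        for skill in KNOWN_SKILLS:
            if skill in text:
                counts[skill] += 1
    return counts.most_common()

print(rank_skills(job_postings))
```

A production system would obviously need far more than substring matching (skill taxonomies, deduplication, embeddings), but the ranking-by-demand idea is the same.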

 

Expanding Bard’s understanding of YouTube videos — via AI Valley

  • What: We’re taking the first steps in Bard’s ability to understand YouTube videos. For example, if you’re looking for videos on how to make olive oil cake, you can now also ask how many eggs the recipe in the first video requires.
  • Why: We’ve heard you want deeper engagement with YouTube videos. So we’re expanding the YouTube Extension to understand some video content so you can have a richer conversation with Bard about it.

Reshaping the tree: rebuilding organizations for AI — from oneusefulthing.org by Ethan Mollick
Technological change brings organizational change.

I am not sure who said it first, but there are only two ways to react to exponential change: too early or too late. Today’s AIs are flawed and limited in many ways. While that restricts what AI can do, the capabilities of AI are increasing exponentially, both in terms of the models themselves and the tools these models can use. It might seem too early to consider changing an organization to accommodate AI, but I think that there is a strong possibility that it will quickly become too late.

From DSC:
Readers of this blog have seen the following graphic for several years now, but there is no question that we are in a time of exponential change; one would have had an increasingly hard time arguing otherwise during that period.

Nvidia’s revenue triples as AI chip boom continues — from cnbc.com by Jordan Novet; via GSV

KEY POINTS

  • Nvidia’s results surpassed analysts’ projections for revenue and income in the fiscal third quarter.
  • Demand for Nvidia’s graphics processing units has been exceeding supply, thanks to the rise of generative artificial intelligence.
  • Nvidia announced the GH200 GPU during the quarter.

Here’s how the company did, compared to the consensus among analysts surveyed by LSEG, formerly known as Refinitiv:

  • Earnings: $4.02 per share, adjusted, vs. $3.37 per share expected
  • Revenue: $18.12 billion, vs. $16.18 billion expected

Nvidia’s revenue grew 206% year over year during the quarter ending Oct. 29, according to a statement. Net income, at $9.24 billion, or $3.71 per share, was up from $680 million, or 27 cents per share, in the same quarter a year ago.



 

Amazon aims to provide free AI skills training to 2 million people by 2025 with its new ‘AI Ready’ commitment — from aboutamazon.com by Swami Sivasubramanian

Artificial intelligence (AI) is the most transformative technology of our generation. If we are going to unlock the full potential of AI to tackle the world’s most challenging problems, we need to make AI education accessible to anyone with a desire to learn.

That’s why Amazon is announcing “AI Ready,” a new commitment designed to provide free AI skills training to 2 million people globally by 2025. To achieve this goal, we’re launching new initiatives for adults and young learners, and scaling our existing free AI training programs—removing cost as a barrier to accessing these critical skills.

From DSC:
While this will likely serve Amazon just fine, it’s also an example of a corporation’s leadership seeking to help others out.

 

From DSC:
The recent drama over at OpenAI reminds me of how important a few individuals are in influencing the lives of millions of people.

The C-suites (i.e., the Chief Executive Officers, Chief Financial Officers, Chief Operating Officers, and the like) of companies like OpenAI, Alphabet (Google), Meta (Facebook), Microsoft, Netflix, NVIDIA, Amazon, Apple, and a handful of others have enormous power. Why? Because of the power and reach of the technologies they create, market, and provide.

We need to be praying for the hearts of those in the C-Suites of these powerful vendors — as well as for their Boards.

LORD, grant them wisdom and help mold their hearts and perspectives so that they truly care about others. May their decisions not be based on making money alone…or doing something just because they can.

What happens in their hearts and minds DOES and WILL continue to impact the rest of us. And we’re talking about real ramifications here. This isn’t pie-in-the-sky thinking; it’s real, with real consequences. If you doubt that, go ask the families of those whose sons and daughters took their own lives due to what happened out on social media platforms. Disclosure: I use LinkedIn and Twitter quite a bit, and I’m not bashing these platforms per se. But my point is that a variety of technologies have real impacts. What goes on in the hearts and minds of the leaders of these tech companies matters.


Some relevant items:

Navigating Attention-Driving Algorithms, Capturing the Premium of Proximity for Virtual Teams, & New AI Devices — from implications.com by Scott Belsky

Excerpts (emphasis DSC):

No doubt, technology influences us in many ways we don’t fully understand. But one area where valid concerns run rampant is the attention-seeking algorithms powering the news and media we consume on modern platforms that efficiently polarize people. Perhaps we’ll call it The Law of Anger Expansion: When people are angry in the age of algorithms, they become MORE angry and LESS discriminating about who and what they are angry at.

Algorithms that optimize for grabbing attention, thanks to AI, ultimately drive polarization.

The AI learns quickly that a rational or “both sides” view is less likely to sustain your attention (so you won’t get many of those, which drives the sensation that more of the world agrees with you). But the rage-inducing stuff keeps us swiping.

Our feeds are being sourced in ways that dramatically change the content we’re exposed to.

And then these algorithms expand on these ultimately destructive emotions – “If you’re afraid of this, maybe you should also be afraid of this” or “If you hate those people, maybe you should also hate these people.”

How do we know when we’ve been polarized? This is the most important question of the day.

Whatever is inflaming you is likely an algorithm-driven expansion of anger and an imbalance of context.


 

 

OpenAI announces leadership transition — from openai.com
Chief technology officer Mira Murati appointed interim CEO to lead OpenAI; Sam Altman departs the company. Search process underway to identify permanent successor.

Excerpt (emphasis DSC):

The board of directors of OpenAI, Inc., the 501(c)(3) that acts as the overall governing body for all OpenAI activities, today announced that Sam Altman will depart as CEO and leave the board of directors. Mira Murati, the company’s chief technology officer, will serve as interim CEO, effective immediately.

Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.


As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.

From DSC:
I’m not here to pass judgment, but all of us on planet Earth should be at least concerned with this disturbing news.

AI is one of the most powerful emerging technologies on the planet right now. OpenAI is arguably the most powerful vendor/innovator/influencer/leader in that space. And Sam Altman was the face of OpenAI — and arguably of AI itself. So this is a big deal.

What concerns me is what is NOT being relayed in this posting:

  • What was being hidden from OpenAI’s Board?
  • What else doesn’t the public know? 
  • Why is Greg Brockman stepping down as Chairman of the Board?

To whom much is given, much is expected.


Also related/see:

OpenAI CEO Sam Altman ousted, shocking AI world — from washingtonpost.com by Gerrit De Vynck and Nitasha Tiku
The artificial intelligence company’s directors said he was not ‘consistently candid in his communications with the board’

Altman’s sudden departure sent shock waves through the technology industry and the halls of government, where he had become a familiar presence in debates over the regulation of AI. His rise and apparent fall from tech’s top rung is one of the fastest in Silicon Valley history. In less than a year, he went from being Bay Area famous as a failed start-up founder who reinvented himself as a popular investor in small companies to becoming one of the most influential business leaders in the world. Journalists, politicians, tech investors and Fortune 500 CEOs alike had been clamoring for his attention.

OpenAI’s Board Pushes Out Sam Altman, Its High-Profile C.E.O. — from nytimes.com by Cade Metz

Sam Altman, the high-profile chief executive of OpenAI, who became the face of the tech industry’s artificial intelligence boom, was pushed out of the company by its board of directors, OpenAI said in a blog post on Friday afternoon.


From DSC:
Updates — I just saw these items

Sam Altman fired as CEO of OpenAI — from theverge.com by Jay Peters
In a sudden move, Altman is leaving after the company’s board determined that he ‘was not consistently candid in his communications.’ President and co-founder Greg Brockman has also quit.



 

Where a developing, new kind of learning ecosystem is likely headed [Christian]

From DSC:
As I’ve long stated on the Learning from the Living [Class]Room vision, we are heading toward a new AI-empowered learning platform — where humans play a critically important role in making this new learning ecosystem work.

Along these lines, I ran into this site out on X/Twitter. We’ll see how this unfolds, but it will be an interesting space to watch.

Project Chiron’s vision for education: every child will soon have a super-intelligent AI teacher by their side, and the project aims to make sure those AI teachers instill a love of learning in children.


From DSC:
This future learning platform will also focus on developing skills and competencies. Along those lines, see:

Scale for Skills-First — from the-job.beehiiv.com by Paul Fain
An ed-tech giant’s ambitious moves into digital credentialing and learner records.

A Digital Canvas for Skills
Instructure was a player in the skills and credentials space before its recent acquisition of Parchment, a digital transcript company. But that $800M move made many observers wonder if Instructure can develop digital records of skills that learners, colleges, and employers might actually use broadly.

Ultimately, he says, the CLR approach will allow students to bring these various learning types into a coherent format for employers.

Instructure seeks a leadership role in working with other organizations to establish common standards for credentials and learner records, to help create consistency. The company collaborates closely with 1EdTech. And last month it helped launch the 1EdTech TrustEd Microcredential Coalition, which aims to increase quality and trust in digital credentials.

Paul also links to 1EDTECH’s page regarding the Comprehensive Learning Record

 

From the military to the workforce: How to leverage veterans’ skills — from mckinsey.com
Traditional ways of hiring make it harder for many service members to land civilian jobs. A new approach could help veterans transition to the workforce—and add $15 billion to the US economy.

This is where military veterans can make a difference. Veterans represent a source of labor potential that is untapped relative to the breadth of experience and depth of skills that they acquire and develop during their service. Members of the military receive technical training, operate under pressure in austere environments, and develop strong interpersonal skills throughout their service, making them well qualified for numerous civilian occupations. While not every military role is directly transferable to a civilian job, most skills are—including those that correspond to US industries experiencing labor shortages, such as infrastructure and manufacturing.

 

Learning Corporate Learning — Newsletter #70 — from transcend.substack.com by Alberto Arenaza and Michael Narea
A deep-dive into the corporate learning-edtech market for startups

The Transcend Newsletter explores the intersection of the future of education and the future [of] work, and the founders building it around the world.

 

Lastly, we look at four product categories within L&D:

  • Content: libraries of learning content covering a wide range of topics (Coursera & Udemy for Business, Pluralsight, Skillsoft). Live classes are increasingly a part of this category, like Electives, Section or NewCampus.
  • Upskilling: programs focused on learning new skills (upskilling) or relocation of talent within the company (reskilling), both being more intensive than just content (Multiverse, Guild).
  • Coaching: support from coaches, mentors or even peers for employees’ learning (BetterUp, CoachHub, Torch).
  • Simulations: a new wave of scalable learning experiences that creates practice scenarios for employees (Strivr, SimSkills)
 

Why Kindness at Work Pays Off — from hbr.org by Andrew Swinand; via Roberto Ferraro

Summary:
Whether you’re just entering the workforce, starting a new job, or transitioning into people management, kindness can be a valuable attribute that speaks volumes about your character, commitment, and long-term value. Here are a few simple routines you can integrate into your everyday work life that will spread kindness and help create a culture of kindness at your organization.

  • Practice radical self-care. The best way to be a valuable, thoughtful team member is to be disciplined about your own wellness — your physical, emotional, and mental well-being.
  • Do your job. Start with the basics by showing up on time and doing your job to the best of your ability. This is where your self-care practice comes into play — you can’t do your best work without taking care of yourself first.
  • Reach out to others with intention. Make plans to meet virtually or, even better, in person with your colleagues. Ask about their pets, their recent move, or their family. Most importantly, practice active listening.
  • Recognize and acknowledge people. Authentic, thoughtful interactions show that you’re thinking about the other person and reflecting on their unique attributes and value, which can cement social connections.
  • Be conscientious with your feedback. Being kind means offering feedback for the betterment of the person receiving it and the overall success of your company.

“When anxiety is high and morale is low, kindness isn’t a luxury — it’s a necessity. With mass layoffs, economic uncertainty, and geopolitical tensions, kindness is needed now more than ever, especially at work.”

 

From DSC:
The following item from The Washington Post made me ask, “Do we give students any/enough training on email etiquette? On effective ways to use LinkedIn, Twitter/X, messaging, other?”


You’re emailing wrong at work. Follow this etiquette guide. — from washingtonpost.com by Danielle Abril
Get the most out of your work email and avoid being a jerk with these etiquette tips for the modern workplace

Most situations depend on the workplace culture. Still, there are some basic rules. Three email and business experts gave us tips for good email etiquette so you can avoid being the jerk at work.

  • Consider not sending an email
  • Keep it short and clear
  • Make it easy to read
  • Don’t blow up the inbox
  • …and more

From DSC:
I would add: use bolding, color, italics, etc., to highlight and help structure the email’s key points and sections.


 

A future-facing minister, a young inventor and a shared vision: An AI tutor for every student — from news.microsoft.com by Chris Welsch

The Ministry of Education and Pativada see what has become known as the U.A.E. AI Tutor as a way to provide students with 24/7 assistance as well as help level the playing field for those families who cannot afford a private tutor. At the same time, the AI Tutor would be an aid to teachers, they say. “We see it as a tool that will support our teachers,” says Aljughaiman. “This is a supplement to classroom learning.”

If everything goes according to plan, every student in the United Arab Emirates’ school system will have a personal AI tutor – that fits in their pockets.

It’s a story that involves an element of coincidence, a forward-looking education minister and a tech team led by a chief executive officer who still lives at home with his parents.

In February 2023, the U.A.E.’s education minister, His Excellency Dr. Ahmad Belhoul Al Falasi, announced that the ministry was embracing AI technology and pursuing the idea of an AI tutor to help Emirati students succeed. And he also announced that the speech he presented had been written by ChatGPT. “We should not demonize AI,” he said at the time.



Fostering deep learning in humans and amplifying our intelligence in an AI World — from stefanbauschard.substack.com by Stefan Bauschard
A free 288-page report on advancements in AI and related technology, their effects on education, and our practical support for AI-amplified human deep learning

Six weeks ago, Dr. Sabba Quidwai and I accidentally stumbled upon an idea to compare the deep learning revolution in computer science to the mostly lacking deep learning efforts in education (Mehta & Fine). I started writing, and as these things often go with me, I thought there were many other things that would be useful to think through and for educators to know, and we ended up with this 288-page report.

***

Here’s an abstract from that report:

This report looks at the growing gap between the attention paid to the development of intelligence in machines and humans. While computer scientists have made great strides in developing human intelligence capacities in machines using deep learning technologies, including the abilities of machines to learn on their own, a significant part of the education system has not kept up with developing the intelligence capabilities in people that will enable them to succeed in the 21st century. Instead of fully embracing pedagogical methods that place primary emphasis on promoting collaboration, critical thinking, communication, creativity, and self-learning through experiential, interdisciplinary approaches grounded in human deep learning and combined with current technologies, a substantial portion of the educational system continues to heavily rely on traditional instructional methods and goals. These methods and goals prioritize knowledge acquisition and organization, areas in which machines already perform substantially better than people.

Also from Stefan Bauschard, see:

  • Debating in the World of AI
    Performative assessment, learning to collaborate with humans and machines, and developing special human qualities

13 Nuggets of AI Wisdom for Higher Education Leaders — from jeppestricker.substack.com by Jeppe Klitgaard Stricker
Actionable AI Guidance for Higher Education Leaders

Incentivize faculty AI innovation with AI. 

Invest in people first, then technology. 

On teaching, learning, and assessment. AI has captured the attention of all institutional stakeholders. Capitalize to reimagine pedagogy and evaluation. Rethink lectures, examinations, and assignments to align with workforce needs. Consider incorporating Problem-Based Learning, building portfolios and proof of work, and conducting oral exams. And use AI to provide individualized support and assess real-world skills.

Actively engage students.


Some thoughts from George Siemens re: AI:

Sensemaking, AI, and Learning (SAIL), a regular look at how AI is impacting learning.

Our education system has a uni-dimensional focus: learning things. Of course, we say we care about developing the whole learner, but the metrics that matter (grades, transcripts) and that underpin the education system are largely focused on teaching students things that have long been Google-able and are now increasingly doable by AI. Developments in AI matter in ways that call into question large parts of what happens in our universities. This is not a statement that people don’t need to learn core concepts and skills. My point is that the fulcrum of learning has shifted. Knowing things will continue to matter less and less going forward as AI improves its capabilities. We’ll need to start intentionally developing broader and broader attributes of learners: metacognition, wellness, affect, social engagement, etc. Education will continue to shift toward human skills and away from primary assessment of knowledge gains disconnected from skills and practice and ways of being.


AI, the Next Chapter for College Librarians — from insidehighered.com by Lauren Coffey
Librarians have lived through the disruptions of fax machines, websites and Wikipedia, and now they are bracing to do it again as artificial intelligence tools go mainstream: “Maybe it’s our time to shine.”

A few months after ChatGPT launched last fall, faculty and students at Northwestern University had many questions about the building wave of new artificial intelligence tools. So they turned to a familiar source of help: the library.

“At the time it was seen as a research and citation problem, so that led them to us,” said Michelle Guittar, head of instruction and curriculum support at Northwestern University Libraries.

In response, Guittar, along with librarian Jeanette Moss, created a landing page in April, “Using AI Tools in Your Research.” At the time, the university itself had yet to put together a comprehensive resource page.


From Dr. Nick Jackson’s recent post on LinkedIn: 

Last night the Digitech team of junior and senior teachers from Scotch College Adelaide showcased their 2023 experiments, innovation, successes and failures with technology in education. Accompanied by Student digital leaders, we saw the following:

  • AI used for language learning, where avatars can help with accents
  • Motion-capture suits being used in media studies
  • AI used in assessment and automatic grading of work
  • AR used in design technology
  • VR used for immersive Junior school experiences
  • A teacher’s AI toolkit that has changed teaching practice and workflow
  • AR and the EyeJack app used by students to create dynamic artwork
  • VR use in careers education in Senior school
  • How ethics around AI is taught to Junior school students from Year 1
  • Experiments with MyStudyWorks

Almost an Agent: What GPTs can do — from oneusefulthing.org by Ethan Mollick

What would a real AI agent look like? A simple agent that writes academic papers would, after being given a dataset and a field of study, read about how to compose a good paper, analyze the data, conduct a literature review, generate hypotheses, test them, and then write up the results, all without intervention. You put in a request, you get a Word document that contains a draft of an academic paper.


What I Learned From an Experiment to Apply Generative AI to My Data Course — from edsurge.com by Wendy Castillo

As an educator, I have a duty to remain informed about the latest developments in generative AI, not only to ensure learning is happening, but to stay on top of what tools exist, what benefits and limitations they have, and most importantly, how students might be using them.

However, it’s also important to acknowledge that the quality of work produced by students now requires higher expectations and potential adjustments to grading practices. The baseline is no longer zero, it is AI. And the upper limit of what humans can achieve with these new capabilities remains an unknown frontier.


Artificial Intelligence in Higher Education: Trick or Treat? — from tytonpartners.com by Kristen Fox and Catherine Shaw

The report outlines two components of AI: generative AI and predictive AI.

 

Innovative growers: A view from the top — from mckinsey.com by Matt Banholzer, Rebecca Doherty, Alex Morris, and Scott Schwaitzberg
McKinsey research shows that a focus on aspiration, activation, and execution can help companies out-innovate and outgrow peers.

To find out, we identified and analyzed about 650 of the largest public companies that achieved profitable growth relative to their industry between 2016 and 2021 while also excelling in the essential capabilities associated with innovation. Some of these companies outgrew their peers, others were more innovative than competitors, but 53 companies managed to do both. The 50-plus “innovative growers,” as we call them, are a diverse group, spread across four continents and ten industries. They include renowned brands with a trillion-dollar market capitalization as well as smaller companies that are just starting to make a name for themselves, some as young as three years old (see sidebar, “Where do innovative growers come from?”).

For all their diversity, these companies consistently excel in both growth and innovation—and they share a number of best practices that other companies can learn from.

Do innovative growers perform better than others?

In a word, yes.

From DSC:
I’m adding higher ed to the categories of this posting, as we need to establish more CULTURES of innovation out there. But this is not easy to do, as those of us who have tried to swim upstream know.


 
© 2024 | Daniel Christian