A future-facing minister, a young inventor and a shared vision: An AI tutor for every student — from news.microsoft.com by Chris Welsch

The Ministry of Education and Pativada see what has become known as the U.A.E. AI Tutor as a way to provide students with 24/7 assistance as well as help level the playing field for those families who cannot afford a private tutor. At the same time, the AI Tutor would be an aid to teachers, they say. “We see it as a tool that will support our teachers,” says Aljughaiman. “This is a supplement to classroom learning.”

If everything goes according to plan, every student in the United Arab Emirates’ school system will have a personal AI tutor, one that fits in their pockets.

It’s a story that involves an element of coincidence, a forward-looking education minister and a tech team led by a chief executive officer who still lives at home with his parents.

In February 2023, the U.A.E.’s education minister, His Excellency Dr. Ahmad Belhoul Al Falasi, announced that the ministry was embracing AI technology and pursuing the idea of an AI tutor to help Emirati students succeed. He also announced that the speech he presented had been written by ChatGPT. “We should not demonize AI,” he said at the time.



Fostering deep learning in humans and amplifying our intelligence in an AI World — from stefanbauschard.substack.com by Stefan Bauschard
A free 288-page report on advancements in AI and related technology, their effects on education, and our practical support for AI-amplified human deep learning

Six weeks ago, Dr. Sabba Quidwai and I stumbled upon an idea: compare the deep learning revolution in computer science to the mostly lacking deep learning efforts in education (Mehta & Fine). I started writing, and as these things often go with me, I kept finding other things that would be useful to think through and for educators to know, and we ended up with this 288-page report.

***

Here’s an abstract from that report:

This report looks at the growing gap between the attention paid to the development of intelligence in machines and humans. While computer scientists have made great strides in developing human intelligence capacities in machines using deep learning technologies, including the abilities of machines to learn on their own, a significant part of the education system has not kept up with developing the intelligence capabilities in people that will enable them to succeed in the 21st century. Instead of fully embracing pedagogical methods that place primary emphasis on promoting collaboration, critical thinking, communication, creativity, and self-learning through experiential, interdisciplinary approaches grounded in human deep learning and combined with current technologies, a substantial portion of the educational system continues to heavily rely on traditional instructional methods and goals. These methods and goals prioritize knowledge acquisition and organization, areas in which machines already perform substantially better than people.

Also from Stefan Bauschard, see:

  • Debating in the World of AI
    Performative assessment, learning to collaborate with humans and machines, and developing special human qualities

13 Nuggets of AI Wisdom for Higher Education Leaders — from jeppestricker.substack.com by Jeppe Klitgaard Stricker
Actionable AI Guidance for Higher Education Leaders

Incentivize faculty AI innovation with AI. 

Invest in people first, then technology. 

On teaching, learning, and assessment. AI has captured the attention of all institutional stakeholders. Capitalize on that attention to reimagine pedagogy and evaluation. Rethink lectures, examinations, and assignments to align with workforce needs. Consider incorporating Problem-Based Learning, building portfolios and proof of work, and conducting oral exams. And use AI to provide individualized support and assess real-world skills.

Actively engage students.


Some thoughts from George Siemens re: AI:

Sensemaking, AI, and Learning (SAIL), a regular look at how AI is impacting learning.

Our education system has a uni-dimensional focus: learning things. Of course, we say we care about developing the whole learner, but the metrics that matter (grades, transcripts) and that underpin the education system are largely focused on teaching students things that have long been Google-able and are now increasingly doable by AI. Developments in AI matter in ways that call into question large parts of what happens in our universities. This is not a statement that people don’t need to learn core concepts and skills. My point is that the fulcrum of learning has shifted. Knowing things will continue to matter less and less going forward as AI improves its capabilities. We’ll need to start intentionally developing broader and broader attributes of learners: metacognition, wellness, affect, social engagement, etc. Education will continue to shift toward human skills and away from primary assessment of knowledge gains disconnected from skills and practice and ways of being.


AI, the Next Chapter for College Librarians — from insidehighered.com by Lauren Coffey
Librarians have lived through the disruptions of fax machines, websites and Wikipedia, and now they are bracing to do it again as artificial intelligence tools go mainstream: “Maybe it’s our time to shine.”

A few months after ChatGPT launched last fall, faculty and students at Northwestern University had many questions about the building wave of new artificial intelligence tools. So they turned to a familiar source of help: the library.

“At the time it was seen as a research and citation problem, so that led them to us,” said Michelle Guittar, head of instruction and curriculum support at Northwestern University Libraries.

In response, Guittar, along with librarian Jeanette Moss, created a landing page in April, “Using AI Tools in Your Research.” At the time, the university itself had yet to put together a comprehensive resource page.


From Dr. Nick Jackson’s recent post on LinkedIn: 

Last night the Digitech team of junior and senior teachers from Scotch College Adelaide showcased their 2023 experiments, innovation, successes and failures with technology in education. Accompanied by Student digital leaders, we saw the following:

  • AI used for language learning, where avatars can help with accents
  • Motion-capture suits being used in media studies
  • AI used in assessment and automatic grading of work
  • AR used in design technology
  • VR used for immersive Junior school experiences
  • A teacher’s AI toolkit that has changed teaching practice and workflow
  • AR and the EyeJack app used by students to create dynamic artwork
  • VR use in careers education in Senior school
  • How ethics around AI is taught to Junior school students from Year 1
  • Experiments with MyStudyWorks

Almost an Agent: What GPTs can do — from oneusefulthing.org by Ethan Mollick

What would a real AI agent look like? A simple agent that writes academic papers would, after being given a dataset and a field of study, read about how to compose a good paper, analyze the data, conduct a literature review, generate hypotheses, test them, and then write up the results, all without intervention. You put in a request, you get a Word document that contains a draft of an academic paper.

A process kind of like this one:


What I Learned From an Experiment to Apply Generative AI to My Data Course — from edsurge.com by Wendy Castillo

As an educator, I have a duty to remain informed about the latest developments in generative AI, not only to ensure learning is happening, but to stay on top of what tools exist, what benefits and limitations they have, and most importantly, how students might be using them.

However, it’s also important to acknowledge that the quality of work produced by students now requires higher expectations and potential adjustments to grading practices. The baseline is no longer zero, it is AI. And the upper limit of what humans can achieve with these new capabilities remains an unknown frontier.


Artificial Intelligence in Higher Education: Trick or Treat? — from tytonpartners.com by Kristen Fox and Catherine Shaw

Two components of AI: generative AI and predictive AI


Innovative growers: A view from the top — from mckinsey.com by Matt Banholzer, Rebecca Doherty, Alex Morris, and Scott Schwaitzberg
McKinsey research shows that a focus on aspiration, activation, and execution can help companies out-innovate and outgrow peers.

To find out, we identified and analyzed about 650 of the largest public companies that achieved profitable growth relative to their industry between 2016 and 2021 while also excelling in the essential capabilities associated with innovation. Some of these companies outgrew their peers, others were more innovative than competitors, but 53 companies managed to do both. The 50-plus “innovative growers,” as we call them, are a diverse group, spread across four continents and ten industries. They include renowned brands with a trillion-dollar market capitalization as well as smaller companies that are just starting to make a name for themselves, some as young as three years old (see sidebar, “Where do innovative growers come from?”).

For all their diversity, these companies consistently excel in both growth and innovation—and they share a number of best practices that other companies can learn from.

Do innovative growers perform better than others?

In a word, yes.

From DSC:
I’m adding higher ed to the categories of this posting, as we need to establish more CULTURES of innovation out there. But this is not easy to do, as those of us who have tried to swim upstream know.

Also see:


New models and developer products announced at DevDay — from openai.com
GPT-4 Turbo with 128K context and lower prices, the new Assistants API, GPT-4 Turbo with Vision, DALL·E 3 API, and more.

Today, we shared dozens of new additions and improvements, and reduced pricing across many parts of our platform. These include:

  • New GPT-4 Turbo model that is more capable, cheaper and supports a 128K context window
  • New Assistants API that makes it easier for developers to build their own assistive AI apps that have goals and can call models and tools
  • New multimodal capabilities in the platform, including vision, image creation (DALL·E 3), and text-to-speech (TTS)
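The Assistants API bullet above describes models that pursue goals by deciding when to call tools. As a purely local illustration of that dispatch pattern (no API calls are made; the tool names and the keyword-based chooser are hypothetical stand-ins for an LLM's tool choice), a sketch might look like:

```python
# A toy assistant loop: the "chooser" is a stub that picks a tool
# from keywords, standing in for a real model's structured tool call.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
    "echo": lambda text: text,
}

def choose_tool(user_message: str) -> tuple[str, str]:
    """Stand-in for the model's decision about which tool to invoke."""
    if any(ch.isdigit() for ch in user_message):
        return "calculator", user_message
    return "echo", user_message

def run_assistant(user_message: str) -> str:
    """Dispatch the message to the chosen tool and return its output."""
    tool, argument = choose_tool(user_message)
    return TOOLS[tool](argument)
```

In the real API the model itself, not a keyword check, decides which tool to call; the loop shape is the part this sketch tries to convey.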


Introducing GPTs — from openai.com
You can now create custom versions of ChatGPT that combine instructions, extra knowledge, and any combination of skills.




OpenAI’s New Groundbreaking Update — from newsletter.thedailybite.co
Everything you need to know about OpenAI’s update, what people are building, and a prompt to skim long YouTube videos…

But among all this exciting news, the announcement of user-created “GPTs” took the cake.

That’s right, your very own personalized version of ChatGPT is coming, and it’s as groundbreaking as it sounds.

OpenAI’s groundbreaking announcement isn’t just a new feature – it’s a personal AI revolution. 

The upcoming customizable “GPTs” transform ChatGPT from a one-size-fits-all to a one-of-a-kind digital sidekick that is attuned to your life’s rhythm. 


Lore Issue #56: Biggest Week in AI This Year — from news.lore.com by Nathan Lands

First, Elon Musk announced “Grok,” a ChatGPT competitor inspired by “The Hitchhiker’s Guide to the Galaxy.” Surprisingly, in just a few months, xAI has managed to surpass the capabilities of GPT-3.5, signaling their impressive speed of execution and establishing them as a formidable long-term contender.

Then, OpenAI hosted their inaugural Dev Day, unveiling “GPT-4 Turbo,” which boasts a 128K context window, API costs slashed to a third, text-to-speech capabilities, auto model switching, agents, and even their version of an app store slated for launch next month.


The Day That Changed Everything — from joinsuperhuman.ai by Zain Kahn
ALSO: Everything you need to know about yesterday’s OpenAI announcements

  • OpenAI DevDay Part I: Custom ChatGPTs and the App Store of AI
  • OpenAI DevDay Part II: GPT-4 Turbo, Assistants, APIs, and more

OpenAI’s Big Reveal: Custom GPTs, GPT Store & More — from news.theaiexchange.com
What you should know about the new announcements; how to get started with building custom GPTs


Incredible pace of OpenAI — from theaivalley.com by Barsee
PLUS: Elon’s Grok




Upskilling Workers to Match Workforce Demands — from learningguild.com by Brad Koch

If the rapid adoption of artificial intelligence by the general public in 2023 has taught us anything, it’s that much of the world is growing more comfortable with this technology and businesses will soon follow in individuals’ footsteps. The World Economic Forum’s Future of Jobs Report 2023 published earlier this year reinforces the trend: More than 75% of companies are looking to adopt big data, cloud computing, and AI in the next five years.

At the same time, the report notes that 44% of workers’ core skills are expected to change in the next five years. Not only will workplaces need to equip their employees with the necessary skills to use the technologies being implemented, but they will also need to make sure employees have the skills to fill the roles that artificial intelligence cannot. Here’s how companies can prepare their workers for shifting demands in the next five years.

Expand industry-relevant training and partnerships with educational institutions


Also from learningguild.com, see:



Accenture Life Trends 2024 — from accenture.com; via Mr. Bob Raidt on LinkedIn
The visible and invisible mediators between people and their world are changing.

In brief

  • The harmony between people, tech and business is showing tensions, and society is in flux.
  • Five trends explore the decline of customer obsession, the influence of generative AI, the stagnation of creativity, the balance of tech benefits and burden, and people’s new life goals.
  • Opportunity abounds for business and brands in the coming twelve months and beyond – read Accenture Life Trends 2024 to find out more.

5 Trends

01 Where’s the love?
Necessary cuts across enterprises have shunted customer obsession down the priority list—and customers are noticing.
02 The great interface shift
Generative AI is upgrading people’s experience of the internet from transactional to personal, enabling them to feel more digitally understood and relevant than ever.
03 Meh-diocrity
Creativity was once about the audience, but has become dependent on playing the tech system. Is this what creative stagnation feels like?
04 Error 429: Human request limit reached
Technology feels like it’s happening to people rather than for them—is a shift beginning, where they regain agency over its influence on daily life?
05 Decade of deconstruction
Traditional life paths are being rerouted by new limitations, necessities and opportunities, significantly shifting demographics.


Shocking AI Statistics in 2023 — from techthatmatters.beehiiv.com by Harsh Makadia

  1. ChatGPT reached 100 million users faster than any other app. By February 2023, the chat.openai.com website saw an average of 25 million daily visitors. How can this rise in AI usage benefit your business’s functions?
  2. 45% of executives say the popularity of ChatGPT has led them to increase investment in AI. If executives are investing in AI personally, then how will their beliefs affect corporate investment in AI to drive automation further? Also, how will this affect the amount of workers hired to manage AI systems within companies?
  3. eMarketer predicts that in 2024 at least 20% of Americans will use ChatGPT monthly and that a fifth of them will be 25-to-34-year-olds in the workforce. Does this mean that more young workers are using AI?
  4. …plus 10 more stats

People are speaking with ChatGPT for hours, bringing 2013’s Her closer to reality — from arstechnica.com by Benj Edwards
Long mobile conversations with the AI assistant using AirPods echo the sci-fi film.

It turns out that Willison’s experience is far from unique. Others have been spending hours talking to ChatGPT using its voice recognition and voice synthesis features, sometimes through car connections. The realistic nature of the voice interaction feels largely effortless, but it’s not flawless. Sometimes, it has trouble in noisy environments, and there can be a pause between statements. But the way the ChatGPT voices simulate vocal tics and noises feels very human. “I’ve been using the voice function since yesterday and noticed that it makes breathing sounds when it speaks,” said one Reddit user. “It takes a deep breath before starting a sentence. And today, actually a minute ago, it coughed between words while answering my questions.”

From DSC:
Hmmmmmmm….I’m not liking the sound of this on my initial take of it. But perhaps there are some real positives to this. I need to keep an open mind.


Working with AI: Two paths to prompting — from oneusefulthing.org by Ethan Mollick
Don’t overcomplicate things

  1. Conversational Prompting [From DSC: i.e., keep it simple]
  2. Structured Prompting

For most people, [Conversational Prompting] is good enough to get started, and it is the technique I use most of the time when working with AI. Don’t overcomplicate things, just interact with the system and see what happens. After you have some experience, however, you may decide that you want to create prompts you can share with others, prompts that incorporate your expertise. We call this approach Structured Prompting, and, while improving AIs may make it irrelevant soon, it is currently a useful tool for helping others by encoding your knowledge into a prompt that anyone can use.
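To make the Structured Prompting idea concrete, here is a minimal sketch of what "encoding your knowledge into a prompt that anyone can use" might look like as code. The field names and template are illustrative assumptions, not Mollick's actual format:

```python
def structured_prompt(role: str, task: str, steps: list[str], constraints: list[str]) -> str:
    """Assemble a reusable, shareable prompt that encodes expertise as
    an explicit role, task, ordered step list, and set of constraints."""
    lines = [f"You are {role}.", f"Your task: {task}", "Work through these steps:"]
    lines += [f"{i}. {step}" for i, step in enumerate(steps, start=1)]
    lines.append("Constraints: " + "; ".join(constraints))
    return "\n".join(lines)

# Example: a prompt an instructor could hand to students.
prompt = structured_prompt(
    role="a patient writing tutor",
    task="critique the essay the user pastes in",
    steps=["Summarize the thesis", "Flag unsupported claims", "Suggest one revision"],
    constraints=["never rewrite the essay wholesale", "keep feedback under 200 words"],
)
```

Anyone can then paste the generated text into a chatbot, which is the sharing step the excerpt describes.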


These fake images reveal how AI amplifies our worst stereotypes — from washingtonpost.com by Nitasha Tiku, Kevin Schaul, and Szu Yu Chen (behind paywall)
AI image generators like Stable Diffusion and DALL-E amplify bias in gender and race, despite efforts to detoxify the data fueling these results.

Artificial intelligence image tools have a tendency to spin up disturbing clichés: Asian women are hypersexual. Africans are primitive. Europeans are worldly. Leaders are men. Prisoners are Black.

These stereotypes don’t reflect the real world; they stem from the data that trains the technology. Grabbed from the internet, these troves can be toxic — rife with pornography, misogyny, violence and bigotry.

Abeba Birhane, senior advisor for AI accountability at the Mozilla Foundation, contends that the tools can be improved if companies work hard to improve the data — an outcome she considers unlikely. In the meantime, the impact of these stereotypes will fall most heavily on the same communities harmed during the social media era, she said, adding: “People at the margins of society are continually excluded.”


ChatGPT app revenue shows no signs of slowing, but some other AI apps top it — from techcrunch.com by Sarah Perez; Via AI Valley – Barsee

ChatGPT, the AI-powered chatbot from OpenAI, far outpaces all other AI chatbot apps on mobile devices in terms of downloads and is a market leader by revenue, as well. However, it’s surprisingly not the top AI app by revenue — several photo AI apps and even other AI chatbots are actually making more money than ChatGPT, despite the latter having become a household name for an AI chat experience.


ChatGPT can now analyze files you upload to it without a plugin — from bgr.com by Joshua Hawkins; via Superhuman

According to new reports, OpenAI has begun rolling out a more streamlined approach to how people use ChatGPT. The new system will allow the AI to choose a model automatically, letting you run Python code, open a web browser, or generate images with DALL-E without extra interaction. Additionally, ChatGPT will now let you upload and analyze files.


Nearly half of CEOs believe that AI not only could—but should—replace their own jobs — from finance.yahoo.com by Orianna Rosa Royle; via Harsh Makadia

Researchers from edX, an education platform for upskilling workers, conducted a survey involving over 1,500 executives and knowledge workers. The findings revealed that nearly half of CEOs believe AI could potentially replace “most” or even all aspects of their own positions.

What’s even more intriguing is that 47% of the surveyed executives not only see the possibility of AI taking over their roles but also view it as a desirable development.

Why? Because they anticipate that AI could rekindle the need for traditional leadership for those who remain.

“Success in the CEO role hinges on effective leadership, and AI can liberate time for this crucial aspect of their role,” Andy Morgan, Head of edX for Business comments on the findings.

“CEOs understand that time saved on routine tasks can stimulate innovation, nurture creativity, and facilitate essential upskilling for their teams, fostering both individual and organizational success,” he adds.

But CEOs already know this: EdX’s research echoed that 79% of executives fear that if they don’t learn how to use AI, they’ll be unprepared for the future of work.

From DSC:
By the way, my first knee-jerk reaction to this was:

WHAT?!?!?!? And this from people who earn WAAAAY more than the average employee, no doubt.

After a chance to calm down a bit, I see that the article does say that CEOs aren’t going anywhere. Ah…ok…got it.


Strange Ways AI Disrupts Business Models, What’s Next For Creativity & Marketing, Some Provocative Data — from implications.com by Scott Belsky
In this edition, we explore some of the more peculiar ways that AI may change business models as well as recent releases for the world of creativity and marketing.

Time-based business models are ripe for disruption via a value-based overhaul of compensation. Today, as most designers, lawyers, and many trades in between continue to charge by the hour, the AI-powered step-function improvements in workflows are liable to shake things up.

In such a world, time-based billing simply won’t work anymore unless the value derived from these services is also compressed by a multiple (unlikely). The classic time-based model of billing for lawyers, designers, consultants, freelancers etc is officially antiquated. So, how might the value be captured in a future where we no longer bill by the hour? …

The worlds of creativity and marketing are rapidly changing – and rapidly coming together

#AI #businessmodels #lawyers #billablehour

It becomes clear that just prompting to get images is a rather elementary use case of AI, compared to the ability to place and move objects, change perspective, adjust lighting, and many other actions using AI.



AlphaFold DB provides open access to over 200 million protein structure predictions to accelerate scientific research.

AlphaFold is an AI system developed by DeepMind that predicts a protein’s 3D structure from its amino acid sequence. It regularly achieves accuracy competitive with experiment.


After 25 years of growth for the $68 billion SEO industry, here’s how Google and other tech firms could render it extinct with AI — from fortune.com by Ravi Sen and The Conversation

But one other consequence is that I believe it may destroy the $68 billion search engine optimization industry that companies like Google helped create.

For the past 25 years or so, websites, news outlets, blogs and many others with a URL that wanted to get attention have used search engine optimization, or SEO, to “convince” search engines to share their content as high as possible in the results they provide to readers. This has helped drive traffic to their sites and has also spawned an industry of consultants and marketers who advise on how best to do that.

As an associate professor of information and operations management, I study the economics of e-commerce. I believe the growing use of generative AI will likely make all of that obsolete.


ChatGPT Plus members can upload and analyze files in the latest beta — from theverge.com by Wes Davis
ChatGPT Plus members can also use modes like Browse with Bing without manually switching, letting the chatbot decide when to use them.

OpenAI is rolling out new beta features for ChatGPT Plus members right now. Subscribers have reported that the update includes the ability to upload files and work with them, as well as multimodal support. Basically, users won’t have to select modes like Browse with Bing from the GPT-4 dropdown — it will instead guess what they want based on context.


Google agrees to invest up to $2 billion in OpenAI rival Anthropic — from reuters.com by Krystal Hu

Oct 27 (Reuters) – Alphabet’s (GOOGL.O) Google has agreed to invest up to $2 billion in the artificial intelligence company Anthropic, a spokesperson for the startup said on Friday.

The company has invested $500 million upfront into the OpenAI rival and agreed to add $1.5 billion more over time, the spokesperson said.

Google is already an investor in Anthropic, and the fresh investment would underscore a ramp-up in its efforts to better compete with Microsoft (MSFT.O), a major backer of ChatGPT creator OpenAI, as Big Tech companies race to infuse AI into their applications.




IBM Commits to Train 2 Million AI Learners for Free by 2026 — from campustechnology.com by Kate Lucariello

As part of the 78th General Assembly of the United Nations in 2023, IBM announced it will train, for free, 2 million learners in artificial intelligence worldwide within the next three years, building on its existing commitment to skill 30 million learners by 2030.

The free program, called IBM SkillsBuild, will use its career-building platforms to partner with universities around the world to develop and make available new generative AI courses, with a significant focus on adult learners in underrepresented communities.



Goldman Sachs CIO is ‘anxious to see results’ from GenAI, but moving carefully — from venturebeat.com by Sharon Goldman

Consider Marco Argenti, CIO at Goldman Sachs, who told me in a recent interview that the leading global investment banking, securities and investment management firm has, nearly a year after ChatGPT was released, put exactly zero generative AI use cases into production. Instead, the company is “deeply into experimentation” and has a “high bar” of expectation before deployment. Certainly this is a highly regulated company, so careful deployment must always be the norm. But Goldman Sachs is also far from new to implementing AI-driven tools, and it is still treading slowly and carefully.


The Learning & Employment Records (LER) Ecosystem Map — with thanks to Melanie Booth on LinkedIn for this resource
Driving Opportunity and Equity Through Learning & Employment Records

The Learning & Employment Records (LER) Ecosystem Map

Imagine A World Where…

  • Everyone is empowered to access learning and earning opportunities based on what they know and can do, whether those skills and abilities are obtained through degrees, work experiences, or independent learning.
  • People can capture and communicate the skills and competencies they’ve acquired across their entire learning journey — from education, experience and service — with more ease, confidence, and clarity than a traditional resume.
  • Learners and earners control their information and can curate their skills to take advantage of every opportunity they are truly qualified to pursue, opening up pathways that help address systemic inequities.
  • Employers can tap into a wider talent pool and better match applicants to opportunities with verifiable credentials that represent skills, competencies, and achievements.

This is the world that we believe can be created by Learning and Employment Records (LERs), i.e., digital records of learning and work experiences that are linked to and controlled by learners and earners. An interoperable, well-governed LER ecosystem has the potential to transform the future of work so that it is more equitable, efficient, and effective for everyone involved: individuals, training and education providers, employers, and policymakers.
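As a toy illustration of the "verifiable credentials" idea in the bullets above, an issuer-signed record that a learner controls and an employer can check could be sketched as follows. This is a deliberately simplified stand-in using a shared HMAC key; real LER ecosystems build on open standards such as W3C Verifiable Credentials with public-key signatures:

```python
import hashlib
import hmac
import json

def issue_record(issuer_key: bytes, learner: str, skill: str, evidence: str) -> dict:
    """Create a learning record and attach the issuer's signature over its claims."""
    record = {"learner": learner, "skill": skill, "evidence": evidence}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
    return record

def verify_record(issuer_key: bytes, record: dict) -> bool:
    """Recompute the signature over the claims; any tampering breaks the match."""
    claims = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

The key property the excerpt describes is visible even in this sketch: the learner carries the record, and anyone with the means to verify it can trust the claims without calling the issuer.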


Also per Melanie Booth, see:


Creating an ‘ecosystem’ to close the Black talent gap in technology — from mckinsey.com (emphasis below from DSC)

Chris Perkins, associate partner, McKinsey: Promoting diversity in tech is more nuanced than driving traditional diversity initiatives. This is primarily because of the specialized hard and soft skills required to enter tech-oriented professions and succeed throughout their careers. Our research shows us that various actors, such as nonprofits, for-profits, government agencies, and educational institutions are approaching the problem in small pockets. Could we help catalyze an ecosystem with wraparound support across sectors?

To design this, we have to look at the full pipeline and its “leakage” points, from getting talent trained and in the door all the way up to the C-suite. These gaps are caused by lack of awareness and support in early childhood education through college, and lack of sponsorship and mentorship in early- and mid- career positions.


Next month Microsoft Corp. will start making its artificial intelligence features for Office widely available to corporate customers. Soon after, that will include the ability for it to read your emails, learn your writing style and compose messages on your behalf.

From DSC:
As readers of this blog know, I’m generally pro-technology. I see most technologies as tools — which can be used for good or for ill. So I will post items both pro and con concerning AI.

But outsourcing email communications to AI isn’t on my wish list or to-do list.


Chatbot hallucinations are poisoning web search — from link.wired.com by Will Knight

The age of generative AI threatens to sprinkle epistemological sand into the gears of web search by fooling algorithms designed for a time when the web was mostly written by humans.


Meta Is Paying Creators Millions for AI Chatbots — from bensbites.beehiiv.com

Meta is shelling out millions to get celebrities to license their likenesses for AI characters in a bid to draw users to its platforms.

Why should I care?
Meta is still all-in on its vision for the metaverse and AI, despite its recent struggles. Meta seems willing to pay top dollar to partner with big names who can draw their massive audiences to use the AI avatars. If the celebrity avatars take off, they could be a blueprint for how creators monetize their brands in virtual worlds. There’s also a chance Meta pulls the plug on funding if user traction is low, just as it did with Facebook Watch originals.


The Post-AI Workplace — from drphilippahardman.substack.com by Dr. Philippa Hardman
SAP SuccessFactors’ new product offers the most comprehensive insight yet into the post-AI workplace & workforce

Skills Maps
AI will be used to categorise, track and analyse employee skills and competencies. This will enable orgs to build a clear idea of pockets of talent and areas in need of focus, providing HR, L&D professionals & managers with the opportunity to take a data-driven approach to talent development and capability building.

Roles Impacted: HR Analysts, Managers, Learning & Development Professionals
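A minimal sketch of the skills-map idea described above: aggregate per-employee skill records into org-level pockets of talent and gaps. All names and the threshold here are invented for illustration; a real product would feed an AI categorizer, not hand-tagged lists:

```python
from collections import defaultdict

def skills_map(employees: list[dict]) -> dict:
    """Count how many people hold each skill, so HR and L&D can spot
    pockets of talent across the organization."""
    counts = defaultdict(int)
    for person in employees:
        for skill in person["skills"]:
            counts[skill] += 1
    return dict(counts)

def gaps(employees: list[dict], needed: list[str], threshold: int = 2) -> list[str]:
    """Skills the org needs but fewer than `threshold` people hold."""
    have = skills_map(employees)
    return sorted(s for s in needed if have.get(s, 0) < threshold)
```

The data-driven step is the comparison between what the org has and what it needs; everything upstream (extracting skills from work artifacts) is where the AI would sit.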



More than 40% of labor force to be affected by AI in 3 years, Morgan Stanley forecasts — from cnbc.com by Samantha Subin

Analyst Brian Nowak estimates that the AI technology will have a $4.1 trillion economic effect on the labor force — or affect about 44% of labor — over the next few years by changing input costs, automating tasks and shifting the ways companies obtain, process and analyze information. Today, Morgan Stanley pegs the AI effect at $2.1 trillion, affecting 25% of labor.

“We see generative AI expanding the scope of business processes that can be automated,” he wrote in a Sunday note. “At the same time, the input costs supporting GenAI functionality are rapidly falling, enabling a strongly expansionary impact to software production. As a result, Generative AI is set to impact the labor markets, expand the enterprise software TAM, and drive incremental spend for Public Cloud services.”
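The two figures quoted above imply roughly a doubling of the estimated dollar effect, and a quick back-of-the-envelope check (using only the numbers in the excerpt) shows the forecast is deeper as well as broader:

```python
# Morgan Stanley's figures as quoted: $2.1T affecting 25% of labor today,
# $4.1T affecting 44% of labor over the next few years.
current_effect, current_share = 2.1e12, 0.25
forecast_effect, forecast_share = 4.1e12, 0.44

growth = forecast_effect / current_effect - 1            # ~95% more in dollar terms
share_growth = forecast_share / current_share - 1        # ~76% more of the labor force
per_point_now = current_effect / (current_share * 100)   # ~$84B per percentage point
per_point_later = forecast_effect / (forecast_share * 100)  # ~$93B per percentage point
```

Since the per-percentage-point figure also rises, the estimate implies a larger impact per affected worker, not just wider coverage.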

Speaking of the changes in the workplace, also see:


Everyday Media Literacy: An Analog Guide for Your Digital Life — from routledge.com by Sue Ellen Christian

In this second edition, award-winning educator Sue Ellen Christian offers students an accessible and informed guide to how they can consume and create media intentionally and critically.

The textbook applies media literacy principles and critical thinking to the key issues facing young adults today, from analyzing and creating media messages to verifying information and understanding online privacy. Through discussion prompts, writing exercises, key terms, and links, readers are provided with a framework from which to critically consume and create media in their everyday lives. This new edition includes updates covering privacy aspects of AI, VR and the metaverse, and a new chapter on digital audiences, gaming, and the creative and often unpaid labor of social media and influencers. Chapters examine news literacy, online activism, digital inequality, social media and identity, and global media corporations, giving readers a nuanced understanding of the key concepts at the core of media literacy. Concise, creative, and curated, this book highlights the cultural, political, and economic dynamics of media in contemporary society, and how consumers can mindfully navigate their daily media use.

This textbook is perfect for students and educators of media literacy, journalism, and education looking to build their understanding in an engaging way.


Is Your AI Model Going Off the Rails? There May Be an Insurance Policy for That — from wsj.com by Belle Lin; via Brainyacts
As generative AI creates new risks for businesses, insurance companies sense an opportunity to cover the ways AI could go wrong

The many ways a generative artificial intelligence project can go off the rails pose an opportunity for insurance companies, even as those grim scenarios keep business technology executives up at night.

Taking a page from cybersecurity insurance, which saw an uptick in the wake of major breaches several years ago, insurance providers have started taking steps into the AI space by offering financial protection against models that fail.

Corporate technology leaders say such policies could help them address risk-management concerns from board members, chief executives and legal departments.

© 2024 | Daniel Christian