Learners’ Edition: AI-powered Coaching, Professional Certifications + Inspiring conversations about mastering your learning & speaking skills — from linkedin.com by Tomer Cohen

Excerpts:

1. Your own AI-powered coaching
Learners can go into LinkedIn Learning and ask a question or explain a challenge they are currently facing at work (we’re focusing on areas within Leadership and Management to start). AI-powered coaching will pull from the collective knowledge of our expansive LinkedIn Learning library and, instantaneously, offer advice, examples, or feedback that is personalized to the learner’s skills, job, and career goals.

What makes us so excited about this launch is that we can now take everything we at LinkedIn know about people’s careers and how they navigate them, and help accelerate them with AI.

3. Learn exactly what you need to know for your next job
When looking for a new job, it’s often the time we think about refreshing our LinkedIn profiles. It’s also a time we can refresh our skills. And with skill sets for jobs having changed by 25% since 2015 – with the number expected to increase by 65% by 2030 – keeping our skills a step ahead is one of the most important things we can do to stand out.

There are a couple of ways we’re making it easier to learn exactly what you need to know for your next job:

When you set a job alert, in addition to being notified about open jobs, we’ll recommend learning courses and Professional Certificate offerings to help you build the skills needed for that role.

When you view a job, we recommend specific courses to help you build the required skills. If you have LinkedIn Learning access through your company or as part of a Premium subscription, you can follow the skills for the job; that way, we can let you know when we launch new courses for those skills and recommend content on LinkedIn that better aligns with your career goals.


2024 Edtech Predictions from Edtech Insiders — from edtechinsiders.substack.com by Alex Sarlin, Ben Kornell, and Sarah Morin
Omni-modal AI, edtech funding prospects, higher ed wake up calls, focus on career training, and more!

Alex: I talked to the 360 Learning folks at one point and they had this really interesting epiphany, which is basically that it’s been almost impossible for every individual company in the past to create a hierarchy of skills and a hierarchy of positions and actually organize what it looks like for people to move around and upskill within the company and get to new paths.

Until now. AI actually can do this very well. It can take not only job description data but also actual performance data. It can look at what people do on a daily basis, back-fit that to training, and create automatic training based on it.
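
To make that idea a bit more concrete, here is a minimal, hypothetical sketch in Python of the kind of mapping described above: pull skills out of a job description and back-fit them to training. Everything in it (the skill keywords, course names, and function names) is invented for illustration; a production system would use an LLM or embeddings rather than keyword matching, and would fold in performance data as well.

```python
# Hypothetical sketch: map a free-text job description to a small skills catalog,
# then to the internal courses that teach those skills. All data here is invented.

SKILL_KEYWORDS = {
    "sql": ["sql", "queries", "database"],
    "people management": ["lead a team", "direct reports", "coaching"],
    "data visualization": ["dashboards", "tableau", "charts"],
}

COURSES_BY_SKILL = {
    "sql": ["SQL Fundamentals", "Advanced Query Optimization"],
    "people management": ["First-Time Manager Essentials"],
    "data visualization": ["Storytelling with Dashboards"],
}

def extract_skills(job_description: str) -> list[str]:
    """Return the catalog skills whose keywords appear in the job description."""
    text = job_description.lower()
    return [skill for skill, words in SKILL_KEYWORDS.items()
            if any(w in text for w in words)]

def recommend_training(job_description: str) -> dict[str, list[str]]:
    """Back-fit the job description to training: skill -> suggested courses."""
    return {skill: COURSES_BY_SKILL.get(skill, [])
            for skill in extract_skills(job_description)}

if __name__ == "__main__":
    jd = "Lead a team of analysts, own the SQL reporting layer, and build dashboards."
    print(recommend_training(jd))
```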

From DSC:
I appreciated how they addressed K-12, higher ed, and the workforce all in one posting. Nice work. We don’t need siloes. We need more overall design thinking re: our learning ecosystems — as well as more collaborations. We need more on-ramps and pathways in a person’s learning/career journey.

 

The biggest things that happened in AI this year — from superhuman.ai by Zain Kahn

January:

  • Microsoft raises eyebrows with a huge $10 Billion investment in OpenAI.

February:

  • Meta launches LLaMA, their open-source rival to OpenAI’s models.
  • OpenAI announces ChatGPT Plus, a paid version of their chatbot.
  • Microsoft announces a new AI-powered Bing Search.

March:

  • OpenAI announces the powerful GPT-4 model, still considered to be the gold standard.
  • Midjourney releases V5, which brings AI-generated images one step closer to photorealism.
  • Microsoft launches Copilot for Microsoft 365.
  • Google launches Bard, its rival to ChatGPT.

…and more


AI 2023: A Year in Review — from stefanbauschard.substack.com by Stefan Bauschard
2023 developments in AI and a hint of what they are building toward

Some of the items that Stefan includes in his posting include:

  • ChatGPT and other language models that generate text.
  • Image generators.
  • Video generators.
  • AI models that can read, hear, and speak.
  • AI models that can see.
  • Improving models.
  • “Multimodal” models.
  • Training on specific content.
  • Reasoning & planning.
  • …and several others

The Dictionary.com Word of the Year is “hallucinate.” — from content.dictionary.com by Nick Norlen and Grant Barrett; via The Rundown AI

hallucinate
[ huh-loo-suh-neyt ]

verb
(of artificial intelligence) to produce false information contrary to the intent of the user and present it as if true and factual. Example: When chatbots hallucinate, the result is often not just inaccurate but completely fabricated.


Soon, every employee will be both AI builder and AI consumer — from zdnet.com by Joe McKendrick, via Robert Gibson on LinkedIn
“Standardized tools and platforms as well as advanced low- or no-code tech may enable all employees to become low-level engineers,” suggests a recent report.

The time could be ripe for a blurring of the lines between developers and end-users, a recent report out of Deloitte suggests. It makes more business sense to focus on bringing in citizen developers for ground-level programming, versus seeking superstar software engineers, the report’s authors argue, or — as they put it — “instead of transforming from a 1x to a 10x engineer, employees outside the tech division could be going from zero to one.”

Along these lines, see:

  • TECH TRENDS 2024 — from deloitte.com
    Six emerging technology trends demonstrate that in an age of generative machines, it’s more important than ever for organizations to maintain an integrated business strategy, a solid technology foundation, and a creative workforce.

UK Supreme Court rules AI is not an inventor — from theverge.com by Emilia David

The ruling follows a similar decision denying patent registrations naming AI as creators.

The UK Supreme Court ruled that AI cannot get patents, declaring it cannot be named as an inventor of new products because the law considers only humans or companies to be creators.


The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work — from nytimes.com by Michael M. Grynbaum and Ryan Mac

The New York Times sued OpenAI and Microsoft for copyright infringement on Wednesday, opening a new front in the increasingly intense legal battle over the unauthorized use of published work to train artificial intelligence technologies.

The suit does not include an exact monetary demand. But it says the defendants should be held responsible for “billions of dollars in statutory and actual damages” related to the “unlawful copying and use of The Times’s uniquely valuable works.” It also calls for the companies to destroy any chatbot models and training data that use copyrighted material from The Times.

On this same topic, also see:


Apple’s iPhone Design Chief Enlisted by Jony Ive, Sam Altman to Work on AI Devices — from bloomberg.com by Mark Gurman (behind paywall)

  • Design executive Tang Tan is set to leave Apple in February
  • Tan will join Ive’s LoveFrom design studio, work on AI project

AI 2023: Chatbots Spark New Tools — from heatherbcooper.substack.com by Heather Cooper

ChatGPT and Other Chatbots
The arrival of ChatGPT sparked tons of new AI tools and changed the way we thought about using a chatbot in our daily lives.

Chatbots like ChatGPT, Perplexity, Claude, and Bing Chat can help content creators by quickly generating ideas, outlines, drafts, and full pieces of content, allowing creators to produce more high-quality content in less time.

These AI tools boost efficiency and creativity in content production across formats like blog posts, social captions, newsletters, and more.


Microsoft’s next Surface laptops will reportedly be its first true ‘AI PCs’ — from theverge.com by Emma Roth
Next year’s Surface Laptop 6 and Surface Pro 10 will feature Arm and Intel options, according to Windows Central.

Microsoft is getting ready to upgrade its Surface lineup with new AI-enabled features, according to a report from Windows Central. Unnamed sources told the outlet the upcoming Surface Pro 10 and Surface Laptop 6 will come with a next-gen neural processing unit (NPU), along with Intel and Arm-based options.


How one of the world’s oldest newspapers is using AI to reinvent journalism — from theguardian.com by Alexandra Topping
Berrow’s Worcester Journal is one of several papers owned by the UK’s second biggest regional news publisher to hire ‘AI-assisted’ reporters

With the AI-assisted reporter churning out bread and butter content, other reporters in the newsroom are freed up to go to court, meet a councillor for a coffee or attend a village fete, says the Worcester News editor, Stephanie Preece.

“AI can’t be at the scene of a crash, in court, in a council meeting, it can’t visit a grieving family or look somebody in the eye and tell that they’re lying. All it does is free up the reporters to do more of that,” she says. “Instead of shying away from it, or being scared of it, we are saying AI is here to stay – so how can we harness it?”



What to Expect in AI in 2024 — from hai.stanford.edu by
Seven Stanford HAI faculty and fellows predict the biggest stories for next year in artificial intelligence.

Topics include:

  • White Collar Work Shifts
  • Deepfake Proliferation
  • GPU Shortage
  • More Helpful Agents
  • Hopes for U.S. Regulation
  • Asking Big Questions, Applying New Policies
  • Companies Will Navigate Complicated Regulations

Addendum on 1/2/24:


 

The rise of AI fake news is creating a ‘misinformation superspreader’ — from washingtonpost.com by Pranshu Verma
AI is making it easy for anyone to create propaganda outlets, producing content that can be hard to differentiate from real news

Artificial intelligence is automating the creation of fake news, spurring an explosion of web content mimicking factual articles that instead disseminates false information about elections, wars and natural disasters.

Since May, websites hosting AI-created false articles have increased by more than 1,000 percent, ballooning from 49 sites to more than 600, according to NewsGuard, an organization that tracks misinformation.

Historically, propaganda operations have relied on armies of low-paid workers or highly coordinated intelligence organizations to build sites that appear to be legitimate. But AI is making it easy for nearly anyone — whether they are part of a spy agency or just a teenager in their basement — to create these outlets, producing content that is at times hard to differentiate from real news.


AI, and everything else — from pitch.com by Benedict Evans


Chevy Chatbots Go Rogue — from
How a customer service chatbot made a splash on social media; write your holiday cards with AI

A Chevrolet dealership’s AI chatbot, designed to assist customers in their vehicle search, became a social media sensation for all the wrong reasons. One user even convinced the chatbot to agree to sell a 2024 Chevy Tahoe for just one dollar!

This story is exactly why AI implementation needs to be approached strategically. Learning to use AI also means learning to think through the guardrails and boundaries as you build.

Here are our tips.


Rite Aid used facial recognition on shoppers, fueling harassment, FTC says — from washingtonpost.com by Drew Harwell
A landmark settlement over the pharmacy chain’s use of the surveillance technology could raise further doubts about facial recognition’s use in stores, airports and other venues

The pharmacy chain Rite Aid misused facial recognition technology in a way that subjected shoppers to unfair searches and humiliation, the Federal Trade Commission said Tuesday, part of a landmark settlement that could raise questions about the technology’s use in stores, airports and other venues nationwide.

But the chain’s “reckless” failure to adopt safeguards, coupled with the technology’s long history of inaccurate matches and racial biases, ultimately led store employees to falsely accuse shoppers of theft, leading to “embarrassment, harassment, and other harm” in front of their family members, co-workers and friends, the FTC said in a statement.


 

Regarding this Tweet on X/Twitter:


To Unleash Legal Tech, Lawyers And Engineers Need To Talk — from forbes.com by Tanguy Chau

Here, I’ll explore some ways that engineers and lawyers see the world differently based on their strengths and experiences, and I’ll explain how they can better communicate to build better software products, especially in AI, for attorneys. Ideally, this will lead to happier lawyers and more satisfied clients.


Zuputo: Africa’s first women-led legal tech startup launches — from myjoyonline.com

A groundbreaking legal tech startup, Zuputo, is set to reshape the legal landscape across Africa by making legal services more accessible, affordable, and user-friendly.

Founded by Jessie Abugre and Nana Adwoa Amponsah-Mensah, this women-led venture has become synonymous with simplicity and efficiency in legal solutions.


 

AI Is Transforming Corporate Learning Even Faster Than I Expected — from joshbersin.com by Josh Bersin

Excerpts (emphasis DSC):

Of all the domains to be impacted by AI, perhaps the biggest transformation is taking place in corporate learning. After a year of experimentation, it’s now clear that AI will revolutionize this space.

Here’s a simple example. I asked Galileo™, which is powered by 25 years of research and case studies, “How do I deal with an employee who’s always late? And please give me a narrative to help?” Rather than take me to a course on management or show me a bunch of videos, it simply answered the question. This type of interaction is where much of corporate learning is going.

In all my years as an analyst, I’ve never seen a technology with so much potential. AI will revolutionize the L&D landscape, reinventing how we do our work so L&D professionals can spend time consulting with the business.

 

An Opinionated Guide to Which AI to Use: ChatGPT Anniversary Edition — from oneusefulthing.org by Ethan Mollick
A simple answer, and then a less simple one.

If you are at all interested in generative AI and understanding what it can do and why it matters, you should just get access to OpenAI’s GPT-4 in as unadulterated and direct a way as possible. Everything else comes after that.

Now, to be clear, this is not the free ChatGPT, which uses GPT-3.5.



1. America’s Next Top Model: LLMs in Educational Settings

  • PDF
    Topics Discussed:
    Need for a Comprehensive Student-Centric Approach
    Collaboration between EdTech Companies and Educators
    Personalized Learning Orchestration
    Innovation and Agility of Startups vs. Resources of Big Tech
    The Essential Role of AI in Transforming Education
  • Video recording from Edtech Insiders

2. Hello, Mr. Chips: AI Teaching Assistants, Tutors, Coaches and Characters

  • PDF
    Topics discussed:
    Engagement and Co-Creation
    Educator Skills and AI Implementation
    Teacher Empowerment and Student Creativity
    Efficacy and Ethical Concerns
  • Video recording from Edtech Insiders

He Was Falsely Accused of Using AI. Here’s What He Wishes His Professor Did Instead | Tech & Learning — from techlearning.com by Erik Ofgang
When William Quarterman was accused of submitting an AI-generated midterm essay, he started having panic attacks. He says his professor should have handled the situation differently.


Teaching: Practical AI strategies for the classroom — from chronicle.com by Beckie Supiano and Luna Laliberte

Here are several strategies you can try.

  • Quick Hits: Several presenters suggested exercises that can be quick, easy, and fun for students. Invite your class to complete a Mad Libs using ChatGPT. It’s a playful way to leverage ChatGPT’s ability to predict the next word, giving students insight into how generative AI works on a fundamental level. You can also have your students use ChatGPT to rewrite their own writing in the tone and style of their favorite writers. This exercise demonstrates AI’s ability to mimic style and teaches students about adopting different tones in writing. (A minimal next-word-prediction sketch follows this list.)
  • Vetting Sources
  • Grade ChatGPT
  • Lead by Example  
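
For instructors who want to show students what “predict the next word” actually looks like under the hood, here is a minimal sketch in Python. It uses the small, openly available GPT-2 model as a stand-in for the far larger models behind ChatGPT; the prompt and all details here are illustrative only (it assumes the torch and transformers packages are installed).

```python
# Show the model's top guesses for the next word, given a prompt.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The students walked into the library and opened their"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, sequence_length, vocab_size)

next_token_probs = logits[0, -1].softmax(dim=-1)   # distribution over the next token
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r:>12}  p={prob.item():.2f}")
```

Running it a few times with different prompts makes the “it’s just predicting the next word” point far more tangible than a definition does.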

Embracing Artificial Intelligence in the Classroom — from gse.harvard.edu; via Alex Webb at Bloomberg
Generative AI tools can reflect our failure of imagination and that is when the real learning starts


Class Disrupted S5 E3: The Possibilities & Challenges of Artificial Intelligence in the Classroom — from the74million.org by Michael B. Horn & Diane Tavenner
AI expert and Minerva University senior Irhum Shafkat joins Michael and Diane to discuss where the technology is going.


Schools urged to teach children how to use AI from age of 11 — from news.sky.com by Tom Acres
Artificial intelligence tools such as ChatGPT are being used by children to help with homework and studying – and there are calls for it to become a central part of the school curriculum.

Excerpt (emphasis DSC):

Schools have been urged to teach children how to use AI from the age of 11 as the technology threatens to upend the jobs market.

Rather than wait for pupils to take up computer science at GCSE, the British Computer Society (BCS, The Chartered Institute for IT) said all youngsters need to learn to work with tools like ChatGPT.

The professional body for computing wants a digital literacy qualification to be introduced, with a strong focus on artificial intelligence and its practical applications.

An understanding of AI should also become a key part of teacher training and headteacher qualifications, it added.


Improving Your Teaching With an AI Coach — from edutopia.org by Stephen Noonoo
New tools are leveraging artificial intelligence to help teachers glean insights into how they interact with students.


COMMENTARY
Embracing artificial intelligence in the workforce starts with higher education — from nebraskaexaminer.com by Jaci Lindburg and Cassie Mallette

When students can understand the benefit of using it effectively, and learn how to use AI to brainstorm, problem solve, and think through decision making scenarios, they can work more efficiently, make difficult decisions faster and improve a company’s production output.

It is through embracing the power and potential of AI that we can equip our students with future-ready skills. Through intentional teaching strategies that guide students to think creatively about how to use AI in their work, higher education can ensure that students are on the cutting edge in terms of using advancing technologies and being workforce ready upon graduation.

Also see:

The ChatGPT/AI Prompt Book is a resource for the UNO community that demonstrates how students can use AI in their studies and how faculty can incorporate it into their courses and daily work. The goal: to teach individuals how to be better prompt engineers and develop the skills needed to utilize this emerging technology as one of the many tools available to them in the workforce.


Two Ideas for Teaching with AI: Course-Level GPTs and AI-Enhanced Polling — from derekbruff.org by Derek Bruff

Excerpt (emphasis DSC):

Might we see course-level GPTs, where the chatbot is familiar with the content in a particular course and can help students navigate and maybe even learn that material? The answer is yes, and they’re already here. Top Hat recently launched Ace, an AI-powered learning assistant embedded in its courseware platform. An instructor can activate Ace, which then trains itself on all the learning materials in the course. Students can then use Ace as a personal tutor of sorts, asking it questions about course material.

Ace from Top Hat -- empowering educators and students with a human-centered application of AI
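
To illustrate the course-level idea in the excerpt above, here is a minimal, hypothetical retrieval sketch in Python: find the passages in a course’s own materials that best match a student’s question and wrap them into a prompt for a chat model. This is not how Ace itself is built (those details aren’t public); it only shows the retrieve-then-answer pattern, and the course materials, question, and function names are all invented (it assumes scikit-learn is installed).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

COURSE_MATERIALS = [  # in practice: syllabus, slides, readings, lecture transcripts
    "Week 3 covers photosynthesis; the light reactions occur in the thylakoid membrane.",
    "The midterm is on October 14 and covers weeks 1 through 6.",
    "Office hours are Tuesdays 2-4pm in Science Hall 210.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(COURSE_MATERIALS)

def build_prompt(question: str, k: int = 2) -> str:
    """Pick the k passages most similar to the question and wrap them in a prompt."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_vectors)[0]
    best = scores.argsort()[::-1][:k]
    context = "\n".join(COURSE_MATERIALS[i] for i in best)
    return (f"Answer using only this course material:\n{context}\n\n"
            f"Student question: {question}")

print(build_prompt("When is the midterm and what does it cover?"))
# The resulting prompt would then be sent to a chat-completion model of your choice.
```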


Reflections On AI Policies in Higher Education — from jeppestricker.substack.com by Jeppe Klitgaard Stricker
And Why First-Hand Generative AI Experience is Crucial for Leadership

AI is already showing far-reaching consequences for societies and educational institutions around the world. It is my contention that it is impossible to set strategic direction for AI in higher education if you haven’t yet tried working with the technology yourself. The first wave of overwhelming, profound surprise simply cannot be outsourced to other parts of the organization.

I mention this because the need for both strategic and operational guidance for generative AI is growing rapidly in higher education institutions. Without the necessary – and quite basic – personal generative AI experience, however, it becomes difficult for leadership to meaningfully direct and anchor AI in the organization.

And without clear guidance in place, uncertainty arises for all internal stakeholders about expectations and appropriate uses of AI. This makes developing an institutional AI policy not just sensible, but necessary.




A free report for educational leaders and policymakers who want to understand the AI World — from stefanbauschard.substack.com by Stefan Bauschard
And the immediate need for AI literacy

Beyond synthesizing many ideas from educational theory and AI deep learning, the report provides a comprehensive overview of developments in the field of AI, including current “exponential advances.” It’s updated through the release of Gemini and Meta’s new “Seamless” translation technology that arguably eliminates the need for most translators, and probably even the need to learn to speak another language for most purposes.

We were a mere 18 hours too late to cover an entire newscast (and news channel) that is produced with AI in a way that creates representations indistinguishable from what is “real” (see below), though it supercharges our case for comprehensive and immediate AI literacy.

We also provide several suggestions and a potential roadmap for schools to help students prepare for an AI World where computers are substantially smarter than them in many ways.

 


Google NotebookLM (experiment)

From DSC:
Google hopes that this personalized AI/app will help people with their note-taking, thinking, brainstorming, learning, and creating.

It reminds me of what Derek Bruff was just saying in regards to Top Hat’s Ace product being able to work with a much narrower set of information — i.e., a course — and to be almost like a personal learning assistant for the course you are taking. (As Derek mentions, this depends upon how extensively one uses the CMS/LMS in the first place.)

 

Can new AI help to level up the scales of justice?


From DSC:
As you can see from the above items, Mr. David Goodrich, a great human being and a fellow Instructional Designer, had a thoughtful comment and question regarding the source of my hope that AI — and other forms of legaltech — could significantly provide more access to justice here in America. Our civil justice system has some serious problems — involving such areas as housing, employment, healthcare, education, families, and more.

I’d like to respond to that question here.

First of all, I completely get what David is saying. I, too, have serious doubts that our horrible access to justice (#A2J) situation will get better. Why? Because:

  • Many people working within the legal field like it this way, as they are all but assured victory in most of the civil lawsuits out there.
  • The Bar Associations of most of the states do not support changes that would threaten their incomes/livelihoods. This is especially true in California and Florida.
  • The legal field in general is not composed, for the most part, of highly innovative people who make things happen for the benefit of others. For example, the American Bar Association is 20+ years behind in terms of providing the level of online-based learning opportunities that they should be offering. They very tightly control how legal education is delivered in the U.S.

Here are several areas that provide me with hope for our future


There are innovative individuals out there fighting for change.
And though some of these individuals don’t reside in the United States, their work still impacts many here in America. For examples, see:

There are innovative new companies, firms, and other types of organizations out there fighting for change. For examples:

There are innovative new tools and technologies out there such as:

  • Artificial Intelligence (AI) and Machine Learning (ML) 
    • AI and machine learning remain pivotal in legaltech, especially for in-house lawyers who deal with vast quantities of contracts and complex legal matters. In 2024, these technologies will be integral for legal research, contract review, and the drafting of legal documents. Statistics from the Tech & the Law 2023 Report state more than three in five corporate legal departments (61%) have adopted generative AI in some capacity, with 7% actively using generative AI in their day-to-day work. With constant improvements to LLM (Large Language Models) by the big players, i.e. OpenAI, Google, and Microsoft (via OpenAI), 2024 will see more opportunities open and efficiencies gained for legal teams. (Source)
    • From drafting contracts to answering legal questions and summarising legal issues, AI is revolutionising the legal profession and although viewed with a sceptical eye by some law firms, is generally perceived to be capable of bringing huge benefits. (Source)
    • Legal bots like Harvey will assist lawyers with discovery.
  • Technology-assisted review (TAR) in e-discovery
  • Due to COVID-19, virtual courtrooms were set up, and just as with virtual/online-based learning in higher education, many judges, litigants, lawyers, and staff appreciated the time savings and productivity gains. Along these lines, see Richard Susskind’s work. [Richard] predicts a world of online courts, AI-based global legal businesses, disruptive legal technologies, liberalized markets, commoditization, alternative sourcing, simulated practice on the metaverse, and many new legal jobs. (Source)

There are innovative states out there fighting for change. For examples:

  • Utah in 2020 launched a pilot program that suspended ethics rules to allow for non-lawyer ownership of legal services providers and let non-lawyers apply for a waiver to offer certain legal services. (Source)
  • Arizona in 2021 changed its regulatory rules to allow for non-lawyer ownership. (Source)
  • Alaska with their Alaska Legal Services Corporation
  • …and others

And the last one — but certainly not the least one — is where my faith comes into play. I believe that the Triune God exists — The Father, The Son, and The Holy Spirit — and that the LORD is very active in our lives and throughout the globe. And one of the things the LORD values highly is JUSTICE. For examples:

  • Many seek an audience with a ruler, but it is from the Lord that one gets justice. Proverbs 29:26 NIV
  • These are the things you are to do: Speak the truth to each other, and render true and sound judgment in your courts; Zechariah 8:16 NIV
  • …and many others as can be seen below

The LORD values JUSTICE greatly!


So I believe that the LORD will actively help us provide greater access to justice in America.


Well…there you have it David. Thanks for your question/comment! I appreciate it!

 

From DSC:
Wouldn’t it be nice to be able to gift someone an article or access to a particular learning module? This would be the case whether you are a subscriber to that vendor/service or not. I thought about this after seeing the following email from MLive.com.

MLive.com's gift an article promotion from December 2023; one must be a subscriber though to gift an article


Not only is this a brilliant marketing move — as recipients can get an idea of the services/value offered — but it can provide concrete information to someone.

Perhaps colleges and universities should take this idea and run with it. They could gift courses and/or individual lectures! Doing so could open up some new revenue streams, aid adult learners in their lifelong learning pathways, and help people build new skills — all while helping market the colleges and universities. Involved faculty/staff members could get a percentage of the sales. Sounds like a WIN-WIN to me.

 

Exploring blockchain’s potential impact on the education sector — from e27.co by Moch Akbar Azzihad M
By the year 2024, the application of blockchain technology is anticipated to have a substantial influence on the education sector

Areas mentioned include:

  • Credentials that are both secure and able to be verified (see the hashing sketch after this list)
  • Records of accomplishments that are not hidden
  • Enrollment process that is both streamlined and automated
  • Storage of information that is both secure and decentralised
  • Financing and decentralised operations
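
As a rough illustration of the first area above, here is a minimal Python sketch of the “secure and verifiable credential” idea: the issuer publishes only a hash of the credential (for example, anchored on a blockchain), and anyone holding the credential document can recompute the hash and compare it to the published value. Real systems (such as W3C Verifiable Credentials) also add digital signatures and revocation; all of the data and names below are invented.

```python
import hashlib
import json

def fingerprint(credential: dict) -> str:
    """Canonical JSON -> SHA-256 hex digest."""
    canonical = json.dumps(credential, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Issuer side: create the credential and publish its fingerprint.
credential = {
    "learner": "Jane Doe",
    "award": "Data Analytics Certificate",
    "issuer": "Example University",
    "issued": "2024-01-02",
}
published_hash = fingerprint(credential)   # this value would be anchored on-chain

# Verifier side: given a credential document, confirm it matches the published record.
def verify(document: dict, anchored_hash: str) -> bool:
    return fingerprint(document) == anchored_hash

print(verify(credential, published_hash))                     # True
tampered = {**credential, "award": "PhD in Everything"}
print(verify(tampered, published_hash))                       # False
```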
 

Expanding Bard’s understanding of YouTube videos — via AI Valley

  • What: We’re taking the first steps in Bard’s ability to understand YouTube videos. For example, if you’re looking for videos on how to make olive oil cake, you can now also ask how many eggs the recipe in the first video requires.
  • Why: We’ve heard you want deeper engagement with YouTube videos. So we’re expanding the YouTube Extension to understand some video content so you can have a richer conversation with Bard about it.

Reshaping the tree: rebuilding organizations for AI — from oneusefulthing.org by Ethan Mollick
Technological change brings organizational change.

I am not sure who said it first, but there are only two ways to react to exponential change: too early or too late. Today’s AIs are flawed and limited in many ways. While that restricts what AI can do, the capabilities of AI are increasing exponentially, both in terms of the models themselves and the tools these models can use. It might seem too early to consider changing an organization to accommodate AI, but I think that there is a strong possibility that it will quickly become too late.

From DSC:
Readers of this blog have seen the following graphic for several years now, but there is no question that we are in a time of exponential change. One would have had an increasingly hard time arguing the opposite of this perspective during that time.

 


 



Nvidia’s revenue triples as AI chip boom continues — from cnbc.com by Jordan Novet; via GSV

KEY POINTS

  • Nvidia’s results surpassed analysts’ projections for revenue and income in the fiscal fourth quarter.
  • Demand for Nvidia’s graphics processing units has been exceeding supply, thanks to the rise of generative artificial intelligence.
  • Nvidia announced the GH200 GPU during the quarter.

Here’s how the company did, compared to the consensus among analysts surveyed by LSEG, formerly known as Refinitiv:

  • Earnings: $4.02 per share, adjusted, vs. $3.37 per share expected
  • Revenue: $18.12 billion, vs. $16.18 billion expected

Nvidia’s revenue grew 206% year over year during the quarter ending Oct. 29, according to a statement. Net income, at $9.24 billion, or $3.71 per share, was up from $680 million, or 27 cents per share, in the same quarter a year ago.



 

Amazon aims to provide free AI skills training to 2 million people by 2025 with its new ‘AI Ready’ commitment — from aboutamazon.com by Swami Sivasubramanian

Artificial intelligence (AI) is the most transformative technology of our generation. If we are going to unlock the full potential of AI to tackle the world’s most challenging problems, we need to make AI education accessible to anyone with a desire to learn.

That’s why Amazon is announcing “AI Ready,” a new commitment designed to provide free AI skills training to 2 million people globally by 2025. To achieve this goal, we’re launching new initiatives for adults and young learners, and scaling our existing free AI training programs—removing cost as a barrier to accessing these critical skills.

From DSC:
While this will likely serve Amazon just fine, it’s still an example of the leadership of a corporation seeking to help others out.

 

From DSC:
The recent drama over at OpenAI reminds me of how important a few individuals are in influencing the lives of millions of people.

The C-Suites (i.e., the Chief Executive Officers, Chief Financial Officers, Chief Operating Officers, and the like) of companies like OpenAI, Alphabet (Google), Meta (Facebook), Microsoft, Netflix, NVIDIA, Amazon, Apple, and a handful of others have enormous power. Why? Because of the enormous power and reach of the technologies that they create, market, and provide.

We need to be praying for the hearts of those in the C-Suites of these powerful vendors — as well as for their Boards.

LORD, grant them wisdom and help mold their hearts and perspectives so that they truly care about others. May their decisions not be based on making money alone…or doing something just because they can.

What happens in their hearts and minds DOES and WILL continue to impact the rest of us. And we’re talking about real ramifications here. This isn’t pie-in-the-sky thinking or ideas. This is for real. With real consequences. If you doubt that, go ask the families of those whose sons and daughters took their own lives due to what happened out on social media platforms. Disclosure: I use LinkedIn and Twitter quite a bit. I’m not bashing these platforms per se. But my point is that there are real impacts due to a variety of technologies. What goes on in the hearts and minds of the leaders of these tech companies matters.


Some relevant items:

Navigating Attention-Driving Algorithms, Capturing the Premium of Proximity for Virtual Teams, & New AI Devices — from implications.com by Scott Belsky

Excerpts (emphasis DSC):

No doubt, technology influences us in many ways we don’t fully understand. But one area where valid concerns run rampant is the attention-seeking algorithms powering the news and media we consume on modern platforms that efficiently polarize people. Perhaps we’ll call it The Law of Anger Expansion: When people are angry in the age of algorithms, they become MORE angry and LESS discriminate about who and what they are angry at.

Algorithms that optimize for grabbing attention, thanks to AI, ultimately drive polarization.

The AI learns quickly that a rational or “both sides” view is less likely to sustain your attention (so you won’t get many of those, which drives the sensation that more of the world agrees with you). But the rage-inducing stuff keeps us swiping.

Our feeds are being sourced in ways that dramatically change the content we’re exposed to.

And then these algorithms expand on these ultimately destructive emotions – “If you’re afraid of this, maybe you should also be afraid of this” or “If you hate those people, maybe you should also hate these people.”

How do we know when we’ve been polarized? This is the most important question of the day.

Whatever is inflaming you is likely an algorithm-driven expansion of anger and an imbalance of context.


 

 

Where a developing, new kind of learning ecosystem is likely headed [Christian]

From DSC:
As I’ve long stated on the Learning from the Living [Class]Room vision, we are heading toward a new AI-empowered learning platform — where humans play a critically important role in making this new learning ecosystem work.

Along these lines, I ran into this site out on X/Twitter. We’ll see how this unfolds, but it will be an interesting space to watch.

Project Chiron’s vision for education: Every child will soon have a super-intelligent AI teacher by their side. We want to make sure they instill a love of learning in children.


From DSC:
This future learning platform will also focus on developing skills and competencies. Along those lines, see:

Scale for Skills-First — from the-job.beehiiv.com by Paul Fain
An ed-tech giant’s ambitious moves into digital credentialing and learner records.

A Digital Canvas for Skills
Instructure was a player in the skills and credentials space before its recent acquisition of Parchment, a digital transcript company. But that $800M move made many observers wonder if Instructure can develop digital records of skills that learners, colleges, and employers might actually use broadly.

Ultimately, he says, the CLR approach will allow students to bring these various learning types into a coherent format for employers.

Instructure seeks a leadership role in working with other organizations to establish common standards for credentials and learner records, to help create consistency. The company collaborates closely with 1EdTech. And last month it helped launch the 1EdTech TrustEd Microcredential Coalition, which aims to increase quality and trust in digital credentials.

Paul also links to 1EdTech’s page regarding the Comprehensive Learner Record (CLR)

 


From GPTs (pt. 3) — from theneurondaily.com by Noah Edelman

BTW, here are a few GPTs worth checking out today:

  • ConvertAnything—convert images, audio, videos, PDFs, files, & more.
  • editGPT—edit any writing (like Grammarly inside ChatGPT).
  • Grimoire—a coding assistant that helps you build anything!

Some notes from Dan Fitzpatrick – The AI Educator:

Custom GPT Bots:

  • These could help with the creation of interactive learning assistants, aligned with curricula (a rough API-based sketch follows these notes).
  • They can be easily created with natural language programming.
  • Important to note: users must have a ChatGPT Plus paid account

Custom GPT Store:

  • Marketplace for sharing and accessing educational GPT tools created by other teachers.
  • A store could offer access to specialised tools for diverse learning needs.
  • A store could enhance teaching strategies when accessing proven, effective GPT applications.
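
Custom GPTs themselves are built with no code inside ChatGPT’s GPT builder, but for anyone curious what the equivalent looks like via the API, here is a rough sketch. It assumes the openai Python package, an OPENAI_API_KEY in the environment, and an invented bit of curriculum; it simply pins a curriculum-aligned system prompt to the conversation, which illustrates the idea rather than reproducing how custom GPTs are actually implemented.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CURRICULUM_NOTES = (
    "Unit 4: Fractions. Students should be able to add and compare "
    "fractions with unlike denominators."
)

messages = [
    {"role": "system",
     "content": ("You are a patient tutor for a Year 6 maths class. "
                 "Stay within this curriculum:\n" + CURRICULUM_NOTES + "\n"
                 "Ask guiding questions rather than giving answers outright.")},
    {"role": "user", "content": "I don't get how to compare 2/3 and 3/5."},
]

response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(response.choices[0].message.content)
```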

From DSC:
I appreciate Dan’s potential menu of options for a child’s education:

Monday AM: Sports club
Monday PM: Synthesis Online School AI Tutor
Tuesday AM: Music Lesson
Tuesday PM: Synthesis Online School Group Work
Wednesday AM: Drama Rehearsal
Wednesday PM: Synthesis Online School AI Tutor
Thursday AM: Volunteer work
Thursday PM: Private study
Friday AM: Work experience
Friday PM: Work experience

Our daughter has special learning needs and this is very similar to what she is doing. 

Also, Dan has a couple of videos out here at Google for Education:



Tuesday’s AI Ten for Educators (November 14) — from stefanbauschard.substack.com by Stefan Bauschard
Ten AI developments for educators to be aware of

Two boxes. In my May Cottesmore presentation, I put up two boxes:

(a) Box 1 — How educators can use AI to do what they do now (lesson plans, quizzes, tests, vocabulary lists, etc.)

(b) Box 2 — How the education system needs to change because, in the near future (sort of already), everyone is going to have multiple AIs working with them all day, and the premium on intelligence, especially “knowledge-based” intelligence, is going to decline rapidly. It’s hard to think that significant changes in the education system won’t be needed to accommodate that change.

There is a lot of focus on preparing educators to work in Box 1, which is important, if for no other reason than that they can see the power of even the current but limited technologies, but the hard questions are starting to be about Box 2. I encourage you to start those conversations, as the “ed tech” companies already are, and they’ll be happy to provide the answers and the services if you don’t want to.

Practical suggestion: create two AI teams in your institution. Team 1 works on Box 1 and Team 2 works on Box 2.

 
© 2024 | Daniel Christian