The Cambrian Explosion of AI Edtech Is Here — from edtechinsiders.substack.com by Alex Sarlin, Sarah Morin, and Ben Kornell

Excerpt:

Our AI in Edtech Takeaways

After chronicling 160+ AI tools (surely only a small fraction of the total), we’re seeing a few clear patterns among the tools that have come out so far. Here are 10 categories that are jumping out!

  • Virtual Teaching Assistants
  • Virtual Tutors
  • AI-Powered Study Tools
  • Educational Content Creation
  • Educational Search
  • Auto-generated Learning Paths
  • AI-Powered Research
  • Speak to Characters
  • Grammar and Writing
  • AI Cheating Detection

 


Ready or not, AI is here — from chronicle.com’s The Edge, by Goldie Blumenstyk

Excerpt:

“I don’t usually get worked up about announcements but I see promise in JFF’s plans for a new Center for Artificial Intelligence & the Future of Work, in no small part because the organization bridges higher ed, K-12 education, employers, and policymakers.”

Goldie Blumenstyk

Goldie’s article links to:

Jobs for the Future Launches New Center for Artificial Intelligence & the Future of Work — from archive.jff.org
The center launches as JFF releases preliminary survey data that finds a majority of workers feel they need new skills and training to prepare for AI’s future impact.

Excerpt:

BOSTON, June 14, 2023 — Jobs for the Future (JFF), a national nonprofit that drives transformation in the U.S. education and workforce systems, today announced the launch of its new Center for Artificial Intelligence & the Future of Work. This center will play an integral role in JFF’s mission and newly announced 10-year North Star goal to help 75 million people facing systemic barriers to advancement work in quality jobs. As AI’s explosive growth reshapes every aspect of how we learn, work, and live, this new center will serve as a nexus of collaboration among stakeholders from every part of the education-to-career ecosystem to explore the most promising opportunities—and profound challenges—of AI’s potential to advance an accessible and equitable future of learning and work.

 


OpenAI Considers ‘App Store’ For ChatGPT — from searchenginejournal.com; with thanks to Barsee at AI Valley for this resource
OpenAI explores launching an ‘app store’ for AI models, potentially challenging current partners and expanding customer reach.

Highlights:

  • OpenAI considers launching an ‘app store’ for customized AI chatbots.
  • This move could create competition with current partners and extend OpenAI’s customer reach.
  • Early interest from companies like Aquant and Khan Academy shows potential, but product development and market positioning challenges remain.

The Rise of AI: New Rules for Super T Professionals and Next Steps for EdLeaders — from gettingsmart.com by Tom Vander Ark

Key Points

  • The rise of artificial intelligence, especially generative AI, boosts productivity in content creation–text, code, images and increasingly video.
  • Here are six preliminary conclusions about the nature of work and learning.

Wonder Tools: AI to try — from wondertools.substack.com by Jeremy Caplan
9 playful little ways to explore AI

Excerpt:

  1. Create a personalized children’s story | Schrodi
    Collaborate with AI on a free customized, illustrated story for someone special. Give your story’s hero a name, pick a genre (e.g. comedy, thriller), choose an illustration style (e.g. watercolor, 3d animation) and provide a prompt to shape a simple story. You can even suggest a moral. After a minute, download a full-color PDF to share. Or print it and read your new mini picture book aloud.
  2. Generate a quiz | Piggy
    Put in a link, a topic, or some text and you’ll get a quiz you can share, featuring multiple-choice or true-false questions. Example: try this quick entrepreneurship quiz Piggy generated for me.

 


3 Questions for Coursera About Generative AI in Education — from insidehighered.com by Joshua Kim
How this tech will change the learning experience, course creation and more.

Excerpt (emphasis DSC):

Q: How will generative AI impact teaching and learning in the near and long term?

Baker Stein: One-on-one tutoring at scale is finally being unlocked for learners around the world. This type of quality education is no longer only available to students with the means to hire a private tutor. I’m also particularly excited to see how educators make use of generative AI tools to create courses much faster and likely at a higher quality with increased personalization for each student or even by experimenting with new technologies like extended reality. Professors will be able to put their time toward high-impact activities like mentoring, researching and office hours instead of tedious course-creation tasks. This helps open up the capacity for educators to iterate on their courses faster to keep pace with industry and global changes that may impact their field of study.

Another important use case is how generative AI can serve as a great equalizer for students when it comes to writing, especially second language learners.


From DSC:
As Rob Toews points out in his recent article at Forbes.com, we had better hope that the Taiwan Semiconductor Manufacturing Company (TSMC) builds out the capacity to make chips in various countries. Why? Because:

The following statement is utterly ludicrous. It is also true. The world’s most important advanced technology is nearly all produced in a single facility.

What’s more, that facility is located in one of the most geopolitically fraught areas on earth—an area in which many analysts believe that war is inevitable within the decade.

The future of artificial intelligence hangs in the balance.

The Taiwan Semiconductor Manufacturing Company (TSMC) makes ***all of the world’s advanced AI chips.*** Most importantly, this means Nvidia’s GPUs; it also includes the AI chips from Google, AMD, Amazon, Microsoft, Cerebras, SambaNova, Untether and every other credible competitor.

— from The Geopolitics Of AI Chips Will Define The Future Of AI at Forbes.com by Rob Toews

Little surprise, then, that Time Magazine described TSMC as “the world’s most important company that you’ve probably never heard of.”

 


From DSC:
If that facility were actually the only one and something happened to it, look at how many things would have been impacted as of early May 2023!


 

Examples of generative AI models

 

Introducing Teach AI — Empowering educators to teach w/ AI & about AI [ISTE & many others]




Also relevant/see:

 

Radar Trends to Watch: May 2023 Developments in Programming, Security, Web, and More — from oreilly.com by Mike Loukides

Excerpt:

Large language models continue to colonize the technology landscape. They’ve broken out of the AI category, and now are showing up in security, programming, and even the web. That’s a natural progression, and not something we should be afraid of: they’re not coming for our jobs. But they are remaking the technology industry.

One part of this remaking is the proliferation of “small” large language models. We’ve noted the appearance of llama.cpp, Alpaca, Vicuna, Dolly 2.0, Koala, and a few others. But that’s just the tip of the iceberg. Small LLMs are appearing every day, and some will even run in a web browser. This trend promises to be even more important than the rise of the “large” LLMs, like GPT-4. Only a few organizations can build, train, and run the large LLMs. But almost anyone can train a small LLM that will run on a well-equipped laptop or desktop.

 

Work Shift: How AI Might Upend Pay — from bloomberg.com by Jo Constantz

Excerpt:

This all means that a time may be coming when companies need to compensate star employees for their input to AI tools rather than just their output, which may not ultimately look much different from that of their AI-assisted colleagues.

“It wouldn’t be far-fetched for them to put even more of a premium on those people because now that kind of skill gets amplified and multiplied throughout the organization,” said Erik Brynjolfsson, a Stanford professor and one of the study’s authors. “Now that top worker could change the whole organization.”

Of course, there’s a risk that companies won’t heed that advice. If AI levels performance, some executives may flatten the pay scale accordingly. Businesses would then potentially save on costs — but they would also risk losing their top performers, who wouldn’t be properly compensated for the true value of their contributions under this system.


US Supreme Court rejects computer scientist’s lawsuit over AI-generated inventions — from reuters.com by Blake Brittain

Excerpt:

WASHINGTON, April 24 – The U.S. Supreme Court on Monday declined to hear a challenge by computer scientist Stephen Thaler to the U.S. Patent and Trademark Office’s refusal to issue patents for inventions his artificial intelligence system created.

The justices turned away Thaler’s appeal of a lower court’s ruling that patents can be issued only to human inventors and that his AI system could not be considered the legal creator of two inventions that he has said it generated.


Deep learning pioneer Geoffrey Hinton has quit Google — from technologyreview.com by Will Douglas Heaven
Hinton will be speaking at EmTech Digital on Wednesday.

Excerpt:

Geoffrey Hinton, a VP and engineering fellow at Google and a pioneer of deep learning who developed some of the most important techniques at the heart of modern AI, is leaving the company after 10 years, the New York Times reported today.

According to the Times, Hinton says he has new fears about the technology he helped usher in and wants to speak openly about them, and that a part of him now regrets his life’s work.

***


What Is Agent Assist? — from blogs.nvidia.com
Agent assist technology uses AI and machine learning to provide facts and make real-time suggestions that help human agents across retail, telecom and other industries conduct conversations with customers.

Excerpt:

Agent assist technology uses AI and machine learning to provide facts and make real-time suggestions that help human agents across telecom, retail and other industries conduct conversations with customers.

It can integrate with contact centers’ existing applications, provide faster onboarding for agents, improve the accuracy and efficiency of their responses, and increase customer satisfaction and loyalty.

From DSC:
Is this type of thing going to provide a learning assistant/agent as well?


A chatbot that asks questions could help you spot when it makes no sense — from technologyreview.com by Melissa Heikkilä
Engaging our critical thinking is one way to stop getting fooled by lying AI.

Excerpt:

AI chatbots like ChatGPT, Bing, and Bard are excellent at crafting sentences that sound like human writing. But they often present falsehoods as facts and have inconsistent logic, and that can be hard to spot.

One way around this problem, a new study suggests, is to change the way the AI presents information. Getting users to engage more actively with the chatbot’s statements might help them think more critically about that content.


Stability AI releases DeepFloyd IF, a powerful text-to-image model that can smartly integrate text into images — from stability.ai



New AI-Powered Denoise in Photoshop — from jeadigitalmedia.org

In its most recent update, Adobe is now using AI to Denoise, Enhance, and create Super Resolution (2x the file size of the original photo). Click here to read Adobe’s post; below are photos of how I used the new AI Denoise. The big caveat is that photos have to be shot in RAW.



In a talk from the cutting edge of technology, OpenAI cofounder Greg Brockman explores the underlying design principles of ChatGPT and demos some mind-blowing, unreleased plug-ins for the chatbot that sent shockwaves across the world. After the talk, head of TED Chris Anderson joins Brockman to dig into the timeline of ChatGPT’s development and get Brockman’s take on the risks, raised by many in the tech industry and beyond, of releasing such a powerful tool into the world.


Also relevant/see:


 




AutoGPT is the next big thing in AI — from therundown.ai by Rowan Cheung

Excerpt:

AutoGPT has been making waves on the internet recently, trending on both GitHub and Twitter. If you thought ChatGPT was crazy, AutoGPT is about to blow your mind.

AutoGPT creates AI “agents” that operate autonomously and complete tasks for you. In case you’ve missed our previous issues covering it, here’s a quick rundown:

    • It’s open-sourced [code]
    • It works by chaining together LLM “thoughts”
    • It has internet access, long-term and short-term memory, access to popular websites, and file storage
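The bullets above describe a loop: the model proposes the next “thought” or action, a tool executes it, and the result is appended to memory for the next round. A minimal Python sketch of that pattern follows — the model and tools here are hypothetical stand-ins (a real agent would call an LLM API and real tools), so treat this as an illustration of the chaining idea, not AutoGPT’s actual code:

```python
# Illustrative sketch of the "chained thoughts" agent loop.
# stub_model and run_tool are hypothetical stand-ins for an LLM call
# and the agent's tools (web search, file storage, etc.).

def stub_model(goal, memory):
    """Stand-in for an LLM call: decide the next action from goal + memory."""
    if not memory:
        return ("search", goal)            # first thought: gather information
    if len(memory) < 3:
        return ("summarize", memory[-1])   # follow-up thought: condense findings
    return ("finish", None)                # stop once enough steps have run

def run_tool(action, arg):
    """Stand-in for the agent's tools."""
    if action == "search":
        return f"results for '{arg}'"
    if action == "summarize":
        return f"summary of [{arg}]"
    return None

def agent_loop(goal, max_steps=10):
    memory = []                            # short-term memory of prior results
    for _ in range(max_steps):
        action, arg = stub_model(goal, memory)
        if action == "finish":
            break
        memory.append(run_tool(action, arg))   # feed each result back in
    return memory

print(agent_loop("cheap flights to Lisbon"))
```

Each iteration conditions the next “thought” on everything accumulated so far, which is what lets these agents pursue a goal without a human prompting every step.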




From DSC:
I want to highlight that paper from Stanford, as I’ve seen it cited several times recently:

Generative Agents: Interactive Simulacra of Human Behavior -- a paper from Stanford from April 2023


From DSC:
And for a rather fun idea/application of these emerging technologies, see:

  • Quick Prompt: Kitchen Design — from linusekenstam.substack.com by Linus Ekenstam
    Midjourney Prompt. Create elegant kitchen photos using this starting prompt. Make it your own, experiment, add, remove and tinker to create new ideas.

…which made me wonder how we might use these techs in the development of new learning spaces (or in renovating current learning spaces).


From DSC:
On a much different — but still potential — note, also see:

A.I. could lead to a ‘nuclear-level catastrophe’ according to a third of researchers, a new Stanford report finds — from fortune.com by Tristan Bove

Excerpt:

Many experts in A.I. and computer science say the technology is likely a watershed moment for human society. But 36% don’t mean that as a positive, warning that decisions made by A.I. could lead to “nuclear-level catastrophe,” according to researchers surveyed in an annual report on the technology by Stanford University’s Institute for Human-Centered A.I., published earlier this month.


 
 

The above Tweet links to:

Pause Giant AI Experiments: An Open Letter — from futureoflife.org
We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.



However, the letter has since received heavy backlash, as there appears to be no verification of signatures. Yann LeCun of Meta denied signing the letter and completely disagreed with its premise. (source)


In Sudden Alarm, Tech Doyens Call for a Pause on ChatGPT — from wired.com by Will Knight (behind paywall)
Tech luminaries, renowned scientists, and Elon Musk warn of an “out-of-control race” to develop and deploy ever-more-powerful AI systems.


 


Also relevant/see:

We have moved from Human Teachers and Human Learners, as a dyad, to AI Teachers and AI Learners as a tetrad.


 

Nvidia will bring AI to every industry, says CEO Jensen Huang in GTC keynote: ‘We are at the iPhone moment of AI’ — from venturebeat.com by Sharon Goldman

Excerpt:

As Nvidia’s annual GTC conference gets underway, founder and CEO Jensen Huang, in his characteristic leather jacket and standing in front of a vertical green wall at Nvidia headquarters in Santa Clara, California, delivered a highly anticipated keynote that focused almost entirely on AI. His presentation announced partnerships with Google, Microsoft and Oracle, among others, to bring new AI, simulation and collaboration capabilities to “every industry.”

Introducing Mozilla.ai: Investing in trustworthy AI — from blog.mozilla.org by Mark Surman
We’re committing $30M to build Mozilla.ai: A startup — and a community — building a trustworthy, independent, and open-source AI ecosystem.

Excerpt (emphasis DSC):

We’re only three months into 2023, and it’s already clear what one of the biggest stories of the year is: AI. AI has seized the public’s attention like Netscape did in 1994, and the iPhone did in 2007.

New tools like Stable Diffusion and the just-released GPT-4 are reshaping not just how we think about the internet, but also communication and creativity and society at large. Meanwhile, relatively older AI tools like the recommendation engines that power YouTube, TikTok and other social apps are growing even more powerful — and continuing to influence billions of lives.

This new wave of AI has generated excitement, but also significant apprehension. We aren’t just wondering What’s possible? and How can people benefit? We’re also wondering What could go wrong? and How can we address it? Two decades of social media, smartphones and their consequences have made us leery.    

ChatGPT plugins — from openai.com

Excerpt:

Users have been asking for plugins since we launched ChatGPT (and many developers are experimenting with similar ideas) because they unlock a vast range of possible use cases. We’re starting with a small set of users and are planning to gradually roll out larger-scale access as we learn more (for plugin developers, ChatGPT users, and after an alpha period, API users who would like to integrate plugins into their products). We’re excited to build a community shaping the future of the human–AI interaction paradigm.



Bots like ChatGPT aren’t sentient. Why do we insist on making them seem like they are? — from cbc.ca by Matt Meuse
‘There’s no secret homunculus inside the system that’s understanding what you’re talking about’

Excerpt:

LLMs like ChatGPT are trained on massive troves of text, which they use to assemble responses to questions by analyzing and predicting what words could most plausibly come next based on the context of other words. One way to think of it, as Marcus has memorably described it, is “auto-complete on steroids.”
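Marcus’s “auto-complete on steroids” framing can be made concrete with a toy sketch: count which word follows which in some training text, then predict the most frequent follower. Real LLMs use neural networks over far longer contexts rather than word-pair counts, so this is only an illustration of the prediction framing, not how ChatGPT actually works:

```python
# Toy "auto-complete": predict the next word from counts of word pairs
# seen in a training text. The sample corpus here is made up.
from collections import Counter, defaultdict

def train_bigrams(text):
    words = text.lower().split()
    nxt = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        nxt[a][b] += 1                     # count how often b follows a
    return nxt

def predict_next(nxt, word):
    """Return the word most often seen after `word`, or None if unseen."""
    followers = nxt.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))   # prints "cat" ("cat" follows "the" twice, "mat" once)
```

Nothing in this predictor “understands” cats or mats; it only tracks which strings tend to follow which — which is the point Marcus is making about scale, not kind.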

Marcus says it’s important to understand that even though the results sound human, these systems don’t “understand” the words or the concepts behind them in any meaningful way. But because the results are so convincing, that can be easy to forget.

“We’re doing a kind of anthropomorphization … where we’re attributing some kind of animacy and life and intelligence there that isn’t really,” he said.


10 gifts we unboxed at Canva Create — from canva.com
Earlier this week we dropped 10 unopened gifts onto the Canva homepage of 125 million people across the globe. Today, we unwrapped them on the stage at Canva Create.


Google Bard Plagiarized Our Article, Then Apologized When Caught — from tomshardware.com by Avram Piltch
The chatbot implied that it had conducted its own CPU tests.

 

Planning for AGI and beyond — from openai.com by Sam Altman

Excerpt:

There are several things we think are important to do now to prepare for AGI.

First, as we create successively more powerful systems, we want to deploy them and gain experience with operating them in the real world. We believe this is the best way to carefully steward AGI into existence—a gradual transition to a world with AGI is better than a sudden one. We expect powerful AI to make the rate of progress in the world much faster, and we think it’s better to adjust to this incrementally.

A gradual transition gives people, policymakers, and institutions time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place. It also allows for society and AI to co-evolve, and for people collectively to figure out what they want while the stakes are relatively low.

*AGI stands for Artificial General Intelligence

 

The future of computer programming in prison — from College Inside by Open Campus national reporter Charlotte West
A biweekly newsletter about the future of postsecondary education in prisons.

Excerpt:

Participant Leonard Bishop hadn’t touched technology in the 17 years he served in the federal system prior to transferring to the D.C. Jail in 2018. When he first got a tablet, he said it took him a few days to figure out how to navigate through it, but then “I couldn’t put it down.”

Bishop said he was surprised by how easy it was to learn the skills he needed to earn the AWS certification. “It helps you transition back into society, especially for someone who has been gone so long,” he said.


Also relevant/see:

This AWS Cloud certification program opens new paths for inmates — from amazon.com; with thanks to Paul Fain for this resource
A jail-based program aims to expand career opportunities through cloud-skills training.

Excerpt:

Julian Blair knew nothing about cloud computing when he became incarcerated in a Washington, D.C. jail more than two years ago.

“I’d never done anything with a computer besides video games, typing papers in college, and downloading music on an iPad,” said Blair.

Now, after three months of work with an educational program led by APDS and Amazon Web Services (AWS) inside the jail, Blair and 10 other residents at the facility have successfully passed the AWS Certified Cloud Practitioner exam.


 
© 2024 | Daniel Christian