Canva’s new AI tools automate boring, labor-intensive design tasks — from theverge.com by Jess Weatherbed
Magic Studio features like Magic Switch automatically convert your designs into blogs, social media posts, emails, and more to save time on manually editing documents.


Canva launches Magic Studio, partners with Runway ML for video — from bensbites.beehiiv.com by Ben Tossell

Here are the highlights of launched features under the new Magic Studio:

  • Magic Design – Turn ideas into designs instantly with AI-generated templates.
  • Magic Switch – Transform content into different formats and languages with one click.
  • Magic Grab – Make images editable like Canva templates for easy editing.
  • Magic Expand – Use AI to expand images beyond the original frame.
  • Magic Morph – Transform text and shapes with creative effects and prompts.
  • Magic Edit – Make complex image edits using simple text prompts.
  • Magic Media – Generate professional photos, videos and artworks from text prompts.
  • Magic Animate – Add animated transitions and motion to designs instantly.
  • Magic Write – Generate draft text and summaries powered by AI.



Adobe Firefly

Meet Adobe Firefly -- Adobe is going hard with the use of AI. This is a key product along those lines.


Addendums on 10/11/23:


Adobe Releases New AI Models Aimed at Improved Graphic Design — from bloomberg.com
New version of Firefly is bigger than initial tool, Adobe says; Illustrator and Express programs each get their own generative tools


 


From DSC:
Which reminds me of some graphics:

The pace has changed -- don't come onto the track in a Model T

 

101 creative ideas to use AI in education, A crowdsourced collection — from zenodo.org by Chrissi Nerantzi, Sandra Abegglen, Marianna Karatsiori, & Antonio Martínez-Arboleda (Eds.); with thanks to George Veletsianos for this resource


As an example, here’s one of the ideas from the crowdsourced collection:

Chat with anyone in the past

Chatting with Napoleon Bonaparte

 


On a somewhat related note, also see:

Merlyn Mind launches education-focused LLMs for classroom integration of generative AI — from venturebeat.com by Victor Dey

Excerpt:

Merlyn Mind, an AI-powered digital assistant platform, announced the launch of a suite of large language models (LLMs) specifically tailored for the education sector under an open-source license.

Designing courses in an age of AI — from teachinginhighered.com by Maria Andersen
Maria Andersen shares about designing courses in an age of artificial intelligence (AI) on episode 469 of the Teaching in Higher Ed podcast.

With generative AI, we have an incredible acceleration of change happening.

Maria Andersen

 
 

VR system to be used to prepare crime victims for court — from inavateonthenet.net

Excerpt:

An innovative VR system is being used to help victims of crime prepare for giving evidence in court, allowing victims to engage with key members of the judicial process virtually.

The system, designed by Immersonal, is to be rolled out across 52 Scottish courts over the next year, with the technology also being piloted in The Hague as part of the International Criminal Court. The aim is to allay the fears and discomfort of victims and witnesses who may be unfamiliar with the court process.



Here’s another interesting item for you…one that also may eventually be XR-related:

 

Brainyacts #57: Education Tech — from thebrainyacts.beehiiv.com by Josh Kubicki

Excerpts:

Let’s look at some ideas for how law schools could use AI tools like Khanmigo or ChatGPT to support lectures, assignments, and discussions, or use plagiarism-detection software to maintain academic integrity.

  1. Personalized learning
  2. Virtual tutors and coaches
  3. Interactive simulations
  4. Enhanced course materials
  5. Collaborative learning
  6. Automated assessment and feedback
  7. Continuous improvement
  8. Accessibility and inclusivity

AI Will Democratize Learning — from td.org by Julia Stiglitz and Sourabh Bajaj

Excerpts:

In particular, we’re betting on four trends for AI and L&D.

  1. Rapid content production
  2. Personalized content
  3. Detailed, continuous feedback
  4. Learner-driven exploration

In a world where only 7 percent of the global population has a college degree, and as many as three-quarters of workers don’t feel equipped to learn the digital skills their employers will need in the future, this is the conversation people need to have.

Taken together, these trends will change the cost structure of education and give learning practitioners new superpowers. Learners of all backgrounds will be able to access quality content on any topic and receive the ongoing support they need to master new skills. Even small L&D teams will be able to create programs that have both deep and broad impact across their organizations.

The Next Evolution in Educational Technologies and Assisted Learning Enablement — from educationoneducation.substack.com by Jeannine Proctor

Excerpt:

Generative AI is set to play a pivotal role in the transformation of educational technologies and assisted learning. Its ability to personalize learning experiences, power intelligent tutoring systems, generate engaging content, facilitate collaboration, and assist in assessment and grading will significantly benefit both students and educators.

How Generative AI Will Enable Personalized Learning Experiences — from campustechnology.com by Rhea Kelly

Excerpt:

With today’s advancements in generative AI, that vision of personalized learning may not be far off from reality. We spoke with Dr. Kim Round, associate dean of the Western Governors University School of Education, about the potential of technologies like ChatGPT for learning, the need for AI literacy skills, why learning experience designers have a leg up on AI prompt engineering, and more. And get ready for more Star Trek references, because the parallels between AI and Sci Fi are futile to resist.

The Promise of Personalized Learning Never Delivered. Today’s AI Is Different — from the74million.org by John Bailey; with thanks to GSV for this resource

Excerpts:

There are four reasons why this generation of AI tools is likely to succeed where other technologies have failed:

  1. Smarter capabilities
  2. Reasoning engines
  3. Language is the interface
  4. Unprecedented scale

Latest NVIDIA Graphics Research Advances Generative AI’s Next Frontier — from blogs.nvidia.com by Aaron Lefohn
NVIDIA will present around 20 research papers at SIGGRAPH, the year’s most important computer graphics conference.

Excerpt:

NVIDIA today introduced a wave of cutting-edge AI research that will enable developers and artists to bring their ideas to life — whether still or moving, in 2D or 3D, hyperrealistic or fantastical.

Around 20 NVIDIA Research papers advancing generative AI and neural graphics — including collaborations with over a dozen universities in the U.S., Europe and Israel — are headed to SIGGRAPH 2023, the premier computer graphics conference, taking place Aug. 6-10 in Los Angeles.

The papers include generative AI models that turn text into personalized images; inverse rendering tools that transform still images into 3D objects; neural physics models that use AI to simulate complex 3D elements with stunning realism; and neural rendering models that unlock new capabilities for generating real-time, AI-powered visual details.

 

Also relevant to the item from Nvidia (above), see:

Unreal Engine’s Metahuman Creator — with thanks to Mr. Steven Chevalia for this resource

Excerpt:

MetaHuman is a complete framework that gives any creator the power to use highly realistic human characters in any way imaginable.

It includes MetaHuman Creator, a free cloud-based app that enables you to create fully rigged photorealistic digital humans in minutes.

From Unreal Engine -- Dozens of ready-made MetaHumans are at your fingertips.

 

From DSC:
After seeing this…

…I wondered:

  • Could GPT-4 create the “Choir Practice” app mentioned below?
    (Choir Practice was an idea for an app for people who want to rehearse their parts at home)
  • Could GPT-4 be used to extract audio/parts from a musical score and post the parts separately for people to download/practice their individual parts?

This line of thought reminded me of this posting that I did back on 10/27/2010 entitled, “For those institutions (or individuals) who might want to make a few million.”

Choir Practice -- an app for people who want to rehearse at home

And I want to say that when I went back to look at this posting, I was a bit ashamed of myself. I’d like to apologize for the times when I’ve been too excited about something and exaggerated/hyped an idea up on this Learning Ecosystems blog. For example, I used the words millions of dollars in the title…and that probably wouldn’t be the case these days. (But with inflation being what it is, heh…who knows!? Maybe I shouldn’t be too hard on myself.) I just had choirs in mind when I posted the idea…and there aren’t as many choirs around these days.  🙂

 

Explore Breakthroughs in AI, Accelerated Computing, and Beyond at NVIDIA's GTC -- keynote was held on March 21, 2023

Explore Breakthroughs in AI, Accelerated Computing, and Beyond at GTC — from nvidia.com
The Conference for the Era of AI and the Metaverse

 


Addendums on 3/22/23:

Generative AI for Enterprises — from nvidia.com
Custom-built for a new era of innovation and automation.

Excerpt:

Impacting virtually every industry, generative AI unlocks a new frontier of opportunities—for knowledge and creative workers—to solve today’s most important challenges. NVIDIA is powering generative AI through an impressive suite of cloud services, pre-trained foundation models, as well as cutting-edge frameworks, optimized inference engines, and APIs to bring intelligence to your enterprise applications.

NVIDIA AI Foundations is a set of cloud services that advance enterprise-level generative AI and enable customization across use cases in areas such as text (NVIDIA NeMo™), visual content (NVIDIA Picasso), and biology (NVIDIA BioNeMo™). Unleash the full potential with NeMo, Picasso, and BioNeMo cloud services, powered by NVIDIA DGX™ Cloud—the AI supercomputer.

 

Mixed media online project serves as inspiration for student journalists — from jeadigitalmedia.org by Michelle Balmeo

Excerpt:

If you’re on the hunt for inspiration, go check out Facing Life: Eight stories of life after life in California’s prisons.

This project, created by Pendarvis Harshaw and Brandon Tauszik, has so many wonderful and original storytelling components, it’s the perfect model for student journalists looking for ways to tell important stories online.

Facing Life -- eight stories of life after life in California's prisons

 

Best Document Cameras for Teachers — from techlearning.com by Luke Edwards
Get the best document camera for teachers to make the classroom more digitally immersive

Along the lines of edtech, also see:

Tech & Learning Names Winners of the Best of 2022 Awards — from techlearning.com by TL Editors
This annual award recognizes the very best in EdTech from 2022

The Tech & Learning Awards of Excellence: Best of 2022 celebrate educational technology from the last 12 months that has excelled in supporting teachers, students, and education professionals in the classroom, for professional development, or general management of education resources and learning. Nominated products are divided into three categories: Primary, Secondary, or Higher Education.

 

94% of Consumers are Satisfied with Virtual Primary Care — from hitconsultant.net

Excerpt from What You Should Know (emphasis DSC):

  • For people who have used virtual primary care, the vast majority of them (94%) are satisfied with their experience, and nearly four in five (79%) say it has allowed them to take charge of their health. The study included findings around familiarity and experience with virtual primary care, virtual primary care and chronic conditions, current health and practices, and more.
  • As digital health technology continues to advance and the healthcare industry evolves, many Americans want the ability to use more digital methods to manage their health, according to a study recently released by Elevance Health — formerly Anthem, Inc. Elevance Health commissioned an online study of over 5,000 US adults age 18+ around virtual primary care.
 

The talent needed to adopt mobile AR in industry — from chieflearningofficer.com by Yao Huang Ph.D.

Excerpt:

Therefore, when adopting mobile AR to improve job performance, L&D professionals need to shift their mindset from offering training with AR alone to offering performance support with AR in the middle of the workflow.

The learning director from a supply chain industry pointed out that “70 percent of the information needed to build performance support systems already exists. The problem is it is all over the place and is available on different systems.”

It is the learning and development professional’s job to design a solution with the capability of the technology and present it in a way that most benefits the end users.

All participants revealed that mobile AR adoption in L&D is still new, but growing rapidly. L&D professionals face many opportunities and challenges. Understanding the benefits, challenges and opportunities of mobile AR used in the workplace is imperative.

A brief insert from DSC:
Augmented Reality (AR) is about to hit the mainstream in the next 1-3 years. It will connect the physical world with the digital world in powerful, helpful ways (and likely in negative ways as well). I think it will be far bigger and more commonly used than Virtual Reality (VR). (By the way, I’m also including Mixed Reality (MR) within the greater AR domain.) With Artificial Intelligence (AI) making strides in object recognition, AR could be huge.

Learning & Development groups should ask for funding soon — or develop proposals for future funding as the new hardware and software products mature — in order to upskill at least some members of their groups in the near future.

As with Teaching & Learning Centers in higher education, L&D groups need to practice what they preach — and be sure to train their own people as well.

 

From DSC:
I was watching a sermon the other day, and I’m always amazed when the pastor doesn’t need to read their notes (or hardly ever refers to them). And they can still do this in a much longer sermon too. Not me, man.

It got me wondering about the idea of having a teleprompter on our future Augmented Reality (AR) glasses and/or on our Virtual Reality (VR) headsets.  Or perhaps such functionality will be provided on our mobile devices as well (i.e., our smartphones, tablets, laptops, other) via cloud-based applications.

You could see your presentation, your sermon, the main points for a meeting, the charges being brought against a defendant, etc., and the system would know to scroll down as you spoke the words (via Natural Language Processing (NLP)). If you went off script, the system would stop scrolling, and you might need to scroll down manually or simply pick up where you left off.
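A minimal sketch of that script-following idea, assuming a speech-to-text engine is already supplying recognized words one at a time (all class and function names here are made up for illustration):

```python
# Minimal sketch of a script-following teleprompter (hypothetical names).
# Assumes a speech-to-text engine feeds us recognized words one at a time.

def tokenize(text):
    """Lowercase the script and strip punctuation so spoken words can match."""
    return [w.strip(".,;:!?\"'()").lower() for w in text.split()]

class ScriptFollower:
    def __init__(self, script, window=5):
        self.words = tokenize(script)
        self.pos = 0          # index of the next expected script word
        self.window = window  # how far ahead to look for a match

    def hear(self, spoken_word):
        """Advance the cursor if the spoken word appears just ahead.

        Returns the cursor position, which a UI could map to a scroll
        offset. If the speaker goes off script, the cursor simply stays
        put until the script resumes.
        """
        target = spoken_word.strip(".,;:!?\"'()").lower()
        for offset in range(self.window):
            i = self.pos + offset
            if i < len(self.words) and self.words[i] == target:
                self.pos = i + 1
                break
        return self.pos

follower = ScriptFollower("Today we will talk about three main points.")
for word in ["Today", "we", "will", "um", "talk"]:
    follower.hear(word)
print(follower.pos)  # cursor sits just past "talk"; the filler "um" was ignored
```

A real system would need fuzzier matching (mispronunciations, partial words) and would translate the cursor position into a smooth scroll, but the core loop is just this alignment between the spoken words and the script.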

For that matter, I suppose a faculty member could turn an AI-based content feed on and off to show where a topic is in the textbook. Or a CEO or University President could get prompted to refer to a particular section of the Strategic Plan. Hmmm…I don’t know…it might be too much cognitive load/overload…I’d have to try it out.

And/or perhaps this is a feature in our future videoconferencing applications.

But I just wanted to throw these ideas out there in case someone wanted to run with one or more of them.

Along these lines, see:


Is a teleprompter a feature in our future Augmented Reality (AR) glasses?


 

Making a Digital Window Wall from TVs — from theawesomer.com

Drew Builds Stuff has an office in the basement of his parents’ house. Because of its subterranean location, it doesn’t get much light. To brighten things up, he built a window wall out of three 75-inch 4K TVs, resulting in a 12-foot diagonal image. Since he can load up any video footage, he can pretend to be anywhere on Earth.

From DSC:
Perhaps some ideas here for learning spaces!

 

What might the ramifications be for text-to-everything? [Christian]

From DSC:

  • We can now type in text to get graphics and artwork.
  • We can now type in text to get videos.
  • There are several tools to give us transcripts of what was said during a presentation.
  • We can search videos for spoken words and/or for words shown on slides within a presentation.
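As a toy illustration of that last point, searching a video for spoken words can be as simple as matching a query against time-stamped transcript segments. The transcript structure below is a made-up example, not the output of any particular tool:

```python
# Toy illustration: searching a video transcript for spoken words.
# Each entry pairs a start time (in seconds) with the recognized speech.

transcript = [
    (0.0,  "welcome everyone to today's presentation"),
    (6.5,  "first let's talk about generative AI"),
    (14.2, "generative AI can turn text into images"),
    (22.8, "and even into short videos"),
]

def find_mentions(segments, query):
    """Return the timestamps of segments containing the query word."""
    q = query.lower()
    return [t for t, text in segments if q in text.lower()]

print(find_mentions(transcript, "generative"))  # [6.5, 14.2]
```

Production tools index the transcript (and any OCR’d slide text) for fast lookup across hours of video, but conceptually it is the same keyword-to-timestamp mapping.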

Allie Miller’s posting on LinkedIn (see below) pointed these things out as well — along with several other things.



This raises some ideas/questions for me:

  • What might the ramifications be in our learning ecosystems for these types of functionalities? What affordances are forthcoming? For example, a teacher, professor, or trainer could quickly produce several types of media from the same presentation.
  • What’s said in a videoconference or a webinar can already be captured, translated, and transcribed.
  • Or what’s said in a virtual courtroom, or in a telehealth-based appointment. Or perhaps, what we currently think of as a smart/connected TV will give us these functionalities as well.
  • How might this type of thing impact storytelling?
  • Will this help someone who prefers to soak in information via the spoken word, or via a podcast, or via a video?
  • What does this mean for Augmented Reality (AR), Mixed Reality (MR), and/or Virtual Reality (VR) types of devices?
  • Will this kind of thing be standard in the next version of the Internet (Web3)?
  • Will this help people with special needs — and way beyond accessibility-related needs?
  • Will data be next (instead of typing in text)?

Hmmm…interesting times ahead.

 
© 2022 | Daniel Christian