Introducing Copilot+ PCs — from blogs.microsoft.com

[On May 20th], at a special event on our new Microsoft campus, we introduced the world to a new category of Windows PCs designed for AI, Copilot+ PCs.

Copilot+ PCs are the fastest, most intelligent Windows PCs ever built. With powerful new silicon capable of an incredible 40+ TOPS (trillion operations per second), all-day battery life and access to the most advanced AI models, Copilot+ PCs will enable you to do things you can’t on any other PC. Easily find and remember what you have seen in your PC with Recall, generate and refine AI images in near real-time directly on the device using Cocreator, and bridge language barriers with Live Captions, translating audio from 40+ languages into English.

From DSC:
At first glance, Recall looks fraught with potential security and privacy issues. But what do I know? The Neuron states “Microsoft assures that everything Recall sees remains private.” Ok…


From The Rundown AI concerning the above announcements:

The details:

  • A new system enables Copilot+ PCs to run AI workloads up to 20x faster and 100x more efficiently than traditional PCs.
  • Windows 11 has been rearchitected specifically for AI, integrating the Copilot assistant directly into the OS.
  • New AI experiences include a new feature called Recall, which allows users to search for anything they’ve seen on their screen with natural language.
  • Copilot’s new screen-sharing feature allows AI to watch, hear, and understand what a user is doing on their computer and answer questions in real-time.
  • Copilot+ PCs will start at $999, and ship with OpenAI’s latest GPT-4o models.

Why it matters: Tony Stark’s all-powerful JARVIS AI assistant is getting closer to reality every day. Once Copilot, ChatGPT, Project Astra, or anyone else can not only respond but start executing tasks autonomously, things will start getting really exciting — and likely initiate a whole new era of tech work.


 

Hello GPT-4o — from openai.com
We’re announcing GPT-4o, our new flagship model that can reason across audio, vision, and text in real time.

GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, image, and video and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models.

Example topics covered here:

  • Two GPT-4os interacting and singing
  • Languages/translation
  • Personalized math tutor
  • Meeting AI
  • Harmonizing and creating music
  • Providing inflection, emotions, and a human-like voice
  • Understanding what the camera is looking at and integrating it into the AI’s responses
  • Providing customer service

With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. Because GPT-4o is our first model combining all of these modalities, we are still just scratching the surface of exploring what the model can do and its limitations.
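Since GPT-4o is available in the API, a request can mix content types in a single message. Below is a minimal sketch of what a text-plus-image request can look like with the OpenAI Python SDK; the image URL is a placeholder, and because sending the request requires an API key, the call itself is shown but not executed:

```python
def build_multimodal_message(prompt: str, image_url: str) -> list:
    """Build a chat message that mixes text and image content parts,
    following the OpenAI chat-completions content-part format."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ]

messages = build_multimodal_message(
    "What is shown in this picture?",
    "https://example.com/photo.jpg",  # placeholder URL
)

# Sending the request (requires OPENAI_API_KEY in the environment):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(model="gpt-4o", messages=messages)
# print(resp.choices[0].message.content)
```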





From DSC:
I like the assistive tech angle here:





 

 

Guiding Students in Special Education to Generate Ideas for Writing — from edutopia.org by Erin Houghton
When students are stuck, breaking the brainstorming stage down into separate steps can help them get started writing.

Students who first generate ideas about a topic—access what they know about it—more easily write their outlines and drafts for the bigger-picture assignment. For Sally, brainstorming was too overwhelming as an initial step, so we started off by naming examples. I gave Sally a topic—name ways characters in Charlotte’s Web helped one another—she named examples of things (characters), and we generated a list of ways those characters helped one another.

IMPLEMENTING BRAINSTORMING AS SKILL BUILDING
This “naming” strategy is easy to implement with individual students or in groups. These are steps to get you started.

Step 1. Introduce the student to the exercise.
Step 2. Select a topic for practice.


[Opinion] It’s okay to play: How ‘play theory’ can revitalize U.S. education — from hechingerreport.org by Tyler Samstag
City planners are recognizing that play and learning are intertwined and turning public spaces into opportunities for active learning

When we’re young, playing and learning are inseparable.

Simple games like peekaboo and hide-and-seek help us learn crucial lessons about time, anticipation and cause and effect. We discover words, numbers, colors and sounds through toys, puzzles, storybooks and cartoons. Everywhere we turn, there’s something fun to do and something new to learn.

Then, somewhere around early elementary school, learning and play officially become separated for life.

Suddenly, learning becomes a task that only takes place in proper classrooms with the help of textbooks, homework and tests. Meanwhile, play becomes a distraction that we’re only allowed to indulge in during our free time, often by earning it as a reward for studying. As a result, students tend to grow up feeling as if learning is a stressful chore while playing is a reward.

Similar interactive learning experiences are popping up in urban areas from California to the East Coast, with equally promising results: art, games and music are being incorporated into green spaces, public parks, transportation stations, laundromats and more.


And on a somewhat related note, also see:


Though meant for higher ed, this is also applicable to the area of pedagogy within K12:

Space to fail. And learn — from educationalist.substack.com by Alexandra Mihai
I want to use today’s newsletter to talk about how we can help students to own their mistakes and really learn from them, so I’m sharing some thoughts, some learning design ideas and some resources…

10 ideas to make failure a learning opportunity

  • Start with yourself
  • Admit when you don’t know something
  • Try to come up with “goal free problems”
  • Always dig deeper
  • Encourage practice
 


From DSC:
I also wanted to highlight the item below, which Barsee also mentioned above, as it will likely hit the world of education and training as well:



Also relevant/see:


 
The GPT-4 Browser That Will Change Your Search Game — from noise.beehiiv.com by Alex Banks
Why Microsoft Has The ‘Edge’ On Google

Excerpts:

Microsoft has launched a GPT-4 enhanced Edge browser.

By integrating OpenAI’s GPT-4 technology with Microsoft Edge, you can now use ChatGPT as a copilot in your Bing browser. This delivers superior search results, generates content, and can even transform your copywriting skills (read on to find out how).

Benefits mentioned include: Better Search, Complete Answers, and Creative Spark.

The new interactive chat feature means you can get the complete answer you are looking for by refining your search by asking for more details, clarity, and ideas.

From DSC:
I have to say that since the late ’90s, I haven’t been a big fan of Microsoft’s web browsers. (I don’t like how Microsoft unfairly buried Netscape Navigator and the folks who had out-innovated them during that time.) As such, I don’t use Edge, so I can’t fully comment on the above article.

But I do have to say that this is the type of thing that may make me reevaluate my stance regarding Microsoft’s browsers. Integrating GPT-4 into their search/chat functionalities seems like it would be a very solid, strategic move — at least as of late April 2023.


Speaking of new items coming from Microsoft, also see:

Microsoft makes its AI-powered Designer tool available in preview — from techcrunch.com by Kyle Wiggers

Excerpts:

[On 4/27/23], Microsoft Designer, Microsoft’s AI-powered design tool, launched in public preview with an expanded set of features.

Announced in October, Designer is a Canva-like web app that can generate designs for presentations, posters, digital postcards, invitations, graphics and more to share on social media and other channels. It leverages user-created content and DALL-E 2, OpenAI’s text-to-image AI, to ideate designs, with drop-downs and text boxes for further customization and personalization.

Designer will remain free during the preview period, Microsoft says — it’s available via the Designer website and in Microsoft’s Edge browser through the sidebar. Once the Designer app is generally available, it’ll be included in Microsoft 365 Personal and Family subscriptions and have “some” functionality free to use for non-subscribers, though Microsoft didn’t elaborate.

 

How Easy Is It/Will It Be to Use AI to Design a Course? — from wallyboston.com by Wally Boston

Excerpt:

Last week I received a text message from a friend to check out a March 29th Campus Technology article about French AI startup Nolej. Nolej (pronounced “Knowledge”) has developed an OpenAI-based instructional content generator for educators called NolejAI.

Access to NolejAI is through a browser. Users can upload video, audio, text documents, or a website URL. NolejAI will generate an interactive micro-learning package: a standalone digital lesson including a content transcript, summaries, a glossary of terms, flashcards, and quizzes. All of the lesson materials generated are based upon the uploaded materials.


From DSC:
I wonder if this will turn out to be the case:

I am sure it’s only a matter of time before NolejAI or another product becomes capable of generating a standard three-credit-hour college course. Whether that is six months or two years, it’s likely sooner than we think.


Also relevant/see:

The Ultimate 100 AI Tools (as of 4-12-23)


 

Meet MathGPT: a Chatbot Tutor Built Specific to a Math Textbook — from thejournal.com by Kristal Kuykendall

Excerpt:

Micro-tutoring platform PhotoStudy has unveiled a new chatbot built on OpenAI’s ChatGPT APIs that can teach a complete elementary algebra textbook with “extremely high accuracy,” the company said.

“Textbook publishers and teachers can now transform their textbooks and teaching with a ChatGPT-like assistant that can teach all the material in a textbook, assess student progress, provide personalized help in weaker areas, generate quizzes with support for text, images, audio, and ultimately a student customized avatar for video interaction,” PhotoStudy said in its news release.

Some sample questions the MathGPT tool can answer:

    • “I don’t know how to solve a linear equation…”
    • “I have no idea what’s going on in class but we are doing Chapter 2. Can we start at the top?”
    • “Can you help me understand how to solve this mixture of coins problem?”
    • “I need to practice for my midterm tomorrow, through Chapter 6. Help.”
 

Job Titles: It’s Not Only Instructional Design — from idolcourses.com by Ivett Csordas

Excerpt:

When I first came across the title “Instructional Designer” while looking for alternative career options, I was just as confused as anybody would be hearing about our job for the first time. I remember asking questions like: What does an Instructional Designer do? Why is it called Instructional Design? Wouldn’t a title such as Learning Experience Designer or Training Content Developer suit them better? How are their skill sets different from curriculum developers like teachers’? etc.

Then, the more I learnt about the different roles of Instructional Designers, and the more job interviews I had, ironically, the less clarity I had over the companies’ expectations of us.

The truth is that the role of an Instructional Designer varies from company to company. What a person hired with the title “Instructional Designer” ends up doing depends on a range of factors such as the company’s training portfolio, the profile of their learners, the size of the L&D team, the way they use technology, just to mention a few.

From DSC:
I don’t know a thing about idolcourses.com, but I really appreciated running across this posting by Ivett Csordas about the various job titles out there and the differences between some of these job titles. The posting deals with job titles associated with developers, designers, LXD, LMS roles, managers, L&D Coordinators, specialists, consultants, and strategists.

 

 

behance.net/live/   <— Check out our revamped schedule!

Join us in the morning for Adobe Express streams — if you are an aspiring creative, a small business owner, or someone looking to kickstart a side hustle, these live streams are for you!

Then level up your skills with Creative Challenges, Bootcamps, and Pro-Tips. Get inspired by artists from all over the world during our live learning events. Tune in to connect directly with your instructors and other creatives just like you.

In the afternoon, join creatives in their own Community Streams! Laugh and create alongside other Adobe Live Community members on Behance, YouTube, and Twitch!

For weekly updates on the Adobe Live schedule + insight into upcoming guests and content, join our Discord communities!

Watch Adobe Live Now!

 

From DSC:
Check this confluence of emerging technologies out!

Also see:

How to spot AI-generated text — from technologyreview.com by Melissa Heikkilä
The internet is increasingly awash with text written by AI software. We need new tools to detect it.

Excerpt:

This sentence was written by an AI—or was it? OpenAI’s new chatbot, ChatGPT, presents us with a problem: How will we know whether what we read online is written by a human or a machine?

“If you have enough text, a really easy cue is the word ‘the’ occurs too many times,” says Daphne Ippolito, a senior research scientist at Google Brain, the company’s research unit for deep learning.

“A typo in the text is actually a really good indicator that it was human-written,” she adds.
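Ippolito’s “the” cue is simple enough to sketch in a few lines of Python. This is only a toy illustration of the idea, and the 8% threshold below is an assumed, illustrative cutoff rather than a published figure; real detectors combine many such weak signals:

```python
import re

def the_frequency(text: str) -> float:
    """Return the share of words in `text` that are the word 'the'."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return words.count("the") / len(words)

def looks_suspicious(text: str, threshold: float = 0.08) -> bool:
    """Flag text whose 'the' share exceeds an (illustrative) threshold.

    'The' makes up roughly 5% of typical English prose, so an unusually
    high share is one weak signal among many -- never proof on its own.
    """
    return the_frequency(text) > threshold

sample = "The cat sat on the mat near the door by the window."
print(round(the_frequency(sample), 2))  # 4 of 12 words -> 0.33
```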

7 Best Tech Developments of 2022 — from thetechranch.com

Excerpt:

As we near the end of 2022, it’s a great time to look back at some of the top technologies that have emerged this year. From AI and virtual reality to renewable energy and biotechnology, there have been a number of exciting developments that have the potential to shape the future in a big way. Here are some of the top technologies that have emerged in 2022:

 

Virtual or in-person: The next generation of trial lawyers must be prepared for anything — from reuters.com by Stratton Horres and Karen L. Bashor

A view of the jury box (front), where jurors would sit facing the judge's chair (C), the witness stand (R), and the stenographer's desk (L) in courtroom 422 of the New York Supreme Court

Excerpt:

In this article, we will examine several key ways in which COVID-19 has changed trial proceedings, strategy and preparation and how mentoring programs can make a difference.

COVID-19 has shaken up the jury trial experience for both new and experienced attorneys. For those whose only trials have been conducted during COVID-19 restrictions and for everyone easing back into the in-person trials, these are key elements to keep in mind practicing forward. Firm mentoring programs should be considered to prepare the future generation of trial lawyers for both live and virtual trials.

From DSC:
I think law firms will need to expand the number of disciplines coming to their strategic tables. That is, as more disciplines are required to successfully practice law in the 21st century, more folks with technical backgrounds and/or abilities will be needed. Web front-end and back-end developers, User Experience Designers, Instructional Designers, Audio/Visual Specialists, and others come to mind. Such people can help develop the necessary spaces, skills, training, and mentoring programs mentioned in this article. As within our learning ecosystems, the efficient and powerful use of teams of specialists will deliver the best products and services.

 

Using Virtual Reality for Career Training — from techlearning.com by Erik Ofgang
The Boys & Girls Clubs of Indiana have had success using virtual reality simulations to teach students about career opportunities.

A woman wearing a virtual reality headset occupies one half of the screen. The other half shows virtual tools that she is controlling.

Excerpts:

Virtual reality can help boost CTE programs and teach students about potential careers in fields they may know nothing about, says Lana Taylor from the Indiana Alliance of Boys & Girls Clubs of America.

One of those other resources has been a partnership with Transfer VR to provide students access to headsets to participate in career simulations that can give them a tactile sense of what working in certain careers might be like.

“Not all kids are meant to go to college, not all kids want to do it,” Taylor says. “So it’s important to give them some exposure to different careers and workforce paths that maybe they hadn’t thought of before.” 


AI interviews in VR prepare students for real jobseeking — from inavateonthenet.net

 

How Long Should a Branching Scenario Be? — from christytuckerlearning.com by Christy Tucker
How long should a branching scenario be? Is 45 minutes too long? Is there an ideal length for a branching scenario?

Excerpt:

Most of the time, the branching scenarios and simulations I build are around 10 minutes long. Overall, I usually end up at 5-15 minutes for branching scenarios, with interactive video scenarios being at the longer end.

From DSC:
This makes sense to me, as (up to) 6 minutes turned out to be an ideal length for videos.

Excerpt from Optimal Video Length for Student Engagement — from blog.edx.org

The optimal video length is 6 minutes or shorter — students watched most of the way through these short videos. In fact, the average engagement time of any video maxes out at 6 minutes, regardless of its length. And engagement times decrease as videos lengthen: For instance, on average students spent around 3 minutes on videos that are longer than 12 minutes, which means that they engaged with less than a quarter of the content. Finally, certificate-earning students engaged more with videos, presumably because they had greater motivation to learn the material. (These findings appeared in a recent Wall Street Journal article, An Early Report Card on Massive Open Online Courses and its accompanying infographic.)

The take-home message for instructors is that, to maximize student engagement, they should work with instructional designers and video producers to break up their lectures into small, bite-sized pieces.

 

Why text-to-speech tools might have a place in your classroom with Dr. Kirsten Kohlmeyer – Easy EdTech Podcast 183 — from classtechtips.com by Monica Burns

Excerpt:

In this episode, Assistive Technology Director, Dr. Kirsten Kohlmeyer, joins to discuss the power of accessibility and text-to-speech tools in classroom environments. You’ll also hear plenty of digital resources to check out for text-to-speech options, audiobooks, and more!

Assistive tools can provide:

  • Text-to-speech
  • Definitions/vocabularies
  • Ability to adjust the Lexile level of a reading
  • Capability to declutter a website
  • More chances to read to learn something new
  • and more

Speaking of tools, also see:

 
© 2024 | Daniel Christian