Enter the New Era of Mobile AI With Samsung Galaxy S24 Series — from news.samsung.com

Galaxy AI introduces meaningful intelligence aimed at enhancing every part of life, especially the phone’s most fundamental role: communication. When you need to defy language barriers, Galaxy S24 makes it easier than ever. Chat with another student or colleague from abroad. Book a reservation while on vacation in another country. It’s all possible with Live Translate: two-way, real-time voice and text translations of phone calls within the native app. No third-party apps are required, and on-device AI keeps conversations completely private.

With Interpreter, live conversations can be instantly translated on a split-screen view so people standing opposite each other can read a text transcription of what the other person has said. It even works without cellular data or Wi-Fi.


Galaxy S24 — from theneurondaily.com by Noah Edelman & Pete Huang

Samsung just announced the first truly AI-powered smartphone: the Galaxy S24.


For us AI power users, the features aren’t exactly new, but it’s the first time we’ve seen them packaged up into a smartphone (Siri doesn’t count, sorry).


Samsung’s Galaxy S24 line arrives with camera improvements and generative AI tricks — from techcrunch.com by Brian Heater
Starting at $800, the new flagships offer brighter screens and a slew of new photo-editing tools

 

Mark Zuckerberg: First Interview in the Metaverse | Lex Fridman Podcast #398


Photo-realistic avatars show future of Metaverse communication — from inavateonthenet.net

Mark Zuckerberg, CEO, Meta, took part in the first-ever Metaverse interview using photo-realistic virtual avatars, demonstrating the Metaverse’s capability for virtual communication.

Zuckerberg appeared on the Lex Fridman podcast, where scans of both Fridman and Zuckerberg were used to create realistic avatars instead of a live video feed. A computer model of each person’s face and body is fed into a codec, and the headset sends an encoded version of the avatar rather than raw video.

The interview explored the future of AI in the metaverse, as well as the Quest 3 headset and the future of humanity.
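To make the “encoded version of the avatar” idea concrete, here is a toy sketch: instead of streaming video frames, the sender transmits a small set of tracked face/body parameters, and the receiver renders its own photorealistic model from them. This illustrates only the general encode-and-transmit pattern — it is not Meta’s actual Codec Avatars system, and every name and number in it is an assumption made for the example.

```python
import struct
from dataclasses import dataclass
from typing import List

@dataclass
class AvatarState:
    """A compact description of one tracked moment: expression weights plus head pose."""
    expression: List[float]   # e.g., blendshape-style weights estimated by the headset
    head_pose: List[float]    # position (x, y, z) + rotation (roll, pitch, yaw)

def encode(state: AvatarState) -> bytes:
    """Pack the parameters into a tiny payload; this stands in for the 'codec' step."""
    values = state.expression + state.head_pose
    return struct.pack(f"{len(values)}f", *values)

def decode(payload: bytes, n_expression: int) -> AvatarState:
    """Unpack on the receiving side, where a pre-built photorealistic model of the
    person would be rendered from these parameters."""
    values = list(struct.unpack(f"{len(payload) // 4}f", payload))
    return AvatarState(expression=values[:n_expression], head_pose=values[n_expression:])

# One "frame" of tracked data: a few dozen bytes instead of a full video frame.
frame = AvatarState(expression=[0.1, 0.7, 0.0, 0.3],
                    head_pose=[0.0, 1.6, 0.0, 0.0, 0.2, 0.0])
payload = encode(frame)
print(len(payload), "bytes on the wire")
print(decode(payload, n_expression=4))
```

The point of the pattern is bandwidth: what travels over the network is a compact description of the person, not pixels.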


 

Google’s AI-powered note-taking app is the messy beginning of something great — from theverge.com by David Pierce; via AI Insider
NotebookLM is a neat research tool with some big ideas. It’s still rough and new, but it feels like Google is onto something.

Excerpts (emphasis DSC):

What if you could have a conversation with your notes? That question has consumed a corner of the internet recently, as companies like Dropbox, Box, Notion, and others have built generative AI tools that let you interact with and create new things from the data you already have in their systems.

Google’s version of this is called NotebookLM. It’s an AI-powered research tool that is meant to help you organize and interact with your own notes. 

Right now, it’s really just a prototype, but a small team inside the company has been trying to figure out what an AI notebook might look like.
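For readers wondering what “having a conversation with your notes” looks like under the hood, a common pattern is to retrieve the passages most relevant to a question and hand only those to a language model. The sketch below is a deliberately minimal version of that retrieval step (simple word overlap, made-up notes); NotebookLM’s actual implementation is not public, and a real tool would use embeddings and an LLM call where the final print statement sits.

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercased word counts; a stand-in for real embeddings."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def top_passages(question, notes, k=2):
    """Rank note passages by simple word overlap with the question."""
    q = tokenize(question)
    scored = sorted(notes, key=lambda p: sum((tokenize(p) & q).values()), reverse=True)
    return scored[:k]

notes = [
    "Meeting 3/14: decided to pilot the new onboarding flow with the EU team first.",
    "Research idea: compare retrieval quality of keyword search vs. embeddings.",
    "Budget note: hardware refresh pushed to Q3.",
]
question = "What did we decide about the onboarding pilot?"
context = top_passages(question, notes)

# In a real tool, `context` plus the question would go to a language model, which
# answers while grounded in (and able to cite) the user's own notes.
prompt = "Answer using only these notes:\n- " + "\n- ".join(context) + f"\n\nQuestion: {question}"
print(prompt)
```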

 
 

Apple’s $3,499 Vision Pro AR headset is finally here — from techcrunch.com by Brian Heater

Image of the Vision Pro AR headset from Apple (Image Credits: Apple)

Excerpts:

“With Vision Pro, you’re no longer limited by a display,” Apple CEO Tim Cook said, introducing the new headset at WWDC 2023. Contrary to earlier mixed reality reports, the system is far more focused on augmented reality than virtual reality. The company refers to this new paradigm as “spatial computing.”


Reflections from Scott Belsky re: the Vision Pro — from implications.com


Apple WWDC 2023: Everything announced from the Apple Vision Pro to iOS 17, MacBook Air and more — from techcrunch.com by Christine Hall



Apple unveils new tech — from therundown.ai (The Rundown)

Here were the biggest things announced:

  • A 15” MacBook Air, now the thinnest 15” laptop available
  • The new Mac Pro workstation, presumably a billion dollars
  • M2 Ultra, Apple’s new super chip
  • NameDrop, an AirDrop-integrated data-sharing feature allowing users to share contact info just by bringing their phones together
  • Journal, an ML-powered personalized journaling app
  • StandBy, turning your iPhone into a nightstand alarm clock
  • A new, AI-powered update to autocorrect (finally)
  • Apple Vision Pro


Apple announces AR/VR headset called Vision Pro — from joinsuperhuman.ai by Zain Kahn

Excerpt:

“This is the first Apple product you look through and not at.” – Tim Cook

And with those famous words, Apple announced a new era of consumer tech.

Apple’s new headset will operate on visionOS – its new operating system – and will work with existing iOS and iPad apps. The new OS is created specifically for spatial computing — the blending of digital content with real space.

Vision Pro is controlled through hand gestures, eye movements and your voice (parts of it assisted by AI). You can use apps, change their size, capture photos and videos and more.


From DSC:
Time will tell what happens with this new operating system and with this type of platform. I’m impressed with the engineering — as Apple wants me to be — but I doubt that this will become mainstream for quite some time yet. Also, I wonder what Steve Jobs would think of this… Would he think people would be willing to wear this headset (for long? at all?)? What about Jony Ive?

I’m sure the offered experiences will be excellent. But I won’t be buying one, as it’s waaaaaaaaay too expensive.


 
The GPT-4 Browser That Will Change Your Search Game — from noise.beehiiv.com by Alex Banks
Why Microsoft Has The ‘Edge’ On Google

Excerpts:

Microsoft has launched a GPT-4 enhanced Edge browser.

By integrating OpenAI’s GPT-4 technology with Microsoft Edge, you can now use ChatGPT as a copilot in the Edge browser via Bing. This delivers superior search results, generates content, and can even transform your copywriting skills (read on to find out how).

Benefits mentioned include: Better Search, Complete Answers, and Creative Spark.

The new interactive chat feature means you can get the complete answer you are looking for by refining your search by asking for more details, clarity, and ideas.

From DSC:
I have to say that since the late ’90s, I haven’t been a big fan of web browsers from Microsoft. (I don’t like how Microsoft unfairly buried Netscape Navigator and the folks who had out-innovated them during that time.) As such, I don’t use Edge, so I can’t fully comment on the above article.

But I do have to say that this is the type of thing that may make me reevaluate my stance regarding Microsoft’s browsers. Integrating GPT-4 into their search/chat functionalities seems like it would be a very solid, strategic move — at least as of late April 2023.


Speaking of new items coming from Microsoft, also see:

Microsoft makes its AI-powered Designer tool available in preview — from techcrunch.com by Kyle Wiggers

Excerpts:

[On 4/27/23], Microsoft Designer, Microsoft’s AI-powered design tool, launched in public preview with an expanded set of features.

Announced in October, Designer is a Canva-like web app that can generate designs for presentations, posters, digital postcards, invitations, graphics and more to share on social media and other channels. It leverages user-created content and DALL-E 2, OpenAI’s text-to-image AI, to ideate designs, with drop-downs and text boxes for further customization and personalization.

Designer will remain free during the preview period, Microsoft says — it’s available via the Designer website and in Microsoft’s Edge browser through the sidebar. Once the Designer app is generally available, it’ll be included in Microsoft 365 Personal and Family subscriptions and have “some” functionality free to use for non-subscribers, though Microsoft didn’t elaborate.
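Since Designer leans on DALL·E 2 for ideation, here is a minimal sketch of the underlying text-to-image call, written against the 2023-era openai Python SDK (v0.x). It is not Designer’s own pipeline — the prompt, the image size, and the single-candidate choice are illustrative assumptions.

```python
# pip install openai==0.27.*   (the 2023-era SDK; newer versions expose a different client)
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def ideate_design(description):
    """Ask a text-to-image model for one starting design and return its image URL."""
    response = openai.Image.create(
        prompt=description,   # the text a user would type into a design tool
        n=1,                  # one candidate here; a tool like Designer surfaces several
        size="1024x1024",
    )
    return response["data"][0]["url"]

print(ideate_design("Minimal poster for a spring plant sale, pastel colors, bold headline"))
```

A tool like Designer would then layer typography, layout templates, and the user’s own content on top of the generated image, with drop-downs and text boxes driving further edits.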

 

EdTech Is Going Crazy For AI — from joshbersin.com by Josh Bersin

Excerpts:

This week I spent a few days at the ASU/GSV conference and ran into 7,000 educators, entrepreneurs, and corporate training people who had gone CRAZY for AI.

No, I’m not kidding. This community, which is made up of people like training managers, community college leaders, educators, and policymakers, is absolutely freaked out about ChatGPT, Large Language Models, and all sorts of issues with AI. Now don’t get me wrong: I’m a huge fan of this. But the frenzy is unprecedented: this is bigger than the excitement at the launch of the iPhone.

Second, the L&D market is about to get disrupted like never before. I had two interactive sessions with about 200 L&D leaders and I essentially heard the same thing over and over. What is going to happen to our jobs when these Generative AI tools start automatically building content, assessments, teaching guides, rubrics, videos, and simulations in seconds?

The answer is pretty clear: you’re going to get disrupted. I’m not saying that L&D teams need to worry about their careers, but it’s very clear to me they’re going to have to swim upstream in a big hurry. As with all new technologies, it’s time for learning leaders to get to know these tools, understand how they work, and start to experiment with them as fast as they can.


Speaking of the ASU+GSV Summit, see this posting from Michael Moe:

EIEIO…Brave New World
By: Michael Moe, CFA, Brent Peus, Owen Ritz

Excerpt:

Last week, the 14th annual ASU+GSV Summit hosted over 7,000 leaders from 70+ companies as well as over 900 of the world’s most innovative EdTech companies. Below are some of our favorite speeches from this year’s Summit…

***

Also see:

Imagining what’s possible in lifelong learning: Six insights from Stanford scholars at ASU+GSV — from acceleratelearning.stanford.edu by Isabel Sacks

Excerpt:

High-quality tutoring is one of the most effective educational interventions we have – but we need both humans and technology for it to work. In a standing-room-only session, GSE Professor Susanna Loeb, a faculty lead at the Stanford Accelerator for Learning, spoke alongside school district superintendents on the value of high-impact tutoring. The most important factors in effective tutoring, she said, are (1) the tutor has data on specific areas where the student needs support, (2) the tutor has high-quality materials and training, and (3) there is a positive, trusting relationship between the tutor and student. New technologies, including AI, can make the first and second elements much easier – but they will never be able to replace human adults in the relational piece, which is crucial to student engagement and motivation.



A guide to prompting AI (for what it is worth) — from oneusefulthing.org by Ethan Mollick
A little bit of magic, but mostly just practice

Excerpt (emphasis DSC):

Being “good at prompting” is a temporary state of affairs. The current AI systems are already very good at figuring out your intent, and they are getting better. Prompting is not going to be that important for that much longer. In fact, it already isn’t in GPT-4 and Bing. If you want to do something with AI, just ask it to help you do the thing. “I want to write a novel, what do you need to know to help me?” will get you surprisingly far.

The best way to use AI systems is not to craft the perfect prompt, but rather to use it interactively. Try asking for something. Then ask the AI to modify or adjust its output. Work with the AI, rather than trying to issue a single command that does everything you want. The more you experiment, the better off you are. Just use the AI a lot, and it will make a big difference – a lesson my class learned as they worked with the AI to create essays.

From DSC:
Agreed –> “Being ‘good at prompting’ is a temporary state of affairs.” The user interfaces that are appearing (and will keep appearing) will help greatly in this regard.
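Mollick’s advice to “work with the AI” rather than craft one perfect prompt maps onto a simple loop: keep the whole conversation history and send it back with each follow-up, so every request refines the previous answer. A minimal sketch, assuming the 2023-era openai Python SDK and a placeholder model name:

```python
# pip install openai==0.27.*   (the 2023-era SDK; newer versions expose a different client)
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]
history = [{"role": "user",
            "content": "I want to write a novel. What do you need to know to help me?"}]

def ask(follow_up=None, model="gpt-3.5-turbo"):
    """Send the whole conversation so far, so each request refines the previous answer."""
    if follow_up:
        history.append({"role": "user", "content": follow_up})
    reply = openai.ChatCompletion.create(model=model, messages=history)
    content = reply["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": content})
    return content

print(ask())                                          # the model's first pass
print(ask("Make that list shorter and friendlier."))  # adjust, rather than re-prompting from scratch
```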


From DSC:
Bizarre…at least for me in late April of 2023:


Excerpt from Lore Issue #28: Drake, Grimes, and The Future of AI Music — from lore.com

Here’s a summary of what you need to know:

  • The rise of AI-generated music has ignited legal and ethical debates, with record labels invoking copyright law to remove AI-generated songs from platforms like YouTube.
  • Tech companies like Google face a conundrum: should they take down AI-generated content, and if so, on what grounds?
  • Some artists, like Grimes, are embracing the change, proposing new revenue-sharing models and utilizing blockchain-based smart contracts for royalties.
  • The future of AI-generated music presents both challenges and opportunities, with the potential to create new platforms and genres, democratize the industry, and redefine artist compensation.

The Need for AI PD — from techlearning.com by Erik Ofgang
Educators need training on how to effectively incorporate artificial intelligence into their teaching practice, says Lance Key, an award-winning educator.

“School never was fun for me,” he says, hoping that as an educator he could change that with his students. “I wanted to make learning fun.”  This ‘learning should be fun’ philosophy is at the heart of the approach he advises educators take when it comes to AI. 


Coursera Adds ChatGPT-Powered Learning Tools — from campustechnology.com by Kate Lucariello

Excerpt:

At its 11th annual conference in 2023, educational company Coursera announced it is adding ChatGPT-powered interactive ed tech tools to its learning platform, including a generative AI coach for students and an AI course-building tool for teachers. It will also add machine learning-powered translation, expanded VR immersive learning experiences, and more.

Coursera Coach will give learners a ChatGPT virtual coach to answer questions, give feedback, summarize video lectures and other materials, give career advice, and prepare them for job interviews. This feature will be available in the coming months.

From DSC:
Yes…it will be very interesting to see how tools and platforms interact from this time forth. The term “integration” will take a massive step forward, at least in my mind.


 

From DSC:
Check this confluence of emerging technologies out!

Also see:

How to spot AI-generated text — from technologyreview.com by Melissa Heikkilä
The internet is increasingly awash with text written by AI software. We need new tools to detect it.

Excerpt:

This sentence was written by an AI—or was it? OpenAI’s new chatbot, ChatGPT, presents us with a problem: How will we know whether what we read online is written by a human or a machine?

“If you have enough text, a really easy cue is the word ‘the’ occurs too many times,” says Daphne Ippolito, a senior research scientist at Google Brain, the company’s research unit for deep learning.

“A typo in the text is actually a really good indicator that it was human-written,” she adds.
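As a toy illustration of the two cues Ippolito mentions — the rate at which “the” occurs and the tell-tale presence of typos — here is a short script. Real detectors rely on statistical models rather than rules like these, and the typo list and the “typical” rate noted in the comments are rough assumptions, not thresholds from her research.

```python
import re

def naive_ai_text_cues(text):
    """Toy versions of the two cues above: the rate of 'the', and obvious typos."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    the_rate = words.count("the") / max(len(words), 1)
    common_typos = {"teh", "recieve", "definately", "seperate", "occured"}  # assumed list
    return {
        "the_rate": round(the_rate, 3),              # roughly 0.05-0.07 is typical of English prose
        "typos": sorted(set(words) & common_typos),  # typos hint at a human author
    }

print(naive_ai_text_cues("The model wrote the report, and the team read the summary of the results."))
print(naive_ai_text_cues("I definately wrote this myself, teh old-fashioned way."))
```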

7 Best Tech Developments of 2022 — from thetechranch.com

Excerpt:

As we near the end of 2022, it’s a great time to look back at some of the top technologies that have emerged this year. From AI and virtual reality to renewable energy and biotechnology, there have been a number of exciting developments that have the potential to shape the future in a big way. Here are some of the top technologies that have emerged in 2022:

 

The talent needed to adopt mobile AR in industry — from chieflearningofficer.com by Yao Huang Ph.D.

Excerpt:

Therefore, when adopting mobile AR to improve job performance, L&D professionals need to shift their mindset from offering training with AR alone to offering performance support with AR in the middle of the workflow.

The learning director from a supply chain industry pointed out that “70 percent of the information needed to build performance support systems already exists. The problem is it is all over the place and is available on different systems.”

It is the learning and development professional’s job to design a solution with the capability of the technology and present it in a way that most benefits the end users.

All participants revealed that mobile AR adoption in L&D is still new, but growing rapidly. L&D professionals face many opportunities and challenges. Understanding the benefits, challenges and opportunities of mobile AR used in the workplace is imperative.

A brief insert from DSC:
Augmented Reality (AR) is about to hit the mainstream in the next 1-3 years. It will connect the physical world with the digital world in powerful, helpful ways (and likely in negative ways as well). I think it will be far bigger and more commonly used than Virtual Reality (VR). (By the way, I’m also including Mixed Reality (MR) within the greater AR domain.) With Artificial Intelligence (AI) making strides in object recognition, AR could be huge.

Learning & Development groups should ask for funding soon — or develop proposals for future funding as the new hardware and software products mature — in order to upskill at least some members of their groups in the near future.

As with Teaching & Learning Centers within higher education, L&D groups need to practice what they preach — and be sure to train their own people as well.

 

6 trends are driving the use of #metaverse tech today. These trends and technologies will continue to drive its use over the next 3 to 5 years:

1. Gaming
2. Digital Humans
3. Virtual Spaces
4. Shared Experiences
5. Tokenized Assets
6. Spatial Computing
#GartnerSYM


“Despite all of the hype, the adoption of #metaverse tech is nascent and fragmented.” 


Also relevant/see:

According to Apple CEO Tim Cook, the Next Internet Revolution Is Not the Metaverse. It’s This — from inc.com by Nick Hobson
The metaverse is just too wacky and weird to be the next big thing. Tim Cook is betting on AR.

Excerpts:

While he might know a thing or two about radical tech, he finds it unconvincing that the average person understands the concept of the metaverse well enough to meaningfully incorporate it into their daily life.

The metaverse is just too wacky and weird.

And, according to science, he might be on to something.

 

How Long Should a Branching Scenario Be? — from christytuckerlearning.com by Christy Tucker
How long should a branching scenario be? Is 45 minutes too long? Is there an ideal length for a branching scenario?

Excerpt:

Most of the time, the branching scenarios and simulations I build are around 10 minutes long. Overall, I usually end up at 5-15 minutes for branching scenarios, with interactive video scenarios being at the longer end.

From DSC:
This makes sense to me, as (up to) 6 minutes turned out to be an ideal length for videos.

Excerpt from Optimal Video Length for Student Engagement — from blog.edx.org

The optimal video length is 6 minutes or shorter — students watched most of the way through these short videos. In fact, the average engagement time of any video maxes out at 6 minutes, regardless of its length. And engagement times decrease as videos lengthen: For instance, on average students spent around 3 minutes on videos that are longer than 12 minutes, which means that they engaged with less than a quarter of the content. Finally, certificate-earning students engaged more with videos, presumably because they had greater motivation to learn the material. (These findings appeared in a recent Wall Street Journal article, An Early Report Card on Massive Open Online Courses and its accompanying infographic.)

The take-home message for instructors is that, to maximize student engagement, they should work with instructional designers and video producers to break up their lectures into small, bite-sized pieces.

 

From DSC: What?!?! How might this new type of “parallel reality” impact smart classrooms, conference rooms, and board rooms? And/or our living rooms? Will it help deliver more personalized learning experiences within a classroom?


 

‘Accessibility is a journey’: A DEI expert on disability rights — from hrdive.com by Caroline Colvin
Employers can wait for a worker to request reasonable accommodation under the ADA, but Kelly Hermann asks: Why not be accommodating from the start?

Excerpt:

Often, employers jump to the obstacles that exist in physical spaces: nonexistent ramps for wheelchairs, manual doors that lack motion sensors, and the like. But the digital world presents challenges as well. Hermann and the U Phoenix accessibility team like to “demystify” disability for campus members seeking their counsel, she said.

“Are you making those links descriptive and are you using keywords? Or are you just saying ‘click here’ and that’s your link?” Hermann asked. Like a sighted person, an individual with a disability can also scan a webpage for links with assistive technology, but this happens audibly, Hermann said, “They tell that tool to skip by link and this is what they hear: ‘Click here.’ ‘Click here.’ ‘Click here.’ ‘Click here.’ With four links on the page all hyperlinked with ‘click here,’ [they] don’t know where [they’re] going.”
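The “click here” problem Hermann describes is easy to check for automatically. The sketch below scans a page’s links for generic text using only the Python standard library; the phrase list is an illustrative assumption, and a real audit would follow WCAG guidance rather than a short blocklist.

```python
from html.parser import HTMLParser

NON_DESCRIPTIVE = {"click here", "here", "read more", "more", "link"}  # assumed blocklist

class LinkTextChecker(HTMLParser):
    """Collect the visible text of each <a> element and flag generic phrases."""
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.current = []
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link, self.current = True, []

    def handle_data(self, data):
        if self.in_link:
            self.current.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self.in_link:
            text = " ".join("".join(self.current).split()).lower()
            if text in NON_DESCRIPTIVE:
                self.flagged.append(text)
            self.in_link = False

page = '<p><a href="/report">Click here</a> to get <a href="/syllabus">the 2023 course syllabus (PDF)</a>.</p>'
checker = LinkTextChecker()
checker.feed(page)
print("Non-descriptive link text:", checker.flagged)   # -> ['click here']
```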

 

AI Plus VR at Purdue University Global — from er.educause.edu by Abbey Elliott, Michele McMahon, Jerrica Sheridan, and Gregory Dobbin
Adding artificial intelligence to virtual reality provides nursing students with realistic, immersive learning experiences that prepare them to treat patients from diverse backgrounds.

Excerpt:

Adding artificial intelligence (AI) to immersive VR simulations can deepen the learning by enabling patient interactions that reflect a variety of patient demographics and circumstances, adjusting patient responses based on students’ questions and actions. In this way, the immersive learning activities become richer, with the goal of providing unique experiences that can help students make a successful transition from student to provider in the workforce. The use of AI and immersive learning techniques augments learning experiences and reinforces concepts presented in both didactic and clinical courses and coursework. The urgency of the pandemic prompted the development of a vision of such learning that would be sustainable beyond the pandemic as a tool for education on a relevant and scalable platform.


Speaking of emerging technologies and education/learning, also see:

NVIDIA's new AI magic turns 2D photos into 3D graphics

Best virtual tours of Ireland

 

From DSC:
I love the parts about seeing instant language translations — including sign language! Very cool indeed!
(With thanks to Ori Inbar out on Twitter for this resource.)

Real-time American Sign Language translation via a potential set of AR glasses from Google

Also see:

 
© 2024 | Daniel Christian