How Early Adopters of Gen AI Are Gaining Efficiencies — from knowledge.wharton.upenn.edu by Prasanna (Sonny) Tambe and Scott A. Snyder; via Ray Schroeder on LinkedIn
Enterprises are seeing gains from generative AI in productivity and strategic planning, according to speakers at a recent Wharton conference.

Gen AI’s unique strengths in translation, summarization, and content generation are especially useful for processing unstructured data. Some 80% of all new data in enterprises is unstructured, he noted, citing research firm Gartner. Very little of the unstructured data that resides in places like email “is used effectively at the point of decision making,” he said. “[With gen AI], we have a real opportunity” to garner new insights from all the information sitting in emails, team communication platforms like Slack, and agile project management tools like Jira.
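From DSC:
The enterprise pipelines the speakers describe are proprietary, but the core idea of mining unstructured messages for signals can be illustrated with a toy sketch. Everything below — the sample emails, the stopword list, and the frequency-based approach — is invented for illustration; a real gen AI system would hand the text to an LLM rather than count words.

```python
from collections import Counter
import re

# A tiny, illustrative stopword list (a real pipeline would use a fuller one).
STOPWORDS = {"the", "a", "an", "to", "of", "and", "is", "in", "for", "on", "we", "our", "can", "during"}

def top_terms(messages, k=3):
    """Return the k most frequent non-stopword terms across a batch of messages."""
    words = []
    for msg in messages:
        words.extend(w for w in re.findall(r"[a-z']+", msg.lower()) if w not in STOPWORDS)
    return [term for term, _ in Counter(words).most_common(k)]

# Hypothetical emails standing in for the unstructured data sitting in inboxes.
emails = [
    "Reminder: the Jira migration is scheduled for Friday.",
    "Can we review the migration checklist in Slack?",
    "Migration risks: Jira downtime during the review window.",
]
print(top_terms(emails, 2))  # ['migration', 'jira']
```

Even this crude sketch surfaces what the thread is about; the article’s point is that LLMs can do far richer versions of this at the point of decision making.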


6 YouTube Channels to Stay Up to Date with AI — from heaigirl.substack.com by Diana Dovgopol
Here are some cool AI YouTube channels.

Here are 6 YouTube channels I watch to stay up to date with AI. This list will be useful whether you’re a casual AI enthusiast or an experienced programmer.

1. Matt Wolfe: AI for non-coders
This is a fast-growing YouTube channel focused on artificial intelligence for non-coders. On this channel, you’ll find videos about ChatGPT, Midjourney, and any AI tool that’s gaining popularity.


Top AI mobile apps, Stable Video 3D, & my AI film workflow — by Heather Cooper
Plus 1-Click 3D animation and other cool AI tools

#3 Photomath
Photomath is a comprehensive math help app that provides step-by-step explanations for a wide range of math problems, from elementary to college level. Photomath is only available as a mobile app. (link)

Features:

  • Get step-by-step solutions with multiple methods to choose from
  • Scan any math problem, including word problems, using the app’s camera
  • Access custom visual aids and extra “how” and “why” tips for deeper understanding

Google researchers unveil ‘VLOGGER’, an AI that can bring still photos to life — from venturebeat.com by Michael Nuñez

Google researchers have developed a new artificial intelligence system that can generate lifelike videos of people speaking, gesturing and moving — from just a single still photo. The technology, called VLOGGER, relies on advanced machine learning models to synthesize startlingly realistic footage, opening up a range of potential applications while also raising concerns around deepfakes and misinformation.



What We Risk By Automating Tasks We Loathe — from marcwatkins.substack.com by Marc Watkins

I’m fascinated by the potential of these tools to augment and enhance our work and creativity. There’s no denying the impressive capabilities we’re already seeing with text generation, image creation, coding assistance, and more. Used thoughtfully, AI can be a powerful productivity multiplier.

At the same time, I have significant concerns about the broader implications of this accelerating technology, especially for education and society at large. We’re traversing new ground at a breakneck pace, and it’s crucial that we don’t blindly embrace AI without considering the potential risks.

My worry is that by automating away too many tasks, even seemingly rote ones like creating slide decks, we risk losing something vital—humanity at the heart of knowledge work.


Nvidia Introduces AI Nurses — from wireprompt.substack.com | Weekly AI Report from WirePrompt

Nvidia has announced a partnership with Hippocratic AI to introduce AI “agents” aimed at replacing nurses in hospitals. These AI “nurses” cost significantly less than human nurses and are purportedly intended to address staffing shortages by handling “low-risk,” patient-facing tasks via video calls. However, concerns have been raised about the ethical implications and effectiveness of replacing human nurses with AI, particularly given the complex nature of medical care.



16 Changes to the Way Enterprises Are Building and Buying Generative AI — from a16z.com by Sarah Wang and Shangda Xu

TABLE OF CONTENTS

  • Resourcing: budgets are growing dramatically and here to stay
  • Models: enterprises are trending toward a multi-model, open source world
  • Use cases: more migrating to production
  • Size of total opportunity: massive and growing quickly

 

GTC March 2024 Keynote with NVIDIA CEO Jensen Huang


Also relevant/see:




 

From DSC:
I recently ran into the following item:


UK university opens VR classroom — from inavateonthenet.net

Students at the University of Nottingham will be learning through a dedicated VR classroom, enabling remote viewing and teaching for students and lecturers.

Based in the university’s Engineering and Science Learning Centre (ESLC), the classroom, believed to be the first dedicated VR classroom in the UK, uses 40 VR headsets, 35 of which are tethered overhead to individual PCs, with five available as traditional, desk-based systems with display screens.


I admit that I was excited to see this article and I congratulate the University of Nottingham on their vision here. I hope that they can introduce more use cases and applications to provide evidence of VR’s headway.

As I look at virtual reality…

  • On the plus side, I’ve spoken with people who love to use their VR-based headsets for fun workouts/exercises. I’ve witnessed the sweat, so I know that’s true. And I believe there is value in having the ability to walk through museums that one can’t afford to get to. And I’m sure that the gamers have found some incredibly entertaining competitions out there. The experience of being immersed can be highly engaging. So there are some niche use cases for sure.
  • But on the negative side, the technologies surrounding VR haven’t progressed as much as I thought they would have by now. For example, I’m disappointed Apple’s taken so long to put a product out there, and I don’t want to invest $3500 in their new product. From the reviews and items on social media that I’ve seen, the reception is lukewarm. At the most basic level, I’m not sure people want to wear a headset for more than a few minutes.

So overall, I’d like to see more use cases and less nausea.


Addendum on 2/27/24:

Leyard ‘wall of wonder’ wows visitors at Molecular Biology Lab — from inavateonthenet.net

 

CES 2024: Unveiling The Future Of Legal Through Consumer Innovations — from abovethelaw.com by Stephen Embry
The ripple effects on the legal industry are real.

The Emerging Role of Smart TVs
Boothe and Comiskey claim that our TVs will become even smarter and better connected to the internet. Our TVs will become intelligent centers for a variety of applications powered through our smartphones. TVs will be able to direct things like appliances and security cameras. Perhaps even more importantly, our TVs could become e-commerce centers, allowing us to speak with them and conduct business.

This increased TV capability means that the TV could become a more dominant mode of working and computing for lawyers. As TVs become more integrated with the internet and capable of functioning as communication hubs, they could potentially replace traditional computing devices in legal settings. With features like voice control and pattern recognition, TVs could serve as efficient tools for such things as document preparation and client meetings.

From DSC:
Now imagine the power of voice-enabled chatbots and the like. We could be videoconferencing (or holograming) with clients, and be able to access information at the same time. Language translation — like that in the Timekettle product — will be built in.

I also wonder how this type of functionality will play out in lifelong learning from our living rooms.

Learning from the Living AI-Based Class Room

 


Also, some other legaltech-related items:


Are Tomorrow’s Lawyers Prepared for Legal’s Tech Future? 4 Recent Trends Shaping Legal Education | Legaltech News — from law.com (behind paywall)

Legal Tech Predictions for 2024: Embracing a New Era of Innovation — from jdsupra.com

As we step into 2024, the legal industry continues to be reshaped by technological advancements. This year promises to bring new developments that could revolutionize how legal professionals work and interact with clients. Here are key predictions for legal tech in 2024:

Miss the Legaltech Week 2023 Year-in-Review Show? Here’s the Recording — from lawnext.com by Bob Ambrogi

Last Friday was Legaltech Week’s year-end show, in which our panel of journalists and bloggers picked the year’s top stories in legal tech and innovation.

So what were the top stories? Well, if you missed it, no worries. Here’s the video:

 

OpenAI Is Slowly Killing Prompt Engineering With The Latest ChatGPT and DALL-E Updates — from artificialcorner.substack.com
ChatGPT and DALL-E 3 now do most of the prompting for us. Does this mean the end of prompt engineering?

Prompt engineering was a must-have skill for any AI enthusiast … at least until OpenAI released GPTs and DALL-E 3.

OpenAI doesn’t want to force users to learn prompt engineering to get the most out of its tools.

It seems OpenAI’s goal is to make its tools as easy to use as possible, allowing even non-technical people to create outstanding AI images and tailored versions of ChatGPT without learning prompting techniques or coding.

AI can now generate prompts for us, but is this enough to kill prompt engineering? To answer this, let’s see how good these AI-generated prompts are.
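The behavior being described — a tool elaborating a terse request into a fuller, more detailed prompt — can be mimicked with a simple template. The fields and wording below are purely hypothetical, not OpenAI’s actual prompt-rewriting logic:

```python
def expand_prompt(subject, style="photorealistic", mood="warm lighting", detail="highly detailed"):
    """Turn a terse user request into a fuller image prompt, in the spirit of
    what DALL-E 3 now does automatically behind the scenes."""
    return f"{detail} {style} image of {subject}, {mood}, sharp focus, 4k"

# A two-word idea becomes a richer prompt without the user learning prompt engineering.
print(expand_prompt("a lighthouse at dusk"))
# highly detailed photorealistic image of a lighthouse at dusk, warm lighting, sharp focus, 4k
```

The point the article makes is that when the tool performs this expansion itself, the user no longer needs to.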

From DSC:
I agree with several others that prompt engineering will be drastically altered… for the majority of us, I wouldn’t spend a lot of time becoming a prompt engineer.




 

The Beatles’ final song is now streaming thanks to AI — from theverge.com by Chris Welch
Machine learning helped Paul McCartney and Ringo Starr turn an old John Lennon demo into what’s likely the band’s last collaborative effort.


Scientists excited by AI tool that grades severity of rare cancer — from bbc.com by Fergus Walsh

Artificial intelligence is nearly twice as good at grading the aggressiveness of a rare form of cancer from scans as the current method, a study suggests.

By recognising details invisible to the naked eye, AI was 82% accurate, compared with 44% for lab analysis.

Researchers from the Royal Marsden Hospital and Institute of Cancer Research say it could improve treatment and benefit thousands every year.

They are also excited by its potential for spotting other cancers early.


Microsoft unveils ‘LeMa’: A revolutionary AI learning method mirroring human problem solving — from venturebeat.com by Michael Nuñez

Researchers from Microsoft Research Asia, Peking University, and Xi’an Jiaotong University have developed a new technique to improve large language models’ (LLMs) ability to solve math problems by having them learn from their mistakes, akin to how humans learn.

The researchers have revealed a pioneering strategy, Learning from Mistakes (LeMa), which trains AI to correct its own mistakes, leading to enhanced reasoning abilities, according to a research paper published this week.
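The paper’s actual pipeline uses a strong model to generate corrections for fine-tuning; as a loose, hypothetical analogy only, the mistake-collection step can be sketched like this (the buggy solver and the triple format are invented for illustration):

```python
def buggy_solver(a, b):
    """A deliberately flawed 'model': it subtracts instead of adding when the
    first operand is large, standing in for an LLM that errs on some problems."""
    return a - b if a > 10 else a + b

def collect_mistakes(problems):
    """Compare the solver's answers with ground truth and keep
    (problem, wrong answer, correction) triples -- the raw material a
    LeMa-style pipeline would turn into correction training examples."""
    mistakes = []
    for a, b in problems:
        predicted, truth = buggy_solver(a, b), a + b
        if predicted != truth:
            mistakes.append({"problem": f"{a}+{b}", "wrong": predicted, "correct": truth})
    return mistakes

corrections = collect_mistakes([(2, 3), (12, 5), (20, 1)])
print(corrections)  # the two large-operand problems are captured as mistakes
```

In the real method, each captured mistake is paired with an explanation and a corrected solution, and the model is fine-tuned on those pairs.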

Also from Michael Nuñez at venturebeat.com, see:


GPTs for all, AzeemBot; conspiracy theorist AI; big tech vs. academia; reviving organs ++448 — from exponentialview.co by Azeem Azhar and Chantal Smith


Personalized A.I. Agents Are Here. Is the World Ready for Them? — from nytimes.com by Kevin Roose (behind a paywall)

You could think of the recent history of A.I. chatbots as having two distinct phases.

The first, which kicked off last year with the release of ChatGPT and continues to this day, consists mainly of chatbots capable of talking about things. Greek mythology, vegan recipes, Python scripts — you name the topic and ChatGPT and its ilk can generate some convincing (if occasionally generic or inaccurate) text about it.

That ability is impressive, and frequently useful, but it is really just a prelude to the second phase: artificial intelligence that can actually do things. Very soon, tech companies tell us, A.I. “agents” will be able to send emails and schedule meetings for us, book restaurant reservations and plane tickets, and handle complex tasks like “negotiate a raise with my boss” or “buy Christmas presents for all my family members.”


From DSC:
Very cool!


Nvidia Stock Jumps After Unveiling of Next Major AI Chip. It’s Bad News for Rivals. — from barrons.com

On Monday, Nvidia (ticker: NVDA) announced its new H200 Tensor Core GPU. The chip incorporates 141 gigabytes of memory and offers up to 60% to 90% performance improvements versus its current H100 model when used for inference, or generating answers from popular AI models.

From DSC:
The exponential curve seems to be continuing — 60% to 90% performance improvements is a huge boost in performance.

Also relevant/see:


The 5 Best GPTs for Work — from the AI Exchange

Custom GPTs are exploding, and we wanted to highlight our top 5 that we’ve seen so far:

 

Mark Zuckerberg: First Interview in the Metaverse | Lex Fridman Podcast #398


Photo-realistic avatars show future of Metaverse communication — from inavateonthenet.net

Mark Zuckerberg, CEO, Meta, took part in the first-ever Metaverse interview using photo-realistic virtual avatars, demonstrating the Metaverse’s capability for virtual communication.

Zuckerberg appeared on the Lex Fridman podcast, using scans of both Fridman and Zuckerberg to create realistic avatars instead of a live video feed. Computer models of the avatars’ faces and bodies are put into a codec, with each headset sending an encoded version of the avatar.

The interview explored the future of AI in the metaverse, as well as the Quest 3 headset and the future of humanity.


 



Adobe video-AI announcements for IBC — from provideocoalition.com by Rich Young

For the IBC 2023 conference, Adobe announced new AI and 3D features to Creative Cloud video tools, including Premiere Pro Enhance Speech for faster dialog cleanup, and filler word detection and removal in Text-Based Editing. There’s also new AI-based rotoscoping and a true 3D workspace in the After Effects beta, as well as new camera-to-cloud integrations and advanced storage options in Frame.io.

Though not really about AI, you might also be interested in this posting:


Airt AI Art Generator (Review) — from hongkiat.com
Turn your creative ideas into masterpieces using Airt’s AI iPad app.

The Airt AI Generator app makes it easy to create art on your iPad. You can pick an art style and a model to make your artwork. It’s simple enough for anyone to use, but it doesn’t have many options for customizing your art.

Even with these limitations, it’s a good starting point for people who want to try making art with AI. Here are the good and bad points we found.

Pros:

  • User-Friendly: The app is simple and easy to use, making it accessible for users of all skill levels.

Cons:

  • Limited Advanced Features: The app lacks options for customization, such as altering image ratios, seeds, and other settings.

 

Birmingham Royal Ballet launches VR programme to improve accessibility — from inavateonthenet.net

The Birmingham Royal Ballet (BRB) has announced the launch of its virtual stage, a tech-focused project designed to bring immersive technologies into ballet.

The BRB has received funding from Bloomberg Philanthropies’ Digital Accelerator Programme, allowing the institution to invest in equipment and staff training to allow its team to explore immersive technologies with its partners Canon and RiVR.

The virtual stage project aims to explore ways in which AR, VR, 3D mapping and motion capture can be used to enhance the BRB’s productions and experiences.

 

The Ready Player One Test: Systems for Personalized Learning — from gettingsmart.com by Dagan Bernstein

Key Points

  • The single narrative education system is no longer working.
  • Its main limitation is its inability to honor young people as the dynamic individuals that they are.
  • New models of teaching and learning need to be designed to center on the student, not the teacher.

When the opportunity arises to implement learning that uses immersive technology ask yourself if the learning you are designing passes the Ready Player One Test: 

  • Does it allow learners to immerse themselves in environments that would be too expensive or dangerous to experience otherwise?
  • Can the learning be personalized by the student?
  • Is it regenerative?
  • Does it allow for learning to happen non-linearly, at any time and place?
 

Apple’s $3,499 Vision Pro AR headset is finally here — from techcrunch.com by Brian Heater

Image of the Vision Pro AR headset from Apple

Image Credits: Apple

Excerpts:

“With Vision Pro, you’re no longer limited by a display,” Apple CEO Tim Cook said, introducing the new headset at WWDC 2023. Contrary to earlier mixed reality reports, the system is far more focused on augmented reality than virtual reality. The company refers to this new paradigm as “spatial computing.”


Reflections from Scott Belsky re: the Vision Pro — from implications.com


Apple WWDC 2023: Everything announced from the Apple Vision Pro to iOS 17, MacBook Air and more — from techcrunch.com by Christine Hall



Apple unveils new tech — from therundown.ai (The Rundown)

Here were the biggest things announced:

  • A 15” MacBook Air, now the thinnest 15” laptop available
  • The new Mac Pro workstation, presumably a billion dollars
  • M2 Ultra, Apple’s new super chip
  • NameDrop, an AirDrop-integrated data-sharing feature allowing users to share contact info just by bringing their phones together
  • Journal, an ML-powered personalized journaling app
  • Standby, turning your iPhone into a nightstand alarm clock
  • A new, AI-powered update to autocorrect (finally)
  • Apple Vision Pro


Apple announces AR/VR headset called Vision Pro — from joinsuperhuman.ai by Zain Kahn

Excerpt:

“This is the first Apple product you look through and not at.” – Tim Cook

And with those famous words, Apple announced a new era of consumer tech.

Apple’s new headset will run visionOS – its new operating system – and will work with existing iOS and iPadOS apps. The new OS is created specifically for spatial computing — the blending of digital content into real space.

Vision Pro is controlled through hand gestures, eye movements and your voice (parts of it assisted by AI). You can use apps, change their size, capture photos and videos and more.


From DSC:
Time will tell what happens with this new operating system and with this type of platform. I’m impressed with the engineering — as Apple wants me to be — but I doubt that this will become mainstream for quite some time yet. Also, I wonder what Steve Jobs would think of this…? Would he say that people would be willing to wear this headset (for long? at all?)? What about Jony Ive?

I’m sure the offered experiences will be excellent. But I won’t be buying one, as it’s waaaaaaaaay too expensive.


 


From DSC:
I also wanted to highlight the item below, which Barsee also mentioned above, as it will likely hit the world of education and training as well:



Also relevant/see:


 

Brainyacts #57: Education Tech — from thebrainyacts.beehiiv.com by Josh Kubicki

Excerpts:

Let’s look at some ideas of how law schools could use AI tools like Khanmigo or ChatGPT to support lectures, assignments, and discussions, or use plagiarism detection software to maintain academic integrity.

  1. Personalized learning
  2. Virtual tutors and coaches
  3. Interactive simulations
  4. Enhanced course materials
  5. Collaborative learning
  6. Automated assessment and feedback
  7. Continuous improvement
  8. Accessibility and inclusivity

AI Will Democratize Learning — from td.org by Julia Stiglitz and Sourabh Bajaj

Excerpts:

In particular, we’re betting on four trends for AI and L&D.

  1. Rapid content production
  2. Personalized content
  3. Detailed, continuous feedback
  4. Learner-driven exploration

In a world where only 7 percent of the global population has a college degree, and as many as three quarters of workers don’t feel equipped to learn the digital skills their employers will need in the future, this is the conversation people need to have.

Taken together, these trends will change the cost structure of education and give learning practitioners new superpowers. Learners of all backgrounds will be able to access quality content on any topic and receive the ongoing support they need to master new skills. Even small L&D teams will be able to create programs that have both deep and broad impact across their organizations.

The Next Evolution in Educational Technologies and Assisted Learning Enablement — from educationoneducation.substack.com by Jeannine Proctor

Excerpt:

Generative AI is set to play a pivotal role in the transformation of educational technologies and assisted learning. Its ability to personalize learning experiences, power intelligent tutoring systems, generate engaging content, facilitate collaboration, and assist in assessment and grading will significantly benefit both students and educators.

How Generative AI Will Enable Personalized Learning Experiences — from campustechnology.com by Rhea Kelly

Excerpt:

With today’s advancements in generative AI, that vision of personalized learning may not be far off from reality. We spoke with Dr. Kim Round, associate dean of the Western Governors University School of Education, about the potential of technologies like ChatGPT for learning, the need for AI literacy skills, why learning experience designers have a leg up on AI prompt engineering, and more. And get ready for more Star Trek references, because the parallels between AI and Sci Fi are futile to resist.

The Promise of Personalized Learning Never Delivered. Today’s AI Is Different — from the74million.org by John Bailey; with thanks to GSV for this resource

Excerpts:

There are four reasons why this generation of AI tools is likely to succeed where other technologies have failed:

    1. Smarter capabilities
    2. Reasoning engines
    3. Language is the interface
    4. Unprecedented scale

Latest NVIDIA Graphics Research Advances Generative AI’s Next Frontier — from blogs.nvidia.com by Aaron Lefohn
NVIDIA will present around 20 research papers at SIGGRAPH, the year’s most important computer graphics conference.

Excerpt:

NVIDIA today introduced a wave of cutting-edge AI research that will enable developers and artists to bring their ideas to life — whether still or moving, in 2D or 3D, hyperrealistic or fantastical.

Around 20 NVIDIA Research papers advancing generative AI and neural graphics — including collaborations with over a dozen universities in the U.S., Europe and Israel — are headed to SIGGRAPH 2023, the premier computer graphics conference, taking place Aug. 6-10 in Los Angeles.

The papers include generative AI models that turn text into personalized images; inverse rendering tools that transform still images into 3D objects; neural physics models that use AI to simulate complex 3D elements with stunning realism; and neural rendering models that unlock new capabilities for generating real-time, AI-powered visual details.

 

Also relevant to the item from Nvidia (above), see:

Unreal Engine’s MetaHuman Creator — with thanks to Mr. Steven Chevalia for this resource

Excerpt:

MetaHuman is a complete framework that gives any creator the power to use highly realistic human characters in any way imaginable.

It includes MetaHuman Creator, a free cloud-based app that enables you to create fully rigged photorealistic digital humans in minutes.

From Unreal Engine -- Dozens of ready-made MetaHumans are at your fingertips.

 

35 Ways Real People Are Using A.I. Right Now — from nytimes.com by Francesca Paris and Larry Buchanan

From DSC:
It was interesting to see how people are using AI these days. The article mentions everything from planning gluten-free (GF) meals to planning gardens, workouts, and more. Faculty members, staff, students, researchers, and educators in general may find Elicit, Scholarcy, and Scite to be useful tools. I put a question into Elicit and it looks interesting. I like their interface, which allows me to quickly re-sort things.

Snapshot of a query result from a tool called Elicit


 

There Is No A.I. — from newyorker.com by Jaron Lanier
There are ways of controlling the new technology—but first we have to stop mythologizing it.

Excerpts:

If the new tech isn’t true artificial intelligence, then what is it? In my view, the most accurate way to understand what we are building today is as an innovative form of social collaboration.

The new programs mash up work done by human minds. What’s innovative is that the mashup process has become guided and constrained, so that the results are usable and often striking. This is a significant achievement and worth celebrating—but it can be thought of as illuminating previously hidden concordances between human creations, rather than as the invention of a new mind.

 


 

Resource per Steve Nouri on LinkedIn


 

Meet Adobe Firefly. — from adobe.com
Experiment, imagine, and make an infinite range of creations with Firefly, a family of creative generative AI models coming to Adobe products.

Generative AI made for creators.
With the beta version of the first Firefly model, you can use everyday language to generate extraordinary new content. Looking forward, Firefly has the potential to do much, much more.


Also relevant/see:


Gen-2: The Next Step Forward for Generative AI — from research.runwayml.com
A multi-modal AI system that can generate novel videos with text, images, or video clips.

Realistically and consistently synthesize new videos. Either by applying the composition and style of an image or text prompt to the structure of a source video (Video to Video). Or, using nothing but words (Text to Video). It’s like filming something new, without filming anything at all.

 
© 2024 | Daniel Christian