(Excerpt from the 12/4/24 edition)

Robot “Jailbreaks”
In the year or so since large language models hit the big time, researchers have demonstrated numerous ways of tricking them into producing problematic outputs, including hateful jokes, malicious code, phishing emails, and the personal information of users. It turns out that misbehavior can take place in the physical world, too: LLM-powered robots can easily be hacked so that they behave in potentially dangerous ways.

Researchers from the University of Pennsylvania were able to persuade a simulated self-driving car to ignore stop signs and even drive off a bridge, get a wheeled robot to find the best place to detonate a bomb, and force a four-legged robot to spy on people and enter restricted areas.

“We view our attack not just as an attack on robots,” says George Pappas, head of a research lab at the University of Pennsylvania who helped unleash the rebellious robots. “Any time you connect LLMs and foundation models to the physical world, you actually can convert harmful text into harmful actions.”

The robot “jailbreaks” highlight a broader risk that is likely to grow as AI models become increasingly used as a way for humans to interact with physical systems, or to enable AI agents to act autonomously on computers, say the researchers involved.


Virtual lab powered by ‘AI scientists’ super-charges biomedical research — from nature.com by Helena Kudiabor
Could human-AI collaborations be the future of interdisciplinary studies?

In an effort to automate scientific discovery using artificial intelligence (AI), researchers have created a virtual laboratory that combines several ‘AI scientists’ — large language models with defined scientific roles — that can collaborate to achieve goals set by human researchers.

The system, described in a preprint posted on bioRxiv last month, was able to design antibody fragments called nanobodies that can bind to the virus that causes COVID-19, proposing nearly 100 of these structures in a fraction of the time it would take an all-human research group.


Can AI agents accelerate AI implementation for CIOs? — from intelligentcio.com by Arun Shankar

By embracing an agent-first approach, every CIO can redefine their business operations. AI agents are now the number one choice for CIOs as they come pre-built and can generate responses that are consistent with a company’s brand using trusted business data, explains Thierry Nicault at Salesforce Middle East.


AI Turns Photos Into 3D Real World — from theaivalley.com by Barsee

Here’s what you need to know:

  • The system generates full 3D environments that expand beyond what’s visible in the original image, allowing users to explore new perspectives.
  • Users can freely navigate and view the generated space with standard keyboard and mouse controls, similar to browsing a website.
  • It includes real-time camera effects like depth-of-field and dolly zoom, as well as interactive lighting and animation sliders to tweak scenes.
  • The system works with both photos and AI-generated images, enabling creators to integrate it with text-to-image tools or even famous works of art.

Why it matters:
This technology opens up exciting possibilities for industries like gaming, film, and virtual experiences. Soon, creating fully immersive worlds could be as simple as generating a static image.

Also related, see:

From World Labs

Today we’re sharing our first step towards spatial intelligence: an AI system that generates 3D worlds from a single image. This lets you step into any image and explore it in 3D.

Most GenAI tools make 2D content like images or videos. Generating in 3D instead improves control and consistency. This will change how we make movies, games, simulators, and other digital manifestations of our physical world.

In this post you’ll explore our generated worlds, rendered live in your browser. You’ll also experience different camera effects, 3D effects, and dive into classic paintings. Finally, you’ll see how creators are already building with our models.


Addendum on 12/5/24:

 

2024-11-22: The Race to the Top: Dario Amodei on AGI, Risks, and the Future of Anthropic — from emergentbehavior.co by Prakash (Ate-a-Pi)

Risks on the Horizon: ASL Levels
The two key risks Dario is concerned about are:

a) cyber, bio, radiological, nuclear (CBRN)
b) model autonomy

These risks are captured in Anthropic’s framework for understanding AI Safety Levels (ASL):

1. ASL-1: Narrow-task AI like Deep Blue (no autonomy, minimal risk).
2. ASL-2: Current systems like ChatGPT/Claude, which lack autonomy and don’t pose significant risks beyond information already accessible via search engines.
3. ASL-3: Agents arriving soon (potentially next year) that can meaningfully assist non-state actors in dangerous activities like cyber or CBRN (chemical, biological, radiological, nuclear) attacks. Security and filtering are critical at this stage to prevent misuse.
4. ASL-4: AI smart enough to evade detection, deceive testers, and assist state actors with dangerous projects. At this level, models are capable enough that state actors would want to use them for dangerous work. Mechanistic interpretability becomes crucial for verifying AI behavior.
5. ASL-5: AGI surpassing human intelligence in all domains, posing unprecedented challenges.

Anthropic’s if/then framework ensures proactive responses: if a model demonstrates danger, the team clamps down hard, enforcing strict controls.



Should You Still Learn to Code in an A.I. World? — from nytimes.com
Coding boot camps once looked like the golden ticket to an economically secure future. But as that promise fades, what should you do? Keep learning, until further notice.

Compared with five years ago, the number of active job postings for software developers has dropped 56 percent, according to data compiled by CompTIA. For inexperienced developers, the plunge is an even worse 67 percent.
“I would say this is the worst environment for entry-level jobs in tech, period, that I’ve seen in 25 years,” said Venky Ganesan, a partner at the venture capital firm Menlo Ventures.

For years, the career advice from everyone who mattered — the Apple chief executive Tim Cook, your mother — was “learn to code.” It felt like an immutable equation: Coding skills + hard work = job.

Now the math doesn’t look so simple.

Also see:

AI builds apps in 2 mins flat — where the Neuron mentions this excerpt about Lovable:

There’s a new coding startup in town, and it just MIGHT have everybody else shaking in their boots (we’ll qualify that in a sec, don’t worry).

It’s called Lovable, the “world’s first AI fullstack engineer.”

Lovable does all of that by itself. Tell it what you want to build in plain English, and it creates everything you need. Want users to be able to log in? One click. Need to store data? One click. Want to accept payments? You get the idea.

Early users are backing up these claims. One person even launched a startup that made Product Hunt’s top 10 using just Lovable.

As for us, we made a Wordle clone in 2 minutes with one prompt. Only edit needed? More words in the dictionary. It’s like, really easy y’all.


When to chat with AI (and when to let it work) — from aiwithallie.beehiiv.com by Allie K. Miller

Re: some ideas on how to use Notebook LM:

  • Turn your company’s annual report into an engaging podcast
  • Create an interactive FAQ for your product manual
  • Generate a timeline of your industry’s history from multiple sources
  • Produce a study guide for your online course content
  • Develop a Q&A system for your company’s knowledge base
  • Synthesize research papers into digestible summaries
  • Create an executive content briefing from multiple competitor blog posts
  • Generate a podcast discussing the key points of a long-form research paper

Introducing conversation practice: AI-powered simulations to build soft skills — from codesignal.com by Albert Sahakyan

From DSC:
I have to admit I’m a bit suspicious here, as the “conversation practice” product seems a bit too scripted at times, but I post it because the idea of using AI to practice soft skills development makes a great deal of sense:


 


How Early Adopters of Gen AI Are Gaining Efficiencies — from knowledge.wharton.upenn.edu by Prasanna (Sonny) Tambe and Scott A. Snyder; via Ray Schroeder on LinkedIn
Enterprises are seeing gains from generative AI in productivity and strategic planning, according to speakers at a recent Wharton conference.

Generative AI’s unique strengths in translation, summarization, and content generation are especially useful in processing unstructured data. Some 80% of all new data in enterprises is unstructured, he noted, citing research firm Gartner. Very little of the unstructured data that resides in places like emails “is used effectively at the point of decision making,” he noted. “[With gen AI], we have a real opportunity” to garner new insights from all the information that resides in emails, team communication platforms like Slack, and agile project management tools like Jira, he said.


6 YouTube Channels to Stay Up to Date with AI — from heaigirl.substack.com by Diana Dovgopol
Here are some cool AI YouTube channels.

Here are 6 YouTube channels I watch to stay up to date with AI. This list will be useful whether you’re a casual AI enthusiast or an experienced programmer.

1. Matt Wolfe: AI for non-coders
This is a fast-growing YouTube channel focused on artificial intelligence for non-coders. On this channel, you’ll find videos about ChatGPT, Midjourney, and any AI tool that’s gaining popularity.


Top AI mobile apps, Stable Video 3D, & my AI film workflow — by Heather Cooper
Plus 1-Click 3D animation and other cool AI tools

#3 Photomath
Photomath is a comprehensive math help app that provides step-by-step explanations for a wide range of math problems, from elementary to college level. Photomath is only available as a mobile app. (link)

Features:

  • Get step-by-step solutions with multiple methods to choose from
  • Scan any math problem, including word problems, using the app’s camera
  • Access custom visual aids and extra “how” and “why” tips for deeper understanding

Google researchers unveil ‘VLOGGER’, an AI that can bring still photos to life — from venturebeat.com by Michael Nuñez

Google researchers have developed a new artificial intelligence system that can generate lifelike videos of people speaking, gesturing and moving — from just a single still photo. The technology, called VLOGGER, relies on advanced machine learning models to synthesize startlingly realistic footage, opening up a range of potential applications while also raising concerns around deepfakes and misinformation.



What We Risk By Automating Tasks We Loathe — from marcwatkins.substack.com by Marc Watkins

I’m fascinated by the potential of these tools to augment and enhance our work and creativity. There’s no denying the impressive capabilities we’re already seeing with text generation, image creation, coding assistance, and more. Used thoughtfully, AI can be a powerful productivity multiplier.

At the same time, I have significant concerns about the broader implications of this accelerating technology, especially for education and society at large. We’re traversing new ground at a breakneck pace, and it’s crucial that we don’t blindly embrace AI without considering the potential risks.

My worry is that by automating away too many tasks, even seemingly rote ones like creating slide decks, we risk losing something vital—humanity at the heart of knowledge work.


Nvidia Introduces AI Nurses — from wireprompt.substack.com | Weekly AI Report from WirePrompt

Nvidia has announced a partnership with Hippocratic AI to introduce AI “agents” aimed at replacing nurses in hospitals. These AI “nurses” come at a significantly lower cost compared to human nurses and are purportedly intended to address staffing issues by handling “low-risk,” patient-facing tasks via video calls. However, concerns have been raised regarding the ethical implications and effectiveness of replacing human nurses with AI, particularly given the complex nature of medical care.



16 Changes to the Way Enterprises Are Building and Buying Generative AI — from a16z.com by Sarah Wang and Shangda Xu

TABLE OF CONTENTS

  • Resourcing: budgets are growing dramatically and here to stay
  • Models: enterprises are trending toward a multi-model, open source world
  • Use cases: more migrating to production
  • Size of total opportunity: massive and growing quickly

 

GTC March 2024 Keynote with NVIDIA CEO Jensen Huang


Also relevant/see:




 

From DSC:
I recently ran into the following item:


UK university opens VR classroom — from inavateonthenet.net

Students at the University of Nottingham will be learning through a dedicated VR classroom, enabling remote viewing and teaching for students and lecturers.

Based in the university’s Engineering and Science Learning Centre (ESLC), the classroom, believed to be the first dedicated VR classroom in the UK, uses 40 VR headsets, 35 of which are tethered overhead to individual PCs, with five available as traditional, desk-based systems with display screens.


I admit that I was excited to see this article and I congratulate the University of Nottingham on their vision here. I hope that they can introduce more use cases and applications to provide evidence of VR’s headway.

As I look at virtual reality…

  • On the plus side, I’ve spoken with people who love to use their VR-based headsets for fun workouts/exercises. I’ve witnessed the sweat, so I know that’s true. And I believe there is value in having the ability to walk through museums that one can’t afford to get to. And I’m sure that the gamers have found some incredibly entertaining competitions out there. The experience of being immersed can be highly engaging. So there are some niche use cases for sure.
  • But on the negative side, the technologies surrounding VR haven’t progressed as much as I thought they would have by now. For example, I’m disappointed Apple’s taken so long to put a product out there, and I don’t want to invest $3500 in their new product. From the reviews and items on social media that I’ve seen, the reception is lukewarm. At the most basic level, I’m not sure people want to wear a headset for more than a few minutes.

So overall, I’d like to see more use cases and less nausea.


Addendum on 2/27/24:

Leyard ‘wall of wonder’ wows visitors at Molecular Biology Lab — from inavateonthenet.net

 

CES 2024: Unveiling The Future Of Legal Through Consumer Innovations — from abovethelaw.com by Stephen Embry
The ripple effects on the legal industry are real.

The Emerging Role of Smart TVs
Boothe and Comiskey claim that our TVs will become even smarter and better connected to the internet. Our TVs will become an intelligent center for a variety of applications powered through our smartphone. TVs will be able to direct things like appliances and security cameras. Perhaps even more importantly, our TVs can become e-commerce centers, allowing us to speak with them and conduct business.

This increased TV capability means that the TV could become a more dominant mode of working and computing for lawyers. As TVs become more integrated with the internet and capable of functioning as communication hubs, they could potentially replace traditional computing devices in legal settings. With features like voice control and pattern recognition, TVs could serve as efficient tools for such things as document preparation and client meetings.

From DSC:
Now imagine the power of voice-enabled chatbots and the like. We could be videoconferencing (or holograming) with clients, and be able to access information at the same time. Language translation — like that in the Timekettle product — will be built in.

I also wonder how this type of functionality will play out in lifelong learning from our living rooms.

Learning from the Living AI-Based Class Room

 


Also, some other legaltech-related items:


Are Tomorrow’s Lawyers Prepared for Legal’s Tech Future? 4 Recent Trends Shaping Legal Education | Legaltech News — from law.com (behind paywall)

Legal Tech Predictions for 2024: Embracing a New Era of Innovation — from jdsupra.com

As we step into 2024, the legal industry continues to be reshaped by technological advancements. This year promises to bring new developments that could revolutionize how legal professionals work and interact with clients. Here are key predictions for legal tech in 2024:

Miss the Legaltech Week 2023 Year-in-Review Show? Here’s the Recording — from lawnext.com by Bob Ambrogi

Last Friday was Legaltech Week’s year-end show, in which our panel of journalists and bloggers picked the year’s top stories in legal tech and innovation.

So what were the top stories? Well, if you missed it, no worries. Here’s the video:

 

OpenAI Is Slowly Killing Prompt Engineering With The Latest ChatGPT and DALL-E Updates — from artificialcorner.substack.com
ChatGPT and DALL-E 3 now do most of the prompting for us. Does this mean the end of prompt engineering?

Prompt engineering was a must-have skill for any AI enthusiast … at least until OpenAI released GPTs and DALL-E 3.

OpenAI doesn’t want to force users to learn prompt engineering to get the most out of its tools.

It seems OpenAI’s goal is to make its tools as easy to use as possible, allowing even non-tech people to create outstanding AI images and tailored versions of ChatGPT without learning prompting techniques or coding.

AI can now generate prompts for us, but is this enough to kill prompt engineering? To answer this, let’s see how good these AI-generated prompts are.

From DSC:
I agree with several others that prompt engineering will be drastically altered. For the majority of us, I wouldn’t spend a lot of time becoming a prompt engineer.




 

The Beatles’ final song is now streaming thanks to AI — from theverge.com by Chris Welch
Machine learning helped Paul McCartney and Ringo Starr turn an old John Lennon demo into what’s likely the band’s last collaborative effort.


Scientists excited by AI tool that grades severity of rare cancer — from bbc.com by Fergus Walsh

Artificial intelligence is nearly twice as good at grading the aggressiveness of a rare form of cancer from scans as the current method, a study suggests.

By recognising details invisible to the naked eye, AI was 82% accurate, compared with 44% for lab analysis.

Researchers from the Royal Marsden Hospital and Institute of Cancer Research say it could improve treatment and benefit thousands every year.

They are also excited by its potential for spotting other cancers early.


Microsoft unveils ‘LeMa’: A revolutionary AI learning method mirroring human problem solving — from venturebeat.com by Michael Nuñez

Researchers from Microsoft Research Asia, Peking University, and Xi’an Jiaotong University have developed a new technique to improve large language models’ (LLMs) ability to solve math problems by having them learn from their mistakes, akin to how humans learn.

The researchers have revealed a pioneering strategy, Learning from Mistakes (LeMa), which trains AI to correct its own mistakes, leading to enhanced reasoning abilities, according to a research paper published this week.

Also from Michael Nuñez at venturebeat.com, see:


GPTs for all, AzeemBot; conspiracy theorist AI; big tech vs. academia; reviving organs ++448 — from exponentialview.co by Azeem Azhar and Chantal Smith


Personalized A.I. Agents Are Here. Is the World Ready for Them? — from nytimes.com by Kevin Roose (behind a paywall)

You could think of the recent history of A.I. chatbots as having two distinct phases.

The first, which kicked off last year with the release of ChatGPT and continues to this day, consists mainly of chatbots capable of talking about things. Greek mythology, vegan recipes, Python scripts — you name the topic and ChatGPT and its ilk can generate some convincing (if occasionally generic or inaccurate) text about it.

That ability is impressive, and frequently useful, but it is really just a prelude to the second phase: artificial intelligence that can actually do things. Very soon, tech companies tell us, A.I. “agents” will be able to send emails and schedule meetings for us, book restaurant reservations and plane tickets, and handle complex tasks like “negotiate a raise with my boss” or “buy Christmas presents for all my family members.”


From DSC:
Very cool!


Nvidia Stock Jumps After Unveiling of Next Major AI Chip. It’s Bad News for Rivals. — from barrons.com

On Monday, Nvidia (ticker: NVDA) announced its new H200 Tensor Core GPU. The chip incorporates 141 gigabytes of memory and offers up to 60% to 90% performance improvements versus its current H100 model when used for inference, or generating answers from popular AI models.

From DSC:
The exponential curve seems to be continuing — 60% to 90% performance improvements is a huge boost in performance.

Also relevant/see:


The 5 Best GPTs for Work — from the AI Exchange

Custom GPTs are exploding, and we wanted to highlight our top 5 that we’ve seen so far:

 

Mark Zuckerberg: First Interview in the Metaverse | Lex Fridman Podcast #398


Photo-realistic avatars show future of Metaverse communication — from inavateonthenet.net

Mark Zuckerberg, CEO, Meta, took part in the first-ever Metaverse interview using photo-realistic virtual avatars, demonstrating the Metaverse’s capability for virtual communication.

Zuckerberg appeared on the Lex Fridman podcast, using scans of both Fridman and Zuckerberg to create realistic avatars instead of a live video feed. Computer models of the avatars’ faces and bodies are run through a codec, with each headset sending an encoded version of its wearer’s avatar.

The interview explored the future of AI in the metaverse, as well as the Quest 3 headset and the future of humanity.


 



Adobe video-AI announcements for IBC — from provideocoalition.com by Rich Young

For the IBC 2023 conference, Adobe announced new AI and 3D features to Creative Cloud video tools, including Premiere Pro Enhance Speech for faster dialog cleanup, and filler word detection and removal in Text-Based Editing. There’s also new AI-based rotoscoping and a true 3D workspace in the After Effects beta, as well as new camera-to-cloud integrations and advanced storage options in Frame.io.

Though not really about AI, you might also be interested in this posting:


Airt AI Art Generator (Review) — from hongkiat.com
Turn your creative ideas into masterpieces using Airt’s AI iPad app.

The Airt AI Generator app makes it easy to create art on your iPad. You can pick an art style and a model to make your artwork. It’s simple enough for anyone to use, but it doesn’t have many options for customizing your art.

Even with these limitations, it’s a good starting point for people who want to try making art with AI. Here are the good and bad points we found.

Pros:

  • User-Friendly: The app is simple and easy to use, making it accessible for users of all skill levels.

Cons:

  • Limited Advanced Features: The app lacks options for customization, such as altering image ratios, seeds, and other settings.

 

Birmingham Royal Ballet launches VR programme to improve accessibility — from inavateonthenet.net

The Birmingham Royal Ballet (BRB) has announced the launch of its virtual stage, a tech-focused project designed to bring immersive technologies into ballet.

The BRB has received funding from Bloomberg Philanthropies’ Digital Accelerator Programme, allowing the institution to invest in equipment and staff training to allow its team to explore immersive technologies with its partners Canon and RiVR.

The virtual stage project aims to explore ways in which AR, VR, 3D mapping and motion capture can be used to enhance the BRB’s productions and experiences.

 

The Ready Player One Test: Systems for Personalized Learning — from gettingsmart.com by Dagan Bernstein

Key Points

  • The single narrative education system is no longer working.
  • Its main limitation is its inability to honor young people as the dynamic individuals that they are.
  • New models of teaching and learning need to be designed to center on the student, not the teacher.

When the opportunity arises to implement learning that uses immersive technology ask yourself if the learning you are designing passes the Ready Player One Test: 

  • Does it allow learners to immerse themselves in environments that would be too expensive or dangerous to experience otherwise?
  • Can the learning be personalized by the student?
  • Is it regenerative?
  • Does it allow for learning to happen non-linearly, at any time and place?
 

Apple’s $3,499 Vision Pro AR headset is finally here — from techcrunch.com by Brian Heater


Excerpts:

“With Vision Pro, you’re no longer limited by a display,” Apple CEO Tim Cook said, introducing the new headset at WWDC 2023. Unlike earlier mixed reality reports, the system is far more focused on augmented reality than virtual reality. The company refers to this new paradigm as “spatial computing.”


Reflections from Scott Belsky re: the Vision Pro — from implications.com


Apple WWDC 2023: Everything announced from the Apple Vision Pro to iOS 17, MacBook Air and more — from techcrunch.com by Christine Hall



Apple unveils new tech — from therundown.ai (The Rundown)

Here were the biggest things announced:

  • A 15” MacBook Air, now the thinnest 15” laptop available
  • The new Mac Pro workstation, presumably a billion dollars
  • M2 Ultra, Apple’s new super chip
  • NameDrop, an AirDrop-integrated data-sharing feature allowing users to share contact info just by bringing their phones together
  • Journal, an ML-powered personalized journaling app
  • Standby, turning your iPhone into a nightstand alarm clock
  • A new, AI-powered update to autocorrect (finally)
  • Apple Vision Pro


Apple announces AR/VR headset called Vision Pro — from joinsuperhuman.ai by Zain Kahn

Excerpt:

“This is the first Apple product you look through and not at.” – Tim Cook

And with those famous words, Apple announced a new era of consumer tech.

Apple’s new headset will operate on VisionOS – its new operating system – and will work with existing iOS and iPad apps. The new OS is created specifically for spatial computing — the blend of digital content into real space.

Vision Pro is controlled through hand gestures, eye movements and your voice (parts of it assisted by AI). You can use apps, change their size, capture photos and videos and more.


From DSC:
Time will tell what happens with this new operating system and with this type of platform. I’m impressed with the engineering — as Apple wants me to be — but I doubt that this will become mainstream for quite some time yet. Also, I wonder what Steve Jobs would think of this…? Would he say that people would be willing to wear this headset (for long? at all?)? What about Jony Ive?

I’m sure the offered experiences will be excellent. But I won’t be buying one, as it’s waaaaaaaaay too expensive.


 


From DSC:
I also wanted to highlight the item below, which Barsee also mentioned above, as it will likely hit the world of education and training as well:



Also relevant/see:


 

Brainyacts #57: Education Tech— from thebrainyacts.beehiiv.com by Josh Kubicki

Excerpts:

Let’s look at some ideas for how law schools could use AI tools like Khanmigo or ChatGPT to support lectures, assignments, and discussions, or use plagiarism detection software to maintain academic integrity.

  1. Personalized learning
  2. Virtual tutors and coaches
  3. Interactive simulations
  4. Enhanced course materials
  5. Collaborative learning
  6. Automated assessment and feedback
  7. Continuous improvement
  8. Accessibility and inclusivity

AI Will Democratize Learning — from td.org by Julia Stiglitz and Sourabh Bajaj

Excerpts:

In particular, we’re betting on four trends for AI and L&D.

  1. Rapid content production
  2. Personalized content
  3. Detailed, continuous feedback
  4. Learner-driven exploration

In a world where only 7 percent of the global population has a college degree, and as many as three quarters of workers don’t feel equipped to learn the digital skills their employers will need in the future, this is the conversation people need to have.

Taken together, these trends will change the cost structure of education and give learning practitioners new superpowers. Learners of all backgrounds will be able to access quality content on any topic and receive the ongoing support they need to master new skills. Even small L&D teams will be able to create programs that have both deep and broad impact across their organizations.

The Next Evolution in Educational Technologies and Assisted Learning Enablement — from educationoneducation.substack.com by Jeannine Proctor

Excerpt:

Generative AI is set to play a pivotal role in the transformation of educational technologies and assisted learning. Its ability to personalize learning experiences, power intelligent tutoring systems, generate engaging content, facilitate collaboration, and assist in assessment and grading will significantly benefit both students and educators.

How Generative AI Will Enable Personalized Learning Experiences — from campustechnology.com by Rhea Kelly

Excerpt:

With today’s advancements in generative AI, that vision of personalized learning may not be far off from reality. We spoke with Dr. Kim Round, associate dean of the Western Governors University School of Education, about the potential of technologies like ChatGPT for learning, the need for AI literacy skills, why learning experience designers have a leg up on AI prompt engineering, and more. And get ready for more Star Trek references, because the parallels between AI and Sci Fi are futile to resist.

The Promise of Personalized Learning Never Delivered. Today’s AI Is Different — from the74million.org by John Bailey; with thanks to GSV for this resource

Excerpts:

There are four reasons why this generation of AI tools is likely to succeed where other technologies have failed:

    1. Smarter capabilities
    2. Reasoning engines
    3. Language is the interface
    4. Unprecedented scale

Latest NVIDIA Graphics Research Advances Generative AI’s Next Frontier — from blogs.nvidia.com by Aaron Lefohn
NVIDIA will present around 20 research papers at SIGGRAPH, the year’s most important computer graphics conference.

Excerpt:

NVIDIA today introduced a wave of cutting-edge AI research that will enable developers and artists to bring their ideas to life — whether still or moving, in 2D or 3D, hyperrealistic or fantastical.

Around 20 NVIDIA Research papers advancing generative AI and neural graphics — including collaborations with over a dozen universities in the U.S., Europe and Israel — are headed to SIGGRAPH 2023, the premier computer graphics conference, taking place Aug. 6-10 in Los Angeles.

The papers include generative AI models that turn text into personalized images; inverse rendering tools that transform still images into 3D objects; neural physics models that use AI to simulate complex 3D elements with stunning realism; and neural rendering models that unlock new capabilities for generating real-time, AI-powered visual details.

 

Also relevant to the item from Nvidia (above), see:

Unreal Engine’s Metahuman Creator — with thanks to Mr. Steven Chevalia for this resource

Excerpt:

MetaHuman is a complete framework that gives any creator the power to use highly realistic human characters in any way imaginable.

It includes MetaHuman Creator, a free cloud-based app that enables you to create fully rigged photorealistic digital humans in minutes.

From Unreal Engine: Dozens of ready-made MetaHumans are at your fingertips.

 
© 2024 | Daniel Christian