What DICE does in this posting will be available 24x7x365 in the future [Christian]

From DSC:
First of all, when you look at the following posting:


What Top Tech Skills Should You Learn for 2025? — from dice.com by Nick Kolakowski


…you will see that they outline which skills you should consider mastering in 2025 if you want to stay on top of the latest career opportunities. They then provide more information about each skill, how it is applied, and WHERE to get it.

I assert that in the future, people will be able to see this information on a 24x7x365 basis.

  • Which jobs are in demand?
  • What skills do I need to do those jobs?
  • WHERE do I get/develop those skills?

And that last part (the WHERE to develop those skills) will pull from many different institutions, people, companies, etc.
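To make that vision a bit more concrete, below is a minimal sketch (in Python, with entirely made-up data) of the kind of lookup such a 24x7x365 service might answer on demand. The job titles, skills, and providers are illustrative placeholders only; a real "WHERE" layer would pull live data from many institutions, companies, and people.

```python
# A minimal, hypothetical sketch of an always-available jobs -> skills -> providers lookup.
# All data below is made up for illustration.

JOB_MARKET = {
    "machine learning engineer": {"demand_rank": 1, "skills": ["python", "statistics", "mlops"]},
    "cloud architect": {"demand_rank": 2, "skills": ["cloud platforms", "networking", "security"]},
}

SKILL_PROVIDERS = {
    "python": ["community college course", "bootcamp", "MOOC"],
    "statistics": ["university course", "MOOC"],
    "mlops": ["vendor certification", "on-the-job learning"],
    "cloud platforms": ["vendor certification", "bootcamp"],
    "networking": ["community college course"],
    "security": ["vendor certification", "university course"],
}

def where_to_learn(job_title: str) -> dict:
    """Map an in-demand job to its skills and to the places that teach them."""
    skills = JOB_MARKET[job_title]["skills"]
    return {skill: SKILL_PROVIDERS.get(skill, []) for skill in skills}

# Which jobs are in demand? What skills do they need? WHERE do I get those skills?
print(where_to_learn("machine learning engineer"))
```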

BUT PEOPLE are the key! Oftentimes, we need to — and prefer to — learn with others!


 

CES 2024: Unveiling The Future Of Legal Through Consumer Innovations — from abovethelaw.com by Stephen Embry
The ripple effects on the legal industry are real.

The Emerging Role of Smart TVs
Boothe and Comiskey claim that our TVs will become even smarter and better connected to the web and the internet. Our TVs will become an intelligent center for a variety of applications powered through our smartphone. TVs will be able to direct things like appliances and security cameras. Perhaps even more importantly, our TVs can become e-commerce centers, allowing us to speak with them and conduct business.

This increased TV capability means that the TV could become a more dominant mode of working and computing for lawyers. As TVs become more integrated with the internet and capable of functioning as communication hubs, they could potentially replace traditional computing devices in legal settings. With features like voice control and pattern recognition, TVs could serve as efficient tools for such things as document preparation and client meetings.

From DSC:
Now imagine the power of voice-enabled chatbots and the like. We could be videoconferencing (or hologramming) with clients and be able to access information at the same time. Language translation — like that in the Timekettle product — will be built in.

I also wonder how this type of functionality will play out in lifelong learning from our living rooms.

Learning from the Living AI-Based Class Room

 


Also, some other legaltech-related items:


Are Tomorrow’s Lawyers Prepared for Legal’s Tech Future? 4 Recent Trends Shaping Legal Education | Legaltech News — from law.com (behind paywall)

Legal Tech Predictions for 2024: Embracing a New Era of Innovation — from jdsupra.com

As we step into 2024, the legal industry continues to be reshaped by technological advancements. This year promises to bring new developments that could revolutionize how legal professionals work and interact with clients. Here are key predictions for legal tech in 2024:

Miss the Legaltech Week 2023 Year-in-Review Show? Here’s the Recording — from lawnext.com by Bob Ambrogi

Last Friday was Legaltech Week’s year-end show, in which our panel of journalists and bloggers picked the year’s top stories in legal tech and innovation.

So what were the top stories? Well, if you missed it, no worries. Here’s the video:

 

Mark Zuckerberg: First Interview in the Metaverse | Lex Fridman Podcast #398


Photo-realistic avatars show future of Metaverse communication — from inavateonthenet.net

Mark Zuckerberg, CEO, Meta, took part in the first-ever Metaverse interview using photo-realistic virtual avatars, demonstrating the Metaverse’s capability for virtual communication.

Zuckerberg appeared on the Lex Fridman podcast, using scans of both Fridman and Zuckerberg to create realistic avatars instead of using a live video feed. Computer models of the avatars’ faces and bodies are put into a codec, with a headset sending an encoded version of each avatar.

The interview explored the future of AI in the metaverse, as well as the Quest 3 headset and the future of humanity.


 



Adobe video-AI announcements for IBC — from provideocoalition.com by Rich Young

For the IBC 2023 conference, Adobe announced new AI and 3D features for Creative Cloud video tools, including Premiere Pro Enhance Speech for faster dialog cleanup, plus filler-word detection and removal in Text-Based Editing. There’s also new AI-based rotoscoping and a true 3D workspace in the After Effects beta, as well as new camera-to-cloud integrations and advanced storage options in Frame.io.

Though it’s not really about AI, you might also be interested in this posting:


Airt AI Art Generator (Review) — from hongkiat.com
Turn your creative ideas into masterpieces using Airt’s AI iPad app.

The Airt AI Generator app makes it easy to create art on your iPad. You can pick an art style and a model to make your artwork. It’s simple enough for anyone to use, but it doesn’t have many options for customizing your art.

Even with these limitations, it’s a good starting point for people who want to try making art with AI. Here are the good and bad points we found.

Pros:

  • User-Friendly: The app is simple and easy to use, making it accessible for users of all skill levels.

Cons:

  • Limited Advanced Features: The app lacks options for customization, such as altering image ratios, seeds, and other settings.

 

The Ready Player One Test: Systems for Personalized Learning — from gettingsmart.com by Dagan Bernstein

Key Points

  • The single narrative education system is no longer working.
  • Its main limitation is its inability to honor young people as the dynamic individuals that they are.
  • New models of teaching and learning need to be designed to center on the student, not the teacher.

When the opportunity arises to implement learning that uses immersive technology, ask yourself if the learning you are designing passes the Ready Player One Test:

  • Does it allow learners to immerse themselves in environments that would be too expensive or dangerous to experience otherwise?
  • Can the learning be personalized by the student?
  • Is it regenerative?
  • Does it allow for learning to happen non-linearly, at any time and place?
 

Apple’s $3,499 Vision Pro AR headset is finally here — from techcrunch.com by Brian Heater

Image: the Vision Pro AR headset (credit: Apple)

Excerpts:

“With Vision Pro, you’re no longer limited by a display,” Apple CEO Tim Cook said, introducing the new headset at WWDC 2023. Unlike earlier mixed reality reports, the system is far more focused on augmented reality than virtual. The company refers to this new paradigm as “spatial computing.”


Reflections from Scott Belsky re: the Vision Pro — from implications.com


Apple WWDC 2023: Everything announced from the Apple Vision Pro to iOS 17, MacBook Air and more — from techcrunch.com by Christine Hall



Apple unveils new tech — from therundown.ai (The Rundown)

Here were the biggest things announced:

  • A 15” MacBook Air, now the thinnest 15” laptop available
  • The new Mac Pro workstation, presumably a billion dollars
  • M2 Ultra, Apple’s new super chip
  • NameDrop, an AirDrop-integrated data-sharing feature allowing users to share contact info just by bringing their phones together
  • Journal, an ML-powered personalized journaling app
  • StandBy, turning your iPhone into a nightstand alarm clock
  • A new, AI-powered update to autocorrect (finally)
  • Apple Vision Pro


Apple announces AR/VR headset called Vision Pro — from joinsuperhuman.ai by Zain Kahn

Excerpt:

“This is the first Apple product you look through and not at.” – Tim Cook

And with those famous words, Apple announced a new era of consumer tech.

Apple’s new headset will operate on VisionOS – its new operating system – and will work with existing iOS and iPad apps. The new OS is created specifically for spatial computing — the blend of digital content into real space.

Vision Pro is controlled through hand gestures, eye movements and your voice (parts of it assisted by AI). You can use apps, change their size, capture photos and videos and more.


From DSC:
Time will tell what happens with this new operating system and with this type of platform. I’m impressed with the engineering — as Apple wants me to be — but I doubt that this will become mainstream for quite some time yet. Also, I wonder what Steve Jobs would think of this…? Would he say that people would be willing to wear this headset (for long? at all?)? What about Jony Ive?

I’m sure the offered experiences will be excellent. But I won’t be buying one, as it’s waaaaaaaaay too expensive.


 

Brainyacts #57: Education Tech — from thebrainyacts.beehiiv.com by Josh Kubicki

Excerpts:

Let’s look at some ideas of how law schools could use AI tools like Khanmigo or ChatGPT to support lectures, assignments, and discussions, or use plagiarism detection software to maintain academic integrity.

  1. Personalized learning
  2. Virtual tutors and coaches
  3. Interactive simulations
  4. Enhanced course materials
  5. Collaborative learning
  6. Automated assessment and feedback
  7. Continuous improvement
  8. Accessibility and inclusivity

AI Will Democratize Learning — from td.org by Julia Stiglitz and Sourabh Bajaj

Excerpts:

In particular, we’re betting on four trends for AI and L&D.

  1. Rapid content production
  2. Personalized content
  3. Detailed, continuous feedback
  4. Learner-driven exploration

In a world where only 7 percent of the global population has a college degree, and as many as three quarters of workers don’t feel equipped to learn the digital skills their employers will need in the future, this is the conversation people need to have.

Taken together, these trends will change the cost structure of education and give learning practitioners new superpowers. Learners of all backgrounds will be able to access quality content on any topic and receive the ongoing support they need to master new skills. Even small L&D teams will be able to create programs that have both deep and broad impact across their organizations.

The Next Evolution in Educational Technologies and Assisted Learning Enablement — from educationoneducation.substack.com by Jeannine Proctor

Excerpt:

Generative AI is set to play a pivotal role in the transformation of educational technologies and assisted learning. Its ability to personalize learning experiences, power intelligent tutoring systems, generate engaging content, facilitate collaboration, and assist in assessment and grading will significantly benefit both students and educators.

How Generative AI Will Enable Personalized Learning Experiences — from campustechnology.com by Rhea Kelly

Excerpt:

With today’s advancements in generative AI, that vision of personalized learning may not be far off from reality. We spoke with Dr. Kim Round, associate dean of the Western Governors University School of Education, about the potential of technologies like ChatGPT for learning, the need for AI literacy skills, why learning experience designers have a leg up on AI prompt engineering, and more. And get ready for more Star Trek references, because the parallels between AI and Sci Fi are futile to resist.

The Promise of Personalized Learning Never Delivered. Today’s AI Is Different — from the74million.org by John Bailey; with thanks to GSV for this resource

Excerpts:

There are four reasons why this generation of AI tools is likely to succeed where other technologies have failed:

    1. Smarter capabilities
    2. Reasoning engines
    3. Language is the interface
    4. Unprecedented scale

Latest NVIDIA Graphics Research Advances Generative AI’s Next Frontier — from blogs.nvidia.com by Aaron Lefohn
NVIDIA will present around 20 research papers at SIGGRAPH, the year’s most important computer graphics conference.

Excerpt:

NVIDIA today introduced a wave of cutting-edge AI research that will enable developers and artists to bring their ideas to life — whether still or moving, in 2D or 3D, hyperrealistic or fantastical.

Around 20 NVIDIA Research papers advancing generative AI and neural graphics — including collaborations with over a dozen universities in the U.S., Europe and Israel — are headed to SIGGRAPH 2023, the premier computer graphics conference, taking place Aug. 6-10 in Los Angeles.

The papers include generative AI models that turn text into personalized images; inverse rendering tools that transform still images into 3D objects; neural physics models that use AI to simulate complex 3D elements with stunning realism; and neural rendering models that unlock new capabilities for generating real-time, AI-powered visual details.

 

Also relevant to the item from Nvidia (above), see:

Unreal Engine’s MetaHuman Creator — with thanks to Mr. Steven Chevalia for this resource

Excerpt:

MetaHuman is a complete framework that gives any creator the power to use highly realistic human characters in any way imaginable.

It includes MetaHuman Creator, a free cloud-based app that enables you to create fully rigged photorealistic digital humans in minutes.

From Unreal Engine: Dozens of ready-made MetaHumans are at your fingertips.

 

Explore Breakthroughs in AI, Accelerated Computing, and Beyond at GTC — from nvidia.com
The Conference for the Era of AI and the Metaverse (the keynote was held on March 21, 2023)

 


Addendums on 3/22/23:

Generative AI for Enterprises — from nvidia.com
Custom-built for a new era of innovation and automation.

Excerpt:

Impacting virtually every industry, generative AI unlocks a new frontier of opportunities—for knowledge and creative workers—to solve today’s most important challenges. NVIDIA is powering generative AI through an impressive suite of cloud services, pre-trained foundation models, as well as cutting-edge frameworks, optimized inference engines, and APIs to bring intelligence to your enterprise applications.

NVIDIA AI Foundations is a set of cloud services that advance enterprise-level generative AI and enable customization across use cases in areas such as text (NVIDIA NeMo™), visual content (NVIDIA Picasso), and biology (NVIDIA BioNeMo™). Unleash the full potential with NeMo, Picasso, and BioNeMo cloud services, powered by NVIDIA DGX™ Cloud—the AI supercomputer.

 

From DSC:
I was watching a sermon the other day, and I’m always amazed when the pastor doesn’t need to read their notes (or hardly ever refers to them), and can keep that up through a much longer sermon too. Not me, man.

It got me wondering about the idea of having a teleprompter on our future Augmented Reality (AR) glasses and/or on our Virtual Reality (VR) headsets. Or perhaps such functionality will be provided on our mobile devices as well (e.g., smartphones, tablets, laptops, and other devices) via cloud-based applications.

You could see your presentation, sermon, main points for a meeting, the charges being brought against a defendant, etc., and the system would scroll as you spoke the words (via Natural Language Processing (NLP)). If you went off script, the system would stop scrolling, and you might need to scroll manually or simply pick up where you left off.
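Here is one minimal sketch of how that script-following behavior might work under the hood. It assumes some external speech-to-text engine is feeding recognized words into the (hypothetical) on_recognized_words() method; the matching logic and look-ahead window are simplified placeholders, not a finished design.

```python
# Hypothetical sketch: a teleprompter that advances only while the speaker stays on script.
class ScriptFollower:
    def __init__(self, script_text: str, lookahead: int = 20):
        self.words = script_text.lower().split()
        self.position = 0            # index just past the last matched script word
        self.lookahead = lookahead   # how far ahead to search for a match

    def on_recognized_words(self, spoken_text: str) -> int:
        """Advance the script position as recognized speech arrives."""
        for spoken in spoken_text.lower().split():
            window = self.words[self.position:self.position + self.lookahead]
            if spoken in window:
                # Matched a nearby script word; the display would scroll to here.
                self.position += window.index(spoken) + 1
            # No match means the speaker is off script: hold the current position.
        return self.position

follower = ScriptFollower("Good morning everyone and welcome to today's session on telehealth")
follower.on_recognized_words("good morning everyone")      # scrolls forward
follower.on_recognized_words("let me tell a quick story")  # off script: holds position
```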

For that matter, I suppose a faculty member could toggle an AI-based stream of content showing where a topic appears in the textbook. Or a CEO or University President could get prompted to refer to a particular section of the Strategic Plan. Hmmm…I don’t know…it might be too much cognitive load/overload…I’d have to try it out.

And/or perhaps this is a feature in our future videoconferencing applications.

But I just wanted to throw these ideas out there in case someone wanted to run with one or more of them.

Along these lines, see:

Is a teleprompter a feature in our future Augmented Reality (AR) glasses?

 

From DSC:
I received an email the other day re: a TytoCare Exam Kit. It said (with some emphasis added by me):

With a TytoCare Exam Kit connected to Spectrum Health’s 24/7 Virtual Urgent Care, you and your family can have peace of mind and a quick, accurate diagnosis and treatment plan whenever you need it without having to leave your home.

Your TytoCare Exam Kit will allow your provider to listen to your lungs, look inside your ears or throat, check your temperature, and more during a virtual visit.

Why TytoCare?

    • Convenience – With a TytoCare Exam Kit and our 24/7/365 On-Demand Virtual Urgent Care, there is no drive, no waiting room, and no waiting for an appointment.
    • Peace of Mind – Stop debating about whether symptoms are serious enough to do something about them.
    • Savings – Without the cost of gas or taking off work, you get the reliable exams and diagnosis you need. With a Virtual Urgent Care visit you’ll never pay more than $50. That’s cheaper than an in-person urgent care visit, but the same level of care.

From DSC:
It made me reflect on what #telehealth has morphed into these days. Then it made me wonder (again) what #telelegal might become in the next few years…? Hmmm. I hope the legal field can learn from the healthcare industry. It could likely bring more access to justice (#A2J), increased productivity (for several of the parties involved), as well as convenience, peace of mind, and cost savings.


 

 

What does the ‘metaverse’ mean for education? — from hechingerreport.org by Javeria Salman
Experts warn educators to think twice before jumping on new technologies

Excerpt:

Sometime in the past year or two, you’ve likely heard the word “metaverse.” It’s the future, the next big frontier of the internet, if you ask technology CEOs or researchers.

While the term has become the latest buzzword in education circles, what it means for teaching and learning largely remains to be seen. Experts say much of what we see marketed as the metaverse from education technology companies isn’t actually the metaverse.

In a true metaverse experience, your digital identity travels between the physical and virtual worlds, Platt said. With the help of blockchain technology, that identity — your preferences, your achievements, your educational records, other elements of who you are — is maintained across platforms and applications.

 

How lawyers can unlock the potential of the metaverse — from abajournal.com by Victor Li

Excerpt:

One such firm is Grungo Colarulo, a personal injury law firm with offices in New Jersey and Pennsylvania. Last December, the firm announced that it had set up shop in the virtual world known as Decentraland.

Users can enter the firm’s virtual office, where they can interact with the firm’s avatar. They can talk to the avatar to see whether they might need legal representation and then take down a phone number to call the firm in the physical world. If they’re already clients, they can arrive for meetings or consultations.

Richard Grungo Jr., co-founder and name partner at Grungo Colarulo, told the ABA Journal in December 2021 that he could see the potential of the metaverse to allow his firm to host webinars, CLEs and other virtual educational opportunities, as well as hosting charity events.

Grungo joined the ABA Journal’s Victor Li to talk about how lawyers can use the metaverse to market themselves, as well as legal issues relating to the technology that all users should be aware of.

From DSC:
I’m posting this to put it on the radar of legal folks out there. Law schools should join the legaltech folks in pulse-checking and covering/addressing emerging technologies. It’s too early to tell what the Metaverse and Web3 will become. My guess is that we’ll see a lot more blending of the real world with the digital world — especially via Augmented Reality (AR).

We need to constantly be pulse-checking the landscapes out there, developing scenarios and solutions in response to such trends.

 

6 trends are driving the use of #metaverse tech today. These trends and technologies will continue to drive its use over the next 3 to 5 years:

1. Gaming
2. Digital Humans
3. Virtual Spaces
4. Shared Experiences
5. Tokenized Assets
6. Spatial Computing
#GartnerSYM

.

“Despite all of the hype, the adoption of #metaverse tech is nascent and fragmented.” 

.

Also relevant/see:

According to Apple CEO Tim Cook, the Next Internet Revolution Is Not the Metaverse. It’s This — from inc.com by Nick Hobson
The metaverse is just too wacky and weird to be the next big thing. Tim Cook is betting on AR.

Excerpts:

While he might know a thing or two about radical tech, to him it’s unconvincing that the average person sufficiently understands the concept of the metaverse enough to meaningfully incorporate it into their daily life.

The metaverse is just too wacky and weird.

And, according to science, he might be on to something.

 

What might the ramifications be for text-to-everything? [Christian]

From DSC:

  • We can now type in text to get graphics and artwork.
  • We can now type in text to get videos.
  • There are several tools to give us transcripts of what was said during a presentation.
  • We can search videos for spoken words and/or for words shown on slides within a presentation (see the sketch after this list).
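As one concrete illustration of the transcribe-and-search piece, here is a minimal sketch using OpenAI’s open-source Whisper model. The file name and search phrase are placeholders, and it assumes the openai-whisper package and ffmpeg are installed; a real tool would add indexing, speaker labels, translation, and so on.

```python
# Minimal sketch: transcribe a recording, then find where a phrase was spoken.
# Assumes: pip install openai-whisper (plus ffmpeg available on the system path).
import whisper

model = whisper.load_model("base")
result = model.transcribe("lecture_recording.mp4")  # placeholder file name

# Each segment carries start/end timestamps plus the recognized text,
# which is enough for a simple "find where this was said" search.
query = "augmented reality"
for seg in result["segments"]:
    if query.lower() in seg["text"].lower():
        print(f"{seg['start']:.1f}s - {seg['end']:.1f}s: {seg['text'].strip()}")
```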

Allie Miller’s posting on LinkedIn (see below) pointed these things out as well — along with several other things.



This raises some ideas/questions for me:

  • What might the ramifications be in our learning ecosystems for these types of functionalities? What affordances are forthcoming? For example, a teacher, professor, or trainer could quickly produce several types of media from the same presentation.
  • What’s said in a videoconference or a webinar can already be captured, translated, and transcribed.
  • Or what’s said in a virtual courtroom, or in a telehealth-based appointment. Or perhaps what we currently think of as a smart/connected TV will give us these functionalities as well.
  • How might this type of thing impact storytelling?
  • Will this help someone who prefers to soak in information via the spoken word, or via a podcast, or via a video?
  • What does this mean for Augmented Reality (AR), Mixed Reality (MR), and/or Virtual Reality (VR) types of devices?
  • Will this kind of thing be standard in the next version of the Internet (Web3)?
  • Will this help people with special needs — and way beyond accessibility-related needs?
  • Will data be next (instead of typing in text)?

Hmmm….interesting times ahead.

 

Tearing the ‘paper ceiling’: McKinsey supports effort driving upward mobility for millions of workers — from mckinsey.com

Excerpt:

September 23, 2022 — There’s a hidden talent pool that most employers overlook—the more than 70 million workers in the US who are STARs, or workers ‘skilled through alternative routes.’ Whether through community college, workforce training, bootcamp or certificate programs, military service, or on-the-job learning, STARs have the skills for higher-wage jobs but often find themselves blocked from consideration.

This week, nonprofit Opportunity@Work and the Ad Council have launched a nationwide campaign to ‘Tear the Paper Ceiling’ and encourage employers to change hiring practices. McKinsey is providing pro bono support to the effort through data and analytics tools that enable recruiters to recognize STARs and their skills.

“While companies scramble to find talent amid a perceived skills gap, many of their job postings have needlessly excluded half of the workers in the country who have the skills for higher-wage work,” says Byron Auguste, founder of Opportunity@Work and a former senior partner at McKinsey. “Companies like the ones we’re proud to call partners in this effort—and those we hope will join—can lead the way by tapping into skilled talent from a far wider range of backgrounds.”

There are lots of reasons why someone might not begin or complete a degree that have nothing to do with their intrinsic abilities or potential. We know there are better ways to screen for talent and now we have the research and tools to back that up.

Carolyn Pierce, McKinsey partner

Also from McKinsey, see:

Latest McKinsey tech outlook identifies 14 key trends for business leaders

Excerpt:

October 4, 2022 — The McKinsey Technology Council—a global group of over 100 scientists, entrepreneurs, researchers, and business leaders—has published its second annual Technology Trends Outlook. By assessing metrics of innovation, interest, investment, and adoption, the council has prioritized and synthesized 40 technologies into 14 leading trends.

Following on from last year, applied AI once again earned the highest score for innovation in the report. Sustainability, meanwhile, emerged as a major catalyst for tech around the world, with clean energy and sustainable consumption drawing the highest investment from private-equity and venture-capital firms. And five new trends were added to this year’s edition: industrializing machine learning, Web3, immersive-reality technologies, the future of mobility, and the future of space.

In this post, McKinsey senior partner Lareina Yee, expert partner Roger Roberts, and McKinsey Global Institute partner Michael Chui share their thoughts about what the findings may mean for leaders over the next few years.

 
© 2024 | Daniel Christian