World’s largest projection mapping snags Guinness World Record — from inavateonthenet.net
A nightly projection mapping display at the Tokyo metropolitan government headquarters has been recognised by Guinness World Records as the largest in the world.

 

From DSC:
I recently ran into the following item:


UK university opens VR classroom — from inavateonthenet.net

Students at the University of Nottingham will be learning through a dedicated VR classroom, enabling remote viewing and teaching for students and lecturers.

Based in the university’s Engineering and Science Learning Centre (ESLC), the classroom, believed to be the first dedicated VR classroom in the UK, uses 40 VR headsets: 35 are tethered overhead to individual PCs, and five are available as traditional, desk-based systems with display screens.


I admit that I was excited to see this article and I congratulate the University of Nottingham on their vision here. I hope that they can introduce more use cases and applications to provide evidence of VR’s headway.

As I look at virtual reality…

  • On the plus side, I’ve spoken with people who love to use their VR-based headsets for fun workouts/exercises. I’ve witnessed the sweat, so I know that’s true. And I believe there is value in having the ability to walk through museums that one can’t afford to get to. And I’m sure that the gamers have found some incredibly entertaining competitions out there. The experience of being immersed can be highly engaging. So there are some niche use cases for sure.
  • But on the negative side, the technologies surrounding VR haven’t progressed as much as I thought they would have by now. For example, I’m disappointed Apple’s taken so long to put a product out there, and I don’t want to invest $3500 in their new product. From the reviews and items on social media that I’ve seen, the reception is lukewarm. At the most basic level, I’m not sure people want to wear a headset for more than a few minutes.

So overall, I’d like to see more use cases and less nausea.


Addendum on 2/27/24:

Leyard ‘wall of wonder’ wows visitors at Molecular Biology Lab — from inavateonthenet.net

 

Hologram lecturers thrill students at trailblazing UK university — from theguardian.com by Rachel Hall

Prof Vikki Locke and Prof Gary Burnett try out the hologram technology. Photograph: Christopher Thomond/The Guardian

Any university lecturer will tell you that luring students to a morning lecture is an uphill struggle. But even the most hungover fresher would surely be enticed by a physics lesson from Albert Einstein or a design masterclass from Coco Chanel.

This could soon be the reality for British students, as some universities start to beam in guest lecturers from around the globe using the same holographic technology that is used to bring dead or retired singers back to the stage.

 

Enter the New Era of Mobile AI With Samsung Galaxy S24 Series — from news.samsung.com

Galaxy AI introduces meaningful intelligence aimed at enhancing every part of life, especially the phone’s most fundamental role: communication. When you need to defy language barriers, Galaxy S24 makes it easier than ever. Chat with another student or colleague from abroad. Book a reservation while on vacation in another country. It’s all possible with Live Translate, two-way, real-time voice and text translations of phone calls within the native app. No third-party apps are required, and on-device AI keeps conversations completely private.

With Interpreter, live conversations can be instantly translated on a split-screen view so people standing opposite each other can read a text transcription of what the other person has said. It even works without cellular data or Wi-Fi.


Galaxy S24 — from theneurondaily.com by Noah Edelman & Pete Huang

Samsung just announced the first truly AI-powered smartphone: the Galaxy S24.


For us AI power users, the features aren’t exactly new, but it’s the first time we’ve seen them packaged up into a smartphone (Siri doesn’t count, sorry).


Samsung’s Galaxy S24 line arrives with camera improvements and generative AI tricks — from techcrunch.com by Brian Heater
Starting at $800, the new flagships offer brighter screens and a slew of new photo-editing tools

 

CES 2024: Unveiling The Future Of Legal Through Consumer Innovations — from abovethelaw.com by Stephen Embry
The ripple effects on the legal industry are real.

The Emerging Role of Smart TVs
Boothe and Comiskey claim that our TVs will become even smarter and better connected to the web and the internet. Our TVs will become an intelligent center for a variety of applications powered through our smartphone. TVs will be able to direct things like appliances and security cameras. Perhaps even more importantly, our TVs can become e-commerce centers, allowing us to speak with them and conduct business.

This increased TV capability means that the TV could become a more dominant mode of working and computing for lawyers. As TVs become more integrated with the internet and capable of functioning as communication hubs, they could potentially replace traditional computing devices in legal settings. With features like voice control and pattern recognition, TVs could serve as efficient tools for such things as document preparation and client meetings.

From DSC:
Now imagine the power of voice-enabled chatbots and the like. We could be videoconferencing (or holograming) with clients, and be able to access information at the same time. Language translation — like that in the Timekettle product — will be built in.

I also wonder how this type of functionality will play out in lifelong learning from our living rooms.

Learning from the Living AI-Based Class Room

 


Also, some other legaltech-related items:


Are Tomorrow’s Lawyers Prepared for Legal’s Tech Future? 4 Recent Trends Shaping Legal Education | Legaltech News — from law.com (behind paywall)

Legal Tech Predictions for 2024: Embracing a New Era of Innovation — from jdsupra.com

As we step into 2024, the legal industry continues to be reshaped by technological advancements. This year promises to bring new developments that could revolutionize how legal professionals work and interact with clients. Here are key predictions for legal tech in 2024:

Miss the Legaltech Week 2023 Year-in-Review Show? Here’s the Recording — from lawnext.com by Bob Ambrogi

Last Friday was Legaltech Week’s year-end show, in which our panel of journalists and bloggers picked the year’s top stories in legal tech and innovation.

So what were the top stories? Well, if you missed it, no worries. Here’s the video:

 

7thSense powers content on Sphere’s interior and exterior LED displays — from inavateonthenet.net

At 16K x 16K resolution, Sphere’s interior LED display plane is the highest resolution LED screen in the world. Soaring to a height of 240 feet, and with over 3 acres of display surface, the screen wraps up, over, and around the audience to create a fully immersive visual environment.


50-foot-high Hypervsn hologram launches at The Sphere in Las Vegas — from inavateonthenet.net

The public atrium of the recently opened MSG Sphere in Las Vegas features Hypervsn’s largest-ever 30 x 50-foot holographic display wall alongside real-life humanoid robots that greet visitors on arrival, a 360-degree avatar capture and a beam-forming sound display.

The 50-foot-high installation project included 420 individual SmartV Hypervsn displays fixed to a secure rack that hangs from the atrium wall.



 

Mark Zuckerberg: First Interview in the Metaverse | Lex Fridman Podcast #398


Photo-realistic avatars show future of Metaverse communication — from inavateonthenet.net

Mark Zuckerberg, CEO, Meta, took part in the first-ever Metaverse interview using photo-realistic virtual avatars, demonstrating the Metaverse’s capability for virtual communication.

Zuckerberg appeared on the Lex Fridman podcast, using scans of both Fridman and Zuckerberg to create realistic avatars instead of a live video feed. Computer models of the avatars’ faces and bodies are put into a codec, and a headset sends an encoded version of the avatar.

The interview explored the future of AI in the metaverse, as well as the Quest 3 headset and the future of humanity.


 

As AI Chatbots Rise, More Educators Look to Oral Exams — With High-Tech Twist — from edsurge.com by Jeffrey R. Young

To use Sherpa, an instructor first uploads the reading they’ve assigned, or they can have the student upload a paper they’ve written. Then the tool asks a series of questions about the text (either questions input by the instructor or generated by the AI) to test the student’s grasp of key concepts. The software gives the instructor the choice of whether they want the tool to record audio and video of the conversation, or just audio.

The tool then uses AI to transcribe the audio from each student’s recording and flags areas where the student answer seemed off point. Teachers can review the recording or transcript of the conversation and look at what Sherpa flagged as trouble to evaluate the student’s response.
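
To make that workflow a bit more concrete, here is a minimal sketch in Python of the general pattern the article describes: transcribe a recorded answer, then flag sentences that drift from the assigned reading. This is not Sherpa’s actual code; the OpenAI model names, the naive sentence splitting, and the 0.3 similarity threshold are assumptions for illustration only.

```python
# Hypothetical sketch of an "oral exam" checker: transcribe a student's
# recorded answer and flag sentences that stray from the assigned reading.
# NOT Sherpa's implementation; models and threshold are assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def transcribe(audio_path: str) -> str:
    """Speech-to-text for the recorded oral answer."""
    with open(audio_path, "rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text

def embed(text: str) -> np.ndarray:
    """Embed a chunk of text so it can be compared with the reading."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

def flag_off_topic(answer: str, reading: str, threshold: float = 0.3) -> list[str]:
    """Return sentences whose similarity to the reading falls below the threshold."""
    reading_vec = embed(reading)
    flagged = []
    for sentence in answer.split("."):
        sentence = sentence.strip()
        if not sentence:
            continue
        vec = embed(sentence)
        sim = float(np.dot(vec, reading_vec) /
                    (np.linalg.norm(vec) * np.linalg.norm(reading_vec)))
        if sim < threshold:
            flagged.append(sentence)
    return flagged

# Example usage (file names are placeholders):
# transcript = transcribe("student_answer.mp3")
# print(flag_off_topic(transcript, open("assigned_reading.txt").read()))
```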

 

Humane’s ‘Ai Pin’ debuts on the Paris runway — from techcrunch.com by Brian Heater

“The [Ai Pin is a] connected and intelligent clothing-based wearable device [that] uses a range of sensors that enable contextual and ambient compute interactions,” the company noted at the time. “The Ai Pin is a type of standalone device with a software platform that harnesses the power of Ai to enable innovative personal computing experiences.”



 



Adobe video-AI announcements for IBC — from provideocoalition.com by Rich Young

For the IBC 2023 conference, Adobe announced new AI and 3D features to Creative Cloud video tools, including Premiere Pro Enhance Speech for faster dialog cleanup, and filler word detection and removal in Text-Based Editing. There’s also new AI-based rotoscoping and a true 3D workspace in the After Effects beta, as well as new camera-to-cloud integrations and advanced storage options in Frame.io.

Though not really about AI, you might also be interested in this posting:


Airt AI Art Generator (Review) — from hongkiat.com
Turn your creative ideas into masterpieces using Airt’s AI iPad app.

The Airt AI Generator app makes it easy to create art on your iPad. You can pick an art style and a model to make your artwork. It’s simple enough for anyone to use, but it doesn’t have many options for customizing your art.

Even with these limitations, it’s a good starting point for people who want to try making art with AI. Here are the good and bad points we found.

Pros:

  • User-Friendly: The app is simple and easy to use, making it accessible for users of all skill levels.

Cons:

  • Limited Advanced Features: The app lacks options for customization, such as altering image ratios, seeds, and other settings.

 


The next phase of digital whiteboarding for Google Workspace — from workspaceupdates.googleblog.com

What’s changing

In late 2024, we will wind down the Jamboard whiteboarding app as well as continue with the previously planned end of support for Google Jamboard devices. For those who are impacted by this change, we are committed to helping you transition:

    • We are integrating whiteboard tools such as FigJam, Lucidspark, and Miro across Google Workspace so you can include them when collaborating in Meet, sharing content in Drive, or scheduling in Calendar.

The Teacher’s Guide for Transitioning from Jamboard to FigJam — from tommullaney.com by Tom Mullaney


 

ChatGPT can now see, hear, and speak — from openai.com
We are beginning to roll out new voice and image capabilities in ChatGPT. They offer a new, more intuitive type of interface by allowing you to have a voice conversation or show ChatGPT what you’re talking about.

Voice and image give you more ways to use ChatGPT in your life. Snap a picture of a landmark while traveling and have a live conversation about what’s interesting about it. When you’re home, snap pictures of your fridge and pantry to figure out what’s for dinner (and ask follow up questions for a step by step recipe). After dinner, help your child with a math problem by taking a photo, circling the problem set, and having it share hints with both of you.

We’re rolling out voice and images in ChatGPT to Plus and Enterprise users over the next two weeks. Voice is coming on iOS and Android (opt-in in your settings) and images will be available on all platforms.
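
For developers, the same multimodal idea is exposed through OpenAI’s API. Below is a minimal, hedged sketch of asking a vision-capable chat model about a photo; the model name (gpt-4o) and the image URL are placeholders rather than anything taken from the announcement, which describes the consumer ChatGPT apps.

```python
# Hedged sketch: asking a vision-capable model about a photo via the API.
# The model name and image URL are placeholders, not from the announcement.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any vision-capable chat model works here
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "What landmark is this, and what is interesting about it?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/landmark.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```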





OpenAI Seeks New Valuation of Up to $90 Billion in Sale of Existing Shares — from wsj.com (behind paywall)
Potential sale would value startup at roughly triple where it was set earlier this year


The World’s First AI Cinema Experience Starring YOU Is Open In NZ And Buzzy Doesn’t Cover It — from theedge.co.nz by Seth Gupwell
Allow me to manage your expectations.

Because it’s the first-ever on Earth, it’s hard to label what kind of entertainment Hypercinema is. While it’s marketed as a “live AI experience” that blends “theatre, film and digital technology”, Dr. Gregory made it clear that it’s not here to make movies and TV extinct.

Your face and personality are how HyperCinema sets itself apart from the art forms of old. You get 15 photos of your face taken from different angles, then answer a questionnaire – mine started by asking what my fave vegetable was and ended by demanding to know what I thought the biggest threat to humanity was. Deep stuff, but the questions are always changing, cos that’s how AI rolls.

All of this information is stored on your cube – a green, glowing accessory that you carry around for the whole experience and insert into different sockets to transfer your info onto whatever screen is in front of you. Upon inserting your cube, the “live AI experience” starts.

The AI has taken your photos and superimposed your face on a variety of made-up characters in different situations.


Announcing Microsoft Copilot, your everyday AI companion — from blogs.microsoft.com by Yusuf Mehdi

We are entering a new era of AI, one that is fundamentally changing how we relate to and benefit from technology. With the convergence of chat interfaces and large language models you can now ask for what you want in natural language and the technology is smart enough to answer, create it or take action. At Microsoft, we think about this as having a copilot to help navigate any task. We have been building AI-powered copilots into our most used and loved products – making coding more efficient with GitHub, transforming productivity at work with Microsoft 365, redefining search with Bing and Edge and delivering contextual value that works across your apps and PC with Windows.

Today we take the next step to unify these capabilities into a single experience we call Microsoft Copilot, your everyday AI companion. Copilot will uniquely incorporate the context and intelligence of the web, your work data and what you are doing in the moment on your PC to provide better assistance – with your privacy and security at the forefront.


DALL·E 3 — from openai.com

DALL·E 3 understands significantly more nuance and detail than our previous systems, allowing you to easily translate your ideas into exceptionally accurate images.
DALL·E 3 is now in research preview, and will be available to ChatGPT Plus and Enterprise customers in October, via the API and in Labs later this fall.
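
Since the preview mentions API access, here is a minimal sketch of generating a single image with DALL·E 3 through OpenAI’s Images API; the prompt and size are arbitrary examples.

```python
# Hedged sketch: one DALL·E 3 image generation call via OpenAI's Images API.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

result = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor illustration of a lighthouse at dawn",  # example prompt
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # URL of the generated image
```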


 

The out-of-this-world project redefining ‘edutainment’ — from inavateonthenet.net by Reece Webb

A new planetarium project in the UK has the potential to revolutionise education and entertainment. Reece Webb reports.

Many integrators will work on a career defining project, and for Amir Khosh, a new, one-of-a-kind planetarium project nestled in the heart of Nottinghamshire, UK, has sat at the centre of his world.

ST Engineering Antycip will be part of the Sherwood Observatory, a large-scale development more than five years in the making that aims to drive education enrichment and visitor attraction in marginalised communities.



Also from inavateonthenet.net, see:

Digital Projection paints a picture at Vincent meets Rembrandt exhibition

 

A TV show with no ending — from joinsuperhuman.ai by Zain Kahn
ALSO: Turbocharged GPT is here

We’re standing on the cusp of artificially generated content that could theoretically never end. According to futurist Sinéad Bovell, “Generative artificial intelligence also means that say we don’t want a movie or a series to end. It doesn’t have to, you could use AI to continue to generate more episodes and other sequels and have this kind of ongoing storyline.”

If we take this logic further, we could also see hyper-personalized content that’s created just for us. Imagine getting an AI generated album from your favourite artist every week. Or a brand new movie starring actors who are no longer alive, like a new romcom with Marilyn Monroe and Frank Sinatra.

While this sounds like a compelling proposition for consumers, it’s mostly bad news for actors, writers, and other professionals working in the media industry. Hollywood studios are already investing heavily in generative AI, and many professionals working in the industry are afraid of losing their jobs.



 


ElevenLabs’ AI Voice Generator Can Now Fake Your Voice in 30 Languages — from gizmodo.com by Kyle Barr
ElevenLabs said its AI voice generator is out of beta and that it will support video game and audiobook creators with cheap audio.

According to ElevenLabs, the new Multilingual v2 model promises it can produce “emotionally rich” audio in a total of 30 languages. The company offers two AI voice tools: one is a text-to-speech model and the other is “VoiceLab,” which lets paying users clone a voice by inputting fragments of their (or others’) speech into the model to create a kind of voice clone. With the v2 model, users can get these generated voices to start speaking in Greek, Malay, or Turkish.

Since then, ElevenLabs claims it has integrated new measures to ensure users can only clone their own voice. Users need to verify their speech with a text captcha prompt, which is then compared to the original voice sample.
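
For readers curious what these generated voices look like in practice, the sketch below shows roughly how a developer would request multilingual speech from ElevenLabs over HTTP. The endpoint, field names, and model id reflect the public documentation as best recalled and may differ from the current API; the key and voice id are placeholders.

```python
# Hedged sketch: requesting multilingual speech from ElevenLabs' REST API.
# Endpoint, field names, and model id are assumptions based on the public
# docs and may have changed; the key and voice id are placeholders.
import requests

API_KEY = "YOUR_ELEVENLABS_KEY"      # placeholder
VOICE_ID = "YOUR_CLONED_VOICE_ID"    # placeholder: id of a voice you own

response = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={
        "text": "Merhaba! Bu ses yapay zeka ile uretildi.",  # Turkish sample text
        "model_id": "eleven_multilingual_v2",                # the v2 model in the article
    },
)
response.raise_for_status()
with open("output.mp3", "wb") as f:
    f.write(response.content)  # the API returns audio bytes
```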

From DSC:
I don’t care what they say regarding safeguards/proof of identity/etc. This technology has been abused and will be abused in the future. We can count on it. The question now is, how do we deal with it?



Google, Amazon, Nvidia and other tech giants invest in AI startup Hugging Face, sending its valuation to $4.5 billion — from cnbc.com by Kif Leswing

But Hugging Face produces a platform where AI developers can share code, models, data sets, and use the company’s developer tools to get open-source artificial intelligence models running more easily. In particular, Hugging Face often hosts weights, or large files with lists of numbers, which are the heart of most modern AI models.

While Hugging Face has developed some models, like BLOOM, its primary product is its website platform, where users can upload models and their weights. It also develops a series of software tools called libraries that allow users to get models working quickly, to clean up large datasets, or to evaluate their performance. It also hosts some AI models in a web interface so end users can experiment with them.
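
As a concrete example of those libraries, this is roughly how a developer pulls a hosted model and its weights from the Hub with Hugging Face’s transformers package; the particular checkpoint (bigscience/bloom-560m) is just one of many hosted models and serves only as an illustration.

```python
# Minimal example of Hugging Face's "get a model working quickly" workflow:
# the pipeline call downloads the hosted weights and tokenizer from the Hub.
from transformers import pipeline

generator = pipeline("text-generation", model="bigscience/bloom-560m")
result = generator("Open-source AI platforms matter because", max_new_tokens=40)
print(result[0]["generated_text"])
```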


The global semiconductor talent shortage — from www2.deloitte.com
How to solve semiconductor workforce challenges

Numerous skills are required to grow the semiconductor ecosystem over the next decade. Globally, we will need tens of thousands of skilled tradespeople to build new plants to increase and localize manufacturing capacity: electricians, pipefitters, welders; thousands more graduate electrical engineers to design chips and the tools that make the chips; more engineers of various kinds in the fabs themselves, but also operators and technicians. And if we grow the back end in Europe and the Americas, that equates to even more jobs.

Each of these job groups has distinct training and educational needs; however, the number of students in semiconductor-focused programs (for example, undergraduates in semiconductor design and fabrication) has dwindled. Skills are also evolving within these job groups, in part due to automation and increased digitization. Digital skills, such as cloud, AI, and analytics, are needed in design and manufacturing more than ever.

The chip industry has long partnered with universities and engineering schools. Going forward, they also need to work more with local tech schools, vocational schools, and community colleges; and other organizations, such as the National Science Foundation in the United States.


Our principles for partnering with the music industry on AI technology — from blog.youtube (Google) by Neal Mohan, CEO, YouTube
AI is here, and we will embrace it responsibly together with our music partners.

  • Principle #1: AI is here, and we will embrace it responsibly together with our music partners.
  • Principle #2: AI is ushering in a new age of creative expression, but it must include appropriate protections and unlock opportunities for music partners who decide to participate.
  • Principle #3: We’ve built an industry-leading trust and safety organization and content policies. We will scale those to meet the challenges of AI.

Developers are now using AI for text-to-music apps — from techcrunch.com by Ivan Mehta

Brett Bauman, the developer of PlayListAI (previously LinupSupply), launched a new app called Songburst on the App Store this week. The app doesn’t have a steep learning curve. You just have to type in a prompt like “Calming piano music to listen to while studying” or “Funky beats for a podcast intro” to let the app generate a music clip.

If you can’t think of a prompt, the app has prompts in different categories, including video, lo-fi, podcast, gaming, meditation and sample.


A Generative AI Primer — from er.educause.edu by Brian Basgen
Understanding the current state of technology requires understanding its origins. This reading list provides sources relevant to the form of generative AI that led to natural language processing (NLP) models such as ChatGPT.


Three big questions about AI and the future of work and learning — from workshift.opencampusmedia.org by Alex Swartsel
AI is set to transform education and work today and well into the future. We need to start asking tough questions right now, writes Alex Swartsel of JFF.

  1. How will AI reshape jobs, and how can we prepare all workers and learners with the skills they’ll need?
  2. How can education and workforce leaders equitably adopt AI platforms to accelerate their impact?
  3. How might we catalyze sustainable policy, practice, and investments in solutions that drive economic opportunity?

“As AI reshapes both the economy and society, we must collectively call for better data, increased accountability, and more flexible support for workers,” Swartsel writes.


The Current State of AI for Educators (August, 2023) — from drphilippahardman.substack.com by Dr. Philippa Hardman
A podcast interview with the University of Toronto on where we’re at & where we’re going.

 
© 2024 | Daniel Christian