Hiperwall Introduces Cost-Effective ‘Essentials’ Video Wall Hardware and Software Packages — from hiperwall.com with thanks to Michael Farino for this resource
Hiperwall Essentials video wall bundles eliminate barriers to entry for organizations wanting enhanced collaboration, clearer communication, and the ability to make informed real-time decisions

Excerpt:

February 24, 2021 – IRVINE, Calif. – Hiperwall Inc., an industry leader in commercialized, IP-based visualization technology, today introduces ‘Hiperwall Essentials,’ two all-inclusive video wall hardware and software bundles that get users started with a full-featured, control-room-grade video wall powered by Hiperwall for just $9,995.

Most major decisions made in the public and private sectors are driven by vast amounts of data. Due to the volume of data sources, data complexity, and different analytics tools, video walls have become the perfect canvas for decision-makers to put all of this data together clearly to arrive at an informed decision faster and more confidently.

At a price point that effectively removes barriers to implementation for small to medium businesses, small government agencies, and local law enforcement, Hiperwall Essentials serves as a great baseline for integrating video wall technology into any organization. As dependence on the video wall grows, Hiperwall’s modular platform makes scaling the video wall footprint and capabilities seamless and cost-effective.


Below are some example settings:

For those interested in video walls, this is worth checking out. These pictures are example settings.

Learning from the Living [Class] Room: Adobe — via Behance — is already doing several pieces of this vision.

From DSC:
Talk about streams of content! Whew!

Streams of content

I received an email from Adobe titled “This week on Adobe Live: Graphic Design.” (I subscribe to Adobe Creative Cloud.) Inside the email, I saw and clicked on the following:

Below are some of the screenshots I took of this incredible service! Wow!

Adobe -- via Behance -- offers some serious streams of content


From DSC:
So Adobe — via Behance — is already doing several pieces of the “Learning from the Living [Class] Room” vision. I knew of Behance…but I didn’t realize the magnitude of what they’ve been working on and what they’re currently delivering. Very sharp indeed!

Churches are doing this as well — one device shows the presenter/preacher (such as a larger “TV”), while a second device lets attendees communicate with each other in real time.


 

 

Radio.Garden — with thanks to David Pogue for this resource

From DSC:
This is amazing! Some screenshots:

Radio.garden -- tune into thousands of live radio stations across the globe!

Several questions/reflections come to my mind:

  • What could those teachers and professors who are trying to teach someone a language do with this?!
  • If this can be done with radio stations, what can be done with learning-related streams of content?!
  • Talk about “More Choice. More Control.”  Man o’ man!

Streams of content


Addendum on 2/28/21:
Could this type of interface be used to navigate the world of work — where, instead of nations, you would have arenas of work?

 

When the Animated Bunny in the TV Show Listens for Kids’ Answers — and Answers Back — from edsurge.com by Rebecca Koenig

Excerpt:

Yet when this rabbit asks the audience, say, how to make a substance in a bottle less goopy, she’s actually listening for their answers. Or rather, an artificially intelligent tool is listening. And based on what it hears from a viewer, it tailors how the rabbit replies.

“Elinor can understand the child’s response and then make a contingent response to that,” says Mark Warschauer, professor of education at the University of California at Irvine and director of its Digital Learning Lab.

AI is coming to early childhood education. Researchers like Warschauer are studying whether and how conversational agent technology—the kind that powers smart speakers such as Alexa and Siri—can enhance the learning benefits young kids receive from hearing stories read aloud and from watching videos.

From DSC:
Looking at the above excerpt…what does this mean for elearning developers, learning engineers, learning experience designers, instructional designers, trainers, and more? It seems that, for such folks, learning to use several new tools is on the horizon.
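To make that concrete, here is a minimal, purely hypothetical sketch (in Python) of the kind of “contingent response” logic described above. It assumes the child’s spoken answer has already been transcribed to text, and it only stands in for what real systems do with speech recognition and far richer dialogue management:

```python
# Hypothetical sketch of a "contingent response" loop: the agent asks a question,
# inspects the (already transcribed) answer, and tailors its reply. Real systems
# use speech recognition and richer dialogue management; this shows the bare idea.

RESPONSES = {
    # keyword -> tailored reply (illustrative content only)
    "water": "Good thinking! Adding water can make the goop less thick.",
    "shake": "Shaking it is worth a try -- what do you think will happen?",
}
DEFAULT = "Interesting idea! Let's test it and see what happens."

def contingent_reply(child_answer: str) -> str:
    """Pick a reply based on keywords found in the child's transcribed answer."""
    answer = child_answer.lower()
    for keyword, reply in RESPONSES.items():
        if keyword in answer:
            return reply
    return DEFAULT

if __name__ == "__main__":
    print(contingent_reply("Maybe add some water to it?"))
```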

 

From DSC:
THIS is incredible technology! Check out the Chroma-keying technology and the handwriting extraction feature of the Sony Analytics appliance.

#AR hits the active learning classroom! THIS is incredible technology/functionality! See through your instructor as they write on the board!

From Sony’s website (emphasis DSC):

No matter where the speaker is standing, the Handwriting Extraction feature ensures that any words and diagrams written on a board or screen remain in full view to the audience — via AR (augmented reality).

Even if the speaker is standing directly in front of the board, their ideas, thinking process, and even their animated presentation, are all accessible to the audience. It’s also easy for remote viewers and those playing back the presentation at a later date to become immersed in the content too, as the presenter is overlaid and the content is never compromised.

The chroma-keying technology can be useful and engaging as well.

Chroma keying hits the Active Learning Classroom as well
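For those curious about what chroma keying does under the hood, here is a minimal sketch of the generic technique in Python/OpenCV. This is not Sony’s implementation, and the file names are placeholders:

```python
# Minimal sketch of generic chroma keying: pixels close to a key color (green)
# are replaced with a background image. Assumes OpenCV and NumPy are installed
# and that the two images are the same size. File names are hypothetical.
import cv2
import numpy as np

foreground = cv2.imread("presenter_on_green.png")
background = cv2.imread("slides_background.png")

hsv = cv2.cvtColor(foreground, cv2.COLOR_BGR2HSV)

# Rough HSV range for a green screen; real systems tune this per lighting setup.
lower_green = np.array([35, 80, 80])
upper_green = np.array([85, 255, 255])

mask = cv2.inRange(hsv, lower_green, upper_green)   # 255 where the backdrop is green
mask_inv = cv2.bitwise_not(mask)                    # 255 where the presenter is

presenter = cv2.bitwise_and(foreground, foreground, mask=mask_inv)
backdrop = cv2.bitwise_and(background, background, mask=mask)
composite = cv2.add(presenter, backdrop)

cv2.imwrite("composite.png", composite)
```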

 

Grab your audience’s attention and increase their engagement with intelligent video analytics technology.

I saw this at IUPUI’s recent webinar/tour of their new facilities. Here’s further information on that webinar from last Friday, 1/29/21:

Designing Large Active Learning Classrooms webinar/tour on 1/29/21 from the Mosaic Program at Indiana University; also features rooms/staff at IUPUI.

 

From DSC:
I was reviewing an edition of Dr. Barbara Honeycutt’s Lecture Breakers Weekly, where she wrote:

After an experiential activity, discussion, reading, or lecture, give students time to write the one idea they took away from the experience. What is their one takeaway? What’s the main idea they learned? What do they remember?

This can be written as a reflective blog post or journal entry, or students might post it on a discussion board so they can share their ideas with their colleagues. Or, they can create an audio clip (podcast), video, or drawing to explain their One Takeaway.

From DSC:
This made me think of tools like VoiceThread — where you can leave a voice/audio message, an audio/video-based message, a text-based entry/response, and/or attach other kinds of graphics and files.

That is, a multimedia-based exit ticket. It seems to me that this could work in online as well as blended learning environments.


Addendum on 2/7/21:

How to Edit Live Photos to Make Videos, GIFs & More! — from jonathanwylie.com


 

Flipping Virtual Classrooms for More Impact — from techlearning.com by Ray Bendici
Flipping virtual classrooms can help maximize teaching time and resources


Excerpt:

The mantra of flipped learning is that you can reach every student in every class every day, said Bergman. So if you have less synchronous time, you need to provide more time with your students one-on-one to work on the hard stuff, and flipped mastery learning, in particular, accommodates that.

“Flipped learning teachers have been preparing for the pandemic for the past 10 years,” Bergman said. “It’s really a great way to amplify your reach to teach.”

When the pandemic hit, Bergman and his flipped learning team realized that the most important thing is connections with students and the physical time spent with them. “So what’s the best use of your face-to-face class time?” Bergman said. “I’m going to argue it’s not you standing up and then introducing new content, it’s giving students the new content first and allowing them to apply, analyze, and evaluate it.”

 

How to become a livestreaming teacher — from innovatemyschool.com by Bobbie Grennier

Excerpts:

What is an encoder?
The format that a video camera records content in has to be transcoded so that it can be livestreamed to a destination like Facebook Live, YouTube, Twitch, or Periscope. This is accomplished using encoder software. An encoder optimizes the video feed for the streaming platform. The key to using an encoder is learning to set up scenes.
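As a rough illustration of what encoder software does, here is a minimal Python sketch that shells out to FFmpeg to push a local video file to an RTMP ingest point. The URL and stream key are placeholders; tools like OBS wrap this same kind of pipeline and add the scene management mentioned above:

```python
# Minimal sketch: pushing a local video file to an RTMP ingest point with FFmpeg.
# The RTMP URL and stream key below are placeholders -- substitute the values
# your platform (YouTube Live, Facebook Live, Twitch, etc.) gives you.
# Assumes FFmpeg is installed and on the PATH.
import subprocess

SOURCE = "lesson.mp4"                                    # hypothetical input file
RTMP_URL = "rtmp://a.rtmp.youtube.com/live2/STREAM-KEY"  # placeholder stream key

cmd = [
    "ffmpeg",
    "-re", "-i", SOURCE,              # read the input at its native frame rate
    "-c:v", "libx264",                # H.264 video, broadly accepted by platforms
    "-preset", "veryfast",
    "-b:v", "2500k", "-maxrate", "2500k", "-bufsize", "5000k",
    "-g", "60",                       # keyframe roughly every 2 seconds at 30 fps
    "-c:a", "aac", "-b:a", "128k", "-ar", "44100",
    "-f", "flv",                      # container format expected by RTMP ingest
    RTMP_URL,
]

subprocess.run(cmd, check=True)
```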

From DSC:
It will be interesting to see how learning-related platforms develop in the future. I’m continually on the lookout for innovative ideas across the learning landscapes, especially due to the Learning from the Living [Class] Room vision that I’ve been tracking this last decade. The pieces continue to come together. This might be another piece to that puzzle.

An online-based teaching and learning marketplace — backed by AI, cloud-based learning profiles, voice-driven interfaces, learning agents, and more. Feeds/streams of content on how to learn about any topic…supporting communities of practice as well as individuals. And people will be key in this platform — technology will serve the people, not the other way around.

Daniel Christian -- A technology is meant to be a tool, it is not meant to rule.

 

An important distance learning resource for teachers, students, & parents — from educatorstechnology.com

Excerpt:

Wide Open School (WOS) is a platform developed by Common Sense Media, the leading non-profit for kids and families. WOS provides access to a wide range of resources designed specifically to help enhance the quality of distance learning. The work of Wide Open School is the fruit of a partnership with more than 80 leading educational organizations and services, including Kahoot, Google, Khan Academy, National Geographic, PBS, Scholastic, Smithsonian, TED Ed, and many more.

 


 

OpenAI’s text-to-image engine, DALL-E, is a powerful visual idea generator — from venturebeat.com by Gary Grossman; with thanks to Tim Holt for sharing this resource


Excerpt:

OpenAI chose the name DALL-E as a hat tip to the artist Salvador Dalí and Pixar’s WALL-E. It produces pastiche images that reflect both Dalí’s surrealism that merges dream and fantasy with the everyday rational world, as well as inspiration from NASA paintings from the 1950s and 1960s and those for Disneyland Tomorrowland by Disney Imagineers.

From DSC:
I’m not a big fan of having AI create the music that I listen to, or the artwork that I take in. But I do think there’s potential here in giving creative artists some new fodder for thought! Perhaps marketers and/or journalists could also get their creative juices going from this type of service/offering.

Speaking of art, here are a couple of other postings that caught my eye recently:

This Elaborately Armored Samurai Was Folded From A Single Sheet of Paper

 

From DSC:
I was thinking about projecting images, animation, videos, etc. from a device onto a wall for all in the room to see.

  • Will more walls of the future be like one of those billboards that rotate between two or three different images — that is, walls that could change surfaces?

One side of the surface would be more traditional (i.e., a standard sheetrock/drywall type of surface). The other side would be designed to be excellent for projecting images onto and/or for use with Augmented Reality (AR), Mixed Reality (MR), and/or Virtual Reality (VR).

Along these lines, here’s another item related to Human-Computer Interaction (HCI):

Mercedes-Benz debuts dashboard that’s one giant touchscreen — from futurism.com

 

From DSC:
Videoconferencing vendors out there:

  • Have you done any focus-group tests — especially within education — with audio-based or video-based versions of emoticons?
  • So instead of clicking on an emoticon as feedback, one could also have some sound effects or movie clips to choose from!

To the videoconferencing vendors out there -- could you give us what DJs have access to?

I’m thinking here of the kinds of things DJs might have at their disposal. For example, someone tells a bad joke and you hear the drummer in the background:

Or a team misses their spelling-bee word, and hears:

Or a professor wants to get the class’s attention as they start their 6pm class:

I realize this could backfire big time…so it would have to be an optional feature that a teacher, professor, trainer, pastor, or a presenter could turn on and off. (Could be fun for podcasters too!)

It seems to me that this could take
engagement to a whole new level!
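If a vendor wanted to prototype this, a rough sketch might look like the following: a simple mapping from reaction names to local audio clips, played when a participant (or the host) triggers one. The clip file names are placeholders, and a real feature would hook into the conferencing platform’s own SDK/event stream:

```python
# Rough prototype sketch of the idea above: map "audio reactions" to local sound
# clips and play one when a participant triggers it. The reaction names and file
# paths are placeholders; a real feature would hook into the videoconferencing
# platform's own SDK/event stream, with a host-side on/off switch.
# Assumes FFmpeg's ffplay is installed and on the PATH.
import subprocess

SOUND_REACTIONS = {
    "rimshot": "sounds/rimshot.wav",            # after a bad joke
    "sad_trombone": "sounds/sad_trombone.wav",  # after a missed spelling-bee word
    "attention_chime": "sounds/chime.wav",      # getting the class's attention
}

def play_reaction(name: str, enabled: bool = True) -> None:
    """Play the clip for a reaction, but only if the host has the feature enabled."""
    if not enabled or name not in SOUND_REACTIONS:
        return
    # -nodisp suppresses ffplay's video window; -autoexit quits when the clip ends.
    subprocess.run(["ffplay", "-nodisp", "-autoexit", SOUND_REACTIONS[name]],
                   check=False)

if __name__ == "__main__":
    play_reaction("rimshot")
```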

 

From DSC:
For me, the Socratic method is still a question mark in terms of effectiveness. (I suppose it depends on who is wielding the tool and how it’s being utilized/implemented.)

Typically, you have one student — often standing up and/or in the spotlight — who is being drilled on something. That student could be calm and collected, and their cognitive processing could actually get a boost from the adrenaline.

But there are other students who dread being called upon in such a public — sometimes competitive — setting. Their cognitive processing could shut down or become greatly diminished.

Also, the professor is working with one student at a time — hopefully the other students are trying to address each subsequent question, but some students may tune out once they know it’s not their turn in the spotlight.

So I was wondering…could the Socratic method be used with each student at the same time? Could a polling-like tool be used in real-time to guide the discussion?

For example, a professor could start out with a pre-created poll and ask the question of all students. Then they could glance through the responses and even scan for some keywords (using their voice to drive the system and/or using a Ctrl+F / Command+F type of thing).

Then, in real-time / on-the-fly, could the professor use their voice to create another poll/question — again for each student to answer — based on one of the responses? Again, each student must answer the follow-up question(s).
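Purely as a thought experiment, here is a small Python sketch of that keyword-scan-and-follow-up step. The response data is invented, and a real tool would sit on top of a polling platform’s API plus a voice interface:

```python
# Hypothetical sketch of the real-time step described above: collect every
# student's answer, scan them for a keyword (the "Ctrl+F" step), and draft a
# follow-up poll question based on one of the responses. A real product would
# sit on a polling platform's API and a speech interface; none of that is shown.
from typing import Dict, List

def find_responses(responses: Dict[str, str], keyword: str) -> List[str]:
    """Return the names of students whose answers mention the keyword."""
    return [name for name, text in responses.items()
            if keyword.lower() in text.lower()]

def follow_up_question(quoted_response: str) -> str:
    """Draft a follow-up poll prompt built around one student's response."""
    return (f'One of you wrote: "{quoted_response}". '
            "Do you agree or disagree, and why?")

if __name__ == "__main__":
    responses = {
        "Student A": "I think the precedent applies because the facts are similar.",
        "Student B": "The precedent is distinguishable -- the contract was oral.",
    }
    print("Mentioned 'precedent':", find_responses(responses, "precedent"))
    print(follow_up_question(responses["Student B"]))
```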

Are there any vendors out there working on something like this? Or have you tested the effectiveness of something like this?

Vendors: Can you help us create a voice-driven interface to offer the Socratic method to everyone to see if and how it would work? (Like a Mentimeter type of product on steroids…er, rather, using an AI-driven backend.)

Teachers, trainers, pastors, presenters could also benefit from something like this — as it could engage numerous people at once.

#Participation #Engagement #Assessment #Reasoning #CriticalThinking #CommunicationSkills #ThinkingOnOnesFeet #OnlineLearning #Face-to-Face #BlendedLearning #HybridLearning

Could such a method be used in language-related classes as well? In online-based tutoring?

 

Could AI-based techs be used to develop a “table of contents” for the key points within lectures, lessons, training sessions, sermons, & podcasts? [Christian]

From DSC:
As we move into 2021, the blistering pace of emerging technologies will likely continue. Technologies such as:

  • Artificial Intelligence (AI) — including technologies related to voice recognition
  • Blockchain
  • Augmented Reality (AR)/Mixed Reality (MR)/Virtual Reality (VR) and/or other forms of Extended Reality (XR)
  • Robotics
  • Machine-to-Machine Communications (M2M) / The Internet of Things (IoT)
  • Drones
  • …and other technologies will likely make their way into how we do many things (for better or for worse).

Along the positive lines of this topic, I’ve been reflecting upon how we might be able to use AI in our learning experiences.

For example, when teaching in face-to-face classrooms — and when a lecture-recording app like Panopto is being used — could teachers/professors/trainers audibly “insert” main points along the way? Similar to what we do with Siri, Alexa, and other personal assistants (“Hey Siri, _____” or “Alexa, _____”).

Like an audible version of HTML -- using the spoken word to insert the main points of a presentation or lecture

(Image purchased from iStockphoto)


Pretend a lecture, lesson, or a training session is moving right along. Then the professor, teacher, or trainer says:

  • Says, “Hey Smart Classroom, Begin Main Point.”
  • Then speaks one of the main points.
  • Then says, “Hey Smart Classroom, End Main Point.”

Like a verbal version of an HTML tag.

After the recording is done, the AI could locate and call out those “main points” — and create a table of contents for that lecture, lesson, training session, or presentation.

(Alternatively, one could insert a chime/bell/some other sound that the AI scans through later to build the table of contents.)
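Here is a small, hypothetical Python sketch of that idea. It assumes the recording has already been run through speech-to-text to produce a time-stamped transcript, and it simply pulls out whatever was spoken between the two marker phrases to build a table of contents:

```python
# Hypothetical sketch of the "audible HTML tag" idea: given a time-stamped
# transcript (e.g., from a speech-to-text pass over a lecture recording), pull
# out everything spoken between the "Begin Main Point" and "End Main Point"
# markers and emit a table of contents. The transcript format and marker
# phrases are assumptions, not any vendor's actual feature.
from typing import List, Tuple

BEGIN = "begin main point"
END = "end main point"

def build_toc(transcript: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
    """transcript: list of (timestamp, spoken_text); returns (timestamp, main_point)."""
    toc, capturing, start, buffer = [], False, "", []
    for timestamp, text in transcript:
        lowered = text.lower()
        if BEGIN in lowered:
            capturing, start, buffer = True, timestamp, []
        elif END in lowered and capturing:
            toc.append((start, " ".join(buffer)))
            capturing = False
        elif capturing:
            buffer.append(text.strip())
    return toc

if __name__ == "__main__":
    demo = [
        ("00:04:10", "Hey Smart Classroom, begin main point."),
        ("00:04:12", "Spaced retrieval practice beats massed review."),
        ("00:04:18", "Hey Smart Classroom, end main point."),
    ]
    for ts, point in build_toc(demo):
        print(f"{ts}  {point}")
```

The same scan would work just as well over a podcast or webinar transcript, which is what makes the idea appealing beyond the classroom.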

In the digital realm — say when recording something via Zoom, Cisco Webex, Teams, or another application — the same thing could apply. 

Wouldn’t this be great for quickly scanning podcasts for the main points? Or for quickly scanning presentations and webinars for the main points?

Anyway, interesting times lie ahead!

 

 

Sony to reveal all new Direct-View Display tech in January — from provideocoalition.com by Jose Antunes

Excerpt:

Sony will start 2021 unveiling “the next Sony breakthrough in premium Direct-View Display technology to faithfully bring content to life as the creator intended”, announced the company.

Also see:

Sony Spatial Reality Display: 3D without glasses for creatives — from provideocoalition.com by Jose Antunes

 