Helping Neurodiverse Students Learn Through New Classroom Design — from insidehighered.com by Michael Tyre
Michael Tyre offers some insights into how architects and administrators can work together to create better learning environments for everyone.

We emerged with two guiding principles. First, we had learned that certain environments—in particular, those that cause sensory distraction—can disproportionately impact neurodivergent users. Therefore, our design should diminish distractions by mitigating, when possible, noise, visual contrast, reflective surfaces and crowds. Second, we understood that we needed a design that gave neurodivergent users the agency of choice.

The importance of those two factors—a dearth of distraction and an abundance of choice—was bolstered in early workshops with the classroom committee and other stakeholders, which occurred at the same time we were conducting our research. Some things didn’t come up in our research but were made quite clear in our conversations with faculty members, students from the neurodivergent community and other stakeholders. That feedback greatly influenced the design of the Young Classroom.

We ended up blending the two concepts. The main academic space utilizes traditional tables and chairs, albeit in a variety of heights and sizes, while the peripheral classroom spaces use an array of less traditional seating and table configurations, similar to the radical approach.


On a somewhat related note, also see:

Unpacking Fingerprint Culture — from marymyatt.substack.com by Mary Myatt

This post summarises a fascinating webinar I had with Rachel Higginson discussing the elements of building belonging in our settings.

We know that belonging is important, and one of the ways to make this explicit in our settings is to consider what it takes to cultivate an inclusive environment where each individual feels valued and understood.

Rachel has spent several years working with young people, particularly those on the periphery of education, helping them return to mainstream education and participate in class alongside their peers.

Rachel’s work helping young people to integrate back into education resulted in schools requesting support and resources to embed inclusion within their settings. As a result, Finding My Voice has evolved into a broader curriculum development framework.

 

Hello GPT-4o — from openai.com
We’re announcing GPT-4o, our new flagship model that can reason across audio, vision, and text in real time.

GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, image, and video and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models.

Example topics covered here:

  • Two GPT-4os interacting and singing
  • Languages/translation
  • Personalized math tutor
  • Meeting AI
  • Harmonizing and creating music
  • Providing inflection, emotions, and a human-like voice
  • Understanding what the camera is looking at and integrating it into the AI’s responses
  • Providing customer service

With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. Because GPT-4o is our first model combining all of these modalities, we are still just scratching the surface of exploring what the model can do and its limitations.
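
From DSC:
For readers who want to experiment, below is a minimal sketch of calling GPT-4o with mixed text-and-image input through OpenAI's Python SDK. The message format follows OpenAI's published chat completions API; the prompt and image URL are placeholders of my own.

# A minimal sketch: GPT-4o with multimodal input via OpenAI's Python SDK
# (pip install openai). Requires an API key in the OPENAI_API_KEY env variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this classroom photo for a screen reader user."},
                {"type": "image_url", "image_url": {"url": "https://example.com/classroom.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)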





From DSC:
I like the assistive tech angle here.
Speaking of AI-related items, also see:

OpenAI debuts Whisper API for speech-to-text transcription and translation — from techcrunch.com by Kyle Wiggers

Excerpt:

To coincide with the rollout of the ChatGPT API, OpenAI today launched the Whisper API, a hosted version of the open source Whisper speech-to-text model that the company released in September.

Priced at $0.006 per minute, Whisper is an automatic speech recognition system that OpenAI claims enables “robust” transcription in multiple languages as well as translation from those languages into English. It takes files in a variety of formats, including M4A, MP3, MP4, MPEG, MPGA, WAV and WEBM.
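
From DSC:
Below is a minimal sketch of what those two capabilities look like in OpenAI's Python SDK, based on the publicly documented endpoints; the file name is a placeholder.

# A minimal sketch of the hosted Whisper API (pip install openai).
# "whisper-1" is the hosted model; audio.mp3 stands in for your own file.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Transcription: speech-to-text in the speaker's own language
with open("audio.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)
print(transcript.text)

# Translation: speech in another language -> English text
with open("audio.mp3", "rb") as audio_file:
    translation = client.audio.translations.create(model="whisper-1", file=audio_file)
print(translation.text)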

Introducing ChatGPT and Whisper APIs — from openai.com
Developers can now integrate ChatGPT and Whisper models into their apps and products through our API.

Excerpt:

ChatGPT and Whisper models are now available on our API, giving developers access to cutting-edge language (not just chat!) and speech-to-text capabilities.
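
From DSC:
And a minimal sketch of the chat side, using the gpt-3.5-turbo model the ChatGPT API launched with; the system and user messages are illustrative only.

# A minimal sketch of the ChatGPT API via OpenAI's Python SDK (pip install openai).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a patient math tutor."},
        {"role": "user", "content": "Explain why dividing by zero is undefined."},
    ],
)
print(response.choices[0].message.content)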



Everything you wanted to know about AI – but were afraid to ask — from theguardian.com by Dan Milmo and Alex Hern
From chatbots to deepfakes, here is the lowdown on the current state of artificial intelligence

Excerpt:

Barely a day goes by without some new story about AI, or artificial intelligence. The excitement about it is palpable – the possibilities, some say, are endless. Fears about it are spreading fast, too.

There can be much assumed knowledge and understanding about AI, which can be bewildering for people who have not followed every twist and turn of the debate.

So, the Guardian’s technology editors, Dan Milmo and Alex Hern, are going back to basics – answering the questions that millions of readers may have been too afraid to ask.


Nvidia CEO: “We’re going to accelerate AI by another million times” — from
In a recent earnings call, the boss of Nvidia Corporation, Jensen Huang, outlined his company’s achievements over the last 10 years and predicted what might be possible in the next decade.

Excerpt:

Fast forward to today, and CEO Jensen Huang is optimistic that the recent momentum in AI can be sustained into at least the next decade. During the company’s latest earnings call, he explained that Nvidia’s GPUs had boosted AI processing by a factor of one million in the last 10 years.

“Moore’s Law, in its best days, would have delivered 100x in a decade. By coming up with new processors, new systems, new interconnects, new frameworks and algorithms and working with data scientists, AI researchers on new models – across that entire span – we’ve made large language model processing a million times faster,” Huang said.
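
From DSC:
As a rough check of Huang's numbers (my own back-of-the-envelope arithmetic, not a figure from the earnings call): transistor density doubling every 18 months compounds to roughly 100x over a decade, while a 1,000,000x gain over the same span would require a doubling roughly every six months. A quick sketch in Python:

import math

years = 10

# Moore's Law at its best pace: one doubling every 18 months
best_case_doublings = years * 12 / 18
print(2 ** best_case_doublings)          # ~101x in a decade -- Huang's "100x"

# How fast must performance double to reach 1,000,000x in a decade?
doublings_needed = math.log2(1_000_000)  # ~19.9 doublings
print(years * 12 / doublings_needed)     # ~6 months per doubling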

From DSC:
NVIDIA is the inventor of the Graphics Processing Unit (GPU), which renders interactive graphics on laptops, workstations, mobile devices, PCs, and more. The company is now a dominant supplier of artificial intelligence hardware and software.


 

From DSC:
What?!?! How might this new type of “parallel reality” impact smart classrooms, conference rooms, and board rooms? And/or our living rooms? Will it help deliver more personalized learning experiences within a classroom?


 

What might the ramifications be for text-to-everything? [Christian]

From DSC:

  • We can now type in text to get graphics and artwork.
  • We can now type in text to get videos.
  • There are several tools to give us transcripts of what was said during a presentation.
  • We can search videos for spoken words and/or for words shown on slides within a presentation.

Allie Miller’s posting on LinkedIn (see below) pointed these things out as well — along with several other things.



This raises some ideas/questions for me:

  • What might the ramifications be in our learning ecosystems for these types of functionalities? What affordances are forthcoming? For example, a teacher, professor, or trainer could quickly produce several types of media from the same presentation.
  • What’s said in a videoconference or a webinar can already be captured, translated, and transcribed.
  • Or what’s said in a virtual courtroom, or in a telehealth-based appointment. Or perhaps, what we currently think of as a smart/connected TV will give us these functionalities as well.
  • How might this type of thing impact storytelling?
  • Will this help someone who prefers to soak in information via the spoken word, or via a podcast, or via a video?
  • What does this mean for Augmented Reality (AR), Mixed Reality (MR), and/or Virtual Reality (VR) types of devices?
  • Will this kind of thing be standard in the next version of the Internet (Web3)?
  • Will this help people with special needs — and way beyond accessibility-related needs?
  • Will data be next (instead of typing in text)?

Hmmm….interesting times ahead.

 

Digital Learning Definitions — from wcet.wiche.edu

Excerpt:

Higher education uses many variations of terms to describe slightly different digital learning modalities, such as “in-person,” “online,” “hybrid,” “hyflex,” “synchronous,” “asynchronous,” and many more. These variations have long confused students, faculty, administrators, and the general public.

WCET has worked on this issue in the past, and continues to advocate for simple, easy to understand terms that can bring consistent agreement to the use of these phrases. The WCET Steering Committee has made it a priority to attack this issue.

In 2022, WCET sponsored and led a partnership with Bay View Analytics and the Canadian Digital Learning Research Association to conduct a survey to explore the use of the terms by higher education professionals. The Online Learning Consortium (OLC), Quality Matters (QM), and the University Professional and Continuing Education Association (UPCEA) assisted with survey participation and promotion. The works published below highlight the findings of the study.


 

Inclusive Education For Students With Hearing Impairment — from edtechreview.in by Priyanka Gupta

Excerpt:

The following may be difficult for a student with a hearing impairment:

  • Spelling, grammar, and vocabulary
  • Taking notes while listening to lectures
  • Participating in, engaging with, or understanding classroom discussions
  • Understanding educational videos
  • Presenting oral reports
 

Instructional Audio: 4 Benefits to Improving It — from techlearning.com by Erik Ofgang
Ensuring every classroom has instructional audio capabilities helps all students hear what the teacher is saying.

Excerpt (emphasis DSC):

Sound is a key component of education. If students can’t hear their instructor well, they’re clearly not going to focus or learn as much. That’s why more and more schools are investing in instructional audio systems, which are high-tech public address systems designed with classrooms, teachers, and all students in mind.

Terri Meier is director of education technology for Rio Rancho Public Schools in New Mexico, where all new classrooms are being built with voice amplification systems in place and many existing classrooms are being retrofitted with similar systems. These systems are key for schools in their accessibility efforts and in providing quality instruction overall, she says.

And speaking of accessibility-related postings/items, also see:

 

Course Awareness in HyFlex: Managing unequal participation numbers — from hyflexlearning.org by Candice Freeman

Excerpt:

How do you teach a HyFlex course when the number of students in various participation modes is very unequal? How do you teach one student in a mode – often in the classroom? Conversely, you could ask: how do you teach 50 asynchronous students with very few in the synchronous mode(s)? Answers will vary greatly from teacher to teacher. This article suggests a strategy called Course Awareness, a mindfulness technique designed to help teachers envision each learner as being in the instructor’s presence and engaged in the instruction regardless of participation (or attendance) mode choice.

Teaching HyFlex in an active learning classroom

From DSC:
I had understood the hyflex teaching model as addressing both online-based (i.e., virtual/not on-site) and on-site/physically-present students at the same time — and that each student could choose the manner in which they wanted to attend that day’s class. For example, on one day, a student could take the course in room 123 of Anderson Hall. The next time the class meets, that same student could attend from their dorm room.

But this article introduces — at least to me — the idea that we have a third method of participating in the hyflex model — asynchronously (i.e., not at the same time). So rather than making their way to Anderson Hall or attending from their dorm, that same student does not attend at the same time as other students (either virtually or physically). That student will likely check in with a variety of tools to catch up with — and contribute to — the discussions. As the article mentions:

Strategically, you need to employ web-based resources designed to gather real-time information over a specified period of time, capturing all students and not just students engaging live. For example, Mentimeter, PollEverywhere, and Sli.do allow the instructor to pose engaging, interactive questions without limiting engagement time to the instance the question is initially posed. These tools are designed to support both synchronous and asynchronous participation. 

So it will be interesting to see how our learning ecosystems morph in this area. Will there be other new affordances, pedagogies, and tools that take into consideration that the faculty members are addressing synchronous and asynchronous students as well as online and physically present students? Hmmm…more experimentation is needed here, as well as more:

  • Research
  • Instructional design
  • Design thinking
  • Feedback from students and faculty members

Will this type of model work best in the higher education learning ecosystem but not the K-12 learning ecosystem? Will it thrive with employees within the corporate realm? Hmmm…again, time will tell.


And to add another layer to the teaching and learning onion, now let’s talk about multimodal learning. This article, How to support multimodal learning by Monica Burns, mentions that:

Multimodal learning is a teaching concept where using different senses simultaneously helps students interact with content at a deeper level. In the same way we learn through multiple types of media in our own journey as lifelong learners, students can benefit from this type of learning experience.

The only comment I have here is that if you think that throwing a warm body into a K12 classroom fixes the problem of teachers leaving the field, you haven’t a clue how complex this teaching and learning onion is. Good luck to all of those people who are being thrown into the deep end — and essentially being told to sink or swim.

 

‘Spaces Matter’ — from insidehighered.com by Colleen Flaherty
Limited access to active learning spaces may disproportionately hurt historically excluded groups, and institutions should build more of these spaces in the name of equity, according to a new study. Where does higher ed stand on next-generation learning spaces?

An interactive lecture hall at Rutgers University at New Brunswick, surrounded by active learning spaces across the U.S.

Excerpt (emphasis DSC):

A new study is therefore concerning—it found that limited access to active learning classrooms forced students to self-sort based on their social networks or their attitudes toward learning. The authors warn that limited access to active learning spaces may create a marginalizing force that pushes women, in particular, out of the sciences.

The solution? Invest in active learning spaces.

From DSC:
The groups I worked in over the last 15 years created several active learning spaces, but the number of rooms was definitely limited due to the expenses involved. Students liked these spaces, and the feedback from faculty members was positive as well. Some students staked their claims in these rooms so that they could study together (this was especially true for those majoring in Engineering).

 

What if smart TVs’ new killer app was a next-generation learning-related platform? [Christian]

TV makers are looking beyond streaming to stay relevant — from protocol.com by Janko Roettgers and Nick Statt

A smart TV's main menu listing what's available -- application wise

Excerpts:

The search for TV’s next killer app
TV makers have some reason to celebrate these days: Streaming has officially surpassed cable and broadcast as the most popular form of TV consumption; smart TVs are increasingly replacing external streaming devices; and the makers of these TVs have largely figured out how to turn those one-time purchases into recurring revenue streams, thanks to ad-supported services.

What TV makers need is a new killer app. Consumer electronics companies have for some time toyed with the idea of using TV for all kinds of additional purposes, including gaming, smart home functionality and fitness. Ad-supported video took priority over those use cases over the past few years, but now, TV brands need new ways to differentiate their devices.

Turning the TV into the most useful screen in the house holds a lot of promise for the industry. To truly embrace this trend, TV makers might have to take some bold bets and be willing to push the envelope on what’s possible in the living room.

 


From DSC:
What if smart TVs’ new killer app was a next-generation learning-related platform? Could smart TVs deliver more blended/hybrid learning? Hyflex-based learning?
The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

Or what if smart TVs had to do with delivering telehealth-based apps? Or telelegal/virtual courts-based apps?


 

From DSC:
I wanted to pass this along to the learning space designers out there in case it’s helpful to them.

“The sound absorbing Sola felt is made from 50% post-consumer recycled PET.”

Source


Acoustic Static Links lighting by LightArt — from dezeen.com

 

After an AI bot wrote a scientific paper on itself, the researcher behind the experiment says she hopes she didn’t open a ‘Pandora’s box’ — from insider.com by Hannah Getahun

Excerpt:

  • An artificial-intelligence algorithm called GPT-3 wrote an academic thesis on itself in two hours.
  • The researcher who directed the AI to write the paper submitted it to a journal with the bot’s consent.
  • “We just hope we didn’t open a Pandora’s box,” the researcher wrote in Scientific American.

AI Empowers Scalable Personalized Learning and Knowledge Sharing — from learningsolutionsmag.com by Markus Bernhardt

Excerpt:

AI aids in providing true personalization
Automation through AI is providing us with the tools necessary to deploy fully personalized digital learning, extremely fast and at scale. With the advent of this technology, we will see a revolution in digital training; in addition, I predict that the impact the digital piece will have on human-led efforts will lead to a further revolution of education, training, workshops, mentoring, and coaching.

How A.I. Could Help You Design Your Perfect Office (or Store) — from inc.com by Ben Sherry
Artificial intelligence may soon help fill the gap between your interior design skills and your imagination.

Excerpt:

Boom Interactive is one of several companies attempting to streamline the interior design process using automation. The Salt Lake City-based startup’s free app, Bubbles, which is scheduled to soft launch in the third quarter of 2022, uses artificial intelligence to read floor plans and create a “digital twin” of your real-life space, according to CEO and founder Timber Barker. Once a “twin” has been created, users have full freedom to customize the space by adding doors, erasing walls, and placing furniture.

The Increasing Role of Artificial Intelligence in Our Lives — from rdene915.com by Rachelle Dene Poth

Excerpt:

All of this recent information has made me even more curious about the role artificial intelligence will play over the next few months as we hopefully get back to more of a normal life experience and can engage not only in work and learning but also in leisure activities. What can we learn from the recent uptick in AI information, and how can it help us in the future?

Optical illusions could help us build the next generation of AI — from digitaltrends.com by Luke Dormehl

 

‘Accessibility is a journey’: A DEI expert on disability rights — from hrdive.com by Caroline Colvin
Employers can wait for a worker to request reasonable accommodation under the ADA, but Kelly Hermann asks: Why not be accommodating from the start?

Excerpt:

Often, employers jump to the obstacles that exist in physical spaces: nonexistent ramps for wheelchairs, manual doors that lack motion sensors, and the like. But the digital world presents challenges as well. Hermann and the U Phoenix accessibility team like to “demystify” disability for campus members seeking their counsel, she said.

“Are you making those links descriptive and are you using keywords? Or are you just saying ‘click here’ and that’s your link?” Hermann asked. Like a sighted person, an individual with a disability can also scan a webpage for links with assistive technology, but this happens audibly, Hermann said: “They tell that tool to skip by link and this is what they hear: ‘Click here.’ ‘Click here.’ ‘Click here.’ ‘Click here.’ With four links on the page all hyperlinked with ‘click here,’ [they] don’t know where [they’re] going.”
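
From DSC:
To make Hermann's point concrete, here is a small, hypothetical Python sketch (my own illustration, not from the article) that flags vague link text the way an automated accessibility check might:

# A minimal sketch: flag non-descriptive link text such as "click here,"
# which screen readers announce out of context. Uses only the standard library.
from html.parser import HTMLParser

VAGUE_LINK_TEXT = {"click here", "here", "read more", "more", "link"}

class LinkTextAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.text_parts = []
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True
            self.text_parts = []

    def handle_data(self, data):
        if self.in_link:
            self.text_parts.append(data)

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False
            text = " ".join(self.text_parts).strip().lower()
            if text in VAGUE_LINK_TEXT:
                self.flagged.append(text)

auditor = LinkTextAuditor()
auditor.feed('<a href="/apply">Click here</a> to <a href="/apply">apply for fall admission</a>')
print(auditor.flagged)  # ['click here'] -- the second, descriptive link passes

The real fix, of course, happens on the authoring side: write link text that describes the destination.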

 

What The Future Of Technology In The Workplace Means For Office Design And Operations — from workdesign.com by Mara Hauser

Excerpt:

Advances in technology continue to influence the workplace as corporate entities and coworking operators are confronted with modern challenges surrounding productivity and collaboration. We lead teams to execute intentional designs that reflect brand vision and produce lively, productive workspaces. With the growing demand from employees for workplace flexibility, these technological advancements must be reflected in both office design and business practices in order to add value and ultimately achieve operational excellence.


Podcasting studio at FUSE Workspace in Houston, TX.

 
© 2024 | Daniel Christian