From DSC:
Why aren’t we further along with lecture recording within K-12 classrooms?

That is, I as a parent — or, better yet, our kids themselves who are still in K-12 — should be able to go online and access whatever talks/lectures/presentations were given on a particular day. When our daughter is sick and misses several days, wouldn’t it be great if she could go online and see what she missed? Even if my wife and I had the time and energy to present that content to her ourselves (which we don’t), we couldn’t do it very well. We would likely explain things differently — and perhaps incorrectly — potentially muddying the waters and causing more confusion for our daughter.

There should be entry-level recording studios — such as the One Button Studio from Penn State University — in each K-12 school so that teachers can record their presentations. At the end of each day, the teacher could check off what he/she was able to cover that day. (No rushing intended here — education is enough of a runaway train oftentimes!) That material would then be made visible/available for that day as links on an online calendar. Administrators should pay teachers extra money in the summer to record these presentations.

Also, students could use these studios to practice their presentation and communication skills. The process is quick and easy.

I’d like to see an option — ideally via a brief voice-driven Q&A at the start of each session — that would ask the person where to put the recording when it is done: on a thumb drive, in a previously assigned storage area out on the cloud/Internet, or in both destinations.
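
As a rough sketch of that Q&A, the flow below uses a plain text prompt standing in for the voice front end; the destination choices, folder paths, and helper names are illustrative assumptions, not part of any existing studio product.

```python
# Minimal sketch of the end-of-session destination prompt (text stands in
# for voice). The destination list and directory paths are hypothetical.
import shutil
from pathlib import Path

DESTINATIONS = {"1": "thumb drive", "2": "cloud storage", "3": "both"}

def choose_destination() -> str:
    print("Where should this recording go?")
    for key, name in DESTINATIONS.items():
        print(f"  {key}. {name}")
    return DESTINATIONS.get(input("> ").strip(), "both")  # default: keep both copies

def save_recording(recording: Path, usb_dir: Path, cloud_dir: Path) -> None:
    destination = choose_destination()
    if destination in ("thumb drive", "both"):
        shutil.copy2(recording, usb_dir / recording.name)
    if destination in ("cloud storage", "both"):
        # Stand-in for an upload; a real system would call its cloud API here.
        shutil.copy2(recording, cloud_dir / recording.name)
```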

Providing automatically generated closed captioning would be a great feature here as well, especially for English as a Second Language (ESL) students.
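
To make the captioning idea concrete, here is a minimal sketch using the open-source SpeechRecognition package; it assumes a WAV file named lecture.wav and uses Google's free web speech API, whereas a school deployment would more likely use a managed speech-to-text service that also produces timestamps.

```python
# Sketch: generate a rough transcript for captions from a recorded lecture.
# Requires: pip install SpeechRecognition  (plus an internet connection,
# since recognize_google() calls Google's free web speech API).
import speech_recognition as sr

def transcribe(wav_path: str) -> str:
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)  # read the whole file
    return recognizer.recognize_google(audio)

print(transcribe("lecture.wav"))  # hypothetical file name
```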

From DSC:
After seeing the article entitled, “Scientists Are Turning Alexa into an Automated Lab Helper,” I began to wonder…might Alexa be a tool to periodically schedule & provide practice tests & distributed practice on content? In the future, will there be “learning bots” that a learner can employ to do such self-testing and/or distributed practice?
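
One way such a "learning bot" could schedule distributed practice is with a simple Leitner-style spaced-repetition queue. The sketch below is illustrative only; the box intervals and item fields are assumptions, not any product's actual algorithm.

```python
# Leitner-style scheduler sketch: correct answers promote an item to a
# higher box (longer review interval); misses send it back to box 1.
import datetime
from dataclasses import dataclass, field

INTERVALS = {1: 1, 2: 2, 3: 4, 4: 8, 5: 16}  # box -> days until next review

@dataclass
class Item:
    prompt: str
    answer: str
    box: int = 1
    due: datetime.date = field(default_factory=datetime.date.today)

def record_result(item: Item, correct: bool) -> None:
    item.box = min(item.box + 1, 5) if correct else 1
    item.due = datetime.date.today() + datetime.timedelta(days=INTERVALS[item.box])

def due_today(items: list[Item]) -> list[Item]:
    today = datetime.date.today()
    return [i for i in items if i.due <= today]
```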

From page 45 of the PDF available here:

Might Alexa be a tool to periodically schedule/provide practice tests & distributed practice on content?

Scientists Are Turning Alexa into an Automated Lab Helper — from technologyreview.com by Jamie Condliffe
Amazon’s voice-activated assistant follows a rich tradition of researchers using consumer tech in unintended ways to further their work.

Excerpt:

Alexa, what’s the next step in my titration?

Probably not the first question you ask your smart assistant in the morning, but potentially the kind of query that scientists may soon be leveling at Amazon’s AI helper. Chemical & Engineering News reports that software developer James Rhodes—whose wife, DeLacy Rhodes, is a microbiologist—has created a skill for Alexa called Helix that lends a helping hand around the laboratory.

It makes sense. While most people might ask Alexa to check the news headlines, play music, or set a timer because our hands are a mess from cooking, scientists could look up melting points, pose simple calculations, or ask for an experimental procedure to be read aloud while their hands are gloved and in use.

For now, Helix is still a proof-of-concept. But you can sign up to try an early working version, and Rhodes has plans to extend its abilities…

Also see:

Helix

“Rise of the machines” — from January 2018 edition of InAVate magazine
AI is generating lots of buzz in other verticals, but what can AV learn from those? Tim Kridel reports.

From DSC:
Learning spaces are also relevant to the discussion of AI and AV-related items.

Also in their January 2018 edition, see an incredibly detailed project at the London Business School.

Excerpt:

A full-width frosted glass panel sits on the desk surface; above it, fixed in the ceiling, is a Wolfvision VZ-C12 visualiser. This means the teaching staff can write on the (wipe-clean) surface and the text appears directly on two 94-in screens behind them, using Christie short-throw laser 4,000-lumen projectors. When the lecturer is finished or has filled up the screen with text, the image can be saved on the intranet or via USB. Simply wipe with a cloth and start again. Not only is the technology inventive, but it allows the teaching staff to remain in face-to-face contact with the students at all times, instead of students having to stare at the back of the lecturer’s head whilst they write.

Also relevant, see:

Alexa, how can you improve teaching and learning? — from edscoop.com by Kate Roddy with thanks to eduwire for their post on this
Special report: Voice command platforms from Amazon, Google and Microsoft are creating new models for learning in K-12 and higher education — and renewed privacy concerns.

Excerpt:

We’ve all seen the commercials: “Alexa, is it going to rain today?” “Hey, Google, turn up the volume.” Consumers across the globe are finding increased utility in voice command technology in their homes. But dimming lights and reciting weather forecasts aren’t the only ways these devices are being put to work.

Educators from higher ed powerhouses like Arizona State University to small charter schools like New Mexico’s Taos Academy are experimenting with Amazon Echo, Google Home or Microsoft Invoke and discovering new ways this technology can create a more efficient and creative learning environment.

The devices are being used to help students with and without disabilities gain a new sense for digital fluency, find library materials more quickly and even promote events on college campuses to foster greater social connection.

Like many technologies, the emerging presence of voice command devices in classrooms and at universities is also raising concerns about student privacy and unnatural dependence on digital tools. Yet, many educators interviewed for this report said the rise of voice command technology in education is inevitable — and welcome.

“One example,” he said, “is how voice dictation helped a student with dysgraphia. Putting the pencil and paper in front of him, even typing on a keyboard, created difficulties for him. So, when he’s able to speak to the device and see his words on the screen, the connection becomes that much more real to him.”

The use of voice dictation has also been beneficial for students without disabilities, Miller added. Through voice recognition technology, students at Taos Academy Charter School are able to perceive communication from a completely new medium.

From DSC:
After reviewing the article below, I wondered: if we need to interact with content in order to learn it, how might mixed reality allow for new ways of interacting with that content? This is especially intriguing when we interact with that content along with others as well (i.e., social learning).

Perhaps Mixed Reality (MR) will bring forth a major expansion of how we look at “blended learning” and “hybrid learning.”

Mixed Reality Will Transform Perceptions — from forbes.com by Alexandro Pando

Excerpts (emphasis DSC):

Changing How We Perceive The World One Industry At A Time
Part of the reason mixed reality has garnered this momentum within such a short span of time is that it promises to revolutionize how we perceive the world without necessarily altering our natural perspective. While VR/AR invites you into their somewhat complex worlds, mixed reality analyzes the surrounding real-world environment before projecting an enhanced and interactive overlay. It essentially “mixes” our reality with digitally generated graphical information.

All this, however, pales in comparison to the impact of mixed reality on the storytelling process. While present technologies deliver content in a one-directional manner, from storyteller to audience, mixed reality allows for delivery of content, then interaction between content, creator and other users. This mechanism cultivates a fertile ground for increased contact between all participating entities, ergo fostering the creation of shared experiences. Mixed reality also reinvents the storytelling process. By merging the storyline with reality, viewers are presented with a wholesome experience that’s perpetually indistinguishable from real life.

Mixed reality is without a doubt going to play a major role in shaping our realities in the near future, not just because of its numerous use cases but also because it is the flag bearer of all virtualized technologies. It combines VR, AR and other relevant technologies to deliver a potent cocktail of digital excellence.

The Section 508 Refresh and What It Means for Higher Education — from er.educause.edu by Martin LaGrow

Excerpts (emphasis DSC):

Higher education should now be on notice: Anyone with an Internet connection can now file a complaint or civil lawsuit, not just students with disabilities. And though Section 508 was previously unclear as to the expectations for accessibility, the updated requirements add specific web standards to adhere to — specifically, the Web Content Accessibility Guidelines (WCAG) 2.0 level AA developed by the World Wide Web Consortium (W3C).

Although WCAG 2.0 has been around since the early 2000s, it was developed by web content providers as a self-regulating tool to create uniformity for web standards around the globe. It was understood to be best practices but was not enforced by any regulating agency. The Section 508 refresh due in January 2018 changes this, as WCAG 2.0 level AA has been adopted as the standard of expected accessibility. Thus, all organizations subject to Section 508, including colleges and universities, that create and publish digital content — web pages, documents, images, videos, audio — must ensure that they know and understand these standards.

Reacting to the Section 508 Refresh
In a few months, the revised Section 508 standards become enforceable law. As stated, this should not be considered a threat or burden but rather an opportunity for institutions to check their present level of commitment and adherence to accessibility. In order to prepare for the update in standards, a number of proactive steps can easily be taken:

  • Contract a third-party expert partner to review institutional accessibility policies and practices and craft a long-term plan to ensure compliance.
  • Review all public-facing websites and electronic documents to ensure compliance with WCAG 2.0 Level AA standards (a small automated first-pass check is sketched after this list).
  • Develop and publish a policy to state the level of commitment and adherence to Section 508 and WCAG 2.0 Level AA.
  • Create an accessibility training plan for all individuals responsible for creating and publishing electronic content.
  • Ensure all ICT contracts, ROIs, and purchases include provisions for accessibility.
  • Inform students of their rights related to accessibility, as well as where to address concerns internally. Then support the students with timely resolutions.
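
As a very small first pass on the website-review bullet above, the sketch below flags images that lack text alternatives (WCAG 2.0 success criterion 1.1.1). It is only a starting point; real audits need a full accessibility checker (such as axe) plus human review, and the URL shown is a placeholder.

```python
# First-pass WCAG scan sketch: list <img> tags with no alt attribute.
# Requires: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

def images_missing_alt(url: str) -> list[str]:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [img.get("src", "?") for img in soup.find_all("img")
            if img.get("alt") is None]  # alt="" is fine for decorative images

for src in images_missing_alt("https://www.example.edu"):  # placeholder URL
    print("Missing alt text:", src)
```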

As always, remember that the pursuit of accessibility demonstrates a spirit of inclusiveness that benefits everyone. Embracing the challenge to meet the needs of all students is a noble pursuit, but it’s not just an adoption of policy. It’s a creation of awareness, an awareness that fosters a healthy shift in culture. When this is the approach, the motivation to support all students drives every conversation, and the fear of legal repercussions becomes secondary. This should be the goal of every institution of learning.

Also see:

How to Make Accessibility Part of the Landscape — from insidehighered.com by Mark Lieberman
A small institution in Vermont caters to students with disabilities by letting them choose the technology that suits their needs.

Excerpt:

Accessibility remains one of the key issues for digital learning professionals looking to catch up to the needs of the modern student. At last month’s Online Learning Consortium Accelerate conference, seemingly everyone in attendance hoped to come away with new insights into this thorny concern.

Landmark College in Vermont might offer some guidance. The private institution with approximately 450 students exclusively serves students with diagnosed learning disabilities, attention disorders or autism. Like all institutions, it’s still grappling with how best to serve students in the digital age, whether in the classroom or at a distance. Here’s a glimpse at the institution’s philosophy, courtesy of Manju Banerjee, Landmark’s vice president for educational research and innovation since 2011.

Amazon and Codecademy team up for free Alexa skills training — from venturebeat.com by Khari Johnson

Excerpt:

Amazon and tech training app Codecademy have collaborated to create a series of free courses. Available today, the courses are meant to train developers as well as beginners how to create skills, the voice apps that interact with Alexa.

Since opening Alexa to third-party developers in 2015, more than 20,000 skills have been made available in the Alexa Skills Store.

Google AR and VR: Get a closer look with Street View in Google Earth VR

Excerpt:

With Google Earth VR, you can go anywhere in virtual reality. Whether you want to stroll along the canals of Venice, stand at the summit of Mount Kilimanjaro or soar through the sky faster than a speeding bullet, there’s no shortage of things to do or ways to explore. We love this sense of possibility, so we’re bringing Street View to Earth VR to make it easier for you to see and experience the world.

This update lets you explore Street View imagery from 85 countries right within Earth VR. Just fly down closer to street level, check your controller to see if Street View is available and enter an immersive 360° photo. You’ll find photos from the Street View team and those shared by people all around the world.

The End of Typing: The Next Billion Mobile Users Will Rely on Video and Voice — from wsj.com by Eric Bellman
Tech companies are rethinking products for the developing world, creating new winners and losers

Excerpt:

The internet’s global expansion is entering a new phase, and it looks decidedly unlike the last one.

Instead of typing searches and emails, a wave of newcomers—“the next billion,” the tech industry calls them—is avoiding text, using voice activation and communicating with images.

From DSC:
The above article reminds me that our future learning platforms will be largely driven by our voices. That’s why voice interaction is part of my vision of a next-generation learning platform.

Amazon’s Alexa is finally coming to wearable devices for the first time — from yahoo.com by Peter Newman

Excerpt:

Manufacturers have thus far incorporated the voice assistant into speakers, phones, thermostats, and more — but being incorporated into a wearable device is a first for Alexa.

The headphones will let users tap a button to launch the voice assistant, which will connect to the device through the user’s mobile phone and the Bragi app. They will let a wearer engage with the voice assistant while on the go, searching for basic information, shopping for goods on Amazon, or calling for a vehicle from ride-hailing services like Uber, among other possibilities. While all of these capabilities are already possible using a phone, enabling hands-free voice control brings a new level of convenience.

ARCore: Augmented reality at Android scale — from blog.google by Dave Burke

Excerpt:

With more than two billion active devices, Android is the largest mobile platform in the world. And for the past nine years, we’ve worked to create a rich set of tools, frameworks and APIs that deliver developers’ creations to people everywhere. Today, we’re releasing a preview of a new software development kit (SDK) called ARCore. It brings augmented reality capabilities to existing and future Android phones. Developers can start experimenting with it right now.

Google just announced its plan to match the coolest new feature coming to the iPhone — from cnbc.com by Todd Haselton

  • Google just announced its answer to Apple’s augmented reality platform
  • New tools called ARCore will let developers enable AR on millions of Android devices

AR Experiments

Description:

AR Experiments is a site that features work by coders who are experimenting with augmented reality in exciting ways. These experiments use various tools like ARCore, an SDK that lets Android developers create awesome AR experiences. We’re featuring some of our favorite projects here to help inspire more coders to imagine what could be made with AR.

Google’s ARCore hopes to introduce augmented reality to the Android masses — from androidauthority.com by Williams Pelegrin

Excerpt:

Available as a preview, ARCore is an Android software development kit (SDK) that lets developers introduce AR capabilities to, you guessed it, Android devices. Because of how ARCore works, there is no need for folks to purchase additional sensors or hardware – it will work on existing and future Android phones.

Artificial intelligence will transform universities. Here’s how. — from weforum.org by Mark Dodgson & David Gann

Excerpt:

The most innovative AI breakthroughs, and the companies that promote them – such as DeepMind, Magic Pony, Ayasdi, Wolfram Alpha and Improbable – have their origins in universities. Now AI will transform universities.

We believe AI is a new scientific infrastructure for research and learning that universities will need to embrace and lead, otherwise they will become increasingly irrelevant and eventually redundant.

Through their own brilliant discoveries, universities have sown the seeds of their own disruption. How they respond to this AI revolution will profoundly reshape science, innovation, education – and society itself.

As AI gets more powerful, it will not only combine knowledge and data as instructed, but will search for combinations autonomously. It can also assist collaboration between universities and external parties, such as between medical research and clinical practice in the health sector.

The implications of AI for university research extend beyond science and technology.

When it comes to AI in teaching and learning, many of the more routine academic tasks (and least rewarding for lecturers), such as grading assignments, can be automated. Chatbots, intelligent agents using natural language, are being developed by universities such as the Technical University of Berlin; these will answer questions from students to help plan their course of studies.

Virtual assistants can tutor and guide more personalized learning. As part of its Open Learning Initiative (OLI), Carnegie Mellon University has been working on AI-based cognitive tutors for a number of years. It found that its OLI statistics course, run with minimal instructor contact, resulted in comparable learning outcomes for students with fewer hours of study. In one course at the Georgia Institute of Technology, students could not tell the difference between feedback from a human being and a bot.

Also see:

Digital audio assistants in teaching and learning — from blog.blackboard.com by Szymon Machajewski

Excerpts:

I built an Amazon Alexa skill called Introduction to Computing Flashcards. In using the skill, or Amazon Alexa app, students are able to listen to Alexa and then answer questions. Alexa helps students prepare for an exam by speaking definitions and then waiting for their identification. In addition to quizzing the student, Alexa also keeps track of the correct answers. If a student answers five questions correctly, Alexa shares a game code, which is worth class experience points in the course gamification app, My Game.

Certainly, exam preparation apps are one way to use digital assistants in education. As development and publishing of Amazon Alexa skills becomes easier, faculty will be able to produce such skills just as easily as they now create PowerPoints. Given the basic code available through Amazon tutorials, it takes 20 minutes to create a new exam preparation app. Basic voice experience Amazon Alexa skills can take as much as five minutes to complete.

Universities can publish their campus news through the Alexa Flash Briefing. This type of a skill can publish news, success stories, and other events associated with the campus.

If you are a faculty member, how can you develop your first Amazon Alexa skill? You can use any of the tutorials already available. You can also participate in an Amazon Alexa classroom training provided by Alexa Dev Days. It is possible that schools or maker spaces near you offer in-person developer sessions. You can use meetup.com to track these opportunities.
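
For faculty curious what such a skill looks like under the hood, here is a minimal sketch of a flashcard-style handler in the spirit of the one described above. To be clear, this is not Machajewski's code: it is an AWS Lambda function using the raw Alexa request/response JSON format, and the card data and "AnswerIntent" name are invented for illustration.

```python
# Hypothetical flashcard skill handler (AWS Lambda, raw Alexa JSON format).
import random

CARDS = {
    "What does CPU stand for?": "central processing unit",
    "What does RAM stand for?": "random access memory",
}

def speak(text, end_session=False):
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    request = event["request"]
    if request["type"] == "LaunchRequest":
        return speak(f"Welcome to flashcards. {random.choice(list(CARDS))}")
    if request["type"] == "IntentRequest" and request["intent"]["name"] == "AnswerIntent":
        heard = (request["intent"]["slots"]["Answer"].get("value") or "").lower()
        # Simplified: accept a match against any card's answer. A real skill
        # would remember the current question in session attributes.
        correct = heard in CARDS.values()
        return speak("Correct!" if correct else "Not quite. Try another one.")
    return speak("Goodbye.", end_session=True)
```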

Augmented Reality Technology: A student creates the closest thing yet to a magic ring — from forbes.com by Kevin Murnane

Excerpt:

Nat Martin set himself the problem of designing a control mechanism that can be used unobtrusively to meld AR displays with the user’s real-world environment. His solution was a controller in the shape of a ring that can be worn on the user’s finger. He calls it Scroll. It uses the ARKit software platform and contains an Arduino circuit board, a capacitive sensor, gyroscope, accelerometer, and a Softpot potentiometer. Scroll works with any AR device that supports the Unity game engine such as Google Cardboard or Microsoft’s Hololens.

Also see:

Scroll from Nat on Vimeo.

Addendum on 8/15/17:

New iOS 11 ARKit Demo Shows Off Drawing With Fingers In Augmented Reality [Video] — from redmondpie.com by Oliver Haslam

Excerpt:

When Apple releases iOS 11 to the public next month, it will also release ARKit for the first time. The framework, designed to make bringing augmented reality to iOS a reality, debuted during the opening keynote of WWDC 2017 when Apple announced iOS 11, and ever since then we have been seeing new concepts and demos released by developers.

Those developers have given us a glimpse of what we can expect when apps taking advantage of ARKit start to ship alongside iOS 11, and the latest of those is a demonstration in which someone’s finger is used to draw on a notepad.

Why Natural Language Processing is the Future of Business Intelligence — from dzone.com by Gur Tirosh
Until now, we have been interacting with computers in a way that they understand, rather than us. We have learned their language. But now, they’re learning ours.

Excerpt:

Every time you ask Siri for directions, a complex chain of cutting-edge code is activated. It allows “her” to understand your question, find the information you’re looking for, and respond to you in a language that you understand. This has only become possible in the last few years. Until now, we have been interacting with computers in a way that they understand, rather than us. We have learned their language.

But now, they’re learning ours.

The technology underpinning this revolution in human-computer relations is Natural Language Processing (NLP). And it’s already transforming BI, in ways that go far beyond simply making the interface easier. Before long, business-transforming, life-changing information will be discovered merely by talking with a chatbot.

This future is not far away. In some ways, it’s already here.

What Is Natural Language Processing?
NLP, otherwise known as computational linguistics, is the combination of Machine Learning, AI, and linguistics that allows us to talk to machines as if they were human.
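
To illustrate the interface idea (not any vendor's actual implementation), here is a toy sketch that maps a natural-language BI question onto a structured SQL query using keyword matching; the metric names, table, and date cutoffs are all invented. Real NLP-driven BI relies on trained language models rather than keyword tables.

```python
# Toy sketch: keyword-based mapping from a question to a SQL query.
# All table/column names and date cutoffs are invented for illustration.
METRICS = {"revenue": "SUM(revenue)", "orders": "COUNT(order_id)"}
PERIODS = {
    "last quarter": "order_date >= DATE '2017-10-01'",
    "this year": "order_date >= DATE '2018-01-01'",
}

def to_sql(question: str) -> str:
    q = question.lower()
    metric = next((sql for word, sql in METRICS.items() if word in q), "COUNT(*)")
    where = next((sql for phrase, sql in PERIODS.items() if phrase in q), "TRUE")
    return f"SELECT {metric} FROM sales WHERE {where};"

print(to_sql("What was our revenue last quarter?"))
# SELECT SUM(revenue) FROM sales WHERE order_date >= DATE '2017-10-01';
```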

But NLP aims to eventually render GUIs — even UIs — obsolete, so that interacting with a machine is as easy as talking to a human.
