From DSC:
Below are some questions and thoughts that are going through my mind:
Will “class be in session” soon on tools like Prysm & Bluescape?
Will this type of setup be the next platform that we’ll use to meet our need to be lifelong learners? That is, will what we know of today as Learning Management Systems (LMS) and Content Management Systems (CMS) morph into this type of setup?
Via platforms/operating systems like tvOS, will our connected TVs turn into much more collaborative devices, allowing us to contribute content with learners from all over the globe?
Prysm is already available on mobile devices, and what we consider a television continues to morph.
Will second and third screens be used in such setups? What functionality will be assigned to the main/larger screens? To the mobile devices?
Will colleges and universities innovate into such setups? Or will organizations like LinkedIn.com/Lynda.com lead in this space? Or will it be a bit of both?
How will training, learning and development groups leverage these tools/technologies?
Are there some opportunities for homeschoolers here?
Along these lines, here are some videos/images/links for you:
To enhance the Prysm Visual Workplace, Prysm today announced an integration with Microsoft OneDrive for Business and Office 365. Using the OneDrive for Business API from Microsoft, Prysm has made it easy for customers to connect Prysm to their existing OneDrive for Business environments to make it a seamless experience for end users to access, search for, and sync with content from OneDrive for Business. Within a Prysm Visual Workplace project, users may now access, work within and download content from Office 365 using Prysm’s built-in web capabilities.
Which jobs/positions are being impacted by new forms of Human Computer Interaction (HCI)?
What new jobs/positions will be created by these new forms of HCI?
Will it be necessary for instructional technologists, instructional designers, teachers, professors, trainers, coaches, learning space designers, and others to pulse check this landscape? Will that be enough?
Or will such individuals need to dive much deeper than that in order to build the necessary skillsets, understandings, and knowledgebases to meet the new/changing expectations for their job positions?
How many will say, “No thanks, that’s not for me” — causing organizations to create new positions that do dive deeply in this area?
Will colleges and universities build and offer more courses involving HCI?
Will Career Services Departments get up to speed in order to help students carve out careers involving new forms of HCI?
How will languages and language translation be impacted by voice recognition software?
Will new devices be introduced to our classrooms in the future?
In the corporate space, how will training departments handle these new needs and opportunities? How will learning & development groups be impacted? How will they respond in order to help the workforce get/be prepared to take advantage of these sorts of technologies? What does it mean for these staffs personally? Do they need to invest in learning more about these advancements?
As an example of what I’m trying to get at here, who all might be involved with an effort like Echo Dot? What types of positions created it? Who all could benefit from it? What other platforms could these technologies be integrated into? Besides the home, where else might we find these types of devices?
Echo Dot is a hands-free, voice-controlled device that uses the same far-field voice recognition as Amazon Echo. Dot has a small built-in speaker—it can also connect to your speakers over Bluetooth or with the included audio cable. Dot connects to the Alexa Voice Service to play music, provide information, news, sports scores, weather, and more—instantly.
Echo Dot can hear you from across the room, even while music is playing. When you want to use Echo Dot, just say the wake word “Alexa” and Dot responds instantly. If you have more than one Echo or Echo Dot, you can set a different wake word for each—you can pick “Amazon”, “Alexa” or “Echo” as the wake word.
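For a sense of what sits behind a voice interaction like this, here is a deliberately simplified sketch of a backend that routes a recognized intent to a spoken reply. The intent names, slots, and replies below are invented for illustration; a real Alexa skill uses the Alexa Skills Kit's JSON request/response format rather than this toy shape.

```python
# Illustrative sketch only: map a recognized voice intent to a response
# string. Intent and slot names here are made up; a real skill handles
# the Alexa Skills Kit request/response JSON.

def handle_request(request):
    """Route a parsed voice request to a spoken response string."""
    intent = request.get("intent", "Unknown")
    slots = request.get("slots", {})

    if intent == "GetWeatherIntent":
        city = slots.get("city", "your area")
        return f"Here is the weather for {city}."
    if intent == "PlayMusicIntent":
        artist = slots.get("artist", "a popular artist")
        return f"Playing music by {artist}."
    return "Sorry, I didn't catch that."

# Example: the device hears "Alexa, what's the weather in Grand Rapids?"
print(handle_request({"intent": "GetWeatherIntent",
                      "slots": {"city": "Grand Rapids"}}))
```

The point is not the toy code itself but the pipeline it stands in for: far-field microphones and cloud speech recognition turn audio into a structured intent, and the application logic only ever sees that structure.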
Or how might students learn about the myriad of technologies involved with IBM’s Watson? What courses are out there today that address this type of thing, and are more in the works? In which areas (Computer Science, User Experience Design, Interaction Design, other)?
Lots of questions…but few answers at this point. Still, given the increasing pace of technological change, it’s important that we think about this type of thing and become more responsive, nimble, and adaptive in our organizations and in our careers.
If you’ve ever wanted to try out the Amazon Echo before shelling out for one, you can now do just that right from your browser. Amazon has launched a dedicated website where you can try out an Echo simulation and put Alexa’s myriad of skills to the test.
From DSC: The use of the voice and gesture to communicate to some type of computing device or software program represent growing types of Human Computer Interaction (HCI). With the growth of artificial intelligence (AI), personal assistants, and bots, we should expect to see more voice recognition services/capabilities baked into an increasing amount of products and solutions in the future.
Given these trends, personnel working within K-12 and higher ed need to start building their knowledgebases now so that we can begin offering more courses in the near future to help students build their skillsets. Current user experience designers, interface designers, programmers, graphic designers, and others will also need to augment their skillsets.
The future of interaction is multimodal. But combining touch with air gestures (and potentially voice input) isn’t a typical UI design task.
…
Gestures are often perceived as a natural way of interacting with screens and objects, whether we’re talking about pinching a mobile screen to zoom in on a map, or waving your hand in front of your TV to switch to the next movie. But how natural are those gestures, really?
… Try not to translate touch gestures directly to air gestures even though they might feel familiar and easy. Gestural interaction requires a fresh approach—one that might start as unfamiliar, but in the long run will enable users to feel more in control and will take UX design further.
Forget about buttons — think actions.
Eliminate the need for a cursor as feedback, but provide an alternative.
Amazon’s $180 Echo and the new Google Home (due out later this year) promise voice-activated assistants that order groceries, check calendars and perform sundry tasks of your everyday life. Now, with a little initiative and some online instructions, you can build the devices yourself for a fraction of the cost. And that’s just fine with the tech giants.
At this weekend’s Bay Area Maker Faire, Arduino, an open-source electronics manufacturer, announced new hardware “boards”—bundles of microprocessors, sensors, and ports—that will ship with voice and gesture capabilities, along with wifi and bluetooth connectivity. By plugging them into the free voice-recognition services offered by Google’s Cloud Speech API and Amazon’s Alexa Voice Service, anyone can access world-class natural language processing power, and tap into the benefits those companies are touting. Amazon has even released its own blueprint and code repository to build a $60 version of its Echo using Raspberry Pi, another piece of open-source hardware.
From DSC: Perhaps this type of endeavor could find its way into some project-based learning out there, as well as in:
Graphic designers, rejoice – the hours upon hours of struggling to identify a typeface will finally be over, as Adobe is adding an artificial intelligence tool to help detect and identify fonts from any picture, sketch or screenshot.
The DeepFont system features advanced machine-learning algorithms that send pictures of typefaces from Photoshop software on a user’s computer to be compared to a huge database of over 20,000 fonts in the cloud, and within seconds, results are sent back to the user, akin to the way the music discovery app Shazam works.
Per Jack Du Mez at Calvin College, use this app to randomly call on your students — while instilling a game-like environment into your active learning classroom (ALC)!
Description:
Randomly is an app made specifically for teachers and professors. It allows educators to enter their students into individual classes. They can then use the Random Name Selector feature to randomly call on a student to answer a question in one of two ways: truly random, where repeated names are allowed, or one-pass, where all students are called once before any are called again. The device you’re using will even call out (vocally) the student’s name for you!
This app can also be used to randomly generate groups for you. You can split your class into groups by number of groups or by number of students per group. It intelligently knows what to do with any remaining students too!
This app supports Apple Watch, so you can call on your students with the use of your Apple Watch!
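The selection and grouping behavior described above can be sketched in a few lines. This is an illustrative re-implementation inferred from the app's description, not the Randomly app's actual code; the function names are my own.

```python
import random

# Illustrative re-implementation of the two selection modes and the
# group-splitting feature described for the "Randomly" app.

def pick_truly_random(students, rng=random):
    """Truly random mode: the same student may be picked twice in a row."""
    return rng.choice(students)

def make_one_pass_picker(students, rng=random):
    """One-pass mode: every student is called once before anyone repeats."""
    pool = []
    def pick():
        nonlocal pool
        if not pool:                      # refill and reshuffle when empty
            pool = list(students)
            rng.shuffle(pool)
        return pool.pop()
    return pick

def split_into_groups(students, num_groups, rng=random):
    """Split a class into num_groups groups, spreading any remainder."""
    shuffled = list(students)
    rng.shuffle(shuffled)
    groups = [[] for _ in range(num_groups)]
    for i, student in enumerate(shuffled):
        groups[i % num_groups].append(student)  # round-robin absorbs leftovers
    return groups
```

The round-robin assignment in `split_into_groups` is one simple way to "intelligently" handle remaining students: leftover students are spread one per group rather than lumped together.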
From DSC: In the future, given facial and voice recognition software, I could see an Augmented Reality (AR)-based application whereby a faculty member or a teacher could see icons hovering over the students — letting the faculty member/teacher know who has been called upon recently and who hasn’t been called upon recently (with settings to change the length of time for this type of tracking — i.e., this student has been called upon in this class session, or in the last week, or in the last month, etc.).
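As a thought experiment, the tracking logic behind such an AR overlay could be quite simple: record when each student was last called on, then ask "who has no recorded call inside the chosen window?" The window names and lengths below are illustrative assumptions, and the clock is injectable purely so the sketch can be tested.

```python
import time

# Sketch of the call-on tracking behind the AR overlay idea above.
# Window names/lengths are illustrative; a real app would make them
# configurable settings.

class CallTracker:
    WINDOWS = {"session": 60 * 60,          # this class session (~1 hour)
               "week": 7 * 24 * 3600,
               "month": 30 * 24 * 3600}

    def __init__(self, clock=time.time):
        self.clock = clock
        self.last_called = {}               # student name -> timestamp

    def record_call(self, student):
        self.last_called[student] = self.clock()

    def not_called_within(self, students, window="week"):
        """Students with no recorded call inside the given window."""
        cutoff = self.clock() - self.WINDOWS[window]
        return [s for s in students
                if self.last_called.get(s, 0) < cutoff]
```

Pair this with facial recognition to identify who is who, and the AR layer only has to render an icon per name returned by `not_called_within`.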
College students across the country — from the University of Southern California to the University of Minnesota to Southern Methodist University — are also experimenting with virtual reality applications via clubs, design labs and hackathons.
The tech industry is taking note.
One of the bigger showcases for the technology took place last month, when the University of Southern California’s Virtual Reality Club (VRSC) hosted its first annual Virtual Reality Festival and Demo Day, a showcase of projects and panels with The Walt Disney Company as its title sponsor.
Students traveled from the University of California-San Diego, UCLA, Chapman University, Loyola Marymount University and the University of Colorado-Boulder to attend the fest. The judges were industry professionals from companies including NVIDIA, Google, Maker Studios and Industrial Light and Magic’s X Lab.
Some $25,000 in prizes were split among winners in four categories: 360 Live-Action Videos, 360 Animation, Interactive VR Games and Immersive Technology/Augmented Reality (AR). VR/AR categories ranged from health care to games, journalism, interactive design and interpretive dance.
An idea/question from DSC: Looking at the article below, I wonder…“Why can’t the ‘One Day University‘ come directly into your living room — 24×7?”
This is why I’m so excited about the “The Living [Class] Room” vision. Because it is through that vision that people of all ages — and from all over the world — will be able to constantly learn, grow, and reinvent themselves (if need be) throughout their lifetimes. They’ll be able to access and share content, communicate and discuss/debate with one another, form communities of practice, go through digital learning playlists (like Lynda.com’s Learning Paths) and more. All from devices that represent the convergence of the television, the telephone, and the computer (and likely converging with the types of devices that are only now coming into view, such as Microsoft’s Hololens).
You won’t just be limited to going back to college for a day — you’ll be able to do that 24×7 for as many days of the year as you want to.
Then when some sophisticated technologies are integrated into this type of platform — such as artificial intelligence, cloud-based learner profiles, algorithms, and the ability to setup exchanges for learning materials — we’ll get some things that will blow our minds in the not too distant future! Heutagogy on steroids!
Have you ever thought about how nice it would be if you could go back to college, just for the sake of learning something new, in a field you don’t know much about, with no tests, homework or studying to worry about? And you won’t need to take the SAT or the ACT to be accepted? You can, at least for a day, with something called One Day University, the brainchild of a man named Steve Schragis, who about a decade ago brought his daughter to Bard College as a freshman and thought that he wanted to stay.
One Day University now financially partners with dozens of newspapers — including The Washington Post — and a few other organizations to bring lectures to people around the country. The vast majority of the attendees are over the age of 50 and interested in continuing education, and One Day University offers them only those professors identified by college students as fascinating. As Schragis says, it doesn’t matter if you are famous; you have to be a great teacher. For example, Schragis says that since Bill Gates has never been shown to be one, he can’t teach at One Day University.
…
We bring together these professors, usually four at a time, to cities across the country to create “The Perfect Day of College.” Of course we leave out the homework, exams, and studying! Best if there’s real variety, both male and female profs, four different schools, four different subjects, four different styles, etc. There’s no one single way to be a great professor. We like to show multiple ways to our students.
Most popular classes are history, psychology, music, politics, and film. Least favorite are math and science.
We know the shelf-life of skills is getting shorter and shorter. So whether it’s to brush up on new skills or to stay on top of evolving ones, Lynda.com can help you stay ahead of the latest technologies.
Leighton Wilks noticed a palpable difference when his class moved from a traditional lecture-style classroom to an active learning space. Not only did attendance increase, but students were more engaged and collaborative.
“I see a lot more team cohesion. They’re talking more to each other because they’re sitting with their teams. It’s nice to foster that teamwork throughout the semester.”
Wilks is an instructor in the Haskayne School of Business and teaches a second-year organizational behaviour course in the newly-renovated active learning classroom in Scurfield Hall. He found that the space breaks down the boundary between instructor and student.
“Instead of being up at the front, I’m walking around. I feel I get a lot more questions and get to know the students better, which is important.”
Creating Great Digital Spaces for Learning — from slideshare.net by Phil Vincent; Professor Andrew Harrison, Professor of Practice at University of Wales Trinity St David and Director, Spaces That Work Ltd.; from Jisc DigiFest 2016
Pedagogy
Preparation for the 21st-century workforce demands that educators shift the authority for learning to the students. After all, today’s workers are expected to function in collaborative and horizontal environments, as opposed to the “factory” driven, top-down, solitary worker spaces of yesterday. Therefore, contemporary learning environments should lean heavily on collaborative spaces, supported through personalized learning technologies. Good pedagogy encourages student engagement through complex collaborative projects based on real-world problems.
… Technology
Innovative learning should incorporate a true BYOD (bring your own device) environment that provides opportunities for student-centered learning, beginning with their own personalized technologies — from laptops and tablets to smartphones and wearable devices. This approach leverages student devices and reduces the need for institutionally provided equipment.
… Supporting Distance Learning
Strategies being used within Unified Communications and Collaboration solutions provide the means to support the involvement of remote participants, whether they are present on the WAN or solely connecting via Internet services. Since these solutions are moving to cloud-based topologies, they are mostly services that individuals subscribe to directly or have access to through campus-based subscription services. These features are also beginning to appear in social media environments, such as Facebook and LinkedIn, so the opportunity for use may become as easy as installing another app in the not-too-distant future.
Wireless presentation, lecture capture, online collaboration and active-learning methodologies all require the ability for any and all participants to engage the installed resources within the facility while they also access their personal content, whether local to their personal devices or within the cloud. With the video tools now available to the consumer, the use of conferencing apps will continue to rise. The environments that engage students and faculty will need to allow for any user to log in and access his or her content and presentation appliances without hurdles or roadblocks. Access to subject matter experts or other individuals will also need to be supported. With the deployment of video tools via social media, users will also rely more on their personal accounts for contact management instead of an address book. These changes in workflow are disruptors to the policies that many institutions have put in place as it relates to BYOD usage on their networks. Success of these communication and education solutions requires that networks focus on and easily support three key technologies: wireless presentation, collaboration and participation by remote team members.
If you happen to have a fourth generation Apple TV, then there’s some good news: Apple’s “Live Tune-In” feature is officially live. This feature allows an Apple TV user to ask Siri to automatically transport them to the livestream of a tvOS app from the home screen, an idea that could go a long way in making the Apple TV’s app navigation less painful.
From DSC: What if these live streams were live lectures?
Two days of Facebook’s F8 Conference have come and gone, so here’s a look back at all the things you may have missed from the event. To learn more about each topic, click the links below for full stories.
In a wide-ranging keynote April 12, CEO Mark Zuckerberg laid out the company’s 10-year plan to “Give everyone the power to share anything with anyone.” To do so, Facebook plans to move far beyond its original role as a social network. The firm aims to launch new virtual reality projects, beam Internet across the world using drones and unleash complex artificial-intelligence bots that can fulfill our every digital need.
Before all that can happen, Facebook has to deal with the here and now of improving its current products. On that front, the company made several announcements that will reshape the way people and brands use Facebook and its constellation of apps this year.
Here’s a breakdown of Facebook’s biggest F8 announcements.
When Facebook bought Oculus VR in 2014 for $2 billion, many observers wondered what the world’s largest social networking company wanted with a virtual reality company whose then-unreleased system was pretty much all about single-user experiences. Today at F8, Facebook’s annual developers conference in San Francisco, the company showed off some of the most fleshed-out examples of how it sees VR as a rich social tool. During his F8 keynote address, CTO Mike Schroepfer talked at length about what Facebook explicitly calls “social VR.”
One of the key knocks on virtual reality, the gamer-heavy industry Facebook is betting big on, is that wearing a headset intended to block out the real world in favor of a virtual one isn’t a very social activity. Facebook, an inherently social company, thinks it can change that.
At its F8 developer conference on Wednesday Facebook demoed what it calls “social VR,” which is exactly what it sounds like: Connecting two or more real people in a virtual world.
During the second day keynote of Facebook’s F8 Developer Conference, Oculus showed off an entirely new way to get social in VR.
On stage, Facebook’s CTO Mike Schroepfer showed how 360-degree photos can instantly be shared with a friend in VR, with 360 photos appearing as handheld spheres. You can virtually grab the floating sphere and smash it against your face, you will then be instantly teleported into the content of the spherical photo.
[On 4/12/16], Facebook opened up Instant Articles to all publishers. If you don’t know, Instant Articles are Facebook’s new way to natively load articles within the app using an adapted RSS feed. These native articles, which have a lightning bolt in the top right corner, load in half a second, 10x faster than if a user were to click out to a website. From what I’ve seen so far, they really do load instantaneously and have a great layout and user experience. And if you’re paying attention, you’ll understand that this is their third push for native media consumption: first photos, then videos, and now written content.
…
However, as of [4/13/16], Instant Articles become available to anybody with a Facebook page and a blog. This is a key opportunity for small blogs and publications to get ahead of the game and really understand how best to use the new product.
… Has Facebook been able to achieve what AOL could have a generation ago? By that I mean: Has Facebook become a layer on top of the Internet itself?
From DSC: Let’s take some of the same powerful concepts (as mentioned below) into the living room; then let’s talk about learning-related applications.
MightyTV, which has raised more than $2 million in venture funding to date, launched today with a former Google exec at the helm. The startup’s technology incorporates machine learning with computer-generated recommendations in what is being touted as a “major step up” from other static list-making apps.
…
In this age of Roku and Apple TV, viewers can choose what to watch via the apps they’ve downloaded. MightyTV curates those programs — shows, movies and YouTube videos — into one app without constantly switching between Amazon, HBO, Netflix or Hulu.
…
Among the features included on MightyTV are:
* A Tinder-like interface that allows users to swipe through content, allowing the service to learn what you’d like to watch
* An organizer tool that lists content via price range
* A discovery tool to see what friends are watching
* Support for group viewings and binge watching
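Here is a minimal sketch of how that swipe feedback might train a recommender. MightyTV's actual model is not public; the tag-scoring approach, class name, and tags below are purely illustrative.

```python
from collections import defaultdict

# Toy preference learner: each right/left swipe nudges per-tag scores,
# and recommendations rank items by the learned scores. Purely
# illustrative of the swipe-to-learn idea, not MightyTV's real model.

class SwipeRecommender:
    def __init__(self):
        self.tag_scores = defaultdict(float)

    def record_swipe(self, item_tags, liked):
        """+1 to each of the item's tags on a right swipe, -1 on a left."""
        delta = 1.0 if liked else -1.0
        for tag in item_tags:
            self.tag_scores[tag] += delta

    def score(self, item_tags):
        return sum(self.tag_scores[t] for t in item_tags)

    def recommend(self, catalog):
        """Return catalog items sorted best-first by learned tag scores."""
        return sorted(catalog,
                      key=lambda item: self.score(item["tags"]),
                      reverse=True)
```

Swap movie tags for course topics and the same loop could rank learning modules instead of shows, which is exactly the kind of substitution imagined in the living-room learning scenario.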
From DSC: What if your Apple TV could provide these sorts of functionalities for services and applications that are meant for K-12 education, higher education, and/or corporate training and development?
Instead of Amazon, HBO, Netflix or Hulu — what if the interface would present you with a series of learning modules, MOOCs, and/or courses from colleges and universities that had strong programs in the area(s) that you wanted to learn about?
That is, what if a tvOS-based system could learn more about you and what you are trying to learn about? It could draw upon IBM Watson-like functionality to provide you with a constantly morphing, up-to-date recommendation list of modules that you should look at. Think microlearning. Reinventing oneself. Responding to the exponential pace of change. Pursuing one’s passions. More choice/more control. Lifelong learning. Staying relevant. Surviving.
…all from a convenient, accessible room in your home…your living room.
A cloud-based marketplace…matching learners with providers.
Now tie those concepts in with where LinkedIn.com and Lynda.com are going and how people will get jobs in the future.
April 12, 2016 — Mountain View, CA—BlueJeans Network, the global leader in cloud-based video communication services, today unveiled the Enterprise Video Cloud, a comprehensive platform built for today’s globally distributed, modern workforce with video communications at the core. New global research shows that 85% of employees are already using video in the workplace and 72% believe that video will transform the way they communicate at work.
“There is a transformation happening among businesses today – face-to-face video is quickly rising as the preferred communications medium, offering new opportunities for deeper personal relations and outreach, as well as for improved internal and external collaboration,” said Krish Ramakrishnan, CEO of BlueJeans. “Once people experience the power of video, they ‘hang up’ on traditional conference calling. We are seeing this happen with the emergence of video cultures that power the most innovative companies—from Facebook and Netflix to Viacom and Del Monte.”
From DSC: I wonder if we’ll see video communication vendors such as BlueJeans or The Video Call Center merge with vendors like Bluescape, Mezzanine, or T1V with their collaboration tools. If so, some serious collaboration could happen, once again right from within your living room!
I’m often asked about the set up I use to film my videos. Here’s a 360 spherical photo that I’ve annotated. Feel free to scroll and zoom around to check out my setup.