Top 10 future technology jobs: VR developer, IoT specialist and AI expert — from v3.co.uk; with thanks to Norma Owen for this resource
V3 considers some of the emerging technology jobs that could soon enter your business

Top 10 jobs:

10. VR developer
9.   Blockchain engineer/developer
8.   Security engineer
7.   Internet of Things architect
6.   UX designer
5.   Data protection officer
4.   Chief digital officer
3.   AI developer
2.   DevOps engineer
1.   Data scientist

What are micro-credentials? — from onlineschoolscenter.com; with thanks to Arabella Bronson for the resource

[Infographic: What Are Micro-credentials? — OnlineSchoolsCenter.com]

Amazon now lets you test drive Echo’s Alexa in your browser — by Dan Thorp-Lancaster

Excerpt:

If you’ve ever wanted to try out the Amazon Echo before shelling out for one, you can now do just that right from your browser. Amazon has launched a dedicated website where you can try out an Echo simulation and put Alexa’s myriad of skills to the test.

[Image: Echosim.io — the Amazon Echo simulator, May 2016]

From DSC:
Using voice and gestures to communicate with a computing device or software program represents a growing form of human-computer interaction (HCI).  With the growth of artificial intelligence (AI), personal assistants, and bots, we should expect to see voice recognition services and capabilities baked into an increasing number of products and solutions in the future.

Given these trends, personnel working within K-12 and higher education need to start building their knowledge bases now, so that we can begin offering more courses in the near future that help students build these skill sets.  Current user experience designers, interface designers, programmers, graphic designers, and others will also need to augment their skill sets.

“The flipped classroom is about moving the hard parts of learning into the classroom”
— Derek Bruff

Class Time Reconsidered: Flipping the Literature Class — from Derek Bruff

Excerpt:

Reading a text or commenting on a blog post are usually activities we ask students to do outside of class. But Helen and Humberto have flipped that idea by bringing those activities into the classroom, where they could be collaborative and communal. Why? Because close reading of a text and responding to another writer’s argument are both important skills in a literature course. Why not have students practice those skills during class, when they can receive feedback on that practice from both their instructor and their peers?

And that’s what the flipped classroom is actually all about. Never mind that bit about lecture videos and group work. The flipped classroom is about moving the hard parts of learning into the classroom, where they can benefit from what Helen Shin calls “shared temporal, spatial, and cognitive presence.” After all, most of us only have maybe 150 minutes a week with our students during class. Shouldn’t we spend that time engaging our students in the kind of practice and feedback that’s central to learning in our disciplines?

Also see:

Teaching has always been a learning experience for me. Likewise, theory and practice go hand in hand, each supplementing and enriching the other. What I learned by “flipping” the “flipped classroom” model with this particular experience, which I hope to carry over to more classes to follow, is that there is always more room for further conceptual, and perspectival flipping in the interactive and dynamic process of teaching and learning.

Beyond touch: designing effective gestural interactions — from blog.invisionapp.com by Yanna Vogiazou; with thanks to Mark Pomeroy for the resource

The future of interaction is multimodal.

Excerpts:

The future of interaction is multimodal. But combining touch with air gestures (and potentially voice input) isn’t a typical UI design task.

Gestures are often perceived as a natural way of interacting with screens and objects, whether we’re talking about pinching a mobile screen to zoom in on a map, or waving your hand in front of your TV to switch to the next movie. But how natural are those gestures, really?

Try not to translate touch gestures directly to air gestures even though they might feel familiar and easy. Gestural interaction requires a fresh approach—one that might start as unfamiliar, but in the long run will enable users to feel more in control and will take UX design further.

Forget about buttons — think actions.

 

Eliminate the need for a cursor as feedback, but provide an alternative.
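To make the “think actions” guidance concrete, here is a minimal sketch of dispatching recognized gestures to semantic actions, with non-cursor feedback (a sound, a glow, a haptic pulse) standing in for a pointer. It is purely illustrative: the gesture names, the confidence threshold, and the feedback function are assumptions on my part, not anything from the InVision article.

```python
# Illustrative only: gesture names, events, and feedback are hypothetical.
from dataclasses import dataclass

@dataclass
class GestureEvent:
    name: str          # e.g., "swipe_left", "push", "grab"
    confidence: float  # recognizer confidence, 0.0 to 1.0

def feedback(message: str) -> None:
    """Stand-in for non-cursor feedback: a sound, a glow, a haptic pulse."""
    print(f"[feedback] {message}")

# Map gestures to semantic actions rather than to on-screen buttons.
ACTIONS = {
    "swipe_left":  lambda: feedback("next item"),
    "swipe_right": lambda: feedback("previous item"),
    "push":        lambda: feedback("selected"),
    "grab":        lambda: feedback("picked up"),
}

def handle(event: GestureEvent, threshold: float = 0.7) -> None:
    action = ACTIONS.get(event.name)
    if action is None or event.confidence < threshold:
        feedback("not recognized, try again")  # acknowledge, don't guess
        return
    action()

handle(GestureEvent("swipe_left", 0.92))
```

The dispatch table is the point: the interface responds to what the user means (next, select, grab) rather than to where a cursor happens to be.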

appPicker

Per Ivan Kreimer, “every day we feature the top 100 educational apps for iPhone and iPad, and also review many individual apps for teachers and students.”

[Images: appPicker’s top education apps lists, May 2016]

Now you can build your own Amazon Echo at home—and Amazon couldn’t be happier — from qz.com by Michael Coren

Excerpt:

Amazon’s $180 Echo and the new Google Home (due out later this year) promise voice-activated assistants that order groceries, check calendars and perform sundry tasks of your everyday life. Now, with a little initiative and some online instructions, you can build the devices yourself for a fraction of the cost. And that’s just fine with the tech giants.

At this weekend’s Bay Area Maker Faire, Arduino, an open-source electronics manufacturer, announced new hardware “boards”—bundles of microprocessors, sensors, and ports—that will ship with voice and gesture capabilities, along with wifi and bluetooth connectivity. By plugging them into the free voice-recognition services offered by Google’s Cloud Speech API and Amazon’s Alexa Voice Service, anyone can access world-class natural language processing power, and tap into the benefits those companies are touting. Amazon has even released its own blueprint and code repository to build a $60 version of its Echo using Raspberry Pi, another piece of open-source hardware.

From DSC:
Perhaps this type of endeavor could find its way into some project-based learning out there, as well as in:

  • Some Computer Science-related courses
  • Some Engineering-related courses
  • User Experience Design bootcamps
  • Makerspaces
  • Programs targeted at gifted students
  • Other…??
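For anyone wanting a concrete starting point for such a project, here is a minimal sketch of the listen/transcribe/respond loop at the heart of these DIY assistants, using the Python SpeechRecognition package. To be clear about assumptions: this is not Amazon’s Raspberry Pi blueprint, the toy “skill” below is hypothetical, and a real build would wire in the Alexa Voice Service or Google’s Cloud Speech API with proper credentials.

```python
# A minimal voice-assistant loop (illustrative sketch, not Amazon's blueprint).
# Requires: pip install SpeechRecognition pyaudio
from datetime import datetime

import speech_recognition as sr

recognizer = sr.Recognizer()

def listen_once() -> str:
    """Capture one utterance from the default microphone and transcribe it."""
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # calibrate to room noise
        audio = recognizer.listen(source)
    try:
        # Sends the audio to Google's free web speech API for transcription.
        return recognizer.recognize_google(audio)
    except sr.UnknownValueError:  # speech was unintelligible
        return ""

def respond(text: str) -> str:
    """Toy 'skill' dispatch; a real assistant would call a cloud service."""
    if "time" in text.lower():
        return datetime.now().strftime("It is %H:%M.")
    return "Sorry, I don't know that one yet."

if __name__ == "__main__":
    heard = listen_once()
    print("You said:", heard)
    print("Assistant:", respond(heard))
```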

[Image: Goldman Sachs report, January 2016]
With thanks to Fred Steube for this resource

Virtual reality facilitates higher ed research and teaches high-risk skills — from edtechmagazine.com by Jacquelyn Bengfort
From neuroscience to ship navigation, virtual environments deliver real-world learning inside the classroom.

Excerpt:

Simulators are an important part of these students’ education. Stepping into one of three full-mission bridge simulators replicates the experience of standing in an ocean liner’s pilothouse and lets students practice their skills in handling a ship — without risk.

“In the simulator, you can push the limits of the environment: increase the amount of current that’s there, go to the limits of the amount of wind that can be handled by the tugs,” says Capt. Victor Schisler, one of Cal Maritime’s simulator instructors.

Oculus Launches Virtual Reality Program in High Schools — from thejournal.com by Sri Ravipati
The new initiative provides students with VR equipment to create short films on social issues.

Excerpt:

Oculus has announced a new pilot program for high school students to use virtual reality as a tool for social change.

As part of the VR for Good initiative, the 360 Filmmaker Challenge will connect nine San Francisco Bay Area high schools with professional filmmakers to create three- to five-minute, 360-degree films about their communities. Students will receive a Samsung Gear VR, a Galaxy S6, Ricoh Theta S 360 cameras and access to editing software to make their films, according to Oculus.

How Adobe is connecting virtual reality with the world of product placement: 360-degree video mixes atmosphere and ads — from adweek.com by Marty Swant

Excerpt:

Interested in watching the 2015 hit film The Martian from the surface of the moon? Adobe wants to take you there.

Adobe isn’t entering the latest next-generation space race to compete with SpaceX, Blue Origin or Virgin Galactic anytime soon. But it is for the first time entering the worlds of virtual reality and augmented reality through new Adobe Primetime products.

[On May 17th] Adobe debuted Virtual Cinema, a feature that will allow Primetime clients to develop experiences for users to enter a virtual environment. According to Adobe, users will be able to view traditional video in a custom environment—a cinema, home theater or branded atmosphere—and watch existing TV and motion picture content in a new way. There’s also potential for product placement within the virtual/augmented reality experience.

Samsung Gear 360 Unboxing and Video Test — from vrscout.com by Jonathan Nafarrete

[Image: 360-degree camera comparisons, May 2016]

Could HoloLens’ augmented reality change how we study the human body? — from edtechmagazine.com by D. Frank Smith
Case Western Reserve University is helping to revolutionize medical-science studies with a new technology from Microsoft.

Excerpt:

While the technology world’s attention is on virtual reality, a team of researchers at Case Western Reserve University (CWRU) is fixated on another way to experience the world — augmented reality (AR).

Microsoft’s forthcoming AR headset, HoloLens, is at the forefront of this technology. The company calls it the first holographic computer. In AR, instead of being surrounded by a virtual world, viewers see virtual objects projected on top of reality through a transparent lens.

CWRU was among the first in higher education to begin working with HoloLens, back in 2014. They’ve since discovered new ways the tech could help transform education. One of their current focuses is changing how students experience medical-science courses.

How to make a mixed reality video and livestream from two realities — from uploadvr.com by Ian Hamilton

Excerpt:

Follow these steps to record or stream mixed reality footage with the HTC Vive
A mixed reality video is one of the coolest ways to show people what a virtual environment feels like. A green screen makes it easy for a VR-ready PC to automatically remove everything from a camera’s feed, except for your body movements.  Those movements are then seamlessly combined with a view from another camera in a virtual environment. As long as the two cameras are synced, you can seamlessly combine views of two realities into a single video. In essence, mixed reality capture is doing what Hollywood or your weatherman has been doing for years, except at a fraction of the cost and in real-time. The end result is almost magical.
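The green-screen step described above is ordinary chroma keying. As a rough illustration of that one step (my own sketch, not UploadVR’s actual pipeline), here is how a frame from the physical camera can be keyed and composited over the matching frame from the virtual camera using OpenCV. The file names and HSV bounds are placeholder assumptions, and both frames must share the same dimensions.

```python
# Chroma-key compositing sketch (illustrative only, not UploadVR's pipeline).
# Requires: pip install opencv-python
import cv2

# Placeholder inputs: a frame from the physical (green-screen) camera and
# the matching frame rendered by the synced virtual camera, same dimensions.
camera_frame = cv2.imread("camera_frame.png")    # person before the green screen
virtual_frame = cv2.imread("virtual_frame.png")  # view from inside the VR scene

# Find "green" pixels in HSV space; these bounds are typical starting values
# that would be tuned to the actual screen and lighting.
hsv = cv2.cvtColor(camera_frame, cv2.COLOR_BGR2HSV)
green_mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))
person_mask = cv2.bitwise_not(green_mask)

# Keep the person from the camera feed, and the VR scene everywhere else.
person = cv2.bitwise_and(camera_frame, camera_frame, mask=person_mask)
scene = cv2.bitwise_and(virtual_frame, virtual_frame, mask=green_mask)
composite = cv2.add(person, scene)

cv2.imwrite("mixed_reality_frame.png", composite)
```

Doing this per frame, in sync with a live virtual-camera feed, is essentially what mixed reality capture tools automate in real time.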

[Image: Google I/O 2016]

9 most important things from the Google I/O keynote — from androidcentral.com by Jen Karner

Excerpt:
Here’s a breakdown of the nine big things Google brought to I/O 2016.

  1. Now on Steroids — Google Assistant
  2. Google Home — Amazon Who?
  3. Allo — A smarter messenger
  4. Duo — Standalone video chat
  5. Everything Android N
  6. Android Wear 2.0
  7. The future — Android Instant Apps
  8. New Android Studio
  9. New Firebase tools

CEO Sundar Pichai comes in at the 14:40 mark:

I/O: Building the next evolution of Google — from googleblog.blogspot.com

Excerpts:

Which is why we’re pleased to introduce…the Google assistant. The assistant is conversational—an ongoing two-way dialogue between you and Google that understands your world and helps you get things done. It makes it easy to buy movie tickets while on the go, to find that perfect restaurant for your family to grab a quick bite before the movie starts, and then help you navigate to the theater. It’s a Google for you, by you.

Google Home is a voice-activated product that brings the Google assistant to any room in your house. It lets you enjoy entertainment, manage everyday tasks, and get answers from Google—all using conversational speech. With a simple voice command, you can ask Google Home to play a song, set a timer for the oven, check your flight, or turn on your lights. It’s designed to fit your home with customizable bases in different colors and materials. Google Home will be released later this year.

Google takes a new approach to native apps with Instant Apps for Android — from techcrunch.com by Frederic Lardinois, Sarah Perez

Excerpt:

Mobile apps often provide a better user experience than browser-based web apps, but you first have to find them, download them, and then try not to forget you installed them. Now, Google wants us to rethink what mobile apps are and how we interact with them.

Instant Apps, a new Android feature Google announced at its I/O developer conference today but plans to roll out very slowly, wants to bridge this gap between mobile apps and web apps by allowing you to use native apps almost instantly — even when you haven’t previously installed them — simply by tapping on a URL.

Google isn’t launching a standalone VR headset…yet — from uploadvr.com

Excerpt:

To the disappointment of many, Google Vice President of Virtual Reality Clay Bavor did not announce the much-rumoured (and now discredited) standalone VR HMD at today’s Google I/O keynote.

Instead, the company announced a new platform for VR on the upcoming Android N to live on called Daydream. Much like Google’s pre-existing philosophy of creating specs and then pushing the job of building hardware to other manufacturers, the group is providing the boundaries for the initial public push of VR on Android, and letting third-parties build the phones for it.

Google’s Android VR Platform is Called ‘Daydream’ and Comes with a Controller — from vrguru.com by Constantin Sumanariu

Excerpt:

Speaking at the opening keynote for this week’s Google I/O developer conference, the company’s head of VR Clay Bavor announced that the latest version of Android, the unnamed Android N, would be getting a VR mode. Google calls the initiative to get the Android ecosystem ready for VR ‘Daydream’, and it sounds like a massive extension of the groundwork laid by Google Cardboard.

Conversational AI device: Google Home — from postscapes.com

Excerpt:

Google finally has its answer to Amazon’s voice-activated personal assistant device, Echo. It’s called Google Home, and it was announced today at the I/O developer conference.

Movies, TV Shows and More Comes to Daydream VR Platform — from vrguru.com by Constantin Sumanariu

Allo is Google’s new, insanely smart messaging app that learns over time — from androidcentral.com by Jared DiPane

Excerpt:

Google has announced a new smart messaging app, Allo. The app is based on your phone number, and it will continue to learn from you over time, making it smarter each day. In addition to this, you can add more emotion to your messages, in ways that you couldn’t before. You will be able to “whisper” or “shout” your message, and the font size will change depending on which you select. This is accomplished by pressing the send button and dragging up or down to change the level of emotion.
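As described, the whisper/shout control amounts to a mapping from drag distance to text size. Here is a toy sketch of such a mapping (purely illustrative; the gain and size limits are invented, and Allo’s actual implementation has not been published):

```python
def emotion_font_size(drag_pixels: float, base: float = 16.0,
                      min_size: float = 10.0, max_size: float = 48.0) -> float:
    """Map a drag on the send button (up = positive, 'shout'; down = negative,
    'whisper') to a font size. All constants are illustrative guesses."""
    size = base + 0.15 * drag_pixels  # gain chosen arbitrarily
    return max(min_size, min(max_size, size))

print(emotion_font_size(120))  # long drag up: 34.0, a shout
print(emotion_font_size(-80))  # drag down: clamped to 10.0, a whisper
```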

Google follows Facebook into chatbots — from marketwatch.com by Jennifer Booton
Google’s new home assistant and messenger service will be powered by AI

Excerpt:

Like Facebook’s bots, the Google assistant is designed to be conversational. It will play on the company’s investment in natural language processing, talking to users in a dialogue format that feels like normal conversation, and helping users buy movie tickets, make dinner reservations and get directions. The announcement comes one month after Facebook CEO Mark Zuckerberg introduced Messenger with chatbots, which serves basically the same function.

Teaching while learning: What I learned when I asked my students to make video essays — from chronicle.com by Janine Utell, Professor of English at Widener University

Excerpt:

This is not exactly a post about how to teach the video essay (or the audiovisual essay, or the essay video, or the scholarly video).  At the end I share some resources for those interested in teaching the form: the different ways we might define the form, some of the theoretical/conceptual ideas undergirding the form, how it allows us to make different kinds of arguments, and some elements of design, assignment and otherwise.

What I’m interested in here is reflecting on what this particular teaching moment has taught me.  It’s a moment still in progress/process.  These reflections might pertain to any teaching moment where you’re trying something new, where you’re learning as the students are learning, where everyone in the room is slightly uncomfortable (in a good, stretching kind of way), where failure is possible but totally okay, and where you’re able to bring in a new interest of your own and share it with the students.

Take two:  I tried this again in an upper-level narrative film course, and the suggestions made by students in the previous semester paid off.  With the additional guidance, students felt comfortable enough being challenged with the task of making the video; a number of them shared that they liked having the opportunity to learn a new skill, and that it was stimulating to have to think about new ways of making choices around what they wanted to say.  Every step of realizing their storyboard and outline required some problem-solving, and they were able to articulate the work of critical thinking in surprising ways (I think they themselves were a little surprised, too).

From DSC:

A couple of comments that I wanted to make here include:

  1. I greatly appreciate Janine’s humility, her wonderful spirit of experimentation, and her willingness to learn something right along with her students. She told her students that she had never done this before and that they were all learning together; she asked them for feedback along the way and incorporated that feedback into subsequent uses of the exercise. Students and faculty members need to recognize and accept that few people know it all these days — experts are a dying breed in many fields, as the pace of change outstrips anyone’s ability to keep up.

  2. Helping students build their new media literacy skills is critical these days, and Janine did a great job in this regard. Unfortunately, she is in a small minority. I run a Digital Studio on our campus, and too often I am dismayed to see that our students are not learning skills that will serve them well once they graduate (not to mention how much they would benefit from being able to craft multimedia-based messages and put those messages online during their studies). Such skills will serve students well in whatever vocation they pursue. Knowing the tools of the trade — working with digital audio and video, storyboarding, graphics, typography, and more — equips a person to communicate a message powerfully.

FLEXspace: Sharing the best of learning space design — from campustechnology.com by Mary Grush
A Q&A with Lisa Stephens and Rebecca Frazee

Excerpt:

FLEXspace is a repository and open online education resource institutions can use to research and share information about successful learning space design. A true grass-roots collaboration, it is free to access for .edu domain owners. Using ARTstor Shared Shelf, the community is growing a rich resource that, only about five years after its inception, is already demonstrating its potential to improve both the process of creating campus learning spaces and the science of using them. Campus Technology spoke with two FLEXspace team leaders (who work remotely from their home institutions): Lisa Stephens, Senior Strategist, SUNY Academic Innovation, SUNY System Administration and the University at Buffalo; and Rebecca Frazee, FLEXspace Manager and Lecturer, Learning Design and Technology Program, San Diego State University.

[Image: FLEXspace, March 2014]

Some other items regarding learning spaces:

  • The Learning Space Rating System where they mention:
    • The four categories of formal learning space they use in LSRS version 1 are:
      • Discussion-focused classrooms designed to support meetings of the full course cohort (example: seminar rooms)
      • Team-based classrooms with fixed furnishing (example: the step-up design)
      • Presentation-focused classrooms (examples: lecture halls, auditoria)
      • Versatile classrooms that support some combination of the above designs, or are slightly more specialized in the type of learning they support (example: a room with entirely mobile furnishings that can be set in a traditional or team-based fashion)

[Image: pkal.org — “4 questions” graphic]

Romans 11:33 New International Version (NIV)

Doxology

33 Oh, the depth of the riches of the wisdom and knowledge of God!
    How unsearchable his judgments,
    and his paths beyond tracing out!

The future of personalised education — from IBM.com

Excerpt:

From curriculum to career with cognitive systems
Data-driven cognitive technologies will enable personalised education and improve outcomes for students, educators and administrators. Ultimately, education experiences will be transformed and improved when data can accompany the students throughout their life-long learning journey.

In this research, we explore how educators are using digital education services to deliver personalised programs. Through inputs from a series of in-depth interviews, surveys and social listening, we paint a picture of how the world of work and learning might evolve over the next forty years.

[Images: IBM, “The future of personalised education,” May 2016]

The 2016 Higher Education Online Learning Landscape [Online Learning Consortium]
Today’s students are driving the online learning imperative

Excerpt:

Our new infographic illustrates the key topics and trends currently driving the infusion of online learning in higher education based on the most current research from across the field of higher education.

[Infographic: The 2016 Higher Education Online Learning Landscape]

Download our infographic to learn more about our perspective on the changing face of online learning.

The infographic highlights a number of trends that are affecting this changing landscape, including:

  • Digital learning opportunities being quickly embraced by today’s students
  • An evolving, growing higher education population seeking ease of access and affordable solutions
  • Rising tuition costs requiring innovative alternatives, particularly for low-income families
  • Growing acceptance and adoption of education technology
  • Academic leaders’ continued commitment to online learning
  • And the ongoing education conversation in the federal government

[Image: Momentum continues to build — 2016 online learning]

Adobe Photoshop unveils artificial intelligence tool to identify fonts from 20,000 typefaces — from ibtimes.co.uk by Mary-Ann Russon

Excerpt:

Graphic designers, rejoice – the hours upon hours of struggling to figure out what a typeface is will finally be over, as Adobe is adding an artificial intelligence tool to help detect and identify fonts from any type of picture, sketch or screenshot.

The DeepFont system features advanced machine-learning algorithms that send pictures of typefaces from Photoshop software on a user’s computer to be compared to a huge database of over 20,000 fonts in the cloud, and within seconds, results are sent back to the user, akin to the way the music discovery app Shazam works.
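At matching time, the Shazam-style lookup described here reduces to nearest-neighbor search over font “fingerprints.” Below is a toy sketch of that final step only; the 128-dimensional vectors are invented for illustration, while DeepFont’s real fingerprints come from a deep network trained by Adobe.

```python
# Nearest-neighbor font matching: a toy sketch of the lookup step only.
# The embeddings here are random stand-ins for a trained model's output.
import numpy as np

rng = np.random.default_rng(0)

# Pretend database: one fingerprint vector per known font.
font_names = [f"font_{i}" for i in range(20_000)]
font_db = rng.normal(size=(20_000, 128))

def best_matches(query: np.ndarray, k: int = 5) -> list[str]:
    """Return the k fonts whose fingerprints are closest (cosine) to the query."""
    db = font_db / np.linalg.norm(font_db, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    scores = db @ q                     # cosine similarity to every font
    top = np.argsort(scores)[::-1][:k]  # indices of the k best scores
    return [font_names[i] for i in top]

# In the real system, the query vector would come from running a cropped
# image of the unknown text through the trained model.
print(best_matches(rng.normal(size=128)))
```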

© 2024 | Daniel Christian