Google leads $542 million funding of mysterious augmented reality firm Magic Leap — from theverge.com by Jacob Kastrenakes and Ben Popper

Excerpt:

Google is leading a huge $542 million round of funding for the secretive startup Magic Leap, which is said to be working on augmented reality glasses that can create digital objects that appear to exist in the world around you. Though little is known about what Magic Leap is working on, Google is placing a big bet on it: in addition to the funding, Android and Chrome leader Sundar Pichai will join Magic Leap’s board, as will Google’s corporate development vice-president Don Harrison. The funding is also coming directly from Google itself — not from an investment arm like Google Ventures — all suggesting this is a strategic move to align the two companies and eventually partner when the tech is more mature down the road.


Magic Leap also says that it may “positively transform the process of education.”


Also see:

[Image: Magic Leap, Oct 2014]

[Image: World of Comenius, Oct 2014]

‘World of Comenius’ demonstrates powerful educational interaction w/ Leap Motion & Oculus Rift and Tomas “Frooxius” Mariancik

Excerpt:

We recently covered an (at that point unnamed) VR project by developer Tomáš “Frooxius” Mariančík, the mind behind the stunning ‘Sightline’ VR series of demos. The project fused Leap Motion skeletal hand tracking with Oculus Rift DK2 positional tracking to produce an impressively intuitive VR interface. Now, a new video of the interface shows impressive progress. We catch up with Tomáš to find out some more about this mysterious VR project.

Enter the ‘World of Comenius’
The virtual reality resurgence that is currently underway necessarily and predictably concentrates on bringing people new ways to consume and experience media and games. This is where virtual reality has the best chance of breaking through as a viable technology, one that will appeal to consumers worldwide. But virtual reality’s greatest impact, at least in terms of historical worth to society, could and probably will come in the form of non-entertainment based fields.

 

 

Also see:

 

The amazing ways new tech shapes storytelling — from stuff.tv by Stephen Graves

Excerpt:

From the moment some singer-poet livened up his verse performances with a musical instrument, technology has changed entertainment. The printing press, theatrical lighting, the cinema, radio, cinematic sound – they’ve all either impacted on existing storytelling forms, or created whole new ones.

In recent years, the arrival of digital formats and non-linear editing changed TV. Existing TV formats like drama benefited from the same level of technical polish as films; and at the same time, the ability to shoot and edit large amounts of footage quickly and cheaply created a whole new form of storytelling – reality TV.

Streaming media’s one thing – but the biggest tech leap in years is, of course, your smartphone. Texting during films may infuriate but whipping your phone out in the cinema may become an integral part of the story: the 2013 film App used a second-screen app to display extra layers of narrative, synced to the film’s soundtrack. There are books that use second-screen apps: last year’s Night Film lets you scan tags in the physical book to unlock extra content, including mocked-up websites and trailers.

 

 

[Image: The amazing ways new tech shapes storytelling]

 

 

 

Also see:

 

From DSC:
I’m thinking out loud again…

What if we were able to take the “If This Then That” (IFTTT) concept and combine it with sensor-based technologies?  It seems to me that we’re at the very embryonic stages of some very powerful learning scenarios, scenarios that are packed with learning potential, engagement, intrigue, interactivity, and opportunities for participation.

For example, what would happen if you went to one corner of the room, causing an app on your mobile device to launch and bring up a particular video to review?  After you view the video, a brief quiz appears to check your understanding of its main points. Then, once you’ve submitted the quiz — and it’s been received by system ABC — this triggers an unexpected learning event for you.

Combining the physical with the digital…

Establishing IFTTT-based learning playlists…

Building learning channels…learning triggers…learning actions…

Setting a schedule of things to do for a set of iBeacons over a period of time (and being able to save that schedule of events for “next time”).

Hmmm…there’s a lot of potential here!
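As a very rough sketch of what an IFTTT-style “learning trigger” engine might look like, here is a small thought-experiment in code. Everything in it — the beacon IDs, the video and quiz names, the class names — is hypothetical, not an existing system or API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class LearningRule:
    """An IFTTT-style rule: IF this beacon is detected, THEN run this action."""
    beacon_id: str              # hypothetical iBeacon region identifier
    action: Callable[[], str]   # what the learner's device should do

@dataclass
class LearningPlaylist:
    """A saved 'learning channel': an ordered set of trigger/action rules
    that could be reused 'next time' with a different group of learners."""
    rules: list[LearningRule] = field(default_factory=list)

    def on_beacon(self, beacon_id: str) -> list[str]:
        """Fire every action whose trigger matches the detected beacon."""
        return [rule.action() for rule in self.rules if rule.beacon_id == beacon_id]

# Hypothetical scenario from above: walking to one corner of the room
# launches a video, and a comprehension quiz is queued behind it.
playlist = LearningPlaylist([
    LearningRule("corner-1", lambda: "play video: intro-to-cells.mp4"),
    LearningRule("corner-1", lambda: "queue quiz: cells-check"),
])
print(playlist.on_beacon("corner-1"))
```

Saving a `LearningPlaylist` is essentially the “learning playlist” idea above: the same schedule of triggers and actions could be stored and replayed for the next class.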

 

 

[Image: If This Then That combined with iBeacons]

 

 

[Image: If This Then That]

 

 

[Image: iBeacons and education, Aug 2014]

 

 

Now throw augmented reality, wearables, and intelligent tutoring into the equation! Whew!

We need to be watching out for how machine-to-machine (M2M) communications can be leveraged in the classrooms and training programs across the globe.

One last thought here…
How are we changing our curricula to prepare students to leverage the power of the Internet of Things (IoT)?

 

Beacons at the museum: Pacific Science Center to roll out location-based Mixby app next month — from geekwire.com by Todd Bishop

Excerpt:

Seattle’s Pacific Science Center has scheduled an Oct. 4 public launch for a new system that uses Bluetooth-enabled beacons and the Mixby smartphone app to offer new experiences to museum guests — presenting them with different features and content depending on where they’re standing at any given moment.

 

Also see:

 

From DSC:
The use of location-based apps & associated technologies (machine-to-machine (M2M) communications) should be part of all ed tech planning from here on out — and also applicable to the corporate world and training programs therein. 

Not only applicable to museums, but also to art galleries, classrooms, learning spaces, campus tours, and more.  Such apps could be used on plant floors in training-related programs as well.

Now mix augmented reality in with location-based technology.  Come up to a piece of artwork, and a variety of apps could be launched to really bring that piece to life! Some serious engagement.

Digital storytelling. The connection of the physical world with the digital world. Digital learning. Physical learning. A new form of blended/hybrid learning.  Active learning. Participation.

 

 

 

Addendum on 9/4/14 — also see:

Aerohive Networks Delivers World’s First iBeacon™ and AltBeacon™ – Enabled Enterprise Wi-Fi Access Points
New Partnership with Radius Networks Delivers IoT Solution to Provide Advanced Insights and Mobile Experience Personalization

Excerpt (emphasis DSC):

SUNNYVALE, Calif.–(BUSINESS WIRE)–Aerohive Networks® (NYSE:HIVE), a leader in controller-less Wi-Fi and cloud-managed mobile networking for the enterprise market, today announced that it is partnering with Radius Networks, a market leader in proximity services and proximity beacons with iBeacon™ and AltBeacon™ technology, to offer retailers, educators and healthcare providers a cloud-managed Wi-Fi infrastructure enabled with proximity beacons. Together, Aerohive and Radius Networks provide complementary cloud platforms for helping these organizations meet the demands of today’s increasingly connected customers who are seeking more personalized student education, patient care and shopper experiences.

 

Also:

 

 

 

Does Studying Fine Art = Unemployment? Introducing LinkedIn’s Field of Study Explorer — from LinkedIn.com by Kathy Hwang

Excerpt:

[On July 28, 2014], we are pleased to announce a new product – Field of Study Explorer – designed to help students like Candice explore the wide range of careers LinkedIn members have pursued based on what they studied in school.

So let’s explore the validity of this assumption: studying fine art = unemployment by looking at the careers of members who studied Fine & Studio Arts at Universities around the world. Are they all starving artists who live in their parents’ basements?

 

 

[Image: LinkedIn Field of Study Explorer, July 2014]

 

 

Also see:

The New Rankings? — from insidehighered.com by Charlie Tyson

Excerpt:

Who majored in Slovak language and literature? At least 14 IBM employees, according to LinkedIn.

Late last month LinkedIn unveiled a “field of study explorer.” Enter a field of study – even one as obscure in the U.S. as Slovak – and you’ll see which companies Slovak majors on LinkedIn work for, which fields they work in and where they went to college. You can also search by college, by industry and by location. You can winnow down, if you desire, to find the employee who majored in Slovak at the Open University and worked in Britain after graduation.

 

 

 

[Image: Ideas for libraries, Daniel Christian, Spring/Summer 2014]

Augmented Reality: 32 resources about using it in education — from mediaspecialistsguide.blogspot.com by Julie Greller

Excerpt:

According to Webster’s Dictionary, augmented reality is “an enhanced version of reality created by the use of technology to overlay digital information on an image of something being viewed through a device (as a smartphone camera); also: the technology used to create augmented reality.”  Think of it as a type of virtual reality, using the computer to copy your world. You are probably familiar with a tool created by Google which falls into this category: Google Glass. Although augmented reality has existed for a long time, we as teachers are only now grasping how to use it in the classroom. Let’s take a look below.

 
 

Mark Zuckerberg talks about the purchase of Oculus VR

Excerpt (emphasis DSC):

But this is just the start. After games, we’re going to make Oculus a platform for many other experiences. Imagine enjoying a court side seat at a game, studying in a classroom of students and teachers all over the world or consulting with a doctor face-to-face — just by putting on goggles in your home.

This is really a new communication platform. By feeling truly present, you can share unbounded spaces and experiences with the people in your life. Imagine sharing not just moments with your friends online, but entire experiences and adventures.

These are just some of the potential uses. By working with developers and partners across the industry, together we can build many more. One day, we believe this kind of immersive, augmented reality will become a part of daily life for billions of people.

Virtual reality was once the dream of science fiction. But the internet was also once a dream, and so were computers and smartphones. The future is coming and we have a chance to build it together. I can’t wait to start working with the whole team at Oculus to bring this future to the world, and to unlock new worlds for all of us.

 

If you like immersion, you’ll love this reality — from nytimes.com by Farhad Manjoo

Excerpt:

Virtual reality is coming, and you’re going to jump into it.
That’s because virtual reality is the natural extension of every major technology we use today — of movies, TV, videoconferencing, the smartphone and the web. It is the ultra-immersive version of all these things, and we’ll use it exactly the same ways — to communicate, to learn, and to entertain ourselves and escape.
The only question is when.

 

Oculus Rift just put Facebook in the movie business — from variety.com by Andrew Wallenstein

Excerpt:

The surprise $2 billion acquisition of virtual-reality headset maker Oculus Rift by a social network company might seem to have nothing to do with movie theaters or films. But in the long term, this kind of technology is going to have a place in the entertainment business. It’s just a matter of time.

While Cinemacon-ites grapple with how best to keep people coming to movie theaters, Oculus Rift will be part of the first wave of innovation capable of bringing an incredible visual environment to people wherever they choose. But this probably won’t require people congregating in theaters for an optimal experience the way 3D does.

Virtual reality is thought of primarily as a vehicle for gaming, but there are applications in the works that utilize the technology for storytelling. There’s already a production company, Condition One, out with a trailer for a documentary, “Zero Point” (see video above), about VR, told through VR. Media companies are just starting to get their hands on how to use VR, which to date has been for marketing stunts like one HBO did at SXSW for “Game of Thrones.” One developer even recreated the apartment from “Seinfeld.”

 

An Oculus Rift hack that lets you draw in 3-D — from wired.com by Joseph Flaherty

Excerpt:

Facebook’s $2 billion acquisition of Oculus Rift has reopened conversation about the potential of virtual reality, but a key question remains: How will we actually interact with these worlds? Minecraft creator Markus Persson noted that while these tools can enable amazing experiences, moving around and creating in them is far from a solved problem.

A group of Royal College of Art students–Guillaume Couche, Daniela Paredes Fuentes, Pierre Paslier, and Oluwaseyi Sosanya–has developed a tool called GravitySketch that starts tracing an outline of how these systems could work as creative tools.

 

Gravity – 3D Sketching from GravitySketch on Vimeo.

Addendum later on 4/8/14:

 
 

Layar’s industry leading Augmented Reality app now available on Google Glass — from layar.com

Excerpt:

AMSTERDAM, NEW YORK, TORONTO – March 19th, 2014 – Layar, the world’s number one provider of Augmented Reality (AR) and Interactive Print products and services, today announced the availability of its industry leading mobile app on Google Glass. Glass users can go to Layar.com/Glass to download the app and see instructions for how to install it. By just saying “Ok Glass, scan this,” users can easily experience any of the platform’s over 200,000 Interactive Print pages and 6,000 location-based Geo Layers.

With Interactive Print, static print content comes alive with videos, photo slideshows, links to buy and share and immersive 3D experiences. Glass users can now access Layar’s rapidly growing platform of Interactive Print campaigns, including magazines like Men’s Health, Inc. and Glamour, as well as newspapers, advertising, art and more. Geo Layers allow users to see location-based information – including points-of-interest like local real estate listings, geotagged media like nearby photos and tweets, 3D art and more – in an augmented, “heads up” view using the camera on the Glass device.

 

Excerpt of video:

[Image: Layar on Google Glass, March 2014]

 

From DSC:
With Layar’s Creator app, there could be numerous creative applications of these technologies within the realm of education.  For example, in a Chemistry class, one could have printouts of some of the types of equipment one would use in an experiment.

 

[Image: Types of chemistry equipment]

 

Looking at a particular piece of paper (with the app loaded) would trigger a pop-up with that piece of equipment’s name, function, and/or other information, as well as which step(s) of the experiment use that piece of equipment.

Or, one could see instructions for how to put things together using this combination of tools. A set of printed directions could pop up a quick video for how to execute that step of the directions. (I sure could have used that sort of help in putting together our daughter’s crib I tell ya!)
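A minimal sketch of the marker-to-content lookup behind such an idea follows. The marker IDs, equipment entries, and step numbers are invented for illustration; a real Layar campaign would be authored in Layar’s own tools rather than in code like this:

```python
# Hypothetical mapping from a recognized print marker to the pop-up content:
# the equipment's name, its function, and the experiment steps that use it.
EQUIPMENT_CONTENT = {
    "marker-burette": {
        "name": "Burette",
        "function": "Dispenses a measured volume of liquid.",
        "used_in_steps": [3, 4],
    },
    "marker-flask": {
        "name": "Erlenmeyer flask",
        "function": "Holds the solution being titrated.",
        "used_in_steps": [1, 3],
    },
}

def popup_for(marker_id: str) -> str:
    """Build the overlay text shown when the camera recognizes a marker."""
    info = EQUIPMENT_CONTENT.get(marker_id)
    if info is None:
        return "No content registered for this marker."
    steps = ", ".join(str(s) for s in info["used_in_steps"])
    return f"{info['name']}: {info['function']} (used in steps {steps})"

print(popup_for("marker-burette"))
```

The same lookup shape would serve the assembly-directions idea: a recognized step marker returning a short how-to video instead of text.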


Digital life in 2025 — by Janna Anderson and Lee Rainie
Experts predict the Internet will become ‘like electricity’ — less visible, yet more deeply embedded in people’s lives for good and ill

 

Report here.

 

Excerpts from report:

One striking pattern is that these experts agree to a large extent on the trends that will shape digital technology in the next decade. Among those expected to extend through 2025 are:

  • A global, immersive, invisible, ambient networked computing environment built through the continued proliferation of smart sensors, cameras, software, databases, and massive data centers in a world-spanning information fabric known as the Internet of Things.
  • “Augmented reality” enhancements to the real-world input that people perceive through the use of portable/wearable/implantable technologies.
  • A continuing evolution of artificial intelligence-equipped tools allowing anyone to connect to a globe-spanning information network nearly anywhere, anytime.
  • Disruption of business models established in the 20th century (most notably impacting finance, entertainment, publishers of all sorts, and education).
  • Tagging, databasing, and intelligent analytical mapping of the physical and social realms.

 

 

[Image: Digital Life in 2025]

 

 

Also see:

  • The Rise of the Digital Silhouette — from shift2future.com by Brian Kuhn
    Excerpt:
    Switching gears for a moment…  there are some significant benefits to education systems in the ‘Internet of things’ movement.  Imagine that students, wearing various data logging technologies, including Google Glasses, interacting with each other, with ‘text books’, human teachers, and other learning resources, along with a host of educational apps, are continuously digitally documented.  Imagine that there are ‘intelligent’ algorithms (think IBM’s Watson but even more advanced) that look for patterns, provide real-time recommendations and coaching that adjust the student’s personalized learning plan, directly interacting with and advising the students like a personal learning coach.  Imagine that when a report card is due, the student’s ‘digital learning guide’ automatically produces a summative report card complete with a ‘live’ info graphic on the student’s learning and generates it directly in the student’s online learning portfolio and sends an alert to the parents.
 

Google Glass: Not the only eye candy in town — from huffingtonpost.com by Robin Raskin; with special thanks to Mr. Rob Bobeldyk [Asst. Dir. Teaching & Learning at Calvin College] for this resource

Excerpted applications:

  • Manipulate objects in virtual space with your “real” hands… pulling, tugging, tapping and stretching
  • Creating a solid gamer and entertainment experience
  • A great training app/factory tool
  • Recognize the world in front of you and then overlay information atop of it
  • Play a game just by moving your eyes
  • A mature suite of apps can work with your calendar and make phone calls. A built in HD display camera takes and shares photos and videos.
  • Immersive/ultra-realistic entertainment experiences
  • Wearable headset aimed at the sports/active lifestyle enthusiast
  • Enterprise applications


Also, some excerpted applications from “Google Glass is Transforming Wearable Technology” — by Daniel Burrus

  • Access information and a camera to capture activities in front of you
  • Use voice recognition to have it type messages or to send commands, like you do with Apple’s Siri or Google’s version of Siri, called Google Now
  • Doctors are using Google Glass during surgery so they don’t have to take their eyes off the operating table to view things like blood pressure, pulse, and temperature readings
  • Help train future surgeons
  • Google Glass would show not only the stores in your line of sight, but also overlay the types or names of products in each store

 

Also see:

 

 

Bye bye second screen? The InAIR lets you browse the web and watch TV all in one place — from techcrunch.com by Colleen Taylor (@loyalelectron)

Excerpt:

Nowadays, many people browse the web at the same time that they’re watching TV — the phenomenon is called the “second screen.” But a new gadget called InAIR from a startup called SeeSpace wants to bring our attention back to just one screen by putting the best of the laptop, the smart phone, and the TV all together in one place.

 

[Image: InAIR: bye bye second screen?, March 2014]

 

From DSC:
I like what Nam Do, the CEO of SeeSpace (the startup behind InAIR), is saying about one’s attention and how it’s divided when you are using a second screen.  That is, when you are trying to process some information from the large screen and some information from the smaller/more mobile screen on your lap or desk, you keep looking up…then looking down. Looking up…then looking down.  By putting information in closer proximity in one’s visual channel, perhaps processing information will be more intuitive and less mentally taxing.

This fits with an app I saw yesterday called Spritz.

 

[Image: Spritz, March 2014]

In this posting, it said:

To understand Spritz, you must understand Rapid Serial Visual Presentation (RSVP). RSVP is a common speed-reading technique used today. However, RSVP was originally developed for psychological experiments to measure human reactions to content being read. When RSVP was created, there wasn’t much digital content and most people didn’t have access to it anyway. The internet didn’t even exist yet.  With traditional RSVP, words are displayed either left-aligned or centered. Figure 1 shows an example of a center-aligned RSVP, with a dashed line on the center axis.

When you read a word, your eyes naturally fixate at one point in that word, which visually triggers the brain to recognize the word and process its meaning. In Figure 1, the preferred fixation point (character) is indicated in red. In this figure, the Optimal Recognition Position (ORP) is different for each word. For example, the ORP is only in the middle of a 3-letter word. As the length of a word increases, the percentage that the ORP shifts to the left of center also increases. The longer the word, the farther to the left of center your eyes must move to locate the ORP.

[Image: Figure 1, word positioning with the Optimal Recognition Position highlighted]

In the Science behind this app, it says (emphasis DSC):

Reading Basics
Traditional reading involves publishing text in lines and moving your eyes sequentially from word to word. For each word, the eye seeks a certain point within the word, which we call the “Optimal Recognition Point” or ORP. After your eyes find the ORP, your brain starts to process the meaning of the word that you’re viewing. With each new word, your eyes move, called a “saccade”, and then your eyes seek out the ORP for that word. Once the ORP is found, processing the word for meaning and context occurs and your eyes move to the next word. When your eyes encounter punctuation within and between sentences, your brain is prompted to assemble all of the words that you have read and processes them into a coherent thought.

When reading, only around 20% of your time is spent processing content. The remaining 80% is spent physically moving your eyes from word to word and scanning for the next ORP. With Spritz we help you get all that time back.
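The left-shifting ORP pattern described above can be turned into a small sketch. Spritz does not publish its exact rules, so the length thresholds below are illustrative assumptions that merely match the behavior described: the ORP sits in the middle of a 3-letter word and drifts left of center as words get longer.

```python
def orp_index(word: str) -> int:
    """Approximate the Optimal Recognition Point: the character the eye
    fixates on. Middle of a 3-letter word; left of center for longer
    words. Thresholds are illustrative, not Spritz's actual rules."""
    n = len(word)
    if n <= 1:
        return 0
    if n <= 5:
        return 1
    if n <= 9:
        return 2
    if n <= 13:
        return 3
    return 4

def align_on_orp(word: str, pivot: int = 4) -> str:
    """Pad each word so its ORP character lands in the same column,
    mimicking a center-axis RSVP display; the ORP letter is bracketed."""
    i = orp_index(word)
    return " " * (pivot - i) + word[:i] + "[" + word[i] + "]" + word[i + 1:]

for w in ["a", "the", "reading", "presentation"]:
    print(align_on_orp(w))
```

Keeping the ORP on a fixed axis is what lets RSVP remove the “saccade” time: the eye never has to move to find the next word’s recognition point.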

 

So, if the convergence of the television, the telephone, and the computer continues, I think Nam Do’s take on things might be very useful. If I were watching a lecture on the large screen, for example, I could get some additional information in closer proximity to the professor.  If the pieces of content I wanted to see were more closely aligned with each other, perhaps my mind could take in more things…more efficiently.

 

[Image: Closer proximity, Daniel Christian, Learning from the Living Class Room]

 

 

[Image: The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV]

 
© 2024 | Daniel Christian