A title from DSC: How Amazon uses its vast data resources to reinvent the bookstore
Excerpt (emphasis DSC):
Amazon’s First Foray into Physical Retail — While Utilitarian — Takes Discovery to New Levels
As a longtime city dweller living in a neighborhood full of history, I had mixed feelings about the arrival of Amazon’s first bricks-and-mortar bookstore in a city neighborhood (the first four are located in malls). Like most of my neighbors around Chicago’s Southport Corridor, I prefer the charm of owner-operated boutiques. Yet as a tech entrepreneur who holds Amazon founder Jeff Bezos in the highest esteem, I was excited to see how Amazon would reimagine the traditional bookstore given their customer obsession and their treasure trove of user data. Here’s what I discovered…
…
The Bottom Line:
I will still go to Amazon.com for the job of ordering a book that I already know I want (and to the local Barnes and Noble if I need it today). But when I need to discover a book for gifts (Father’s Day is coming up soon enough) or for my own interest, nothing that I have seen compares to Amazon Books. We had an amazing experience and discovered more books in 20 minutes than we had in the past month or two.
The physical manifestation of the “if you like…then you’ll love…”
The ultra-metric combining insights from disparate sources seems more compelling than standard best-seller lists
Google has recently released a brand new version of Google Earth for both Chrome and Android. The new version comes with a slew of nifty features teachers can use for educational purposes with students in class. Following is a quick overview of the most fascinating features…
From DSC: In terms of learning, being in the same physical place as others is far less of a requirement than it used to be. But I’m not just talking about online learning here. I’m talking about a new type of learning environment that involves both hardware and software to facilitate collaboration (and was designed that way from day 1). These new types of setups can provide us with new opportunities and affordances that we should begin experimenting with immediately.
Check out the following products — all of which allow a person to contribute to a discussion or conversation from anywhere they can get Internet access:
When you go to those sites, you will see words and phrases such as:
Visual collaboration software
Virtual workspace
Develop
Share
Inspire
Design
Global teams
A visual collaboration solution that links locations, teams, content, and devices in an immersive, shared workspace
Teamwork
Create and brainstorm with others
Digital workplace platform
Eliminate the distance between in-office and remote employees
Jumpstart spontaneous brainstorms and working sessions
So using these types of software and hardware setups, I can contribute regardless of where I’m located. Remote learning — from anywhere in the world — being combined with our face-to-face based classrooms.
Also, the push for Active Learning Classrooms (ALCs) continues across higher education. Such hands-on, project-based, student-centered approaches fit extremely well with the collaboration setups mentioned above.
“…video conferencing is increasingly an application within a larger workflow…”
Lastly, if colleges and universities don’t have the funds to maintain their physical plants, look for higher education to move increasingly online — and these types of solutions could play a significant role in that environment. Plus, for working adults who need to reinvent themselves, this is an extremely efficient means of picking up some new skills and competencies.
So the growth of these types of setups — where the software and hardware work together to support worldwide collaboration — will likely create a powerful, new, emerging piece of our learning ecosystems.
One example of how this might work is at a restaurant. Your friend will be able to leave an augmented reality sticky note on the menu, letting you know which menu item is the best or which one’s the worst when you hold your camera up to it.
Another example is if you’re at a celebration, like New Year’s Eve or a birthday party. Facebook could use an augmented reality filter to fill the scene with confetti, or morph the bar into an aquarium or any other setting corresponding with a team’s mascot. The basic examples are similar to Snapchat’s geo-filters—but the more sophisticated uses go further, because the platform will actually let you leave digital objects behind for your friends to discover. Very cool!
“We’re going to make the camera the first mainstream AR platform,” said Zuckerberg.
On Tuesday, Facebook kicked off its annual F8 developer conference with a keynote address. CEO Mark Zuckerberg and others on his executive team made a bunch of announcements aimed at developers, but the implications for Facebook’s users were pretty clear. The apps that billions of us use daily—Facebook, Messenger, WhatsApp, Instagram—are going to be getting new camera tricks, new augmented reality capabilities, and more bots. So many bots!
Facebook’s most fascinating virtual reality experiment, a VR hangout session where you can interact with friends as if you were sitting next to one another, is now ready for the public. The company is calling the product Facebook Spaces, and it’s being released today in beta form for the Oculus Rift.
From DSC:
Is this a piece of the future of distance education / online learning-based classrooms?
In 2014, Facebook launched its FbStart program, which has helped several thousand early stage apps build and grow their apps through a set of free tools and mentorship meetings. On Tuesday, Facebook unveiled a new program to reach a broader range of developers, as well as students interested in technology.
The program, called “Developer Circles,” is intended to bring developers in local communities together offline as well as online in Facebook groups to encourage the sharing of technical know-how, discuss ideas and build new projects. The program is also designed to serve students who may not yet be working on an app, but who are interested in building skills to work in computer science.
Facebook will rely on an army of outside developers to contribute augmented reality image filters and interactive experiences to its new Camera Effects platform. After today’s Facebook F8 conference, the first effects will become available inside Facebook’s Camera feature on smartphones, but the Camera Effects platform is designed to eventually be compatible with future augmented reality hardware, such as eyeglasses.
While critics thought Facebook was just mindlessly copying Snapchat with its recent Stories and Camera features in Facebook, Messenger, Instagram and WhatsApp, Mark Zuckerberg tells TechCrunch his company was just laying the groundwork for today’s Camera Effects platform launch.
On Tuesday, Mr. Zuckerberg introduced what he positioned as the first mainstream augmented reality platform, a way for people to view and digitally manipulate the physical world around them through the lens of their smartphone cameras.
From DSC: First of all, let me say again that I’m not suggesting that we replace professors with artificial intelligence, algorithms, and such.
However, given a variety of trends, we need to greatly lower the price of obtaining a degree, and these types of technologies will help us do just that — while at the same time significantly increasing the productivity of each professor and/or team of specialists offering an online-based course (something institutions of higher education are currently attempting to do…big time). Not only will these types of technologies find their place in the higher education landscape, I predict that they will usher in a “New Amazon.com of Higher Education” — a new organization that will cause major disruption for traditional institutions of higher education. AI-powered MOOCs will find their place on the higher ed landscape; just how big they become remains to be seen, but this area of the landscape should be on our radars from here on out.
This type of development again points to the need for team-based approaches; such approaches will likely dominate the future.
HAYWARD, Calif., April 14, 2017 /PRNewswire/ — Cal State East Bay, a top-tier public university, and Cognii Inc., a leading provider of artificial intelligence-based educational technologies, today announced a partnership. Cognii will work with Cal State East Bay to develop a new learning and assessment experience, powered by Cognii’s Virtual Learning Assistant technology.
Winner of the 2016 EdTech Innovation of the Year Award from Mass Technology Leadership Council for its unique use of conversational AI and Natural Language Processing technologies in education, Cognii VLA provides automatic grading of students’ open-response answers along with qualitative feedback that guides them towards conceptual mastery. Compared to multiple-choice tests, open-response questions are considered pedagogically superior for measuring students’ critical thinking and problem-solving skills, essential for 21st-century jobs.
Students at Cal State East Bay will use the Cognii-powered interactive tutorials starting in summer as part of the online transfer orientation course. The interactive questions and tutorials will be developed collaboratively by the Cognii team and the eLearning specialists from the university’s office of the Online Campus. Students will interact with the questions in a chatbot-style natural language conversation during the formative assessment stage. As students practice the tutorials, Cognii will generate rich learning analytics and proficiency measurements for the course leaders.
“Innovations in physical space must be made to accommodate demands for accessibility, flexibility and affordability,” according to The State of Higher Education in 2017, a report from professional services firm Grant Thornton.
…
Changes in infrastructure are being driven by a handful of trends, including:
Digital technology is decoupling access to the classroom and information from any specific geographic location.
Learning is becoming more “modular,” credentialing specific competencies, such as certificates and badges, rather than the model of four years to a degree via fixed-class schedules. This requires a narrower range of academic buildings on campus.
Students will engage with their coursework at their own time and pace, as they do in every other aspect of their lives.
Price pressure on colleges will create incentives for cost efficiencies, discouraging the fixed-cost commitment embodied in physical structures.
Deferred maintenance is a problem so large that it can’t be solved by most colleges within their available resources; the result may be reducing the physical plant footprint or just letting it deteriorate further.
These developments will prompt physical space transformation that will lead to a new kind of campus.
Head-mounted virtual reality devices have enormous potential to provide non-destructive immersive experiences for visitors to archaeological sites and museums as well as for researchers and educators. By creating a model that suggests an anastylosis of a building, the user can simply put on a headset and view the streetscape and its suggested reconstruction.
VR simulations are a much cheaper and more flexible solution than on-site physical reconstruction. Of course, necessary checks and measures are important to signify what is certain and what is guesswork, and as such, Lithodomos adheres to section 2.4 of the ICOMOS Charter for the Interpretation and Presentation of Cultural Heritage Sites.
By using photogrammetry, texturing and mesh modelling, Lithodomos VR creates immersive experiences of the Greek and Roman worlds for viewing on virtual reality head-mounted devices, for example the Oculus Rift, Samsung Gear VR, Google Cardboard and many others. Unlike many VR content creators, our point of difference is that we specialise in VR content for the Greek and Roman worlds. Our reconstructions stem from years of research and firsthand knowledge, and they reflect the best academic practices to ensure that the end product is both as accurate as possible and informative for the viewer.
[On 4/3/17] a little history was made. Verizon and Korean Telecom (KT) unveiled the world’s first live hologram international call service via the companies’ trial 5G networks established in Seoul and in New Jersey, respectively. Our cover graphic shows Verizon CEO Lowell McAdam (left) and KT CEO Hwang Chang-gyu demonstrating a hologram video call on a tablet PC at the KT headquarters in central Seoul on Monday.
In the demonstration, a KT employee held a meeting with a Verizon employee in New Jersey who appeared as a hologram image on a monitor in the KT headquarters building.
With today’s revelations from South Korea, it’s easy to imagine that we’ll see Apple’s FaceTime offer a holographic experience in the not-too-distant future with added AR experiences as Apple’s CEO has conveyed.
From DSC: The use of virtual reality in industries such as architecture, construction, and real estate is growing. Below are some articles that speak to this trend.
In the future, it’s highly likely we’ll be able to get a nice VR-based tour of a space before building it, or renting it, or moving into it. Schools and universities will benefit from this as well, as they can use VR to refine the vision for a space with the appropriate stakeholders and donors.
Coming Soon: A Virtual Reality Revolution — from builderonline.com by Jennifer Goodman American consumers will soon expect homes to be viewable before they are built. Are you ready?
Excerpt:
In what ways are builders using VR today?
There are two primary uses of the panoramic-style VR that I mentioned above: 1) photography-based experiences and 2) computer-generated (CG) experiences. The former is getting quite a bit of traction right now through technologies like Matterport. They are what I consider a modern version of iPix, using a camera to photograph an existing environment and special software to move through the space. But it is limited to real-world environments. The CG experiences don’t require the environments to be built, which gives builders a huge advantage to pre-market their properties. And since it is computer generated, there is a tremendous amount of flexibility in what is presented, such as various structural options or cabinet selections. And not only homes! Developers are using the technology to market the amenities of a new master-planned community.
While 3D modeling and online virtual tours have become more commonplace in the home design industry, at least one local builder is taking the custom home building and buying process into a new dimension.
At a recent preview event for this year’s Homearama, an annual home design showcase to be held this May at Chesterfield County’s NewMarket Estates, Midlothian-based Lifestyle Home Builders let attendees virtually walk through and look around a completed version of the house it is building – while standing within the same unfinished home under construction.
Participants were invited to wear virtual reality (VR) headsets for a full-immersion, 360-degree experience, or they could navigate the finished product via a virtual tour on a computer screen. Lifestyle is using the technology, which it adapted from building information modeling (BIM) and off-the-shelf software, to allow homebuyers a chance to see their custom home before it is built and make any changes prior to construction starting.
Consider the top two hurdles of the average real estate agent:
Agents have to manage the time it takes to go from one visit to the other, dealing with traffic among other elements out of their control.
The most commonly heard phrase in real estate is, “It doesn’t look like the pictures.”
Virtual reality can help immediately resolve both of these issues. It offers the possibility to virtually visit a lot more homes in a lot less time. This will naturally increase sales efficiency, as well as allow the ability to see more potential buyers.
Here are three different options you can explore using virtual reality to heighten real estate experiences:
Below are five noteworthy Amazon Alexa skills worth trying, chosen from the New, Most Enabled Skills, Food and Drink, and Customer Favorites categories in the Alexa Skills Marketplace.
From DSC: I’d like to see how the Verse of the Day skill performs.
Samsung has published details of its Bixby personal assistant, which will debut on its Galaxy S8 smartphone in New York next week.
Bixby will go head-to-head with Google Assistant, Microsoft Cortana, Amazon Echo and Apple Siri, in a battle to lure you into their artificial intelligence world.
In the future, the personal assistant that you like may not only influence which phone you buy, but also the home automation system that you adopt.
This is because these personal assistants cross over into home use, which is why Samsung would bother with one of its own.
Given that the S8 will run Android Nougat, which includes Google Assistant, users will have two personal assistants on their phone, unless somehow one is disabled.
There’s Siri. And Alexa. And Google Assistant. And Cortana. Now add another one of those digital assistants to the mix: Bixby, the new helper that lives inside Samsung’s latest phone, the Galaxy S8. But out of all the assistants that have launched so far, Bixby is the most curious and the most limited.
…
Samsung’s goal with Bixby was to create an assistant that can mimic all the functions you’re used to performing by tapping on your screen through voice commands. The theory is that phones are too hard to manage, so simply letting users tell their phone what they want to happen will make things a lot easier.
The S8 will also feature Bixby, Samsung’s new intelligent assistant. The company says Bixby is a bigger deal than Siri or Google Assistant – as well as simply asking for the weather, it will be deeply integrated with the phone’s everyday functions such as taking photos and sending them to people. Samsung has put a dedicated Bixby button on the S8 on the left hand side, but I wasn’t able to try it out because it won’t launch in the UK until later this year.
Recent years have brought some rapid development in the area of artificially intelligent personal assistants. Future iterations of the technology could fully revamp the way we interact with our devices.
2016 was a huge year for bots, with major platforms like Facebook launching bots for Messenger, and Amazon and Google heavily pushing their digital assistants. Looking forward to 2017, we asked 21 bot experts, entrepreneurs, and executives to share their predictions for how bots will continue to evolve in the coming year.
… From Jordi Torras, founder and CEO, Inbenta:
“Chatbots will get increasingly smarter, thanks to the adoption of sophisticated AI algorithms and machine learning. But also they will specialize more in specific tasks, like online purchases, customer support, or online advice. First attempts of chatbot interoperability will start to appear, with generalist chatbots, like Siri or Alexa, connecting to specialized enterprise chatbots to accomplish specific tasks. Functions traditionally performed by search engines will be increasingly performed by chatbots.”
From DSC: For those of us working within higher education, chatbots need to be on our radars. Here are 2 slides from my NGLS 2017 presentation.
IBM and Ricoh have partnered for a cognitive-enabled interactive whiteboard which uses IBM’s Watson intelligence and voice technologies to support voice commands, note-taking and actions, and even translation into other languages.
The Intelligent Workplace Solution leverages IBM Watson and Ricoh’s interactive whiteboards to allow users to access features via voice. It makes sure that Watson doesn’t just listen, but is an active meeting participant, using real-time analytics to help guide discussions.
Features of the new cognitive-enabled whiteboard solution include:
Global voice control of meetings: Once a meeting begins, any employee, whether in-person or located remotely in another country, can easily control what’s on the screen, including advancing slides, all through simple voice commands using Watson’s Natural Language API.
Translation of the meeting into another language: The Intelligent Workplace Solution can translate speakers’ words into several other languages and display them on screen or in transcript.
Easy-to-join meetings: With the swipe of a badge, the Intelligent Workplace Solution can log attendance and track key agenda items to ensure all key topics are discussed.
Ability to capture side discussions: During a meeting, team members can also hold side conversations that are displayed on the same whiteboard.
From DSC:
Holy smokes!
If you combine the technologies that Ricoh and IBM are using in their new cognitive-enabled interactive whiteboard with what Bluescape is doing — providing 160 acres of digital workspace to foster collaboration (whether you are working remotely or with others in the same physical space) — you have one incredibly powerful platform!
Just like in a video game, users of the GPS need only follow green arrows projected as if onto the road in front of the car, providing visual directions. More importantly, because the system displays on the windscreen, it does not require a cumbersome headset or eyewear worn by the driver. It integrates directly into the dashboard of the car.
The system also recognizes simple voice and gesture commands from the driver — eschewing turning of knobs or pressing buttons. The objective of the system is to allow the driver to spend more time paying attention to the road, with hands on the wheel. Many modern-day onboard GPS systems also recognize voice commands but require the driver to glance over at a screen.
Viro Media is supplying a platform of their own and their hope is to be the simplest experience where companies can code once and have their content available on multiple mobile platforms. We chatted with Viro Media CEO Danny Moon about the tool and what creators can expect to accomplish with it.
Virtual reality can transport us to new places, where we can experience new worlds and people, like no other. It is a whole new medium poised to change the future of gaming, education, health care and enterprise. Today we are starting a new series to help you discover what this new technology promises. With the help of our friends at RadioPublic, we are curating a quick library of podcasts related to virtual reality technology.
AUSTIN (KXAN) — Virtual reality is no longer reserved for entertainment and gamers; it’s helping solve real-world problems. Some of the latest advancements are being demonstrated at South by Southwest.
Dr. Skip Rizzo directs the Medical Virtual Reality Lab at the University of Southern California’s Institute for Creative Technologies. He’s helping veterans who suffer from post-traumatic stress disorder (PTSD). He’s teamed up with Dell to develop and spread the technology to more people.
At the NVIDIA Jetson TX2 launch [on March 7, 2017] in San Francisco, [NVIDIA] showed how the platform not only accelerates AI computing, graphics and computer vision, but also powers the workflows used to create VR content. At the event, Artec 3D debuted the first handheld scanner offering real-time 3D capture, fusion, modeling and visualization on its own display or streamed to phones and tablets.
Project Empathy — A collection of virtual reality experiences that help us see the world through the eyes of another
Excerpt:
Benefit Studio’s virtual reality series, Project Empathy is a collection of thoughtful, evocative and surprising experiences by some of the finest creators in entertainment, technology and journalism.
Each film is designed to create empathy through a first-person experience–from being a child inside the U.S. prison system to being a widow cast away from society in India. Individually, each of the films in this series presents its filmmaker’s unique vision, portraying an intimate experience through the eyes of someone whose story has been lost or overlooked and yet is integral to the larger story of our global society. Collectively, these creatively distinct films weave together a colorful tapestry of what it means to be human today.
Most introductory geology professors teach students about earthquakes by assigning readings and showing diagrams of tectonic plates and fault lines to the class. But Paul Low is not most instructors.
“You guys can go wherever you like,” he tells a group of learners. “I’m going to go over to the epicenter and fly through and just kind of get a feel.”
Low is leading a virtual tour of the Earth’s bowels, directly beneath New Zealand’s south island, where a 7.8 magnitude earthquake struck last November. Outfitted with headsets and hand controllers, the students are “flying” around the seismic hotbed and navigating through layers of the Earth’s surface.
Low, who taught undergraduate geology and environmental sciences and is now a research associate at Washington and Lee University, is among a small group of profs-turned-technologists who are experimenting with virtual reality’s applications in higher education.
“As virtual reality moves more towards the mainstream through the development of new, more affordable consumer technologies, a way needs to be found for students to translate what they learn in academic situations into careers within the industry,” says Frankie Cavanagh, a lecturer at Northumbria University. He founded a company called Somniator last year with the aim not only of developing VR games, but to provide a bridge between higher education and the technology sector. Over 70 students from Newcastle University, Northumbria University and Gateshead College in the UK have been placed so far through the program, working on real games as part of their degrees and getting paid for additional work commissioned.
Working with VR already translates into an extraordinarily diverse range of possible career paths, and those options are only going to become even broader as the industry matures in the next few years.
Customer service just got a lot more interesting. Construction equipment manufacturer Caterpillar just announced official availability of what they’re calling the CAT LIVESHARE solution to customer support, which builds augmented reality capabilities into the platform. They’ve partnered with Scope AR, a company who develops technical support and training documentation tools using augmented reality. The CAT LIVESHARE support system uses Scope AR’s Remote AR software as the backbone.
User interfaces have come a long way since the design of typewriters to prevent people from typing too quickly and then jamming the device. Current technology has users viewing monitors and wiggling a mouse, or tapping on small touchscreens to activate commands or to interact with a virtual keyboard. But is this the best method of interaction?
Designers are asking themselves whether it is better to talk to a mobile device to get information, or whether a wearable should vibrate and then feed information into an augmented reality display. Is having an artificial intelligence modify an interface on the fly, depending on how a user interacts, the best course of action for applications or websites? And how human should the AIs’ interaction be with users?
Eleven experts on the Forbes Technology Council offer their predictions on how UX design will be changing in the next few years. Here’s what they have to say…