From DSC:
Check out the two items below regarding the use of voice with virtual assistants: one involves education (Canvas) and the other involves healthcare.


1) Using Alexa to get information from Canvas:

“Alexa, ask Canvas…”

Example questions as a student:

  • What grades am I getting in my courses?
  • What am I missing?

Example question as a teacher:

  • How many submissions do I need to grade?

See the section on asking Alexa questions, roughly from http://www.youtube.com/watch?v=e-30ixK63zE&t=38m18s through http://www.youtube.com/watch?v=e-30ixK63zE&t=46m42s
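
For the technically curious, here is a minimal sketch of how a skill like this might answer “What am I missing?” by pairing the Alexa Skills Kit (Python) with the Canvas REST API. The intent name, host, and token handling below are illustrative assumptions, not details of the actual Canvas skill:

```python
# Hypothetical sketch only; not the actual Canvas skill's code.
import requests
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

CANVAS_BASE = "https://myschool.instructure.com/api/v1"  # placeholder host
CANVAS_TOKEN = "..."  # per-user token, e.g. obtained via Alexa account linking


class MissingWorkIntentHandler(AbstractRequestHandler):
    """Handles: 'Alexa, ask Canvas what am I missing?'"""

    def can_handle(self, handler_input):
        return is_intent_name("MissingWorkIntent")(handler_input)

    def handle(self, handler_input):
        # Canvas can list past-due, unsubmitted assignments for the current user.
        resp = requests.get(
            f"{CANVAS_BASE}/users/self/missing_submissions",
            headers={"Authorization": f"Bearer {CANVAS_TOKEN}"},
            timeout=10,
        )
        assignments = resp.json()
        if assignments:
            names = ", ".join(a["name"] for a in assignments[:3])
            speech = (f"You have {len(assignments)} missing assignments, "
                      f"including {names}.")
        else:
            speech = "Nothing is missing right now. Nice work!"
        return handler_input.response_builder.speak(speech).response


sb = SkillBuilder()
sb.add_request_handler(MissingWorkIntentHandler())
handler = sb.lambda_handler()  # entry point when the skill is hosted on AWS Lambda
```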

2) Why voice assistants are gaining traction in healthcare — from samsungnext.com by Pragati Verma

Excerpt (emphasis DSC):

The majority of intelligent voice assistant platforms today are built around smart speakers, such as the Amazon Echo and Google Home. But that might change soon, as several specialized devices focused on the health market are slated to be released this year.

One example is ElliQ, an elder care assistant robot from Samsung NEXT portfolio company Intuition Robotics. Powered by AI cognitive technology, it encourages an active and engaged lifestyle. Aimed at older adults aging in place, it can recognize their activity level and suggest activities, while also making it easier to connect with loved ones.

Pillo is an example of another such device. It is a robot that combines machine learning, facial recognition, video conferencing, and automation to work as a personal health assistant. It can dispense vitamins and medication, answer health and wellness questions in a conversational manner, securely sync with a smartphone and wearables, and allow users to video conference with health care professionals.

“It is much more than a smart speaker. It is HIPAA compliant and it recognizes the user; acknowledges them and delivers care plans,” said Rogers, whose company created the voice interface for the platform.

Orbita is now working with toSense’s remote monitoring necklace to track vitals and cardiac fluids as a way to help physicians monitor patients remotely. Many more seem to be on their way.

“Be prepared for several more devices like these to hit the market soon,” Rogers predicted.

From DSC:

I see the piece about Canvas and Alexa as a great example of where our future learning ecosystems are heading — in fact, this kind of functionality has been part of my Learning from the Living [Class] Room vision for a while now. The use of voice recognition/NLP is only picking up steam; look for more of it in the future.

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

AWS unveils ‘Transcribe’ and ‘Translate’ machine learning services — from business-standard.com

Excerpts:

  • Amazon “Transcribe” provides grammatically correct transcriptions of audio files to allow audio data to be analyzed, indexed and searched.
  • Amazon “Translate” provides natural sounding language translation in both real-time and batch scenarios.
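
By way of illustration, both services are callable from a few lines of Python via boto3; the job, bucket, and file names below are placeholders:

```python
import boto3

transcribe = boto3.client("transcribe")
translate = boto3.client("translate")

# Transcribe runs as an asynchronous job over audio stored in S3.
transcribe.start_transcription_job(
    TranscriptionJobName="lecture-demo-1",                        # placeholder
    Media={"MediaFileUri": "s3://my-bucket/lectures/intro.mp3"},  # placeholder
    MediaFormat="mp3",
    LanguageCode="en-US",
)

# Translate works synchronously, which suits the real-time scenarios mentioned above.
result = translate.translate_text(
    Text="Welcome to today's lecture.",
    SourceLanguageCode="en",
    TargetLanguageCode="es",
)
print(result["TranslatedText"])
```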

Google’s ‘secret’ smart city on Toronto’s waterfront sparks row — from bbc.com by Robin Levinson-King BBC News, Toronto

Excerpt:

The project was commissioned by the publicly funded organisation Waterfront Toronto, which put out calls last spring for proposals to revitalise the 12-acre industrial neighbourhood of Quayside along Toronto’s waterfront.

Prime Minister Justin Trudeau flew down to announce the agreement with Sidewalk Labs, which is owned by Google’s parent company Alphabet, last October, and the project has received international attention for being one of the first smart-cities designed from the ground up.

But five months later, few people have actually seen the full agreement between Sidewalk and Waterfront Toronto.

As council’s representative on Waterfront Toronto’s board, Mr Minnan-Wong is the only elected official to actually see the legal agreement in full. Not even the mayor knows what the city has signed on for.

“We got very little notice. We were essentially told ‘here’s the agreement, the prime minister’s coming to make the announcement’,” he said.

“Very little time to read, very little time to absorb.”

Now, his hands are tied – he is legally not allowed to comment on the contents of the sealed deal, but he has been vocal about his belief it should be made public.

“Do I have concerns about the content of that agreement? Yes,” he said.

“What is it that is being hidden, why does it have to be secret?”

From DSC:
Google needs to be very careful here. Increasingly so these days, our trust in them (and other large tech companies) is at stake.

Addendum on 4/16/18 with thanks to Uros Kovacevic for this resource:
Human lives saved by robotic replacements — from injuryclaimcoach.com

Excerpt:

For academics and average workers alike, the prospect of automation provokes concern and controversy. As the American workplace continues to mechanize, some experts see harsh implications for employment, including the loss of 73 million jobs by 2030. Others maintain more optimism about the fate of the global economy, contending technological advances could grow worldwide GDP by more than $1.1 trillion in the next 10 to 15 years. Whatever we make of these predictions, there’s no question automation will shape the economic future of the nation – and the world.

But while these fiscal considerations are important, automation may positively affect an even more essential concern: human life. Every day, thousands of Americans risk injury or death simply by going to work in dangerous conditions. If robots replaced them, could hundreds of lives be saved in the years to come?

In this project, we studied how many fatal injuries could be averted if dangerous occupations were automated. To do so, we analyzed which fields are most deadly and the likelihood of their automation according to expert predictions. To see how automation could save Americans’ lives, keep reading.

Also related to this item is:
How AI is improving the landscape of work — from forbes.com by Laurence Bradford

Excerpts:

There have been a lot of sci-fi stories written about artificial intelligence. But now that it’s actually becoming a reality, how is it really affecting the world? Let’s take a look at the current state of AI and some of the things it’s doing for modern society.

  • Creating New Technology Jobs
  • Using Machine Learning To Eliminate Busywork
  • Preventing Workplace Injuries With Automation
  • Reducing Human Error With Smart Algorithms

From DSC:
This is clearly a pro-AI piece. Not all uses of AI are beneficial, but this article mentions several use cases where AI can make positive contributions to society.

It’s About Augmented Intelligence, not Artificial Intelligence — from informationweek.com
The adoption of AI applications isn’t about replacing workers but helping workers do their jobs better.

From DSC:
This article is also a pro-AI piece. But again, not all uses of AI are beneficial. We need to be aware of — and involved in — what is happening with AI.

Investing in an Automated Future — from clomedia.com by Mariel Tishma
Employers recognize that technological advances like AI and automation will require employees with new skills. Why are so few investing in the necessary learning?

2018 TECH TRENDS REPORT — from the Future Today Institute
Emerging technology trends that will influence business, government, education, media and society in the coming year.

Description:

The Future Today Institute’s 11th annual Tech Trends Report identifies 235 tantalizing advancements in emerging technologies—artificial intelligence, biotech, autonomous robots, green energy and space travel—that will begin to enter the mainstream and fundamentally disrupt business, geopolitics and everyday life around the world. Our annual report has garnered more than six million cumulative views, and this edition is our largest to date.

Helping organizations see change early and calculate the impact of new trends is why we publish our annual Emerging Tech Trends Report, which focuses on mid- to late-stage emerging technologies that are on a growth trajectory.

In this edition of the FTI Tech Trends Report, we’ve included several new features and sections:

  • a list and map of the world’s smartest cities
  • a calendar of events that will shape technology this year
  • detailed near-future scenarios for several of the technologies
  • a new framework to help organizations decide when to take action on trends
  • an interactive table of contents, which will allow you to more easily navigate the report from the bookmarks bar in your PDF reader

01 How does this trend impact our industry and all of its parts?
02 How might global events — politics, climate change, economic shifts — impact this trend, and as a result, our organization?
03 What are the second, third, fourth, and fifth-order implications of this trend as it evolves, both in our organization and our industry?
04 What are the consequences if our organization fails to take action on this trend?
05 Does this trend signal emerging disruption to our traditional business practices and cherished beliefs?
06 Does this trend indicate a future disruption to the established roles and responsibilities within our organization? If so, how do we reverse-engineer that disruption and deal with it in the present day?
07 How are the organizations in adjacent spaces addressing this trend? What can we learn from their failures and best practices?
08 How will the wants, needs and expectations of our consumers/constituents change as a result of this trend?
09 Where does this trend create potential new partners or collaborators for us?
10 How does this trend inspire us to think about the future of our organization?

From DSC:
Why aren’t we further along with lecture recording within K-12 classrooms?

That is, I as a parent — or much better yet, our kids themselves who are still in K-12 — should be able to go online and access whatever talks/lectures/presentations were given on a particular day. When our daughter is sick and misses several days, wouldn’t it be great for her to be able to go out and see what she missed? Even if we had the time and/or the energy to do so (which we don’t), my wife and I can’t present this content to her very well. We would likely explain things differently — and perhaps incorrectly — thus, potentially muddying the waters and causing more confusion for our daughter.

There should be entry-level recording studios — such as the One Button Studio from Penn State University — in each K-12 school for teachers to record their presentations. At the end of each day, the teacher could put a checkbox next to what he/she was able to cover that day. (No rushing intended here — education is enough of a runaway train oftentimes!) That material would then be made visible/available on that day as links on an online-based calendar. Administrators should pay teachers extra money in the summer to record these presentations.

Also, students could use these studios to practice their presentation and communication skills. The process is quick and easy.

I’d like to see an option — ideally via a brief voice-driven Q&A at the start of each session — that would ask the person where they wanted to put the recording when it was done: to a thumb drive, to a previously assigned storage area out on the cloud/Internet, or to both destinations.

Providing automatically generated closed captioning would be a great feature here as well, especially for English as a Second Language (ESL) students.
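
A purely hypothetical sketch of that save-destination flow, in Python; the mount points and the (already transcribed) spoken answer are stand-ins:

```python
import shutil
from pathlib import Path

# Stand-in mount points; a real studio appliance would discover these itself.
DESTINATIONS = {
    "thumb drive": Path("/media/usb"),
    "cloud": Path("/mnt/cloud-sync"),  # e.g., a folder synced to cloud storage
}

def save_recording(recording: Path, spoken_answer: str) -> None:
    """spoken_answer: the transcribed reply to 'Where should this recording go?'"""
    choices = list(DESTINATIONS) if spoken_answer == "both" else [spoken_answer]
    for choice in choices:
        shutil.copy(recording, DESTINATIONS[choice] / recording.name)

# e.g., after the session ends:
save_recording(Path("/tmp/session-042.mp4"), "both")
```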

From DSC:
After seeing the article entitled, “Scientists Are Turning Alexa into an Automated Lab Helper,” I began to wonder…might Alexa be a tool to periodically schedule & provide practice tests & distributed practice on content? In the future, will there be “learning bots” that a learner can employ to do such self-testing and/or distributed practice?
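
If such a learning bot were built, it could lean on a well-known spaced-repetition rule for the distributed-practice piece. Below is a sketch of a common variant of the SM-2 scheduling algorithm (popularized by SuperMemo and used, in variants, by tools like Anki); nothing here is tied to an actual Alexa feature:

```python
from dataclasses import dataclass

@dataclass
class Card:
    interval: int = 1      # days until the next practice attempt
    repetitions: int = 0   # consecutive successful recalls
    ease: float = 2.5      # grows or shrinks with performance

def review(card: Card, quality: int) -> Card:
    """Update the schedule after one practice attempt.

    quality: self- or bot-graded recall, 0 (total blackout) to 5 (perfect).
    """
    if quality < 3:
        card.repetitions = 0
        card.interval = 1  # missed it: practice again tomorrow
    else:
        card.repetitions += 1
        if card.repetitions == 1:
            card.interval = 1
        elif card.repetitions == 2:
            card.interval = 6
        else:
            card.interval = round(card.interval * card.ease)
        # Standard SM-2 ease-factor update, floored at 1.3
        card.ease = max(1.3, card.ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return card
```

Each time the bot quizzes the learner, it calls review() with a quality score and waits interval days before asking again; that widening gap is the distributed practice.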

From page 45 of the PDF available here:

Might Alexa be a tool to periodically schedule/provide practice tests & distributed practice on content?

Scientists Are Turning Alexa into an Automated Lab Helper — from technologyreview.com by Jamie Condliffe
Amazon’s voice-activated assistant follows a rich tradition of researchers using consumer tech in unintended ways to further their work.

Excerpt:

Alexa, what’s the next step in my titration?

Probably not the first question you ask your smart assistant in the morning, but potentially the kind of query that scientists may soon be leveling at Amazon’s AI helper. Chemical & Engineering News reports that software developer James Rhodes—whose wife, DeLacy Rhodes, is a microbiologist—has created a skill for Alexa called Helix that lends a helping hand around the laboratory.

It makes sense. While most people might ask Alexa to check the news headlines, play music, or set a timer because our hands are a mess from cooking, scientists could look up melting points, pose simple calculations, or ask for an experimental procedure to be read aloud while their hands are gloved and in use.

For now, Helix is still a proof-of-concept. But you can sign up to try an early working version, and Rhodes has plans to extend its abilities…

Also see:

Helix

What is Artificial Intelligence, Machine Learning and Deep Learning — from geospatialworld.net by Meenal Dhande

What is the difference between AI, machine learning and deep learning? — from geospatialworld.net by Meenal Dhande

Excerpt:

In the first part of this blog series, we gave simple, elaborative definitions of artificial intelligence (AI), machine learning and deep learning. This is the second part of the series; here we explain the difference between AI, machine learning, and deep learning.

You can think of artificial intelligence (AI), machine learning and deep learning as a set of matryoshka dolls, also known as Russian nesting dolls. Deep learning is a subset of machine learning, which is a subset of AI.

Chatbot for College Students: 4 Chatbots Tips Perfect for College Students — from chatbotsmagazine.com by Zevik Farkash

Excerpts:

1. Feed your chatbot with information your students don’t have.
Your institute’s website can be as elaborate as it gets, but if your students can’t find a piece of information on it, it’s as good as incomplete. Say, for example, you offer certain scholarships that students can voluntarily apply for. But the information on these scholarships is tucked away on a remote page that your students don’t access in their day-to-day usage of your site.

So Amy, a new student, has no idea that there’s a scholarship that can potentially make her course 50% cheaper. She can scour your website for details when she finds the time. Or she can ask your university’s chatbot, “Where can I find information on your scholarships?”

And the chatbot can tell her, “Here’s a link to all our current scholarships.”

The best chatbots for colleges and universities tend to be programmed with even more detail, and can actually strike up a conversation by saying things like:

“Please give me the following details so I can pull out all the scholarships that apply to you.
“Which department are you in? (Please select one.)
“Which course are you enrolled in? (Please select one.)
“Which year of study are you in? (Please select one.)
“Thank you for the details! Here’s a list of all applicable scholarships. Please visit the links for detailed information and let me know if I can be of further assistance.”

2. Let it answer all the “What do I do now?” questions.

3. Turn it into a campus guide.

4. Let it take care of paperwork.
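
As a rough illustration of the scholarship dialogue excerpted in point 1 above, the matching step behind such a bot can be very simple; the sample data and field names here are invented:

```python
# Invented sample data; a real bot would pull this from the institution's systems.
SCHOLARSHIPS = [
    {"name": "STEM Merit Award", "department": "Engineering",
     "courses": None, "years": [1, 2]},            # None = any course
    {"name": "Robotics Fund", "department": "Engineering",
     "courses": ["Mechatronics 101"], "years": [2, 3]},
    {"name": "Arts Access Grant", "department": "Fine Arts",
     "courses": None, "years": [1, 2, 3, 4]},
]

def applicable_scholarships(department: str, course: str, year: int) -> list:
    """Filter by the three details the bot collects from the student."""
    return [
        s["name"] for s in SCHOLARSHIPS
        if s["department"] == department
        and (s["courses"] is None or course in s["courses"])
        and year in s["years"]
    ]

print(applicable_scholarships("Engineering", "Mechatronics 101", 2))
# -> ['STEM Merit Award', 'Robotics Fund']
```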

From DSC:
This is the sort of thing that I was trying to get at last year at the NGLS 2017 Conference:

18 Disruptive Technology Trends For 2018 — from disruptionhub.com by Rob Prevett

Excerpts:

1. Mobile-first to AI-first
A major shift in business thinking has placed Artificial Intelligence at the very heart of business strategy. 2017 saw tech giants including Google and Microsoft focus on an “AI first” strategy, leading the way for other major corporates to follow suit. Companies are demonstrating a willingness to use AI and related tools like machine learning to automate processes, reduce administrative tasks, and collect and organise data. Understanding vast amounts of information is vital in the age of mass data, and AI is proving to be a highly effective solution. Whilst AI has been vilified in the media as the enemy of jobs, many businesses have undergone a transformation in mentalities, viewing AI as enhancing rather than threatening the human workforce.

7. Voice based virtual assistants become ubiquitous
The wide uptake of home-based virtual assistants like Alexa and Google Home has built confidence in conversational interfaces, familiarising consumers with a seamless way of interacting with tech. Amazon and Google have taken prime position between brand and customer, capitalising on conversational convenience. The further adoption of this technology will enhance personalised advertising and sales, creating a direct link between company and consumer.

5 Innovative Uses for Machine Learning — from entrepreneur.com
They’ll be coming into your life — at least your business life — sooner than you think.

Philosophers are building ethical algorithms to help control self-driving cars — from qz.com by Olivia Goldhill

Tech’s Ethical ‘Dark Side’: Harvard, Stanford and Others Want to Address It — from nytimes.com by Natasha Singer

Excerpt:

PALO ALTO, Calif. — The medical profession has an ethic: First, do no harm.

Silicon Valley has an ethos: Build it first and ask for forgiveness later.

Now, in the wake of fake news and other troubles at tech companies, universities that helped produce some of Silicon Valley’s top technologists are hustling to bring a more medicine-like morality to computer science.

This semester, Harvard University and the Massachusetts Institute of Technology are jointly offering a new course on the ethics and regulation of artificial intelligence. The University of Texas at Austin just introduced a course titled “Ethical Foundations of Computer Science” — with the idea of eventually requiring it for all computer science majors.

And at Stanford University, the academic heart of the industry, three professors and a research fellow are developing a computer science ethics course for next year. They hope several hundred students will enroll.

The idea is to train the next generation of technologists and policymakers to consider the ramifications of innovations — like autonomous weapons or self-driving cars — before those products go on sale.

The Future of Design, Part II — from 99u.adobe.com by Madeleine Morley; with thanks to Keesa V. Johnson for posting this on Twitter
For the second straight year, we asked 10 creatives to predict what is coming up in the world of design and how they will prepare for it. This year’s installment includes designing for voice-controlled tech, holograms, and the rise of the hybrid designer.

Design is always changing, and wider changes are often spearheaded by design itself. Now with tech and the creative industry increasingly aligning, we’re on the precipice of a truly momentous period in the history of design, something unprecedented that is difficult to predict and prepare for.

Excerpts:

With quickly evolving tools, tumultuous shifts in the economy, the relentless growth of the gig and freelance lifestyle, and global networks, the working landscape for young designers is a tremendously uncertain one. There’s no model to follow: The known and well-trodden career path of previous generations is overgrown.

It’s an uncertain time for design, but in its difficulty and complexity, it is an inspiring and crucial one: Those with the skills will help decide the way that innovations in tech not only look but function, too, and influence our daily lives.

Although we can’t predict the future, we can speak to those with experience who think about what’s in store. We asked each participant to give us their advice: What does their future of design look like? What will it do to the very idea of design? And how can we prepare for it?

Design will be for ears and not eyes.

We’re always getting our heads around designing for the latest technology, methodology, application, media, or format. It’s a fascinating time to be a designer. There will always be space for experts, for those who specialize in the things they are really, really good at, but for others there is the need to diversify.

We won’t tell stories; we’ll live them.

You can now build Amazon Music playlists using voice commands on Alexa devices — from theverge.com by Natt Garun

Excerpt:

Amazon today announced that Amazon Music listeners can now build playlists using voice commands via Alexa. For example, if they’re streaming music from an app or listening to the radio on an Alexa-enabled device, they can use voice commands to add the current song to a playlist, or start a new playlist from scratch.

From DSC:
I wonder how long it will be before we will be able to create and share learning-based playlists for accessing digitally-based resources…? Perhaps AI will be used to offer a set of playlists on any given topic…?

With the exponential pace of change that we’re starting to experience — plus the half-lives of information shrinking — such features could come in handy.
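
To make the idea concrete: a shareable learning playlist might be nothing more than an ordered list of resources with just enough metadata for an AI service to sequence or recommend by topic. The structure below is speculative, not any existing product's format:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LearningItem:
    title: str
    url: str
    media_type: str   # "video", "article", "podcast", ...
    minutes: int      # rough time commitment

@dataclass
class LearningPlaylist:
    topic: str
    owner: str
    items: List[LearningItem] = field(default_factory=list)

    def add(self, item: LearningItem) -> None:
        self.items.append(item)

    def total_minutes(self) -> int:
        return sum(i.minutes for i in self.items)

# Example: a playlist a learner (or an AI) could assemble and share.
algebra = LearningPlaylist(topic="Intro algebra", owner="dsc")
algebra.add(LearningItem("Solving linear equations",
                         "https://example.com/video1", "video", 12))
print(algebra.total_minutes())
```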

Where You’ll Find Virtual Reality Technology in 2018 — from avisystems.com by Alec Kasper-Olson

Excerpt:

The VR / AR / MR Breakdown
This year will see growth in a variety of virtual technologies and uses. There are differences and similarities between virtual, augmented, and mixed reality technologies. The technology is constantly evolving and even the terminology around it changes quickly, so you may hear variations on these terms.

Augmented reality is what was behind the Pokémon Go craze. Players could see game characters on their devices superimposed over images of their physical surroundings. Virtual features seemed to exist in the real world.

Mixed reality combines virtual features and real-life objects. So, in this way it includes AR but it also includes environments where real features seem to exist in a virtual world.

The folks over at Recode explain mixed reality this way:

In theory, mixed reality lets the user see the real world (like AR) while also seeing believable, virtual objects (like VR). And then it anchors those virtual objects to a point in real space, making it possible to treat them as “real,” at least from the perspective of the person who can see the MR experience.

And, virtual reality uses immersive technology to seemingly place a user into a simulated lifelike environment.

Where You’ll Find These New Realities
Education and research fields are at the forefront of VR and AR technologies, where an increasing number of students have access to tools. But higher education isn’t the only place you see this trend. The number of VR companies grew 250 percent between 2012 and 2017. Even the latest iPhones include augmented reality capabilities. Aside from the classroom and your pocket, here are some other places you’re likely to see VR and AR pop up in 2018.

Top AR apps that make learning fun — from bmsinnolabs.wordpress.com

Excerpt:

Here is a list of a few amazing Augmented Reality mobile apps for children:

  • Jigspace
  • Elements 4D
  • Arloon Plants
  • Math alive
  • PlanetAR Animals
  • FETCH! Lunch Rush
  • Quiver
  • Zoo Burst
  • PlanetAR Alphabets & Numbers

Here are a few of the VR input devices:

  • Controller Wands
  • Joysticks
  • Force Balls/Tracking Balls
  • Data Gloves
  • On-Device Control Buttons
  • Motion Platforms (Virtuix Omni)
  • Trackpads
  • Treadmills
  • Motion Trackers/Bodysuits

HTC VIVE and World Economic Forum Partner For The Future Of The “VR/AR For Impact” Initiative — from blog.vive.com by Matthew Gepp

Excerpt:

VR/AR for Impact experiences shown this week at WEF 2018 include:

  • OrthoVR aims to increase the availability of well-fitting prosthetics in low-income countries by using Virtual Reality and 3D rapid prototyping tools to increase the capacity of clinical staff without reducing quality. VR allows current prosthetists and orthosists to leverage their hands-on and embodied skills within a digital environment.
  • The Extraordinary Honey Bee is designed to help deepen our understanding of the honey bee’s struggle and learn what is at stake for humanity due to the dying global population of the honey bee. Told from a bee’s perspective, The Extraordinary Honey Bee harnesses VR to inspire change in the next generation of honey bee conservationists.
  • The Blank Canvas: Hacking Nature is an episodic exploration of the frontiers of bioengineering as taught by the leading researchers within the field. Using advanced scientific visualization techniques, the Blank Canvas will demystify the cellular and molecular mechanisms that are being exploited to drive substantial leaps such as gene therapy.
  • LIFE (Life-saving Instruction For Emergencies) is a new mobile and VR platform developed by the University of Oxford that enables all types of health worker to manage medical emergencies. Through the use of personalized simulation training and advanced learning analytics, the LIFE platform offers the potential to dramatically extend access to life-saving knowledge in low-income countries.
  • Tree is a critically acclaimed virtual reality experience to immerse viewers in the tragic fate that befalls a rainforest tree. The experience brings to light the harrowing realities of deforestation, one of the largest contributors to global warming.
  • For the Amazonian Yawanawa, ‘medicine’ has the power to travel you in a vision to a place you have never been. Hushuhu, the first woman shaman of the Yawanawa, uses VR like medicine to open a portal to another way of knowing. AWAVENA is a collaboration between a community and an artist, melding technology and transcendent experience so that a vision can be shared, and a story told of a people ascending from the edge of extinction.

Everything You Need To Know About Virtual Reality Technology — from yeppar.com

Excerpt:

Types of Virtual Reality Technology
We can segregate the types of Virtual Reality technology according to their user experience:

Non-Immersive
Non-immersive simulations are the least immersive implementation of Virtual Reality technology. In this kind of simulation, only a subset of the user’s senses is replicated, allowing for marginal awareness of the reality outside the VR simulation. A user enters 3D virtual environments through a portal or window by utilizing standard HD monitors typically found on conventional desktop workstations.

Semi-Immersive
In this simulation, users experience richer immersion, being partly, though not fully, involved in a virtual environment. Semi-immersive simulations are based on high-performance graphical computing, which is often coupled with large-screen projector systems or multiple TV projections to properly simulate the user’s visuals.

Fully Immersive
Fully immersive simulations offer the most complete experience of Virtual Reality technology: head-mounted displays and motion-sensing devices simulate all of the user’s senses. Here, a user can experience a realistic virtual environment with a wide field of view, high resolution, increased refresh rates, and high-quality visualization through the HMD.

This Is What A Mixed Reality Hard Hat Looks Like — from vrscout.com by Alice Bonasio
A Microsoft-endorsed hard hat solution lets construction workers use holograms on site.

Excerpt:

These workers already routinely use technology such as tablets to access plans and data on site, but going from 2D to 3D at scale brings that to a whole new level. “Superimposing the digital model on the physical environment provides a clear understanding of the relations between the 3D design model and the actual work on a jobsite,” explained Olivier Pellegrin, BIM manager, GA Smart Building.

The application they are using is called Trimble Connect. It turns data into 3D holograms, which are then mapped out to scale onto the real-world environment. This gives workers an instant sense of where and how various elements will fit and exposes mistakes early on in the process.

Also see:

Trimble Connect for HoloLens is a mixed reality solution that improves building coordination by combining models from multiple stakeholders such as structural, mechanical and electrical trade partners. The solution provides for precise alignment of holographic data on a 1:1 scale on the job site, to review models in the context of the physical environment. Predefined views from Trimble Connect further simplify in-field use with quick and easy access to immersive visualizations of 3D data. Users can leverage mixed reality for training purposes and to compare plans against work completed. Advanced visualization further enables users to view assigned tasks and capture data with onsite measurement tools.

Trimble Connect for HoloLens is available now through the Microsoft Windows App Store. A free trial option is available enabling integration with HoloLens. Paid subscriptions support premium functionality allowing for precise on-site alignment and collaboration.

Trimble’s Hard Hat Solution for Microsoft HoloLens extends the benefits of HoloLens mixed reality into areas where increased safety requirements are mandated, such as construction sites, offshore facilities, and mining projects. The solution, which is ANSI-approved, integrates the HoloLens holographic computer with an industry-standard hard hat. Trimble’s Hard Hat Solution for HoloLens is expected to be available in the first quarter of 2018. To learn more, visit mixedreality.trimble.com.

From DSC:
Combining voice recognition / Natural Language Processing (NLP) with Mixed Reality should provide some excellent, powerful user experiences. Doing so could also provide some real-time understanding as well as highlight potential issues in current designs. It will be interesting to watch this space develop. If there were an issue, wouldn’t it be great to remotely ask someone to update the design and then see the updated design in real-time? (Or might there be a way to make edits via one’s voice and/or with gestures?)

I could see where these types of technologies could come in handy when designing / enhancing learning spaces.

Web-Powered Augmented Reality: a Hands-On Tutorial — from medium.com by Uri Shaked
A Guided Journey Into the Magical Worlds of ARCore, A-Frame, 3D Programming, and More!

Excerpt:

There’s been a lot of cool stuff happening lately around Augmented Reality (AR), and since I love exploring and having fun with new technologies, I thought I would see what I could do with AR and the Web — and it turns out I was able to do quite a lot!

Most AR demos are with static objects, like showing how you can display a cool model on a table, but AR really begins to shine when you start adding in animations!

With animated AR, your models come to life, and you can then start telling a story with them.

Art.com adds augmented reality art-viewing to its iOS app — from techcrunch.com by Lucas Matney

Excerpt:

If you’re in the market for some art in your house or apartment, Art.com will now let you use AR to put digital artwork up on your wall.

The company’s ArtView feature is one of the few augmented reality features that actually adds a lot to the app it’s put in. With the ARKit-enabled tech, the artwork is accurately sized so you can get a perfect idea of how your next purchase could fit on your wall. The feature can be used for the two million pieces of art on the site and can be customized with different framing types.

Experience on Demand is a must-read VR book — from venturebeat.com by Ian Hamilton

Excerpts:

Bailenson’s newest book, Experience on Demand, builds on that earlier work while focusing more clearly — even bluntly — on what we do and don’t know about how VR affects humans.

“The best way to use it responsibly is to be educated about what it is capable of, and to know how to use it — as a developer or a user — responsibly,” Bailenson wrote in the book.

Among the questions raised:

  • “How educationally effective are field trips in VR? What are the design principles that should guide these types of experiences?”
  • “How many individuals are not meeting their potential because they lack the access to good instruction and learning tools?”
  • “When we consider that the subjects were made uncomfortable by the idea of administering fake electric shocks, what can we expect people will feel when they are engaging all sorts of fantasy violence and mayhem in virtual reality?”
  • “What is the effect of replacing social contact with virtual social contact over long periods of time?”
  • “How do we walk the line and leverage what is amazing about VR, without falling prey to the bad parts?”

© 2018 | Daniel Christian