Microsoft's conference room of the future

 

From DSC:
Microsoft’s conference room of the future “listens” to the conversations of the team and provides a transcript of the meeting. It is also using “artificial intelligence tools to then act on what meeting participants say. If someone says ‘I’ll follow up with you next week,’ then they’ll get a notification in Microsoft Teams, Microsoft’s Slack competitor, to actually act on that promise.”
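As a mental model, that kind of "promise detection" could start as simple pattern-matching over the meeting transcript. The sketch below is purely illustrative (a made-up regex and sample transcript), not Microsoft's actual approach:

```python
import re

# Illustrative only: a crude commitment detector. A real system would use
# NLP models, not a single regex; the phrasing list here is an assumption.
COMMITMENT = re.compile(
    r"\bI(?:'ll| will) (follow up|send|share|schedule)\b.*?\b(today|tomorrow|next week|by \w+)",
    re.IGNORECASE,
)

def find_action_items(transcript_lines):
    """Return (speaker, sentence) pairs that look like follow-up promises."""
    items = []
    for speaker, sentence in transcript_lines:
        if COMMITMENT.search(sentence):
            items.append((speaker, sentence))
    return items

transcript = [
    ("Ana", "I'll follow up with you next week on the budget."),
    ("Ben", "Sounds good, thanks."),
]
print(find_action_items(transcript))
```

Each detected item could then be forwarded to a notification channel (in Microsoft's case, a Teams reminder).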

This made me wonder about our learning spaces in the future. Will an #AI-based device/cloud-based software app — in real-time — be able to “listen” to the discussion in a classroom and present helpful resources in the smart classroom of the future (i.e., websites, online-based databases, journal articles, and more)?

Will this be a feature of a next generation learning platform as well (i.e., addressing the online-based learning realm)? Will this be a piece of an intelligent tutor or an intelligent system?

Hmmm…time will tell.

Also see this article out at Forbes.com entitled, “There’s Nothing Artificial About How AI Is Changing The Workplace.” 

Here is an excerpt:

The New Meeting Scribe: Artificial Intelligence

As I write this, AI has already begun to make video meetings even better. You no longer have to spend time entering codes or clicking buttons to launch a meeting. Instead, with voice-based AI, video conference users can start, join or end a meeting by simply speaking a command (think about how you interact with Alexa).

Voice-to-text transcription, another artificial intelligence feature offered by Otter Voice Meeting Notes (from AISense, a Zoom partner), Voicefox and others, can take notes during video meetings, leaving you and your team free to concentrate on what’s being said or shown. AI-based voice-to-text transcription can identify each speaker in the meeting and save you time by letting you skim the transcript, search and analyze it for certain meeting segments or words, then jump to those mentions in the script. Over 65% of respondents from the Zoom survey said they think AI will save them at least one hour a week of busy work, with many claiming it will save them one to five hours a week.
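The skim-and-jump workflow described here is easy to picture with a bit of code: given speaker-labelled, timestamped segments, find every mention of a term and return where to jump to in the recording. The data layout below is an assumption for illustration, not any vendor's actual format:

```python
# Toy transcript search: each segment is (start_seconds, speaker, text).
def search_transcript(segments, term):
    """Return (timestamp, speaker) for every segment mentioning the term."""
    term = term.lower()
    return [(t, speaker) for (t, speaker, text) in segments if term in text.lower()]

segments = [
    (12.5, "Ana", "Let's review the quarterly budget."),
    (47.0, "Ben", "The budget looks tight this quarter."),
    (90.2, "Ana", "Moving on to hiring."),
]
print(search_transcript(segments, "budget"))  # timestamps to jump to
```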

 

 

Skype chats are coming to Alexa devices — from engadget.com by Richard Lawler
Voice-controlled internet calls to or from any device with Amazon’s system in it.

Excerpt:

Aside from all of the Alexa-connected hardware, there’s one more big development coming for Amazon’s technology: integration with Skype. Microsoft and Amazon said that voice and video calls via the service will come to Alexa devices (including Microsoft’s Xbox One) with calls that you can start and control just by voice.

 

 

Amazon Hardware Event 2018
From techcrunch.com

 

Echo HomePod? Amazon wants you to build your own — by Brian Heater
One of the bigger surprises at today’s big Amazon event was something the company didn’t announce. After a couple of years of speculation that the company was working on its own version of the Home…

 

 

The long list of new Alexa devices Amazon announced at its hardware event
Everyone’s favorite trillion-dollar retailer hosted a private event today where they continued to…

 

Amazon introduces APL, a new design language for building Alexa skills for devices with screens
Along with the launch of the all-new Echo Show, the Alexa-powered device with a screen, Amazon also introduced a new design language for developers who want to build voice skills that include multimedia…

Excerpt:

Called Alexa Presentation Language, or APL, developers will be able to build voice-based apps that also include things like images, graphics, slideshows and video, and easily customize them for different device types – including not only the Echo Show, but other Alexa-enabled devices like Fire TV, Fire Tablet, and the small screen of the Alexa alarm clock, the Echo Spot.
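Since APL documents are just JSON payloads returned by a skill, the idea can be pictured as a structure like the one below. This is a hedged sketch: the field names follow Amazon's published APL schema as best I recall, so treat the details as illustrative rather than authoritative.

```python
import json

# Rough shape of an APL document: the JSON a skill returns so that
# screen-equipped devices (Echo Show, Fire TV, Echo Spot) can render
# images and text alongside the voice response.
apl_document = {
    "type": "APL",
    "version": "1.0",
    "mainTemplate": {
        "items": [
            {"type": "Container", "items": [
                {"type": "Image", "source": "https://example.com/photo.jpg"},
                {"type": "Text", "text": "Welcome to the skill"},
            ]}
        ]
    },
}
print(json.dumps(apl_document, indent=2))
```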

 

From DSC:
This is a great move by Amazon — as NLP and our voices become increasingly important in how we “drive” and utilize our computing devices.

 

 

Amazon launches an Echo Wall Clock, because Alexa is gonna be everywhere — by Sarah Perez

 

 

Amazon’s new Echo lineup targets Google, Apple and Sonos — from engadget.com by Nicole Lee
Alexa, dominate the industry.

The business plan from here is clear: Companies pay a premium to be activated when users pose questions related to their products and services. “How do you cook an egg?” could pull up a Food Network tutorial; “How far is Morocco?” could enable the Expedia app.

Skype launches call recording across desktop, iOS, and Android — from windowscentral.com by Dan Thorp-Lancaster
Recording your Skype calls will now be much, much easier.

 


 

Excerpt:

Skype has been testing integrated call recording with preview users for some time, but it looks like the feature is now ready for primetime.  The Skype team announced today that call recording is now rolling out across its Android, iOS, and desktop apps, allowing you to capture your calls with a tap. “Call recording is completely cloud-based and is now available on the latest version of Skype and on most platforms, except Windows 10,” Microsoft says. “Call recording is coming to Windows 10 with the latest version of Skype releasing in the coming weeks.”

 

Also see:

San Diego’s Nanome Inc. releases collaborative VR-STEM software for free — from vrscout.com by Becca Loux

Excerpt:

The first collaborative VR molecular modeling application was released August 29 to encourage hands-on chemistry experimentation.

The open-source tool is free for download now on Oculus and Steam.

Nanome Inc., the San Diego-based start-up that built the intuitive application, comprises UCSD professors and researchers, web developers and top-level pharmaceutical executives.

 

“With our tool, anyone can reach out and experience science at the nanoscale as if it is right in front of them. At Nanome, we are bringing the craftsmanship and natural intuition from interacting with these nanoscale structures at room scale to everyone,” McCloskey said.

 


 

 

10 ways VR will change life in the near future — from forbes.com

Excerpts:

  1. Virtual shops
  2. Real estate
  3. Dangerous jobs
  4. Health care industry
  5. Training to create VR content
  6. Education
  7. Emergency response
  8. Distraction simulation
  9. New hire training
  10. Exercise

 

From DSC:
While VR will have its place — especially for times when you need to completely immerse yourself in another environment — I think AR and MR will be much larger and have a greater variety of applications. For example, I could see how instructions on how to put something together could use AR and/or MR to assist with that process. The system could highlight the next part that I’m looking for and then highlight where it goes — and, if requested, show me a clip of how it fits into what I’m trying to put together.

 

How MR turns firstline workers into change agents — from virtualrealitypop.com by Charlie Fink
Mixed Reality, a new dimension of work — from Microsoft and Harvard Business Review

Excerpts:

“…workers with mixed-reality solutions that enable remote assistance, spatial planning, environmentally contextual data, and much more,” Bardeen told me. With HoloLens, Firstline Workers conduct their usual day-to-day activities with the added benefit of a heads-up, hands-free display that gives them immediate access to valuable, contextual information. Microsoft says speech services like Cortana will be critical to control, along with gesture, according to the unique needs of each situation.

 

Expect new worker roles. What constitutes an “information worker” could change because mixed reality will allow everyone to be involved in the collection and use of information. Many more types of information will become available to any worker in a compelling, easy-to-understand way. 

 

 

Let’s Speak: VR language meetups — from account.altvr.com


Microsoft’s AI-powered Sketch2Code builds websites and apps from drawings — from alphr.com by Bobby Hellard
Released on GitHub, Microsoft’s AI-powered developer tool can shave hours off web and app building

Excerpt:

Microsoft has developed an AI-powered web design tool capable of turning sketches of websites into functional HTML code.

Called Sketch2Code, Microsoft AI’s senior product manager Tara Shankar Jana explained that the tool aims to “empower every developer and every organisation to do more with AI”. It was born out of the “intrinsic” problem of sending a picture of a wireframe or app designs from whiteboard or paper to a designer to create HTML prototypes.
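This isn't Microsoft's actual pipeline, but the final step of a Sketch2Code-style tool can be pictured as a simple mapping from detected wireframe elements to HTML snippets. Everything below (the element labels, the markup choices) is a made-up illustration of the concept:

```python
# Assumed output of a vision model: a list of (element_kind, text) pairs
# recognized in the whiteboard sketch. The kinds and templates are invented.
HTML_FOR = {
    "label": '<label>{text}</label>',
    "textbox": '<input type="text" placeholder="{text}">',
    "button": '<button>{text}</button>',
}

def elements_to_html(detected):
    """Emit an HTML form from recognized sketch elements."""
    rows = [HTML_FOR[kind].format(text=text) for kind, text in detected]
    return "<form>\n  " + "\n  ".join(rows) + "\n</form>"

detected = [("label", "Email"), ("textbox", "you@example.com"), ("button", "Sign up")]
print(elements_to_html(detected))
```

The hard part, of course, is the computer-vision step that produces the `detected` list; that is what Sketch2Code's AI actually does.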


Adobe Announces the 2019 Release of Adobe Captivate, Introducing Virtual Reality for eLearning Design — from theblog.adobe.com

Excerpt:

  • Immersive learning with VR experiences: Design learning scenarios that your learners can experience in Virtual Reality using VR headsets. Import 360° media assets and add hotspots, quizzes and other interactive elements to engage your learners with near real-life scenarios
  • Interactive videos: Liven up demos and training videos by making them interactive with the new Adobe Captivate. Create your own or bring in existing YouTube videos, add questions at specific points and conduct knowledge checks to aid learner remediation
  • Fluid Boxes 2.0: Explore the building blocks of Smart eLearning design with intelligent containers that use white space optimally. Objects placed in Fluid Boxes get aligned automatically so that learners always get a fully responsive experience regardless of their device or browser.
  • 360° learning experiences: Augment the learning landscape with 360° images and videos and convert them into interactive eLearning material with customizable overlay items such as information blurbs, audio content & quizzes.

 

 

Blippar unveils indoor visual positioning system to anchor AR — from martechtoday.com by Barry Levine
Employing machine vision to recognize mapped objects, the company says it can determine which way a user is looking and can calculate positioning down to a centimeter.

A Blippar visualization of AR using its new indoor visual positioning system

 

The Storyteller’s Guide to the Virtual Reality Audience — from medium.com by Katy Newton

Excerpt:

To even scratch the surface of these questions, we need to better understand the audience’s experience in VR — not just their experience of the technology, but the way that they understand story and their role within it.

 

 

Hospital introducing HoloLens augmented reality into the operating room — from medgadget.com

Excerpt:

HoloLens technology is being paired with Microsoft’s Surface Hub, a kind of digital whiteboard. The idea is that the surgical team can gather together around a Surface Hub to review patient information, discuss the details of a procedure, and select what information should be readily accessible during surgery. During the procedure, a surgeon wearing a HoloLens would be able to review a CT or MRI scan, access other data in the electronic medical records, and to be able to manipulate these so as to get a clear picture of what is being worked on and what needs to be done.

 

 

Raleigh Fire Department invests in virtual reality to enrich training — from vrfocus.com by Nikholai Koolonavi
New system allows department personnel to learn new skills through immersive experiences.

Excerpt:

The VR solution allows emergency medical services (EMS) personnel to dive into a rich and detailed environment in which they can pinpoint portions of the body to dissect. This lets them see each part of the body in great detail and view it from any angle. The goal is for users to gain the experience to diagnose injuries from a variety of vantage points, all while working within a virtual environment capable of displaying countless scenarios.

 

 

For another emerging technology, see:

Someday this tiny spider bot could perform surgery inside your body — from fastcompany.com by Jesus Diaz
The experimental robots could also fix airplane engines and find disaster victims.

Excerpt:

A team of Harvard University researchers recently achieved a major breakthrough in robotics, engineering a tiny spider robot using tech that could one day work inside your body to repair tissues or destroy tumors. Their work could not only change medicine–by eliminating invasive surgeries–but could also have an impact on everything from how industrial machines are maintained to how disaster victims are rescued.

Until now, most advanced, small-scale robots followed a certain model: They tend to be built at the centimeter scale and have only one degree of freedom, which means they can only perform one movement. Not so with this new ‘bot, developed by scientists at Harvard’s Wyss Institute for Biologically Inspired Engineering, the John A. Paulson School of Engineering and Applied Sciences, and Boston University. It’s built at the millimeter scale, and because it’s made of flexible materials–easily moved by pneumatic and hydraulic power–the critter has an unprecedented 18 degrees of freedom.

 


Plus some items from a few weeks ago


 

After almost a decade and billions in outside investment, Magic Leap’s first product is finally on sale for $2,295. Here’s what it’s like. — from

Excerpts (emphasis DSC):

I liked that it gave a new perspective to the video clip I’d watched: It threw the actual game up on the wall alongside the kind of information a basketball fan would want, including 3-D renderings and stats. Today, you might turn to your phone for that information. With Magic Leap, you wouldn’t have to.

Abovitz also said that intelligent assistants will play a big role in Magic Leap’s future. I didn’t get to test one, but Abovitz says he’s working with a team in Los Angeles that’s developing high-definition people that will appear to Magic Leap users and assist with tasks. Think Siri, Alexa or Google Assistant, but instead of speaking to your phone, you’d be speaking to a realistic-looking human through Magic Leap. Or you might be speaking to an avatar of someone real.

“You might need a doctor who can come to you,” Abovitz said. “AI that appears in front of you can give you eye contact and empathy.”

 

And I loved the idea of being able to place a digital TV screen anywhere I wanted.

 

 

Magic Leap One Available For Purchase, Starting At $2,295 — from vrscout.com by Kyle Melnick

Excerpt:

In December of last year, U.S. startup Magic Leap unveiled its long-awaited mixed reality headset, a secretive device five years and $2.44B USD in the making.

This morning that same headset, now referred to as the Magic Leap One Creator Edition, became available for purchase in the U.S. On sale to creators at a hefty starting price of $2,295, the spatial computing device utilizes synthetic lightfields to capture natural lightwaves and superimpose interactive, 3D content over the real-world.

Magic Leap One First Hands-On Impressions for HoloLens Developers — from magic-leap.reality.news

Excerpt:

After spending about an hour with the headset running through set up and poking around its UI and a couple of the launch day apps, I thought it would be helpful to share a quick list of some of my first impressions as someone who’s spent a lot of time with a HoloLens over the past couple years and try to start answering many of the burning questions I’ve had about the device.

 

 

World Campus researches effectiveness of VR headsets and video in online classes — from news.psu.edu

Excerpt:

UNIVERSITY PARK, Pa. — Penn State instructional designers are researching whether using virtual reality and 360-degree video can help students in online classes learn more effectively.

Designers worked with professors in the College of Nursing to incorporate 360-degree video into Nursing 352, a class on Advanced Health Assessment. Students in the class, offered online through Penn State World Campus, were offered free VR headsets to use with their smartphones to create a more immersive experience while watching the video, which shows safety and health hazards in a patient’s home.

Bill Egan, the lead designer for the Penn State World Campus RN to BSN nursing program, said students in the class were surveyed as part of a study approved by the Institutional Review Board and overwhelmingly said that they enjoyed the videos and thought they provided educational value. Eighty percent of the students said they would like to see more immersive content such as 360-degree videos in their online courses, he said.

 

 

7 Practical Problems with VR for eLearning — from learnupon.com

Excerpt:

In this post, we run through some practical stumbling blocks that prevent VR training from being feasible for most.

There are quite a number of practical considerations which prevent VR from totally overhauling the corporate training world. Some are obvious, whilst others only become apparent after using the technology a number of times. It’s important to be made aware of these limitations so that a large investment isn’t made in tech that isn’t really practical for corporate training.

 

Augmented reality – the next big thing for HR? — from hrdconnect.com
Augmented reality (AR) could have a huge impact on HR, transforming long-established processes into something engaging and exciting. What will this look like? How can we shape this into our everyday working lives?

Excerpt (emphasis DSC):

AR also has the potential to revolutionise our work lives, changing the way we think about office spaces and equipment forever.

Most of us still commute to an office every day, which can be a time-consuming and stressful experience. AR has the potential to turn any space into your own customisable workspace, complete with digital notes, folders and files – even a digital photo of your loved ones. This would give you access to all the information and tools that you would typically find in an office, but wherever and whenever you need them.

And instead of working on a flat, stationary, two-dimensional screen, your workspace would be a customisable three-dimensional space, where objects and information are manipulated with gestures rather than hardware. All you would need is an AR headset.

AR could also transform the way we advertise brands and share information. Imagine if your organisation had an AR stand at a conference – how engaging would that be for potential customers? How much more interesting and fun would meetings be if we used AR to present information instead of slides on a projector?

AR could transform the on-boarding experience into something fun and interactive – imagine taking an AR tour of your office, where information about key places, company history or your new colleagues pops into view as you go from place to place. 

 

 

RETINA Are Bringing Augmented Reality To Air Traffic Control Towers — from vrfocus.com by Nikholai Koolonavi

Excerpt:

A new project is aiming to make it easier for staff in airport control towers to visualize information to help make their job easier by leveraging augmented reality (AR) technology. The project, dubbed RETINA, is looking to modernise Europe’s air traffic management for safer, smarter and even smoother air travel.

 

 

 

25 skills LinkedIn says are most likely to get you hired in 2018 — and the online courses to get them — from businessinsider.com by Mara Leighton

Excerpt:

With the introduction of far-reaching and robust technology, the job market has experienced its own exponential growth, adaptation, and semi-metamorphosis. So much so that it can be difficult to guess what skills employers are looking for and what makes your résumé — and not another — stand out to recruiters.

Thankfully, LinkedIn created a 2018 “roadmap”— a list of hard and soft skills that companies need the most.

LinkedIn used data from their 500+ million members to identify the skills companies are currently working the hardest to fill. They grouped the skills members add to their profiles into several dozen categories (for example, “Android” and “iOS” into the “Mobile Development” category). Then, the company looked at all of the hiring and recruiting activity that happened on LinkedIn between January 1 and September 1 (billions of data points) and extrapolated the skill categories that belonged to members who were “more likely to start a new role within a company and receive interest from companies.”

LinkedIn then coupled those specific skills with related jobs and their average US salaries — all of which you can find below, alongside courses you can take (for free or for much less than the cost of a degree) to support claims of aptitude and stay ahead of the curve.

The online-learning options we included — LinkedIn Learning, Udemy, Coursera, and edX— are among the most popular and inexpensive.

 

 


Six tech giants sign health data interoperability pledge — from medicaldevice-network.com by GlobalData Healthcare

Excerpt:

Google, Amazon, and IBM joined forces with Microsoft, Salesforce, and Oracle to pledge to speed up the progress of health data standards and interoperability.

This big new alliance’s pledge will have a very positive impact on healthcare as it will become easier to share medical data among hospitals. Both physicians and patients will have easier access to information, which will lead to faster diagnosis and treatment.

The companies claim that this project will lead to better outcomes, higher patient satisfaction, and lower costs—a so-called “Triple Aim.”

 

From DSC:
No doubt that security will have to be very tight around these efforts.

 

 

Three AI and machine learning predictions for 2019 — from forbes.com by Daniel Newman

Excerpt:

What could we potentially see next year? New and innovative uses for machine learning? Further evolution of human and machine interaction? The rise of AI assistants? Let’s dig deeper into AI and machine learning predictions for the coming months.

 

2019 will be a year of development for the AI assistant, showing us just how powerful and useful these tools are. It will be in more places than your home and your pocket too. Companies such as Kia and Hyundai are planning to include AI assistants in their vehicles starting in 2019. Sign me up for a new car! I’m sure that Google, Apple, and Amazon will continue to make advancements to their AI assistants making our lives even easier.

 

 

DeepMind AI matches health experts at spotting eye diseases — from engadget.com by Nick Summers

Excerpt:

DeepMind has successfully developed a system that can analyze retinal scans and spot symptoms of sight-threatening eye diseases. Today, the AI division — owned by Google’s parent company Alphabet — published “early results” of a research project with the UK’s Moorfields Eye Hospital. They show that the company’s algorithms can quickly examine optical coherence tomography (OCT) scans and make diagnoses with the same accuracy as human clinicians. In addition, the system can show its workings, allowing eye care professionals to scrutinize the final assessment.

 

 

Microsoft and Amazon launch Alexa-Cortana public preview for Echo speakers and Windows 10 PCs — from venturebeat.com by Khari Johnson

Excerpt:

Microsoft and Amazon will bring Alexa and Cortana to all Echo speakers and Windows 10 users in the U.S. [on 8/15/18]. As part of a partnership between the Seattle-area tech giants, you can say “Hey Cortana, open Alexa” to Windows 10 PCs and “Alexa, open Cortana” to a range of Echo smart speakers.

The public preview bringing the most popular AI assistant on PCs together with the smart speaker with the largest U.S. market share will be available to most people today but will be rolled out to all users in the country over the course of the next week, a Microsoft spokesperson told VentureBeat in an email.

Each of the assistants brings unique features to the table. Cortana, for example, can schedule a meeting with Outlook, create location-based reminders, or draw on LinkedIn to tell you about people in your next meeting. And Alexa has more than 40,000 voice apps or skills made to tackle a broad range of use cases.

 

 

What Alexa can and cannot do on a PC — from venturebeat.com by Khari Johnson

Excerpt:

Whatever happened to the days of Alexa just being known as a black cylindrical speaker? Since the introduction of the first Echo in fall 2014, Amazon’s AI assistant has been embedded in a number of places, including car infotainment systems, Alexa smartphone apps, wireless headphones, Echo Show and Fire tablets, Fire TV Cube for TV control, the Echo Look with an AI-powered fashion assistant, and, in recent weeks, personal computers.

Select computers from HP, Acer, and others now make Alexa available to work seamlessly alongside Microsoft’s Cortana well ahead of the Alexa-Cortana partnership for Echo speakers and Windows 10 devices, a project that still has no launch date.

 

 


 

Augmented and virtual reality mean business: Everything you need to know — from zdnet by Greg Nichols
An executive guide to the technology and market drivers behind the hype in AR, VR, and MR.

Excerpt:

Overhyped by some, drastically underestimated by others, few emerging technologies have generated the digital ink like virtual reality (VR), augmented reality (AR), and mixed reality (MR).  Still lumbering through the novelty phase and roller coaster-like hype cycles, the technologies are only just beginning to show signs of real world usefulness with a new generation of hardware and software applications aimed at the enterprise and at end users like you. On the line is what could grow to be a $108 billion AR/VR industry as soon as 2021. Here’s what you need to know.

 

The reason is that VR environments by nature demand a user’s full attention, which makes the technology poorly suited to real-life social interaction outside a digital world. AR, on the other hand, has the potential to act as an on-call co-pilot to everyday life, seamlessly integrating into daily real-world interactions. This will become increasingly true with the development of the AR Cloud.

The AR Cloud
Described by some as the world’s digital twin, the AR Cloud is essentially a digital copy of the real world that can be accessed by any user at any time.

For example, it won’t be long before whatever device I have on me at a given time (a smartphone or wearable, for example) will be equipped to tell me all I need to know about a building just by training a camera at it (GPS is operating as a poor-man’s AR Cloud at the moment).

What the internet is for textual information, the AR Cloud will be for the visible world. Whether it will be open source or controlled by a company like Google is a hotly contested issue.
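To make the building example concrete, here is a deliberately toy sketch of what an AR Cloud lookup might look like: a shared index of real-world anchors queried by the device's position. The coordinates, data, and API below are all invented for illustration.

```python
import math

# Hypothetical shared index: (lat, lon) anchors mapped to annotations
# that a headset or phone camera view would overlay on the building.
ANCHORS = [
    ((40.7484, -73.9857), "Empire State Building: opened 1931, 102 floors"),
    ((40.6892, -74.0445), "Statue of Liberty: dedicated 1886"),
]

def nearest_annotation(lat, lon):
    """Return the annotation for the anchor closest to the device."""
    def dist(anchor):
        (alat, alon), _ = anchor
        return math.hypot(alat - lat, alon - lon)  # flat-earth approximation
    return min(ANCHORS, key=dist)[1]

print(nearest_annotation(40.748, -73.986))
```

A real AR Cloud would of course index millimetre-scale geometry, not two landmarks, and would fuse camera pose with GPS rather than coordinates alone.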

 

Augmented reality will have a bigger impact on the market and our daily lives than virtual reality — and by a long shot. That’s the consensus of just about every informed commentator on the subject.

 

 

 

Mixed reality will transform learning (and Magic Leap joins act one) — from edsurge.com by Maya Georgieva

Excerpt:

Despite all the hype in recent years about the potential for virtual reality in education, an emerging technology known as mixed reality has far greater promise in and beyond the classroom.

Unlike experiences in virtual reality, mixed reality interacts with the real world that surrounds us. Digital objects become part of the real world. They’re not just digital overlays, but interact with us and the surrounding environment.

If all that sounds like science fiction, a much-hyped device promises some of those features later this year. The device is by a company called Magic Leap, and it uses a pair of goggles to project what the company calls a “lightfield” in front of the user’s face to make it look like digital elements are part of the real world. The expectation is that Magic Leap will bring digital objects in a much more vivid, dynamic and fluid way compared to other mixed-reality devices such as Microsoft’s Hololens.

 


 

Now think about all the other things you wished you had learned this way and imagine a dynamic digital display that transforms your environment and even your living room or classroom into an immersive learning lab. It is learning within a highly dynamic and visual context infused with spatial audio cues reacting to your gaze, gestures, gait, voice and even your heartbeat, all referenced with your geo-location in the world. Unlike what happens with VR, where our brain is tricked into believing the world and the objects in it are real, MR recognizes and builds a map of your actual environment.

 

 

 

Also see:

virtualiteach.com
Exploring The Potential for the Vive Focus in Education


 

 

 

Digital Twins Doing Real World Work — from stambol.com

Excerpt:

On the big screen it’s become commonplace to see a 3D rendering or holographic projection of an industrial floor plan or a mechanical schematic. Casual viewers might take for granted that the technology is science fiction and many years away from reality. But today we’re going to outline where these sophisticated virtual replicas – Digital Twins – are found in the real world, here and now. Essentially, we’re talking about a responsive simulated duplicate of a physical object or system. When we first wrote about Digital Twin technology, we mainly covered industrial applications and urban infrastructure like transit and sewers. However, the full scope of their presence is much broader, so now we’re going to break it up into categories.

 


 

Digital twin — from Wikipedia

Digital twin refers to a digital replica of physical assets (physical twin), processes and systems that can be used for various purposes.[1] The digital representation provides both the elements and the dynamics of how an Internet of Things device operates and lives throughout its life cycle.[2]

Digital twins integrate artificial intelligence, machine learning and software analytics with data to create living digital simulation models that update and change as their physical counterparts change. A digital twin continuously learns and updates itself from multiple sources to represent its near real-time status, working condition or position. This learning system learns from itself, using sensor data that conveys various aspects of its operating condition; from human experts, such as engineers with deep and relevant industry domain knowledge; from other similar machines; from other similar fleets of machines; and from the larger systems and environment of which it may be a part. A digital twin also integrates historical data from past machine usage to factor into its digital model.

In various industrial sectors, twins are being used to optimize the operation and maintenance of physical assets, systems and manufacturing processes.[3] They are a formative technology for the Industrial Internet of Things, where physical objects can live and interact with other machines and people virtually.[4]
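The description above boils down to a simple pattern: a virtual replica that ingests sensor readings, keeps history, and applies rules against its current state. Here is a minimal, purely illustrative sketch (the field names and the maintenance rule are assumptions, not any vendor's model):

```python
# Toy digital twin of an industrial pump; thresholds are invented.
class DigitalTwin:
    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.state = {}       # near real-time mirror of the physical asset
        self.history = []     # historical data for later analysis

    def ingest(self, sensor_reading):
        """Update the twin's state from one sensor message."""
        self.state.update(sensor_reading)
        self.history.append(dict(self.state))

    def needs_maintenance(self):
        # Hypothetical rule: flag the asset when vibration runs high.
        return self.state.get("vibration_mm_s", 0) > 7.0

pump = DigitalTwin("pump-17")
pump.ingest({"temp_c": 61.0, "vibration_mm_s": 3.2})
pump.ingest({"vibration_mm_s": 8.4})
print(pump.needs_maintenance())  # True
```

Real digital twins layer physics models and machine learning on top of this kind of state mirror, but the ingest/mirror/decide loop is the core idea.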

 

 

Disney to debut its first VR short next month — from techcrunch.com by Sarah Wells

Excerpt:

Walt Disney Animation Studio is set to debut its first VR short film, Cycles, this August in Vancouver, the Association for Computing Machinery announced today. The plan is for it to be a headliner at the ACM’s computer graphics conference (SIGGRAPH), joining other forms of VR, AR and MR entertainment in the conference’s designated Immersive Pavilion.

This film is a first for both Disney and its director, Jeff Gipson, who joined the animation team in 2013 to work as a lighting artist on films like Frozen, Zootopia and Moana. The objective of this film, Gipson said in the statement released by ACM, is to inspire a deep emotional connection with the story.

“We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story,” said Gipson.

 

 

 

 

 


Schools can now get facial recognition tech for free. Should they? — from wired.com by Issie Lapowsky

Excerpt:

Over the past two years, RealNetworks has developed a facial recognition tool that it hopes will help schools more accurately monitor who gets past their front doors. Today, the company launched a website where school administrators can download the tool, called SAFR, for free and integrate it with their own camera systems. So far, one school in Seattle, which RealNetworks CEO Rob Glaser’s kids attend, is testing the tool, and the state of Wyoming is designing a pilot program that could launch later this year. “We feel like we’re hitting something there can be a social consensus around: that using facial recognition technology to make schools safer is a good thing,” Glaser says.

 

From DSC:
Personally, I’m very uncomfortable with where facial recognition is going in some societies. What starts off being sold as helpful for this or that application can quickly be abused and used by governments to control their citizens. For example, look at what’s happening in China already these days!

The above article talks about these techs being used in schools. Based upon history, I seriously question whether humankind can wisely handle the power of these types of technologies.

Here in the United States, I already sense a ton of cameras watching each of us all the time when we’re out in public spaces (such as when we are in grocery stores, or gas stations, or in restaurants or malls, etc.).  What’s the unspoken message behind those cameras?  What’s being stated by their very presence around us?

No. I don’t like the idea of facial recognition being in schools. I’m not comfortable with this direction. I can see the counterargument — that this tech could help reduce school shootings. But I think that’s a weak argument, as someone mentally unbalanced enough to be involved with a school shooting likely won’t be swayed/deterred by being on camera. In fact, one could argue that in some cases, being on the national news — with their face plastered all over the nation — might even pour gas on the fire.

 

 

Glaser, for one, welcomes federal oversight of this space. He says it’s precisely because of his views on privacy that he wants to be part of what is bound to be a long conversation about the ethical deployment of facial recognition. “This isn’t just sci-fi. This is becoming something we, as a society, have to talk about,” he says. “That means the people who care about these issues need to get involved, not just as hand-wringers but as people trying to provide solutions. If the only people who are providing facial recognition are people who don’t give a &*&% about privacy, that’s bad.”

 

 

 

The title of this article being linked here is: Inside China’s Dystopian Dreams: A.I., Shame and Lots of Cameras
Per this week’s Next e-newsletter from edsurge.com

Take the University of San Francisco, which deploys facial recognition software in its dormitories. Students still use their I.D. card to swipe in, according to Edscoop, but the face of every person who enters a dorm is scanned and run through a database, and the system alerts the dorm attendant when an unknown person is detected. Online students are not immune: the technology is also used in many proctoring tools for virtual classes.

The tech raises plenty of tough issues. Facial-recognition systems have been shown to misidentify young people, people of color and women more often than white men. And then there are the privacy risks: “All collected data is at risk of breach or misuse by external and internal actors, and there are many examples of misuse of law enforcement data in other contexts,” a white paper by the Electronic Frontier Foundation reads.

It’s unclear whether such facial-scanners will become common at the gates of campus. But now that cost is no longer much of an issue for what used to be an idea found only in science fiction, it’s time to weigh the pros and cons of what such a system really means in practice.

 

 

Also see:

  • As facial recognition technology becomes pervasive, Microsoft (yes, Microsoft) issues a call for regulation — from techcrunch.com by Jonathan Shieber
    Excerpt:
    Technology companies have a privacy problem. They’re terribly good at invading ours and terribly negligent at protecting their own. And with the push by technologists to map, identify and index our physical as well as virtual presence with biometrics like face and fingerprint scanning, the increasing digital surveillance of our physical world is causing some of the companies that stand to benefit the most to call out to government to provide some guidelines on how they can use the incredibly powerful tools they’ve created. That’s what’s behind today’s call from Microsoft President Brad Smith for government to start thinking about how to oversee the facial recognition technology that’s now at the disposal of companies like Microsoft, Google, Apple and government security and surveillance services across the country and around the world.

 

 

 

 
 

Computers that never forget a face — from Future Today Institute

Excerpts:

In August, the U.S. Customs and Border Protection will roll out new technology that will scan the faces of drivers as they enter and leave the United States. For years, accomplishing that kind of surveillance through a car windshield has been difficult. But technology is quickly advancing. This system, activated by ambient light sensors, range finders and remote speedometers, uses smart cameras and AI-powered facial recognition technology to compare images in government files with people behind the wheel.

Biometric borders are just the beginning. Faceprints are quickly becoming our new fingerprints, and this technology is marching forward with haste. Faceprints are now so advanced that machine learning algorithms can recognize your unique musculatures and bone structures, capillary systems, and expressions using thousands of data points. All the features that make up a unique face are being scanned, captured and analyzed to accurately verify identities. New hairstyle? Plastic surgery? They don’t interfere with the technology’s accuracy.

Why you should care. Faceprints are already being used across China for secure payments. Soon, they will be used to customize and personalize your digital experiences. Our Future Today Institute modeling shows myriad near-future applications, including the ability to unlock your smart TV with your face. Retailers will use your face to personalize your in-store shopping experience. Auto manufacturers will start using faceprints to detect if drivers are under the influence of drugs or alcohol and prevent them from driving. It’s plausible that cars will soon detect if a driver is distracted and take the wheel using an auto-pilot feature. On a diet but live with others? Stash junk food in a drawer and program the lock to restrict your access. Faceprints will soon create opportunities for a wide range of sectors, including military, law enforcement, retail, manufacturing and security. But as with all technology, faceprints could lead to the loss of privacy and widespread surveillance.

It’s possible for both risk and opportunity to coexist. The point here is not alarmist hand-wringing, or pointless calls for cease-and-desist demands on the development and use of faceprint technology. Instead, it’s to acknowledge an important emerging trend––faceprints––and to think about the associated risks and opportunities for you and your organization well in advance. Approach biometric borders and faceprints with your (biometrically unique) eyes wide open.

Near-Futures Scenarios (2018 – 2028):

Optimistic: Faceprints make us safer, and they bring us back to physical offices and stores.

Pragmatic: As faceprint adoption grows, legal challenges mount. 
In April, a U.S. federal judge ruled that Facebook must confront a class-action lawsuit that alleges its faceprint technology violates Illinois state privacy laws. Last year, a U.S. federal judge allowed a class-action suit to go forth against Shutterfly, claiming the company violated the Illinois Biometric Information Privacy Act, which ensures companies receive written releases before collecting biometric data, including faces. Companies and device manufacturers, who are early developers but late to analyzing legal outcomes, are challenged to balance consumer privacy with new security benefits.

Catastrophic: Faceprints are used for widespread surveillance and authoritarian control.

 

 

 

How AI is helping sports teams scout star players — from nbcnews.com by Edd Gent
Professional baseball, basketball and hockey are among the sports now using AI to supplement traditional coaching and scouting.

 

 

 

Preparing students for the workplace of the future — from educationdive.com by Shalina Chatlani

Excerpt:

The workplace of the future will be marked by unprecedentedly advanced technologies, as well as a focus on incorporating artificial intelligence to drive higher levels of production with fewer resources. Employers and education stakeholders, noting the reality of this trend, are turning a reflective eye toward current students and questioning whether they will be workforce ready in the years to come.

This has become a significant concern for higher education executives, who find their business models could be disrupted as they fail to meet workforce demands. A 2018 Gallup-Northeastern University survey shows that of 3,297 U.S. citizens interviewed, only 22% with a bachelor’s degree said their education left them “well” or “very well prepared” to use AI in their jobs.

In his book “Robot-Proof: Higher Education in the Age of Artificial Intelligence,” Northeastern University President Joseph Aoun argued that for higher education to adapt to advanced technologies, it has to focus on life-long learning, which he says prepares students for the future by fostering purposeful integration of technical literacies, such as coding and data literacy, with human literacies, such as creativity, ethics, cultural agility and entrepreneurship.

“When students combine these literacies with experiential components, they integrate their knowledge with real life settings, leading to deep learning,” Aoun told Forbes.

 

 

Amazon’s A.I. camera could help people with memory loss recognize old friends and family — from cnbc.com by Christina Farr

  • Amazon’s DeepLens is a smart camera that can recognize objects in front of it.
  • One software engineer, Sachin Solkhan, is trying to figure out how to use it to help people with memory loss.
  • Users would carry the camera to help them recognize people they know.

 

 

Microsoft acquired an AI startup that helps it take on Google Duplex — from qz.com by Dave Gershgorn

Excerpt:

We’re going to talk to our technology, and everyone else’s too. Google proved that earlier this month with a demonstration of artificial intelligence that can hop on the phone to book a restaurant reservation or appointment at the hair salon.

Now it’s just a matter of who can build that technology fastest. To reach that goal, Microsoft has acquired conversational AI startup Semantic Machines for an undisclosed amount. Founded in 2014, the startup’s goal was to build AI that can converse with humans through speech or text, with the ability to be trained to converse in any language or on any subject.

 

 

Researchers developed an AI to detect DeepFakes — from thenextweb.com by Tristan Greene

Excerpt:

A team of researchers from the State University of New York (SUNY) recently developed a method for detecting whether the people in a video are AI-generated. It looks like DeepFakes could meet its match.

What it means: Fear over whether computers will soon be able to generate videos that are indistinguishable from real footage may be much ado about nothing, at least with the currently available methods.

The SUNY team observed that the training method for creating AI that makes fake videos involves feeding it images – not video. This means that certain human physiological quirks – like breathing and blinking – don’t show up in computer-generated videos. So they decided to build an AI that uses computer vision to detect blinking in fake videos.
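The SUNY detector itself is a trained model over eye regions, but the underlying signal — eyes that periodically close — can be approximated with the classic eye-aspect-ratio (EAR) heuristic. Here is a hedged Python sketch, assuming six (x, y) landmarks per eye are already supplied by some face-landmark detector; the threshold and frame counts are illustrative, not the paper’s values:

```python
import math

def eye_aspect_ratio(eye):
    """EAR from six (x, y) eye landmarks: drops toward 0 as the eye closes."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = math.dist(p2, p6) + math.dist(p3, p5)  # eyelid openings
    horizontal = math.dist(p1, p4)                    # eye-corner width
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, threshold=0.2, min_frames=2):
    """Count runs of >= min_frames consecutive frames below the EAR threshold."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

# A genuine video of a person should blink every few seconds; a clip whose
# EAR signal never dips below the threshold would be flagged as suspicious.
ears = [0.31, 0.30, 0.12, 0.10, 0.29, 0.32]  # one simulated blink
print(count_blinks(ears))  # prints: 1
```

The idea mirrors the excerpt: if a generator never saw closed eyes during training, its output will show no sub-threshold EAR runs at all.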

 

 

Bringing It Down To Earth: Four Ways Pragmatic AI Is Being Used Today — from forbes.com by Carlos Melendez

Excerpt:

Without even knowing it, we are interacting with pragmatic AI day in and day out. It is used in the automated chatbots that answer our calls and questions and the customer service rep that texts with us on a retail site, providing a better and faster customer experience.

Below are four key categories of pragmatic AI and ways they are being applied today.

1. Speech Recognition And Natural Language Processing (NLP)
2. Predictive Analytics
3. Image Recognition And Computer Vision
4. Self-Driving Cars And Robots

 

 

Billable Hour ‘Makes No Sense’ in an AI World — from biglawbusiness.com by Helen Gunnarsson

Excerpt:

Artificial intelligence (AI) is transforming the practice of law, and “data is the new oil” of the legal industry, panelist Dennis Garcia said at a recent American Bar Association conference. Garcia is an assistant general counsel for Microsoft in Chicago. Robert Ambrogi, a Massachusetts lawyer and blogger who focuses on media, technology, and employment law, moderated the program. “The next generation of lawyers is going to have to understand how AI works” as part of the duty of competence, panelist Anthony E. Davis told the audience. Davis is a partner with Hinshaw & Culbertson LLP in New York.

Davis said AI will result in dramatic changes in law firms’ hiring and billing, among other things. The hourly billing model, he said, “makes no sense in a universe where what clients want is judgment.” Law firms should begin to concern themselves not with the degrees or law schools attended by candidates for employment but with whether they are “capable of developing judgment, have good emotional intelligence, and have a technology background so they can be useful” for long enough to make hiring them worthwhile, he said.

 

 

Deep Learning Tool Tops Dermatologists in Melanoma Detection — from healthitanalytics.com
A deep learning tool achieved greater accuracy than dermatologists when detecting melanoma in dermoscopic images.

 

 

Apple’s plans to bring AI to your phone — from wired.com by Tom Simonite

Excerpt:

HomeCourt is built on tools announced by Federighi last summer, when he launched Apple’s bid to become a preferred playground for AI-curious developers. Known as Core ML, those tools help developers who’ve trained machine learning algorithms deploy them on Apple’s mobile devices and PCs.

At Apple’s Worldwide Developer Conference on Monday, Federighi revealed the next phase of his plan to enliven the app store with AI. It’s a tool called Create ML that’s something like a set of training wheels for building machine learning models in the first place. In a demo, training an image-recognition algorithm to distinguish different flavors of ice cream was as easy as dragging and dropping a folder containing a few dozen images and waiting a few seconds. In a session for developers, Apple engineers suggested Create ML could teach software to detect whether online comments are happy or angry, or predict the quality of wine from characteristics such as acidity and sugar content. Developers can use Create ML now but can’t ship apps using the technology until Apple’s latest operating systems arrive later this year.
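Create ML itself is a Swift and Xcode tool, but the wine example Apple’s engineers described (predicting quality from acidity and sugar) boils down to fitting a simple model on labeled feature vectors. The following dependency-free Python sketch illustrates the idea with a nearest-centroid classifier; the data, labels, and functions are hypothetical, not anything Apple ships:

```python
def train_centroids(samples):
    """samples: list of ((acidity, sugar), label). Returns label -> mean vector."""
    sums, counts = {}, {}
    for (acidity, sugar), label in samples:
        ax, sx = sums.get(label, (0.0, 0.0))
        sums[label] = (ax + acidity, sx + sugar)
        counts[label] = counts.get(label, 0) + 1
    return {label: (ax / counts[label], sx / counts[label])
            for label, (ax, sx) in sums.items()}

def predict(centroids, acidity, sugar):
    """Assign the label whose centroid is closest in feature space."""
    return min(centroids,
               key=lambda lbl: (centroids[lbl][0] - acidity) ** 2
                             + (centroids[lbl][1] - sugar) ** 2)

# Toy training data: (acidity, residual sugar) -> quality label.
data = [((0.30, 2.0), "good"), ((0.32, 1.8), "good"),
        ((0.70, 9.0), "poor"), ((0.65, 8.5), "poor")]
model = train_centroids(data)
print(predict(model, 0.31, 2.1))  # prints: good
```

The drag-and-drop experience in Create ML hides exactly this kind of loop: collect labeled examples, fit a model, then query it with new inputs.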

 

 

 

Seek launches world’s first mobile Augmented Reality (AR) creation studio — from globenewswire.com
All Mobile Phone Users Can Now Create, Publish and Discover AR Experiences

Excerpt:

Lehi, UT, May 29, 2018 (GLOBE NEWSWIRE) — Today, fast-growing augmented reality startup Seek is launching Seek Studio, the world’s first mobile augmented reality studio, allowing anybody with a phone, no coding expertise required, to create their own AR experiences and publish them for the world to see. With mobile AR now made more readily available, average consumers are beginning to discover the magic that AR can bring to the palm of their hand, and Seek Studio turns everyone into a creator.

To make the process incredibly easy, Seek provides templates for users to create their first AR experiences. As an example, a user can select a photo on their phone, outline the portion of the image they want turned into a 3D object and then publish it to Seek. They will then be able to share it with their friends through popular social networks or text. A brand could additionally upload a 3D model of their product and publish it to Seek, providing an experience for their customers to easily view that content in their own home. Seek Studio will launch with 6 templates and will release new ones every few days over the coming months to constantly improve the complexity and types of experiences possible to create within the platform.

 

Apple unveils new AR file format and ARKit 2.0 — from venturebeat.com by Stephanie Chan

Excerpt:

Apple unveiled its new augmented reality file format, as well as ARKit 2.0, at its annual WWDC developer conference today. Both will be available to users later this year with iOS 12.

The tech company partnered with Pixar to develop the AR file format Universal Scene Description (USDZ) to streamline the process of sharing and accessing augmented reality files. USDZ will be compatible with tools like Adobe, Autodesk, Sketchfab, PTC, and Quixel. Adobe CTO Abhay Parasnis spoke briefly on stage about how the file format will have native Adobe Creative Cloud support, and described it as the first time “you’ll be able to have what you see is what you get (WYSIWYG) editing” for AR objects.

 

HTC’s New Vive Focus Headset Locker Aims to Put VR at the Forefront of Education in China — from roadtovr.com by Scott Hayden

With a starting focus on University-level education and vocational schools in sectors such as mechanical engineering, VivEdu branched out to K-12 education in 2018, boasting a comprehensive VR approach to learning science, technology, engineering, mathematics, and art for kids.

 

Apple takes augmented-reality gaming to the ‘next level’ with Lego and slingshot apps — from businessinsider.com by Isobel Asher Hamilton

Excerpt:

  • Apple hopes to take augmented-reality gaming to the “next level” with multiplayer apps.
  • The company used its developer’s conference to showcase Lego and slingshot games, built using its new and improved AR building software ARKit 2.
  • The Lego game allows you to create virtual worlds around your real-life Lego builds.

 

Apple Swift Shot hands-on — augmented reality goes multiplayer with ARKit 2.0 — from venturebeat.com by Dean Takahashi

 

 

Apple’s new AR features are proof that wearables are coming — from wired.com by Peter Rubin

Excerpt:

That roadmap, of course, is just beginning. Which is where the developers—and those arm’s-length iPads—come in. “They’re pushing AR onto phones to make sure they’re a winner when the headsets come around,” Miesnieks says of Apple. “You can’t wait for headsets and then quickly do 10 years’ worth of R&D on the software.”

 

Adobe’s Project Aero will let designers easily create AR content using existing Creative Cloud tools — from 9to5mac.com by Michael Steeber

Excerpt (emphasis DSC):

To fully realize the potential will require a broad ecosystem. Adobe is partnering with technology leaders to standardize interaction models and file formats in the rapidly growing AR ecosystem. We’re also working with leading platform vendors, open standards efforts like usdz and glTF as well as media companies and the creative community to deliver a comprehensive AR offering. usdz is now supported by Apple, Adobe, Pixar and many others while glTF is supported by Google, Facebook, Microsoft, Adobe and other industry leaders.

 

Create Floor Plans With IStaging VR Maker — from vrfocus.com by Rebecca Hills-Duty
Virtual tour app utilises ARKit technology to easily create floor plans.

Excerpt:

There are a number of professionals who would find the ability to quickly and easily create floor plans to be extremely useful. Estate agents, interior designers and event organisers would all no doubt find such a capability to be extremely valuable. For those users, the new feature added to iStaging’s VR Maker app might be of considerable interest.

The new VR Maker feature utilises Apple’s ARKit toolset to recognise spaces, such as walls and floors and can provide accurate measurements. By scanning each wall of a space, a floor plan can be produced quickly and easily.

 

 

Where is VR headed? Investors share insights on the industry’s trajectory — from venturebeat.com by Michael Park

Excerpt:

I’ve interviewed nine investors who have provided their insights on where the VR industry has come, as well as the risks and opportunities that exist in 2018 and beyond. We’ve asked them what opportunities are available in the space — and what tips they have for startups.

 

Can this explosion-proof AR headset change how industries do business? — from digitaltrends.com by Christian de Looper

Excerpt:

Augmented reality (AR) hasn’t truly permeated the mainstream consciousness yet, but the technology is swiftly being adopted by global industries. It’ll soon be unsurprising to find a pair of AR glasses strapped to a helmet sitting on the heads of service workers, and RealWear, a company at the forefront on developing these headsets, thinks it’s on the edge of something big.

VOICE ACTIVATION
What’s most impressive about the RealWear HMT-1Z1 is how you control it. There are no touch-sensitive gestures you need to learn — it’s all managed with voice, and better yet, there’s no need for a hotword like “Hey Google.” The headset listens for certain commands. For example, from the home screen just say “show my files” to see files downloaded to the device, and you can go back to the home screen by saying “navigate home.” When you’re looking at documents — like schematics — you can say “zoom in” or “zoom out” to change focus. It worked almost flawlessly, even in a noisy environment like the AWE show floor.

 

How Augmented and Virtual Reality (AVR) Can Benefit the Aviation Industry — from eonreality.com

Excerpt:

David Scowsill‘s experience in the aviation industry spans over 30 years. He has worked for British Airways, American Airlines, Easy Jet, Manchester Airport, and most recently the World Travel and Tourism Council, giving him a unique perspective on how Augmented and Virtual Reality (AVR) can impact the aviation industry.

These technologies have the power to transform the entire aviation industry, providing benefits to companies and consumers. From check-in, baggage drop, ramp operations and maintenance, to pilots and flight attendants, AVR can accelerate training, improve safety, and increase efficiency.

 

This VR project shows us how animals see the world — from thenextweb.com by Ailsa Sherrington

Excerpt:

London-based design studio Marshmallow Laser Feast is using VR to let us reconnect with nature. With headsets, you can see a forest through the eyes of different animals and experience the sensations they feel. Creative Director Ersinhan Ersin took the stage at TNW Conference last week to show us how and why they created the project, titled In the Eyes of the Animal.

 

The Future of AR/VR Headset Design is Hybrid — from medium.com by Christine Hart

Excerpt:

Have you already taken a side when it comes to XR wearables? Whether you prefer AR glasses or VR headsets likely depends on the application you need. But wouldn’t it be great to have a device that could perform as both? As XR tech advances, we think crossovers will start popping up around the world.

A Beijing startup called AntVR recently rocketed past its Kickstarter goal for an AR/VR visor. Their product, the Mix, uses tinted lenses to toggle between real world overlay and full immersion. It’s an exciting prospect. But rather than digging into the tech (or the controversy surrounding their name, their marketing, and a certain Marvel character) we’re looking at what this means for how XR devices are developed and sold.

 

Google Expeditions app now offers augmented reality tours — from techcrunch.com by Lucas Matney

Excerpt:

Google is bringing AR tech to its Expeditions app with a new update going live today. Last year, the company introduced its Google Expeditions AR Pioneer Program, which brought the app into classrooms across the country; with this launch the functionality is available to all.

Expeditions will have more than 100 AR tours in addition to the 800 VR tours already available. Examples include experiences that let users explore Leonardo Da Vinci’s inventions and ones that let you interact with the human skeletal system.

 

VR Wave Breaking Outside The Home — from forbes.com by Charlie Fink

Excerpt:

At four recent VR conferences and events there was a palpable sense that despite new home VR devices getting the majority of marketing and media attention this year, the immediate promise and momentum is in the location-based VR (LBVR) attractions industry. The VR Arcade Conference (April 29th and 30th), VRLA (May 4th and 5th), the Digital Entertainment Group’s May meeting (May 1), and FoIL (Future of Immersive Leisure, May 16th and 17th) all highlighted a topic that suddenly no one can stop talking about: location-based VR (LBVR). With hungry landlords giving great deals for empty retail locations, VRcades, which are inexpensive to open (like Internet Cafes), are popping up all over the country. As a result, VRcade royalties for developers are on the rise, so they are shifting their attention accordingly to shorter experiences optimized for LBVR, which is much less expensive than building a VR app for the home.

 

 

 

 

 

Below are some excerpted slides from Mary Meeker’s presentation…

Also see:

  • 20 important takeaways for learning world from Mary Meeker’s brilliant tech trends – from donaldclarkplanb.blogspot.com by Donald Clark
    Excerpt:
    Mary Meeker’s slide deck has a reputation of being the Delphic Oracle of tech. But, at 294 slides, it’s a lot to take in. Don’t worry, I’ve been through them all. It has tons on economic stuff that is of marginal interest to education and training, but there’s plenty to get our teeth into. We’re not immune to tech trends, indeed we tend to follow in lock-step, just a bit later than everyone else. Among the data are lots of fascinating insights that point the way forward in terms of what we’re likely to be doing over the next decade. So here’s a really quick, top-end summary for folk in the learning game.

 

“Educational content usage online is ramping fast” with over 1 billion daily educational videos watched. There is evidence that use of the Internet for informal and formal learning is taking off.

10 Big Takeaways From Mary Meeker’s Widely-Read Internet Report — from fortune.com by Leena Rao

© 2024 | Daniel Christian