Virtual digital assistants in the workplace: Still nascent, but developing — from cisco.com by Pat Brans
As workers get overwhelmed with daily tasks, they want virtual digital assistants in the workplace that can alleviate some of the burden.

Excerpts:

As life gets busier, knowledge workers are struggling with information overload.

They’re looking for a way out, and that way, experts say, will eventually involve virtual digital assistants (VDAs). Increasingly, workers need to complete myriad tasks, often seemingly simultaneously. And as the pace of business continues to accelerate, hands-free, intelligent technology that can speed administrative tasks holds obvious appeal.

So far, scenarios in which digital assistants in the workplace enhance productivity fall into three categories: scheduling, project management, and improved interfaces to enterprise applications. “Using digital assistants to perform scheduling has clear benefits,” Beccue said.

“Scheduling meetings and managing calendars takes a long time—many early adopters are able to quantify the savings they get when the scheduling is performed by a VDA. Likewise, when VDAs are used to track project status through daily standup meetings, project managers can easily measure the time saved.”

 

Perhaps the most important change we’ll see in future generations of VDA technology for workforce productivity will be the advent of general-purpose VDAs that help users with all tasks. These VDAs will be multi-channel (providing interfaces through mobile apps, messaging, telephone, and so on) and bi-modal (supporting both text and voice).

Reflections on “Are ‘smart’ classrooms the future?” [Johnston]

Are ‘smart’ classrooms the future? — from campustechnology.com by Julie Johnston
Indiana University explores that question by bringing together tech partners and university leaders to share ideas on how to design classrooms that make better use of faculty and student time.

Excerpt:

To achieve these goals, we are investigating smart solutions that will:

  • Untether instructors from the room’s podium, allowing them control from anywhere in the room;
  • Streamline the start of class, including biometric login to the room’s technology, behind-the-scenes routing of course content to room displays, control of lights and automatic attendance taking;
  • Offer whiteboards that can be captured, routed to different displays in the room and saved for future viewing and editing;
  • Provide small-group collaboration displays and the ability to easily route content to and from these displays; and
  • Deliver these features through a simple, user-friendly and reliable room/technology interface.

Activities included collaborative brainstorming focusing on these questions:

  • What else can we do to create the classroom of the future?
  • What current technology exists to solve these problems?
  • What could be developed that doesn’t yet exist?
  • What’s next?

From DSC:
Though many people’s eyes — including faculty members’ — gloss over when we start talking about learning spaces and smart classrooms, it’s still an important topic. Personally, I’d rather be learning in an engaging, exciting learning environment that’s outfitted with a variety of tools (physical as well as digital and virtual) that make sense for that community of learners. Also, faculty members have very limited time to get across campus, into the classroom, and get things set up…the more of that setup that can be automated, the better!

I’ve long posted items re: machine-to-machine communications, voice recognition/voice-enabled interfaces, artificial intelligence, bots, algorithms, a variety of vendors and their products including Amazon’s Alexa / Apple’s Siri / Microsoft’s Cortana / and Google’s Home or Google Assistant, learning spaces, and smart classrooms, as I do think those things are components of our future learning ecosystems.


Global installed base of smart speakers to surpass 200 million in 2020, says GlobalData

The global installed base for smart speakers will hit 100 million early next year, before surpassing the 200 million mark at some point in 2020, according to GlobalData, a leading data and analytics company.

The company’s latest report, ‘Smart Speakers – Thematic Research’, states that nearly every leading technology company is either already producing a smart speaker or developing one, with Facebook the latest to enter the fray (launching its Portal device this month). The appetite for smart speakers is also not limited by geography, with China in particular emerging as a major marketplace.

Ed Thomas, Principal Analyst for Technology Thematic Research at GlobalData, comments: “It is only four years since Amazon unveiled the Echo, the first wireless speaker to incorporate a voice-activated virtual assistant. Initial reactions were muted but the device, and the Alexa virtual assistant it contained, quickly became a phenomenon, with the level of demand catching even Amazon by surprise.”

Smart speakers give companies like Amazon, Google, Apple, and Alibaba access to a vast amount of highly valuable user data. They also allow users to get comfortable interacting with artificial intelligence (AI) tools in general, and virtual assistants in particular, increasing the likelihood that they will use them in other situations. Finally, they lock customers into a broader ecosystem, making it more likely that they will buy complementary products or access other services, such as online stores.

Thomas continues: “Smart speakers, particularly lower-priced models, are gateway devices, in that they give consumers the opportunity to interact with a virtual assistant like Amazon’s Alexa or Google’s Assistant, in a “safe” environment. For tech companies serious about competing in the virtual assistant sector, a smart speaker is becoming a necessity, hence the recent entry of Apple and Facebook into the market and the expected arrival of Samsung and Microsoft over the next year or so.”

In terms of the competitive landscape for smart speakers, Amazon was the pioneer and is still a dominant force, although its first-mover advantage has been eroded over the last year or so. Its closest challenger is Google, but neither company is present in the fastest-growing geographic market, China. Alibaba is the leading player there, with Xiaomi also performing well.

Thomas concludes: “With big names like Samsung and Microsoft expected to launch smart speakers in the next year or so, the competitive landscape will continue to fluctuate. It is likely that we will see two distinct markets emerge: the cheap, impulse-buy end of the spectrum, used by vendors to boost their ecosystems; and the more expensive, luxury end, where greater focus is placed on sound quality and aesthetics. This is the area of the market at which Apple has aimed the HomePod and early indications are that this is where Samsung’s Galaxy Home will also look to make an impact.”

Information based on GlobalData’s report: Smart Speakers – Thematic Research

An open letter to Microsoft and Google’s Partnership on AI — from wired.com by Gerd Leonhard
In a world where machines may have an IQ of 50,000, what will happen to the values and ethics that underpin privacy and free will?

Excerpt:

This open letter is my modest contribution to the unfolding of this new partnership. Data is the new oil – which now makes your companies the most powerful entities on the globe, way beyond oil companies and banks. The rise of ‘AI everywhere’ is certain to only accelerate this trend. Yet unlike the giants of the fossil-fuel era, there is little oversight on what exactly you can and will do with this new data-oil, and what rules you’ll need to follow once you have built that AI-in-the-sky. There appears to be very little public stewardship, while accepting responsibility for the consequences of your inventions is rather slow in surfacing.

 

In a world where machines may have an IQ of 50,000 and the Internet of Things may encompass 500 billion devices, what will happen with those important social contracts, values and ethics that underpin crucial issues such as privacy, anonymity and free will?

 

 

My book identifies what I call the “Megashifts”. They are changing society at warp speed, and your organisations are in the eye of the storm: digitization, mobilisation and screenification, automation, intelligisation, disintermediation, virtualisation and robotisation, to name the most prominent. Megashifts are not simply trends or paradigm shifts, they are complete game changers transforming multiple domains simultaneously.

 

 

If the question is no longer about if technology can do something, but why…who decides this?

Gerd Leonhard

 

 

From DSC:
Though this letter was written two years ago, back in October 2016, the messages, reflections, and questions that Gerd puts on the table are very much still relevant today. The leaders of these powerful companies have enormous power — power to do good, or to do evil. Power to help or power to hurt. Power to be a positive force for societies throughout the globe and to help create dreams, or power to create dystopian societies while developing a future filled with nightmares. The state of the human heart is key here — though many will hate me saying that. But it’s true. At the end of the day, we need to care very much about — and be extremely aware of — the characters and values of the leaders of these powerful companies.

 

 

Also relevant/see:

Spray-on antennas will revolutionize the Internet of Things — from networkworld.com by Patrick Nelson
Researchers at Drexel University have developed a method to spray on antennas that outperform traditional metal antennas, opening the door to faster and easier IoT deployments.

From DSC:
Again, it’s not too hard to imagine in this arena that technologies can be used for good or for ill.

 

 

Jarvish’s smart motorcycle helmets will offer Alexa and Siri support and an AR display — from theverge.com by Chaim Gartenberg

Excerpt:

The Jarvish X is the more basic of the two models. It offers integrated microphones and speakers for Siri, Google Assistant, and Alexa support, so wearers can get directions and weather updates and control music by voice. There’s also a 2K, front-facing camera built into the helmet so you can record your ride. It’s set to cost $799 when it hits Kickstarter in January.

Blackboard, Apple mobile student ID has arrived — from cr80news.com by Andrew Hudson
Mobile Credential officially goes live at launch campuses

Excerpt:

We’ve officially reached the kickoff of Blackboard’s long-standing vision for the mobile student ID. Starting today on the campuses of the University of Alabama, Duke University and the University of Oklahoma, Blackboard, with the aid of Apple, is enabling students to use mobile credentials everywhere their plastic ID card was previously accepted.

[On 10/2/18], for the first time, iPhones and Apple Watches are enabling users to navigate the full range of transactions both on and off campus. At these three launch institutions, students can add their official student ID card to Apple Wallet to make purchases, authenticate for privileges, as well as enable physical access to dorms, rec centers, libraries and academic buildings.

 
NEW: The Top Tools for Learning 2018 [Jane Hart]

The Top Tools for Learning 2018 from the 12th Annual Digital Learning Tools Survey -- by Jane Hart

 

The above was from Jane’s posting 10 Trends for Digital Learning in 2018 — from modernworkplacelearning.com by Jane Hart

Excerpt:

[On 9/24/18], I released the Top Tools for Learning 2018, which I compiled from the results of the 12th Annual Digital Learning Tools Survey.

I have also categorised the tools into 30 different areas, and produced 3 sub-lists that provide some context to how the tools are being used:

  • Top 100 Tools for Personal & Professional Learning 2018 (PPL100): the digital tools used by individuals for their own self-improvement, learning and development – both inside and outside the workplace.
  • Top 100 Tools for Workplace Learning (WPL100): the digital tools used to design, deliver, enable and/or support learning in the workplace.
  • Top 100 Tools for Education (EDU100): the digital tools used by educators and students in schools, colleges, universities, adult education etc.

 

3 – Web courses are increasing in popularity.
Although Coursera is still the most popular web course platform, there are, in fact, now 12 web course platforms on the list. New additions this year include Udacity and Highbrow (the latter provides daily micro-lessons). It is clear that people like these platforms because they can choose what they want to study as well as how they want to study, i.e. they can dip in and out if they want to and no one is going to tell them off – unlike most corporate online courses, which have a prescribed path through them and are heavily monitored.

 

 

5 – Learning at work is becoming personal and continuous.
The most significant feature of the list this year is the huge leap that Degreed has made – up 86 places to 47th, the biggest increase by any tool this year. Degreed is a lifelong learning platform that provides the opportunity for individuals to own their expertise and development through a continuous learning approach. And, interestingly, Degreed appears on both the PPL100 (at 30) and the WPL100 (at 52). This suggests that some organisations are beginning to see the importance of personal, continuous learning at work. Indeed, another platform that underpins this, Anders Pink, has also moved up the list significantly this year. Anders Pink is a smart curation platform available for both individuals and teams which delivers daily curated resources on specified topics. Non-traditional learning platforms are therefore coming to the forefront, as the next point further shows.

 

 

From DSC:
Perhaps some foreshadowing of the presence of a powerful, online-based, next generation learning platform…?

Microsoft's conference room of the future

 

From DSC:
Microsoft’s conference room of the future “listens” to the conversations of the team and provides a transcript of the meeting. It also is using “artificial intelligence tools to then act on what meeting participants say. If someone says ‘I’ll follow up with you next week,’ then they’ll get a notification in Microsoft Teams, Microsoft’s Slack competitor, to actually act on that promise.”
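That “act on what meeting participants say” step can be pictured as phrase-spotting over a transcript. The sketch below is a toy illustration of the idea only, not Microsoft’s actual pipeline; the phrase patterns and the sample meeting are invented:

```python
import re

# Toy sketch (not Microsoft's actual system): scan a speaker-labeled
# transcript for commitment phrases and turn them into follow-up tasks.
COMMITMENT_PATTERNS = [
    re.compile(r"\bI(?:'ll| will) (?:follow up|send|schedule|share)\b.*",
               re.IGNORECASE),
]

def extract_action_items(transcript):
    """transcript: list of (speaker, utterance) tuples."""
    tasks = []
    for speaker, utterance in transcript:
        for pattern in COMMITMENT_PATTERNS:
            match = pattern.search(utterance)
            if match:
                # The matched span becomes the text of the follow-up task.
                tasks.append({"owner": speaker, "commitment": match.group(0)})
    return tasks

meeting = [
    ("Ana", "Thanks everyone for joining."),
    ("Ben", "I'll follow up with you next week on the budget."),
]
print(extract_action_items(meeting))
```

A production system would replace the regexes with a trained language model and route the resulting task to a notification service, but the shape of the problem (transcribe, detect, act) is the same.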

This made me wonder about our learning spaces in the future. Will an #AI-based device/cloud-based software app — in real-time — be able to “listen” to the discussion in a classroom and present helpful resources in the smart classroom of the future (i.e., websites, online-based databases, journal articles, and more)?

Will this be a feature of a next generation learning platform as well (i.e., addressing the online-based learning realm)? Will this be a piece of an intelligent tutor or an intelligent system?

Hmmm…time will tell.

Also see this article out at Forbes.com entitled, “There’s Nothing Artificial About How AI Is Changing The Workplace.” 

Here is an excerpt:

The New Meeting Scribe: Artificial Intelligence

As I write this, AI has already begun to make video meetings even better. You no longer have to spend time entering codes or clicking buttons to launch a meeting. Instead, with voice-based AI, video conference users can start, join or end a meeting by simply speaking a command (think about how you interact with Alexa).

Voice-to-text transcription, another artificial intelligence feature offered by Otter Voice Meeting Notes (from AISense, a Zoom partner), Voicefox and others, can take notes during video meetings, leaving you and your team free to concentrate on what’s being said or shown. AI-based voice-to-text transcription can identify each speaker in the meeting and save you time by letting you skim the transcript, search and analyze it for certain meeting segments or words, then jump to those mentions in the script. Over 65% of respondents from the Zoom survey said they think AI will save them at least one hour a week of busy work, with many claiming it will save them one to five hours a week.
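The skim-and-jump workflow described above boils down to searching speaker- and time-stamped transcript segments. Here is a minimal illustration; the segment format and sample data are invented for the example and are not any vendor’s API:

```python
# Illustrative sketch of transcript search: each segment carries a speaker
# label and a start time, so a keyword search can "jump" straight to the
# matching moments in the recording.
def search_transcript(segments, term):
    """segments: list of dicts with 'start_sec', 'speaker', 'text'."""
    term = term.lower()
    return [s for s in segments if term in s["text"].lower()]

segments = [
    {"start_sec": 12, "speaker": "Speaker 1", "text": "Let's review the Q3 roadmap."},
    {"start_sec": 95, "speaker": "Speaker 2", "text": "The roadmap slips if hiring slips."},
    {"start_sec": 140, "speaker": "Speaker 1", "text": "Action items before we wrap up?"},
]

for hit in search_transcript(segments, "roadmap"):
    print(f"{hit['start_sec']}s {hit['speaker']}: {hit['text']}")
```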

 

 
For museums, augmented reality is the next frontier — from wired.com by Arielle Pardes

Excerpt:

Mae Jemison, the first woman of color to go into space, stood in the center of the room and prepared to become digital. Around her, 106 cameras captured her image in 3-D, which would later render her as a life-sized hologram when viewed through a HoloLens headset.

Jemison was recording what would become the introduction for a new exhibit at the Intrepid Sea, Air, and Space Museum, which opens tomorrow as part of the Smithsonian’s annual Museum Day. In the exhibit, visitors will wear HoloLens headsets and watch Jemison materialize before their eyes, taking them on a tour of the Space Shuttle Enterprise—and through space history. They’re invited to explore artifacts both physical (like the Enterprise) and digital (like a galaxy of AR stars) while Jemison introduces women throughout history who have made important contributions to space exploration.

Interactive museum exhibits like this are becoming more common as augmented reality tech becomes cheaper, lighter, and easier to create.

 

 

Oculus will livestream its 5th Connect Conference on Oculus Venues — from vrscout.com by Kyle Melnick

Excerpt (emphasis DSC):

Using either an Oculus Go standalone device or a mobile Gear VR headset, users will be able to log in to the Oculus Venues app and join other users for an immersive live stream of various developer keynotes and adrenaline-pumping esports competitions.

 

From DSC:
What are the ramifications of this for the future of webinars, teaching and learning, online learning, MOOCs and more…?

10 new AR features in iOS 12 for iPhone & iPad — from mobile-ar.reality.news by Justin Meyers

Excerpt:

Apple’s iOS 12 has finally landed. The big update appeared for everyone on Monday, Sept. 17, and hiding within are some pretty amazing augmented reality upgrades for iPhones, iPads, and iPod touches. We’ve been playing with them ever since the iOS 12 beta launched in June, and here are the things we learned that you’ll want to know about.

For now, here’s everything AR-related that Apple has included in iOS 12. There are some new features aimed to please AR fanatics as well as hook those new to AR into finally getting with the program. But all of the new AR features rely on ARKit 2.0, the latest version of Apple’s augmented reality framework for iOS.

 

 

Berkeley College Faculty Test VR for Learning — from campustechnology.com by Dian Schaffhauser

Excerpt:

In a pilot program at Berkeley College, members of a Virtual Reality Faculty Interest Group tested the use of virtual reality to immerse students in a variety of learning experiences. During winter 2018, seven different instructors in nearly as many disciplines used inexpensive Google Cardboard headsets along with apps on smartphones to virtually place students in North Korea, a taxicab and other environments as part of their classwork.

Participants used free mobile applications such as Within, the New York Times VR, Discovery VR, Jaunt VR and YouTube VR. Their courses included critical writing, international business, business essentials, medical terminology, international banking, public speaking and crisis management.

 
The Mobile AR Leaders of 2018 — from next.reality.news

Excerpt:

This time last year, we were getting our first taste of what mobile app developers could do in augmented reality with Apple’s ARKit, and most people had never heard of Animojis. Google’s AR platform was still Tango. Snapchat had just introduced its World Lens AR experiences. Most mobile AR experiences existing in the wild were marker-based offerings from the likes of Blippar and Zappar, or generic Pokémon GO knock-offs.

In last year’s NR50, published before the introduction of ARKit, only two of the top 10 professionals worked directly with mobile AR, and Apple CEO Tim Cook was ranked number 26, based primarily on his forward-looking statements about AR.

This year, Cook comes in at number one, with five others categorized under mobile AR in the overall top 10 of the NR30.

What a difference a year makes.

In just 12 months, we’ve seen mobile AR grow at a breakneck pace. Since Apple launched its AR toolkit, users have downloaded more than 13 million ARKit apps from the App Store, not including existing apps updated with ARKit capabilities. Apple has already updated its platform and will introduce even more new features to the public with the release of ARKit 2.0 this fall. Last year’s iPhone X also introduced a depth-sensing camera and AR Animojis that captured the imaginations of its users.

 

 

The Weather Channel forecasts more augmented reality for its live broadcasts with Unreal Engine — from next.reality.news by Tommy Palladino

Excerpt:

Augmented reality made its live broadcast debut for The Weather Channel in 2015. The technology helps on-air talent at the network to explain the science behind weather phenomena and tell more immersive stories. Powered by Unreal Engine, The Future Group’s Frontier platform will enable The Weather Channel to be able to show even more realistic AR content, such as accurately rendered storms and detailed cityscapes, all in real time.

 
From DSC:
Imagine this type of thing in online-based learning, MOOCs, and/or even in blended learning environments (i.e., in situations where learning materials are designed/created by teams of specialists). If that were the case, who needs to be trained to create these pieces? Will students be creating these types of pieces in the future? Hmmm….

 

 

Winners announced of the 2018 Journalism 360 Challenge — from vrfocus.com
The question of “How might we experiment with immersive storytelling to advance the field of journalism?” looks to be answered by 11 projects.

Excerpt:

The eleven winners of a contest held by the Google News Initiative, Knight Foundation and Online News Association were announced on 9/11/18. The 2018 Journalism 360 Challenge asked people the question “How might we experiment with immersive storytelling to advance the field of journalism?” and generated over 400 responses.

 

 

Addendum:

Educause Explores Future of Extended Reality on Campus — from campustechnology.com by Dian Schaffhauser

Among the findings:

  • VR makes people feel like they’re really there. The “intellectual and physiological reactions” to constructs and events in VR are the same — “and sometimes identical” — to a person’s reactions in the real world;
  • 3D technologies facilitate active and experiential learning. AR, for example, lets users interact with an object in ways that aren’t possible in the physical world — such as seeing through surfaces or viewing data about underlying objects. And with 3D printing, learners can create “physical objects that might otherwise exist only in simulations”; and
  • Simulations allow for scaling up of “high-touch, high-cost learning experiences.” Students may be able to go through virtual lab activities, for instance, even when a physical lab isn’t available.

Common challenges included implementation learning curves, instructional design, data storage of 3D images and effective cross-departmental collaboration.

“One significant result from this research is that it shows that these extended reality technologies are applicable across a wide spectrum of academic disciplines,” said Malcolm Brown, director of learning initiatives at Educause, in a statement. “In addition to the scientific disciplines, students in the humanities, for example, can re-construct cities and structures that no longer exist. I think this study will go a long way in encouraging faculty, instructional designers and educational technologists across higher education to further experiment with these technologies to vivify learning experiences in nearly all courses of study.”

Everything you need to know about those new iPhones — from wired.com by Arielle Pardes

Excerpt:

Actually, make that three new iPhones. Apple followed last year’s iPhone X with the iPhone Xs, iPhone Xs Max, and iPhone Xr. It also spent some time showing off the Apple Watch Series 4, its most powerful wearable yet. Missed the event? Catch our commentary on WIRED’s liveblog, or read on for everything you need to know about today’s big Apple event.

 

 

Apple’s latest iPhones are packed with AI smarts — from wired.com by Tom Simonite

Excerpt:

At a glance the three new iPhones unveiled next to Apple’s glassy circular headquarters Wednesday look much like last year’s iPhone X. Inside, the devices’ computational guts got an invisible but more significant upgrade.

Apple’s phones come with new chip technology with a focus on helping the devices understand the world around them using artificial intelligence algorithms. The company says the improvements allow the new devices to offer slicker camera effects and augmented reality experiences.

For the first time, non-Apple developers will be allowed to run their own algorithms on Apple’s AI-specific hardware.

 

 

Apple Watch 4 adds ECG, EKG, and more heart-monitoring capabilities — from wired.com by Lauren Goode

Excerpt:

The new Apple Watch Series 4, revealed by Apple earlier today, underscores that some of the watch’s most important features are its health and fitness-tracking functions. The new watch is one of the first over-the-counter devices in the US to offer electrocardiogram, or ECG, readings. On top of that, the Apple Watch has received FDA clearance—both for the ECG feature and another new feature that detects atrial fibrillation.

 

Three AI and machine learning predictions for 2019 — from forbes.com by Daniel Newman

Excerpt:

What could we potentially see next year? New and innovative uses for machine learning? Further evolution of human and machine interaction? The rise of AI assistants? Let’s dig deeper into AI and machine learning predictions for the coming months.

 

2019 will be a year of development for the AI assistant, showing us just how powerful and useful these tools are. It will be in more places than your home and your pocket too. Companies such as Kia and Hyundai are planning to include AI assistants in their vehicles starting in 2019. Sign me up for a new car! I’m sure that Google, Apple, and Amazon will continue to make advancements to their AI assistants making our lives even easier.

 

 

DeepMind AI matches health experts at spotting eye diseases — from engadget.com by Nick Summers

Excerpt:

DeepMind has successfully developed a system that can analyze retinal scans and spot symptoms of sight-threatening eye diseases. Today, the AI division — owned by Google’s parent company Alphabet — published “early results” of a research project with the UK’s Moorfields Eye Hospital. They show that the company’s algorithms can quickly examine optical coherence tomography (OCT) scans and make diagnoses with the same accuracy as human clinicians. In addition, the system can show its workings, allowing eye care professionals to scrutinize the final assessment.

 

 

Microsoft and Amazon launch Alexa-Cortana public preview for Echo speakers and Windows 10 PCs — from venturebeat.com by Khari Johnson

Excerpt:

Microsoft and Amazon will bring Alexa and Cortana to all Echo speakers and Windows 10 users in the U.S. [on 8/15/18]. As part of a partnership between the Seattle-area tech giants, you can say “Hey Cortana, open Alexa” to Windows 10 PCs and “Alexa, open Cortana” to a range of Echo smart speakers.

The public preview bringing the most popular AI assistant on PCs together with the smart speaker with the largest U.S. market share will be available to most people today but will be rolled out to all users in the country over the course of the next week, a Microsoft spokesperson told VentureBeat in an email.

Each of the assistants brings unique features to the table. Cortana, for example, can schedule a meeting with Outlook, create location-based reminders, or draw on LinkedIn to tell you about people in your next meeting. And Alexa has more than 40,000 voice apps or skills made to tackle a broad range of use cases.

 

 

What Alexa can and cannot do on a PC — from venturebeat.com by Khari Johnson

Excerpt:

Whatever happened to the days of Alexa just being known as a black cylindrical speaker? Since the introduction of the first Echo in fall 2014, Amazon’s AI assistant has been embedded in a number of places, including car infotainment systems, Alexa smartphone apps, wireless headphones, Echo Show and Fire tablets, Fire TV Cube for TV control, the Echo Look with an AI-powered fashion assistant, and, in recent weeks, personal computers.

Select computers from HP, Acer, and others now make Alexa available to work seamlessly alongside Microsoft’s Cortana well ahead of the Alexa-Cortana partnership for Echo speakers and Windows 10 devices, a project that still has no launch date.

 

 

8 great iPad audio recording apps for teachers & students — from educatorstechnology.com

Excerpt:

For those of you asking about audio recording apps to use on iPad, here is a list of some of the best options out there. Whether you want to record a lecture, an audio note, a memo, or simply capture ideas and thoughts as they happen, the apps below provide you with the necessary technology to do so, and in the easiest and most effective way.

 

Augmented and virtual reality mean business: Everything you need to know — from zdnet by Greg Nichols
An executive guide to the technology and market drivers behind the hype in AR, VR, and MR.

Excerpt:

Overhyped by some, drastically underestimated by others, few emerging technologies have generated the digital ink like virtual reality (VR), augmented reality (AR), and mixed reality (MR). Still lumbering through the novelty phase and roller-coaster-like hype cycles, the technologies are only just beginning to show signs of real-world usefulness with a new generation of hardware and software applications aimed at the enterprise and at end users like you. On the line is what could grow to be a $108 billion AR/VR industry as soon as 2021. Here’s what you need to know.

 

The reason is that VR environments by nature demand a user’s full attention, which make the technology poorly suited to real-life social interaction outside a digital world. AR, on the other hand, has the potential to act as an on-call co-pilot to everyday life, seamlessly integrating into daily real-world interactions. This will become increasingly true with the development of the AR Cloud.

The AR Cloud
Described by some as the world’s digital twin, the AR Cloud is essentially a digital copy of the real world that can be accessed by any user at any time.

For example, it won’t be long before whatever device I have on me at a given time (a smartphone or wearable, for example) will be equipped to tell me all I need to know about a building just by training a camera at it (GPS is operating as a poor-man’s AR Cloud at the moment).

What the internet is for textual information, the AR Cloud will be for the visible world. Whether it will be open source or controlled by a company like Google is a hotly contested issue.
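The “GPS as a poor man’s AR Cloud” idea above can be sketched as a nearest-annotated-place lookup: given the device’s coordinates, return what is known about the closest point of interest. A real AR Cloud would localize against a shared 3-D map of camera imagery rather than coordinates alone; the place data and function names here are purely illustrative:

```python
import math

# Toy "poor man's AR Cloud": annotations keyed by location, looked up
# by the device's GPS position. All place data below is made up.
PLACES = [
    {"name": "City Library", "lat": 40.7536, "lon": -73.9832, "note": "Opens 9am"},
    {"name": "Transit Hub", "lat": 40.7505, "lon": -73.9935, "note": "Lines A/C/E"},
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_place(lat, lon):
    """Return the annotated place closest to the given coordinates."""
    return min(PLACES, key=lambda p: haversine_m(lat, lon, p["lat"], p["lon"]))

print(nearest_place(40.7530, -73.9840)["name"])
```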

 

Augmented reality will have a bigger impact on the market and our daily lives than virtual reality — and by a long shot. That’s the consensus of just about every informed commentator on the subject.

 

Mixed reality will transform learning (and Magic Leap joins act one) — from edsurge.com by Maya Georgieva

Excerpt:

Despite all the hype in recent years about the potential for virtual reality in education, an emerging technology known as mixed reality has far greater promise in and beyond the classroom.

Unlike experiences in virtual reality, mixed reality interacts with the real world that surrounds us. Digital objects become part of the real world. They’re not just digital overlays, but interact with us and the surrounding environment.

If all that sounds like science fiction, a much-hyped device promises some of those features later this year. The device is by a company called Magic Leap, and it uses a pair of goggles to project what the company calls a “lightfield” in front of the user’s face to make it look like digital elements are part of the real world. The expectation is that Magic Leap will bring digital objects in a much more vivid, dynamic and fluid way compared to other mixed-reality devices such as Microsoft’s Hololens.

 

Now think about all the other things you wished you had learned this way and imagine a dynamic digital display that transforms your environment and even your living room or classroom into an immersive learning lab. It is learning within a highly dynamic and visual context infused with spatial audio cues reacting to your gaze, gestures, gait, voice and even your heartbeat, all referenced with your geo-location in the world. Unlike what happens with VR, where our brain is tricked into believing the world and the objects in it are real, MR recognizes and builds a map of your actual environment.

 

Also see:

virtualiteach.com
Exploring The Potential for the Vive Focus in Education

 

Digital Twins Doing Real World Work — from stambol.com

Excerpt:

On the big screen it’s become commonplace to see a 3D rendering or holographic projection of an industrial floor plan or a mechanical schematic. Casual viewers might take for granted that the technology is science fiction and many years away from reality. But today we’re going to outline where these sophisticated virtual replicas – Digital Twins – are found in the real world, here and now. Essentially, we’re talking about a responsive simulated duplicate of a physical object or system. When we first wrote about Digital Twin technology, we mainly covered industrial applications and urban infrastructure like transit and sewers. However, the full scope of their presence is much broader, so now we’re going to break it up into categories.

 

Digital twin — from Wikipedia

Digital twin refers to a digital replica of physical assets (physical twin), processes and systems that can be used for various purposes.[1] The digital representation provides both the elements and the dynamics of how an Internet of Things device operates and lives throughout its life cycle.[2]

Digital twins integrate artificial intelligence, machine learning and software analytics with data to create living digital simulation models that update and change as their physical counterparts change. A digital twin continuously learns and updates itself from multiple sources to represent its near real-time status, working condition or position. This learning system learns from itself, using sensor data that conveys various aspects of its operating condition; from human experts, such as engineers with deep and relevant industry domain knowledge; from other similar machines; from other similar fleets of machines; and from the larger systems and environment of which it may be a part. A digital twin also integrates historical data from past machine usage to factor into its digital model.

In various industrial sectors, twins are being used to optimize the operation and maintenance of physical assets, systems and manufacturing processes.[3] They are a formative technology for the Industrial Internet of Things, where physical objects can live and interact with other machines and people virtually.[4]
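The update loop described above, a digital replica that ingests sensor readings and flags maintenance needs, can be sketched minimally. The `PumpTwin` class, its fields, and the 80 °C threshold below are all invented for illustration; real twins use learned models rather than a single hand-set rule.

```python
from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    """Minimal digital twin of a pump: mirrors sensor state over its life cycle."""
    temperature_c: float = 20.0
    history: list = field(default_factory=list)  # past readings, per the article

    def ingest(self, reading: dict) -> None:
        # Update the near-real-time state from a new sensor reading.
        self.temperature_c = reading["temperature_c"]
        self.history.append(reading)

    def needs_maintenance(self, limit_c: float = 80.0) -> bool:
        # A simple rule standing in for the learned models the excerpt describes.
        return self.temperature_c > limit_c

twin = PumpTwin()
twin.ingest({"temperature_c": 85.5})
print(twin.needs_maintenance())  # True: temperature exceeds the 80 °C limit
```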

 

 

Disney to debut its first VR short next month — from techcrunch.com by Sarah Wells

Excerpt:

Walt Disney Animation Studio is set to debut its first VR short film, Cycles, this August in Vancouver, the Association for Computing Machinery announced today. The plan is for it to be a headliner at the ACM’s computer graphics conference (SIGGRAPH), joining other forms of VR, AR and MR entertainment in the conference’s designated Immersive Pavilion.

This film is a first for both Disney and its director, Jeff Gipson, who joined the animation team in 2013 to work as a lighting artist on films like Frozen, Zootopia and Moana. The objective of this film, Gipson said in the statement released by ACM, is to inspire a deep emotional connection with the story.

“We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story,” said Gipson.

 

Computers that never forget a face — from Future Today Institute

Excerpts:

In August, the U.S. Customs and Border Protection will roll out new technology that will scan the faces of drivers as they enter and leave the United States. For years, accomplishing that kind of surveillance through a car windshield has been difficult. But technology is quickly advancing. This system, activated by ambient light sensors, range finders and remote speedometers, uses smart cameras and AI-powered facial recognition technology to compare images in government files with people behind the wheel.

Biometric borders are just the beginning. Faceprints are quickly becoming our new fingerprints, and this technology is marching forward with haste. Faceprints are now so advanced that machine learning algorithms can recognize your unique musculatures and bone structures, capillary systems, and expressions using thousands of data points. All the features that make up a unique face are being scanned, captured and analyzed to accurately verify identities. New hairstyle? Plastic surgery? They don’t interfere with the technology’s accuracy.

Why you should care. Faceprints are already being used across China for secure payments. Soon, they will be used to customize and personalize your digital experiences. Our Future Today Institute modeling shows myriad near-future applications, including the ability to unlock your smart TV with your face. Retailers will use your face to personalize your in-store shopping experience. Auto manufacturers will start using faceprints to detect if drivers are under the influence of drugs or alcohol and prevent them from driving. It’s plausible that cars will soon detect if a driver is distracted and take the wheel using an auto-pilot feature. On a diet but live with others? Stash junk food in a drawer and program the lock to restrict your access. Faceprints will soon create opportunities for a wide range of sectors, including military, law enforcement, retail, manufacturing and security. But as with all technology, faceprints could lead to the loss of privacy and widespread surveillance.

It’s possible for both risk and opportunity to coexist. The point here is not alarmist hand-wringing, or pointless calls to cease and desist from the development and use of faceprint technology. Instead, it’s to acknowledge an important emerging trend, faceprints, and to think about the associated risks and opportunities for you and your organization well in advance. Approach biometric borders and faceprints with your (biometrically unique) eyes wide open.
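Under the hood, verification systems of this kind typically reduce a face to a numeric embedding and compare embeddings for closeness. A minimal sketch using cosine similarity follows; the four-dimensional vectors and the 0.9 threshold are made up for illustration, whereas real systems use learned embeddings with hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    # Cosine of the angle between two embedding vectors (1.0 = identical direction).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify(probe: list, enrolled: list, threshold: float = 0.9) -> bool:
    # Identity is confirmed when the embeddings are close enough.
    return cosine_similarity(probe, enrolled) >= threshold

enrolled = [0.1, 0.9, 0.3, 0.5]
same_person = [0.12, 0.88, 0.31, 0.49]  # new hairstyle, same underlying geometry
stranger = [0.9, 0.1, 0.7, 0.2]
print(verify(same_person, enrolled))  # True
print(verify(stranger, enrolled))     # False
```

This is also why, as the article notes, a haircut does not defeat the system: surface changes barely move an embedding built on musculature and bone structure.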

Near-Futures Scenarios (2018 – 2028):

Optimistic: Faceprints make us safer, and they bring us back to physical offices and stores.

Pragmatic: As faceprint adoption grows, legal challenges mount. 
In April, a U.S. federal judge ruled that Facebook must confront a class-action lawsuit that alleges its faceprint technology violates Illinois state privacy laws. Last year, a U.S. federal judge allowed a class-action suit to go forth against Shutterfly, claiming the company violated the Illinois Biometric Information Privacy Act, which ensures companies receive written releases before collecting biometric data, including faces. Companies and device manufacturers, who are early developers but late to analyzing legal outcomes, are challenged to balance consumer privacy with new security benefits.

Catastrophic: Faceprints are used for widespread surveillance and authoritarian control.

 

How AI is helping sports teams scout star players — from nbcnews.com by Edd Gent
Professional baseball, basketball and hockey are among the sports now using AI to supplement traditional coaching and scouting.

 

Preparing students for workplace of the future  — from educationdive.com by Shalina Chatlani

Excerpt:

The workplace of the future will be marked by unprecedentedly advanced technologies, as well as a focus on incorporating artificial intelligence to drive higher levels of production with fewer resources. Employers and education stakeholders, noting the reality of this trend, are turning a reflective eye toward current students and questioning whether they will be workforce ready in the years to come.

This has become a significant concern for higher education executives, who find their business models could be disrupted as they fail to meet workforce demands. A 2018 Gallup-Northeastern University survey shows that of 3,297 U.S. citizens interviewed, only 22% with a bachelor’s degree said their education left them “well” or “very well prepared” to use AI in their jobs.

In his book “Robot-Proof: Higher Education in the Age of Artificial Intelligence,” Northeastern University President Joseph Aoun argued that for higher education to adapt to advanced technologies, it has to focus on lifelong learning, which he says prepares students for the future by fostering purposeful integration of technical literacies, such as coding and data literacy, with human literacies, such as creativity, ethics, cultural agility and entrepreneurship.

“When students combine these literacies with experiential components, they integrate their knowledge with real life settings, leading to deep learning,” Aoun told Forbes.

 

 

Amazon’s A.I. camera could help people with memory loss recognize old friends and family — from cnbc.com by Christina Farr

  • Amazon’s DeepLens is a smart camera that can recognize objects in front of it.
  • One software engineer, Sachin Solkhan, is trying to figure out how to use it to help people with memory loss.
  • Users would carry the camera to help them recognize people they know.

 

 

Microsoft acquired an AI startup that helps it take on Google Duplex — from qz.com by Dave Gershgorn

Excerpt:

We’re going to talk to our technology, and everyone else’s too. Google proved that earlier this month with a demonstration of artificial intelligence that can hop on the phone to book a restaurant reservation or appointment at the hair salon.

Now it’s just a matter of who can build that technology fastest. To reach that goal, Microsoft has acquired conversational AI startup Semantic Machines for an undisclosed amount. Founded in 2014, the startup’s goal was to build AI that can converse with humans through speech or text, with the ability to be trained to converse in any language and on any subject.

 

 

Researchers developed an AI to detect DeepFakes — from thenextweb.com by Tristan Greene

Excerpt:

A team of researchers from the State University of New York (SUNY) recently developed a method for detecting whether the people in a video are AI-generated. It looks like DeepFakes could meet its match.

What it means: Fear over whether computers will soon be able to generate videos that are indistinguishable from real footage may be much ado about nothing, at least with the currently available methods.

The SUNY team observed that the training method for creating AI that makes fake videos involves feeding it images – not video. This means that certain human physiological quirks – like breathing and blinking – don’t show up in computer-generated videos. So they decided to build an AI that uses computer vision to detect blinking in fake videos.
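The SUNY team trained a computer-vision model to spot blinking; as a toy stand-in for that idea (not their method), blink counting over a per-frame eye-openness signal can be sketched with a simple threshold. The signal values and the 0.2 cutoff below are invented.

```python
def count_blinks(eye_openness: list, closed_below: float = 0.2) -> int:
    """Count blinks in a per-frame eye-openness signal (1.0 = fully open)."""
    blinks, closed = 0, False
    for value in eye_openness:
        if value < closed_below and not closed:
            blinks += 1      # eye just closed: start of a new blink
            closed = True
        elif value >= closed_below:
            closed = False   # eye reopened
    return blinks

real_footage = [1.0, 0.9, 0.1, 0.05, 0.8, 1.0, 0.1, 0.9]    # two blinks
deepfake     = [1.0, 0.95, 0.9, 1.0, 0.92, 0.97, 0.9, 1.0]  # eyes never close
print(count_blinks(real_footage), count_blinks(deepfake))  # 2 0
```

A video of a person who never blinks is the physiological tell the researchers exploit: the generator was trained on still photos, which almost always show open eyes.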

 

 

Bringing It Down To Earth: Four Ways Pragmatic AI Is Being Used Today — from forbes.com by Carlos Melendez

Excerpt:

Without even knowing it, we are interacting with pragmatic AI day in and day out. It is used in the automated chatbots that answer our calls and questions and the customer service rep that texts with us on a retail site, providing a better and faster customer experience.

Below are four key categories of pragmatic AI and ways they are being applied today.

1. Speech Recognition And Natural Language Processing (NLP)
2. Predictive Analytics
3. Image Recognition And Computer Vision
4. Self-Driving Cars And Robots

 

 

Billable Hour ‘Makes No Sense’ in an AI World — from biglawbusiness.com by Helen Gunnarsson

Excerpt:

Artificial intelligence (AI) is transforming the practice of law, and “data is the new oil” of the legal industry, panelist Dennis Garcia said at a recent American Bar Association conference. Garcia is an assistant general counsel for Microsoft in Chicago. Robert Ambrogi, a Massachusetts lawyer and blogger who focuses on media, technology, and employment law, moderated the program.

“The next generation of lawyers is going to have to understand how AI works” as part of the duty of competence, panelist Anthony E. Davis told the audience. Davis is a partner with Hinshaw & Culbertson LLP in New York.

Davis said AI will result in dramatic changes in law firms’ hiring and billing, among other things. The hourly billing model, he said, “makes no sense in a universe where what clients want is judgment.” Law firms should begin to concern themselves not with the degrees or law schools attended by candidates for employment but with whether they are “capable of developing judgment, have good emotional intelligence, and have a technology background so they can be useful” for long enough to make hiring them worthwhile, he said.

 

 

Deep Learning Tool Tops Dermatologists in Melanoma Detection — from healthitanalytics.com
A deep learning tool achieved greater accuracy than dermatologists when detecting melanoma in dermoscopic images.

 

 

Apple’s plans to bring AI to your phone — from wired.com by Tom Simonite

Excerpt:

HomeCourt is built on tools announced by Federighi last summer, when he launched Apple’s bid to become a preferred playground for AI-curious developers. Known as Core ML, those tools help developers who’ve trained machine learning algorithms deploy them on Apple’s mobile devices and PCs.

At Apple’s Worldwide Developer Conference on Monday, Federighi revealed the next phase of his plan to enliven the app store with AI. It’s a tool called Create ML that’s something like a set of training wheels for building machine learning models in the first place. In a demo, training an image-recognition algorithm to distinguish different flavors of ice cream was as easy as dragging and dropping a folder containing a few dozen images and waiting a few seconds. In a session for developers, Apple engineers suggested Create ML could teach software to detect whether online comments are happy or angry, or predict the quality of wine from characteristics such as acidity and sugar content. Developers can use Create ML now but can’t ship apps using the technology until Apple’s latest operating systems arrive later this year.
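Conceptually, the drag-and-drop training Create ML demonstrates is just fitting a classifier to labeled examples. The sketch below is a rough Python stand-in, not Apple's API: a nearest-centroid classifier over toy "flavor" features echoing the ice-cream demo. Every name and number here is invented for illustration.

```python
def train(labeled: dict) -> dict:
    # labeled: {label: [feature vectors]} -> one centroid per label.
    return {label: [sum(col) / len(col) for col in zip(*vecs)]
            for label, vecs in labeled.items()}

def classify(model: dict, sample: list) -> str:
    # Assign the label whose centroid is nearest the sample.
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], sample))

# Invented two-dimensional features (say, hue and brightness) per flavor.
model = train({"chocolate": [[0.1, 0.3], [0.2, 0.2]],
               "vanilla":   [[0.9, 0.8], [0.8, 0.9]]})
print(classify(model, [0.15, 0.25]))  # chocolate
```

Create ML does far more (feature extraction via pretrained vision models, for one), but the developer-facing contract is the same: folders of labeled examples in, a deployable model out.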

 

© 2024 | Daniel Christian