The 11 coolest announcements Google made at its biggest product event of the year — from businessinsider.com by Dave Smith

Excerpt:

On Tuesday (10/9), Google invited journalists to New York City to debut its newest smartphone, the Pixel 3, among several other hardware goodies.

Google made dozens of announcements in its 75-minute event, but a handful of “wow” moments stole the show.

Here are the 11 coolest announcements Google made at its big hardware event…

 

 

 

Google’s Pixel 3 event in 6 minutes — from businessinsider.com

 

 

Google unveiled its new Home Hub but it might be ‘late to the market’ — from barrons.com by Emily Bary

Excerpt:

Alphabet’s Google has sold millions of voice-enabled speakers, but it inched toward the future at a Tuesday launch event when it introduced the Home Hub smart screen.

Google isn’t the first company to roll out a screen-enabled home device with voice-assistant technology — Amazon.com released its Echo Show in July 2017. Meanwhile, Lenovo has gotten good reviews for its Smart Display, and Facebook introduced the Portal on Monday.

For the most part, though, consumers have stuck to voice-only devices, and it will be up to Google and its rivals to convince them that an added screen is worth paying for. They’ll also have to reassure consumers that they can trust big tech to maintain their privacy, an admittedly harder task these days after recent security issues at Google and Facebook.

 

Amazon was right to realize early on that consumers aren’t always comfortable buying items they can’t even see pictures of, and that it’s hard to remember directions you’ve heard but not seen.

 

 

Google has announced its first smart speaker with a screen — from businessinsider.com by Brandt Ranj

  • Google has announced the Google Hub, its first smart home speaker with a screen. It’s available for pre-order at Best Buy, Target, and Walmart for $149. The Hub will be released on October 22.
  • The Hub has a 7″ touchscreen and sits on a base with a built-in speaker and microphones, which you can use to play music, watch videos, get directions, and control smart home accessories with your voice.
  • Its biggest advantage is its ability to hook into Google’s first-party services, like YouTube and Google Maps, which none of its competitors can use.
  • If you’re an Android or Chromecast user, trust Google more than Amazon, or want a smaller smart home speaker with a screen, the Google Hub is now your best bet.

 

 

Google is shutting down Google+ for consumers following security lapse — from theverge.com by Ashley Carman

Excerpt:

Google is going to shut down the consumer version of Google+ over the next 10 months, the company writes in a blog post today. The decision follows the revelation of a previously undisclosed security flaw that exposed users’ profile data that was remedied in March 2018.

 

 

Google shutters Google+ after users’ personal data exposed — from cbsnews.com

 

 

Google+ is shutting down, and the site’s few loyal users are mourning – from cnbc.com by Jillian D’Onfro

  • Even though Google had diverted resources away from Google+, there’s a small loyal base of users devastated to see it go away.
  • One fan said the company is using its recently revealed security breach as an excuse to shutter the site.
  • Google said it will be closing Google+ in the coming months.

 

 

 

10 jobs that are safe in an AI world — from linkedin.com by Kai-Fu Lee

Excerpts:

Teaching
AI will be a great tool for teachers and educational institutions, as it will help educators figure out how to personalize curriculum based on each student’s competence, progress, aptitude, and temperament. However, teaching will still need to be oriented around helping students figure out their interests, teaching students to learn independently, and providing one-on-one mentorship. These are tasks that can only be done by a human teacher. As such, there will still be a great need for human educators in the future.

Criminal defense law
Top lawyers will have nothing to worry about when it comes to job displacement. Reasoning across domains, winning the trust of clients, applying years of experience in the courtroom, and having the ability to persuade a jury are all examples of the cognitive complexities, strategies, and modes of human interaction that are beyond the capabilities of AI. However, a lot of paralegal and preparatory work like document review, analysis, creating contracts, handling small cases, packing cases, and coming up with recommendations can be done much better and more efficiently with AI. The costs of law make it worthwhile for AI companies to go after AI paralegals and AI junior lawyers, but not top lawyers.

 

From DSC:
In terms of teaching, I agree that while #AI will help personalize learning, there will still be a great need for human teachers, professors, and trainers. I also agree w/ my boss (and with some of the author’s viewpoints here, but not all) that many kinds of legal work will still need the human touch & thought processes. I diverge from his thinking in terms of scope — the need for human lawyers will go far beyond just those involved in criminal law.

 

Also see:

15 business applications for artificial intelligence and machine learning — from forbes.com

Excerpt:

Fifteen members of Forbes Technology Council discuss some of the latest applications they’ve found for AI/ML at their companies. Here’s what they had to say…

 

 

 

Microsoft's conference room of the future

 

From DSC:
Microsoft’s conference room of the future “listens” to the conversations of the team and provides a transcript of the meeting. It is also using “artificial intelligence tools to then act on what meeting participants say. If someone says ‘I’ll follow up with you next week,’ then they’ll get a notification in Microsoft Teams, Microsoft’s Slack competitor, to actually act on that promise.”
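To make the “act on what participants say” piece a bit more concrete, here is a minimal sketch (in Python) of how an assistant might scan a meeting transcript for commitment phrases and turn them into follow-up reminders. The phrase list, data structures, and function names are illustrative assumptions of mine, not Microsoft’s implementation.

```python
import re
from dataclasses import dataclass

# Toy stand-in for the "listen and act" behavior described above;
# not Microsoft's actual implementation.

@dataclass
class FollowUpTask:
    speaker: str
    text: str

# A few illustrative patterns that suggest a commitment was made.
COMMITMENT_PATTERNS = [
    re.compile(r"\bI(?:'ll| will) follow up\b", re.IGNORECASE),
    re.compile(r"\bI(?:'ll| will) send (?:you|the)\b", re.IGNORECASE),
    re.compile(r"\blet me get back to you\b", re.IGNORECASE),
]

def extract_follow_ups(transcript):
    """transcript: list of (speaker, utterance) pairs from a meeting."""
    tasks = []
    for speaker, utterance in transcript:
        if any(p.search(utterance) for p in COMMITMENT_PATTERNS):
            tasks.append(FollowUpTask(speaker=speaker, text=utterance))
    return tasks

if __name__ == "__main__":
    demo = [
        ("Alice", "I'll follow up with you next week on the budget."),
        ("Bob", "Sounds good, thanks."),
    ]
    for task in extract_follow_ups(demo):
        # A real system would post this to Teams, a calendar, or an LMS.
        print(f"Reminder for {task.speaker}: {task.text}")
```

A classroom variant of the same loop could match discussion keywords against library databases or course resources rather than a task list.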

This made me wonder about our learning spaces in the future. Will an #AI-based device/cloud-based software app — in real-time — be able to “listen” to the discussion in a classroom and present helpful resources in the smart classroom of the future (e.g., websites, online-based databases, journal articles, and more)?

Will this be a feature of a next generation learning platform as well (i.e., addressing the online-based learning realm)? Will this be a piece of an intelligent tutor or an intelligent system?

Hmmm…time will tell.

 

 


 

Also see this article out at Forbes.com entitled, “There’s Nothing Artificial About How AI Is Changing The Workplace.” 

Here is an excerpt:

The New Meeting Scribe: Artificial Intelligence

As I write this, AI has already begun to make video meetings even better. You no longer have to spend time entering codes or clicking buttons to launch a meeting. Instead, with voice-based AI, video conference users can start, join or end a meeting by simply speaking a command (think about how you interact with Alexa).

Voice-to-text transcription, another artificial intelligence feature offered by Otter Voice Meeting Notes (from AISense, a Zoom partner), Voicefox and others, can take notes during video meetings, leaving you and your team free to concentrate on what’s being said or shown. AI-based voice-to-text transcription can identify each speaker in the meeting and save you time by letting you skim the transcript, search and analyze it for certain meeting segments or words, then jump to those mentions in the script. Over 65% of respondents from the Zoom survey said they think AI will save them at least one hour a week of busy work, with many claiming it will save them one to five hours a week.
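As a rough illustration of the skim-search-jump workflow described above, the sketch below indexes a speaker-labeled, time-stamped transcript and returns every point where a term is mentioned. The segment format and function name are assumptions made for the example, not Otter’s or Zoom’s actual data model.

```python
# Toy model of searching a time-stamped, speaker-labeled transcript;
# the segment format is an assumption for illustration, not a vendor API.
segments = [
    {"start_sec": 65,  "speaker": "Speaker 1", "text": "Let's review the Q3 budget."},
    {"start_sec": 190, "speaker": "Speaker 2", "text": "The budget overruns came from travel."},
    {"start_sec": 412, "speaker": "Speaker 1", "text": "Marketing needs a new budget line."},
]

def find_mentions(segments, term):
    """Return (start_sec, speaker, text) for every segment mentioning `term`."""
    term = term.lower()
    return [(s["start_sec"], s["speaker"], s["text"])
            for s in segments if term in s["text"].lower()]

for start, speaker, text in find_mentions(segments, "budget"):
    minutes, seconds = divmod(start, 60)
    print(f"{minutes:02d}:{seconds:02d}  {speaker}: {text}")
```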

 

 

To higher ed: When the race track is going 180mph, you can’t walk or jog onto the track. [Christian]

From DSC:
When the race track is going 180mph, you can’t walk or jog onto the track.  What do I mean by that? 

Consider this quote from an article that Jeanne Meister wrote out at Forbes entitled, “The Future of Work: Three New HR Roles in the Age of Artificial Intelligence:”*

This emphasis on learning new skills in the age of AI is reinforced by the most recent report on the future of work from McKinsey which suggests that as many as 375 million workers around the world may need to switch occupational categories and learn new skills because approximately 60% of jobs will have at least one-third of their work activities able to be automated.

Go scan the job openings and you will likely see many that have to do with technology, and increasingly, with emerging technologies such as artificial intelligence, deep learning, machine learning, virtual reality, augmented reality, mixed reality, big data, cloud-based services, robotics, automation, bots, algorithm development, blockchain, and more. 

 

From Robert Half’s 2019 Technology Salary Guide 

 

 

How many of us have those kinds of skills? Did we get that training in the community colleges, colleges, and universities that we went to? Highly unlikely — even if you graduated from one of those institutions only 5-10 years ago. And many of those institutions are moving at the pace of a nice leisurely walk, some are moving at a jog, and even fewer are sprinting. But all of them are now being asked to enter a race track that’s moving at 180mph. Higher ed — and society at large — is not used to moving at this pace.

This is why I think that higher education and its regional accrediting organizations are going to either need to up their game hugely — and go through a paradigm shift in the required thinking/programming/curricula/level of responsiveness — or watch while alternatives to institutions of traditional higher education increasingly attract their learners away from them.

This is also why I think we’ll see an online-based, next-generation learning platform emerge. It will be much more nimble — able to offer up-to-the-minute, in-demand skills and competencies.

 

 

The below graphic is from:
Jobs lost, jobs gained: What the future of work will mean for jobs, skills, and wages

 

 

 


 

* Three New HR Roles To Create Compelling Employee Experiences
These new HR roles include:

  1. IBM: Vice President, Data, AI & Offering Strategy, HR
  2. Kraft Heinz: Senior Vice President, Global HR, Performance and IT
  3. SunTrust: Senior Vice President, Employee Wellbeing & Benefits

What do these three roles have in common? All have been created in the last three years and acknowledge the growing importance of a company’s commitment to create a compelling employee experience by using data, research, and predictive analytics to better serve the needs of employees. In each case, the employee assuming the new role also brought a new set of skills and capabilities into HR. And importantly, the new roles created in HR address a common vision: create a compelling employee experience that mirrors a company’s customer experience.

 


 

An excerpt from McKinsey Global Institute | Notes from the Frontier | Modeling the Impact of AI on the World Economy 

Workers.
A widening gap may also unfold at the level of individual workers. Demand for jobs could shift away from repetitive tasks toward those that are socially and cognitively driven and others that involve activities that are hard to automate and require more digital skills. Job profiles characterized by repetitive tasks and activities that require low digital skills may experience the largest decline as a share of total employment, from some 40 percent to near 30 percent by 2030. The largest gain in share may be in nonrepetitive activities and those that require high digital skills, rising from some 40 percent to more than 50 percent. These shifts in employment would have an impact on wages. We simulate that around 13 percent of the total wage bill could shift to categories requiring nonrepetitive and high digital skills, where incomes could rise, while workers in the repetitive and low digital skills categories may potentially experience stagnation or even a cut in their wages. The share of the total wage bill of the latter group could decline from 33 to 20 percent. Direct consequences of this widening gap in employment and wages would be an intensifying war for people, particularly those skilled in developing and utilizing AI tools, and structural excess supply for a still relatively high portion of people lacking the digital and cognitive skills necessary to work with machines.

 


 

 

Prudenti: Law schools facing new demands for innovative education — from libn.com by A. Gail Prudenti

Excerpt (emphasis DSC):

Law schools have always taught the law and the practice thereof, but in the 21st century that is not nearly enough to provide students with the tools to succeed.

Clients, particularly business clients, are not only looking for an “attorney” in the customary sense, but a strategic partner equipped to deal with everything from project management to metrics to process enhancement. Those demands present law schools with both an opportunity for and expectation of innovation in legal education.

At Hofstra Law, we are in the process of establishing a new Center for Applied Legal Technology and Innovation where law students will be taught to use current and emerging technology, and to apply those skills and expertise to provide cutting-edge legal services while taking advantage of interdisciplinary opportunities.

Our goal is to teach law students how to use technology to deliver legal services and to yield graduates who combine exceptional legal acumen with the skill and ability to travel comfortably among myriad disciplines. The lawyers of today—and tomorrow—must be more than just conversant with other professionals. Rather, they need to be able to collaborate with experts in other fields to serve the myriad and intertwined interests of the client.

 

 

Also see:

Workforce of the future: The competing forces shaping 2030 — from pwc.com

Excerpt (emphasis DSC):

We are living through a fundamental transformation in the way we work. Automation and ‘thinking machines’ are replacing human tasks and jobs, and changing the skills that organisations are looking for in their people. These momentous changes raise huge organisational, talent and HR challenges – at a time when business leaders are already wrestling with unprecedented risks, disruption and political and societal upheaval.

The pace of change is accelerating.

 


Graphic by DSC

 

Competition for the right talent is fierce. And ‘talent’ no longer means the same as ten years ago; many of the roles, skills and job titles of tomorrow are unknown to us today. How can organisations prepare for a future that few of us can define? How will your talent needs change? How can you attract, keep and motivate the people you need? And what does all this mean for HR?

This isn’t a time to sit back and wait for events to unfold. To be prepared for the future you have to understand it. In this report we look in detail at how the workplace might be shaped over the coming decade.

 

 

 

From DSC:

Peruse the titles of the articles in this document (which features articles from the last 1-2 years) with an eye on the topics and technologies addressed therein!

 

Artificial Intelligence (AI), virtual reality, augmented reality, robotics, drones, automation, bots, machine learning, NLP/voice recognition and personal assistants, the Internet of Things, facial recognition, data mining, and more. How these technologies roll out — and whether some of them should be rolling out at all — needs to be discussed and dealt with sooner rather than later. This is due to the fact that the pace of change has changed. If you can look at those articles — with an eye on the last 500-1,000 years or so for comparison — and say that we aren’t living in times where the trajectory of technological change is exponential, then either you or I don’t know the meaning of that word.

 

 

 

 

The ABA and law schools need to be much more responsive and innovative — or society will end up suffering the consequences.

Daniel Christian

 

 
For museums, augmented reality is the next frontier — from wired.com by Arielle Pardes

Excerpt:

Mae Jemison, the first woman of color to go into space, stood in the center of the room and prepared to become digital. Around her, 106 cameras captured her image in 3-D, which would later render her as a life-sized hologram when viewed through a HoloLens headset.

Jemison was recording what would become the introduction for a new exhibit at the Intrepid Sea, Air, and Space Museum, which opens tomorrow as part of the Smithsonian’s annual Museum Day. In the exhibit, visitors will wear HoloLens headsets and watch Jemison materialize before their eyes, taking them on a tour of the Space Shuttle Enterprise—and through space history. They’re invited to explore artifacts both physical (like the Enterprise) and digital (like a galaxy of AR stars) while Jemison introduces women throughout history who have made important contributions to space exploration.

Interactive museum exhibits like this are becoming more common as augmented reality tech becomes cheaper, lighter, and easier to create.

 

 

Oculus will livestream its 5th Connect Conference on Oculus Venues — from vrscout.com by Kyle Melnick

Excerpt (emphasis DSC):

Using either an Oculus Go standalone device or a mobile Gear VR headset, users will be able to login to the Oculus Venues app and join other users for an immersive live stream of various developer keynotes and adrenaline-pumping esports competitions.

 

From DSC:
What are the ramifications of this for the future of webinars, teaching and learning, online learning, MOOCs and more…?

 

 

 

10 new AR features in iOS 12 for iPhone & iPad — from mobile-ar.reality.news by Justin Meyers

Excerpt:

Apple’s iOS 12 has finally landed. The big update appeared for everyone on Monday, Sept. 17, and hiding within are some pretty amazing augmented reality upgrades for iPhones, iPads, and iPod touches. We’ve been playing with them ever since the iOS 12 beta launched in June, and here are the things we learned that you’ll want to know about.

For now, here’s everything AR-related that Apple has included in iOS 12. There are some new features aimed to please AR fanatics as well as hook those new to AR into finally getting with the program. But all of the new AR features rely on ARKit 2.0, the latest version of Apple’s augmented reality framework for iOS.

 

 

Berkeley College Faculty Test VR for Learning — from campustechnology.com by Dian Schaffhauser

Excerpt:

In a pilot program at Berkeley College, members of a Virtual Reality Faculty Interest Group tested the use of virtual reality to immerse students in a variety of learning experiences. During winter 2018, seven different instructors in nearly as many disciplines used inexpensive Google Cardboard headsets along with apps on smartphones to virtually place students in North Korea, a taxicab and other environments as part of their classwork.

Participants used free mobile applications such as Within, the New York Times VR, Discovery VR, Jaunt VR and YouTube VR. Their courses included critical writing, international business, business essentials, medical terminology, international banking, public speaking and crisis management.

 

 

 

 

Skype chats are coming to Alexa devices — from engadget.com by Richard Lawlor
Voice-controlled internet calls to or from any device with Amazon’s system in it.

Excerpt:

Aside from all of the Alexa-connected hardware, there’s one more big development coming for Amazon’s technology: integration with Skype. Microsoft and Amazon said that voice and video calls via the service will come to Alexa devices (including Microsoft’s Xbox One) with calls that you can start and control just by voice.

 

 

Amazon Hardware Event 2018
From techcrunch.com

 

Echo HomePod? Amazon wants you to build your own — by Brian Heater
One of the bigger surprises at today’s big Amazon event was something the company didn’t announce. After a couple of years of speculation that the company was working on its own version of the Home…

 

 

The long list of new Alexa devices Amazon announced at its hardware event
Everyone’s favorite trillion-dollar retailer hosted a private event today where they continued to…

 

Amazon introduces APL, a new design language for building Alexa skills for devices with screens
Along with the launch of the all-new Echo Show, the Alexa-powered device with a screen, Amazon also introduced a new design language for developers who want to build voice skills that include multimedia…

Excerpt:

Called Alexa Presentation Language, or APL, developers will be able to build voice-based apps that also include things like images, graphics, slideshows and video, and easily customize them for different device types – including not only the Echo Show, but other Alexa-enabled devices like Fire TV, Fire Tablet, and the small screen of the Alexa alarm clock, the Echo Spot.

 

From DSC:
This is a great move by Amazon — as NLP and our voices become increasingly important in how we “drive” and utilize our computing devices.
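For readers curious what an APL document roughly looks like, here is a small sketch written as a Python dictionary standing in for the JSON a skill would return. The field names follow Amazon’s published examples as I understand them, and the directive name is my best recollection; treat the details as illustrative and check the current APL documentation before building on them.

```python
import json

# A minimal, illustrative APL-style document expressed as a Python dict.
# Field names follow Amazon's published examples; verify against the current
# APL documentation before relying on them.
apl_document = {
    "type": "APL",
    "version": "1.0",
    "mainTemplate": {
        "items": [
            {
                "type": "Container",
                "items": [
                    {"type": "Text", "text": "Today's forecast", "fontSize": "40dp"},
                    {"type": "Image",
                     "source": "https://example.com/forecast.png",  # placeholder URL
                     "width": "50vw", "height": "50vh"},
                ],
            }
        ]
    },
}

# In a skill response, a document like this would typically be attached as a
# RenderDocument directive alongside the spoken output.
render_directive = {
    "type": "Alexa.Presentation.APL.RenderDocument",
    "token": "forecastScreen",
    "document": apl_document,
}

print(json.dumps(render_directive, indent=2))
```

The design point is that the same voice skill can carry a visual layer that adapts to whatever screen, if any, the target device has.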

 

 

Amazon launches an Echo Wall Clock, because Alexa is gonna be everywhere — by Sarah Perez

 

 

Amazon’s new Echo lineup targets Google, Apple and Sonos — from engadget.com by Nicole Lee
Alexa, dominate the industry.

The business plan from here is clear: Companies pay a premium to be activated when users pose questions related to their products and services. “How do you cook an egg?” could pull up a Food Network tutorial; “How far is Morocco?” could enable the Expedia app.
Also see how Alexa might be a key piece of smart classrooms in the future:
 

Why one London university is now offering degrees in VR — from techrepublic.com by Conner Forrest
London College of Communication, UAL is launching a master’s degree in virtual reality for the 2018-19 academic year.

The big takeaways for tech leaders:

  • London College of Communication, UAL is launching a Master of Arts in VR degree for the 2018-19 academic year.
  • The demand for VR professionals is growing in the film and media industries, where these technologies are being used most frequently.

 

From DSC:
Collaboration/videoconferencing tools like Webex, Blackboard Collaborate, Zoom, Google Hangouts, Skype, etc. are being used in a variety of scenarios. That market is well established. It will be interesting to see how VR might play into web-based collaboration, training, and learning.

 

 

San Diego’s Nanome Inc. releases collaborative VR-STEM software for free — from vrscout.com by Becca Loux

Excerpt:

The first collaborative VR molecular modeling application was released August 29 to encourage hands-on chemistry experimentation.

The open-source tool is free for download now on Oculus and Steam.

Nanome Inc., the San Diego-based start-up that built the intuitive application, comprises UCSD professors and researchers, web developers and top-level pharmaceutical executives.

 

“With our tool, anyone can reach out and experience science at the nanoscale as if it is right in front of them. At Nanome, we are bringing the craftsmanship and natural intuition from interacting with these nanoscale structures at room scale to everyone,” McCloskey said.

 


 

 

10 ways VR will change life in the near future — from forbes.com

Excerpts:

  1. Virtual shops
  2. Real estate
  3. Dangerous jobs
  4. Health care industry
  5. Training to create VR content
  6. Education
  7. Emergency response
  8. Distraction simulation
  9. New hire training
  10. Exercise

 

From DSC:
While VR will have its place — especially for times when you need to completely immerse yourself in another environment — I think AR and MR will be much larger and have a greater variety of applications. For example, I could see where instructions on how to put something together in the future could use AR and/or MR to assist with that process. The system could highlight the next part that I’m looking for, then highlight the corresponding spot where it goes — and, if requested, show me a clip of how it fits into what I’m trying to put together.

 

How MR turns firstline workers into change agents — from virtualrealitypop.com by Charlie Fink
Mixed Reality, a new dimension of work — from Microsoft and Harvard Business Review

Excerpts:

“…workers with mixed-reality solutions that enable remote assistance, spatial planning, environmentally contextual data, and much more,” Bardeen told me. With the HoloLens, Firstline Workers conduct their usual, day-to-day activities with the added benefit of a heads-up, hands-free display that gives them immediate access to valuable, contextual information. Microsoft says speech services like Cortana will be critical to control, along with gesture, according to the unique needs of each situation.

 

Expect new worker roles. What constitutes an “information worker” could change because mixed reality will allow everyone to be involved in the collection and use of information. Many more types of information will become available to any worker in a compelling, easy-to-understand way. 

 

 

Let’s Speak: VR language meetups — from account.altvr.com

 

 

 

 

Microsoft’s AI-powered Sketch2Code builds websites and apps from drawings — from alphr.com by Bobby Hellard
Released on GitHub, Microsoft’s AI-powered developer tool can shave hours off web and app building

Excerpt:

Microsoft has developed an AI-powered web design tool capable of turning sketches of websites into functional HTML code.

Called Sketch2Code, Microsoft AI’s senior product manager Tara Shankar Jana explained that the tool aims to “empower every developer and every organisation to do more with AI”. It was born out of the “intrinsic” problem of sending a picture of a wireframe or app designs from whiteboard or paper to a designer to create HTML prototypes.
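Based on Microsoft’s description, Sketch2Code detects hand-drawn UI elements, recognizes handwritten text, and then generates HTML. The sketch below is a deliberately simplified, hypothetical version of only that last step, mapping already-detected elements to HTML strings; the element names and templates are invented for illustration and are not the tool’s actual code.

```python
# Hypothetical final stage of a sketch-to-HTML pipeline: the object-detection
# and handwriting-recognition stages are assumed to have already produced a
# simple list of detected UI elements.
detected_elements = [
    {"kind": "heading", "text": "Contact us"},
    {"kind": "textbox", "text": "Email address"},
    {"kind": "button",  "text": "Submit"},
]

# Invented mapping from element kinds to HTML snippets.
TEMPLATES = {
    "heading": "<h1>{text}</h1>",
    "textbox": '<input type="text" placeholder="{text}">',
    "button":  "<button>{text}</button>",
}

def elements_to_html(elements):
    body = "\n".join(
        "  " + TEMPLATES.get(e["kind"], "<div>{text}</div>").format(text=e["text"])
        for e in elements
    )
    return f"<html>\n<body>\n{body}\n</body>\n</html>"

print(elements_to_html(detected_elements))
```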

 

 

 

 

 

Google’s VR Labs provide STEM students with hands-on experience — from vrscout.com by Kyle Melnick

Excerpt:

STEM students engaged in scientific disciplines, such as biochemistry and neuroscience, are often required by their respective degrees to spend a certain amount of time engaged in an official laboratory environment. Unfortunately, crowded universities and the rise of online education have made it difficult for these innovators-in-training to access properly equipped labs and log their necessary hours.

Cue Google VR Labs, a series of comprehensive virtual lab experiences available on the Google Daydream platform. Developed as part of a partnership between Google and simulation education company Labster, the in-depth program boasts 30 interactive lab experiences in which biology students can engage in a series of hands-on scientific activities in a realistic environment.

These actions can include everything from the use of practical tools, such as DNA sequencers and microscopes, to reality-bending experiences only capable in a virtual environment, like traveling to the surface of the newly discovered Astakos IV exoplanet or examining and altering DNA on a molecular level.

 


 

Also see:

 

 

 

Adobe Announces the 2019 Release of Adobe Captivate, Introducing Virtual Reality for eLearning Design — from theblog.adobe.com

Excerpt:

  • Immersive learning with VR experiences: Design learning scenarios that your learners can experience in Virtual Reality using VR headsets. Import 360° media assets and add hotspots, quizzes and other interactive elements to engage your learners with near real-life scenarios
  • Interactive videos: Liven up demos and training videos by making them interactive with the new Adobe Captivate. Create your own or bring in existing YouTube videos, add questions at specific points and conduct knowledge checks to aid learner remediation
  • Fluid Boxes 2.0: Explore the building blocks of Smart eLearning design with intelligent containers that use white space optimally. Objects placed in Fluid Boxes get aligned automatically so that learners always get a fully responsive experience regardless of their device or browser.
  • 360° learning experiences: Augment the learning landscape with 360° images and videos and convert them into interactive eLearning material with customizable overlay items such as information blurbs, audio content & quizzes.

 

 

Blippar unveils indoor visual positioning system to anchor AR — from martechtoday.com by Barry Levine
Employing machine vision to recognize mapped objects, the company says it can determine which way a user is looking and can calculate positioning down to a centimeter.

A Blippar visualization of AR using its new indoor visual positioning system

 

The Storyteller’s Guide to the Virtual Reality Audience — from medium.com by Katy Newton

Excerpt:

To even scratch the surface of these questions, we need to better understand the audience’s experience in VR — not just their experience of the technology, but the way that they understand story and their role within it.

 

 

Hospital introducing HoloLens augmented reality into the operating room — from medgadget.com

Excerpt:

HoloLens technology is being paired with Microsoft’s Surface Hub, a kind of digital whiteboard. The idea is that the surgical team can gather together around a Surface Hub to review patient information, discuss the details of a procedure, and select what information should be readily accessible during surgery. During the procedure, a surgeon wearing a HoloLens would be able to review a CT or MRI scan, access other data in the electronic medical records, and to be able to manipulate these so as to get a clear picture of what is being worked on and what needs to be done.

 

 

Raleigh Fire Department invests in virtual reality to enrich training — from vrfocus.com by Nikholai Koolon
New system allows department personnel to learn new skills through immersive experiences.

Excerpt:

The VR solution allows emergency medical services (EMS) personnel to dive into a rich and detailed environment in which they can pinpoint portions of the body to dissect. This then allows them to see each part of the body in great detail and to view it from any angle. The goal is to allow users to gain the experience to diagnose injuries from a variety of vantage points, all while working within a virtual environment capable of displaying countless scenarios.

 

 

For another emerging technology, see:

Someday this tiny spider bot could perform surgery inside your body — from fastcompany.com by Jesus Diaz
The experimental robots could also fix airplane engines and find disaster victims.

Excerpt:

A team of Harvard University researchers recently achieved a major breakthrough in robotics, engineering a tiny spider robot using tech that could one day work inside your body to repair tissues or destroy tumors. Their work could not only change medicine–by eliminating invasive surgeries–but could also have an impact on everything from how industrial machines are maintained to how disaster victims are rescued.

Until now, most advanced, small-scale robots followed a certain model: They tend to be built at the centimeter scale and have only one degree of freedom, which means they can only perform one movement. Not so with this new ‘bot, developed by scientists at Harvard’s Wyss Institute for Biologically Inspired Engineering, the John A. Paulson School of Engineering and Applied Sciences, and Boston University. It’s built at the millimeter scale, and because it’s made of flexible materials–easily moved by pneumatic and hydraulic power–the critter has an unprecedented 18 degrees of freedom.

 


Plus some items from a few weeks ago


 

After almost a decade and billions in outside investment, Magic Leap’s first product is finally on sale for $2,295. Here’s what it’s like. — from

Excerpts (emphasis DSC):

I liked that it gave a new perspective to the video clip I’d watched: It threw the actual game up on the wall alongside the kind of information a basketball fan would want, including 3-D renderings and stats. Today, you might turn to your phone for that information. With Magic Leap, you wouldn’t have to.

Abovitz also said that intelligent assistants will play a big role in Magic Leap’s future. I didn’t get to test one, but Abovitz says he’s working with a team in Los Angeles that’s developing high-definition people that will appear to Magic Leap users and assist with tasks. Think Siri, Alexa or Google Assistant, but instead of speaking to your phone, you’d be speaking to a realistic-looking human through Magic Leap. Or you might be speaking to an avatar of someone real.

“You might need a doctor who can come to you,” Abovitz said. “AI that appears in front of you can give you eye contact and empathy.”

 

And I loved the idea of being able to place a digital TV screen anywhere I wanted.

 

 

Magic Leap One Available For Purchase, Starting At $2,295 — from vrscout.com by Kyle Melnick

Excerpt:

In December of last year, U.S. startup Magic Leap unveiled its long-awaited mixed reality headset, a secretive device five years and $2.44B USD in the making.

This morning that same headset, now referred to as the Magic Leap One Creator Edition, became available for purchase in the U.S. On sale to creators at a hefty starting price of $2,275, the computer spatial device utilizes synthetic lightfields to capture natural lightwaves and superimpose interactive, 3D content over the real-world.

 

 

 

Magic Leap One First Hands-On Impressions for HoloLens Developers — from magic-leap.reality.news

Excerpt:

After spending about an hour with the headset running through set up and poking around its UI and a couple of the launch day apps, I thought it would be helpful to share a quick list of some of my first impressions as someone who’s spent a lot of time with a HoloLens over the past couple years and try to start answering many of the burning questions I’ve had about the device.

 

 

World Campus researches effectiveness of VR headsets and video in online classes — from news.psu.edu

Excerpt:

UNIVERSITY PARK, Pa. — Penn State instructional designers are researching whether using virtual reality and 360-degree video can help students in online classes learn more effectively.

Designers worked with professors in the College of Nursing to incorporate 360-degree video into Nursing 352, a class on Advanced Health Assessment. Students in the class, offered online through Penn State World Campus, were offered free VR headsets to use with their smartphones to create a more immersive experience while watching the video, which shows safety and health hazards in a patient’s home.

Bill Egan, the lead designer for the Penn State World Campus RN to BSN nursing program, said students in the class were surveyed as part of a study approved by the Institutional Review Board and overwhelmingly said that they enjoyed the videos and thought they provided educational value. Eighty percent of the students said they would like to see more immersive content such as 360-degree videos in their online courses, he said.

 

 

7 Practical Problems with VR for eLearning — from learnupon.com

Excerpt:

In this post, we run through some practical stumbling blocks that prevent VR training from being feasible for most.

There are quite a number of practical considerations which prevent VR from totally overhauling the corporate training world. Some are obvious, whilst others only become apparent after using the technology a number of times. It’s important to be made aware of these limitations so that a large investment isn’t made in tech that isn’t really practical for corporate training.

 

Augmented reality – the next big thing for HR? — from hrdconnect.com
Augmented reality (AR) could have a huge impact on HR, transforming long-established processes into something engaging and exciting. What will this look like? How can we shape this into our everyday working lives?

Excerpt (emphasis DSC):

AR also has the potential to revolutionise our work lives, changing the way we think about office spaces and equipment forever.

Most of us still commute to an office every day, which can be a time-consuming and stressful experience. AR has the potential to turn any space into your own customisable workspace, complete with digital notes, folders and files – even a digital photo of your loved ones. This would give you access to all the information and tools that you would typically find in an office, but wherever and whenever you need them.

And instead of working on a flat, stationary, two-dimensional screen, your workspace would be a customisable three-dimensional space, where objects and information are manipulated with gestures rather than hardware. All you would need is an AR headset.

AR could also transform the way we advertise brands and share information. Imagine if your organisation had an AR stand at a conference – how engaging would that be for potential customers? How much more interesting and fun would meetings be if we used AR to present information instead of slides on a projector?

AR could transform the on-boarding experience into something fun and interactive – imagine taking an AR tour of your office, where information about key places, company history or your new colleagues pops into view as you go from place to place. 

 

 

RETINA Are Bringing Augmented Reality To Air Traffic Control Towers — from vrfocus.com by Nikholai Koolonavi

Excerpt:

A new project is aiming to make it easier for staff in airport control towers to visualize information to help make their job easier by leveraging augmented reality (AR) technology. The project, dubbed RETINA, is looking to modernise Europe’s air traffic management for safer, smarter and even smoother air travel.

 

 

 

It’s time to address artificial intelligence’s ethical problems — from wired.co.uk by Abigail Beall
AI is already helping us diagnose cancer and understand climate change, but regulation and oversight are needed to stop the new technology being abused

Excerpt:

The potential for AI to do good is immense, says Taddeo. Technology using artificial intelligence will have the capability to tackle issues “from environmental disasters to financial crises, from crime, terrorism and war, to famine, poverty, ignorance, inequality, and appalling living standards,” she says.

Yet AI is not without its problems. In order to ensure it can do good, we first have to understand the risks.

The potential problems that come with artificial intelligence include a lack of transparency about what goes into the algorithms. For example, an autonomous vehicle developed by researchers at the chip maker Nvidia went on the roads in 2016, without anyone knowing how it made its driving decisions.

 

 

‘The Beginning of a Wave’: A.I. Tiptoes Into the Workplace — from nytimes.com by Steve Lohr

Excerpt:

There is no shortage of predictions about how artificial intelligence is going to reshape where, how and if people work in the future.

But the grand work-changing projects of A.I., like self-driving cars and humanoid robots, are not yet commercial products. A more humble version of the technology, instead, is making its presence felt in a less glamorous place: the back office.

New software is automating mundane office tasks in operations like accounting, billing, payments and customer service. The programs can scan documents, enter numbers into spreadsheets, check the accuracy of customer records and make payments with a few automated computer keystrokes.

The technology is still in its infancy, but it will get better, learning as it goes. So far, often in pilot projects focused on menial tasks, artificial intelligence is freeing workers from drudgery far more often than it is eliminating jobs.
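As a trivial illustration of the “scan documents, enter numbers, check accuracy” pattern the article describes, here is a small sketch that reads invoice lines from a CSV file, re-checks the arithmetic, and writes a reviewed copy into a spreadsheet. The file name and column layout are invented for the example.

```python
import csv
from openpyxl import Workbook

# Toy back-office automation: read invoice lines, re-check the math, and
# write a reviewed copy into a spreadsheet. Filenames and columns are invented.
wb = Workbook()
ws = wb.active
ws.append(["invoice_id", "quantity", "unit_price", "reported_total", "check"])

with open("invoices.csv", newline="") as f:
    for row in csv.DictReader(f):
        expected = float(row["quantity"]) * float(row["unit_price"])
        ok = abs(expected - float(row["reported_total"])) < 0.01
        ws.append([row["invoice_id"], row["quantity"], row["unit_price"],
                   row["reported_total"], "OK" if ok else "MISMATCH"])

wb.save("invoices_reviewed.xlsx")
```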

 

 

AI for Virtual Medical Assistants – 4 Current Applications — from techemergence.com by Kumba Sennaar

Excerpt:

In an effort to reduce the administrative burden of medical transcription and clinical documentation, researchers are developing AI-driven virtual assistants for the healthcare industry.

This article will set out to determine the answers to the following questions:

  • What types of AI applications are emerging to improve management of administrative tasks, such as logging medical information and appointment notes, in the medical environment?
  • How is the healthcare market implementing these AI applications?

 

Amazon’s Facial Recognition Wrongly Identifies 28 Lawmakers, A.C.L.U. Says — from nytimes.com by Natasha Singer

Excerpt:

In the test, the Amazon technology incorrectly matched 28 members of Congress with people who had been arrested, amounting to a 5 percent error rate among legislators.

The test disproportionally misidentified African-American and Latino members of Congress as the people in mug shots.

“This test confirms that facial recognition is flawed, biased and dangerous,” said Jacob Snow, a technology and civil liberties lawyer with the A.C.L.U. of Northern California.

On Thursday afternoon, three of the misidentified legislators — Senator Edward J. Markey of Massachusetts, Representative Luis V. Gutiérrez of Illinois and Representative Mark DeSaulnier of California, all Democrats — followed up with a letter to Jeff Bezos, the chief executive of Amazon, saying there are “serious questions regarding whether Amazon should be selling its technology to law enforcement at this time.”

 

Back from January:

 

 

 

Campus Technology recently announced the recipients of the 2018 Campus Technology Impact Awards.

 


 

Categories include:

  • Teaching and Learning
  • Education Futurists
  • Student Systems & Services
  • Administration
  • IT Infrastructure & Systems

 

From DSC:
Having served as one of the judges for these competitions during the last several years, I really appreciate the level of innovation that’s been displayed by many of the submissions and the individuals/institutions behind them. 

 

 
© 2025 | Daniel Christian