An open letter to Microsoft and Google’s Partnership on AI — from wired.com by Gerd Leonhard
In a world where machines may have an IQ of 50,000, what will happen to the values and ethics that underpin privacy and free will?

Excerpt:

This open letter is my modest contribution to the unfolding of this new partnership. Data is the new oil – which now makes your companies the most powerful entities on the globe, way beyond oil companies and banks. The rise of ‘AI everywhere’ is certain to only accelerate this trend. Yet unlike the giants of the fossil-fuel era, there is little oversight on what exactly you can and will do with this new data-oil, and what rules you’ll need to follow once you have built that AI-in-the-sky. There appears to be very little public stewardship, while accepting responsibility for the consequences of your inventions is rather slow in surfacing.

In a world where machines may have an IQ of 50,000 and the Internet of Things may encompass 500 billion devices, what will happen with those important social contracts, values and ethics that underpin crucial issues such as privacy, anonymity and free will?

My book identifies what I call the “Megashifts”. They are changing society at warp speed, and your organisations are in the eye of the storm: digitization, mobilisation and screenification, automation, intelligisation, disintermediation, virtualisation and robotisation, to name the most prominent. Megashifts are not simply trends or paradigm shifts, they are complete game changers transforming multiple domains simultaneously.

If the question is no longer about if technology can do something, but why…who decides this?

Gerd Leonhard

From DSC:
Though this letter was written two years ago, back in October of 2016, the messages, reflections, and questions that Gerd puts on the table are still very much relevant today. The leaders of these powerful companies have enormous power — power to do good, or to do evil. Power to help or power to hurt. Power to be a positive force for societies throughout the globe and to help create dreams, or power to create dystopian societies while developing a future filled with nightmares. The state of the human heart matters enormously here — though many will hate me for saying that. But it's true. At the end of the day, we need to care very much about — and be keenly aware of — the character and values of the leaders of these powerful companies.

Also relevant/see:

Spray-on antennas will revolutionize the Internet of Things — from networkworld.com by Patrick Nelson
Researchers at Drexel University have developed a method to spray on antennas that outperform traditional metal antennas, opening the door to faster and easier IoT deployments.

From DSC:
Again, it's not too hard to imagine that technologies in this arena can be used for good or for ill.

How AI could help solve some of society’s toughest problems — from technologyreview.com by Charlotte Jee
Machine learning and game theory help Carnegie Mellon assistant professor Fei Fang predict attacks and protect people.

Excerpt:

Fei Fang has saved lives. But she isn’t a lifeguard, medical doctor, or superhero. She’s an assistant professor at Carnegie Mellon University, specializing in artificial intelligence for societal challenges.

At MIT Technology Review’s EmTech conference on Wednesday, Fang outlined recent work across academia that applies AI to protect critical national infrastructure, reduce homelessness, and even prevent suicides.

How AI can be a force for good — from science.sciencemag.org by Mariarosaria Taddeo & Luciano Floridi

Excerpts:

Invisibility and Influence
AI supports services, platforms, and devices that are ubiquitous and used on a daily basis. In 2017, the International Federation of Robotics suggested that by 2020, more than 1.7 million new AI-powered robots will be installed in factories worldwide. In the same year, the company Juniper Networks issued a report estimating that, by 2022, 55% of households worldwide will have a voice assistant, like Amazon Alexa.

As it matures and disseminates, AI blends into our lives, experiences, and environments and becomes an invisible facilitator that mediates our interactions in a convenient, barely noticeable way. While creating new opportunities, this invisible integration of AI into our environments poses further ethical issues. Some are domain-dependent. For example, trust and transparency are crucial when embedding AI solutions in homes, schools, or hospitals, whereas equality, fairness, and the protection of creativity and rights of employees are essential in the integration of AI in the workplace. But the integration of AI also poses another fundamental risk: the erosion of human self-determination due to the invisibility and influencing power of AI.

To deal with the risks posed by AI, it is imperative to identify the right set of fundamental ethical principles to inform the design, regulation, and use of AI and leverage it to benefit as well as respect individuals and societies. It is not an easy task, as ethical principles may vary depending on cultural contexts and the domain of analysis. This is a problem that the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems tackles with the aim of advancing public debate on the values and principles that should underpin ethical uses of AI.

Who’s to blame when a machine botches your surgery? — from qz.com by Robert Hart

Excerpt:

That’s all great, but even if an AI is amazing, it will still fail sometimes. When the mistake is caused by a machine or an algorithm instead of a human, who is to blame?

This is not an abstract discussion. Defining both ethical and legal responsibility in the world of medical care is vital for building patients’ trust in the profession and its standards. It’s also essential in determining how to compensate individuals who fall victim to medical errors, and ensuring high-quality care. “Liability is supposed to discourage people from doing things they shouldn’t do,” says Michael Froomkin, a law professor at the University of Miami.

Google Cloud’s new AI chief is on a task force for AI military uses and believes we could monitor ‘pretty much the whole world’ with drones — from businessinsider.in by Greg Sandoval

Excerpt:

“We could afford if we wanted to, and if we needed, to be surveilling pretty much the whole world with autonomous drones of various kinds,” Moore said. “I’m not saying we’d want to do that, but there’s not a technology gap there where I think it’s actually too difficult to do. This is now practical.”

Google’s decision to hire Moore was greeted with displeasure by at least one former Googler who objected to Project Maven.

“It’s worrisome to note after the widespread internal dissent against Maven that Google would hire Andrew Moore,” said one former Google employee. “Googlers want less alignment with the military-industrial complex, not more. This hire is like a punch in the face to the over 4,000 Googlers who signed the Cancel Maven letter.”

Organizations Are Gearing Up for More Ethical and Responsible Use of Artificial Intelligence, Finds Study — from businesswire.com
Ninety-two percent of AI leaders train their technologists in ethics; 74 percent evaluate AI outcomes weekly, says report from SAS, Accenture Applied Intelligence, Intel, and Forbes Insights

Excerpt:

AI oversight is not optional

Despite popular messages suggesting AI operates independently of human intervention, the research shows that AI leaders recognize that oversight is not optional for these technologies. Nearly three-quarters (74 percent) of AI leaders reported careful oversight with at least weekly review or evaluation of outcomes (less successful AI adopters: 33 percent). Additionally, 43 percent of AI leaders shared that their organization has a process for augmenting or overriding results deemed questionable during review (less successful AI adopters: 28 percent).

Do robots have rights? Here’s what 10 people and 1 robot have to say — from createdigital.org.au
When it comes to the future of technology, nothing is straightforward, and that includes the array of ethical issues that engineers encounter through their work with robots and AI.

To higher ed: When the race track is going 180mph, you can’t walk or jog onto the track. [Christian]

From DSC:
When the race track is going 180mph, you can’t walk or jog onto the track.  What do I mean by that? 

Consider this quote from an article that Jeanne Meister wrote at Forbes entitled, “The Future of Work: Three New HR Roles in the Age of Artificial Intelligence”:*

This emphasis on learning new skills in the age of AI is reinforced by the most recent report on the future of work from McKinsey, which suggests that as many as 375 million workers around the world may need to switch occupational categories and learn new skills, because approximately 60% of jobs will have at least one-third of their work activities able to be automated.

Go scan the job openings and you will likely see many that have to do with technology, and increasingly, with emerging technologies such as artificial intelligence, deep learning, machine learning, virtual reality, augmented reality, mixed reality, big data, cloud-based services, robotics, automation, bots, algorithm development, blockchain, and more. 

From Robert Half’s 2019 Technology Salary Guide 

How many of us have those kinds of skills? Did we get that training in the community colleges, colleges, and universities that we went to? Highly unlikely — even if you graduated from one of those institutions only 5-10 years ago. Many of those institutions are moving at the pace of a nice leisurely walk, with some moving at a jog and even fewer sprinting. But all of them are now being asked to enter a race track that's moving at 180mph. Higher ed — and society at large — is not used to moving at this pace.

This is why I think that higher education and its regional accrediting organizations are going to need to up their game hugely — and go through a paradigm shift in the required thinking/programming/curricula/level of responsiveness — or watch while alternatives to traditional institutions of higher education increasingly draw learners away.

This is also why I think we'll see an online-based, next-generation learning platform emerge. It will be much more nimble — able to offer up-to-the-minute, in-demand skills and competencies.

The graphic below is from:
Jobs lost, jobs gained: What the future of work will mean for jobs, skills, and wages

* Three New HR Roles To Create Compelling Employee Experiences
These new HR roles include:

  1. IBM: Vice President, Data, AI & Offering Strategy, HR
  2. Kraft Heinz Senior Vice President Global HR, Performance and IT
  3. SunTrust Senior Vice President Employee Wellbeing & Benefits

What do these three roles have in common? All have been created in the last three years and acknowledge the growing importance of a company’s commitment to create a compelling employee experience by using data, research, and predictive analytics to better serve the needs of employees. In each case, the employee assuming the new role also brought a new set of skills and capabilities into HR. And importantly, the new roles created in HR address a common vision: create a compelling employee experience that mirrors a company’s customer experience.

An excerpt from McKinsey Global Institute | Notes from the Frontier | Modeling the Impact of AI on the World Economy 

Workers.
A widening gap may also unfold at the level of individual workers. Demand for jobs could shift away from repetitive tasks toward those that are socially and cognitively driven and others that involve activities that are hard to automate and require more digital skills. Job profiles characterized by repetitive tasks and activities that require low digital skills may experience the largest decline as a share of total employment, from some 40 percent to near 30 percent by 2030. The largest gain in share may be in nonrepetitive activities and those that require high digital skills, rising from some 40 percent to more than 50 percent. These shifts in employment would have an impact on wages. We simulate that around 13 percent of the total wage bill could shift to categories requiring nonrepetitive and high digital skills, where incomes could rise, while workers in the repetitive and low digital skills categories may potentially experience stagnation or even a cut in their wages. The share of the total wage bill of the latter group could decline from 33 to 20 percent. Direct consequences of this widening gap in employment and wages would be an intensifying war for people, particularly those skilled in developing and utilizing AI tools, and structural excess supply for a still relatively high portion of people lacking the digital and cognitive skills necessary to work with machines.

Google Cloud’s new AI chief is on a task force for AI military uses and believes we could monitor ‘pretty much the whole world’ with drones — from businessinsider.in by Greg Sandoval

  • Andrew Moore, the new chief of Google Cloud AI, co-chairs a task force on AI and national security with deep defense sector ties.
  • Moore leads the task force with Robert Work, the man who reportedly helped to create Project Maven.
  • Moore has given various talks about the role of AI and defense, once noting that it was now possible to deploy drones capable of surveilling “pretty much the whole world.”
  • One former Googler told Business Insider that the hiring of Moore is a “punch in the face” to those employees.

How AI can be a force for good — from science.sciencemag.org

Excerpt:

The AI revolution is equally significant, and humanity must not make the same mistake again. It is imperative to address new questions about the nature of post-AI societies and the values that should underpin the design, regulation, and use of AI in these societies. This is why initiatives like the abovementioned AI4People and IEEE projects, the European Union (EU) strategy for AI, the EU Declaration of Cooperation on Artificial Intelligence, and the Partnership on Artificial Intelligence to Benefit People and Society are so important (see the supplementary materials for suggested further reading). A coordinated effort by civil society, politics, business, and academia will help to identify and pursue the best strategies to make AI a force for good and unlock its potential to foster human flourishing while respecting human dignity.

Ethical regulation of the design and use of AI is a complex but necessary task. The alternative may lead to devaluation of individual rights and social values, rejection of AI-based innovation, and ultimately a missed opportunity to use AI to improve individual wellbeing and social welfare.

Robot wars — from ethicaljournalismnetwork.org by James Ball
How artificial intelligence will define the future of news

Excerpt:

There are two paths ahead in the future of journalism, and both of them are shaped by artificial intelligence.

The first is a future in which newsrooms and their reporters are robust: Thanks to the use of artificial intelligence, high-quality reporting has been enhanced. Not only do AI scripts manage the writing of simple day-to-day articles such as companies’ quarterly earnings updates, they also monitor and track masses of data for outliers, flagging these to human reporters to investigate.

Beyond business journalism, comprehensive sports stats AIs keep key figures in the hands of sports journalists, letting them focus on the games and the stories around them. The automated future has worked.

The alternative is very different. In this world, AI reporters have replaced their human counterparts and left accountability journalism hollowed out. Facing financial pressure, news organizations embraced AI to handle much of their day-to-day reporting, first for their financial and sports sections, then bringing in more advanced scripts capable of reshaping wire copy to suit their outlet’s political agenda. A few banner hires remain, but there is virtually no career path for those who would hope to replace them, and stories that can’t be tackled by AI are generally missed.

Alibaba looks to arm hotels, cities with its AI technology — from zdnet.com by Eileen Yu
Chinese internet giant is touting the use of artificial intelligence technology to arm drivers with real-time data on road conditions as well as robots in the hospitality sector, where they can deliver meals and laundry to guests.

Excerpt:

Alibaba A.I. Labs’ general manager Chen Lijuan said the new robots aimed to “bridge the gap” between guest needs and their expected response time. Describing the robot as the next evolution towards smart hotels, Chen said it tapped AI technology to address pain points in the hospitality sector, such as improving service efficiencies.

Alibaba is hoping the robot can ease hotels’ dependence on human labour by fulfilling a range of tasks, including delivering meals and taking the laundry to guests.

Accenture Introduces Ella and Ethan, AI Bots to Improve a Patient’s Health and Care Using the Accenture Intelligent Patient Platform — from marketwatch.com

Excerpt:

Accenture has enhanced the Accenture Intelligent Patient Platform with the addition of Ella and Ethan, two interactive virtual-assistant bots that use artificial intelligence (AI) to constantly learn and make intelligent recommendations for interactions between life sciences companies, patients, health care providers (HCPs) and caregivers. Designed to help improve a patient’s health and overall experience, the bots are part of Accenture’s Salesforce Fullforce Solutions powered by Salesforce Health Cloud and Einstein AI, as well as Amazon’s Alexa.

German firm’s 7 commandments for ethical AI — from france24.com

Excerpt:

FRANKFURT AM MAIN (AFP) –
German business software giant SAP published Tuesday an ethics code to govern its research into artificial intelligence (AI), aiming to prevent the technology infringing on people’s rights, displacing workers or inheriting biases from its human designers.

The future of drug discovery and AI – the role of man and machine — from techemergence.com by  Ayn de Jesus

Excerpt:

Episode Summary: This week on AI in Industry, we speak with Amir Saffari, Senior Vice President of AI at BenevolentAI, a London-based pharmaceutical company that uses machine learning to find new uses for existing drugs and new treatments for diseases.

In speaking with him, we aim to learn two things:

  • How will machine learning play a role in the phases of drug discovery, from generating hypotheses to clinical trials?
  • In the future, what are the roles of man and machine in drug discovery? What processes will machines automate and potentially do better than humans in this field?

A few other articles caught my eye as well:

  • This little robot swims through pipes and finds out if they’re leaking — from fastcompany.com by Adele Peters
    Lighthouse, U.S. winner of the James Dyson Award, looks like a badminton birdie and detects the suction of water leaving pipes, which is a lot of water that we could put to better use.
  • Samsung’s New York AI center will focus on robotics — from engadget.com by Saqib Shah
    NYU’s AI Now Institute is close by and Samsung is keen for academic input.
    Excerpt:
    Samsung now has an artificial intelligence center in New York City — its third in North America and sixth in total — with an eye on robotics, a first for the company. It opened in Chelsea, Manhattan on Friday, within walking distance of NYU (home to its own AI lab), boosting Samsung’s hopes for an academic collaboration.
  • Business schools bridge the artificial intelligence skills gap — from swisscognitive.ch
    Excerpt:
    Business schools such as Kellogg, Insead and MIT Sloan have introduced courses on AI over the past two years, but Smith is the first to offer a full programme where students delve deep into machine learning.

    “Technologists can tell you all about the technology but usually not what kind of business problems it can solve,” Carlsson says. With business leaders, he adds, it is the other way round — they have plenty of ideas about how to improve their company but little way of knowing what the new technology can achieve. “The foundational skills businesses need to hack the potential of AI is the understanding of what problems the tech is actually good at solving,” he says.

The Mobile AR Leaders of 2018 — from next.reality.news

Excerpt:

This time last year, we were getting our first taste of what mobile app developers could do in augmented reality with Apple’s ARKit, and most people had never heard of Animojis. Google’s AR platform was still Tango. Snapchat had just introduced its World Lens AR experiences. Most mobile AR experiences existing in the wild were marker-based offerings from the likes of Blippar and Zappar, or generic Pokémon GO knock-offs.

In last year’s NR50, published before the introduction of ARKit, only two of the top 10 professionals worked directly with mobile AR, and Apple CEO Tim Cook was ranked number 26, based primarily on his forward-looking statements about AR.

This year, Cook comes in at number one, with five others categorized under mobile AR in the overall top 10 of the NR30.

What a difference a year makes.

In just 12 months, we’ve seen mobile AR grow at a breakneck pace. Since Apple launched its AR toolkit, users have downloaded more than 13 million ARKit apps from the App Store, not including existing apps updated with ARKit capabilities. Apple has already updated its platform and will introduce even more new features to the public with the release of ARKit 2.0 this fall. Last year’s iPhone X also introduced a depth-sensing camera and AR Animojis that captured the imaginations of its users.

The Weather Channel forecasts more augmented reality for its live broadcasts with Unreal Engine — from next.reality.news by Tommy Palladino

Excerpt:

Augmented reality made its live broadcast debut for The Weather Channel in 2015. The technology helps on-air talent at the network to explain the science behind weather phenomena and tell more immersive stories. Powered by Unreal Engine, The Future Group’s Frontier platform will enable The Weather Channel to be able to show even more realistic AR content, such as accurately rendered storms and detailed cityscapes, all in real time.

From DSC:
Imagine this type of thing in online-based learning, MOOCs, and/or even in blended learning environments (i.e., in situations where learning materials are designed/created by teams of specialists). If that were the case, who needs to be trained to create these pieces? Will students be creating these types of pieces in the future? Hmmm….

Winners announced of the 2018 Journalism 360 Challenge — from vrfocus.com
The question of “How might we experiment with immersive storytelling to advance the field of journalism?” looks to be answered by 11 projects.

Excerpt:

The eleven winners of the contest, held by the Google News Initiative, Knight Foundation, and Online News Association, were announced on 9/11/18. The 2018 Journalism 360 Challenge asked people the question “How might we experiment with immersive storytelling to advance the field of journalism?” and generated over 400 responses.

Addendum:

Educause Explores Future of Extended Reality on Campus — from campustechnology.com by Dian Schaffhauser

Among the findings:

  • VR makes people feel like they’re really there. The “intellectual and physiological reactions” to constructs and events in VR are the same — “and sometimes identical” — to a person’s reactions in the real world;
  • 3D technologies facilitate active and experiential learning. AR, for example, lets users interact with an object in ways that aren’t possible in the physical world — such as seeing through surfaces or viewing data about underlying objects. And with 3D printing, learners can create “physical objects that might otherwise exist only in simulations”; and
  • Simulations allow for scaling up of “high-touch, high-cost learning experiences.” Students may be able to go through virtual lab activities, for instance, even when a physical lab isn’t available.

Common challenges included implementation learning curves, instructional design, data storage of 3D images and effective cross-departmental collaboration.

“One significant result from this research is that it shows that these extended reality technologies are applicable across a wide spectrum of academic disciplines,” said Malcolm Brown, director of learning initiatives at Educause, in a statement. “In addition to the scientific disciplines, students in the humanities, for example, can re-construct cities and structures that no longer exist. I think this study will go a long way in encouraging faculty, instructional designers and educational technologists across higher education to further experiment with these technologies to vivify learning experiences in nearly all courses of study.”

Activists urge killer robot ban ‘before it is too late’ — from techxplore.com by Nina Larson

Excerpt:

Countries should quickly agree a treaty banning the use of so-called killer robots “before it is too late”, activists said Monday as talks on the issue resumed at the UN.

They say time is running out before weapons are deployed that use lethal force without a human making the final kill-order and have criticised the UN body hosting the talks—the Convention on Certain Conventional Weapons (CCW)—for moving too slowly.

“Killer robots are no longer the stuff of science fiction,” Rasha Abdul Rahim, Amnesty International’s advisor on artificial intelligence and human rights, said in a statement.

“From artificially intelligent drones to automated guns that can choose their own targets, technological advances in weaponry are far outpacing international law,” she said.

From DSC:
I've often considered how far out in front of the rest of society many technologies are today. It takes society some time to catch up with emerging technologies and to ask whether we should be implementing technology A, B, or C. Just because we can doesn't mean we should. A worn-out statement perhaps, but given the exponential pace of technological change, one that is highly relevant to our world today.

Addendum on 9/8/18:

Smart Machines & Human Expertise: Challenges for Higher Education — from er.educause.edu by Diana Oblinger

Excerpts:

What does this mean for higher education? One answer is that AI, robotics, and analytics become disciplines in themselves. They are emerging as majors, minors, areas of emphasis, certificate programs, and courses in many colleges and universities. But smart machines will catalyze even bigger changes in higher education. Consider the implications in three areas: data; the new division of labor; and ethics.

Colleges and universities are challenged to move beyond the use of technology to deliver education. Higher education leaders must consider how AI, big data, analytics, robotics, and wide-scale collaboration might change the substance of education.

Higher education leaders should ask questions such as the following:

  • What place does data have in our courses?
  • Do students have the appropriate mix of mathematics, statistics, and coding to understand how data is manipulated and how algorithms work?
  • Should students be required to become “data literate” (i.e., able to effectively use and critically evaluate data and its sources)?

Higher education leaders should ask questions such as the following:

  • How might problem-solving and discovery change with AI?
  • How do we optimize the division of labor and best allocate tasks between humans and machines?
  • What role do collaborative platforms and collective intelligence have in how we develop and deploy expertise?

Higher education leaders should ask questions such as the following:

  • Even though something is possible, does that mean it is morally responsible?
  • How do we achieve a balance between technological possibilities and policies that enable—or stifle—their use?
  • An algorithm may represent a “trade secret,” but it might also reinforce dangerous assumptions or result in unconscious bias. What kind of transparency should we strive for in the use of algorithms?

Adobe Announces the 2019 Release of Adobe Captivate, Introducing Virtual Reality for eLearning Design — from theblog.adobe.com

Excerpt:

  • Immersive learning with VR experiences: Design learning scenarios that your learners can experience in Virtual Reality using VR headsets. Import 360° media assets and add hotspots, quizzes and other interactive elements to engage your learners with near real-life scenarios
  • Interactive videos: Liven up demos and training videos by making them interactive with the new Adobe Captivate. Create your own or bring in existing YouTube videos, add questions at specific points and conduct knowledge checks to aid learner remediation
  • Fluid Boxes 2.0: Explore the building blocks of Smart eLearning design with intelligent containers that use white space optimally. Objects placed in Fluid Boxes are aligned automatically, so learners always get a fully responsive experience regardless of their device or browser.
  • 360° learning experiences: Augment the learning landscape with 360° images and videos and convert them into interactive eLearning material with customizable overlay items such as information blurbs, audio content & quizzes.

Blippar unveils indoor visual positioning system to anchor AR — from martechtoday.com by Barry Levine
Employing machine vision to recognize mapped objects, the company says it can determine which way a user is looking and can calculate positioning down to a centimeter.

A Blippar visualization of AR using its new indoor visual positioning system

The Storyteller’s Guide to the Virtual Reality Audience — from medium.com by Katy Newton

Excerpt:

To even scratch the surface of these questions, we need to better understand the audience’s experience in VR — not just their experience of the technology, but the way that they understand story and their role within it.


Hospital introducing HoloLens augmented reality into the operating room — from medgadget.com

Excerpt:

HoloLens technology is being paired with Microsoft’s Surface Hub, a kind of digital whiteboard. The idea is that the surgical team can gather together around a Surface Hub to review patient information, discuss the details of a procedure, and select what information should be readily accessible during surgery. During the procedure, a surgeon wearing a HoloLens would be able to review a CT or MRI scan, access other data in the electronic medical records, and to be able to manipulate these so as to get a clear picture of what is being worked on and what needs to be done.

 

 

Raleigh Fire Department invests in virtual reality to enrich training — from vrfocus.com by Nikholai Koolon
New system allows department personnel to learn new skills through immersive experiences.

Excerpt:

The VR solution lets emergency medical services (EMS) personnel dive into a rich, detailed environment where they can pinpoint portions of the body to dissect, then examine each part in great detail from any angle. The goal is to let users gain experience diagnosing injuries from a variety of vantage points, all while working within a virtual environment capable of displaying countless scenarios.

 

 

For another emerging technology, see:

Someday this tiny spider bot could perform surgery inside your body — from fastcompany.com by Jesus Diaz
The experimental robots could also fix airplane engines and find disaster victims.

Excerpt:

A team of Harvard University researchers recently achieved a major breakthrough in robotics, engineering a tiny spider robot using tech that could one day work inside your body to repair tissues or destroy tumors. Their work could not only change medicine–by eliminating invasive surgeries–but could also have an impact on everything from how industrial machines are maintained to how disaster victims are rescued.

Until now, most advanced, small-scale robots followed a certain model: They tend to be built at the centimeter scale and have only one degree of freedom, which means they can only perform one movement. Not so with this new ‘bot, developed by scientists at Harvard’s Wyss Institute for Biologically Inspired Engineering, the John A. Paulson School of Engineering and Applied Sciences, and Boston University. It’s built at the millimeter scale, and because it’s made of flexible materials–easily moved by pneumatic and hydraulic power–the critter has an unprecedented 18 degrees of freedom.

 


Plus some items from a few weeks ago


 

After almost a decade and billions in outside investment, Magic Leap’s first product is finally on sale for $2,295. Here’s what it’s like. — from

Excerpts (emphasis DSC):

I liked that it gave a new perspective to the video clip I’d watched: It threw the actual game up on the wall alongside the kind of information a basketball fan would want, including 3-D renderings and stats. Today, you might turn to your phone for that information. With Magic Leap, you wouldn’t have to.

Abovitz also said that intelligent assistants will play a big role in Magic Leap’s future. I didn’t get to test one, but Abovitz says he’s working with a team in Los Angeles that’s developing high-definition people that will appear to Magic Leap users and assist with tasks. Think Siri, Alexa or Google Assistant, but instead of speaking to your phone, you’d be speaking to a realistic-looking human through Magic Leap. Or you might be speaking to an avatar of someone real.

“You might need a doctor who can come to you,” Abovitz said. “AI that appears in front of you can give you eye contact and empathy.”

 

And I loved the idea of being able to place a digital TV screen anywhere I wanted.

 

 

Magic Leap One Available For Purchase, Starting At $2,295 — from vrscout.com by Kyle Melnick

Excerpt:

In December of last year, U.S. startup Magic Leap unveiled its long-awaited mixed reality headset, a secretive device five years and $2.44B USD in the making.

This morning that same headset, now referred to as the Magic Leap One Creator Edition, became available for purchase in the U.S. On sale to creators at a hefty starting price of $2,295, the spatial computing device utilizes synthetic lightfields to capture natural lightwaves and superimpose interactive, 3D content over the real world.

 

 

 

Magic Leap One First Hands-On Impressions for HoloLens Developers — from magic-leap.reality.news

Excerpt:

After spending about an hour with the headset running through setup and poking around its UI and a couple of the launch day apps, I thought it would be helpful to share a quick list of some of my first impressions as someone who’s spent a lot of time with a HoloLens over the past couple years and try to start answering many of the burning questions I’ve had about the device.

 

 

World Campus researches effectiveness of VR headsets and video in online classes — from news.psu.edu

Excerpt:

UNIVERSITY PARK, Pa. — Penn State instructional designers are researching whether using virtual reality and 360-degree video can help students in online classes learn more effectively.

Designers worked with professors in the College of Nursing to incorporate 360-degree video into Nursing 352, a class on Advanced Health Assessment. Students in the class, offered online through Penn State World Campus, were offered free VR headsets to use with their smartphones to create a more immersive experience while watching the video, which shows safety and health hazards in a patient’s home.

Bill Egan, the lead designer for the Penn State World Campus RN to BSN nursing program, said students in the class were surveyed as part of a study approved by the Institutional Review Board and overwhelmingly said that they enjoyed the videos and thought they provided educational value. Eighty percent of the students said they would like to see more immersive content such as 360-degree videos in their online courses, he said.

 

 

7 Practical Problems with VR for eLearning — from learnupon.com

Excerpt:

In this post, we run through some practical stumbling blocks that prevent VR training from being feasible for most.

There are quite a number of practical considerations which prevent VR from totally overhauling the corporate training world. Some are obvious, whilst others only become apparent after using the technology a number of times. It’s important to be made aware of these limitations so that a large investment isn’t made in tech that isn’t really practical for corporate training.

 

Augmented reality – the next big thing for HR? — from hrdconnect.com
Augmented reality (AR) could have a huge impact on HR, transforming long-established processes into something engaging and exciting. What will this look like? How can we shape it into our everyday working lives?

Excerpt (emphasis DSC):

AR also has the potential to revolutionise our work lives, changing the way we think about office spaces and equipment forever.

Most of us still commute to an office every day, which can be a time-consuming and stressful experience. AR has the potential to turn any space into your own customisable workspace, complete with digital notes, folders and files – even a digital photo of your loved ones. This would give you access to all the information and tools that you would typically find in an office, but wherever and whenever you need them.

And instead of working on a flat, stationary, two-dimensional screen, your workspace would be a customisable three-dimensional space, where objects and information are manipulated with gestures rather than hardware. All you would need is an AR headset.

AR could also transform the way we advertise brands and share information. Imagine if your organisation had an AR stand at a conference – how engaging would that be for potential customers? How much more interesting and fun would meetings be if we used AR to present information instead of slides on a projector?

AR could transform the on-boarding experience into something fun and interactive – imagine taking an AR tour of your office, where information about key places, company history or your new colleagues pops into view as you go from place to place. 

 

 

RETINA Are Bringing Augmented Reality To Air Traffic Control Towers — from vrfocus.com by Nikholai Koolonavi

Excerpt:

A new project aims to help staff in airport control towers visualize information more easily by leveraging augmented reality (AR) technology. The project, dubbed RETINA, is looking to modernise Europe’s air traffic management for safer, smarter and even smoother air travel.

 

 

 


© 2018 | Daniel Christian