What a future, powerful, global learning platform will look & act like [Christian]


Learning from the Living [Class] Room:
A vision for a global, powerful, next generation learning platform

By Daniel Christian

NOTE: Having recently lost my Senior Instructional Designer position due to a staff reduction program, I am looking to help build such a platform as this. So if you are working on such a platform or know of someone who is, please let me know: danielchristian55@gmail.com.

I want to help people reinvent themselves quickly, efficiently, and cost-effectively — while providing more choice, more control to lifelong learners. This will become critically important as artificial intelligence, robotics, algorithms, and automation continue to impact the workplace.


 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

What does the vision entail?

  • A new, global, collaborative learning platform that offers more choice, more control to learners of all ages – 24×7 – and could become the organization that futurist Thomas Frey discusses here with Business Insider:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.

  • A learner-centered platform that is enabled by – and reliant upon – human beings but is backed up by a powerful suite of technologies that work together in order to help people reinvent themselves quickly, conveniently, and extremely cost-effectively
  • An AI-backed system that analyzes employment trends and opportunities and highlights the courses and “streams of content” that will help someone obtain the most in-demand skills
  • A system that tracks learning and, via Blockchain-based technologies, feeds all completed learning modules/courses into learners’ web-based learner profiles
  • A learning platform that provides customized, personalized recommendation lists – based upon the learner’s goals
  • A platform that delivers customized, personalized learning within a self-directed course (meant for those content creators who want to deliver more sophisticated courses/modules while moving people through the relevant Zones of Proximal Development)
  • Notifications and/or inspirational quotes will be available upon request to help provide motivation, encouragement, and accountability – helping learners establish habits of continual, lifelong learning
  • (Potentially) An online-based marketplace, matching learners with teachers, professors, and other such Subject Matter Experts (SMEs)
  • (Potentially) Direct access to popular job search sites
  • (Potentially) Direct access to resources that describe what other companies do/provide and descriptions of any particular company’s culture (as described by current and former employees and freelancers)
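The Blockchain-based tracking idea above can be sketched in miniature: each completed module becomes a record that stores a hash of the record before it, so any later tampering with a learner's profile is detectable. This is a hypothetical illustration only (the `LearnerChain` class, field names, and learner IDs are invented for the sketch), not a description of any existing platform:

```python
import hashlib
import json
import time

class LearnerChain:
    """Tamper-evident log of completed learning modules (hypothetical sketch)."""

    def __init__(self):
        self.records = []  # each record links to the hash of the previous one

    def _hash(self, record):
        # Stable serialization so the same record always yields the same hash
        return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

    def add_completion(self, learner_id, module, timestamp=None):
        prev_hash = self._hash(self.records[-1]) if self.records else "0" * 64
        record = {
            "learner_id": learner_id,
            "module": module,
            "timestamp": timestamp if timestamp is not None else time.time(),
            "prev_hash": prev_hash,
        }
        self.records.append(record)
        return record

    def verify(self):
        # Recompute each link; any edited record breaks every link after it
        for prev, curr in zip(self.records, self.records[1:]):
            if curr["prev_hash"] != self._hash(prev):
                return False
        return True

chain = LearnerChain()
chain.add_completion("learner-42", "Intro to Data Science", timestamp=1500000000)
chain.add_completion("learner-42", "Machine Learning Basics", timestamp=1500100000)
print(chain.verify())  # True
chain.records[0]["module"] = "Forged Course"
print(chain.verify())  # False
```

A real deployment would anchor these hashes on a shared ledger so that the web-based learner profile can be verified by third parties (employers, credentialing bodies) rather than trusted on faith.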

Further details:
While basic courses will be accessible via mobile devices, the optimal learning experience will leverage two or more displays/devices. Smaller devices (smartphones, laptops, and/or desktop workstations) will be used to communicate synchronously or asynchronously with other learners, while the larger displays will deliver an excellent learning environment for times when there is:

  • A Subject Matter Expert (SME) giving a talk or making a presentation on any given topic
  • A need to display multiple things going on at once, such as:
    • The SME(s)
    • An application or multiple applications that the SME(s) are using
    • Content/resources that learners are submitting in real time (think Bluescape, T1V, Prysm, or other tools)
    • The ability to annotate on top of the application(s) and point to things within the app(s)
    • Media being used to support the presentation, such as pictures, graphics, graphs, videos, simulations, animations, audio, links to other resources, GPS coordinates for an app such as Google Earth, and more
    • Other attendees (think Google Hangouts, Skype, Polycom, or other videoconferencing tools)
  • An (optional) representation of the Personal Assistant (such as today’s Alexa, Siri, M, Google Assistant, etc.) that’s being employed via the use of Artificial Intelligence (AI)

This new learning platform will also feature:

  • Voice-based commands to drive the system (via Natural Language Processing (NLP))
  • Language translation (using techs similar to what’s being used in Translate One2One, an earpiece powered by IBM Watson)
  • Speech-to-text capabilities for use with chatbots, messaging, and inserting discussion board postings
  • Text-to-speech capabilities, both as an assistive technology and so that anyone can stay mobile while listening to what’s been typed
  • Chatbots
    • For learning how to use the system
    • For asking questions of – and addressing any issues with – the organization owning the system (credentials, payments, obtaining technical support, etc.)
    • For asking questions within a course
  • As many profiles as needed per household
  • (Optional) Machine-to-machine-based communications to automatically launch the correct profile when the system is initiated (from one’s smartphone, laptop, workstation, and/or tablet to a receiver for the system)
  • (Optional) Voice recognition to efficiently launch the desired profile
  • (Optional) Facial recognition to efficiently launch the desired profile
  • (Optional) The ability, upon system launch, to return immediately to where the learner previously left off
  • The capability of the webcam to recognize objects and bring up relevant resources for those objects
  • A built-in RSS feed aggregator – or a similar technology – to enable learners to tap into the relevant “streams of content” that are constantly flowing by them
  • Social media dashboards/portals – providing quick access to multiple sources of content and whereby learners can contribute their own “streams of content”
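The built-in RSS aggregator bullet above can be sketched minimally: parse RSS 2.0 feed documents and keep only the items that match a learner's topic keywords. This sketch uses only Python's standard library; a real aggregator would also fetch feeds over HTTP, handle Atom, and deduplicate items. The feed content and keywords below are invented placeholders:

```python
# Minimal "streams of content" filter: surface only the RSS items that
# mention any of a learner's topic keywords in the title or description.
import xml.etree.ElementTree as ET

def relevant_items(feed_xml_docs, keywords):
    """Return titles of items whose title or description mentions any keyword."""
    matches = []
    for doc in feed_xml_docs:
        root = ET.fromstring(doc)
        for item in root.iter("item"):
            title = item.findtext("title", default="")
            desc = item.findtext("description", default="")
            text = (title + " " + desc).lower()
            if any(kw.lower() in text for kw in keywords):
                matches.append(title)
    return matches

feed = """<rss version="2.0"><channel><title>EdTech</title>
  <item><title>Intro to Machine Learning</title><description>ML basics</description></item>
  <item><title>Campus news</title><description>sports results</description></item>
</channel></rss>"""

print(relevant_items([feed], ["machine learning"]))  # ['Intro to Machine Learning']
```

Paired with the AI-backed analysis of employment trends described earlier, the keyword list itself could be generated from a learner's goals rather than typed by hand.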

In the future, new forms of Human Computer Interaction (HCI) such as Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) will be integrated into this new learning environment – providing entirely new means of collaborating with one another.

Likely players:

  • Amazon – personal assistance via Alexa
  • Apple – personal assistance via Siri
  • Google – personal assistance via Google Assistant; language translation
  • Facebook — personal assistance via M
  • Microsoft – personal assistance via Cortana; language translation
  • IBM Watson – cognitive computing; language translation
  • Polycom – videoconferencing
  • Blackboard – videoconferencing, application sharing, chat, interactive whiteboard
  • T1V, Prysm, and/or Bluescape – submitting content to a digital canvas/workspace
  • Samsung, Sharp, LG, and others – for large displays with integrated microphones, speakers, webcams, etc.
  • Feedly – RSS aggregator
  • _________ – for providing backchannels
  • _________ – for tools to create videocasts and interactive videos
  • _________ – for blogs, wikis, podcasts, journals
  • _________ – for quizzes/assessments
  • _________ – for discussion boards/forums
  • _________ – for creating AR, MR, and/or VR-based content

An Artificial Intelligence Developed Its Own Non-Human Language — from theatlantic.com by Adrienne LaFrance
When Facebook designed chatbots to negotiate with one another, the bots made up their own way of communicating.

Excerpt:

In the report, researchers at the Facebook Artificial Intelligence Research lab describe using machine learning to train their “dialog agents” to negotiate. (And it turns out bots are actually quite good at dealmaking.) At one point, the researchers write, they had to tweak one of their models because otherwise the bot-to-bot conversation “led to divergence from human language as the agents developed their own language for negotiating.” They had to use what’s called a fixed supervised model instead.

In other words, the model that allowed two bots to have a conversation—and use machine learning to constantly iterate strategies for that conversation along the way—led to those bots communicating in their own non-human language. If this doesn’t fill you with a sense of wonder and awe about the future of machines and humanity then, I don’t know, go watch Blade Runner or something.

Oculus Education Pilot Kicks Off in 90 California Libraries — from oculus.com

Excerpt:

Books, like VR, open the door to new possibilities and let us experience worlds that would otherwise be beyond reach. Today, we’re excited to bring the two together through a new partnership with the California State Library. This pilot program will place 100 Rifts and Oculus Ready PCs in 90 libraries throughout the state, letting even more people step inside VR and see themselves as part of the revolution.

“It’s pretty cool to imagine how many people will try VR for the very first time—and have that ‘wow’ moment—in their local libraries,” says Oculus Education Program Manager Cindy Ball. “We hope early access will cause many people to feel excited and empowered to move beyond just experiencing VR and open their minds to the possibility of one day joining the industry.”

Also see:

Oculus Brings Rift to 90 Libraries in California for Public Access VR — from roadtovr.com by Dominic Brennan

Excerpt:

Oculus has announced a pilot program to place 100 Rifts and Oculus Ready PCs in 90 libraries throughout the state of California, from the Oregon border down to Mexico. Detailed on the Oculus Blog, the new partnership with the California State Library hopes to highlight the educational potential of VR, as well as provide easy access to VR hardware within the heart of local communities.

“Public libraries provide safe, supportive environments that are available and welcoming to everyone,” says Oculus Education Program Manager Cindy Ball. “They help level the playing field by providing educational opportunities and access to technology that may not be readily available in the community households. Libraries share the love—at scale.”

2017 Internet Trends Report — from kpcb.com by Mary Meeker

Mary Meeker’s 2017 internet trends report: All the slides, plus analysis — from recode.net by Rani Molla
The most anticipated slide deck of the year is here.

Excerpt:

Here are some of our takeaways:

  • Global smartphone growth is slowing: Smartphone shipments grew 3 percent year over year last year, versus 10 percent the year before. This is in addition to continued slowing internet growth, which Meeker discussed last year.
  • Voice is beginning to replace typing in online queries. Twenty percent of mobile queries were made via voice in 2016, while accuracy is now about 95 percent.
  • In 10 years, Netflix went from 0 to more than 30 percent of home entertainment revenue in the U.S. This is happening while TV viewership continues to decline.
  • China remains a fascinating market, with huge growth in mobile services and payments and services like on-demand bike sharing. (More here: The highlights of Meeker’s China slides.)

Read Mary Meeker’s essential 2017 Internet Trends report — from techcrunch.com by Josh Constine

Excerpt:

This is the best way to get up to speed on everything going on in tech. Kleiner Perkins venture partner Mary Meeker’s annual Internet Trends report is essentially the state of the union for the technology industry. The widely anticipated slide deck compiles the most informative research on what’s getting funded, how Internet adoption is progressing, which interfaces are resonating, and what will be big next.

You can check out the 2017 report embedded below, and here’s last year’s report for reference.

Complete Guide to Virtual Reality Careers — from vudream.com by Mark Metry

Excerpt:

So you want to jump in the illustrious intricate pool of Virtual Reality?

Come on in my friend. The water is warm with confusion and camaraderie. To be honest, few people have any idea what’s going on in the industry.

VR is a brand new industry, hardly anyone has experience.

That’s a good thing for you.

Marxent Labs reports on the core virtual reality jobs:

  • UX/UI Designers: UX/UI Designers create roadmaps demonstrating how the app should flow and design the look and feel of the app, in order to ensure user-friendly experiences.
  • Unity Developers: Specializing in Unity 3D software, Unity Developers create the foundation of the experience.
  • 3D Modelers: 3D artists render lifelike digital imagery.
  • Animators: Animators bring the 3D models to life. Many 3D modelers are cross-trained in animation, which is a highly recommended combination for a 3D candidate to possess.
  • Project Manager: The Project Manager is responsible for communicating deadlines, budgets, requirements, roadblocks, and more between the client and the internal team.
  • Videographer: Each project is captured and edited into clips to make showcase videos for marketing and entertainment.

Virtual Reality (VR) jobs jump in the job market — from forbes.com by Karsten Strauss

Excerpt:

One of the more vibrant, up-and-coming sectors of the tech industry these days is virtual reality. From the added dimension it brings to gaming and media consumption to the level of immersion the technology can bring to marketing, VR is expected to see a bump in the near future.

And major players have not been blind to that potential. Most famously, Facebook’s Mark Zuckerberg laid down a $2 billion bet on the technology in the spring of 2014 when his company acquired virtual reality firm Oculus VR. That investment put a stamp of confidence on the space and it’s grown ever since.

So it makes sense, then, that tech-facing companies are scanning for developers and coders who can help them build out their VR capabilities. Though still early, some in the job-search industry are noticing a trend in the hiring market.

Five things to know about Facebook’s huge augmented reality fantasy — from gizmodo.com by Michael Nunez

Excerpt:

One example of how this might work is at a restaurant. Your friend will be able to leave an augmented reality sticky note on the menu, letting you know which menu item is the best or which one’s the worst when you hold your camera up to it.

Another example is if you’re at a celebration, like New Year’s Eve or a birthday party. Facebook could use an augmented reality filter to fill the scene with confetti, or morph the bar into an aquarium or another themed setting. The basic examples are similar to Snapchat’s geo-filters, but the more sophisticated uses go further, because the platform will actually let you leave digital objects behind for your friends to discover. Very cool!

 

“We’re going to make the camera the first mainstream AR platform,” said Zuckerberg.

Here’s Everything Facebook Announced at F8, From VR to Bots — from wired.com

Excerpt:

On Tuesday, Facebook kicked off its annual F8 developer conference with a keynote address. CEO Mark Zuckerberg and others on his executive team made a bunch of announcements aimed at developers, but the implications for Facebook’s users were pretty clear. The apps that billions of us use daily—Facebook, Messenger, WhatsApp, Instagram—are going to be getting new camera tricks, new augmented reality capabilities, and more bots. So many bots!

 

Facebook’s bold and bizarre VR hangout app is now available for the Oculus Rift — from theverge.com by Nick Statt

Excerpt:

Facebook’s most fascinating virtual reality experiment, a VR hangout session where you can interact with friends as if you were sitting next to one another, is now ready for the public. The company is calling the product Facebook Spaces, and it’s being released today in beta form for the Oculus Rift.

From DSC:

Is this a piece of the future of distance education / online learning-based classrooms?

Facebook Launches Local ‘Developer Circles’ To Help Entrepreneurs Collaborate, Build Skills — from forbes.com by Kathleen  Chaykowski

Excerpt:

In 2014, Facebook launched its FbStart program, which has helped several thousand early stage apps build and grow their apps through a set of free tools and mentorship meetings. On Tuesday, Facebook unveiled a new program to reach a broader range of developers, as well as students interested in technology.

The program, called “Developer Circles,” is intended to bring developers in local communities together offline as well as online in Facebook groups to encourage the sharing of technical know-how, discuss ideas and build new projects. The program is also designed to serve students who may not yet be working on an app, but who are interested in building skills to work in computer science.

Facebook launches augmented reality Camera Effects developer platform — from techcrunch.com by Josh Constine

Excerpt:

Facebook will rely on an army of outside developers to contribute augmented reality image filters and interactive experiences to its new Camera Effects platform. After today’s Facebook F8 conference, the first effects will become available inside Facebook’s Camera feature on smartphones, but the Camera Effects platform is designed to eventually be compatible with future augmented reality hardware, such as eyeglasses.

While critics thought Facebook was just mindlessly copying Snapchat with its recent Stories and Camera features in Facebook, Messenger, Instagram and WhatsApp, Mark Zuckerberg tells TechCrunch his company was just laying the groundwork for today’s Camera Effects platform launch.

Mark Zuckerberg Sees Augmented Reality Ecosystem in Facebook — from nytimes.com by Mike Isaac

Excerpt:

On Tuesday, Mr. Zuckerberg introduced what he positioned as the first mainstream augmented reality platform, a way for people to view and digitally manipulate the physical world around them through the lens of their smartphone cameras.

Facebook Launches Social VR App ‘Facebook Spaces’ in Beta for Rift — from virtualrealitypulse.com by Ben Lang


Addendums on 4/20/17:

21 bot experts make their predictions for 2017 — from venturebeat.com by Adelyn Zhou

Excerpt:

2016 was a huge year for bots, with major platforms like Facebook launching bots for Messenger, and Amazon and Google heavily pushing their digital assistants. Looking forward to 2017, we asked 21 bot experts, entrepreneurs, and executives to share their predictions for how bots will continue to evolve in the coming year.

From Jordi Torras, founder and CEO, Inbenta:
“Chatbots will get increasingly smarter, thanks to the adoption of sophisticated AI algorithms and machine learning. But also they will specialize more in specific tasks, like online purchases, customer support, or online advice. First attempts of chatbot interoperability will start to appear, with generalist chatbots, like Siri or Alexa, connecting to specialized enterprise chatbots to accomplish specific tasks. Functions traditionally performed by search engines will be increasingly performed by chatbots.”


From DSC:
For those of us working within higher education, chatbots need to be on our radars. Here are 2 slides from my NGLS 2017 presentation.

The Enterprise Gets Smart
Companies are starting to leverage artificial intelligence and machine learning technologies to bolster customer experience, improve security and optimize operations.

Excerpt:

Assembling the right talent is another critical component of an AI initiative. While existing enterprise software platforms that add AI capabilities will make the technology accessible to mainstream business users, there will be a need to ramp up expertise in areas like data science, analytics and even nontraditional IT competencies, says Guarini.

“As we start to see the land grab for talent, there are some real gaps in emerging roles, and those that haven’t been as critical in the past,” Guarini says, citing the need for people with expertise in disciplines like philosophy and linguistics, for example. “CIOs need to get in front of what they need in terms of capabilities and, in some cases, identify potential partners.”

Asilomar AI Principles

These principles were developed in conjunction with the 2017 Asilomar conference (videos here), through the process described here.

 

Artificial intelligence has already provided beneficial tools that are used every day by people around the world. Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead.

Research Issues

 

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Ethics and Values

 

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer-term Issues

 

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

Excerpts:
Creating human-level AI: Will it happen, and if so, when and how? What key remaining obstacles can be identified? How can we make future AI systems more robust than today’s, so that they do what we want without crashing, malfunctioning or getting hacked?

  • Talks:
    • Demis Hassabis (DeepMind)
    • Ray Kurzweil (Google) (video)
    • Yann LeCun (Facebook/NYU) (pdf) (video)
  • Panel with Anca Dragan (Berkeley), Demis Hassabis (DeepMind), Guru Banavar (IBM), Oren Etzioni (Allen Institute), Tom Gruber (Apple), Jürgen Schmidhuber (Swiss AI Lab), Yann LeCun (Facebook/NYU), Yoshua Bengio (Montreal) (video)
  • Superintelligence: Science or fiction? If human level general AI is developed, then what are likely outcomes? What can we do now to maximize the probability of a positive outcome? (video)
    • Talks:
      • Shane Legg (DeepMind)
      • Nick Bostrom (Oxford) (pdf) (video)
      • Jaan Tallinn (CSER/FLI) (pdf) (video)
    • Panel with Bart Selman (Cornell), David Chalmers (NYU), Elon Musk (Tesla, SpaceX), Jaan Tallinn (CSER/FLI), Nick Bostrom (FHI), Ray Kurzweil (Google), Stuart Russell (Berkeley), Sam Harris, Demis Hassabis (DeepMind): If we succeed in building human-level AGI, then what are likely outcomes? What would we like to happen?
    • Panel with Dario Amodei (OpenAI), Nate Soares (MIRI), Shane Legg (DeepMind), Richard Mallah (FLI), Stefano Ermon (Stanford), Viktoriya Krakovna (DeepMind/FLI): Technical research agenda: What can we do now to maximize the chances of a good outcome? (video)
  • Law, policy & ethics: How can we update legal systems, international treaties and algorithms to be more fair, ethical and efficient and to keep pace with AI?
    • Talks:
      • Matt Scherer (pdf) (video)
      • Heather Roff-Perkins (Oxford)
    • Panel with Martin Rees (CSER/Cambridge), Heather Roff-Perkins, Jason Matheny (IARPA), Steve Goose (HRW), Irakli Beridze (UNICRI), Rao Kambhampati (AAAI, ASU), Anthony Romero (ACLU): Policy & Governance (video)
    • Panel with Kate Crawford (Microsoft/MIT), Matt Scherer, Ryan Calo (U. Washington), Kent Walker (Google), Sam Altman (OpenAI): AI & Law (video)
    • Panel with Kay Firth-Butterfield (IEEE, Austin-AI), Wendell Wallach (Yale), Francesca Rossi (IBM/Padova), Huw Price (Cambridge, CFI), Margaret Boden (Sussex): AI & Ethics (video)

A massive AI partnership is tapping civil rights and economic experts to keep AI safe — from qz.com by Dave Gershgorn

Excerpt:

When the Partnership on Artificial Intelligence to Benefit People and Society was announced in September, it was with the stated goal of educating the public on artificial intelligence, studying AI’s potential impact on the world, and establishing industry best practices. Now, how those goals will actually be achieved is becoming clearer.

This week, the Partnership brought on new members that include representatives from the American Civil Liberties Union, the MacArthur Foundation, OpenAI, the Association for the Advancement of Artificial Intelligence, Arizona State University, and the University of California, Berkeley.

The organizations themselves are not officially affiliated yet—that process is still underway—but the Partnership’s board selected these candidates based on their expertise in civil rights, economics, and open research, according to interim co-chair Eric Horvitz, who is also director of Microsoft Research. The Partnership also added Apple as a “founding member,” putting the tech giant in good company: Amazon, Microsoft, IBM, Google, and Facebook are already on board.

Also relevant/see:

Building Public Policy To Address Artificial Intelligence’s Impact — from blogs.wsj.com by Irving Wladawsky-Berger

Excerpt:

Artificial intelligence may be at a tipping point, but it’s not immune to backlash from users in the event of system mistakes or a failure to meet heightened expectations. As AI becomes increasingly used for more critical tasks, care needs to be taken by proponents to avoid unfulfilled promises as well as efforts that appear to discriminate against certain segments of society.

Two years ago, Stanford University launched the One Hundred Year Study of AI to address “how the effects of artificial intelligence will ripple through every aspect of how people work, live and play.” One of its key missions is to convene a Study Panel of experts every five years to assess the then current state of the field, as well as to explore both the technical advances and societal challenges over the next 10 to 15 years.

The first such Study Panel recently published Artificial Intelligence and Life in 2030, a report that examined the likely impact of AI on a typical North American city by the year 2030.

Apple iPhone 8 To Get 3D-Sensing Tech For Augmented-Reality Apps — from investors.com by Patrick Seitz

Excerpt:

Apple’s (AAPL) upcoming iPhone 8 smartphone will include a 3D-sensing module to enable augmented-reality applications, Rosenblatt Securities analyst Jun Zhang said Wednesday. Apple has included the 3D-sensing module in all three current prototypes of the iPhone 8, which have screen sizes of 4.7, 5.1 and 5.5 inches, he said. “We believe Apple’s 3D sensing might provide a better user experience with more applications,” Zhang said in a research report. “So far, we think 3D sensing aims to provide an improved smartphone experience with a VR/AR environment.”

Apple’s iPhone 8 is expected to have 3D-sensing tech like Lenovo’s Phab 2 Pro smartphone. (Lenovo)

AltspaceVR Education Overview

10 Prominent Developers Detail Their 2017 Predictions for The VR/AR Industry — from uploadvr.com by David Jagneaux

Excerpt:

As we look forward to 2017 then, we’ve reached out to a bunch of industry experts and insiders to get their views on where we’re headed over the next 12 months.

2016 provided hints of where Facebook, HTC, Sony, Google, and more will take their headsets in the near future, but where do the industry’s best and brightest think we’ll end up this time next year? With CES, the year’s first major event, now in the books, let’s hear from some of those that work with VR itself about what happens next.

We asked all of these developers the same four questions:

1) What do you think will happen to the VR/AR market in 2017?
2) What NEEDS to happen to the VR/AR market in 2017?
3) What will be the big breakthroughs and innovations of 2017?
4) Will 2017 finally be the “year of VR?”

MEL Lab’s Virtual Reality Chemistry Class — from thereisonlyr.com by Grant Greene
An immersive learning startup brings novel experiences to science education.

The MEL app turned my iPhone 6 into a virtual microscope, letting me walk through 360-degree, 3-D representations of the molecules featured in the experiment kits.

Labster releases ‘World of Science’ Simulation on Google Daydream — from labster.com by Marian Reed

Excerpt:

Labster is exploring new platforms by which students can access its laboratory simulations and is pleased to announce the release of its first Google Daydream-compatible virtual reality (VR) simulation, ‘Labster: World of Science’. This new simulation, modeled on Labster’s original ‘Lab Safety’ virtual lab, continues to incorporate scientific learning alongside a specific context, enriched by story-telling elements. The use of the Google VR platform has enabled Labster to fully immerse the student, or science enthusiast, in a wet lab that can easily be navigated with intuitive usage of Daydream’s handheld controller.

 

 

The Inside Story of Google’s Daydream, Where VR Feels Like Home — from wired.com by David Pierce

Excerpt:

Jessica Brillhart, Google’s principal VR filmmaker, has taken to calling people “visitors” rather than “viewers,” as a way of reminding herself that in VR, people aren’t watching what you’ve created. They’re living it. Which changes things.

 

 

Welcoming more devices to the Daydream-ready family — from blog.google.com by Amit Singh

Excerpt:

In November, we launched Daydream with the goal of bringing high quality, mobile VR to everyone. With the Daydream View headset and controller, and a Daydream-ready phone like the Pixel or Moto Z, you can explore new worlds, kick back in your personal VR cinema and play games that put you in the center of the action.

Daydream-ready phones are built for VR with high-resolution displays, ultra smooth graphics, and high-fidelity sensors for precise head tracking. To give you even more choices to enjoy Daydream, today we’re welcoming new devices that will soon join the Daydream-ready family.

 

 

Kessler Foundation awards virtual reality job interview program — from haptic.al by Deniz Ergürel

Excerpt:

Kessler Foundation, one of the largest public charities in the United States, is funding a virtual reality training project to support high school students with disabilities. The foundation is providing a two-year, $485,000 Signature Employment Grant to the University of Michigan in Ann Arbor to launch the Virtual Reality Job Interview Training program. Kessler Foundation says the VR program will allow for highly personalized role-play, with precise feedback and coaching that may be repeated as often as desired without fear or embarrassment.

 

 

Deep-water safety training goes virtual — from shell.com by Soh Chin Ong
How a visit to a shopping centre led to the use of virtual reality safety training for a new oil production project, Malikai, in the deep waters off Sabah in Malaysia.

 

 

 

Research study suggests VR can have a huge impact in the classroom — from uploadvr.com by Joe Durbin

Excerpt:

“Every child is a genius in his or her own way. VR can be the key to awakening the genius inside.”

This is the closing line of a new research study currently making its way out of China. Conducted by Beijing Bluefocus E-Commerce Co., Ltd and Beijing iBokan Wisdom Mobile Internet Technology Training Institution, the study takes a detailed look at the different ways virtual reality can make public education more effective.

 

“Compared with traditional education, VR-based education is of obvious advantage in theoretical knowledge teaching as well as practical skills training. In theoretical knowledge teaching, it boasts the ability to make abstract problems concrete, and theoretical thinking well-supported. In practical skills training, it helps sharpen students’ operational skills, provides an immersive learning experience, and enhances students’ sense of involvement in class, making learning more fun, more secure, and more active,” the study states.

 

 

VR for Education – what was and what is — from researchvr.podigee.io

Topics discussed:

  • VR for education: one time use vs everyday use
  • Ecological Validity of VR Research
  • AR definition & history
  • Tethered vs untethered
  • Intelligent Ontology-driven Games for Teaching Human Anatomy
  • Envelop VR
  • VR for Education
  • Gartner curve – then and now

 

 

 

Virtual reality industry leaders come together to create new association — from gvra.com

Excerpt:

CALIFORNIA — Acer, Starbreeze, Google, HTC VIVE, Facebook’s Oculus, Samsung, and Sony Interactive Entertainment [on 12/7/16] announced the creation of a non-profit organization of international headset manufacturers to promote the growth of the global virtual reality (VR) industry. The Global Virtual Reality Association (GVRA) will develop and share best practices for the industry and foster dialogue between public and private stakeholders around the world.

The goal of the Global Virtual Reality Association is to promote responsible development and adoption of VR globally. The association’s members will develop and share best practices, conduct research, and bring the international VR community together as the technology progresses. The group will also serve as a resource for consumers, policymakers, and industry interested in VR.

VR has the potential to be the next great computing platform, improving sectors ranging from education to healthcare, and to contribute significantly to the global economy. Through research, international engagement, and the development of best practices, the founding companies of the Global Virtual Reality Association will work to unlock and maximize VR’s potential and ensure those gains are shared as broadly around the world as possible.

For more information, visit www.GVRA.com.
Occipital shows off a $399 mixed reality headset for iPhone — from techcrunch.com by Lucas Matney

Excerpt:

Occipital announced today that it is launching a mixed reality platform built upon its depth-sensing technologies called Bridge. The headset is available for $399 and starts shipping in March; eager developers can get their hands on an Explorer Edition for $499, which starts shipping next week.

 

 

From DSC:
While I hope that early innovators in the AR/VR/MR space thrive, I do wonder what will happen if and when Apple puts out their rendition/version of a new form of Human Computer Interaction (or forms) — such as integrating AR-capabilities directly into their next iPhone.

 

 

Enterprise augmented reality applications ready for prime time — from internetofthingsagenda.techtarget.com by Beth Stackpole
Pokémon Go may have put AR on the map, but the technology is now being leveraged for enterprise applications in areas like marketing, maintenance and field service.

Excerpt:

Unlike virtual reality, which creates an immersive, computer-generated environment, the less familiar augmented reality, or AR, technology superimposes computer-generated images and overlays information on a user’s real-world view. This computer-generated sensory data — which could include elements such as sound, graphics, GPS data, video or 3D models — bridges the digital and physical worlds. For an enterprise, the applications are boundless, arming workers walking the warehouse or selling on the shop floor, for example, with essential information that can improve productivity, streamline customer interactions and deliver optimized maintenance in the field.

 

 

15 virtual reality trends we’re predicting for 2017 — from appreal-vr.com by Yariv Levski

Excerpt (emphasis DSC):

2016 is fast drawing to a close. And while many will be glad to see the back of it, for those of us who work and play with Virtual Reality, it has been a most exciting year.

By the time the bells ring out signalling the start of a new year, the total number of VR users will exceed 43 million. This is a market on the move, projected to be worth $30bn by 2020. If it’s to meet that valuation, then we believe 2017 will be an incredibly important year in the lifecycle of VR hardware and software development.

VR will be enjoyed by an increasingly mainstream audience very soon, and here we take a quick look at some of the trends we expect to develop over the next 12 months for that to happen.

 

 

Murdoch University hosts trial of virtual reality classroom TeachLivE — from communitynews.com.au by Josh Zimmerman

Excerpt:

In an Australian first, education students will be able to hone their skills without setting foot in a classroom. Murdoch University has hosted a pilot trial of TeachLivE, a virtual reality environment for teachers in training.

 

The student avatars are able to disrupt the class in a range of ways that teachers may encounter such as pulling out mobile phones or losing their pen during class.

 


 

 

8 Cutting Edge Virtual Reality Job Opportunities — from appreal-vr.com by Yariv Levski
Today we’re highlighting the top 8 job opportunities in VR to give you a current scope of the Virtual Reality job market.

 

 

 

Epson’s Augmented Reality Glasses Are a Revolution in Drone Tech — from dronelife.com by Miriam McNabb

Excerpt:

The Epson Moverio BT-300, to give the smart glasses their full name, are wearable technology – lightweight, comfortable see-through glasses – that allow you to see digital data, and have a first person view (FPV) experience: all while seeing the real world at the same time. The applications are almost endless.

 

 

 

Volkswagen Electric Car To Feature Augmented Reality Navigation System — from gas2.org by Steve Hanley

Excerpt:

Volkswagen’s pivot away from diesel cars to electric vehicles is still a work in progress, but some details about its coming I.D. electric car — unveiled in Paris earlier this year — are starting to come to light. Much of the news is about an innovative augmented reality heads-up display Volkswagen plans to offer in its electric vehicles. Klaus Bischoff, head of the VW brand, says the I.D. electric car will completely reinvent vehicle instrumentation systems when it is launched at the end of the decade.

 

 

These global research centers are a proof that virtual reality is more than gaming — from haptic.al by Deniz Ergürel

Excerpt:

For decades, numerous research centers and academics around the world have been working on the potential of virtual reality technology. Countless research projects undertaken in these centers are an important indicator that everything from health care to real estate could experience disruption within a few years.

  • Virtual Human Interaction Lab — Stanford University
  • Virtual Reality Applications Center — Iowa State University
  • Institute for Creative Technologies — USC
  • Medical Virtual Reality — USC
  • The Imaging Media Research Center — Korea Institute of Science and Technology
  • Virtual Reality & Immersive Visualization Group — RWTH Aachen University
  • Center For Simulations & Virtual Environments Research — UCIT
  • Duke immersive Virtual Environment — Duke University
  • Experimental Virtual Environments (EVENT) Lab for Neuroscience and Technology — Barcelona University
  • Immersive Media Technology Experiences (IMTE) — Norwegian University of Technology
  • Human Interface Technology Laboratory — University of Washington

 

 

Where Virtual and Physical Worlds Converge — from disruptionhub.com

Excerpt:

Augmented Reality (AR) dwelled quietly in the shadow of VR until earlier this year, when a certain app propelled it into the mainstream. Now, AR is a household term and can hold its own with advanced virtual technologies. The AR industry is predicted to hit global revenues of $90 billion by 2020, not just matching VR but overtaking it by a large margin. Of course, a lot of this turnover will be generated by applications in the entertainment industry. VR was primarily created by gamers for gamers, but AR began as a visionary idea that would change the way that humanity interacted with the world around them. The first applications of augmented reality were actually geared towards improving human performance in the workplace… But there’s far, far more to be explored.

 

 

VR’s killer app has arrived, and it’s Google Earth — from arstechnica.com by Sam Machkovech
Squishy geometry aside, you won’t find a cooler free VR app on any device.

Excerpt:

I stood at the peak of Mount Rainier, the tallest mountain in Washington state. The sounds of wind whipped past my ears, and mountains and valleys filled a seemingly endless horizon in every direction. I’d never seen anything like it—until I grabbed the sun.

Using my HTC Vive virtual reality wand, I reached into the heavens in order to spin the Earth along its normal rotational axis, until I set the horizon on fire with a sunset. I breathed deeply at the sight, then spun our planet just a little more, until I filled the sky with a heaping helping of the Milky Way Galaxy.

Virtual reality has exposed me to some pretty incredible experiences, but I’ve grown ever so jaded in the past few years of testing consumer-grade headsets. Google Earth VR, however, has dropped my jaw anew. This, more than any other game or app for SteamVR’s “room scale” system, makes me want to call every friend and loved one I know and tell them to come over, put on a headset, and warp anywhere on Earth that they please.

 

 

VR is totally changing how architects dream up buildings — from wired.com by Sam Lubell

Excerpt:

In VR architecture, the difference between real and unreal is fluid and, to a large extent, unimportant. What is important, and potentially revolutionary, is VR’s ability to draw designers and their clients into a visceral world of dimension, scale, and feeling, removing the unfortunate schism between a built environment that exists in three dimensions and a visualization of it that has until now existed in two.

 

 

How VR can democratize Architecture — from researchvr.podigee.io

Excerpt:

Many of the VR projects in architecture are focused on the final stages of the design process, basically selling a house to a client. Thomas sees the real potential in the early stages, when the main decisions need to be made. VR is well suited for this, as it helps non-professionals understand and grasp the concepts of architecture very intuitively. And this is mostly what we talked about.

 

 

 

How virtual reality could revolutionize the real estate industry — from uploadvr.com by Benjamin Maltbie

 

 

 

Will VR disrupt the airline industry? Sci-Fi show meets press virtually instead of flying — from singularityhub.com by Aaron Frank

Excerpt:

A proposed benefit of virtual reality is that it could one day eliminate the need to move our fleshy bodies around the world for business meetings and work engagements. Instead, we’ll be meeting up with colleagues and associates in virtual spaces. While this would be great news for the environment and business people sick of airports, it would be troubling news for airlines.

 

 

How theaters are evolving to include VR experiences — from uploadvr.com by Michael Mascioni

 

 

 

#AI, #VR, and #IoT Are Coming to a Courthouse Near You! — from americanbar.org by Judge Herbert B. Dixon Jr.

Excerpt:

Imagine during one of your future trials that jurors in your courtroom are provided with virtual reality headsets, which allow them to view the accident site or crime scene digitally and walk around or be guided through a 3D world to examine vital details of the scene.

How can such an evidentiary presentation be accomplished? A system is being developed whereby investigators use a robot system, inspired by NASA’s Curiosity Mars rover, with 3D imaging and panoramic videography equipment to record virtual reality video of the scene. The captured 360° immersive video and photographs of the scene would allow recreation of a VR experience with video and pictures of the original scene from every angle. Admissibility of this evidence would require a showing that the VR simulation fairly and accurately depicts what it represents. If a judge permits presentation of the evidence after its accuracy is established, jurors receiving the evidence could turn their heads and view various aspects of the scene by looking up, down, and around, and zooming in and out.

Unlike an animation or edited video initially created to demonstrate one party’s point of view, the purpose of this type of evidence would be to gather data and objectively preserve the scene without staging or tampering. Even further, this approach would allow investigators to revisit scenes as they existed during the initial forensic examination and give jurors a vivid rendition of the site as it existed when the events occurred.

 

 

Microsoft goes long for mixed reality — from next.reality.news

Excerpt:

The theme running throughout most of this year’s WinHEC keynote in Shenzhen, China was mixed reality. Microsoft’s Alex Kipman continues to be a great spokesperson and evangelist for the new medium, and it is apparent that Microsoft is going in deep, if not all in, on this version of the future. I, for one, as a mixed reality or bust developer, am very glad to see it.

As part of the presentation, Microsoft presented a video (see below) that shows the various forms of mixed reality. The video starts with a few virtual objects in the room with a person, transitions into the same room with a virtual person, then becomes a full virtual reality experience with Windows Holographic.

 

 

Amazon Opening Store That Will Eliminate Checkout — and Lines — from bloomberg.com by Jing Cao
At Amazon Seattle location items get charged to Prime account | New technology combines artificial intelligence and sensors

Excerpt:

Amazon.com Inc. unveiled technology that will let shoppers grab groceries without having to scan and pay for them — in one stroke eliminating the checkout line.

The company is testing the new system at what it’s calling an Amazon Go store in Seattle, which will open to the public early next year. Customers will be able to scan their phones at the entrance using a new Amazon Go mobile app. Then the technology will track what items they pick up or even return to the shelves and add them to a virtual shopping cart in real time, according to a video Amazon posted on YouTube. Once the customers exit the store, they’ll be charged on their Amazon account automatically.
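The pick-up/put-back flow described in that excerpt can be sketched as a tiny event-driven cart. Everything here is illustrative: the class and method names are invented, and the real system generates these events from computer vision and sensor data rather than explicit calls.

```python
class VirtualCart:
    """Toy model of a checkout-free store cart: sensors report pick-up
    and put-back events, and the cart updates in real time."""

    def __init__(self):
        self.items = {}  # item name -> quantity

    def pick_up(self, item):
        self.items[item] = self.items.get(item, 0) + 1

    def put_back(self, item):
        # Shopper returned an item to the shelf; remove it from the cart.
        if self.items.get(item, 0) > 0:
            self.items[item] -= 1
            if self.items[item] == 0:
                del self.items[item]

    def charge(self, prices):
        """Called when the shopper exits; the total is billed automatically."""
        return sum(prices[item] * qty for item, qty in self.items.items())


cart = VirtualCart()
cart.pick_up("milk")
cart.pick_up("bread")
cart.put_back("bread")  # shopper changes their mind
total = cart.charge({"milk": 2.50, "bread": 1.80})  # only the milk is billed
```

The point of the sketch is the absence of a checkout step: billing is just a fold over the event history at exit time.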

 

 

 

Amazon Introduces ‘Amazon Go’ Retail Stores, No Checkout, No Lines — from investors.com

Excerpt:

Online retail king Amazon.com (AMZN) is taking dead aim at the physical-store world Monday, introducing Amazon Go, a retail convenience store format it is developing that will use computer vision and deep-learning algorithms to let shoppers just pick up what they want and exit the store without any checkout procedure.

Shoppers will merely need to tap the Amazon Go app on their smartphones, and their virtual shopping carts will automatically tabulate what they owe, deduct that amount from their Amazon accounts, and send them a receipt. It’s what the company has deemed “just walk out technology,” which it said is based on the same technology used in self-driving cars. It’s certain to up the ante in the company’s competition with Wal-Mart (WMT), Target (TGT) and the other retail leaders.

 

 

Google DeepMind Makes AI Training Platform Publicly Available — from bloomberg.com by Jeremy Kahn
Company is increasingly embracing open-source initiatives | Move comes after rival Musk’s OpenAI made its robot gym public

Excerpt:

Alphabet Inc.’s artificial intelligence division Google DeepMind is making the maze-like game platform it uses for many of its experiments available to other researchers and the general public.

DeepMind is putting the entire source code for its training environment — which it previously called Labyrinth and has now renamed DeepMind Lab — on the open-source repository GitHub, the company said Monday. Anyone will be able to download the code and customize it to help train their own artificial intelligence systems. They will also be able to create new game levels for DeepMind Lab and upload these to GitHub.

 

Related:
Alphabet DeepMind is inviting developers into the digital world where its AI learns to explore — from qz.com by Dave Gershgorn

 

 

 

After Retail Stumble, Beacons Shine From Banks to Sports Arenas — from bloomberg.com by Olga Kharif
Shipments of the devices expected to grow to 500 million

Excerpt (emphasis DSC):

Beacon technology, which was practically left for dead after failing to deliver on its promise to revolutionize the retail industry, is making a comeback.

Beacons are puck-size gadgets that can send helpful tips, coupons and other information to people’s smartphones through Bluetooth. They’re now being used in everything from bank branches and sports arenas to resorts, airports and fast-food restaurants. In the latest sign of the resurgence, Mobile Majority, an advertising startup, said on Monday that it was buying Gimbal Inc., a beacon maker it bills as the largest independent source of location data other than Google and Apple Inc.

Several recent developments have sparked the latest boom. Companies like Google parent Alphabet Inc. are making it possible for people to use the feature without downloading any apps, which had been a major barrier to adoption, said Patrick Connolly, an analyst at ABI. Introduced this year, Google Nearby Notifications lets developers tie an app or a website to a beacon to send messages to consumers even when they have no app installed.

But in June, Cupertino, California-based Mist Systems began shipping a software-based product that simplified the process. Instead of placing 10 beacons on walls and ceilings, for example, management using Mist can install one device every 2,000 feet (610 meters), then designate various points on a digital floor plan as virtual beacons, which can be moved with a click of a mouse.
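The "virtual beacon" idea in that excerpt can be sketched in a few lines: one physical device serves an area, and the beacons are just named points on a digital floor plan that can be repositioned in software. The class names and nearest-point logic below are assumptions for illustration, not Mist's actual product API.

```python
import math


class VirtualBeacon:
    """A named point on a digital floor plan (illustrative only)."""
    def __init__(self, name, x, y):
        self.name, self.x, self.y = name, x, y


class FloorPlan:
    """One physical access point serving many movable virtual beacons."""
    def __init__(self):
        self.beacons = []

    def place(self, name, x, y):
        self.beacons.append(VirtualBeacon(name, x, y))

    def move(self, name, x, y):
        # "Moved with a click of a mouse": just update the coordinates.
        for b in self.beacons:
            if b.name == name:
                b.x, b.y = x, y

    def nearest(self, x, y):
        # Which virtual beacon should message a phone at (x, y)?
        return min(self.beacons,
                   key=lambda b: math.hypot(b.x - x, b.y - y)).name


plan = FloorPlan()
plan.place("entrance", 0, 0)
plan.place("teller", 30, 10)
before = plan.nearest(2, 1)    # phone near the door sees "entrance"
plan.move("entrance", 40, 10)  # drag the beacon across the floor plan
after = plan.nearest(2, 1)     # same phone now falls to "teller"
```

Repositioning a beacon is a data change, not a hardware change, which is the operational advantage the article describes.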

 

 

Google’s Hand-Fed AI Now Gives Answers, Not Just Search Results — from wired.com by Cade Metz

Excerpt:

Ask the Google search app “What is the fastest bird on Earth?,” and it will tell you.

“Peregrine falcon,” the phone says. “According to YouTube, the peregrine falcon has a maximum recorded airspeed of 389 kilometers per hour.”

That’s the right answer, but it doesn’t come from some master database inside Google. When you ask the question, Google’s search engine pinpoints a YouTube video describing the five fastest birds on the planet and then extracts just the information you’re looking for. It doesn’t mention those other four birds. And it responds in similar fashion if you ask, say, “How many days are there in Hanukkah?” or “How long is Totem?” The search engine knows that Totem is a Cirque du Soleil show, and that it lasts two-and-a-half hours, including a thirty-minute intermission.

Google answers these questions with help from deep neural networks, a form of artificial intelligence rapidly remaking not just Google’s search engine but the entire company and, well, the other giants of the internet, from Facebook to Microsoft. Deep neural nets are pattern recognition systems that can learn to perform specific tasks by analyzing vast amounts of data. In this case, they’ve learned to take a long sentence or paragraph from a relevant page on the web and extract the upshot — the information you’re looking for.
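The extraction step can be illustrated with a toy sketch: score each candidate sentence by how many words it shares with the question and return the best one. This overlap heuristic is a crude stand-in; Google's system learns the same mapping with deep neural networks trained on large amounts of data.

```python
import re


def tokens(text):
    """Lowercase word set with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def extract_answer(question, passage):
    """Return the passage sentence sharing the most words with the
    question -- a toy version of extractive answering."""
    sentences = [s.strip() for s in passage.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(tokens(question) & tokens(s)))


passage = ("The ostrich is the largest bird on Earth. "
           "The peregrine falcon is the fastest bird on Earth, "
           "with a recorded airspeed of 389 kilometers per hour.")
answer = extract_answer("What is the fastest bird on Earth?", passage)
# The falcon sentence wins: it matches "fastest" where the ostrich
# sentence only matches "largest"-adjacent words.
```

Note how the distractor sentence about the ostrich is passed over, mirroring the article's observation that the other four birds never get mentioned.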

 

 

Deep Learning in Production at Facebook — from re-work.co by Katie Pollitt

Excerpt:

Facebook is powered by machine learning and AI. From advertising relevance, news feed and search ranking to computer vision, face recognition, and speech recognition, they run ML models at massive scale, computing trillions of predictions every day.

At the 2016 Deep Learning Summit in Boston, Andrew Tulloch, Research Engineer at Facebook, talked about some of the tools and tricks Facebook uses to scale both the training and deployment of its deep learning models. He also covered some useful libraries the company has open-sourced for production-oriented deep learning applications. Tulloch’s session can be watched in full below.

 

 

The Artificial Intelligence Gold Rush — from foresightr.com by Mark Vickers
Big companies, venture capital firms and governments are all banking on AI

Excerpt:

Let’s start with some of the brand-name organizations laying down big bucks on artificial intelligence.

  • Amazon: Sells the successful Echo home speaker, which comes with the personal assistant Alexa.
  • Alphabet (Google): Uses deep learning technology to power Internet searches and developed AlphaGo, an AI that beat the world champion in the game of Go.
  • Apple: Developed the popular virtual assistant Siri and is working on other phone-related AI applications, such as facial recognition.
  • Baidu: Wants to use AI to improve search, recognize images of objects and respond to natural language queries.
  • Boeing: Works with Carnegie Mellon University to develop machine learning capable of helping it design and build planes more efficiently.
  • Facebook: Wants to create the “best AI lab in the world.” Has its personal assistant, M, and focuses heavily on facial recognition.
  • IBM: Created the Jeopardy-winning Watson AI and is leveraging its data analysis and natural language capabilities in the healthcare industry.
  • Intel: Has made acquisitions to help it build specialized chips and software to handle deep learning.
  • Microsoft: Works on chatbot technology and acquired SwiftKey, which predicts what users will type next.
  • Nokia: Has introduced various machine learning capabilities to its portfolio of customer-experience software.
  • Nvidia: Builds computer chips customized for deep learning.
  • Salesforce: Took first place at the Stanford Question Answering Dataset, a test of machine learning and comprehension, and has developed the Einstein model that learns from data.
  • Shell: Launched a virtual assistant to answer customer questions.
  • Tesla Motors: Continues to work on self-driving automobile technologies.
  • Twitter: Created an AI-development team called Cortex and acquired several AI startups.

 

 

 

IBM Watson and Education in the Cognitive Era — from i-programmer.info by Nikos Vaggalis

Excerpt:

IBM’s seemingly ubiquitous Watson is now infiltrating education, through AI-powered software that ‘reads’ the needs of individual students in order to engage them through tailored learning approaches.

This is not to be taken lightly, as it opens the door to a new breed of technologies that will spearhead the education or re-education of the workforce of the future.

As outlined in the 2030 report, despite robots or AI displacing a big chunk of the workforce, they will also play a major role in creating job opportunities as never before. In such a competitive landscape, workers of all kinds, white collar or blue collar to begin with, should come equipped with new, versatile, and contemporary skills.

The point is, the very AI that will leave someone jobless will also help them re-adapt to a new job’s requirements. It will also prepare the new generations through optimal methodologies that could once more give meaning to the aging and counter-productive schooling system, which leaves students’ skills disengaged from the needs of industry and which still segregates students into ‘good’ and ‘bad’. Might it be that ‘bad’ students become that way due to the system’s inability to stimulate their interest?

Google, Facebook, and Microsoft are remaking themselves around AI — from wired.com by Cade Metz

Excerpt (emphasis DSC):

Alongside a former Stanford researcher—Jia Li, who more recently ran research for the social networking service Snapchat—the China-born Fei-Fei will lead a team inside Google’s cloud computing operation, building online services that any coder or company can use to build their own AI. This new Cloud Machine Learning Group is the latest example of AI not only re-shaping the technology that Google uses, but also changing how the company organizes and operates its business.

Google is not alone in this rapid re-orientation. Amazon is building a similar cloud computing group for AI. Facebook and Twitter have created internal groups akin to Google Brain, the team responsible for infusing the search giant’s own tech with AI. And in recent weeks, Microsoft reorganized much of its operation around its existing machine learning work, creating a new AI and research group under executive vice president Harry Shum, who began his career as a computer vision researcher.

 

But Etzioni says this is also part of a very real shift inside these companies, with AI poised to play an increasingly large role in our future. “This isn’t just window dressing,” he says.

 

 

Intelligence everywhere! Gartner’s Top 10 Strategic Technology Trends for 2017 — from which-50.com

Excerpt (emphasis DSC):

AI and Advanced Machine Learning
Artificial intelligence (AI) and advanced machine learning (ML) are composed of many technologies and techniques (e.g., deep learning, neural networks, natural-language processing [NLP]). The more advanced techniques move beyond traditional rule-based algorithms to create systems that understand, learn, predict, adapt and potentially operate autonomously. This is what makes smart machines appear “intelligent.”

“Applied AI and advanced machine learning give rise to a spectrum of intelligent implementations, including physical devices (robots, autonomous vehicles, consumer electronics) as well as apps and services (virtual personal assistants [VPAs], smart advisors),” said David Cearley, vice president and Gartner Fellow. “These implementations will be delivered as a new class of obviously intelligent apps and things as well as provide embedded intelligence for a wide range of mesh devices and existing software and service solutions.”

 



 

Google’s new website lets you play with its experimental AI projects — from mashable.com by Karissa Bell

Excerpt:

Google is letting users peek into some of its most experimental artificial intelligence projects.

The company unveiled a new website Tuesday called A.I. Experiments that showcases Google’s artificial intelligence research through web apps that anyone can test out. The projects include a game that guesses what you’re drawing, a camera app that recognizes objects you put in front of it and a music app that plays “duets” with you.

 

Google unveils a slew of new and improved machine learning APIs — from digitaltrends.com by Kyle Wiggers

Excerpt:

On Tuesday, Google Cloud chief Diane Greene announced the formation of a new team, the Google Cloud Machine Learning group, that will manage the Mountain View, California-based company’s cloud intelligence efforts going forward.

 

Found in translation: More accurate, fluent sentences in Google Translate — from blog.google by Barak Turovsky

Excerpt:

In 10 years, Google Translate has gone from supporting just a few languages to 103, connecting strangers, reaching across language barriers and even helping people find love. At the start, we pioneered large-scale statistical machine translation, which uses statistical models to translate text. Today, we’re introducing the next step in making Google Translate even better: Neural Machine Translation.

Neural Machine Translation has been generating exciting research results for a few years, and in September our researchers announced Google’s version of this technique. At a high level, the Neural system translates whole sentences at a time, rather than just piece by piece. It uses this broader context to help it figure out the most relevant translation, which it then rearranges and adjusts to be more like a human speaking with proper grammar. Since it’s easier to understand each sentence, translated paragraphs and articles are a lot smoother and easier to read. And this is all possible because of an end-to-end learning system built on Neural Machine Translation, which basically means that the system learns over time to create better, more natural translations.
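The difference between piece-by-piece and whole-sentence translation can be caricatured in a few lines of code. The tiny vocabulary and the "banque"/"rive" disambiguation below are invented purely for illustration; a real neural system learns this kind of context sensitivity from data rather than from a hand-written rule.

```python
# Word-by-word lookup translates each token in isolation, so the
# money-bank sense of "bank" is chosen even next to "river".
WORD_TABLE = {"the": "la", "river": "rivière", "bank": "banque"}


def word_by_word(sentence):
    """Translate each word in isolation, ignoring context."""
    return " ".join(WORD_TABLE.get(w, w) for w in sentence.split())


def sentence_level(sentence):
    """Condition each choice on the whole sentence, so 'bank' near
    'river' becomes 'rive' (riverbank) instead of 'banque' (money)."""
    words = sentence.split()
    out = []
    for w in words:
        if w == "bank" and "river" in words:
            out.append("rive")  # context-aware sense choice
        else:
            out.append(WORD_TABLE.get(w, w))
    return " ".join(out)


naive = word_by_word("the river bank")         # wrong sense of "bank"
contextual = sentence_level("the river bank")  # sense picked from context
```

The sketch ignores reordering and grammar, which the excerpt says the neural system also handles; it only shows why seeing the whole sentence changes individual word choices.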

 

 

‘Augmented Intelligence’ for Higher Ed — from insidehighered.com by Carl Straumsheim
IBM picks Blackboard and Pearson to bring the technology behind the Watson computer to colleges and universities.

Excerpts:

[IBM] is partnering with a small number of hardware and software providers to bring the same technology that won a special edition of the game show back in 2011 to K-12 institutions, colleges and continuing education providers. The partnerships and the products that might emerge from them are still in the planning stage, but the company is investing in the idea that cognitive computing — natural language processing, informational retrieval and other functions similar to the ones performed by the human brain — can help students succeed in and outside the classroom.

Chalapathy Neti, vice president of education innovation at IBM Watson, said education is undergoing the same “digital transformation” seen in the finance and health care sectors, in which more and more content is being delivered digitally.

IBM is steering clear of referring to its technology as “artificial intelligence,” however, as some may interpret it as replacing what humans already do.

“This is about augmenting human intelligence,” Neti said. “We never want to see these data-based systems as primary decision makers, but we want to provide them as decision assistance for a human decision maker that is an expert in conducting that process.”

 

 

What a Visit to an AI-Enabled Hospital Might Look Like — from hbr.org by R “Ray” Wang

Excerpt (emphasis DSC):

The combination of machine learning, deep learning, natural language processing, and cognitive computing will soon change the ways that we interact with our environments. AI-driven smart services will sense what we’re doing, know what our preferences are from our past behavior, and subtly guide us through our daily lives in ways that will feel truly seamless.

Perhaps the best way to explore how such systems might work is by looking at an example: a visit to a hospital.

The AI loop includes seven steps:

  1. Perception describes what’s happening now.
  2. Notification tells you what you asked to know.
  3. Suggestion recommends action.
  4. Automation repeats what you always want.
  5. Prediction informs you of what to expect.
  6. Prevention helps you avoid bad outcomes.
  7. Situational awareness tells you what you need to know right now.
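The seven steps above can be sketched as one loop over incoming readings. This is our own minimal rendering, not code from the article: the class, the method names, and the naive trend-based predictor (which a real system would replace with a learned model) are all assumptions, using a hospital-style vital-sign reading as the example input.

```python
class AILoop:
    """Minimal sketch of the seven-step AI loop over a stream of readings."""

    def __init__(self, alert_threshold=120):
        self.alert_threshold = alert_threshold
        self.history = []

    def perceive(self, reading):
        """1. Perception: record what's happening now."""
        self.history.append(reading)
        return reading

    def notify(self, reading):
        """2. Notification: tell you what you asked to know."""
        return reading > self.alert_threshold

    def suggest(self, reading):
        """3. Suggestion: recommend an action."""
        return "check patient" if reading > self.alert_threshold else "no action"

    def automate(self, reading):
        """4. Automation: repeat what you always want (here, logging)."""
        return {"log": reading}

    def predict(self):
        """5. Prediction: naive linear trend from the last two readings."""
        if len(self.history) < 2:
            return self.history[-1]
        return self.history[-1] + (self.history[-1] - self.history[-2])

    def prevent(self):
        """6. Prevention: warn before the predicted value crosses the threshold."""
        return self.predict() > self.alert_threshold

    def situational_awareness(self):
        """7. Situational awareness: what you need to know right now."""
        return {"latest": self.history[-1], "predicted": self.predict()}


loop = AILoop(alert_threshold=120)
loop.perceive(100)
loop.perceive(115)
```

With readings of 100 and then 115, the trend predictor extrapolates to 130, so `prevent()` fires even though no single reading has yet crossed the threshold, which is the point of having prediction and prevention as separate steps from notification.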

 

 

Japanese artificial intelligence gives up on University of Tokyo admissions exam — from digitaltrends.com by Brad Jones

Excerpt:

Since 2011, Japan’s National Institute of Informatics has been working on an AI, with the end goal of having it pass the entrance exam for the University of Tokyo, according to a report from Engadget. This endeavor, dubbed the Todai Robot Project in reference to a local nickname for the school, has been abandoned.

It turns out that the AI simply cannot meet the exact requirements of the University of Tokyo. The team does not expect to reach its goal of passing the test by March 2022, so the project is being brought to an end.

 

 

“We are building not just Azure to have rich compute capability, but we are, in fact, building the world’s first AI supercomputer,” he said.

— from Microsoft CEO Satya Nadella spruiks power of machine learning,
smart bots and mixed reality at Sydney developers conference

 

Why it’s so hard to create unbiased artificial intelligence — from techcrunch.com by Ben Dickson

Excerpt:

As artificial intelligence and machine learning mature and manifest their potential to take on complicated tasks, we’ve become somewhat expectant that robots can succeed where humans have failed — namely, in putting aside personal biases when making decisions. But as recent cases have shown, like all disruptive technologies, machine learning introduces its own set of unexpected challenges and sometimes yields results that are wrong, unsavory, offensive and not aligned with the moral and ethical standards of human society.

While some of these stories might sound amusing, they do lead us to ponder the implications of a future where robots and artificial intelligence take on more critical responsibilities and will have to be held responsible for the possibly wrong decisions they make.

 

 

 

The Non-Technical Guide to Machine Learning & Artificial Intelligence — from medium.com by Sam DeBrule

Excerpt:

This list is a primer for non-technical people who want to understand what machine learning makes possible.

To develop a deep understanding of the space, reading won’t be enough. You need to: have an understanding of the entire landscape, spot and use ML-enabled products in your daily life (Spotify recommendations), discuss artificial intelligence more regularly, and make friends with people who know more than you do about AI and ML.

News: For starters, I’ve included a link to a weekly artificial intelligence email that Avi Eisenberger and I curate (machinelearnings.co). Start here if you want to develop a better understanding of the space, but don’t have the time to actively hunt for machine learning and artificial intelligence news.

Startups: It’s nice to see what startups are doing, and not only hear about the money they are raising. I’ve included links to the websites and apps of 307+ machine intelligence companies and tools.

People: Here’s a good place to jump into the conversation. I’ve provided links to Twitter accounts (and LinkedIn profiles and personal websites in their absence) of the founders, investors, writers, operators and researchers who work in and around the machine learning space.

Events: If you enjoy getting out from behind your computer, and want to meet awesome people who are interested in artificial intelligence in real life, there is one place that’s best to do that; more on my favorite place below.

 

 

 

How one clothing company blends AI and human expertise — from hbr.org by H. James Wilson, Paul Daugherty, & Prashant Shukla

Excerpt:

When we think about artificial intelligence, we often imagine robots performing tasks on the warehouse or factory floor that were once exclusively the work of people. This conjures up the specter of lost jobs and upheaval for many workers. Yet, it can also seem a bit remote — something that will happen in “the future.” But the future is a lot closer than many realize. It also looks more promising than many have predicted.

Stitch Fix provides a glimpse of how some businesses are already making use of AI-based machine learning to partner with employees for more-effective solutions. A five-year-old online clothing retailer, its success in this area reveals how AI and people can work together, with each side focused on its unique strengths.

 

 

 

 

[Image: Washington Post article on how higher education should think about AI, October 2016]

 

Excerpt (emphasis DSC):

As the White House report rightly observes, the implications of an AI-suffused world are enormous — especially for the people who work at jobs that soon will be outsourced to artificially-intelligent machines. Although the report predicts that AI ultimately will expand the U.S. economy, it also notes that “Because AI has the potential to eliminate or drive down wages of some jobs … AI-driven automation will increase the wage gap between less-educated and more-educated workers, potentially increasing economic inequality.”

Accordingly, the ability of people to access higher education continuously throughout their working lives will become increasingly important as the AI revolution takes hold. To be sure, college has always helped safeguard people from economic dislocations caused by technological change. But this time is different. First, the quality of AI is improving rapidly. On a widely-used image recognition test, for instance, the best AI result went from a 26 percent error rate in 2011 to a 3.5 percent error rate in 2015 — even better than the 5 percent human error rate.

Moreover, as the administration’s report documents, AI has already found new applications in so-called “knowledge economy” fields, such as medical diagnosis, education and scientific research. Consequently, as artificially intelligent systems come to be used in more white-collar, professional domains, even people who are highly educated by today’s standards may find their livelihoods continuously at risk by an ever-expanding cybernetic workforce.

 

As a result, it’s time to stop thinking of higher education as an experience that people take part in once during their young lives — or even several times as they advance up the professional ladder — and begin thinking of it as a platform for lifelong learning.

 

Colleges and universities need to be doing more to move beyond the array of two-year, four-year, and graduate degrees that most offer, and toward a more customizable system that enables learners to access the learning they need when they need it. This will be critical as more people seek to return to higher education repeatedly during their careers, compelled by the imperative to stay ahead of relentless technological change.

 

 

From DSC:
That last bolded paragraph is why I think the vision of easily accessible learning — using the devices that will likely be found in one’s apartment or home — will be enormously powerful and widespread in a few years. Given the exponential pace of change that we are experiencing — and will likely continue to experience for some time — people will need to reinvent themselves quickly.

Higher education needs to rethink our offerings…or someone else will.

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

 

 

 

From DSC:
We are hopefully creating the future that we want — i.e., creating the future of our dreams, not nightmares. The 14 items below show that technology is often waaay out ahead of us…and it takes time for the rest of society to catch up (making policies and laws, or deciding whether we should even be doing these things in the first place).

Such reflections always make me ask:

  • Who should be involved in some of these decisions?
  • Who is currently getting asked to the decision-making tables for such discussions?
  • How does the average citizen participate in such discussions?

Readers of this blog know that I’m generally pro-technology. But with the exponential pace of technological change, we need to slow things down enough to make wise decisions.

 


 

Google AI invents its own cryptographic algorithm; no one knows how it works — from arstechnica.co.uk by Sebastian Anthony
Neural networks seem good at devising crypto methods; less good at codebreaking.

Excerpt:

Google Brain has created two artificial intelligences that evolved their own cryptographic algorithm to protect their messages from a third AI, which was trying to evolve its own method to crack the AI-generated crypto. The study was a success: the first two AIs learnt how to communicate securely from scratch.
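The study itself trained three neural networks, usually described as Alice, Bob, and Eve. As a runnable stand-in for the property those networks converged toward (not the learned scheme itself), here is hand-written XOR encryption with a shared random key: Bob, who holds the key, recovers every message exactly, while Eve, who sees only ciphertext, does no better than coin-flipping.

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

def xor_bits(a, b):
    return [x ^ y for x, y in zip(a, b)]

def alice_encrypt(plaintext, key):
    return xor_bits(plaintext, key)

def bob_decrypt(ciphertext, key):
    return xor_bits(ciphertext, key)

def eve_guess(ciphertext):
    # Without the key, reading the ciphertext bits directly is as good
    # as any other keyless strategy against a one-time XOR pad.
    return list(ciphertext)

def trial(n_bits=16):
    plaintext = [random.randint(0, 1) for _ in range(n_bits)]
    key = [random.randint(0, 1) for _ in range(n_bits)]  # shared by Alice and Bob
    cipher = alice_encrypt(plaintext, key)
    bob_ok = bob_decrypt(cipher, key) == plaintext
    eve_hits = sum(g == p for g, p in zip(eve_guess(cipher), plaintext))
    return bob_ok, eve_hits, n_bits

bob_always_right = True
eve_hits = total_bits = 0
for _ in range(500):
    ok, hits, n = trial()
    bob_always_right = bob_always_right and ok
    eve_hits += hits
    total_bits += n
eve_accuracy = eve_hits / total_bits  # hovers near 0.5, i.e., chance
```

The interesting part of the Google Brain result is that the networks arrived at a key-dependent scheme with this same asymmetry on their own, through adversarial training, rather than being handed one like the XOR pad above.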

 

 

IoT growing faster than the ability to defend it — from scientificamerican.com by Larry Greenemeier
Last week’s use of connected gadgets to attack the Web is a wake-up call for the Internet of Things, which will get a whole lot bigger this holiday season

Excerpt:

With this year’s approaching holiday gift season the rapidly growing “Internet of Things” or IoT—which was exploited to help shut down parts of the Web this past Friday—is about to get a lot bigger, and fast. Christmas and Hanukkah wish lists are sure to be filled with smartwatches, fitness trackers, home-monitoring cameras and other wi-fi–connected gadgets that connect to the internet to upload photos, videos and workout details to the cloud. Unfortunately these devices are also vulnerable to viruses and other malicious software (malware) that can be used to turn them into virtual weapons without their owners’ consent or knowledge.

Last week’s distributed denial of service (DDoS) attacks—in which tens of millions of hacked devices were exploited to jam and take down internet computer servers—is an ominous sign for the Internet of Things. A DDoS is a cyber attack in which large numbers of devices are programmed to request access to the same Web site at the same time, creating data traffic bottlenecks that cut off access to the site. In this case the still-unknown attackers used malware known as “Mirai” to hack into devices whose passwords they could guess, because the owners either could not or did not change the devices’ default passwords.
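Because the attack hinged on unchanged factory passwords, the corresponding defensive check is simple to sketch. The credential list and device records below are illustrative assumptions, not Mirai’s actual dictionary:

```python
# Known factory-default (username, password) pairs -- an illustrative sample.
DEFAULT_CREDENTIALS = {("admin", "admin"), ("root", "root"), ("admin", "1234")}

def vulnerable_devices(devices):
    """Return the names of devices still using a known default credential pair."""
    return [d["name"] for d in devices
            if (d["user"], d["password"]) in DEFAULT_CREDENTIALS]

fleet = [
    {"name": "camera-1", "user": "admin", "password": "admin"},     # never changed
    {"name": "thermostat", "user": "home", "password": "s3cret!"},  # changed
]
```

`vulnerable_devices(fleet)` flags only `camera-1`; an audit like this, run by the device owner rather than an attacker, is exactly the hygiene the excerpt says many owners “could not or did not” perform.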

 

 

How to Get Lost in Augmented Reality — from inverse.com by Tanya Basu; with thanks to Woontack Woo for this resource
There are no laws against projecting misinformation. That’s good news for pranksters, criminals, and advertisers.

Excerpt:

Augmented reality offers designers and engineers new tools, and artists a new palette, but there’s a dark side to reality-plus. Because A.R. technologies will eventually allow individuals to add flourishes to the environments of others, they will also facilitate the creation of a new type of misinformation and unwanted interactions. There will be advertising (there is always advertising) and there will also be lies perpetrated with optical trickery.

Two computer scientists-turned-ethicists are seriously considering the problematic ramifications of a technology that allows for real-world pop-ups: Keith Miller at the University of Missouri-St. Louis and Bo Brinkman at Miami University in Ohio. Both men are dismissive of Pokémon Go because smartphones are actually behind the times when it comes to A.R.

“A very important question is who controls these augmentations,” Miller says. “It’s a huge responsibility to take over someone’s world — you could manipulate people. You could nudge them.”

 

 

Can we build AI without losing control over it? — from ted.com by Sam Harris

Description:

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.

 

 

Do no harm, don’t discriminate: official guidance issued on robot ethics — from theguardian.com
Robot deception, addiction and possibility of AIs exceeding their remits noted as hazards that manufacturers should consider

Excerpt:

Isaac Asimov gave us the basic rules of good robot behaviour: don’t harm humans, obey orders and protect yourself. Now the British Standards Institute has issued a more official version aimed at helping designers create ethically sound robots.

The document, BS8611 Robots and robotic devices, is written in the dry language of a health and safety manual, but the undesirable scenarios it highlights could be taken directly from fiction. Robot deception, robot addiction and the possibility of self-learning systems exceeding their remits are all noted as hazards that manufacturers should consider.

 

 

World’s first baby born with new “3 parent” technique — from newscientist.com by Jessica Hamzelou

Excerpt:

It’s a boy! A five-month-old boy is the first baby to be born using a new technique that incorporates DNA from three people, New Scientist can reveal. “This is great news and a huge deal,” says Dusko Ilic at King’s College London, who wasn’t involved in the work. “It’s revolutionary.”

The controversial technique, which allows parents with rare genetic mutations to have healthy babies, has only been legally approved in the UK. But the birth of the child, whose Jordanian parents were treated by a US-based team in Mexico, should fast-forward progress around the world, say embryologists.

 

 

Scientists Grow Full-Sized, Beating Human Hearts From Stem Cells — from popsci.com by Alexandra Ossola
It’s the closest we’ve come to growing transplantable hearts in the lab

Excerpt:

Of the 4,000 Americans waiting for heart transplants, only 2,500 will receive new hearts in the next year. Even for those lucky enough to get a transplant, the biggest risk is that their bodies will reject the new heart and launch a massive immune reaction against the foreign cells. To combat the problems of organ shortage and decrease the chance that a patient’s body will reject it, researchers have been working to create synthetic organs from patients’ own cells. Now a team of scientists from Massachusetts General Hospital and Harvard Medical School has gotten one step closer, using adult skin cells to regenerate functional human heart tissue, according to a study published recently in the journal Circulation Research.

 

 

 

Achieving trust through data ethics — from sloanreview.mit.edu
Success in the digital age requires a new kind of diligence in how companies gather and use data.

Excerpt:

A few months ago, Danish researchers used data-scraping software to collect the personal information of nearly 70,000 users of a major online dating site as part of a study they were conducting. The researchers then published their results on an open scientific forum. Their report included the usernames, political leanings, drug usage, and other intimate details of each account.

A firestorm ensued. Although the data gathered and subsequently released was already publicly available, many questioned whether collecting, bundling, and broadcasting the data crossed serious ethical and legal boundaries.

In today’s digital age, data is the primary form of currency. Simply put: Data equals information equals insights equals power.

Technology is advancing at an unprecedented rate — along with data creation and collection. But where should the line be drawn? Where do basic principles come into play to consider the potential harm from data’s use?

 

 

“Data Science Ethics” course — from the University of Michigan on edX.org
Learn how to think through the ethics surrounding privacy, data sharing, and algorithmic decision-making.

About this course
As patients, we care about the privacy of our medical record; but as patients, we also wish to benefit from the analysis of data in medical records. As citizens, we want a fair trial before being punished for a crime; but as citizens, we want to stop terrorists before they attack us. As decision-makers, we value the advice we get from data-driven algorithms; but as decision-makers, we also worry about unintended bias. Many data scientists learn the tools of the trade and get down to work right away, without appreciating the possible consequences of their work.

This course, focused on ethics specifically related to data science, will provide you with the framework to analyze these concerns. This framework is based on ethics, which are shared values that help differentiate right from wrong. Ethics are not law, but they are usually the basis for laws.

Everyone, including data scientists, will benefit from this course. No previous knowledge is needed.

 

 

 

Science, Technology, and the Future of Warfare — from mwi.usma.edu by Margaret Kosal

Excerpt:

We know that emerging innovations within cutting-edge science and technology (S&T) areas carry the potential to revolutionize governmental structures, economies, and life as we know it. Yet, others have argued that such technologies could yield doomsday scenarios and that military applications of such technologies have even greater potential than nuclear weapons to radically change the balance of power. These S&T areas include robotics and autonomous unmanned systems; artificial intelligence; biotechnology, including synthetic and systems biology; the cognitive neurosciences; nanotechnology, including stealth meta-materials; additive manufacturing (aka 3D printing); and the intersection of each with information and computing technologies, i.e., cyber-everything. These concepts and the underlying strategic importance were articulated at the multi-national level in NATO’s May 2010 New Strategic Concept paper: “Less predictable is the possibility that research breakthroughs will transform the technological battlefield…. The most destructive periods of history tend to be those when the means of aggression have gained the upper hand in the art of waging war.”

 

 

Low-Cost Gene Editing Could Breed a New Form of Bioterrorism — from bigthink.com by Philip Perry

Excerpt:

2012 saw the advent of gene editing technique CRISPR-Cas9. Now, just a few short years later, gene editing is becoming accessible to more of the world than its scientific institutions. This new technique is now being used in public health projects, to undermine the ability of certain mosquitoes to transmit disease, such as the Zika virus. But that initiative has had many in the field wondering whether it could be used for the opposite purpose, with malicious intent.

Back in February, U.S. National Intelligence Director James Clapper put out a Worldwide Threat Assessment, to alert the intelligence community of the potential risks posed by gene editing. The technology, which holds incredible promise for agriculture and medicine, was added to the list of weapons of mass destruction.

It is thought that amateur terrorists, non-state actors such as ISIS, or rogue states such as North Korea, could get their hands on it, and use this technology to create a bioweapon such as the earth has never seen, causing wanton destruction and chaos without any way to mitigate it.

 

What would happen if gene editing fell into the wrong hands?

 

 

 

Robot nurses will make shortages obsolete — from thedailybeast.com by Joelle Renstrom
By 2022, one million nurse jobs will be unfilled—leaving patients with lower quality care and longer waits. But what if robots could do the job?

Excerpt:

Japan is ahead of the curve when it comes to this trend, given that its elderly population is the highest of any country. Toyohashi University of Technology has developed Terapio, a robotic medical cart that can make hospital rounds, deliver medications and other items, and retrieve records. It follows a specific individual, such as a doctor or nurse, who can use it to record and access patient data. Terapio isn’t humanoid, but it does have expressive eyes that change shape and make it seem responsive. This type of robot will likely be one of the first to be implemented in hospitals because it has fairly minimal patient contact, works with staff, and has a benign appearance.

 

 

 

[Image: Partnership on AI announcement, September 2016]

 

Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.

 

GOALS

Support Best Practices
To support research and recommend best practices in areas including ethics, fairness, and inclusivity; transparency and interoperability; privacy; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology.

Create an Open Platform for Discussion and Engagement
To provide a regular, structured platform for AI researchers and key stakeholders to communicate directly and openly with each other about relevant issues.

Advance Understanding
To advance public understanding and awareness of AI and its potential benefits and potential costs; to act as a trusted and expert point of contact as questions and concerns arise from the public and others in the area of AI; and to regularly update key constituents on the current state of AI progress.

 

 

 

IBM Watson’s latest gig: Improving cancer treatment with genomic sequencing — from techrepublic.com by Alison DeNisco
A new partnership between IBM Watson Health and Quest Diagnostics will combine Watson’s cognitive computing with genetic tumor sequencing for more precise, individualized cancer care.

 

 



Addendum on 11/1/16:



An open letter to Microsoft and Google’s Partnership on AI — from wired.com by Gerd Leonhard
In a world where machines may have an IQ of 50,000, what will happen to the values and ethics that underpin privacy and free will?

Excerpt:

Dear Francesca, Eric, Mustafa, Yann, Ralf, Demis and others at IBM, Microsoft, Google, Facebook and Amazon.

The Partnership on AI to benefit people and society is a welcome change from the usual celebration of disruption and magic technological progress. I hope it will also usher in a more holistic discussion about the global ethics of the digital age. Your announcement also coincides with the launch of my book Technology vs. Humanity which dramatises this very same question: How will technology stay beneficial to society?

This open letter is my modest contribution to the unfolding of this new partnership. Data is the new oil – which now makes your companies the most powerful entities on the globe, way beyond oil companies and banks. The rise of ‘AI everywhere’ is certain to only accelerate this trend. Yet unlike the giants of the fossil-fuel era, there is little oversight on what exactly you can and will do with this new data-oil, and what rules you’ll need to follow once you have built that AI-in-the-sky. There appears to be very little public stewardship, while accepting responsibility for the consequences of your inventions is rather slow in surfacing.

 

 

Our latest way to bring your government to you — from whitehouse.gov
Why we’re open-sourcing the code for the first-ever government bot on Facebook Messenger.

 

[Animated GIF: the Facebook Messenger bot in action]

Excerpt:

On August 26th, President Obama publicly responded to a Facebook message sent to him by a citizen—a first for any president in history. Since then, he has received over one and a half million Facebook messages, sent from people based all around the world.

While receiving messages from the public isn’t a recent phenomenon—every day, the White House receives thousands of phone calls, physical letters, and submissions through our online contact form—being able to contact the President through Facebook has never been possible before. Today [10/14/16], it’s able to happen because of the first-ever government bot on Facebook Messenger.

 

 


 

 

 
© 2024 | Daniel Christian