From DSC:
We are hopefully creating the future that we want — i.e., creating the future of our dreams, not nightmares.  The 14 items below show that technology is often waaay out ahead of us…and it takes time for other areas of society to catch up (such as making policies and laws, or deciding whether we should even be doing these things in the first place). 

Such reflections always make me ask:

  • Who should be involved in some of these decisions?
  • Who is currently getting asked to the decision-making tables for such discussions?
  • How does the average citizen participate in such discussions?

Readers of this blog know that I’m generally pro-technology. But with the exponential pace of technological change, we need to slow things down enough to make wise decisions.

 


 

Google AI invents its own cryptographic algorithm; no one knows how it works — from arstechnica.co.uk by Sebastian Anthony
Neural networks seem good at devising crypto methods; less good at codebreaking.

Excerpt:

Google Brain has created two artificial intelligences that evolved their own cryptographic algorithm to protect their messages from a third AI, which was trying to evolve its own method to crack the AI-generated crypto. The study was a success: the first two AIs learnt how to communicate securely from scratch.
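The three-way setup is easier to picture with a hand-coded analogue. The sketch below is *not* the learned networks from the study — it is a classical shared-key (one-time-pad style) scheme playing the same three roles, which is roughly the kind of behavior the adversarially trained networks converged toward: Alice and Bob share a key, so Bob recovers the message perfectly, while Eve, seeing only the ciphertext, can do no better than chance.

```python
import random

def xor_bits(a, b):
    """Bitwise XOR of two equal-length bit lists."""
    return [x ^ y for x, y in zip(a, b)]

def alice_encrypt(plaintext, key):
    # Alice combines the plaintext with the key she shares with Bob.
    return xor_bits(plaintext, key)

def bob_decrypt(ciphertext, key):
    # Bob reverses the operation using the same shared key.
    return xor_bits(ciphertext, key)

def eve_guess(ciphertext):
    # Eve sees only the ciphertext; without the key her best
    # strategy is a coin flip per bit.
    return [random.randint(0, 1) for _ in ciphertext]

random.seed(0)
n = 10_000
plaintext = [random.randint(0, 1) for _ in range(n)]
key = [random.randint(0, 1) for _ in range(n)]

ciphertext = alice_encrypt(plaintext, key)
bob = bob_decrypt(ciphertext, key)
eve = eve_guess(ciphertext)

bob_accuracy = sum(b == p for b, p in zip(bob, plaintext)) / n
eve_accuracy = sum(e == p for e, p in zip(eve, plaintext)) / n
print(bob_accuracy)  # 1.0 — Bob recovers every bit
print(eve_accuracy)  # roughly 0.5 — Eve does no better than chance
```

In the Google Brain experiment the interesting part is that nothing like `xor_bits` was hard-coded: Alice and Bob were trained to minimize Bob's reconstruction error while maximizing Eve's, and a key-dependent scheme emerged on its own.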

 

 

IoT growing faster than the ability to defend it — from scientificamerican.com by Larry Greenemeier
Last week’s use of connected gadgets to attack the Web is a wake-up call for the Internet of Things, which will get a whole lot bigger this holiday season

Excerpt:

With this year’s approaching holiday gift season the rapidly growing “Internet of Things” or IoT—which was exploited to help shut down parts of the Web this past Friday—is about to get a lot bigger, and fast. Christmas and Hanukkah wish lists are sure to be filled with smartwatches, fitness trackers, home-monitoring cameras and other wi-fi–connected gadgets that connect to the internet to upload photos, videos and workout details to the cloud. Unfortunately these devices are also vulnerable to viruses and other malicious software (malware) that can be used to turn them into virtual weapons without their owners’ consent or knowledge.

Last week’s distributed denial of service (DDoS) attacks—in which tens of millions of hacked devices were exploited to jam and take down internet computer servers—are an ominous sign for the Internet of Things. A DDoS is a cyber attack in which large numbers of devices are programmed to request access to the same Web site at the same time, creating data traffic bottlenecks that cut off access to the site. In this case the still-unknown attackers used malware known as “Mirai” to hack into devices whose passwords they could guess, because the owners either could not or did not change the devices’ default passwords.

 

 

How to Get Lost in Augmented Reality — from inverse.com by Tanya Basu; with thanks to Woontack Woo for this resource
There are no laws against projecting misinformation. That’s good news for pranksters, criminals, and advertisers.

Excerpt:

Augmented reality offers designers and engineers new tools and artists a new palette, but there’s a dark side to reality-plus. Because A.R. technologies will eventually allow individuals to add flourishes to the environments of others, they will also facilitate the creation of a new type of misinformation and unwanted interactions. There will be advertising (there is always advertising) and there will also be lies perpetrated with optical trickery.

Two computer scientists-turned-ethicists are seriously considering the problematic ramifications of a technology that allows for real-world pop-ups: Keith Miller at the University of Missouri-St. Louis and Bo Brinkman at Miami University in Ohio. Both men are dismissive of Pokémon Go because smartphones are actually behind the times when it comes to A.R.

“A very important question is who controls these augmentations,” Miller says. “It’s a huge responsibility to take over someone’s world — you could manipulate people. You could nudge them.”

 

 

Can we build AI without losing control over it? — from ted.com by Sam Harris

Description:

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.

 

 

Do no harm, don’t discriminate: official guidance issued on robot ethics — from theguardian.com
Robot deception, addiction and possibility of AIs exceeding their remits noted as hazards that manufacturers should consider

Excerpt:

Isaac Asimov gave us the basic rules of good robot behaviour: don’t harm humans, obey orders and protect yourself. Now the British Standards Institute has issued a more official version aimed at helping designers create ethically sound robots.

The document, BS8611 Robots and robotic devices, is written in the dry language of a health and safety manual, but the undesirable scenarios it highlights could be taken directly from fiction. Robot deception, robot addiction and the possibility of self-learning systems exceeding their remits are all noted as hazards that manufacturers should consider.

 

 

World’s first baby born with new “3 parent” technique — from newscientist.com by Jessica Hamzelou

Excerpt:

It’s a boy! A five-month-old boy is the first baby to be born using a new technique that incorporates DNA from three people, New Scientist can reveal. “This is great news and a huge deal,” says Dusko Ilic at King’s College London, who wasn’t involved in the work. “It’s revolutionary.”

The controversial technique, which allows parents with rare genetic mutations to have healthy babies, has only been legally approved in the UK. But the birth of the child, whose Jordanian parents were treated by a US-based team in Mexico, should fast-forward progress around the world, say embryologists.

 

 

Scientists Grow Full-Sized, Beating Human Hearts From Stem Cells — from popsci.com by Alexandra Ossola
It’s the closest we’ve come to growing transplantable hearts in the lab

Excerpt:

Of the 4,000 Americans waiting for heart transplants, only 2,500 will receive new hearts in the next year. Even for those lucky enough to get a transplant, the biggest risk is that their bodies will reject the new heart and launch a massive immune reaction against the foreign cells. To combat the problems of organ shortage and decrease the chance that a patient’s body will reject it, researchers have been working to create synthetic organs from patients’ own cells. Now a team of scientists from Massachusetts General Hospital and Harvard Medical School has gotten one step closer, using adult skin cells to regenerate functional human heart tissue, according to a study published recently in the journal Circulation Research.

 

 

 

Achieving trust through data ethics — from sloanreview.mit.edu
Success in the digital age requires a new kind of diligence in how companies gather and use data.

Excerpt:

A few months ago, Danish researchers used data-scraping software to collect the personal information of nearly 70,000 users of a major online dating site as part of a study they were conducting. The researchers then published their results on an open scientific forum. Their report included the usernames, political leanings, drug usage, and other intimate details of each account.

A firestorm ensued. Although the data gathered and subsequently released was already publicly available, many questioned whether collecting, bundling, and broadcasting the data crossed serious ethical and legal boundaries.

In today’s digital age, data is the primary form of currency. Simply put: Data equals information equals insights equals power.

Technology is advancing at an unprecedented rate — along with data creation and collection. But where should the line be drawn? Where do basic principles come into play to consider the potential harm from data’s use?

 

 

“Data Science Ethics” course — from the University of Michigan on edX.org
Learn how to think through the ethics surrounding privacy, data sharing, and algorithmic decision-making.

About this course
As patients, we care about the privacy of our medical record; but as patients, we also wish to benefit from the analysis of data in medical records. As citizens, we want a fair trial before being punished for a crime; but as citizens, we want to stop terrorists before they attack us. As decision-makers, we value the advice we get from data-driven algorithms; but as decision-makers, we also worry about unintended bias. Many data scientists learn the tools of the trade and get down to work right away, without appreciating the possible consequences of their work.

This course, focused on ethics specifically related to data science, will provide you with the framework to analyze these concerns. This framework is based on ethics, which are shared values that help differentiate right from wrong. Ethics are not law, but they are usually the basis for laws.

Everyone, including data scientists, will benefit from this course. No previous knowledge is needed.

 

 

 

Science, Technology, and the Future of Warfare — from mwi.usma.edu by Margaret Kosal

Excerpt:

We know that emerging innovations within cutting-edge science and technology (S&T) areas carry the potential to revolutionize governmental structures, economies, and life as we know it. Yet, others have argued that such technologies could yield doomsday scenarios and that military applications of such technologies have even greater potential than nuclear weapons to radically change the balance of power. These S&T areas include robotics and autonomous unmanned systems; artificial intelligence; biotechnology, including synthetic and systems biology; the cognitive neurosciences; nanotechnology, including stealth meta-materials; additive manufacturing (aka 3D printing); and the intersection of each with information and computing technologies, i.e., cyber-everything. These concepts and the underlying strategic importance were articulated at the multi-national level in NATO’s May 2010 New Strategic Concept paper: “Less predictable is the possibility that research breakthroughs will transform the technological battlefield…. The most destructive periods of history tend to be those when the means of aggression have gained the upper hand in the art of waging war.”

 

 

Low-Cost Gene Editing Could Breed a New Form of Bioterrorism — from bigthink.com by Philip Perry

Excerpt:

2012 saw the advent of the gene editing technique CRISPR-Cas9. Now, just a few short years later, gene editing is becoming accessible to more of the world than its scientific institutions. This new technique is now being used in public health projects to undermine the ability of certain mosquitoes to transmit disease, such as the Zika virus. But that initiative has had many in the field wondering whether it could be used for the opposite purpose, with malicious intent.

Back in February, U.S. National Intelligence Director James Clapper put out a Worldwide Threat Assessment to alert the intelligence community to the potential risks posed by gene editing. The technology, which holds incredible promise for agriculture and medicine, was added to the list of weapons of mass destruction.

It is thought that amateur terrorists, non-state actors such as ISIS, or rogue states such as North Korea, could get their hands on it, and use this technology to create a bioweapon such as the earth has never seen, causing wanton destruction and chaos without any way to mitigate it.

 

What would happen if gene editing fell into the wrong hands?

 

 

 

Robot nurses will make shortages obsolete — from thedailybeast.com by Joelle Renstrom
By 2022, one million nurse jobs will be unfilled—leaving patients with lower quality care and longer waits. But what if robots could do the job?

Excerpt:

Japan is ahead of the curve when it comes to this trend, given that its elderly population is the highest of any country. Toyohashi University of Technology has developed Terapio, a robotic medical cart that can make hospital rounds, deliver medications and other items, and retrieve records. It follows a specific individual, such as a doctor or nurse, who can use it to record and access patient data. Terapio isn’t humanoid, but it does have expressive eyes that change shape and make it seem responsive. This type of robot will likely be one of the first to be implemented in hospitals because it has fairly minimal patient contact, works with staff, and has a benign appearance.

 

 

 

partnershiponai-sept2016

 

Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.

 

GOALS

Support Best Practices
To support research and recommend best practices in areas including ethics, fairness, and inclusivity; transparency and interoperability; privacy; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology.

Create an Open Platform for Discussion and Engagement
To provide a regular, structured platform for AI researchers and key stakeholders to communicate directly and openly with each other about relevant issues.

Advance Understanding
To advance public understanding and awareness of AI and its potential benefits and potential costs; to act as a trusted and expert point of contact as questions and concerns arise from the public and others in the area of AI; and to regularly update key constituents on the current state of AI progress.

 

 

 

IBM Watson’s latest gig: Improving cancer treatment with genomic sequencing — from techrepublic.com by Alison DeNisco
A new partnership between IBM Watson Health and Quest Diagnostics will combine Watson’s cognitive computing with genetic tumor sequencing for more precise, individualized cancer care.

 

 



Addendum on 11/1/16:



An open letter to Microsoft and Google’s Partnership on AI — from wired.com by Gerd Leonhard
In a world where machines may have an IQ of 50,000, what will happen to the values and ethics that underpin privacy and free will?

Excerpt:

Dear Francesca, Eric, Mustafa, Yann, Ralf, Demis and others at IBM, Microsoft, Google, Facebook and Amazon.

The Partnership on AI to benefit people and society is a welcome change from the usual celebration of disruption and magic technological progress. I hope it will also usher in a more holistic discussion about the global ethics of the digital age. Your announcement also coincides with the launch of my book Technology vs. Humanity which dramatises this very same question: How will technology stay beneficial to society?

This open letter is my modest contribution to the unfolding of this new partnership. Data is the new oil – which now makes your companies the most powerful entities on the globe, way beyond oil companies and banks. The rise of ‘AI everywhere’ is certain to only accelerate this trend. Yet unlike the giants of the fossil-fuel era, there is little oversight on what exactly you can and will do with this new data-oil, and what rules you’ll need to follow once you have built that AI-in-the-sky. There appears to be very little public stewardship, while accepting responsibility for the consequences of your inventions is rather slow in surfacing.

 

 

Some reflections/resources on today’s announcements from Apple

tv-app-apple-10-27-16

 

tv-app2-apple-10-27-16

From DSC:
How long before recommendation engines like this can be filtered/focused down to just display apps, channels, etc. that are educational and/or training related (i.e., a recommendation engine to suggest personalized/customized playlists for learning)?

That is, in the future, will we have personalized/customized playlists for learning on our Apple TVs — as well as on our mobile devices — with the assessment results of our taking the module(s) or course(s) being sent in to:

  • A credentials database on LinkedIn (via blockchain)
    and/or
  • A credentials database at the college(s) or university(ies) that we’re signed up with for lifelong learning (via blockchain)
    and/or
  • To update our cloud-based learning profiles — which can then feed a variety of HR-related systems used to find talent? (via blockchain)

Will participants in MOOCs, virtual K-12 schools, homeschoolers, and more take advantage of learning from home?

Will solid ROIs, from having thousands of participants each paying a smaller amount to take your course virtually, enable higher production values?

Will bots and/or human tutors be instantly accessible from our couches?

Will we be able to meet virtually via our TVs and share our computing devices?

 

bigscreen_rocket_league

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

 

 


Other items on today’s announcements:


 

 

macbookpro-10-27-16

 

 

All the big announcements from Apple’s Mac event — from amp.imore.com by Joseph Keller

  • MacBook Pro
  • Final Cut Pro X
  • Apple TV > new “TV” app
  • Touch Bar

 

Apple is finally unifying the TV streaming experience with new app — from techradar.com by Nick Pino

 

 

How to migrate your old Mac’s data to your new Mac — from amp.imore.com by Lory Gil

 

 

MacBook Pro FAQ: Everything you need to know about Apple’s new laptops — from amp.imore.com by Serenity Caldwell

 

 

Accessibility FAQ: Everything you need to know about Apple’s new accessibility portal — from imore.com by Daniel Bader

 

 

Apple’s New MacBook Pro Has a ‘Touch Bar’ on the Keyboard — from wired.com by Brian Barrett

 

 

Apple’s New TV App Won’t Have Netflix or Amazon Video — from wired.com by Brian Barrett

 

 

 

 

Apple 5th Gen TV To Come With Major Software Updates; Release Date Likely In 2017 — from mobilenapps.com

 

 

 

 

whydeeplearningchangingyourlife-sept2016

 

Why deep learning is suddenly changing your life — from fortune.com by Roger Parloff

Excerpt:

Most obviously, the speech-recognition functions on our smartphones work much better than they used to. When we use a voice command to call our spouses, we reach them now. We aren’t connected to Amtrak or an angry ex.

In fact, we are increasingly interacting with our computers by just talking to them, whether it’s Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, or the many voice-responsive features of Google. Chinese search giant Baidu says customers have tripled their use of its speech interfaces in the past 18 months.

Machine translation and other forms of language processing have also become far more convincing, with Google, Microsoft, Facebook, and Baidu unveiling new tricks every month. Google Translate now renders spoken sentences in one language into spoken sentences in another for 32 pairs of languages, while offering text translations for 103 tongues, including Cebuano, Igbo, and Zulu. Google’s Inbox app offers three ready-made replies for many incoming emails.

But what most people don’t realize is that all these breakthroughs are, in essence, the same breakthrough. They’ve all been made possible by a family of artificial intelligence (AI) techniques popularly known as deep learning, though most scientists still prefer to call them by their original academic designation: deep neural networks.

 

Even the Internet metaphor doesn’t do justice to what AI with deep learning will mean, in Ng’s view. “AI is the new electricity,” he says. “Just as 100 years ago electricity transformed industry after industry, AI will now do the same.”

 

 

ai-machinelearning-deeplearning-relationship-roger-fall2016

 

 

Graphically speaking:

 

ai-machinelearning-deeplearning-relationship-fall2016

 

 

 

“Our sales teams are using neural nets to recommend which prospects to contact next or what kinds of product offerings to recommend.”

 

 

One way to think of what deep learning does is as “A to B mappings,” says Baidu’s Ng. “You can input an audio clip and output the transcript. That’s speech recognition.” As long as you have data to train the software, the possibilities are endless, he maintains. “You can input email, and the output could be: Is this spam or not?” Input loan applications, he says, and the output might be the likelihood a customer will repay it. Input usage patterns on a fleet of cars, and the output could advise where to send a car next.
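Ng's "A to B mappings" framing can be made concrete with a toy supervised learner. The sketch below trains a minimal logistic-regression classifier, in plain Python, to map a bag-of-words input A (the words in an email) to an output B (spam or not). The emails, vocabulary, and model here are invented for illustration — the point is the general pattern of learning a mapping from labeled examples, not this particular classifier.

```python
import math

# Toy training data: (words in email, 1 = spam / 0 = not spam).
# These example emails are made up for illustration.
emails = [
    (["win", "free", "money", "now"], 1),
    (["free", "prize", "click", "now"], 1),
    (["meeting", "agenda", "attached"], 0),
    (["lunch", "tomorrow", "agenda"], 0),
    (["win", "free", "prize"], 1),
    (["project", "meeting", "notes", "attached"], 0),
]

vocab = sorted({w for words, _ in emails for w in words})

def featurize(words):
    """Bag-of-words vector: one count per vocabulary word (the input A)."""
    return [words.count(w) for w in vocab]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Learn the A-to-B mapping with plain gradient descent on log-loss.
weights = [0.0] * len(vocab)
bias = 0.0
lr = 0.5
for _ in range(200):
    for words, label in emails:
        x = featurize(words)
        pred = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
        err = pred - label  # gradient of log-loss w.r.t. the logit
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]
        bias -= lr * err

def is_spam(words):
    """The learned mapping: email words in (A), spam verdict out (B)."""
    x = featurize(words)
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias) > 0.5

print(is_spam(["free", "money", "now"]))          # True
print(is_spam(["meeting", "notes", "tomorrow"]))  # False
```

Swap the inputs and labels — audio clips and transcripts, loan applications and repayment outcomes — and, given enough data and a larger model, the same train-a-mapping recipe is what Ng is describing.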

 

 

 

 

Introducing the new Surface family — from microsoft.com
To do great things, you need powerful tools that deliver an ideal balance of craftsmanship, performance, and versatility. Every Surface device is engineered with these things in mind, and you at the center. And that’s how the Surface family does more. Just like you.

 

 

microsoftintrossurfacedesktop2-10-26-16

 

Also see the new Surface Dial:

 

mssurfacedial-10-26-16

 

 

 

Microsoft ‘Surface Studio’ and ‘Dial’ Up Close  — from blogs.barrons.com by Tiernan Ray

Excerpt (emphasis DSC):

Microsoft (MSFT) this morning held an event in downtown Manhattan that included both an update to Windows, the “Creators” edition, and also new versions of the company’s Surface tablet computer, including a revamp of the “Surface Book” laptop and tablet combo device, and a new desktop machine called the “Surface Studio” that has the thinnest display ever made, the company claims.

Perhaps the most intriguing thing at the event was something called “Dial,” a rotating puck device that can function like a wireless mouse with the Studio, but can also be placed right on top of the display itself to bring up a context-specific menu of functions, or to perform actions like cut and paste.

 

 

 

Microsoft announces its first desktop PC, the $3,000 Surface Studio — from businessinsider.com by Steve Kovach

Excerpt (emphasis DSC):

Microsoft on Wednesday announced its first desktop PC, the Surface Studio.

It’s an all-in-one computer, designed to compete with Apple’s iMac. The PC is geared toward professionals, and it has high-end specs designed for tasks like video or photo editing.

But the real surprises are the adjustable display and a new accessory called the Surface Dial. The display can lie nearly flat on the table, giving graphics artists the ability to draw and work. The Surface Dial can be placed on the screen to bring up color palettes and other options. The Surface Dial will work with other Surface products — the Surface Pro and Surface Book — but you won’t be able to use it on their screens.

 

 

 

Microsoft wants to bring machine learning into the mainstream — from networkworld.com by Steven Max Patterson
Microsoft released the beta of the Cognitive Toolkit with machine learning models, infrastructure and development tools, enabling customers to start building

Excerpt (emphasis DSC):

Microsoft just released the open-source licensed beta release of the Microsoft Cognitive Toolkit on Github. This announcement represents a shift in Microsoft’s customer focus from research to implementation. It is an update to the Computational Network Toolkit (CNTK). The toolkit is a supervised machine learning system in the same category of other open-source projects such as TensorFlow, Caffe and Torch.

Microsoft is one of the leading investors in and contributors to the open machine learning software and research community. A glance at the Neural Information Processing Systems (NIPS) conference reveals that there are just four major technology companies committed to moving the field of neural networks forward: Microsoft, Google, Facebook and IBM.

This announcement signals Microsoft’s interest in bringing machine learning into the mainstream. The open source license reveals Microsoft’s continued collaboration with the machine learning community.

 

 

 

Microsoft just democratized virtual reality with $299 headsets — from pcworld.com by Gordon Mah Ung

Excerpt:

VR just got a lot cheaper.

Microsoft on Wednesday morning said PC OEMs will soon be shipping VR headsets that enable virtual reality and mixed reality starting at $299.

Details of the hardware and how it works were sparse, but Microsoft said HP, Dell, Lenovo, Asus, and Acer will be shipping the headsets timed with its upcoming Windows 10 Creators Update, due in spring 2017.

Despite the relatively low price, the upcoming headsets may have a big advantage over HTC and Valve’s Vive and Facebook’s Oculus Rift: no need for separate calibration hardware to function. Both Vive and Oculus require multiple emitters on stands to be placed around a room for the positioning to function.

 

microsoft-299-vr-headsets-10-26-16

 

 

The 10 Coolest Features Coming to Windows 10 — from wired.com by Michael Calore

Excerpt:

Microsoft is gearing up for a Windows refresh. The Windows 10 Creators Update will arrive on all Windows 10 devices for free in the spring of 2017. Today, Microsoft showed off all the new features coming to the multi-mode OS. Here’s the best of what will be coming to your Windows PC or Surface device.

 

 

 

 

 

The Surface Studio Story: How Microsoft Reimagined The Desktop PC For Creativity — from fastcompany.com by Mark Sullivan
A 28-inch screen, a very special hinge, and a new type of input device add up to an experience conceived with artists and designers in mind.

 

 

 

IBM Watson Education and Pearson to drive cognitive learning experiences for college students — from prnewswire.com

Excerpt:

LAS VEGAS, Oct. 25, 2016 /PRNewswire/ — IBM (NYSE: IBM) and Pearson (FTSE: PSON), the world’s learning company, today announced a new global education alliance intended to make Watson’s cognitive capabilities available to millions of college students and professors.

Combining IBM’s cognitive capabilities with Pearson’s digital learning products will give students a more immersive learning experience with their college courses and an easy way to get help and insights when they need it, all through asking questions in natural language, just as they would with another student or professor. Importantly, it provides instructors with insights about how well students are learning, allowing them to better manage the entire course and flag students who need additional help.

For example, a student experiencing difficulty while studying for a biology course can query Watson, which is embedded in the Pearson courseware. Watson has already read the Pearson courseware content and is ready to spot patterns and generate insights.  Serving as a digital resource, Watson will assess the student’s responses to guide them with hints, feedback, explanations and help identify common misconceptions, working with the student at their pace to help them master the topic.

 

 

ibm-watson-2016

 

 

Udacity partners with IBM Watson to launch the AI Nanodegree — from venturebeat.com by Paul Sawers

Excerpt:

Online education platform Udacity has partnered with IBM Watson to launch a new artificial intelligence (AI) Nanodegree program.

Costing $1,600 for the full two-term, 26-week course, the AI Nanodegree covers a myriad of topics including logic and planning, probabilistic inference, game-playing / search, computer vision, cognitive systems, and natural language processing (NLP). It’s worth noting here that Udacity already offers an Intro to Artificial Intelligence (free) course and the Machine Learning Engineer Nanodegree, but with the A.I. Nanodegree program IBM Watson is seeking to help give developers a “foundational understanding of artificial intelligence,” while also helping graduates identify job opportunities in the space.

 

 

The Future Cognitive Workforce Part 1: Announcing the AI Nanodegree with Udacity — from ibm.com by Rob High

Excerpt:

As artificial intelligence (AI) begins to power more technology across industries, it’s been truly exciting to see what our community of developers can create with Watson. Developers are inspiring us to advance the technology that is transforming society, and they are the reason why such a wide variety of businesses are bringing cognitive solutions to market.

With AI becoming more ubiquitous in the technology we use every day, developers need to continue to sharpen their cognitive computing skills. They are seeking ways to gain a competitive edge in a workforce that increasingly needs professionals who understand how to build AI solutions.

It is for this reason that today at World of Watson in Las Vegas we announced with Udacity the introduction of a Nanodegree program that incorporates expertise from IBM Watson and covers the basics of artificial intelligence. The “AI Nanodegree” program will be helpful for those looking to establish a foundational understanding of artificial intelligence. IBM will also help aid graduates of this program with identifying job opportunities.

 

 

The Future Cognitive Workforce Part 2: Teaching the Next Generation of Builders — from ibm.com by Steve Abrams

Excerpt:

Announced today at World of Watson, and as Rob High outlined in the first post in this series, IBM has partnered with Udacity to develop a nanodegree in artificial intelligence. Rob discussed IBM’s commitment to empowering developers to learn more about cognitive computing and equipping them with the educational resources they need to build their careers in AI.

To continue on this commitment, I’m excited to announce another new program today geared at college students that we’ve launched with Kivuto Solutions, an academic software distributor. Via Kivuto’s popular digital resource management platform, students and academics around the world will now gain free access to the complete IBM Bluemix Portfolio — and specifically, Watson. This offers students and faculty at any accredited university – as well as community colleges and high schools with STEM programs – an easy way to tap into Watson services. Through this access, teachers will also gain a better means to create curriculum around subjects like AI.

 

 

 

IBM introduces new Watson solutions for professions — from finance.yahoo.com

Excerpt:

LAS VEGAS, Oct. 25, 2016 /PRNewswire/ — IBM (NYSE:IBM) today unveiled a series of new cognitive solutions intended for professionals in marketing, commerce, supply chain and human resources. With these new offerings, IBM is enabling organizations across all industries and of all sizes to integrate new cognitive capabilities into their businesses.

Watson solutions learn in an expert way, which is critical for professionals who want to uncover insights hidden in their massive amounts of data to understand, reason, and learn about their customers and important business processes. Helping professionals augment their existing knowledge and experience without needing to engage a data analyst empowers them to make more informed business decisions, spot opportunities, and take action with confidence.

“IBM is bringing Watson cognitive capabilities to millions of professionals around the world, putting a trusted advisor and personal analyst at their fingertips,” said Harriet Green, general manager Watson IoT, Cognitive Engagement & Education. “Similar to the value that Watson has brought to the world of healthcare, cognitive capabilities will be extended to professionals in new areas, helping them harness the value of the data being generated in their industries and use it in new ways.”

 

 

 

IBM says new Watson Data Platform will ‘bring machine learning to the masses’ — from techrepublic.com by Hope Reese
On Tuesday, IBM unveiled a cloud-based AI engine to help businesses harness machine learning. It aims to give everyone, from CEOs to developers, a simple platform to interpret and collaborate on data.

Excerpt:

“Insight is the new currency for success,” said Bob Picciano, senior vice president at IBM Analytics. “And Watson is the supercharger for the insight economy.”

Picciano, speaking at the World of Watson conference in Las Vegas on Tuesday, unveiled IBM’s Watson Data Platform, touted as the “world’s fastest data ingestion engine and machine learning as a service.”

The cloud-based Watson Data Platform will “illuminate dark data,” said Picciano, and will “change everything—absolutely everything—for everyone.”

 

 

 

See the #IBMWoW hashtag on Twitter for more news/announcements coming from IBM this week:

 

ibm-wow-hashtag-oct2016

 

 

 

 

Previous postings from earlier this month:

 

  • IBM launches industry first Cognitive-IoT ‘Collaboratory’ for clients and partners
    Excerpt:
    IBM have unveiled an €180 million investment in a new global headquarters to house its Watson Internet of Things business.  Located in Munich, the facility will promote new IoT capabilities around Blockchain and security as well as supporting the array of clients that are driving real outcomes by using Watson IoT technologies, drawing insights from billions of sensors embedded in machines, cars, drones, ball bearings, pieces of equipment and even hospitals. As part of a global investment designed to bring Watson cognitive computing to IoT, IBM has allocated more than $200 million USD to its global Watson IoT headquarters in Munich. The investment, one of the company’s largest ever in Europe, is in response to escalating demand from customers who are looking to transform their operations using a combination of IoT and Artificial Intelligence technologies. Currently IBM has 6,000 clients globally who are tapping Watson IoT solutions and services, up from 4,000 just 8 months ago.

 

 

cognitiveapproachhr-oct2016

 

 

 

 

 

These VR apps are designed to replace your office and daily commute — from uploadvr.com by David Matthews

Excerpt:

Eric Florenzano is a VR consultant and game designer who lives in the San Francisco Bay area. He is currently working on new game ideas with a small team spread out across the US.

So far, so normal, right? But what you don’t know is that Florenzano is one of a handful of advocates pioneering something they claim could transform work, end commuting, and even lead to a mass exodus from large cities: the virtual office.

“There’s no physical office [for us.] It’s all virtual. That’s the crazy thing,” explains Florenzano. Rather than meeting in person or arranging a conference call, his team jumps into Bigscreen, which allows users, who are represented by floating heads and controllers, to share their monitors in virtual rooms.

 

uploadvrimage-oct2016

 

Also see:

 

bigscreen_rocket_league

 

 

How to train thousands of surgeons at the same time in virtual reality — from singularity.com by Sveta McShane

Excerpt:

Recently, I wrote about how the future of surgery is going to be robotic, data-driven and artificially intelligent.

Although it’s approaching fast, that future is still in the works. In the meantime, there is a real need to train surgeons in a more scalable way, according to Dr. Shafi Ahmed, a surgeon at the Royal London and St. Bartholomew’s hospitals and cofounder of Medical Realities, a company developing a new virtual reality platform for surgical training.

In April of 2016, he live-streamed a cancer surgery in virtual reality. The procedure, a low-risk removal of a colon tumor in a man in his 70s, was filmed in 360 video and streamed live across the world. The high-def 4K camera captured the doctors’ every movement, and those watching could see everything that was happening in immersive detail.

 

 

Duke neurosurgeons test Hololens as an AR assist on tricky procedures — from techcrunch.com by Devin Coldewey

Excerpt:

“Since we can manipulate a hologram without actually touching anything, we have access to everything we need without breaking a sterile field. In the end, this is actually an improvement over the current OR system because the image is directly overlaid on the patient, without having to look to computer screens for aid,” said Cutler in a Duke news release.

 

 

OTOY Enables Groundbreaking VR Social Features — from uploadvr.com

Excerpt:

Oculus and OTOY may have achieved a breakthrough in social VR functionality.

VR headset owners should soon be able to share a variety of environments and Web-based content with one another in virtual reality. For example, friends can feel like they are together on the bridge of the Enterprise, and on the viewscreen of the ship they see a list of Star Trek episodes to watch with one another.

We have yet to test all of this functionality first-hand, but we’ve seen some of it live in the Gear VR — accessing, for example, a Star Trek environment inside OTOY’s ORBX Media Player app from within the Oculus Social Beta.

 

 

 

 

VR just got a lot more stylish with the Dlodlo V1 Glasses — from seriouswonder.com by B.J. Murphy

 

dlodlovr-glasses-oct2016

 

 

Microsoft CEO says mixed reality is the ‘ultimate computer’ — from engadget.com by Nicole Lee
The company’s goal is to “invent new computers and new computing.”

Excerpt:

“Whether it be HoloLens, mixed reality, or Surface, our goal is to invent new computers and new computing,” he added. This also includes investing in artificial intelligence, which is now its own group within the company.

Nadella admitted that for a long time, Microsoft was complacent. “Early success is probably the worst thing that can happen in life,” he said. But now, he wants Microsoft to be more of a “learn-it-all” culture rather than a “know-it-all” culture.

 

 

A Chinese Lens on Augmented, Virtual and Mixed Reality — from adage.com by David Berkowitz

Excerpt:

These networks keep growing. One of the hosts of the conference, ARinChina, brought me over along with a group of about a half-dozen Westerners. This media company connects a community of 60,000 developers, all of whom are invested in staying ahead of breakthrough technologies like virtual reality (VR), augmented reality (AR) and the hybrid known as mixed reality (MR). The AR track where I presented was hosted by RAVV, a new technology think tank that is pulling together subject matter experts across robotics, artificial intelligence, autonomous vehicles, VR and AR. RAVV is building an international ecosystem that includes its own approaches for startup incubation, knowledge sharing and other collaborative endeavors.

To get a sense of how global the emerging mixed reality field is, consider that, in February, China’s e-commerce giant Alibaba led the $800 million Series C round for Florida-based Magic Leap, an MR startup. As our daily reality becomes more virtual and augmented, it doesn’t matter where someone is on the map. This field is connecting far-flung practitioners, hinting at a time, soon, when AR, VR and MR will connect people in ways never before possible.

 

 


Addendum 10/25/16:

 

 

 

From DSC:
The other day I posted some ideas regarding how artificial intelligence, machine learning, and augmented reality are coming together to offer some wonderful new possibilities for learning (see: “From DSC: Amazing possibilities coming together w/ augmented reality used in conjunction w/ machine learning! For example, consider these ideas.”) Here is one of the graphics from that posting:

 

horticulturalapp-danielchristian

These affordances are just now starting to be uncovered as machines are increasingly able to ascertain patterns, things, objects…even people (which calls for a separate posting at some point).

But mainly, for today, I wanted to highlight an excellent comment/reply from Nikos Andriotis @ Talent LMS who gave me permission to highlight his solid reflections and ideas:

 

nikosandriotisidea-oct2016

https://www.talentlms.com/blog/author/nikos-andriotis

 

From DSC:
Excellent reflection/idea Nikos — that would represent some serious personalized, customized learning!

Nikos’ innovative reflections also made me think about his ideas in light of their interaction or impact with web-based learner profiles, credentialing, badging, and lifelong learning.  What’s especially noteworthy here is that the innovations (that impact learning) continue to occur mainly in the online and blended learning spaces.

How might the ramifications of these innovations impact institutions who are pretty much doing face-to-face only (in terms of their course delivery mechanisms and pedagogies)?

Given:

  • That Microsoft purchased LinkedIn and can amass a database of skills and open jobs (playing a cloud-based matchmaker)
  • Everyday microlearning is key to staying relevant (RSS feeds and tapping into “streams of content” are important here, and so is the use of Twitter)
  • 65% of today’s students will be doing jobs that don’t even exist yet (per Microsoft & The Future Laboratory in 2016)

 

futureproofyourself-msfuturelab-2016

  • The exponential pace of technological change
  • The increasing level of experimentation with blockchain (credentialing)
  • …and more

…what do the futures look like for those colleges and universities that operate only in the face-to-face space and who are not innovating enough?

 

 

 

Coppell ISD becomes first district to use IBM, Apple format — from bizjournals.com by Shawn Shinneman

Excerpt:

Teachers at Coppell Independent School District have become the first to use a new IBM and Apple technology platform built to aid personalized learning.

IBM Watson Element for Educators pairs IBM analytics and data tools such as cognitive computing with Apple design. It integrates student grades, interests, participation, and trends to help educators determine how a student learns best, the company says.

It also recommends learning content personalized to each student. The platform might suggest a reading assignment on astronomy for a young student who has shown an interest in space.

 

From DSC:
Technologies involved with systems like IBM’s Watson will likely bring some serious impact to the worlds of education and training & development. Such systems — and the affordances that they should be able to offer us — should not be underestimated.  The potential for powerful, customized, personalized learning could easily become a reality in K-20 as well as in the corporate training space. This is an area to keep an eye on for sure, especially with the growing influence of cognitive computing and artificial intelligence.

These kinds of technologies should prove helpful in suggesting modules and courses (i.e., digital learning playlists), but I think the more powerful systems will be able to drill down far more minutely than that. I think these types of systems will be able to assist with all kinds of math problems and equations, as well as analyze writing samples, correct language mispronunciations, and more (perhaps this is already here…apologies if so). In other words, the systems will “learn” where students tend to go wrong when solving a certain kind of equation…and then suggest steps to correct things when they spot a mistake (or provide hints at how to correct it).
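That “learn where students go wrong” idea can be sketched in a few lines. This is purely illustrative (the equation form, the single error rule, and the function name are all invented), but it shows how a system might match a wrong answer against a known mistake pattern and respond with a targeted hint:

```python
# Hypothetical sketch: check a student's answer to a*x + b = c against the
# correct answer and against the answer a common sign error would produce
# (adding b to both sides instead of subtracting it).

def check_answer(a, b, c, student_x):
    correct = (c - b) / a
    sign_error = (c + b) / a          # known mistake pattern
    if student_x == correct:
        return "Correct!"
    if student_x == sign_error:
        return f"Hint: did you add {b} instead of subtracting it?"
    return f"Not quite. Try subtracting {b} from both sides first."

print(check_answer(2, 3, 11, 4.0))   # 2x + 3 = 11, so x = 4
print(check_answer(2, 3, 11, 7.0))   # (11 + 3) / 2 = 7: the classic sign slip
```

A real system would mine these error patterns from the work of many students rather than hard-coding one rule.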

This road takes us down to places where we have:

  • Web-based learner profiles — including learner’s preferences, passions, interests, skills
  • Microlearning/badging/credentialing — likely using blockchain
  • Learning agents/bots to “contact” for assistance
  • Guidance for lifelong learning
  • More choice, more control

 

ibmwatson-oct2016

 

 

Also see:

  • First IBM Watson Education App for iPad Delivers Personalized Learning for K-12 Teachers and Students — from prnewswire.com
    Educators at Coppell Independent School District in Texas first to use new iPad app to tailor learning experiences to student’s interests and aptitudes
    Excerpts:
    With increasing demands on educators, teachers need tools that will enable them to better identify the individual needs of all students while designing learning experiences that engage and hold the students’ interest as they master the content. This is especially critical given that approximately one third of American students require remedial education when they enter college today, and current college attainment rates are not keeping pace with the country’s projected workforce needs. A view of academic and day-to-day updates in real time can help teachers provide personalized support when students need it.

    IBM Watson Element provides teachers with a holistic view of each student through a fun, easy-to-use and intuitive mobile experience that is a natural extension of their work. Teachers can get to know their students beyond their academic performance, including information about personal interests and important milestones students choose to share. For example, teachers can input notes when a student’s highly anticipated soccer match is scheduled, when another has just been named president of the school’s World Affairs club, and when another has recently excelled following a science project that sparked a renewed interest in chemistry. The unique “spotlight” feature in Watson Element provides advanced analytics that enables deeper levels of communication between teachers about their students’ accomplishments and progress. For example, if a student is excelling academically, teachers can spotlight that student, praising their accomplishments across the school district. Or, if a student received a top award in the district art show, a teacher can spotlight the student so their other teachers know about it.
 

Preparing for the future of Artificial Intelligence
Executive Office of the President
National Science & Technology Council
Committee on Technology
October 2016

preparingfor-futureai-usgov-oct2016

Excerpt:

As a contribution toward preparing the United States for a future in which AI plays a growing role, this report surveys the current state of AI, its existing and potential applications, and the questions that are raised for society and public policy by progress in AI. The report also makes recommendations for specific further actions by Federal agencies and other actors. A companion document lays out a strategic plan for Federally-funded research and development in AI. Additionally, in the coming months, the Administration will release a follow-on report exploring in greater depth the effect of AI-driven automation on jobs and the economy.

The report was developed by the NSTC’s Subcommittee on Machine Learning and Artificial Intelligence, which was chartered in May 2016 to foster interagency coordination, to provide technical and policy advice on topics related to AI, and to monitor the development of AI technologies across industry, the research community, and the Federal Government. The report was reviewed by the NSTC Committee on Technology, which concurred with its contents. The report follows a series of public-outreach activities spearheaded by the White House Office of Science and Technology Policy (OSTP) in 2016, which included five public workshops co-hosted with universities and other associations that are referenced in this report.

In the coming years, AI will continue to contribute to economic growth and will be a valuable tool for improving the world, as long as industry, civil society, and government work together to develop the positive aspects of the technology, manage its risks and challenges, and ensure that everyone has the opportunity to help in building an AI-enhanced society and to participate in its benefits.

 

 

 
 

From DSC:
Consider the affordances that we will soon be experiencing when we combine machine learning — whereby computers “learn” about a variety of things — with new forms of Human Computer Interaction (HCI), such as Augmented Reality (AR).

The educational benefits — as well as the business/profit-related benefits — will certainly be significant!

For example, let’s create a new mobile app called “Horticultural App (ML)” * — where ML stands for machine learning. This app would be made available on iOS and Android-based devices. (Though this is strictly hypothetical, I hope and pray that some entrepreneurial individuals and/or organizations out there will take this idea and run with it!)

 


Some use cases for such an app:


Students, environmentalists, and lifelong learners will be able to take some serious educationally-related nature walks once they launch the Horticultural App (ML) on their smartphones and tablets!

They simply hold up their device, and the app — in conjunction with the device’s camera — will essentially take a picture of whatever the student is focusing on. Via machine learning, the app will “recognize” the plant, tree, type of grass, flower, etc. — and will then present information about it.
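A minimal sketch of that flow, with a stub standing in for the real image-recognition model (every name, label, and data value here is hypothetical):

```python
# Hypothetical pipeline for the Horticultural App (ML): a classifier (assumed
# to be a trained image-recognition model) labels the camera frame, and the
# app looks up the species info to display as the AR overlay.

SPECIES_INFO = {  # tiny stand-in for a real botanical database
    "eastern_poison_ivy": "Toxicodendron radicans: avoid contact; leaves of three.",
    "sugar_maple": "Acer saccharum: deciduous hardwood, brilliant fall color.",
}

def identify_plant(image_bytes, classify_fn):
    """Run the (assumed) classifier, then build the overlay text."""
    label, confidence = classify_fn(image_bytes)
    info = SPECIES_INFO.get(label, "No information available for this species.")
    return {"label": label, "confidence": confidence, "overlay_text": info}

def stub_classifier(image_bytes):
    """Stand-in for the real machine-learning model."""
    return ("eastern_poison_ivy", 0.97)

result = identify_plant(b"<camera frame>", stub_classifier)
print(result["overlay_text"])
```

In a production app the classifier would run on-device or via a cloud vision API, and the lookup would hit a full species database rather than a dictionary.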

 

girl
Above image via shutterstock.com

 

horticulturalapp-danielchristian

 

In the production version of this app, a textual layer could overlay the actual image of the tree/plant/flower/grass/etc. in the background — and this is where augmented reality comes into play. Also, perhaps there would be a user-controlled opacity setting, allowing the learner to fade the information about the flower, tree, or plant in or out.

 

horticulturalapp2-danielchristian

 

Or let’s look at the potential uses of this type of app from some different angles.

Let’s say you live in Michigan and you want to be sure an area of the park that you are in doesn’t have any Eastern Poison Ivy in it — so you launch the app and review any suspicious looking plants. As it turns out, the app identifies some Eastern Poison Ivy for you (and it could do this regardless of the season, as the app would be able to ascertain the current date and the GPS coordinates of the person’s location, taking those criteria into account).
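As a sketch of how that date and GPS awareness might work, assuming invented range bounds and a toy season table (a real app would query a botanical range and phenology database):

```python
# Hypothetical season/location logic: the device's date and GPS fix narrow
# what the plant should look like right now, so the overlay can adapt.
from datetime import date

# Rough bounding box for Michigan (toy data, not survey-grade):
MICHIGAN_BOUNDS = (41.7, 48.3, -90.4, -82.1)   # lat min/max, lon min/max

def season_of(d):
    return {12: "winter", 1: "winter", 2: "winter",
            3: "spring", 4: "spring", 5: "spring",
            6: "summer", 7: "summer", 8: "summer"}.get(d.month, "fall")

def plausible_here(lat, lon, bounds=MICHIGAN_BOUNDS):
    lat_min, lat_max, lon_min, lon_max = bounds
    return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

# Poison ivy looks different each season, so the overlay text can change:
APPEARANCE = {"spring": "reddish young leaves", "summer": "glossy green leaves",
              "fall": "red/orange foliage", "winter": "bare hairy vines"}

d = date(2016, 10, 28)
if plausible_here(42.96, -85.67):               # roughly Grand Rapids, MI
    print(f"Eastern Poison Ivy: look for {APPEARANCE[season_of(d)]}")
```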

 

easternpoisonivy

 

 

Or consider another use of such an app:

  • A homeowner who wants to get rid of a certain kind of weed. The homeowner goes out into her yard and “scans” the weed, and up pop some products at the local Lowe’s or Home Depot that get rid of that kind of weed.
  • Assuming you allowed the app to do so, it could launch a relevant chatbot to answer any questions you might have about applying the weed-killing product.
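That follow-up chatbot could start as simply as keyword-matched answers; the questions, keywords, and replies below are all invented for illustration:

```python
# Hypothetical rule-based bot: after the app identifies the weed, it answers
# common questions about applying the suggested product. Keyword matching
# stands in for real natural language processing.

FAQ = {
    "how much": "Apply 1 oz per gallon of water (an invented example rate).",
    "when": "Apply on a calm, dry day when rain is not expected for 24 hours.",
    "safe": "Keep children and pets off the treated area until it is dry.",
}

def weed_bot(question):
    q = question.lower()
    for keyword, answer in FAQ.items():
        if keyword in q:
            return answer
    return "Sorry, I don't know that one; try rephrasing or check the product label."

print(weed_bot("When should I spray?"))
```

A production bot would swap the keyword table for an NLP intent classifier, but the dispatch shape stays the same.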

 

Or consider another use of such an app:

  • A homeowner has a diseased tree, and they want to know what to do about it. The machine learning portion of the app could identify what the disease was and bring up information on how to eradicate it.
  • Again, if permitted to do so, a relevant chatbot could be launched to address any questions that you might have about the available treatment options for that particular tree/disease.

 

Or consider other/similar apps along these lines:

  • Skin ML (for detecting any issues re: acne, skin cancers, etc.)
  • Minerals and Stones ML (for identifying which mineral or stone you’re looking at)
  • Fish ML
  • Etc.

fish-ml-gettyimages

Image from gettyimages.com

 

So there will be many new possibilities that will be coming soon to education, businesses, homeowners, and many others to be sure! The combination of machine learning with AR will open many new doors.

 


*  From Wikipedia:

Horticulture involves nine areas of study, which can be grouped into two broad sections: ornamentals and edibles:

  1. Arboriculture is the study of, and the selection, planting, care, and removal of, individual trees, shrubs, vines, and other perennial woody plants.
  2. Turf management includes all aspects of the production and maintenance of turf grass for sports, leisure use or amenity use.
  3. Floriculture includes the production and marketing of floral crops.
  4. Landscape horticulture includes the production, marketing and maintenance of landscape plants.
  5. Olericulture includes the production and marketing of vegetables.
  6. Pomology includes the production and marketing of pome fruits.
  7. Viticulture includes the production and marketing of grapes.
  8. Oenology includes all aspects of wine and winemaking.
  9. Postharvest physiology involves maintaining the quality of and preventing the spoilage of plants and animals.

 

 

 

 

accenture-futuregrowthaisept2016

accenture-futurechannelsgrowthaisept2016

 

Why Artificial Intelligence is the Future of Growth — from accenture.com

Excerpt:

Fuel For Growth
Compelling data reveal a discouraging truth about growth today. There has been a marked decline in the ability of traditional levers of production—capital investment and labor—to propel economic growth.

Yet, the numbers tell only part of the story. Artificial intelligence (AI) is a new factor of production and has the potential to introduce new sources of growth, changing how work is done and reinforcing the role of people to drive growth in business.

Accenture research on the impact of AI in 12 developed economies reveals that AI could double annual economic growth rates in 2035 by changing the nature of work and creating a new relationship between man and machine. The impact of AI technologies on business is projected to increase labor productivity by up to 40 percent and enable people to make more efficient use of their time.

 

 

Also see:

 

 

 

Amazon is winning the race to the future — from bizjournals.com

Excerpt:

This is the week when artificially intelligent assistants start getting serious.

On Tuesday, Google is expected to announce the final details for Home, its connected speaker with the new Google Assistant built inside.

But first Amazon, which surprised everyone last year by practically inventing the AI-in-a-can platform, will release a new version of the Echo Dot, a cheaper and smaller model of the full-sized Echo that promises to put the company’s Alexa assistant in every room in your house.

The Echo Dot has all the capabilities of the original Echo, but at a much cheaper price, and with a compact form factor that’s designed to be tucked away. Because of its size (it looks like a hockey puck from the future), its sound quality isn’t as good as the Echo, but it can hook up to an external speaker through a standard audio cable or Bluetooth.

 

amazon-newdot-oct2016

 

 

100 bot people to watch #BotWatch #1 — from chatbotsmagazine.com

Excerpt:

100 people to watch in the bot space, in no order.

I’ll publish a new list once a month. This one is #1 October 2016.

This is my personal top 100 for people to watch in the bot space.

 

 

Should We Give Chatbots Their Own Personalities? — from re-work.com by Sophie Curtis

Excerpt:

Today, we have machines that assemble cars, make candy bars, defuse bombs, and a myriad of other things. They can dispense our drinks, facilitate our bank deposits, and find the movies we want to watch with a touch of the screen.

Automation allows all kinds of amazing things, but it is all done with virtually no personality. Building a chatbot with the ability to be conversational with emotion is crucial to getting people to gain trust in the technology. And now there are plenty of tools and resources available to rapidly create and launch chatbots with the personality customers want and businesses need.

Jordi Torras is CEO and Founder of Inbenta, a company that specializes in NLP, semantic search and chatbots to improve customer experience. We spoke to him ahead of his presentation at the Virtual Assistant Summit in San Francisco, to learn about the recent explosion of chatbots and virtual assistants, and what we can expect to see in the future.

 

 

 

How I built and launched my first chatbot in hours — from chatbotsmagazine.com by Max Pelzner
From idea to MVB (Minimum Viable Bot), and launched in 24 hours!

 

 

 

Developing a Chatbot? Do Not Make These Mistakes! — from chatbotsmagazine.com by Hira Saeed

 

 

 

This is what an A.I.-powered future looks like — from venturebeat.com by Grayson Brulte

Excerpt:

Today, we are just beginning to scratch the surface of what is possible with artificial intelligence (A.I.) and how individuals will interact with its various forms. Every single aspect of our society — from cars to houses to products to services — will be reimagined and redesigned to incorporate A.I.

A child born in the year 2030 will not comprehend why his or her parents once had to manually turn on the lights in the living room. In the future, the smart home will seamlessly know the needs, wants, and habits of the individuals who live in the home prior to them taking an action.

Before we arrive at this future, it is helpful to take a step back and reimagine how we design cars, houses, products, and services. We are just beginning to see glimpses of this future with the Amazon Echo and Google Home smart voice assistants.

 

 

Artificial intelligence created to fold laundry for you — from geek.com by Matthew Humphries

Excerpt:

So, Seven Dreamers Laboratories, in collaboration with Panasonic and Daiwa House Industry, have created just such a machine. However, folding laundry correctly turns out to be quite a complicated task, and so an artificial intelligence was required to make it a reliable process.

Laundry folding is actually a five-stage process:

  • Grabbing
  • Spreading
  • Recognizing
  • Folding
  • Sorting/Storing

The grabbing and spreading seems pretty easy, but then the machine needs to understand what type of clothing it needs to fold. That recognizing stage requires both image recognition and AI. The image recognition classifies the type of clothing, then the AI figures out which processes to use in order to start folding.
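The recognize-then-fold flow described above can be sketched as a simple dispatch: the classifier's label selects which folding routine to run. This is a purely hypothetical stand-in (the labels, routines, and stub recognizer are invented), not the actual Seven Dreamers system:

```python
# Hypothetical five-stage laundry pipeline: grab, spread, recognize, fold,
# sort/store. The image-recognition stage is stubbed; its label picks the
# folding routine, which is the role the article describes for the AI.

FOLD_ROUTINES = {
    "shirt": ["lay flat", "fold sleeves in", "fold bottom up", "fold in half"],
    "towel": ["fold in half", "fold in half again", "fold in thirds"],
}

def recognize(image):
    """Stand-in for the image-recognition + AI stage."""
    return "shirt" if b"shirt" in image else "towel"

def fold_pipeline(image):
    stages = ["grab", "spread"]                # Grabbing, Spreading
    garment = recognize(image)                 # Recognizing
    stages += FOLD_ROUTINES[garment]           # Folding
    stages.append("sort and store")            # Sorting/Storing
    return garment, stages

garment, steps = fold_pipeline(b"shirt photo")
print(garment, steps)
```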

 

 

 

 

 

 

2 days of global chatbot experts at Talkabot in 12 minutes — from chatbotsmagazine.com by Alec Lazarescu

Excerpt:

During a delightful “cold spell” in Austin at the end of September, a few hundred chatbot enthusiasts joined together for the first talkabot.ai conference.

As a participant both writing about and building chatbots, I’m excited to share a mix of valuable actionable insights and strategic vision directions picked up from speakers and attendees as well as behind the scenes discussions with the organizers from Howdy.

In a very congenial and collaborative atmosphere, a number of valuable recurring themes stood out from a variety of expert speakers ranging from chatbot builders to tool makers to luminaries from adjacent industries.

 

 

 


Addendum:


 

alexaprize-2016

The Alexa Prize (emphasis DSC)

The way humans interact with machines is at an inflection point and conversational artificial intelligence (AI) is at the center of the transformation. Alexa, the voice service that powers Amazon Echo, enables customers to interact with the world around them in a more intuitive way using only their voice.

The Alexa Prize is an annual competition for university students dedicated to accelerating the field of conversational AI. The inaugural competition is focused on creating a socialbot, a new Alexa skill that converses coherently and engagingly with humans on popular topics and news events. Participating teams will advance several areas of conversational AI including knowledge acquisition, natural language understanding, natural language generation, context modeling, commonsense reasoning and dialog planning. Through the innovative work of students, Alexa customers will have novel, engaging conversations. And, the immediate feedback from Alexa customers will help students improve their algorithms much faster than previously possible.

Amazon will award the winning team $500,000. Additionally, a prize of $1 million will be awarded to the winning team’s university if their socialbot achieves the grand challenge of conversing coherently and engagingly with humans on popular topics for 20 minutes.

 

 

 

Google welcomes the future of mobile VR with its $79 Daydream View VR headset — from techcrunch.com by Lucas Matney

Excerpt:

Today at its October hardware/software/everything event, the company showed off its latest VR initiatives including a Daydream headset. The $79 Daydream View VR headset looks quite a bit different than other headsets on the market with its fabric exterior.

Clay Bavor, head of VR, said the design is meant to be more comfortable and friendly. It’s unclear whether the cloth aesthetic is a recommendation for the headset reference design as Xiaomi’s Daydream headset is similarly soft and decidedly design-centric.

The headset and the Google Daydream platform will launch in November.

 

 

 

 

Here’s the Google Pixel — from techcrunch.com by Brian Heater

Excerpt:

While the event is positioned as hardware first, this is Google we’re talking about here, and as such, the real focus is software. The company led the event with talk about its forthcoming Google Assistant AI, and as such, the Pixel will be the first handset to ship with the friendly voice helper. As the company puts it, “we’re building hardware with the Google Assistant at its core.”

 

 

 

 

 

 

 

Google Home will go on sale today for $129, shipping November 4 — from techcrunch.com by Frederic Lardinois

Excerpt:

Google Home, the company’s answer to Amazon’s Echo, made its official debut at the Google I/O developer conference earlier this year. Since then, we’ve heard very little about Google’s voice-activated personal assistant. Today, at Google’s annual hardware event, the company finally provided us with more details.

Google Home will cost $129 (with a free six-month trial of YouTube Red) and go on sale on Google’s online store today. It will ship on November 4.

Google’s Mario Queiroz today argued that our homes are different from other environments. So like the Echo, Google Home combines a wireless speaker with a set of microphones that listen for your voice commands. There is a mute button on the Home and four LEDs on top of the device so you know when it’s listening to you; otherwise, you won’t find any other physical buttons on it.

 

 

 

 

Google Working with Netflix, HBO & Hulu for Daydream Content — from vrfocus.com by Kevin Joyce
#madebygoogle reveals services ready and on the way to support Google Daydream

Excerpt:

Google’s #madebygoogle press conference today revealed some significant details about the company’s forthcoming plans for virtual reality (VR). Daydream is set to launch later this year, and along with the reveal of the first ‘Daydream Ready’ smartphone handset, Pixel, and Google’s own version of the head-mounted display (HMD), Daydream View, the company revealed some of the partners that will be bringing content to the device.

 

 

Google officially unveils $649 Pixel phone with unlimited storage; $129 Google Home — from cnbc.com by Anita Balakrishnan

 

 

 

Google Unveils ‘Home,’ Embraces Aggressive Shift To Hardware — from forbes.com by Matt Drange

Excerpt:

You can add to the seemingly never-ending list of things that Google is deeply involved in: hardware production.

On Tuesday, Google made clear that hardware is more than just a side business, aggressively expanding its offerings across a number of different categories. Headlined by the much-anticipated Google Home and a lineup of smartphones, dubbed Pixel, the announcements mark a major shift in Google’s approach to supplementing its massively profitable advertising sales business and extensive history in software development.

Aimed squarely at Amazon’s Echo, Home is powered by more than 70 billion facts collected by Google’s knowledge graph, the company says. By saying, “OK, Google” Home quickly pulls information from other websites, such as Wikipedia, and gives contextualized answers akin to searching Google manually and clicking on a couple links. Of course, Home is integrated with Google’s other devices, so adding items to your shopping list, for example, are easily pulled up via Pixel. Home can also be programmed to read back information in your calendar, traffic updates and the weather. “If the president can get a daily briefing, why shouldn’t you?” Google’s Rishi Chandra asked when he introduced Home on Tuesday.

 

 

 

 


A comment from DC:
More and more, people are speaking to a device and expect that device to do something for them. How much longer, especially with the advent of chatbots, before people expect this of learning-related applications?

Natural language processing, cognitive computing, and artificial intelligence continue their march forward.


 

Addendums:

 

trojanhorse4ai-googleoct2016

 

 

googleassistanteverywhere-oct2016

 

 

 

partnershiponai-sept2016

 

Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.

 

GOALS

Support Best Practices
To support research and recommend best practices in areas including ethics, fairness, and inclusivity; transparency and interoperability; privacy; collaboration between people and AI systems; and of the trustworthiness, reliability, and robustness of the technology.

Create an Open Platform for Discussion and Engagement
To provide a regular, structured platform for AI researchers and key stakeholders to communicate directly and openly with each other about relevant issues.

Advance Understanding
To advance public understanding and awareness of AI and its potential benefits and potential costs; to act as a trusted and expert point of contact as questions/concerns arise from the public and others in the area of AI; and to regularly update key constituents on the current state of AI progress.

 

 

 
© 2025 | Daniel Christian