DeepMind, Vodafone, Google & Facebook – Deep Learning & AI Highlights — from re-work.co by Nikita Johnson

Excerpt:

This week, the Deep Learning Summit and AI Assistant Summit saw over 450 DL and AI experts and enthusiasts come together to learn from each other and explore the most recent research and progress in the space. Over the past two days we’ve heard from the likes of Amazon, Facebook, Google, Vodafone, as well as universities such as Cambridge, Warwick, UCL, Imperial, and exciting new startups like Jukedeck and Echobox. Topics have been incredibly diverse, covering NLP, space exploration, ML for music composition, and many more.

We’ve collected some of our favourite takeaways from both tracks over the last two days, as well as what our attendees thought.

What did we hear at the AI Assistant Summit?
I’m driving in France and Google Translate is automatically translating all the French road signs for me and directing me to my location, telling me my time of arrival – this is the future. There is no interface; there is no screen.
Adi Chhabra, Evolution of AI & Machine Learning in Customer Experience – Beyond Interfaces, Vodafone

We are at the beginning of the era of assistance. In the future, every employee will have an assistant to help them with decision making.
Christophe Bourguignat, Deep Learning for Conversational Intelligence on Analytics Data, Zelros

Six reasons why disruption is coming to learning departments — from feathercap.net, with thanks to Mr. Tim Seager for this resource

Excerpts:

  1. Training materials and interactions will not just be pre-built courses but any structured or unstructured content available to the organization.
  2. Curation of all learning, employee or any useful organizational content will become a whole lot easier.
  3. The learning department won’t have to build it all themselves.
  4. Learning bots and voice-enabled learning.
  5. Current workplace learning systems and LMSs will go through a big transition or they will lose relevancy.
  6. Learning departments will go beyond onboarding, compliance training and leadership training and move to training everyone in the company on all job skills.

 

A successful example of this is Amazon.com. As shoppers on its site, we have access to millions of book and product SKUs. Amazon uses a combination of all three techniques to position the right book or product based on our behavior and peer experiences, as well as a semantic understanding of the product page we’re viewing. There’s no reason we can’t have the same experience on workplace learning systems, where all viable learning content/company content could be organized and disseminated to each learner at the right time and circumstance.

From DSC:
Several items in Feathercap’s solid posting remind me of a vision of a next generation learning platform:

  • Contributing to — and tapping into — streams of content
  • Lifelong learning and reinventing oneself
  • Artificial intelligence, including Natural Language Processing (NLP) and the use of voice to drive systems/functionality
  • Learning agents/bots
  • 24×7 access
  • Structured and unstructured learning
  • Socially-based means of learning
  • …and more

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

The End of Typing: The Next Billion Mobile Users Will Rely on Video and Voice — from wsj.com by Eric Bellman
Tech companies are rethinking products for the developing world, creating new winners and losers

Excerpt:

The internet’s global expansion is entering a new phase, and it looks decidedly unlike the last one.

Instead of typing searches and emails, a wave of newcomers—“the next billion,” the tech industry calls them—is avoiding text, using voice activation and communicating with images.

From DSC:
The above article reminds me that our future learning platforms will be largely driven by our voices. That’s why I put it into my vision of a next generation learning platform.

Why Natural Language Processing is the Future of Business Intelligence — from dzone.com by Gur Tirosh
Until now, we have been interacting with computers in a way that they understand, rather than us. We have learned their language. But now, they’re learning ours.

Excerpt:

Every time you ask Siri for directions, a complex chain of cutting-edge code is activated. It allows “her” to understand your question, find the information you’re looking for, and respond to you in a language that you understand. This has only become possible in the last few years. Until now, we have been interacting with computers in a way that they understand, rather than us. We have learned their language.

But now, they’re learning ours.

The technology underpinning this revolution in human-computer relations is Natural Language Processing (NLP). And it’s already transforming BI, in ways that go far beyond simply making the interface easier. Before long, business transforming, life changing information will be discovered merely by talking with a chatbot.

This future is not far away. In some ways, it’s already here.

What Is Natural Language Processing?
NLP, otherwise known as computational linguistics, is the combination of Machine Learning, AI, and linguistics that allows us to talk to machines as if they were human.

But NLP aims to eventually render GUIs — even UIs — obsolete, so that interacting with a machine is as easy as talking to a human.
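
To make the idea concrete, here is a toy sketch of the first step such an interface performs: mapping a free-form question to a known intent. Real NLP stacks use trained statistical models rather than keyword matching; the intent names and keyword sets below are invented purely for illustration.

```python
# A deliberately simplified sketch of the first step an NLP-driven
# interface performs: mapping a free-form utterance to a known intent.
# Real systems use trained statistical models; this keyword approach
# only makes the pipeline concrete. The intents are illustrative.

INTENTS = {
    "get_directions": {"directions", "route", "navigate"},
    "weather_report": {"weather", "forecast", "temperature"},
    "sales_query":    {"sales", "revenue", "orders"},
}

def tokenize(utterance: str) -> set:
    """Lowercase the utterance and split it into a set of word tokens."""
    return set(utterance.lower().replace("?", "").replace(",", "").split())

def detect_intent(utterance: str) -> str:
    """Return the intent whose keyword set overlaps the utterance most."""
    tokens = tokenize(utterance)
    best, best_overlap = "unknown", 0
    for intent, keywords in INTENTS.items():
        overlap = len(tokens & keywords)
        if overlap > best_overlap:
            best, best_overlap = intent, overlap
    return best

print(detect_intent("What were our sales last quarter?"))  # sales_query
```

A production system would swap `detect_intent` for a trained classifier, but the contract (utterance in, intent out) stays the same.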

Voice technology may be poised for a breakthrough with Chinese consumers — from jwtintelligence.com by Shepherd Laughlin

Excerpt:

Worldwide, more consumers are interacting with technology using their voices. As the Innovation Group London and Mindshare Futures found in our Speak Easy report, consumers who use voice technology think that it frees them from having to look at screens, helps them organize their lives, and is less mentally draining than traditional touch or typing devices.

Among the many markets where voice technology is catching on, China faces unique challenges and opportunities. The complex Chinese writing system means that current methods of selecting characters using keyboards can be slow and laborious, which suggests that fully functional voice technology would find an instant market. Spoken Chinese, however, has proven difficult for computers to decipher.

But voice technology is moving ahead anyway. 2015 saw the release of the LingLong DingDong, a product created through a partnership between iFlytek and JD.com, which has become known as China’s answer to the Amazon Echo. The device can understand both Mandarin and Cantonese. It plays music, gives directions, answers questions about the weather and the news, and more. The Tmall Genie, a similar product, functions using Alibaba’s voice assistant, AliGenie.

Amazon’s Alexa passes 15,000 skills, up from 10,000 in February — from techcrunch.com by Sarah Perez

Excerpt:

Amazon’s Alexa voice platform has now passed 15,000 skills — the voice-powered apps that run on devices like the Echo speaker, Echo Dot, newer Echo Show and others. The figure is up from the 10,000 skills Amazon officially announced back in February, which had then represented a 3x increase from September.

The new 15,000 figure was first reported via third-party analysis from Voicebot, and Amazon has now confirmed to TechCrunch that the number is accurate.

According to Voicebot, which only analyzed skills in the U.S., the milestone was reached for the first time on June 30, 2017. During the month of June, new skill introductions increased by 23 percent, up from the less than 10 percent growth that was seen in each of the prior three months.

The milestone also represents a more than doubling of the number of skills that were available at the beginning of the year, when Voicebot reported there were then 7,000 skills. That number was officially confirmed by Amazon at CES.

From DSC:
Again, I wonder…what are the implications for learning from this new, developing platform?

Video: 4 FAQs about Watson as tutor — from er.educause.edu by Satya Nitta

Excerpt:

How is IBM using Watson’s intelligent tutoring system? So we are attempting to mimic the best practices of human tutoring. The gold standard will always remain one on one human to human tutoring. The whole idea here is an intelligent tutoring system as a computing system that works autonomously with learners, so there is no human intervention. It’s basically pretending to be the teacher itself and it’s working with the learner. What we’re attempting to do is we’re attempting to basically put conversational systems, systems that understand human conversation and dialogue, and we’re trying to build a system that, in a very natural way, interacts with people through conversation. The system basically has the ability to ask questions, to answer questions, to know who you are and where you are in your learning journey, what you’re struggling with, what you’re strong on and it will personalize its pedagogy to you.

There’s a natural language understanding system and a machine learning system that’s trying to figure out where you are in your learning journey and what the appropriate intervention is for you. The natural language system enables this interaction that’s very rich and conversation-based, where you can basically have a human-like conversation with it and, to a large extent, it will try to understand and to retrieve the right things for you. Again the most important thing is that we will set the expectations appropriately and we have appropriate exit criteria for when the system doesn’t actually understand what you’re trying to do.

The case for a next generation learning platform — from campustechnology.com by Mary Grush & Daniel Christian

Excerpt (emphasis DSC):

Grush: Then what are some of the implications you could draw from metrics like that one?

Christian: As we consider all the investment in those emerging technologies, the question many are beginning to ask is, “How will these technologies impact jobs and the makeup of our workforce in the future?”

While there are many thoughts and questions regarding the cumulative impact these technologies will have on our future workforce (e.g., “How many jobs will be displaced?”), the consensus seems to be that there will be massive change.

Whether our jobs are completely displaced or if we will be working alongside robots, chatbots, workbots, or some other forms of AI-backed personal assistants, all of us will need to become lifelong learners — to be constantly reinventing ourselves. This assertion is also made in the aforementioned study from McKinsey: “AI promises benefits, but also poses urgent challenges that cut across firms, developers, government, and workers. The workforce needs to be re-skilled to exploit AI rather than compete with it…”

A side note from DSC:
I began working on this vision prior to 2010…but I didn’t officially document it until 2012.

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

Learning from the Living [Class] Room:

A global, powerful, next generation learning platform

 

What does the vision entail?

  • A new, global, collaborative learning platform that offers more choice, more control to learners of all ages – 24×7 – and could become the organization that futurist Thomas Frey discusses here with Business Insider:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.

  • A learner-centered platform that is enabled by – and reliant upon – human beings but is backed up by a powerful suite of technologies that work together in order to help people reinvent themselves quickly, conveniently, and extremely cost-effectively
  • A customizable learning environment that will offer up-to-date streams of regularly curated content (i.e., microlearning) as well as engaging learning experiences
  • Along these lines, a lifelong learner can opt to receive an RSS feed on a particular topic until they master that concept; periodic quizzes (i.e., spaced repetition) determine that mastery. Once mastered, the system will ask the learner whether they still want to receive that particular stream of content.
  • A Netflix-like interface to peruse and select plugins to extend the functionality of the core product
  • An AI-backed system of analyzing employment trends and opportunities will highlight those courses and streams of content that will help someone obtain the most in-demand skills
  • A system that tracks learning and, via Blockchain-based technologies, feeds all completed learning modules/courses into learners’ web-based learner profiles
  • A learning platform that provides customized, personalized recommendation lists – based upon the learner’s goals
  • A platform that delivers customized, personalized learning within a self-directed course (meant for those content creators who want to deliver more sophisticated courses/modules while moving people through the relevant Zones of Proximal Development)
  • Notifications and/or inspirational quotes will be available upon request to help provide motivation, encouragement, and accountability – helping learners establish habits of continual, lifelong-based learning
  • (Potentially) An online-based marketplace, matching learners with teachers, professors, and other such Subject Matter Experts (SMEs)
  • (Potentially) Direct access to popular job search sites
  • (Potentially) Direct access to resources that describe what other companies do/provide and descriptions of any particular company’s culture (as described by current and former employees and freelancers)
  • (Potentially) Integration with one-on-one tutoring services

Further details here >>
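
As a side note on the spaced-repetition bullet above, the mastery check could be sketched as a simple Leitner-style scheduler. This is only an illustrative sketch under assumed parameters: the box intervals and mastery threshold below are my assumptions, not part of the vision itself.

```python
from datetime import date, timedelta

# A minimal Leitner-style scheduler illustrating the spaced-repetition
# idea in the vision above. The box intervals (1, 3, 7, 21 days) and
# mastery threshold are illustrative assumptions.

INTERVALS = [1, 3, 7, 21]       # days until the next quiz, per box
MASTERY_BOX = len(INTERVALS)    # reaching this box counts as mastery

class ContentStream:
    def __init__(self, topic: str):
        self.topic = topic
        self.box = 0
        self.next_quiz = date.today() + timedelta(days=INTERVALS[0])

    def record_quiz(self, correct: bool) -> None:
        """Promote on a correct answer; demote back to box 0 otherwise."""
        self.box = min(self.box + 1, MASTERY_BOX) if correct else 0
        if not self.mastered():
            self.next_quiz = date.today() + timedelta(days=INTERVALS[self.box])

    def mastered(self) -> bool:
        """Once mastered, the platform asks whether to keep the stream."""
        return self.box >= MASTERY_BOX

stream = ContentStream("machine learning basics")
for _ in range(4):              # four correct quizzes in a row
    stream.record_quiz(correct=True)
print(stream.mastered())        # True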

Addendum from DSC (regarding the resource mentioned below):
Note the voice recognition/control mechanisms on Westinghouse’s new product — also note the integration of Amazon’s Alexa into a “TV.”

Westinghouse’s Alexa-equipped Fire TV Edition smart TVs are now available — from theverge.com by Chaim Gartenberg

 

The key selling point, of course, is the built-in Amazon Fire TV, which is controlled with the bundled Voice Remote and features Amazon’s Alexa assistant.

Finally…also see:

  • NASA unveils a skill for Amazon’s Alexa that lets you ask questions about Mars — from geekwire.com by Kevin Lisota
  • Holographic storytelling — from jwtintelligence.com
    The stories of Holocaust survivors are brought to life with the help of interactive 3D technologies.
    New Dimensions in Testimony is a new way of preserving history for future generations. The project brings to life the stories of Holocaust survivors with 3D video, revealing raw first-hand accounts that are more interactive than learning through a history book.  Holocaust survivor Pinchas Gutter, the first subject of the project, was filmed answering over 1000 questions, generating approximately 25 hours of footage. By incorporating natural language processing from the USC Institute for Creative Technologies (ICT), people are able to ask Gutter’s projected image questions that trigger relevant responses.

Winner takes all — by Michael Moe, Luben Pampoulov, Li Jiang, Nick Franco, & Suzee Han

 

We did a lot of things that seemed crazy at the time. Many of those crazy things now have over a billion users, like Google Maps, YouTube, Chrome, and Android.

— Larry Page, CEO, Alphabet

Excerpt:

An alphabet is a collection of letters that represent language. Alphabet, accordingly, is a collection of companies that represent the many bets Larry Page is making to ensure his platform is built to not only survive, but to thrive in a future defined by accelerating digital disruption. It’s an “Alpha” bet on a diversified platform of assets.

If you look closely, the world’s top technology companies are making similar bets.

Technology in general, and the Internet in particular, is all about disproportionate gains to the leader in a category. Accordingly, as technology leaders like Facebook, Alphabet, and Amazon survey the competitive landscape, they have increasingly aimed to develop and acquire emerging technology capabilities across a broad range of complementary categories.

Introducing Deep Learning and Neural Networks — Deep Learning for Rookies — from medium.com by Nahua Kang

Excerpts:

Here’s a short list of general tasks that deep learning can perform in real situations:

  1. Identify faces (or more generally image categorization)
  2. Read handwritten digits and texts
  3. Recognize speech (no more transcribing interviews yourself)
  4. Translate languages
  5. Play computer games
  6. Control self-driving cars (and other types of robots)

And there’s more. Just pause for a second and imagine all the things that deep learning could achieve. It’s amazing and perhaps a bit scary!
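
For readers wondering what the “learning” behind these tasks looks like at its smallest scale, here is a single artificial neuron trained with the classic perceptron rule on the logical AND function. It is a standard textbook sketch, far simpler than the deep networks the article introduces, but the weight-update idea is where it all starts.

```python
# The smallest possible instance of "learning": one artificial neuron
# trained with the classic perceptron rule on the logical AND function.
# Deep networks stack many such units with richer update rules.

def step(x: float) -> int:
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs: int = 25, lr: float = 0.1):
    """Learn weights and a bias so the neuron reproduces the samples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            prediction = step(w[0] * x1 + w[1] * x2 + b)
            error = target - prediction      # -1, 0, or +1
            w[0] += lr * error * x1          # nudge weights toward the target
            w[1] += lr * error * x2
            b += lr * error
    return w, b

AND_GATE = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND_GATE)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in AND_GATE])  # [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron rule is guaranteed to converge on it; the resources listed below cover what happens when that assumption breaks.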

Currently there are already many great courses, tutorials, and books on the internet covering this topic, such as (not exhaustive or in specific order):

  1. Michael Nielsen’s Neural Networks and Deep Learning
  2. Geoffrey Hinton’s Neural Networks for Machine Learning
  3. Goodfellow, Bengio, & Courville’s Deep Learning
  4. Andrew Trask’s Grokking Deep Learning
  5. Francois Chollet’s Deep Learning with Python
  6. Udacity’s Deep Learning Nanodegree (not free but high quality)
  7. Udemy’s Deep Learning A-Z ($10–$15)
  8. Stanford’s CS231n and CS224n
  9. Siraj Raval’s YouTube channel

The list goes on and on. David Venturi has a post for freeCodeCamp that lists many more resources. Check it out here.

When AI can transcribe everything — from theatlantic.com by Greg Noone
Tech companies are rapidly developing tools to save people from the drudgery of typing out conversations—and the impact could be profound.

Excerpt:

Despite the recent emergence of browser-based transcription aids, transcription’s an area of drudgery in the modern Western economy where machines can’t quite squeeze human beings out of the equation. That is, until last year, when Microsoft built one that could.

Automatic speech recognition, or ASR, is an area that has gripped the firm’s chief speech scientist, Xuedong Huang, since he entered a doctoral program at Scotland’s Edinburgh University. “I’d just left China,” he says, remembering the difficulty he had in using his undergraduate knowledge of American English to parse the Scottish brogue of his lecturers. “I wished every lecturer and every professor, when they talked in the classroom, could have subtitles.”

“That’s the thing with transcription technology in general,” says Prenger. “Once the accuracy gets above a certain bar, everyone will probably start doing their transcriptions that way, at least for the first several rounds.” He predicts that, ultimately, automated transcription tools will increase both the supply of and the demand for transcripts. “There could be a virtuous circle where more people expect more of their audio that they produce to be transcribed, because it’s now cheaper and easier to get things transcribed quickly. And so, it becomes the standard to transcribe everything.”

What a future, powerful, global learning platform will look & act like [Christian]


Learning from the Living [Class] Room:
A vision for a global, powerful, next generation learning platform

By Daniel Christian

NOTE: Having recently lost my Senior Instructional Designer position due to a staff reduction program, I am looking to help build such a platform as this. So if you are working on such a platform or know of someone who is, please let me know: danielchristian55@gmail.com.

I want to help people reinvent themselves quickly, efficiently, and cost-effectively — while providing more choice, more control to lifelong learners. This will become critically important as artificial intelligence, robotics, algorithms, and automation continue to impact the workplace.


 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

Learning from the Living [Class] Room:
A global, powerful, next generation learning platform

 

What does the vision entail?

  • A new, global, collaborative learning platform that offers more choice, more control to learners of all ages – 24×7 – and could become the organization that futurist Thomas Frey discusses here with Business Insider:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.

  • A learner-centered platform that is enabled by – and reliant upon – human beings but is backed up by a powerful suite of technologies that work together in order to help people reinvent themselves quickly, conveniently, and extremely cost-effectively
  • An AI-backed system of analyzing employment trends and opportunities will highlight those courses and “streams of content” that will help someone obtain the most in-demand skills
  • A system that tracks learning and, via Blockchain-based technologies, feeds all completed learning modules/courses into learners’ web-based learner profiles
  • A learning platform that provides customized, personalized recommendation lists – based upon the learner’s goals
  • A platform that delivers customized, personalized learning within a self-directed course (meant for those content creators who want to deliver more sophisticated courses/modules while moving people through the relevant Zones of Proximal Development)
  • Notifications and/or inspirational quotes will be available upon request to help provide motivation, encouragement, and accountability – helping learners establish habits of continual, lifelong-based learning
  • (Potentially) An online-based marketplace, matching learners with teachers, professors, and other such Subject Matter Experts (SMEs)
  • (Potentially) Direct access to popular job search sites
  • (Potentially) Direct access to resources that describe what other companies do/provide and descriptions of any particular company’s culture (as described by current and former employees and freelancers)

Further details:
While basic courses will be accessible via mobile devices, the optimal learning experience will leverage two or more displays/devices. So while smaller smartphones, laptops, and/or desktop workstations will be used to communicate synchronously or asynchronously with other learners, the larger displays will deliver an excellent learning environment for times when there is:

  • A Subject Matter Expert (SME) giving a talk or making a presentation on any given topic
  • A need to display multiple things going on at once, such as:
    • The SME(s)
    • An application or multiple applications that the SME(s) are using
    • Content/resources that learners are submitting in real-time (think Bluescape, T1V, Prysm, other)
    • The ability to annotate on top of the application(s) and point to things w/in the app(s)
    • Media being used to support the presentation such as pictures, graphics, graphs, videos, simulations, animations, audio, links to other resources, GPS coordinates for an app such as Google Earth, other
    • Other attendees (think Google Hangouts, Skype, Polycom, or other videoconferencing tools)
    • An (optional) representation of the Personal Assistant (such as today’s Alexa, Siri, M, Google Assistant, etc.) that’s being employed via the use of Artificial Intelligence (AI)

This new learning platform will also feature:

  • Voice-based commands to drive the system (via Natural Language Processing (NLP))
  • Language translation (using techs similar to what’s being used in Translate One2One, an earpiece powered by IBM Watson)
  • Speech-to-text capabilities for use w/ chatbots, messaging, inserting discussion board postings
  • Text-to-speech capabilities as an assistive technology and also for everyone to be able to be mobile while listening to what’s been typed
  • Chatbots
    • For learning how to use the system
    • For asking questions of – and addressing any issues with – the organization owning the system (credentials, payments, obtaining technical support, etc.)
    • For asking questions within a course
  • As many profiles as needed per household
  • (Optional) Machine-to-machine-based communications to automatically launch the correct profile when the system is initiated (from one’s smartphone, laptop, workstation, and/or tablet to a receiver for the system)
  • (Optional) Voice recognition to efficiently launch the desired profile
  • (Optional) Facial recognition to efficiently launch the desired profile
  • (Optional) Upon system launch, to immediately return to where the learner previously left off
  • The capability of the webcam to recognize objects and bring up relevant resources for that object
  • A built in RSS feed aggregator – or a similar technology – to enable learners to tap into the relevant “streams of content” that are constantly flowing by them
  • Social media dashboards/portals – providing quick access to multiple sources of content and whereby learners can contribute their own “streams of content”
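
As an aside, the built-in RSS aggregator mentioned in the list above can be sketched in a few lines with Python’s standard library. The sample feed below stands in for a real feed fetched over HTTP (e.g., with urllib.request), and its topic names are invented for illustration.

```python
import xml.etree.ElementTree as ET

# A minimal sketch of the built-in RSS aggregator described above:
# parse a feed and surface its items as a "stream of content."
# The sample XML stands in for a real feed fetched over HTTP;
# the topic names and URLs are illustrative.

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Machine Learning Stream</title>
    <item><title>Intro to NLP</title><link>https://example.com/nlp</link></item>
    <item><title>Spaced Repetition 101</title><link>https://example.com/sr</link></item>
  </channel>
</rss>"""

def stream_items(feed_xml: str):
    """Return (title, link) pairs for every item in an RSS 2.0 feed."""
    channel = ET.fromstring(feed_xml).find("channel")
    return [(item.findtext("title"), item.findtext("link"))
            for item in channel.findall("item")]

for title, link in stream_items(SAMPLE_FEED):
    print(f"{title} -> {link}")
```

A fuller aggregator would poll many feeds on a schedule and merge the results into the learner’s dashboard, but the parse-and-surface core looks like this.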

In the future, new forms of Human Computer Interaction (HCI) such as Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) will be integrated into this new learning environment – providing entirely new means of collaborating with one another.

Likely players:

  • Amazon – personal assistance via Alexa
  • Apple – personal assistance via Siri
  • Google – personal assistance via Google Assistant; language translation
  • Facebook — personal assistance via M
  • Microsoft – personal assistance via Cortana; language translation
  • IBM Watson – cognitive computing; language translation
  • Polycom – videoconferencing
  • Blackboard – videoconferencing, application sharing, chat, interactive whiteboard
  • T1V, Prysm, and/or Bluescape – submitting content to a digital canvas/workspace
  • Samsung, Sharp, LG, and others – for large displays with integrated microphones, speakers, webcams, etc.
  • Feedly – RSS aggregator
  • _________ – for providing backchannels
  • _________ – for tools to create videocasts and interactive videos
  • _________ – for blogs, wikis, podcasts, journals
  • _________ – for quizzes/assessments
  • _________ – for discussion boards/forums
  • _________ – for creating AR, MR, and/or VR-based content

Australian start-up taps IBM Watson to launch language translation earpiece — from prnewswire.com
World’s first available independent translation earpiece, powered by AI to be in the hands of consumers by July

Excerpts:

SYDNEY, June 12, 2017 /PRNewswire/ — Lingmo International, an Australian technology start-up, has today launched Translate One2One, an earpiece powered by IBM Watson that can efficiently translate spoken conversations within seconds, being the first of its kind to hit global markets next month.

Unveiled at last week’s United Nations Artificial Intelligence (AI) for Good Summit in Geneva, Switzerland, the Translate One2One earpiece supports translations across English, Japanese, French, Italian, Spanish, Brazilian Portuguese, German and Chinese. Available to purchase today for delivery in July, the earpiece carries a price tag of $179 USD, and is the first independent translation device that doesn’t rely on Bluetooth or Wi-Fi connectivity.

 

Lingmo International, an Australian technology start-up, has today launched Translate One2One, an earpiece powered by IBM Watson that can efficiently translate spoken conversations within seconds.

From DSC:
How much longer before this sort of technology gets integrated into videoconferencing and transcription tools that are used in online-based courses — enabling global learning at a scale never seen before? (Or perhaps NLP-based tools are already being integrated into global MOOCs and the like…not sure.) It would surely allow us to learn from each other across a variety of societies throughout the globe.

From DSC:
In reviewing the item below, I wondered:

How should students — as well as Career Services Groups/Departments within institutions of higher education — respond to the growing use of artificial intelligence (AI) in peoples’ job searches?

My take on it? Each student needs to have a solid online-based footprint — such as offering one’s own streams of content via a WordPress-based blog, one’s Twitter account, and one’s LinkedIn account. That is, each student has to be out there digitally, not just physically. (Though I suspect having face-to-face conversations and interactions will always be an incredibly powerful means of obtaining jobs as well. But if this trend picks up steam, one’s online-based footprint becomes all the more important to finding work.)

How AI is changing your job hunt — by Jennifer Alsever

Excerpt (emphasis DSC):

The solution appeared in the form of artificial intelligence software from a young company called Interviewed. It speeds the vetting process by providing online simulations of what applicants might do on their first day as an employee. The software does much more than grade multiple-choice questions. It can capture not only so-called book knowledge but also more intangible human qualities. It uses natural-language processing and machine learning to construct a psychological profile that predicts whether a person will fit a company’s culture. That includes assessing which words he or she favors—a penchant for using “please” and “thank you,” for example, shows empathy and a possible disposition for working with customers—and measuring how well the applicant can juggle conversations and still pay attention to detail. “We can look at 4,000 candidates and within a few days whittle it down to the top 2% to 3%,” claims Freedman, whose company now employs 45 people. “Forty-eight hours later, we’ve hired someone.” It’s not perfect, he says, but it’s faster and better than the human way.

It isn’t just startups using such software; corporate behemoths are implementing it too. Artificial intelligence has come to hiring.

Predictive algorithms and machine learning are fast emerging as tools to identify the best candidates.

Addendum on 6/7/17:

Addendum on 6/15/17:

  • Want a job? It may be time to have a chat with a bot — from sfchronicle.com by Nicholas Cheng
    Excerpt:
    “The future is AI-based recruitment,” Mya CEO Eyal Grayevsky said. Candidates who were being interviewed through a chat couldn’t tell that they were talking to a bot, he added — even though the company isn’t trying to pass its bot off as human.

    A 2015 study by the National Bureau of Economic Research surveyed 300,000 people and found that those who were hired by a machine, using algorithms to match them to a job, stayed in their jobs 15 percent longer than those who were hired by human recruiters.

    A report by the McKinsey Global Institute estimates that more than half of human resources jobs may be lost to automation, though it did not give a time period for that shift.

    “Recruiting jobs will definitely go away,” said John Sullivan, who teaches management at San Francisco State University.

2017 Internet Trends Report — from kpcb.com by Mary Meeker

Mary Meeker’s 2017 internet trends report: All the slides, plus analysis — from recode.net by Rani Molla
The most anticipated slide deck of the year is here.

Excerpt:

Here are some of our takeaways:

  • Global smartphone growth is slowing: Smartphone shipments grew 3 percent year over year last year, versus 10 percent the year before. This is in addition to continued slowing internet growth, which Meeker discussed last year.
  • Voice is beginning to replace typing in online queries. Twenty percent of mobile queries were made via voice in 2016, while accuracy is now about 95 percent.
  • In 10 years, Netflix went from 0 to more than 30 percent of home entertainment revenue in the U.S. This is happening while TV viewership continues to decline.
  • China remains a fascinating market, with huge growth in mobile services and payments and services like on-demand bike sharing. (More here: The highlights of Meeker’s China slides.)

Read Mary Meeker’s essential 2017 Internet Trends report — from techcrunch.com by Josh Constine

Excerpt:

This is the best way to get up to speed on everything going on in tech. Kleiner Perkins venture partner Mary Meeker’s annual Internet Trends report is essentially the state of the union for the technology industry. The widely anticipated slide deck compiles the most informative research on what’s getting funded, how Internet adoption is progressing, which interfaces are resonating, and what will be big next.

You can check out the 2017 report embedded below, and here’s last year’s report for reference.

© 2024 | Daniel Christian