Writing Resources from the “State of Writing” — from stateofwriting.com, with thanks to Mary Bennett, who describes the site as an “incredibly useful resource with tips and its own collection of writing and grammar guides.”

Some of the useful sections include resources regarding:

  • Dictionaries and Thesauruses
  • Style Guides
  • Grammar and Punctuation
  • Etymology
  • Quotations
  • Writing various genres
  • Foreign Languages

Microsoft’s meeting room of the future is wild — from theverge.com by Tom Warren
Transcription, translation, and identification

Excerpts:

Microsoft just demonstrated a meeting room of the future at the company’s Build developer conference.

It all starts with a 360-degree camera and microphone array that can detect anyone in a meeting room, greet them, and even transcribe exactly what they say in a meeting regardless of language.

Microsoft takes the meeting room scenario even further, though. The company is using its artificial intelligence tools to then act on what meeting participants say.

From DSC:
Whoa! Many things to think about here. Consider the possibilities for global/blended/online-based learning (including MOOCs) with technologies associated with translation, transcription, and identification.

An AI Bot for the Teacher — with thanks to Karthik Reddy for this resource

Artificial intelligence is the stuff of science fiction – if you are old enough, you will remember those Terminator movies a good few years ago, where mankind was systematically being wiped out by computers.

The truth is that AI, though not quite at Terminator level yet, is already a fact, and something most of us have encountered. If you have ever used the virtual assistant on your phone or the Ask Google feature, you have used AI.

Some companies are using it as part of their sales and marketing strategies. An interesting example is Lowe’s Home Improvement, which, instead of chatbots, deploys actual robots in its physical stores. These robots can help customers locate products they’re interested in, taking a lot of the guesswork out of the entire shopping experience.

Of course, there are a lot of different potential applications for AI that are very interesting. Imagine an AI teaching assistant, for example. It could help grade papers, fact-check, and assist with lesson planning, all to make our harassed teachers’ lives a little easier.

Chatbots could be programmed as tutors to help kids better understand core topics they are struggling with, ensuring that they don’t hold the rest of the class up. And for kids who have a real affinity for a subject, chatbots could help them learn more about what interests them.

It could also help enhance long-distance training. Imagine if your students could get instant answers to basic questions through a simple chatbot. Sure, if they were still not getting it, they would come through to you; a chatbot cannot replace a real, live teacher, after all. But it could save you a lot of time and frustration.
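Even a very simple chatbot illustrates the idea described above. The sketch below is a minimal rule-based tutor in Python; the FAQ entries and the escalation message are invented for illustration, and a real tutoring system would use far more robust language understanding than fuzzy string matching.

```python
import difflib

# A hand-written FAQ for one course topic; the content is entirely hypothetical.
FAQ = {
    "what is photosynthesis": "Photosynthesis is how plants convert light, water, and CO2 into sugar and oxygen.",
    "what is an atom": "An atom is the smallest unit of ordinary matter, made of protons, neutrons, and electrons.",
    "when is the assignment due": "Check the course page; the bot escalates schedule questions to the teacher.",
}

ESCALATION = "I'm not sure -- let me flag this question for your teacher."

def answer(question: str, cutoff: float = 0.6) -> str:
    """Return the closest FAQ answer, or escalate to the human teacher."""
    key = question.lower().strip("?! .")
    match = difflib.get_close_matches(key, FAQ.keys(), n=1, cutoff=cutoff)
    return FAQ[match[0]] if match else ESCALATION
```

Anything the bot cannot match with reasonable confidence gets handed back to the teacher, which is exactly the division of labor described above.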

Here, of course, we have only skimmed the surface of what artificial intelligence is capable of. Why not look through this infographic to see how different brands have been using this tech, and what possible applications of it we might expect?

 

Brands that use AI to enhance marketing (infographic) 2018
From 16best.net with thanks to Karthik Reddy for this resource

AWS unveils ‘Transcribe’ and ‘Translate’ machine learning services — from business-standard.com

Excerpts:

  • Amazon “Transcribe” provides grammatically correct transcriptions of audio files to allow audio data to be analyzed, indexed and searched.
  • Amazon “Translate” provides natural sounding language translation in both real-time and batch scenarios.
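For developers, both services are exposed through the AWS SDKs. The sketch below shows the rough shape of the calls using Python’s boto3; the job name and S3 URI are placeholders, and the helper function is our own convenience, not part of the SDK.

```python
def transcribe_job_params(job_name: str, media_uri: str, language_code: str = "en-US") -> dict:
    """Build the parameters for Amazon Transcribe's StartTranscriptionJob call.

    `media_uri` must point to an audio file in S3; the URI used below is a placeholder.
    """
    return {
        "TranscriptionJobName": job_name,
        "Media": {"MediaFileUri": media_uri},
        "MediaFormat": media_uri.rsplit(".", 1)[-1],  # e.g. "mp3" or "wav"
        "LanguageCode": language_code,
    }

# With AWS credentials configured, the actual calls would look like:
#   import boto3
#   boto3.client("transcribe").start_transcription_job(
#       **transcribe_job_params("lecture-01", "s3://my-bucket/lecture-01.mp3"))
#   boto3.client("translate").translate_text(
#       Text="Hello, class.", SourceLanguageCode="en", TargetLanguageCode="es")
```

Building the parameters separately makes them easy to inspect without an AWS account.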

Google’s ‘secret’ smart city on Toronto’s waterfront sparks row — from bbc.com by Robin Levinson-King BBC News, Toronto

Excerpt:

The project was commissioned by the publicly funded organisation Waterfront Toronto, which put out calls last spring for proposals to revitalise the 12-acre industrial neighbourhood of Quayside along Toronto’s waterfront.

Prime Minister Justin Trudeau flew down to announce the agreement with Sidewalk Labs, which is owned by Google’s parent company Alphabet, last October, and the project has received international attention for being one of the first smart-cities designed from the ground up.

But five months later, few people have actually seen the full agreement between Sidewalk and Waterfront Toronto.

As council’s representative on Waterfront Toronto’s board, Mr Minnan-Wong is the only elected official to actually see the legal agreement in full. Not even the mayor knows what the city has signed on for.

“We got very little notice. We were essentially told ‘here’s the agreement, the prime minister’s coming to make the announcement,'” he said.

“Very little time to read, very little time to absorb.”

Now, his hands are tied – he is legally not allowed to comment on the contents of the sealed deal, but he has been vocal about his belief it should be made public.

“Do I have concerns about the content of that agreement? Yes,” he said.

“What is it that is being hidden, why does it have to be secret?”

From DSC:
Google needs to be very careful here. Increasingly these days, our trust in Google (and other large tech companies) is at stake.

Addendum on 4/16/18 with thanks to Uros Kovacevic for this resource:
Human lives saved by robotic replacements — from injuryclaimcoach.com

Excerpt:

For academics and average workers alike, the prospect of automation provokes concern and controversy. As the American workplace continues to mechanize, some experts see harsh implications for employment, including the loss of 73 million jobs by 2030. Others maintain more optimism about the fate of the global economy, contending technological advances could grow worldwide GDP by more than $1.1 trillion in the next 10 to 15 years. Whatever we make of these predictions, there’s no question automation will shape the economic future of the nation – and the world.

But while these fiscal considerations are important, automation may positively affect an even more essential concern: human life. Every day, thousands of Americans risk injury or death simply by going to work in dangerous conditions. If robots replaced them, could hundreds of lives be saved in the years to come?

In this project, we studied how many fatal injuries could be averted if dangerous occupations were automated. To do so, we analyzed which fields are most deadly and the likelihood of their automation according to expert predictions. To see how automation could save Americans’ lives, keep reading.

Also related to this item is:
How AI is improving the landscape of work  — from forbes.com by Laurence Bradford

Excerpts:

There have been a lot of sci-fi stories written about artificial intelligence. But now that it’s actually becoming a reality, how is it really affecting the world? Let’s take a look at the current state of AI and some of the things it’s doing for modern society.

  • Creating New Technology Jobs
  • Using Machine Learning To Eliminate Busywork
  • Preventing Workplace Injuries With Automation
  • Reducing Human Error With Smart Algorithms

From DSC:
This is clearly a pro-AI piece. Not all uses of AI are beneficial, but this article mentions several use cases where AI can make positive contributions to society.

It’s About Augmented Intelligence, not Artificial Intelligence — from informationweek.com
The adoption of AI applications isn’t about replacing workers but helping workers do their jobs better.

 

From DSC:
This article is also a pro-AI piece. But again, not all uses of AI are beneficial. We need to be aware of — and involved in — what is happening with AI.

Investing in an Automated Future — from clomedia.com by Mariel Tishma
Employers recognize that technological advances like AI and automation will require employees with new skills. Why are so few investing in the necessary learning?

Experience Virtual Reality on the web with Chrome — from blog.google

Excerpt:

Virtual reality (VR) lets you tour the Turkish palace featured in “Die Another Day,” learn about life in a Syrian refugee camp firsthand, and walk through your dream home right from your living room. With the latest version of Chrome, we’re bringing VR to the web—making it as easy to step inside Air Force One as it is to access your favorite webpage.

For a fully immersive experience, use Chrome with your Daydream-ready phone and Daydream View—just browse to a VR experience you want to view, choose to enter VR, and put the phone in your Daydream View headset. If you don’t have a headset you can view VR content on any phone or desktop computer and interact using your finger or mouse.

You can already try out some great VR-enabled sites, with more coming soon. For example, explore the intersection of humans, nature and technology in the interactive documentary Bear 71. Questioning how we see the world through the lens of technology, this story blurs the lines between the wild world and the wired one.

Learn A New Language With Your Mobile Using MondlyAR — from vrfocus.com
Start learning a new language today on your Android device.

Excerpt:

MondlyAR features an avatar “teacher” who brings virtual objects – planets, animals, musical instruments and more – into the room as teaching tools, engages the user in conversations and gives instant feedback on pronunciation thanks to chatbot technology. By incorporating these lifelike elements in the lessons, students are more likely to understand, process, and retain what they are taught.

Users will have seven languages to choose from: American English, British English, French, Spanish, Italian, Portuguese, and German, with the studio expecting to offer no fewer than 30 languages in AR by the next update in August 2018.

Augmented Reality takes 3-D printing to next level — from rtoz.org

Excerpt:

Cornell researchers are taking 3-D printing and 3-D modeling to a new level by using augmented reality (AR) to allow designers to design in physical space while a robotic arm rapidly prints the work. To use the Robotic Modeling Assistant (RoMA), a designer wears an AR headset with hand controllers. As soon as a design feature is completed, the robotic arm prints the new feature.

The Legal Hazards of Virtual Reality and Augmented Reality Apps — by Tam Harbert
Liability and intellectual property issues are just two areas developers need to know about

Excerpt:

As virtual- and augmented-reality technologies mature, legal questions are emerging that could trip up VR and AR developers. One of the first lawyers to explore these questions is Robyn Chatwood, of the international law firm Dentons. “VR and AR are areas where the law is just not keeping up with [technology] developments,” she says. IEEE Spectrum contributing editor Tam Harbert talked with Chatwood about the legal challenges.

Why VR has a bright future in the elearning world — from elearninglearning.com by Origin Learning

Excerpt:

The Benefits of Using Virtual Reality in eLearning

  • It offers a visual approach – According to numerous studies, people retain what they have read better when they are able to see it or experience it somehow. VR in eLearning makes this possible and creates a completely new visual experience to improve learners’ retention capacity and their understanding of the material.
  • It lowers the risk factor – VR in eLearning can simulate dangerous and risky situations in an environment that is controllable, so that it removes the risk factor usually associated with such situations. This lets learners alleviate their fear of making a mistake.
  • It facilitates complex data – Like the visual approach, when learners can actually experience complex situations, they are more likely to handle them with ease. VR simplifies those situations by letting learners experience everything themselves, rather than just reading about it.
  • It offers remote access – VR in eLearning doesn’t require an actual classroom so that learning can be conducted remotely, which can help you save a lot of time and money that would normally have to be spent on planning a complete learning program.
  • It provides real-life scenarios – As mentioned, one of the greatest things about VR in the context of eLearning is that it allows learners to really immerse themselves in various virtual scenarios. For instance, if the learning program involves some real situation that a certain business has faced before, an employee will be able to handle such a situation more efficiently after experiencing it virtually.
  • It is fun and innovative – People love to try out new things. VR offers a completely innovative and interactive approach that makes learning entertaining rather than a dull, everyday process.

 

5 reasons to use augmented reality in education — from kitaboo.com

Excerpt:

[AR] is making it possible to add a layer of enhanced reality to a context-sensitive virtual world. This gives educators and trainers numerous possibilities to enhance the learning experience, making it lively, significant and circumstantial to the learner.

According to the investment bank Goldman Sachs, augmented reality “has the potential to become a standard tool in education and could revolutionize the way in which students are taught, for both the K-12 segment and higher education.” The company further projects that by 2025 there will be 15 million users of educational AR worldwide, representing a $700 million market.

Let’s have a look at 5 main reasons to use Augmented Reality in education.

The Difference Between Virtual Reality, Augmented Reality And Mixed Reality — from forbes.com

How to Set Up a VR Pilot — from campustechnology.com by Dian Schaffhauser
As Washington & Lee University has found, there is no best approach for introducing virtual reality into your classrooms — just stages of faculty commitment.

Excerpt:

The work at the IQ Center offers a model for how other institutions might want to approach their own VR experimentation. The secret to success, suggested IQ Center Coordinator David Pfaff, “is to not be afraid to develop your own stuff” — in other words, diving right in. But first, there’s dipping a toe.

The IQ Center is a collaborative workspace housed in the science building but providing services to “departments all over campus,” said Pfaff. The facilities include three labs: one loaded with high-performance workstations, another decked out for 3D visualization and a third packed with physical/mechanical equipment, including 3D printers, a laser cutter and a motion-capture system.

The Future of Language Learning: Augmented Reality vs Virtual Reality — from medium.com by Denis Hurley

Excerpts:

Here, I would like to stick to the challenges and opportunities presented by augmented reality and virtual reality for language learning.

While the challenge is a significant one, I am more optimistic than most that wearable AR will be available and popular soon. We don’t yet know how Snap Spectacles will evolve, and, of course, there’s always Apple.

I suspect we will see a flurry of new VR apps from language learning startups soon, especially from Duolingo and in combination with their AI chat bots. I am curious if users will quickly abandon the isolating experiences or become dedicated users.

Bose has a plan to make AR glasses — from cnet.com by David Carnoy
Best known for its speakers and headphones, the company has created a $50 million development fund to back a new AR platform that’s all about audio.

Excerpts:

“Unlike other augmented reality products and platforms, Bose AR doesn’t change what you see, but knows what you’re looking at — without an integrated lens or phone camera,” Bose said. “And rather than superimposing visual objects on the real world, Bose AR adds an audible layer of information and experiences, making every day better, easier, more meaningful, and more productive.”

The secret sauce seems to be the tiny, “wafer-thin” acoustics package developed for the platform. Bose said it represents the future of mobile micro-sound and features “jaw-dropping power and clarity.”

Bose adds that the technology can “be built into headphones, eyewear, helmets and more and it allows simple head gestures, voice, or a tap on the wearable to control content.”

 

Bose is making AR glasses focused on audio, not visuals

Here are some examples Bose gave for how it might be used:

  • For travel, Bose AR could simulate historic events at landmarks as you view them — “so voices and horses are heard charging in from your left, then passing right in front of you before riding off in the direction of their original route, fading as they go.” You could hear a statue make a famous speech when you approach it. Or get told which way to turn towards your departure gate while checking in at the airport.
  • Bose AR could translate a sign you’re reading. Or tell you the word or phrase for what you’re looking at in any language. Or explain the story behind the painting you’ve just approached.
  • With gesture controls, you could choose or change your music with simple head nods indicating yes, no, or next (Bragi headphones already do this).
  • Bose AR would add useful information based on where you look: the forecast when you look up, or information about restaurants on the street when you look down.

The 10 Best VR Apps for Classrooms Using Merge VR’s New Merge Cube — from edsurge.com

 

Google Lens arrives on iOS — from techcrunch.com by Sarah Perez

Excerpt:

On the heels of last week’s rollout on Android, Google’s  new AI-powered technology, Google Lens, is now arriving on iOS. The feature is available within the Google Photos iOS application, where it can do things like identify objects, buildings, and landmarks, and tell you more information about them, including helpful details like their phone number, address, or open hours. It can also identify things like books, paintings in museums, plants, and animals. In the case of some objects, it can also take actions.

For example, you can add an event to your calendar from a photo of a flyer or event billboard, or you can snap a photo of a business card to store the person’s phone number or address to your Contacts.

 

The eventual goal is to allow smartphone cameras to understand what it is they’re seeing across any type of photo, then helping you take action on that information, if need be – whether that’s calling a business, saving contact information, or just learning about the world on the other side of the camera.
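The business-card example above boils down to pulling structured fields out of text the camera has already recognized. A rough sketch of that last step, with a made-up card and simple regular expressions standing in for Lens’s far more capable extraction:

```python
import re

def extract_contact(ocr_text: str) -> dict:
    """Pull a phone number and email address out of OCR'd business-card text."""
    phone = re.search(r"\+?\d[\d\s().-]{7,}\d", ocr_text)
    email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", ocr_text)
    return {
        "phone": phone.group().strip() if phone else None,
        "email": email.group() if email else None,
    }

# A fictional business card, as the camera's OCR might return it.
card = """Jane Doe
Acme Learning Co.
+1 (555) 010-7788
jane.doe@example.com"""
```

The hard part in a real app is the OCR itself; once the text exists, turning it into a contact entry is largely pattern matching like this.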

15 Top Augmented Reality (AR) Apps Changing Education — from vudream.com by Steven Wesley

CNN VR App Brings News to Oculus Rift — from vrscout.com by Jonathan Nafarrete

DC: The next generation learning platform will likely offer virtual reality-enabled learning experiences such as this “flight simulator for teachers.”

Virtual reality simulates classroom environment for aspiring teachers — from phys.org by Charles Anzalone, University at Buffalo

Excerpt (emphasis DSC):

Two University at Buffalo education researchers have teamed up to create an interactive classroom environment in which state-of-the-art virtual reality simulates difficult student behavior, a training method its designers compare to a “flight simulator for teachers.”

The new program, already earning endorsements from teachers and administrators in an inner-city Buffalo school, ties into State University of New York Chancellor Nancy L. Zimpher’s call for innovative teaching experiences and “immersive” clinical experiences and teacher preparation.

The training simulator Lamb compared to a teacher flight simulator uses an emerging computer technology known as virtual reality. Becoming more popular and accessible commercially, virtual reality immerses the subject in what Lamb calls “three-dimensional environments in such a way where that environment is continuous around them.” An important characteristic of the best virtual reality environments is a convincing and powerful representation of the imaginary setting.

 

Also related/see:

 

  • TeachLive.org
    TLE TeachLivE™ is a mixed-reality classroom with simulated students that provides teachers the opportunity to develop their pedagogical practice in a safe environment that doesn’t place real students at risk.  This lab is currently the only one in the country using a mixed reality environment to prepare or retrain pre-service and in-service teachers. The use of TLE TeachLivE™ Lab has also been instrumental in developing transition skills for students with significant disabilities, providing immediate feedback through bug-in-ear technology to pre-service teachers, developing discrete trial skills in pre-service and in-service teachers, and preparing teachers in the use of STEM-related instructional strategies.

This start-up uses virtual reality to get your kids excited about learning chemistry — from cnbc.com by Lora Kolodny and Erin Black

  • MEL Science raised $2.2 million in venture funding to bring virtual reality chemistry lessons to schools in the U.S.
  • Eighty-two percent of science teachers surveyed in the U.S. believe virtual reality content can help their students master their subjects.

 

This start-up uses virtual reality to get your kids excited about learning chemistry from CNBC.


From DSC:
It will be interesting to see all the “places” we will be able to go and interact within — all from the comfort of our living rooms! Next generation simulators should be something else for teaching/learning & training-related purposes!!!

The next gen learning platform will likely offer such virtual reality-enabled learning experiences, along with voice recognition/translation services and a slew of other technologies — such as AI, blockchain*, chatbots, data mining/analytics, web-based learner profiles, an online-based marketplace supported by the work of learning-based free agents, and others — running in the background. All of these elements will work to offer us personalized, up-to-date learning experiences — helping each of us stay relevant in the marketplace as well as simply enabling us to enjoy learning about new things.

But the potentially disruptive piece of all of this is that this next generation learning platform could create an Amazon.com of what we now refer to as “higher education.”  It could just as easily serve as a platform for offering learning experiences for learners in K-12 as well as the corporate learning & development space.

 

I’m tracking these developments at:
http://danielschristian.com/thelivingclassroom/

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

*  Also see:


Blockchain, Bitcoin and the Tokenization of Learning — from edsurge.com by Sydney Johnson

Excerpt:

In 2014, Kings College in New York became the first university in the U.S. to accept Bitcoin for tuition payments, a move that seemed more of a PR stunt than the start of some new movement. Much has changed since then, including the value of Bitcoin itself, which skyrocketed to more than $19,000 earlier this month, catapulting cryptocurrencies into the mainstream.

A handful of other universities (and even preschools) now accept Bitcoin for tuition, but that’s hardly the extent of how blockchains and tokens are weaving their way into education: Educators and edtech entrepreneurs are now testing out everything from issuing degrees on the blockchain to paying people in cryptocurrency for their teaching.

The Beatriz Lab - A Journey through Alzheimer's Disease

This three-part lab can be experienced all at once or separately. At the beginning of each part, Beatriz’s brain acts as an omniscient narrator, helping learners understand how changes to the brain affect daily life and interactions.

Pre- and post-assessments, along with a facilitation guide, allow learners and instructors to see progression toward outcomes that are addressed through the story and content in the three parts, including:

1) increased knowledge of Alzheimer’s disease and the brain
2) enhanced confidence to care for people with Alzheimer’s disease
3) improvement in care practice

Why a lab about Alzheimer’s Disease?
The Beatriz Lab is very important to us at Embodied Labs. It is the experience that inspired the start of our company. We believe VR is more than a way to evoke feelings of empathy; rather, it is a powerful behavior change tool. By taking the perspective of Beatriz, healthcare professionals and trainees are empowered to better care for people with Alzheimer’s disease, leading to more effective care practices and better quality of life. Through embodying Beatriz, you will gain insight into life with Alzheimer’s and be able to better connect with and care for your loved ones, patients, clients, or others in your community who live with the disease every day. In our embodied VR experience, we hope to portray both the difficult and joyful moments — the disease surely is a mix of both.

Watch our new promo video to learn more!

As part of the experience, you will take a 360 degree trip into Beatriz’s brain,
and visit a neuron “forest” that is being affected by amyloid beta plaques and tau proteins.

 

From DSC:
I love the work that Carrie Shaw and @embodiedLabs are doing! Thanks Carrie & Company!

Top 7 Business Collaboration Conference Apps in Virtual Reality (VR) — from vudream.com by Ved Pitre

Excerpt (emphasis DSC):

As VR continues to grow and improve, the experiences will feel more real. But for now, here are the best business conference applications in virtual reality.

Final Cut Pro X Arrives With 360 VR Video Editing — from vrscout.com by Jonathan Nafarrete

Excerpt:

A sign of how Apple is supporting VR in parts of its ecosystem, Final Cut Pro X (along with Motion and Compressor), now has a complete toolset that lets you import, edit, and deliver 360° video in both monoscopic and stereoscopic formats.

Final Cut Pro X 10.4 comes with a handful of slick new features that we tested, such as advanced color grading and support for High Dynamic Range (HDR) workflows. All useful features for creators, not just VR editors, especially since Final Cut Pro is used so heavily in industries like video editing and production. But up until today, VR post-production options have been minimal, with no support from major VR headsets. We’ve had options with Adobe Premiere plus plugins, but not everyone wants to be pigeon-holed into a single software option. And Final Cut Pro X runs butter smooth on the new iMac, so there’s that.

Now with the ability to create immersive 360° films right in Final Cut Pro, an entirely new group of creators can dive into the world of 360 VR video. It’s simple and intuitive, something we expect from an Apple product. The 360 VR toolset just works.

See Original, Exclusive Star Wars Artwork in VR — from vrscout.com by Alice Bonasio

 

Excerpt:

HWAM’s first exhibition is a unique collection of Star Wars production pieces, including the very first drawings made for the film franchise and never-before-seen production art from the original trilogy by Lucasfilm alum Joe Johnston, Ralph McQuarrie, Phil Tippett, Drew Struzan, Colin Cantwell, and more.

Learning a language in VR is less embarrassing than IRL — from qz.com by Alice Bonasio

Excerpt:

Will virtual reality help you learn a language more quickly? Or will it simply replace your memory?

VR is the ultimate medium for delivering what is known as “experiential learning.” This education theory is based on the idea that we learn and remember things much better when doing something ourselves than by merely watching someone else do it or being told about it.

The immersive nature of VR means users remember content they interact with in virtual scenarios much more vividly than with any other medium. (According to experiments carried out by professor Ann Schlosser at the University of Washington, VR even has the capacity to prompt the development of false memories.)

Since immersion is a key factor in helping students not only learn much faster but also retain what they learn for longer, these powers can be harnessed in teaching and training—and there is also research that indicates that VR is an ideal tool for learning a language.

Addendum on 12/20/17:

Want to learn a new language? With this AR app, just point & tap — from fastcodesign.com by Mark Wilson
A new demo shows how augmented reality could redefine apps as we know them.

Excerpt:

There’s a new app gold rush. After Facebook and Apple both released augmented reality development kits in recent months, developers are demonstrating just what they can do with these new technologies. It’s a race to invent the future first.

To get a taste of how quickly and dramatically our smartphone apps are about to change, just take a look at this little demo by front-end engineer Frances Ng, featured on Prosthetic Knowledge. Just by aiming her iPhone at various objects and tapping, she can both identify items like lamps and laptops and translate their names into a number of different languages. Bye bye, multilingual dictionaries and Google Translate. Hello, “what the heck is the Korean word for that?”

Also see:

Apple ARKit & Machine Learning Come Together Making Smarter Augmented Reality — from next.reality.news by Jason Odom

Excerpt:

The world is a massive place, especially when you consider the field of view of your smartglasses or mobile device. To fulfill the potential promise of augmented reality, we must find a way to fill that view with useful and contextual information. Of course, the job of creating contextual, valuable information, to fill the massive space that is the planet earth, is a daunting task to take on. Machine learning seems to be one solution many are moving toward.

Tokyo-based web developer Frances Ng released a video on Twitter showing off her first experiments with Apple’s ARKit and CoreML, Apple’s machine learning framework. As you can see in the GIFs below, her mobile device recognizes a few objects around her room and then displays the names of the identified objects.

Why Natural Language Processing is the Future of Business Intelligence — from dzone.com by Gur Tirosh
Until now, we have been interacting with computers in a way that they understand, rather than us. We have learned their language. But now, they’re learning ours.

Excerpt:

Every time you ask Siri for directions, a complex chain of cutting-edge code is activated. It allows “her” to understand your question, find the information you’re looking for, and respond to you in a language that you understand. This has only become possible in the last few years. Until now, we have been interacting with computers in a way that they understand, rather than us. We have learned their language.

But now, they’re learning ours.

The technology underpinning this revolution in human-computer relations is Natural Language Processing (NLP). And it’s already transforming BI, in ways that go far beyond simply making the interface easier. Before long, business-transforming, life-changing information will be discovered merely by talking with a chatbot.

This future is not far away. In some ways, it’s already here.

What Is Natural Language Processing?
NLP, otherwise known as computational linguistics, is the combination of Machine Learning, AI, and linguistics that allows us to talk to machines as if they were human.

But NLP aims to eventually render GUIs — even UIs — obsolete, so that interacting with a machine is as easy as talking to a human.
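In the simplest case, answering a plain-language BI question means mapping keywords in the question onto a structured query. The toy sketch below, with an invented sales table, shows the idea; production NLP systems use full language models rather than keyword matching.

```python
# Toy "sales" table; in a real BI tool this would live in a database.
SALES = [
    {"region": "north", "quarter": "q1", "revenue": 120},
    {"region": "north", "quarter": "q2", "revenue": 140},
    {"region": "south", "quarter": "q1", "revenue": 90},
    {"region": "south", "quarter": "q2", "revenue": 160},
]

def ask(question: str) -> int:
    """Answer questions like 'What was revenue in the north in q2?'."""
    words = question.lower().replace("?", "").split()
    rows = SALES
    # Filter by any region or quarter mentioned in the question.
    for field in ("region", "quarter"):
        wanted = [w for w in words if w in {r[field] for r in SALES}]
        if wanted:
            rows = [r for r in rows if r[field] == wanted[0]]
    return sum(r["revenue"] for r in rows)
```

For example, `ask("Total revenue for the south?")` sums both southern quarters, while naming a quarter narrows the answer to a single cell of the table.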

© 2018 | Daniel Christian