From DSC: At the Next Generation Learning Spaces Conference, held recently in San Diego, CA, I moderated a panel discussion re: AR, VR, and MR. I started off our panel discussion with some introductory ideas and remarks — meant to make sure that numerous ideas were on the radars at attendees’ organizations. Then Vinay and Carrie did a super job of addressing several topics and questions (Mary was unable to make it that day, as she got stuck in the UK due to transportation-related issues).
That said, I didn’t get a chance to finish the second part of the presentation, which I’ve listed below in both 4:3 and 16:9 formats. So I made a recording of these ideas, and I’m relaying it to you in the hopes that it can help you and your organization.
From DSC: Note this new type of Human Computer Interaction (HCI). I think that we’ll likely be seeing much more of this sort of thing.
Excerpt (emphasis DSC):
How is Hayo different?
AR that connects the magical and the functional:
Unlike most AR integrations, Hayo removes the screens from smarthome use and transforms the objects and spaces around you into a set of virtual remote controls. Hayo empowers you to create experiences that have previously been limited by the technology, but now are only limited by your imagination.
The best interface is no interface at all. Aside from the one-time setup Hayo does not use any screens. Your real-life surfaces become the interface and you, the user, become the controls. Virtual remote controls can be placed wherever you want for whatever you need by simply using your Hayo device to take a 3D scan of your space.
Smarter AR experience:
Hayo anticipates your unique context, passive motion and gestures to create useful and more unique controls for the connected home. The Hayo system learns your behaviors and uses its AI to help meet your needs.
Some conference participants were concerned that this beleaguered region might grow. In fact, one attendee — an old friend who strategizes about technology for a big New York bank — commented that perhaps Wall Street would become “the new Rust Belt.” His concern was that automation of the finance industry would hollow out jobs in that field in the same way that robotics and other technologies have reduced manufacturing employment.
This is a sobering prospect, but there is plenty of evidence that it’s a real possibility. Key aspects of the finance industry have already been automated to a substantial degree. Jobs in the New York finance field have been declining for several years. According to data from research firm Coalition Ltd., more than 10,000 “front-office producer” jobs have been lost within the top 10 banks since 2011. Coalition also suggests that global fixed-income headcount has fallen 31% since 2011.
According to a new report, organizations are moving away from hierarchies, focusing on improving the employee experience, redesigning training, and reinventing the role of HR.
Business and HR leaders should rethink almost all of their management and HR practices as the proliferation of digital technologies transforms the way organizations work, according to predictions for 2017 from Bersin by Deloitte, Deloitte Consulting LLP.
This year’s report includes 11 predictions about rapid technological, structural, and cultural changes that will reshape the world of work, including management, HR, and the markets for HR and workplace technology.
Get ready for AI to show up where you’d least expect it.
In 2016, tech companies like Google, Facebook, Apple and Microsoft launched dozens of products and services powered by artificial intelligence. Next year will be all about the rest of the business world embracing AI.
Artificial intelligence is a 60-year-old term, and its promise has long seemed like it was forever over the horizon. But new hardware, software, services and expertise means it’s finally real — even though companies will still need plenty of human brain power to get it working.
AI was one of the hottest trends in tech this year, and it’s only poised to get bigger. You’ve already brushed up against AI: It screens out spam, organizes your digital photos and transcribes your spoken text messages. In 2017, it will spread beyond digital doodads to mainstream businesses.
The design world has seen its own changes and updates as well. And as we know, change is the only constant. We’ve asked some of the top creatives to share what 2017 design trends they think will be headed our way.
SAN JOSE, Calif.–(BUSINESS WIRE)–The market has evolved from technologists looking to learn and understand new big data technologies to customers who want to learn about new projects, new companies and most importantly, how organizations are actually benefitting from the technology. According to John Schroeder, executive chairman and founder of MapR Technologies, Inc., the acceleration in big data deployments has shifted the focus to the value of the data. John has crystallized his view of market trends into these six major predictions for 2017…
2016 was a rich year for medical technology. Virtual Reality. Augmented Reality. Smart algorithms analysing wearable data. Amazing technologies arrived in our lives and on the market almost every day. And it will not stop in the coming year. The role of a futurist is certainly not making bold predictions about the future. No such big bet has taken humanity forward. Instead, our job is constantly analysing the trends shaping the future and trying to build bridges between them and what we have today. Still, people expect me to come up with predictions about medical technologies every year, and thus here they are.
Artificial intelligence (and machine/deep learning) is the hottest trend, eclipsing, but building on, the accumulated hype for the previous “new big thing,” big data. The new catalyst for the data explosion is the Internet of Things, bringing with it new cybersecurity vulnerabilities. The rapid fluctuations in the relative temperature of these trends also create new dislocations and opportunities in the tech job market.
The hottest segment of the hottest trend—artificial intelligence—is the market for chatbots. “The movement towards conversational interfaces will accelerate,” says Stuart Frankel, CEO, Narrative Science. “The recent, combined efforts of a number of innovative tech giants point to a coming year when interacting with technology through conversation becomes the norm. Are conversational interfaces really a big deal? They’re game-changing. Since the advent of computers, we have been forced to speak the language of computers in order to communicate with them and now we’re teaching them to communicate in our language.”
Google changed the world with its PageRank algorithm, creating a new kind of internet search engine that could instantly sift through the world’s online information and, in many cases, show us just what we wanted to see. But that was a long time ago. As the volume of online documents continues to increase, we need still newer ways of finding what we want.
That’s why Google is now running its search engine with help from machine learning, augmenting its predetermined search rules with deep neural networks that can learn to identify the best search results by analyzing vast amounts of existing search data. And it’s not just Google. Microsoft is pushing its Bing search engine in the same direction, and so are others beyond the biggest names in tech.
3 Forces Shaping Ed Tech in 2017— from campustechnology.com by Dian Schaffhauser
Ovum’s latest report examines the key trends that are expected to impact higher education in the new year.
Institutions Will Support the Use of More Innovative Tech in Teaching and Learning
Schools Will Leverage Technology for Improving the Student Experience
The Next-Generation IT Strategy Will Focus More on IT Agility
We’ve seen a lot of exciting new innovations take place over the course of 2016. This year has introduced interesting new uses for virtual reality—like using VR to help burn victims in hospitals mentally escape from the pain during procedures—and even saw the world’s first revolutionary augmented reality game in the form of Pokémon Go. The iPhone 7 was also introduced, leaving millions of people uncertain of their feelings regarding Apple, while Samsung loyalists just prayed that their smartphones would stay in one piece.
Undoubtedly, there have been quite a few ups and downs in technology over the past year. With any luck, 2017 will provide us with even more new innovations and advancements in tech. But what exactly do we have to look forward to? TMC recently caught up with Jordan Edelson, CEO of Appetizer Mobile, to discuss his thoughts on 2016 and his predictions for what’s to come in the future. You can find the entire exchange below.
2016 is fast drawing to a close. And while many will be glad to see the back of it, for those of us who work and play with Virtual Reality, it has been a most exciting year. By the time the bells ring out signalling the start of a new year, the total number of VR users will exceed 43 million. This is a market on the move, projected to be worth $30bn by 2020. If it’s to meet that valuation, then we believe 2017 will be an incredibly important year in the lifecycle of VR hardware and software development. VR will be enjoyed by an increasingly mainstream audience very soon, and here we take a quick look at some of the trends we expect to develop over the next 12 months for that to happen.
Every December, we take a look back at big ideas from the past twelve months that promise to gain momentum in the new year. With more than eleven thousand projects launched between our Design and Tech categories in 2016, we have a nice sample to draw from. More importantly, we have a community of forward-thinking backers who help creators figure out which versions of the future to pursue. Here are some of the emerging trends we expect to see more of in 2017.
Everyday artificial intelligence
Whether chatting with a device as if it’s a virtual assistant strikes you as a sci-fi dream come true or a dystopian nightmare, we’re going to see an increasing number of products that use voice-controlled artificial intelligence interfaces to fit into users’ lives more seamlessly. Among the projects leading the way in this arena are Vi, wireless earphones that double as a personal trainer; Bonjour, an alarm clock that wakes you up with a personalized daily briefing; and Dashbot, a talking car accessory that recalls KITT, David Hasselhoff’s buddy from Knight Rider. One of the factors driving this talking AI boom is the emergence of platforms like Microsoft’s Cognitive Services, Amazon’s Alexa, and Google’s Speech API, which allow product developers to focus on user experience rather than low-level speech processing. For the DIY set, Seeed’s ReSpeaker offers a turnkey devkit for working with these services, and we’ll surely see more tools for integrating AI voice interfaces into all manner of products.
During Microsoft’s Build Conference earlier this year, CEO Satya Nadella delivered the three-hour keynote address, in which he highlighted his belief that the future of technology lies in human language. In this new wave of technology, conversation is the new interface, and “bots are the new apps.” While not as flashy as virtual reality nor as immediately practical as 3D printing, chatbots are nevertheless gaining major traction this year, with support coming from across the entire tech industry. The big tech enterprises are all entering the chatbot space, and many startups are too.
Out with the apps, in with the chatbots. The reason for the attention is simple: the power of the natural language processor, software that processes and parses human language, creating a simple and universal means of interacting with technology.
Developments in computing are driving the transformation of entire systems of production, management, and governance. In this interview Justine Cassell, Associate Dean, Technology, Strategy and Impact, at the School of Computer Science, Carnegie Mellon University, and co-chair of the Global Future Council on Computing, says we must ensure that these developments benefit all society, not just the wealthy or those participating in the “new economy”.
Artificial Intelligence (AI) is an important development and consumers globally will see it playing a much more prominent role — both in society and at work — next year, a new report said on Tuesday. Ericsson ConsumerLab, in its annual trend report titled “The 10 Hot Consumer Trends for 2017 and beyond”, said that 35 percent of advanced internet users want an AI advisor at work and one in four would like AI as their manager. At the same time, almost half of the respondents were concerned that AI robots will soon make a lot of people lose their jobs.
From driverless cars to robotic workers, the future is going to be here before you know it. Many emerging technologies you hear about today will reach a tipping point by 2025, according to a report from The World Economic Forum’s Global Agenda Council on the Future of Software & Society. The council surveyed more than 800 executives and experts from the technology sector to share their respective timelines for when technologies would become mainstream. From the survey results, the council identified 21 defining moments, all of which they predict will occur by 2030. Here’s a look at the technological shifts you can expect during the next 14 years.
… The first robotic pharmacist will arrive in the US in 2021.
A new year is quickly approaching and Microsoft Research is offering a glimpse at what the tech scene has in store for 2017, along with some hints at the Redmond, Wash., tech giant’s own priorities for the coming year. This year, the company gathered prominent women researchers to share their thoughts on what to expect next year. Surprising nobody who’s been following Microsoft’s software and cloud computing strategy of late, the company is betting big on artificial intelligence (AI).
It’s still early days for the Internet of Things. As recently as 2014, 87 percent of consumers had never heard of the technology, according to Accenture. In 2016, 19% of business and government professionals reported that they had never heard of the Internet of Things, while 18% were only vaguely familiar with it, according to research from the Internet of Things Institute. Although the technology is getting the most traction in the industrial space, the most promising use cases for the technology are just starting to come to light. To get a sense of what to expect as we head into 2017, we spoke with Stanford lecturer and IoT author Timothy Chou, Ph.D.; Thulium.co CEO Tamara McCleary; industry observer and influencer Evan Kirstel; and Sandy Carter, CEO and founder of Silicon-Blitz.
Amazon.com Inc. unveiled technology that will let shoppers grab groceries without having to scan and pay for them — in one stroke eliminating the checkout line.
The company is testing the new system at what it’s calling an Amazon Go store in Seattle, which will open to the public early next year. Customers will be able to scan their phones at the entrance using a new Amazon Go mobile app. Then the technology will track what items they pick up or even return to the shelves and add them to a virtual shopping cart in real time, according to a video Amazon posted on YouTube. Once the customers exit the store, they’ll be charged on their Amazon account automatically.
Online retail king Amazon.com (AMZN) is taking dead aim at the physical-store world Monday, introducing Amazon Go, a retail convenience store format it is developing that will use computer vision and deep-learning algorithms to let shoppers just pick up what they want and exit the store without any checkout procedure.
Shoppers will merely need to tap the Amazon Go app on their smartphones, and their virtual shopping carts will automatically tabulate what they owe, deduct that amount from their Amazon accounts, and send them a receipt. It’s what the company has deemed “just walk out technology,” which it said is based on the same technology used in self-driving cars. It’s certain to up the ante in the company’s competition with Wal-Mart (WMT), Target (TGT) and the other retail leaders.
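From DSC: The virtual-cart mechanics described above — track pick-ups and put-backs in real time, then charge on exit — can be sketched in a few lines. This is only an illustrative model (the item names and prices are invented), not Amazon’s actual system:

```python
from collections import Counter

class VirtualCart:
    """Tracks items a shopper picks up or returns; the total is charged on exit."""
    def __init__(self, prices):
        self.prices = prices          # item -> unit price
        self.items = Counter()        # item -> quantity currently held

    def pick_up(self, item):
        # Sensors detect an item leaving the shelf; add it to the cart.
        self.items[item] += 1

    def put_back(self, item):
        # Item returned to the shelf; remove it from the cart.
        if self.items[item] > 0:
            self.items[item] -= 1

    def checkout_total(self):
        # Computed when the shopper exits; the amount would be
        # charged to their account automatically.
        return sum(self.prices[i] * q for i, q in self.items.items())

cart = VirtualCart({"milk": 2.50, "bread": 3.00})
cart.pick_up("milk")
cart.pick_up("bread")
cart.put_back("bread")   # changed their mind; removed from the cart
print(cart.checkout_total())  # 2.5
```

The interesting engineering, of course, is in the computer vision that generates the `pick_up` and `put_back` events, not in the bookkeeping.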
Alphabet Inc.’s artificial intelligence division Google DeepMind is making the maze-like game platform it uses for many of its experiments available to other researchers and the general public.
DeepMind is putting the entire source code for its training environment — which it previously called Labyrinth and has now renamed as DeepMind Lab — on the open-source depository GitHub, the company said Monday. Anyone will be able to download the code and customize it to help train their own artificial intelligence systems. They will also be able to create new game levels for DeepMind Lab and upload these to GitHub.
Beacon technology, which was practically left for dead after failing to deliver on its promise to revolutionize the retail industry, is making a comeback.
Beacons are puck-size gadgets that can send helpful tips, coupons and other information to people’s smartphones through Bluetooth. They’re now being used in everything from bank branches and sports arenas to resorts, airports and fast-food restaurants. In the latest sign of the resurgence, Mobile Majority, an advertising startup, said on Monday that it was buying Gimbal Inc., a beacon maker it bills as the largest independent source of location data other than Google and Apple Inc.
Several recent developments have sparked the latest boom. Companies like Google parent Alphabet Inc. are making it possible for people to use the feature without downloading any apps, which had been a major barrier to adoption, said Patrick Connolly, an analyst at ABI. Introduced this year, Google Nearby Notifications lets developers tie an app or a website to a beacon to send messages to consumers even when they have no app installed. … But in June, Cupertino, California-based Mist Systems began shipping a software-based product that simplified the process. Instead of placing 10 beacons on walls and ceilings, for example, managers using Mist can install one device every 2,000 feet (610 meters), then designate various points on a digital floor plan as virtual beacons, which can be moved with a click of a mouse.
Ask the Google search app “What is the fastest bird on Earth?,” and it will tell you.
“Peregrine falcon,” the phone says. “According to YouTube, the peregrine falcon has a maximum recorded airspeed of 389 kilometers per hour.”
That’s the right answer, but it doesn’t come from some master database inside Google. When you ask the question, Google’s search engine pinpoints a YouTube video describing the five fastest birds on the planet and then extracts just the information you’re looking for. It doesn’t mention those other four birds. And it responds in similar fashion if you ask, say, “How many days are there in Hanukkah?” or “How long is Totem?” The search engine knows that Totem is a Cirque du Soleil show, and that it lasts two-and-a-half hours, including a thirty-minute intermission.
Google answers these questions with help from deep neural networks, a form of artificial intelligence rapidly remaking not just Google’s search engine but the entire company and, well, the other giants of the internet, from Facebook to Microsoft. Deep neural nets are pattern recognition systems that can learn to perform specific tasks by analyzing vast amounts of data. In this case, they’ve learned to take a long sentence or paragraph from a relevant page on the web and extract the upshot—the information you’re looking for.
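From DSC: The “extract the upshot” idea can be illustrated with a deliberately crude sketch. Google uses deep neural networks for this; the toy below just scores each sentence of a passage by word overlap with the question and returns the best one. The passage text is invented for illustration:

```python
def extract_answer(question, passage):
    """Toy extractive answering: return the passage sentence sharing the
    most words with the question. Real systems use deep neural nets; this
    only illustrates picking out the one relevant span from a longer text."""
    q_words = set(question.lower().split())
    sentences = [s.strip() for s in passage.split(".") if s.strip()]
    return max(sentences,
               key=lambda s: len(q_words & set(s.lower().split())))

passage = ("The ostrich is the fastest bird on land. "
           "The peregrine falcon is the fastest bird in the air, "
           "with a recorded dive speed of 389 km/h. "
           "Hummingbirds beat their wings the fastest.")
print(extract_answer("What is the fastest bird in the air", passage))
```

Note that the other birds in the passage are simply never surfaced — exactly the behavior described above, where the answer omits the other four birds in the video.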
Facebook is powered by machine learning and AI. From advertising relevance, news feed and search ranking to computer vision, face recognition, and speech recognition, they run ML models at massive scale, computing trillions of predictions every day.
At the 2016 Deep Learning Summit in Boston, Andrew Tulloch, Research Engineer at Facebook, talked about some of the tools and tricks Facebook use for scaling both the training and deployment of some of their deep learning models at Facebook. He also covered some useful libraries that they’d open-sourced for production-oriented deep learning applications. Tulloch’s session can be watched in full below.
Let’s start with some of the brand-name organizations laying down big bucks on artificial intelligence.
Amazon: Sells the successful Echo home speaker, which comes with the personal assistant Alexa.
Alphabet (Google): Uses deep learning technology to power Internet searches and developed AlphaGo, an AI that beat the world champion in the game of Go.
Apple: Developed the popular virtual assistant Siri and is working on other phone-related AI applications, such as facial recognition.
Baidu: Wants to use AI to improve search, recognize images of objects and respond to natural language queries.
Boeing: Works with Carnegie Mellon University to develop machine learning capable of helping it design and build planes more efficiently.
Facebook: Wants to create the “best AI lab in the world.” Has its personal assistant, M, and focuses heavily on facial recognition.
IBM: Created the Jeopardy-winning Watson AI and is leveraging its data analysis and natural language capabilities in the healthcare industry.
Intel: Has made acquisitions to help it build specialized chips and software to handle deep learning.
Microsoft: Works on chatbot technology and acquired SwiftKey, which predicts what users will type next.
Nokia: Has introduced various machine learning capabilities to its portfolio of customer-experience software.
Nvidia: Builds computer chips customized for deep learning.
Salesforce: Took first place on the Stanford Question Answering Dataset, a test of machine learning and comprehension, and has developed the Einstein model that learns from data.
Shell: Launched a virtual assistant to answer customer questions.
Tesla Motors: Continues to work on self-driving automobile technologies.
Twitter: Created an AI-development team called Cortex and acquired several AI startups.
IBM’s seemingly ubiquitous Watson is now infiltrating education, through AI powered software that ‘reads’ the needs of individual students in order to engage them through tailored learning approaches.
This is not to be taken lightly, as it opens the door to a new breed of technologies that will spearhead the education or re-education of the workforce of the future.
As outlined in the 2030 report, despite robots or AI displacing a big chunk of the workforce, they will also play a major role in creating job opportunities as never before. In such a competitive landscape, workers of all kinds, white or blue collar to begin with, should come readied with new, versatile and contemporary skills.
The point is, the very AI that will leave someone jobless will also help that person adapt to the requirements of a new job. It will also prepare the new generations through optimal methodologies, restoring meaning to an aging and counterproductive schooling system that leaves students’ skills disengaged from the needs of industry and that still segregates students into ‘good’ and ‘bad’. Might it be that ‘bad’ students become just that due to the system’s inability to stimulate their interest?
CIO’s Sharon Florentine took a look at data from global freelance marketplace Upwork, based on annual job posting growth and skills demand. The following are the leading IoT skills Florentine identified that will be in demand as the IoT proliferates, along with the growth each saw over a one-year period:
Circuit design (231% growth): Builds miniaturized circuit boards for sensors and devices.
Microcontroller programming (225% growth): Writes code that provides intelligence to microcontrollers, the embedded chips within IoT devices.
AutoCAD (216% growth): Designs the devices.
Machine learning (199% growth): Writes the algorithms that recognize data patterns within devices.
Security infrastructure (194% growth): Identifies and integrates the standards, protocols and technologies that protect devices, as well as the data inside.
Big data (183% growth): Data scientists and engineers “who can collect, organize, analyze and architect disparate sources of data.” Hadoop and Apache Spark are two areas with particularly strong demand.
From DSC: If we had more beacons on our campus (a Christian liberal arts college), I could see where we could offer a variety of things in new ways:
For example, we might use beacons around the main walkways of our campus where, when we approach these beacons, pieces of advice or teaching could appear on an app on our mobile devices. Examples could include:
Micro-tips on prayer from John Calvin, Martin Luther, or Augustine (i.e., 1 or 2 small tips at a time; could change every day or every week)
Or, for a current, campus-wide Bible study, the app could show a question for that week’s study; you could reflect on that question as you’re walking around
Or, for musical events…when one walks by the Covenant Fine Arts Center, one could get that week’s schedule of performances or what’s currently showing in the Art Gallery
Pieces of scripture, with links to Biblegateway.com or other sites
Further information re: what’s being displayed on posters within the hallways — works that might be done by faculty members and/or by students
A person could turn the app’s notifications on or off at any time. The app would encourage greater exercise; i.e., the more you walk around, the more tips you get.
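The core of such an app is small: a mapping from beacon IDs to content, a notification toggle, and a handler that fires when the phone comes into Bluetooth range of a beacon. Here is a minimal sketch of that logic; the beacon IDs and tip texts are hypothetical placeholders, and a real app would receive the detection events from the phone’s Bluetooth stack:

```python
# Hypothetical beacon IDs and rotating content, for illustration only.
TIPS_BY_BEACON = {
    "chapel-walk-01": "Micro-tip on prayer from Calvin (rotates weekly)",
    "fine-arts-02":   "This week's performances and current Art Gallery show",
    "library-03":     "Question for this week's campus-wide Bible study",
}

class CampusApp:
    def __init__(self):
        self.notifications_on = True   # the user can toggle this at any time
        self.tips_seen = 0

    def on_beacon_detected(self, beacon_id):
        """Called when the phone comes within Bluetooth range of a beacon."""
        if self.notifications_on and beacon_id in TIPS_BY_BEACON:
            self.tips_seen += 1        # more walking around -> more tips
            return TIPS_BY_BEACON[beacon_id]
        return None

app = CampusApp()
print(app.on_beacon_detected("fine-arts-02"))
```

The `tips_seen` counter is where the “more you walk, the more tips you get” incentive would hook in.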
As a contribution toward preparing the United States for a future in which AI plays a growing role, this report surveys the current state of AI, its existing and potential applications, and the questions that are raised for society and public policy by progress in AI. The report also makes recommendations for specific further actions by Federal agencies and other actors. A companion document lays out a strategic plan for Federally-funded research and development in AI. Additionally, in the coming months, the Administration will release a follow-on report exploring in greater depth the effect of AI-driven automation on jobs and the economy.
The report was developed by the NSTC’s Subcommittee on Machine Learning and Artificial Intelligence, which was chartered in May 2016 to foster interagency coordination, to provide technical and policy advice on topics related to AI, and to monitor the development of AI technologies across industry, the research community, and the Federal Government. The report was reviewed by the NSTC Committee on Technology, which concurred with its contents. The report follows a series of public-outreach activities spearheaded by the White House Office of Science and Technology Policy (OSTP) in 2016, which included five public workshops co-hosted with universities and other associations that are referenced in this report.
In the coming years, AI will continue to contribute to economic growth and will be a valuable tool for improving the world, as long as industry, civil society, and government work together to develop the positive aspects of the technology, manage its risks and challenges, and ensure that everyone has the opportunity to help in building an AI-enhanced society and to participate in its benefits.
From DSC: Here’s an idea that came to my mind the other day as I was walking by a person who was trying to put some books back onto the shelves within our library.
From DSC: Perhaps this idea is not very timely…as many collections of books will likely continue to be digitized and made available electronically. But preservation is still a goal for many libraries out there.
Today, the IoT sits at the peak of Gartner’s Hype Cycle. It’s probably not surprising that industry is abuzz with the promise of streaming sensor data. The oft quoted “50 billion connected devices by 2020!” has become a rallying cry for technology analysts, chip vendors, network providers, and other proponents of a deeply connected, communicating world. What is surprising is that academia has been relatively slow to join the parade, particularly when the potential impacts are so exciting. Like most organizations that manage significant facilities, universities stand to benefit by adopting the IoT as part of their management strategy. The IoT also affords new opportunities to improve the customer experience. For universities, this means the ability to provide new student services and improve on those already offered. Perhaps most surprisingly, the IoT represents an opportunity to better engage a diverse student base in computer science and engineering, and to amplify these programs through meaningful interdisciplinary collaboration.
The potential benefits of the IoT to the academic community extend beyond facilities management to improving our students’ experience. The lowest hanging fruit can be harvested by adapting some of the smart city applications that have emerged. What student hasn’t shown up late to class after circling the parking lot looking for a space? Ask any student at a major university if it would improve their campus experience to be able to check on their smart phones which parking spots were available. The answer will be a resounding “yes!” and there’s nothing futuristic about it. IoT parking management systems are commercially available through a number of vendors. This same type of technology can be adapted to enable students to find open meeting rooms, computer facilities, or café seating. What might be really exciting for students living in campus dormitories: A guarantee that they’ll never walk down three flights of stairs balancing two loads of dirty laundry to find that none of the washing machines are available. On many campuses, the washing machines are already network-connected to support electronic payment; availability reporting is a straightforward extension.
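The availability-reporting extension mentioned at the end is almost trivial once the machines are networked: each washer posts its status when a cycle starts or ends, and the app queries what is free. A minimal sketch (machine IDs are hypothetical; a real deployment would expose this over HTTP or MQTT rather than in-process calls):

```python
class LaundryRoom:
    """Sketch of availability reporting for network-connected washers."""
    def __init__(self, machine_ids):
        self.in_use = {m: False for m in machine_ids}

    def report_status(self, machine_id, in_use):
        # Each networked washer would post this when a cycle starts or ends.
        self.in_use[machine_id] = in_use

    def available(self):
        # What the student's app would query before hauling laundry downstairs.
        return [m for m, busy in self.in_use.items() if not busy]

room = LaundryRoom(["W1", "W2", "W3"])
room.report_status("W2", True)    # W2 starts a cycle
print(room.available())           # ['W1', 'W3']
```

The same pattern — devices report, a registry aggregates, clients query — covers the parking spots, meeting rooms, and café seating examples above.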
This is the week when artificially intelligent assistants start getting serious.
On Tuesday, Google is expected to announce the final details for Home, its connected speaker with the new Google Assistant built inside.
But first Amazon, which surprised everyone last year by practically inventing the AI-in-a-can platform, will release a new version of the Echo Dot, a cheaper and smaller model of the full-sized Echo that promises to put the company’s Alexa assistant in every room in your house.
The Echo Dot has all the capabilities of the original Echo, but at a much cheaper price, and with a compact form factor that’s designed to be tucked away. Because of its size (it looks like a hockey puck from the future), its sound quality isn’t as good as the Echo, but it can hook up to an external speaker through a standard audio cable or Bluetooth.
Today, we have machines that assemble cars, make candy bars, defuse bombs, and a myriad of other things. They can dispense our drinks, facilitate our bank deposits, and find the movies we want to watch with a touch of the screen.
Automation allows all kinds of amazing things, but it is all done with virtually no personality. Building a chatbot with the ability to be conversational with emotion is crucial to getting people to trust the technology. And now there are plenty of tools and resources available to rapidly create and launch chatbots with the personality customers want and businesses need.
Jordi Torras is CEO and Founder of Inbenta, a company that specializes in NLP, semantic search and chatbots to improve customer experience. We spoke to him ahead of his presentation at the Virtual Assistant Summit in San Francisco, to learn about the recent explosion of chatbots and virtual assistants, and what we can expect to see in the future.
Today, we are just beginning to scratch the surface of what is possible with artificial intelligence (A.I.) and how individuals will interact with its various forms. Every single aspect of our society — from cars to houses to products to services — will be reimagined and redesigned to incorporate A.I.
A child born in the year 2030 will not comprehend why his or her parents once had to manually turn on the lights in the living room. In the future, the smart home will seamlessly know the needs, wants, and habits of the individuals who live there before they take an action.
Before we arrive at this future, it is helpful to take a step back and reimagine how we design cars, houses, products, and services. We are just beginning to see glimpses of this future with the Amazon Echo and Google Home smart voice assistants.
So, Seven Dreamers Laboratories, in collaboration with Panasonic and Daiwa House Industry, have created just such a machine. However, folding laundry correctly turns out to be quite a complicated task, and so an artificial intelligence was required to make it a reliable process.
Laundry folding is actually a five-stage process.
The grabbing and spreading stages seem easy enough, but then the machine needs to work out what type of clothing it is folding. That recognizing stage requires both image recognition and AI: image recognition classifies the type of clothing, then the AI determines which folding process to apply.
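The classify-then-plan flow described above can be sketched like this (a hypothetical illustration — the garment labels and fold steps are invented stand-ins for the machine's actual vision model and planner):

```python
# Hypothetical sketch of the recognize-then-fold pipeline:
# a classifier labels the garment, then a planner picks a fold routine.

FOLD_PLANS = {
    "t-shirt": ["fold left sleeve in", "fold right sleeve in", "fold bottom up"],
    "towel": ["fold in half", "fold in half again"],
    "trousers": ["fold leg over leg", "fold bottom up twice"],
}

def classify_garment(scan) -> str:
    """Stand-in for the image-recognition stage; a real system
    would run a trained vision model on the camera image here."""
    return scan["label"]  # pretend the model's prediction is stored here

def plan_folds(garment_type: str):
    """Stand-in for the AI planning stage: map garment type to fold steps."""
    return FOLD_PLANS.get(garment_type, ["flag for manual handling"])

scanned = {"label": "t-shirt"}
print(plan_folds(classify_garment(scanned)))
# -> ['fold left sleeve in', 'fold right sleeve in', 'fold bottom up']
```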
During a delightful “cold spell” in Austin at the end of September, a few hundred chatbot enthusiasts joined together for the first talkabot.ai conference.
As a participant both writing about and building chatbots, I'm excited to share a mix of actionable insights and strategic directions picked up from speakers and attendees, as well as from behind-the-scenes discussions with the organizers from Howdy.
In a very congenial and collaborative atmosphere, a number of valuable recurring themes stood out from a variety of expert speakers ranging from chatbot builders to tool makers to luminaries from adjacent industries.
The way humans interact with machines is at an inflection point and conversational artificial intelligence (AI) is at the center of the transformation. Alexa, the voice service that powers Amazon Echo, enables customers to interact with the world around them in a more intuitive way using only their voice.
The Alexa Prize is an annual competition for university students dedicated to accelerating the field of conversational AI. The inaugural competition is focused on creating a socialbot, a new Alexa skill that converses coherently and engagingly with humans on popular topics and news events. Participating teams will advance several areas of conversational AI including knowledge acquisition, natural language understanding, natural language generation, context modeling, commonsense reasoning and dialog planning. Through the innovative work of students, Alexa customers will have novel, engaging conversations. And, the immediate feedback from Alexa customers will help students improve their algorithms much faster than previously possible.
… Amazon will award the winning team $500,000. Additionally, a prize of $1 million will be awarded to the winning team’s university if their socialbot achieves the grand challenge of conversing coherently and engagingly with humans on popular topics for 20 minutes.
From DSC: I have attended the Next Generation Learning Spaces Conference for the past two years. Both conferences were very solid and they made a significant impact on our campus, as they provided the knowledge, research, data, ideas, contacts, and the catalyst for us to move forward with building a Sandbox Classroom on campus. This new, collaborative space allows us to experiment with different pedagogies as well as technologies. As such, we’ve been able to experiment much more with active learning-based methods of teaching and learning. We’re still in Phase I of this new space, and we’re learning new things all of the time.
For the upcoming conference in February, I will be moderating a New Directions in Learning panel on the use of augmented reality (AR), virtual reality (VR), and mixed reality (MR). Time permitting, I hope that we can also address other promising, emerging technologies that are heading our way such as chatbots, personal assistants, artificial intelligence, the Internet of Things, tvOS, blockchain and more.
The goal of this quickly-moving, engaging session will be to provide a smorgasbord of ideas to generate creative, innovative, and big thinking. We need to think about how these topics, trends, and technologies relate to what our next generation learning environments might look like in the near future — and put these things on our radars if they aren’t already there.
Key takeaways for the panel discussion:
Reflections regarding the affordances that new developments in Human Computer Interaction (HCI) — such as AR, VR, and MR — might offer for our learning and our learning spaces (or is our concept of what constitutes a learning space about to significantly expand?)
An update on the state of the approaching ed tech landscape
Creative, new thinking: What might our next generation learning environments look like in 5-10 years?
I’m looking forward to catching up with friends, meeting new people, and to the solid learning that I know will happen at this conference. I encourage you to check out the conference and register soon to take advantage of the early bird discounts.