The authoritative CB Insights report lists imminent future tech trends: customized babies; personalized foods; robotic companions; 3D-printed housing; solar roads; ephemeral retail; enhanced workers; lab-engineered luxury; botroots movements; microbe-made chemicals; neuro-prosthetics; instant expertise; AI ghosts. You can download the whole outstanding report (125 pages) here.
From DSC: Though I’m generally pro-technology, there are several items in here which support the need for all members of society to be informed and have some input into whether and how these technologies should be used. Prime example: customized babies. The report discusses the genetic modification of babies: “In the future, we will choose the traits for our babies.” Veeeeery slippery ground here.
Amazon.com Inc. unveiled technology that will let shoppers grab groceries without having to scan and pay for them — in one stroke eliminating the checkout line.
The company is testing the new system at what it’s calling an Amazon Go store in Seattle, which will open to the public early next year. Customers will be able to scan their phones at the entrance using a new Amazon Go mobile app. Then the technology will track what items they pick up or even return to the shelves and add them to a virtual shopping cart in real time, according to a video Amazon posted on YouTube. Once the customers exit the store, they’ll be charged on their Amazon account automatically.
Online retail king Amazon.com (AMZN) is taking dead aim at the physical-store world Monday, introducing Amazon Go, a retail convenience store format it is developing that will use computer vision and deep-learning algorithms to let shoppers just pick up what they want and exit the store without any checkout procedure.
Shoppers will merely need to tap the Amazon Go app on their smartphones, and their virtual shopping carts will automatically tabulate what they owe, deduct that amount from their Amazon accounts, and send them a receipt. It’s what the company has deemed “just walk out technology,” which it said is based on the same technology used in self-driving cars. It’s certain to up the ante in the company’s competition with Wal-Mart (WMT), Target (TGT) and the other retail leaders.
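To make the "just walk out" idea concrete, here is a minimal, purely illustrative sketch (not Amazon's actual system; the catalog and class names are invented) of a virtual cart that updates as the vision system reports pick-up and put-back events, then charges the account total on exit:

```python
# Illustrative sketch only: a virtual shopping cart driven by pick-up /
# put-back events from a (hypothetical) computer-vision pipeline.

PRICES = {"milk": 2.49, "bread": 1.99, "eggs": 3.29}  # hypothetical catalog

class VirtualCart:
    def __init__(self):
        self.items = {}  # item name -> quantity

    def pick_up(self, item):
        """Called when vision detects an item leaving the shelf."""
        self.items[item] = self.items.get(item, 0) + 1

    def put_back(self, item):
        """Called when vision detects an item returned to the shelf."""
        if self.items.get(item, 0) > 0:
            self.items[item] -= 1
            if self.items[item] == 0:
                del self.items[item]

    def charge_on_exit(self):
        """Total billed to the shopper's account when they walk out."""
        return round(sum(PRICES[i] * q for i, q in self.items.items()), 2)

cart = VirtualCart()
cart.pick_up("milk")
cart.pick_up("bread")
cart.put_back("bread")        # shopper changed their mind
print(cart.charge_on_exit())  # 2.49
```

The hard part, of course, is the vision side that generates those events reliably; the cart bookkeeping itself is simple.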
Alphabet Inc.’s artificial intelligence division Google DeepMind is making the maze-like game platform it uses for many of its experiments available to other researchers and the general public.
DeepMind is putting the entire source code for its training environment — which it previously called Labyrinth and has now renamed as DeepMind Lab — on the open-source depository GitHub, the company said Monday. Anyone will be able to download the code and customize it to help train their own artificial intelligence systems. They will also be able to create new game levels for DeepMind Lab and upload these to GitHub.
Beacon technology, which was practically left for dead after failing to deliver on its promise to revolutionize the retail industry, is making a comeback.
Beacons are puck-size gadgets that can send helpful tips, coupons and other information to people’s smartphones through Bluetooth. They’re now being used in everything from bank branches and sports arenas to resorts, airports and fast-food restaurants. In the latest sign of the resurgence, Mobile Majority, an advertising startup, said on Monday that it was buying Gimbal Inc., a beacon maker it bills as the largest independent source of location data other than Google and Apple Inc.
…
Several recent developments have sparked the latest boom. Companies like Google parent Alphabet Inc. are making it possible for people to use the feature without downloading any apps, which had been a major barrier to adoption, said Patrick Connolly, an analyst at ABI. Introduced this year, Google Nearby Notifications lets developers tie an app or a website to a beacon to send messages to consumers even when they have no app installed. … But in June, Cupertino, California-based Mist Systems began shipping a software-based product that simplified the process. Instead of placing 10 beacons on walls and ceilings, for example, an administrator using Mist can install one device every 2,000 feet (610 meters), then designate various points on a digital floor plan as virtual beacons, which can be moved with a click of a mouse.
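The virtual-beacon idea above can be sketched in a few lines: a beacon becomes a named point on a digital floor plan, so relocating one is a data update rather than a ladder and a screwdriver. This is a hedged illustration of the concept, not Mist's actual software; all names and coordinates are invented.

```python
# Hedged sketch of "virtual beacons": named points on a digital floor
# plan, movable in software, with the nearest one chosen to message a
# phone at a given position.
from math import hypot

virtual_beacons = {
    "entrance": (0.0, 0.0),
    "teller-desk": (12.0, 4.0),
}

def move_beacon(name, x, y):
    """Relocate a virtual beacon with 'a click of a mouse'."""
    virtual_beacons[name] = (x, y)

def nearest_beacon(x, y):
    """Which virtual beacon should message a phone at (x, y)?"""
    return min(virtual_beacons,
               key=lambda n: hypot(virtual_beacons[n][0] - x,
                                   virtual_beacons[n][1] - y))

print(nearest_beacon(1.0, 1.0))      # entrance
move_beacon("entrance", 20.0, 20.0)  # relocated without touching hardware
print(nearest_beacon(1.0, 1.0))      # teller-desk
```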
Ask the Google search app “What is the fastest bird on Earth?,” and it will tell you.
“Peregrine falcon,” the phone says. “According to YouTube, the peregrine falcon has a maximum recorded airspeed of 389 kilometers per hour.”
That’s the right answer, but it doesn’t come from some master database inside Google. When you ask the question, Google’s search engine pinpoints a YouTube video describing the five fastest birds on the planet and then extracts just the information you’re looking for. It doesn’t mention those other four birds. And it responds in similar fashion if you ask, say, “How many days are there in Hanukkah?” or “How long is Totem?” The search engine knows that Totem is a Cirque du Soleil show, and that it lasts two-and-a-half hours, including a thirty-minute intermission.
Google answers these questions with help from deep neural networks, a form of artificial intelligence rapidly remaking not just Google’s search engine but the entire company and, well, the other giants of the internet, from Facebook to Microsoft. Deep neural nets are pattern recognition systems that can learn to perform specific tasks by analyzing vast amounts of data. In this case, they’ve learned to take a long sentence or paragraph from a relevant page on the web and extract the upshot—the information you’re looking for.
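Google's system uses deep neural networks, but the underlying task, pulling the one relevant sentence out of a longer passage, can be illustrated with something far cruder. The toy below scores each sentence by how many words it shares with the question; it is only a caricature of what the real models do, and the passage text is invented for the example.

```python
# Toy illustration of answer extraction: score each sentence in a passage
# by word overlap with the question and return the best match. Real
# systems use learned representations, not raw word overlap.
def extract_answer(question, passage):
    q_words = set(question.lower().replace("?", "").split())
    sentences = [s.strip() for s in passage.split(".") if s.strip()]
    return max(sentences,
               key=lambda s: len(q_words & set(s.lower().split())))

passage = ("The ostrich is the largest bird. "
           "The peregrine falcon has a maximum recorded airspeed "
           "of 389 kilometers per hour. "
           "Hummingbirds beat their wings very fast")
print(extract_answer("What is the fastest recorded airspeed of a bird?",
                     passage))
```

The snippet returns the falcon sentence because it shares the most question words; the other sentences, like the other four birds in Google's example, are simply not surfaced.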
Facebook is powered by machine learning and AI. From advertising relevance, news feed and search ranking to computer vision, face recognition, and speech recognition, Facebook runs ML models at massive scale, computing trillions of predictions every day.
At the 2016 Deep Learning Summit in Boston, Andrew Tulloch, Research Engineer at Facebook, talked about some of the tools and tricks Facebook uses to scale both the training and deployment of its deep learning models. He also covered some useful libraries the company has open-sourced for production-oriented deep learning applications. Tulloch’s session can be watched in full below.
The Artificial Intelligence Gold Rush— from foresightr.com by Mark Vickers Big companies, venture capital firms and governments are all banking on AI
Excerpt:
Let’s start with some of the brand-name organizations laying down big bucks on artificial intelligence.
Amazon: Sells the successful Echo home speaker, which comes with the personal assistant Alexa.
Alphabet (Google): Uses deep learning technology to power Internet searches and developed AlphaGo, an AI that beat the world champion in the game of Go.
Apple: Developed the popular virtual assistant Siri and is working on other phone-related AI applications, such as facial recognition.
Baidu: Wants to use AI to improve search, recognize images of objects and respond to natural language queries.
Boeing: Works with Carnegie Mellon University to develop machine learning capable of helping it design and build planes more efficiently.
Facebook: Wants to create the “best AI lab in the world.” Has its personal assistant, M, and focuses heavily on facial recognition.
IBM: Created the Jeopardy-winning Watson AI and is leveraging its data analysis and natural language capabilities in the healthcare industry.
Intel: Has made acquisitions to help it build specialized chips and software to handle deep learning.
Microsoft: Works on chatbot technology and acquired SwiftKey, which predicts what users will type next.
Nokia: Has introduced various machine learning capabilities to its portfolio of customer-experience software.
Nvidia: Builds computer chips customized for deep learning.
Salesforce: Took first place on the Stanford Question Answering Dataset (SQuAD), a test of machine reading comprehension, and has developed the Einstein model that learns from data.
Shell: Launched a virtual assistant to answer customer questions.
Tesla Motors: Continues to work on self-driving automobile technologies.
Twitter: Created an AI-development team called Cortex and acquired several AI startups.
IBM’s seemingly ubiquitous Watson is now infiltrating education, through AI-powered software that ‘reads’ the needs of individual students in order to engage them through tailored learning approaches.
This is not to be taken lightly, as it opens the door to a new breed of technologies that will spearhead the education or re-education of the workforce of the future.
As outlined in the 2030 report, although robots and AI will displace a big chunk of the workforce, they will also play a major role in creating job opportunities as never before. In such a competitive landscape, workers of all kinds, white or blue collar to begin with, should come ready with new, versatile and contemporary skills.
The point is, the very AI that will leave someone jobless will also help him re-adapt to a new job’s requirements. It will also prepare the new generations through optimal methodologies that could once more give meaning to an aging and counter-productive schooling system, one that leaves students’ skills disengaged from the needs of industry and still segregates students into ‘good’ and ‘bad’. Might it be that ‘bad’ students become that way due to the system’s inability to stimulate their interest?
From DSC: If we had more beacons on our campus (a Christian liberal arts college), I could see where we could offer a variety of things in new ways:
For example, we might use beacons around the main walkways of our campus where, when we approach these beacons, pieces of advice or teaching could appear on an app on our mobile devices. Examples could include:
Micro-tips on prayer from John Calvin, Martin Luther, or Augustine (i.e., 1 or 2 small tips at a time; could change every day or every week)
Or, for a current, campus-wide Bible study, the app could show a question for that week’s study; you could reflect on that question as you’re walking around
Or, for musical events…when one walks by the Covenant Fine Arts Center, one could get that week’s schedule of performances or what’s currently showing in the Art Gallery
Pieces of scripture, with links to Biblegateway.com or other sites
Further information re: what’s being displayed on posters within the hallways — works that might be done by faculty members and/or by students
Etc.
A person could turn the app’s notifications on or off at any time. The app would encourage greater exercise; i.e., the more you walk around, the more tips you get.
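A rough sketch of how such a campus app might be structured: beacon IDs map to rotating micro-tips, and the notification toggle gates everything. This is purely speculative, tracking the idea above; all IDs, tips, and names are invented for illustration.

```python
# Hedged sketch of the campus-beacon idea: beacon IDs map to rotating
# micro-tips, and the user can toggle notifications off at any time.
# All identifiers and content here are hypothetical.
TIPS = {
    "chapel-walkway": ["Calvin: 'Prayer is the chief exercise of faith.'",
                       "Luther: 'To be a Christian without prayer is no "
                       "more possible than to be alive without breathing.'"],
    "fine-arts-center": ["This week: student recital, Friday 7pm"],
}

class CampusApp:
    def __init__(self):
        self.notifications_on = True
        self.week = 0  # tips rotate weekly

    def on_beacon(self, beacon_id):
        """Called when the phone enters a beacon's range."""
        if not self.notifications_on or beacon_id not in TIPS:
            return None
        tips = TIPS[beacon_id]
        return tips[self.week % len(tips)]

app = CampusApp()
print(app.on_beacon("chapel-walkway"))   # this week's micro-tip
app.notifications_on = False
print(app.on_beacon("chapel-walkway"))   # None -- user opted out
```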
The recent announcement of Salesforce Einstein — dubbed “artificial intelligence for everyone” — sheds new light on the new and pervasive usage of artificial intelligence in every aspect of businesses.
Powered by advanced machine learning, deep learning, predictive analytics, natural language processing and smart data discovery, Einstein’s models will be automatically customized for every single customer, and it will learn, self-tune, and get smarter with every interaction and additional piece of data. Most importantly, Einstein’s intelligence will be embedded within the context of business, automatically discovering relevant insights, predicting future behavior, proactively recommending best next actions and even automating tasks.
…
Chatbots, or conversational bots, are the “other” trending topic in the field of artificial intelligence. At the juncture of consumer and business, they provide the ability for an AI-based system to interact with users through a headless interface. It does not matter whether a messaging app is used, or a speech-to-text system, or even another app — the chatbot is front-end agnostic.
Since the user does not have the ability to provide context around the discussion, he simply asks questions in natural language of an AI-driven backend that is tasked with figuring out this context and finding the right answer.
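The front-end-agnostic point can be shown with a minimal sketch: whatever channel the text arrives from (a messaging app, speech-to-text, another app), the backend only ever sees a question string and returns an answer string. The keyword matching below is a stand-in for real natural-language understanding, and the knowledge base is invented.

```python
# Minimal sketch of a front-end-agnostic chatbot backend: text in,
# text out, with no knowledge of which channel the text came from.
# Keyword lookup stands in for real NLU; the answers are hypothetical.
ANSWERS = {
    "hours": "We are open 9am-5pm, Monday to Friday.",
    "price": "The basic plan is $10 per month.",
}

def handle_message(text):
    """Channel-independent entry point: question string in, answer out."""
    lowered = text.lower()
    for keyword, answer in ANSWERS.items():
        if keyword in lowered:
            return answer
    return "Sorry, I don't know that yet."

# The same backend serves any front end:
print(handle_message("What are your hours?"))      # from a chat widget
print(handle_message("tell me the price please"))  # from speech-to-text
```

Because the interface is just strings, swapping the front end (or adding a new one) requires no change to the bot's logic.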
For many months IBM has gone to recruiting-industry conferences to say that the famous Watson will be at some point used for talent-acquisition, but that it hasn’t happened quite yet.
It’s here.
IBM is first using Watson for its RPO customers, and then rolling it out as a product for the larger community, perhaps next spring. One of my IBM contacts, Recruitment Innovation Global Leader Yates Baker, tells me that the current version is a work in progress like the first iPhone (or perhaps like that Siri-for-recruiting tool).
There are three parts: recruiting, marketing, and sourcing.
With the new iOS 10, Siri can control third-party apps, like Uber and WhatsApp. With the release of MacOS Sierra on Tuesday, Siri finally lands on the desktop, where it can take care of basic operating system tasks, send emails and more. With WatchOS 3 and the new Apple Watch, Siri is finally faster on the wrist. And with Apple’s Q-tip-looking AirPods arriving in October, Siri can whisper sweet nothings in your inner ear with unprecedented wireless freedom. Think Joaquin Phoenix’s earpiece in the movie “Her.”
The groundwork is laid for an AI assistant to stake a major claim in your life, and finally save you time by doing menial tasks. But the smarter Siri becomes in some places, the dumber it seems in others—specifically compared with Google’s and Amazon’s voice assistants. If I hear “I’m sorry, Joanna, I’m afraid I can’t answer that” one more time…
YORKTOWN HEIGHTS, N.Y., Sept. 20, 2016 /PRNewswire/ — IBM Research (NYSE: IBM) today announced a multi-year collaboration with the Department of Brain & Cognitive Sciences at MIT to advance the scientific field of machine vision, a core aspect of artificial intelligence. The new IBM-MIT Laboratory for Brain-inspired Multimedia Machine Comprehension’s (BM3C) goal will be to develop cognitive computing systems that emulate the human ability to understand and integrate inputs from multiple sources of audio and visual information into a detailed computer representation of the world that can be used in a variety of computer applications in industries such as healthcare, education, and entertainment.
The BM3C will address technical challenges around both pattern recognition and prediction methods in the field of machine vision that are currently impossible for machines alone to accomplish. For instance, humans watching a short video of a real-world event can easily recognize and produce a verbal description of what happened in the clip as well as assess and predict the likelihood of a variety of subsequent events, but for a machine, this ability is currently impossible.
Satya Nadella on Microsoft’s new age of intelligence — from fastcompany.com by Harry McCracken How the software giant aims to tie everything from Cortana to Office to HoloLens to Azure servers into one AI experience.
Excerpt:
“Microsoft was born to do a certain set of things. We’re about empowering people in organizations all over the world to achieve more. In today’s world, we want to use AI to achieve that.”
That’s Microsoft CEO Satya Nadella, crisply explaining the company’s artificial-intelligence vision to me this afternoon shortly after he hosted a keynote at Microsoft’s Ignite conference for IT pros in Atlanta. But even if Microsoft only pursues AI opportunities that it considers to be core to its mission, it has a remarkably broad tapestry to work with. And the examples that were part of the keynote made that clear.
Virtual reality technology holds enormous potential to change the future of a number of fields, from medicine and business to architecture and manufacturing.
Psychologists and other medical professionals are using VR to heighten traditional therapy methods and find effective solutions for treatments of PTSD, anxiety and social disorders. Doctors are employing VR to train medical students in surgery, treat patients’ pains and even help paraplegics regain body functions.
In business, a variety of industries are benefiting from VR. Carmakers are creating safer vehicles, architects are constructing stronger buildings and even travel agencies are using it to simplify vacation planning.
Google has unveiled a new interactive online exhibit that takes users on a tour of 10 Downing Street in London — home of the U.K. Prime Minister.
The building has served as home to countless British political leaders, from Winston Churchill and Margaret Thatcher through to Tony Blair and — as of a few months ago — Theresa May. But, as you’d expect in today’s security-conscious age, gaining access to the residence isn’t easy; the street itself is gated off from the public. This is why the 10 Downing Street exhibit may capture the imagination of politics aficionados and history buffs from around the world.
The tour features 360-degree views of the various rooms, punctuated by photos and audio and video clips.
In a slightly more grounded environment, the HoloLens is being used to assist technicians in elevator repairs.
Elevators are such a regular part of our lives that their importance is rarely recognized…until they stop working as they should. ThyssenKrupp AG, one of the largest elevator suppliers, recognizes how essential they are and how even the simplest malfunction can disrupt the lives of millions. As announced on its blog, Microsoft is partnering with ThyssenKrupp to equip 24,000 of its technicians with HoloLens.
Insert from DSC re: the above piece re: HoloLens:
Will technical communicators need to augment their skillsets? It appears so.
But in a world where no moment is too small to record with a mobile sensor, and one in which time spent in virtual reality keeps going up, interesting parallels start to emerge with our smartphones and headsets.
Let’s look at how the future could play out in the real world by observing three key drivers: VR video adoption, mobile-video user needs and the smartphone camera rising tide.
“Individuals with autism may become overwhelmed and anxious in social situations,” research clinician Dr Nyaz Didehbani said.
“The virtual reality training platform creates a safe place for participants to practice social situations without the intense fear of consequence,” said Didehbani.
The participants who completed the training demonstrated improved social cognition skills and reported better relationships, researchers said.
From DSC:
The articles listed in this PDF document demonstrate the exponential pace of technological change that many nations across the globe are currently experiencing and will likely be experiencing for the foreseeable future. As we are no longer on a linear trajectory, we need to consider what this new trajectory means for how we:
Educate and prepare our youth in K-12
Educate and prepare our young men and women studying within higher education
One thought that comes to mind…when we’re moving this fast, we need to be looking upwards and outwards into the horizons — constantly pulse-checking the landscapes. We can’t be looking down or be so buried in our current positions/tasks that we aren’t noticing the changes that are happening around us.
Soon, hundreds of millions of mobile users in China will have direct access to an augmented reality platform on their smartphones.
Baidu, China’s largest search engine, unveiled an AR platform today called DuSee that will allow China’s mobile users the opportunity to test out smartphone augmented reality on their existing devices. The company also detailed that they plan to integrate the technology directly into their flagship apps, including the highly popular Mobile Baidu search app.
From DSC: With “a billion iOS devices out in the world,” I’d say that’s a good, safe call…at least for one of the avenues/approaches via which AR will be offered.
Recently it’s appeared that augmented reality (AR) is gaining popularity as a professional platform, with Visa testing it as a brand new e-commerce solution and engineering giant Aecom piloting a project that will see the technology used in their construction projects across three continents. Now, it’s academia’s turn with Deakin University in Australia announcing that it plans to use the technology as a teaching tool in its medicine and engineering classrooms.
As reported by ITNews, AR technology will be introduced to Deakin University’s classes from December, with the first AR apps to be used during the university’s summer programme which runs from November to March, before the technology is distributed more widely in the first semester of 2017.
Creating realistic interactions with objects and people in virtual reality is one of the industry’s biggest challenges right now, but what about for augmented reality?
That’s an area that researchers from Massachusetts Institute of Technology’s (MIT’s) Computer Science and Artificial Intelligence Laboratory (CSAIL) have recently made strides in with what they call Interactive Dynamic Video (IDV). First designed with video in mind, PhD student Abe Davis has created a unique concept that could represent a way to not only interact with on-screen objects, but for those objects to also realistically react to the world around them.
If that sounds a little confusing, then we recommend taking a look at…
Venice Architecture Biennale 2016: augmented reality will revolutionise the architecture and construction industries according to architect Greg Lynn, who used Microsoft HoloLens to design his contribution to the US Pavilion at the Venice Biennale (+ movie).
Augmented Reality In Healthcare Will Be Revolutionary — from medicalfuturist.com Augmented reality is one of the most promising digital technologies at present – look at the success of Pokémon Go – and it has the potential to change healthcare and everyday medicine completely for physicians and patients alike.
Excerpt:
Nurses can find veins more easily with augmented reality
The start-up company AccuVein is using AR technology to make both nurses’ and patients’ lives easier. AccuVein’s marketing specialist, Vinny Luciano, said 40% of IVs (intravenous injections) miss the vein on the first stick, with the numbers getting worse for children and the elderly. AccuVein’s handheld scanner projects over the skin and shows nurses and doctors where veins are in the patient’s body. Luciano estimates that it has been used on more than 10 million patients, making finding a vein on the first stick 3.5x more likely. Such technologies could assist healthcare professionals and extend their skills.
But even with a wealth of hardware partners over the years, Urbach says he’d never tried a pair of consumer VR glasses that could effectively trick his brain until he began working with Osterhout Design Group (ODG).
ODG has previously made military night-vision goggles, and enterprise-focused glasses that overlay digital objects onto the real world. But now the company is partnering with OTOY, and will break into the consumer AR/VR market with a model of glasses codenamed “Project Horizon.”
The glasses work by using a pair of micro OLED displays to reflect images into your eyes at 120 frames per second. And the quality blew Urbach away, he tells Business Insider.
“You could overlay images onto the real world in a way that didn’t appear ‘ghost-like.’ We have the ability to do true opacity matching,” he says.
Live streaming VR events continue to make the news. First, it was the amazing Reggie Watts performance on AltspaceVR. Now the startup Rivet has launched an iOS app (sorry, Android still to come) for live streams of concerts. As the musician, record producer and visual artist Brian Eno once said,
You can’t really imagine music without technology.
In the near future, we may not be able to imagine a live performance without the option of a live stream in virtual reality.
While statistics on VR use in K-12 schools and colleges have yet to be gathered, the steady growth of the market is reflected in the surge of companies (including zSpace, Alchemy VR and Immersive VR Education) solely dedicated to providing schools with packaged educational curriculum and content, teacher training and technological tools to support VR-based instruction in the classroom. Myriad articles, studies and conference presentations attest to the great success of 3D immersion and VR technology in hundreds of classrooms in educationally progressive schools and learning labs in the U.S. and Europe.
…
Much of this early foray into VR-based learning has centered on the hard sciences — biology, anatomy, geology and astronomy — as the curricular focus and learning opportunities are notably enriched through interaction with dimensional objects, animals and environments. The World of Comenius project, a biology lesson at a school in the Czech Republic that employed a Leap Motion controller and specially adapted Oculus Rift DK2 headsets, stands as an exemplary model of innovative scientific learning.
In other areas of education, many classes have used VR tools to collaboratively construct architectural models, recreations of historic or natural sites and other spatial renderings. Instructors also have used VR technology to engage students in topics related to literature, history and economics by offering a deeply immersive sense of place and time, whether historic or evolving.
“Perhaps the most utopian application of this technology will be seen in terms of bridging cultures and fostering understanding among young students.”
The promise of 5G will fuel growth in video streaming and IoT devices, report claims — from digitaltrends.com by Christian de Looper Excerpt:
The smartphone has largely taken over our digital lives, but if the Ericsson Mobility Report is anything to go by, mobile devices and other smart gadgets will continue to grow in prominence over the course of the next decade. The Internet of Things, video, and mobile internet use are all expected to rise in prominence. According to Ericsson, IoT devices are set to overtake mobile in the connected devices category by 2018. The IoT space will maintain a hefty compound annual growth rate of 23 percent between 2015 and 2021. Part of this growth has to do with the introduction of 5G networks, which are expected to launch at some point in 2020.
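It is worth pausing on what a 23 percent compound annual growth rate actually means over the report's six-year window: the base multiplies by 1.23 each year, which compounds to roughly 3.5x overall.

```python
# Sanity-checking the report's figure: 23% CAGR over the six years
# from 2015 to 2021 compounds to about a 3.5x increase.
growth_factor = 1.23 ** (2021 - 2015)
print(round(growth_factor, 2))  # 3.46 -- roughly three and a half times
```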
The SIIA CODiE Awards for 2016 — with thanks to Neha Jaiswal from uCertify for this resource; uCertify, as you will see, did quite well
Since 1986, the SIIA CODiE Awards have recognized more than 1,000 software and information companies for achieving excellence. The CODiE Awards remain the only peer-recognized program in the content, education, and software industries so each CODiE Award win serves as incredible market validation for a product’s innovation, vision, and overall industry impact.
From DSC:
Below are some questions and thoughts that are going through my mind:
Will “class be in session” soon on tools like Prysm & Bluescape?
Will this type of setup be the next platform that we’ll use to meet our need to be lifelong learners? That is, will what we know of today as Learning Management Systems (LMS) and Content Management Systems (CMS) morph into this type of setup?
Via platforms/operating systems like tvOS, will our connected TVs turn into much more collaborative devices, allowing us to contribute content with learners from all over the globe?
Prysm is already available on mobile devices, and what we consider a television continues to morph.
Will second and third screens be used in such setups? What functionality will be assigned to the main/larger screens? To the mobile devices?
Will colleges and universities innovate into such setups? Or will organizations like LinkedIn.com/Lynda.com lead in this space? Or will it be a bit of both?
How will training, learning and development groups leverage these tools/technologies?
Are there some opportunities for homeschoolers here?
Along these lines, here are some videos/images/links for you:
To enhance the Prysm Visual Workplace, Prysm today announced an integration with Microsoft OneDrive for Business and Office 365. Using the OneDrive for Business API from Microsoft, Prysm has made it easy for customers to connect Prysm to their existing OneDrive for Business environments to make it a seamless experience for end users to access, search for, and sync with content from OneDrive for Business. Within a Prysm Visual Workplace project, users may now access, work within and download content from Office 365 using Prysm’s built-in web capabilities.
We have now been Bring Your Own Device (BYOD) for three years, and boy, do the students bring it. They bring it all! We have iPads, Surface, iPhones, Droids, Chromebooks, Macs, and PC laptops. Here’s my current thinking.
Music is for everyone. So this year for Music In Our Schools month, we wanted to make learning music a bit more accessible to everyone by using technology that’s open to everyone: the web. Chrome Music Lab is a collection of experiments that let anyone, at any age, explore how music works. They’re collaborations between musicians and coders, all built with the freely available Web Audio API. These experiments are just a start. Check out each experiment to find open-source code you can use to build your own.
I love the School Report scheme that the BBC run via Newsround. We all remember the Newsrounds of our youth. For me it was John Craven who made me watch it whenever it was on. It was this report I saw recently on eight things teachers should learn, which got me thinking about eight things I thought teachers should learn about edtech.
My work sees me regularly helping teachers learn different things related to the use of technology and so in this post, I’m going to talk about the eight things I think teachers should learn with #edtech to help support their use of technology to enhance learning in the classroom.
As we move toward interacting more with students who have an individualized education program (IEP) indicating that they need additional time on tests and quizzes or just need to deal with life issues, it is imperative that the learning management system (LMS) depended upon by an instructor and student alike be properly configured for such accommodations. Canvas and Moodle are currently two of the most popular learning management systems, and both offer the ability to make adjustments to quiz functions within the course without compromising the overall structure of the course. In this article, we will examine how to do so and offer some tips on situations where they are relevant.
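Whichever LMS is used, the extra-time accommodation itself reduces to simple arithmetic on the quiz's base time limit; the per-system work is finding where to enter the adjusted value. A quick illustrative helper (the 1.5x default is just a common example of a time-and-a-half IEP provision, not a rule):

```python
# Illustrative only: computing an extended quiz time limit for a student
# whose IEP grants additional time (e.g., time and a half).
def accommodated_minutes(base_minutes, multiplier=1.5):
    """Return the extended time limit given an IEP time multiplier."""
    return round(base_minutes * multiplier)

print(accommodated_minutes(60))       # 90 -- time and a half on a 1-hour quiz
print(accommodated_minutes(45, 2.0))  # 90 -- double time on a 45-minute quiz
```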
[The] Chrome web store is packed full of all kinds of educational apps and extensions some of which are also integrated with Google Drive. For those of you looking for a handy resource of Chrome apps to use with students in class, check out this comprehensive chart. In today’s post we are sharing with you a collection of some practical Chrome extensions to unleash learners creativity. Using these resources, students will be able to engage in a number of creative literacy activities that will allow them to multimodally communicate their thoughts, share their ideas and develop new learning skills.
How do you work technology into the pedagogy, instead of just using something cool? That task can be especially daunting in language arts literacy classrooms where reading and writing skill development is the crux of daily lessons. However, as 1:1 technology initiatives roll out, integrating technology into the classroom is our reality.
With hundreds of sites, apps, Chrome extensions, and platforms available, choosing the right ones can seem overwhelming. As an eighth-grade language arts teacher, I’ve experienced this myself. Following are four tools that can help provide immediate formative assessment data as well as top-of-the-rotation feedback to help students develop personal learning goals.
If, like my school, you’re in a “Chromebook District,” these suggested tools will work well because all integrate perfectly when you sign in with your Google ID, limiting the need for multiple passwords. This saves a lot of student confusion, too.
This giggly play session actually was a serious math lesson about big and small and non-standard measurements. Dreamed up by Richardson and kindergarten teacher Carol Hunt, it aims to get the children to think of animal steps as units of measurement, using them to mark how many it takes each animal to get from a starting line to the target.
Teachers call such melding of art and traditional subjects “art integration,” and it’s a new and increasingly popular way of bringing the arts into the classroom. Instead of art as a stand-alone subject, teachers are using dance, drama and the visual arts to teach a variety of academic subjects in a more engaging way.
Paul Pattison and Luke Minaker knew they were onto something when they got an email from the mother of a nine-year-old who read the first instalment of their interactive story, Weirdwood Manor.
“She wrote that she couldn’t get her son to pick up a book,” said Pattison, technical director of All Play No Work, producer of the iPad app. “She got the app for her son and he went through it in two nights. He finished both books.
“And then because we don’t have book 3 out yet, unprompted by her he went over to the bookshelf and pulled off a paperback and started reading chapter books again.”
Every year, Microsoft holds a developer event called “Build.” And recently, those events have gone from snoozers to exciting showcases. Microsoft has a winner with Windows 10 (as long as you ignore the phones), a robust personal assistant in Cortana (that works just fine on a laptop), and a wild holographic future to plan with HoloLens. It’s a lot to take in, and at this year’s Build, Microsoft gave us updates on all of it. And a few surprises.
Going in, we weren’t totally sure what would be coming next for Windows 10, but it turns out there’s a lot that Microsoft has planned. It’s not just that there are new apps, there are also new bots, which will help people handle all sorts of small tasks. In fact, those bots and Microsoft’s vision of how they should work stole the entire show. Windows, Xbox: you’re cool, but the future is bots.
To that end, Microsoft has published a new Bot Framework, which makes it easier to build chatbots using either C# or Node.js. Working with the tools isn’t so easy that anyone could do it, but they can help reduce some of the difficulties of conversing with a computer.
It was one of the main announcements from Nadella’s keynote address at Microsoft’s Build developer conference Wednesday.
Also see:
Microsoft’s Build 2016 message: ‘we love Cortana’ (but should users?) — from thenextweb.com by Nate Swanner Excerpt:
Build 2016 has one clear takeaway: Cortana is what matters, at least for now. At almost every product or service announcement at this morning’s keynote, Microsoft made a point to mention that it would also work with Cortana. At a deep dive event for press later in the day, Microsoft further highlighted its commitment to the digital assistant.
From DSC: Questions/relevance for those working higher ed:
Are Computer Science programs able to keep up with the pace of these Human Computer Interaction (HCI)-related changes? The changes in AI/cognitive computing? Are courses being created to address these new skills? These developments also impact those teaching user experience design, application/product design, and more.
How will such personal assistants be used by the students? By faculty members?