The digital era has utterly changed the way readers interact with the news.
Traditional news outlets struggle to remain relevant as the media sector’s influence shifts online.
Journalism in the U.S. faces a number of challenges that blockchain technology could potentially address, and possibly solve, if the technology can actually deliver on its promises.
In 2018, journalism has moved into uncharted waters as the industry comes up against issues stemming from the continued digital migration of news organizations.
Because blockchain functions as a platform for peer-to-peer transactions, a few news organizations believe the technology will finally enable micropayments to be widely adopted in the U.S.
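To make the peer-to-peer idea concrete, here is a toy, hedged sketch (not any publisher’s actual system) of micropayments recorded as hash-linked ledger entries, so each small reader-to-outlet payment can be verified without a central intermediary; the reader, outlet, and amounts are invented.

```python
# Toy illustration only: a hash-linked list of micropayments (all names/amounts invented).
import hashlib, json

def make_entry(payment: dict, prev_hash: str) -> dict:
    """Append-only ledger entry whose hash covers the payment and the previous hash."""
    entry = {"payment": payment, "prev_hash": prev_hash}
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return entry

ledger = [make_entry({"from": "reader42", "to": "daily-herald", "usd": 0.05, "article": "a1"}, "0" * 64)]
ledger.append(make_entry({"from": "reader42", "to": "daily-herald", "usd": 0.05, "article": "a2"}, ledger[-1]["hash"]))

# Tampering with an earlier payment would change its hash and break every later link.
assert ledger[1]["prev_hash"] == ledger[0]["hash"]
```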
Global installed base of smart speakers to surpass 200 million in 2020, says GlobalData
The global installed base for smart speakers will hit 100 million early next year, before surpassing the 200 million mark at some point in 2020, according to GlobalData, a leading data and analytics company.
The company’s latest report, ‘Smart Speakers – Thematic Research’, states that nearly every leading technology company is either already producing a smart speaker or developing one, with Facebook the latest to enter the fray (launching its Portal device this month). The appetite for smart speakers is also not limited by geography, with China in particular emerging as a major marketplace.
Ed Thomas, Principal Analyst for Technology Thematic Research at GlobalData, comments: “It is only four years since Amazon unveiled the Echo, the first wireless speaker to incorporate a voice-activated virtual assistant. Initial reactions were muted but the device, and the Alexa virtual assistant it contained, quickly became a phenomenon, with the level of demand catching even Amazon by surprise.”
Smart speakers give companies like Amazon, Google, Apple, and Alibaba access to a vast amount of highly valuable user data. They also let users get comfortable interacting with artificial intelligence (AI) tools in general, and virtual assistants in particular, increasing the likelihood that they will use them in other situations. Finally, they lock customers into a broader ecosystem, making it more likely that they will buy complementary products or access other services, such as online stores.
Thomas continues: “Smart speakers, particularly lower-priced models, are gateway devices, in that they give consumers the opportunity to interact with a virtual assistant like Amazon’s Alexa or Google’s Assistant, in a “safe” environment. For tech companies serious about competing in the virtual assistant sector, a smart speaker is becoming a necessity, hence the recent entry of Apple and Facebook into the market and the expected arrival of Samsung and Microsoft over the next year or so.”
In terms of the competitive landscape for smart speakers, Amazon was the pioneer and is still a dominant force, although its first-mover advantage has been eroded over the last year or so. Its closest challenger is Google, but neither company is present in the fastest-growing geographic market, China. Alibaba is the leading player there, with Xiaomi also performing well.
Thomas concludes: “With big names like Samsung and Microsoft expected to launch smart speakers in the next year or so, the competitive landscape will continue to fluctuate. It is likely that we will see two distinct markets emerge: the cheap, impulse-buy end of the spectrum, used by vendors to boost their ecosystems; and the more expensive, luxury end, where greater focus is placed on sound quality and aesthetics. This is the area of the market at which Apple has aimed the HomePod and early indications are that this is where Samsung’s Galaxy Home will also look to make an impact.”
Alphabet’s Google has sold millions of voice-enabled speakers, but it inched toward the future at a Tuesday launch event when it introduced the Home Hub smart screen.
Google isn’t the first company to roll out a screen-enabled home device with voice-assistant technology: Amazon.com released its Echo Show in July 2017. Meanwhile, Lenovo has gotten good reviews for its Smart Display, and Facebook introduced the Portal on Monday.
For the most part, though, consumers have stuck to voice-only devices, and it will be up to Google and its rivals to convince them that an added screen is worth paying for. They’ll also have to reassure consumers that they can trust big tech to maintain their privacy, an admittedly harder task these days after recent security issues at Google and Facebook.
Amazon was right to realize early on that consumers aren’t always comfortable buying items they can’t even see pictures of, and that it’s hard to remember directions you’ve heard but not seen.
Google has announced the Google Hub, its first smart home speaker with a screen. It’s available for pre-order at Best Buy, Target, and Walmart for $149. The Hub will be released on October 22.
The Hub has a 7″ touchscreen and sits on a base with a built-in speaker and microphones, which you can use to play music, watch videos, get directions, and control smart home accessories with your voice.
Its biggest advantage is its ability to hook into Google’s first-party services, like YouTube and Google Maps, which none of its competitors can use.
If you’re an Android or Chromecast user, trust Google more than Amazon, or want a smaller smart home speaker with a screen, the Google Hub is now your best bet.
Google is going to shut down the consumer version of Google+ over the next 10 months, the company writes in a blog post today. The decision follows the revelation of a previously undisclosed security flaw, remedied in March 2018, that exposed users’ profile data.
Mae Jemison, the first woman of color to go into space, stood in the center of the room and prepared to become digital. Around her, 106 cameras captured her image in 3-D, which would later render her as a life-sized hologram when viewed through a HoloLens headset.
Jemison was recording what would become the introduction for a new exhibit at the Intrepid Sea, Air, and Space Museum, which opens tomorrow as part of the Smithsonian’s annual Museum Day. In the exhibit, visitors will wear HoloLens headsets and watch Jemison materialize before their eyes, taking them on a tour of the Space Shuttle Enterprise—and through space history. They’re invited to explore artifacts both physical (like the Enterprise) and digital (like a galaxy of AR stars) while Jemison introduces women throughout history who have made important contributions to space exploration.
Interactive museum exhibits like this are becoming more common as augmented reality tech becomes cheaper, lighter, and easier to create.
Using either an Oculus Go standalone device or a mobile Gear VR headset, users will be able to log in to the Oculus Venues app and join other users for an immersive live stream of various developer keynotes and adrenaline-pumping esports competitions.
From DSC: What are the ramifications of this for the future of webinars, teaching and learning, online learning, MOOCs and more…?
Apple’s iOS 12 has finally landed. The big update appeared for everyone on Monday, Sept. 17, and hiding within are some pretty amazing augmented reality upgrades for iPhones, iPads, and iPod touches. We’ve been playing with them ever since the iOS 12 beta launched in June, and here are the things we learned that you’ll want to know about.
For now, here’s everything AR-related that Apple has included in iOS 12. There are some new features aimed at pleasing AR fanatics as well as hooking those new to AR into finally getting with the program. But all of the new AR features rely on ARKit 2.0, the latest version of Apple’s augmented reality framework for iOS.
In a pilot program at Berkeley College, members of a Virtual Reality Faculty Interest Group tested the use of virtual reality to immerse students in a variety of learning experiences. During winter 2018, seven different instructors in nearly as many disciplines used inexpensive Google Cardboard headsets along with apps on smartphones to virtually place students in North Korea, a taxicab and other environments as part of their classwork.
Participants used free mobile applications such as Within, the New York Times VR, Discovery VR, Jaunt VR and YouTube VR. Their courses included critical writing, international business, business essentials, medical terminology, international banking, public speaking and crisis management.
STEM students in scientific disciplines such as biochemistry and neuroscience are often required by their degree programs to spend a certain amount of time in an official laboratory environment. Unfortunately, crowded universities and the rise of online education have made it difficult for these innovators-in-training to access properly equipped labs and log their necessary hours.
Cue Google VR Labs, a series of comprehensive virtual lab experiences available on the Google Daydream platform. Developed as part of a partnership between Google and simulation education company Labster, the in-depth program boasts 30 interactive lab experiences in which biology students can engage in a series of hands-on scientific activities in a realistic environment.
These activities include everything from the use of practical tools, such as DNA sequencers and microscopes, to reality-bending experiences only possible in a virtual environment, like traveling to the surface of the newly discovered Astakos IV exoplanet or examining and altering DNA at a molecular level.
Overhyped by some, drastically underestimated by others, few emerging technologies have generated as much digital ink as virtual reality (VR), augmented reality (AR), and mixed reality (MR). Still lumbering through the novelty phase and roller-coaster hype cycles, the technologies are only just beginning to show signs of real-world usefulness, with a new generation of hardware and software applications aimed at the enterprise and at end users like you. On the line is what could grow to be a $108 billion AR/VR industry as soon as 2021. Here’s what you need to know.
The reason is that VR environments by nature demand a user’s full attention, which makes the technology poorly suited to real-life social interaction outside a digital world. AR, on the other hand, has the potential to act as an on-call co-pilot to everyday life, seamlessly integrating into daily real-world interactions. This will become increasingly true with the development of the AR Cloud.
The AR Cloud
Described by some as the world’s digital twin, the AR Cloud is essentially a digital copy of the real world that can be accessed by any user at any time.
For example, it won’t be long before whatever device I have on me at a given time (say, a smartphone or wearable) will be equipped to tell me all I need to know about a building just by training a camera on it (GPS is operating as a poor man’s AR Cloud at the moment).
What the internet is for textual information, the AR Cloud will be for the visible world. Whether it will be open source or controlled by a company like Google is a hotly contested issue.
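To picture what even a “poor man’s AR Cloud” looks like in code, here is a minimal, hypothetical sketch of a location-keyed annotation lookup; the anchors, coordinates, and notes are invented, and a real AR Cloud would rely on shared 3D feature maps rather than GPS alone.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Anchor:
    """A toy 'AR Cloud' entry: a piece of content pinned to a real-world location."""
    name: str
    lat: float
    lon: float
    note: str

# Hypothetical sample data standing in for a shared, cloud-hosted spatial index.
ANCHORS = [
    Anchor("Intrepid Museum", 40.7645, -73.9996, "Space Shuttle Enterprise exhibit inside"),
    Anchor("Example Tower", 40.7651, -73.9800, "Office building; opened 1931"),
]

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

def annotations_near(lat, lon, radius_m=150):
    """Return the annotations a device at (lat, lon) should overlay."""
    return [a for a in ANCHORS if distance_m(lat, lon, a.lat, a.lon) <= radius_m]

print(annotations_near(40.7646, -73.9990))  # only the nearby museum anchor
```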
Augmented reality will have a bigger impact on the market and our daily lives than virtual reality — and by a long shot. That’s the consensus of just about every informed commentator on the subject.
Despite all the hype in recent years about the potential for virtual reality in education, an emerging technology known as mixed reality has far greater promise in and beyond the classroom.
Unlike experiences in virtual reality, mixed reality interacts with the real world that surrounds us. Digital objects become part of the real world. They’re not just digital overlays, but interact with us and the surrounding environment.
If all that sounds like science fiction, a much-hyped device promises some of those features later this year. The device comes from a company called Magic Leap, and it uses a pair of goggles to project what the company calls a “lightfield” in front of the user’s face to make it look like digital elements are part of the real world. The expectation is that Magic Leap will render digital objects in a much more vivid, dynamic and fluid way than other mixed-reality devices such as Microsoft’s HoloLens.
Now think about all the other things you wish you had learned this way, and imagine a dynamic digital display that transforms your environment, even your living room or classroom, into an immersive learning lab. It is learning within a highly dynamic and visual context infused with spatial audio cues reacting to your gaze, gestures, gait, voice and even your heartbeat, all referenced to your geo-location in the world. Unlike what happens with VR, where our brain is tricked into believing the world and the objects in it are real, MR recognizes and builds a map of your actual environment.
Also see:
virtualiteach.com: Exploring The Potential for the Vive Focus in Education
On the big screen it’s become commonplace to see a 3D rendering or holographic projection of an industrial floor plan or a mechanical schematic. Casual viewers might assume the technology is science fiction and many years away from reality. But today we’re going to outline where these sophisticated virtual replicas, Digital Twins, are found in the real world, here and now. Essentially, we’re talking about a responsive simulated duplicate of a physical object or system. When we first wrote about Digital Twin technology, we mainly covered industrial applications and urban infrastructure like transit and sewers. However, the full scope of their presence is much broader, so now we’re going to break it up into categories.
Digital twin refers to a digital replica of physical assets (physical twin), processes and systems that can be used for various purposes.[1] The digital representation provides both the elements and the dynamics of how an Internet of Things device operates and lives throughout its life cycle.[2]
Digital twins integrate artificial intelligence, machine learning and software analytics with data to create living digital simulation models that update and change as their physical counterparts change. A digital twin continuously learns and updates itself from multiple sources to represent its near-real-time status, working condition or position. This learning system learns from itself, using sensor data that conveys various aspects of its operating condition; from human experts, such as engineers with deep and relevant industry domain knowledge; from other similar machines; from other similar fleets of machines; and from the larger systems and environment of which it may be a part. A digital twin also integrates historical data from past machine usage to factor into its digital model.
In various industrial sectors, twins are being used to optimize the operation and maintenance of physical assets, systems and manufacturing processes.[3] They are a formative technology for the Industrial Internet of Things, where physical objects can live and interact with other machines and people virtually.[4]
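As a minimal sketch of that pattern, the toy class below mirrors a single hypothetical pump from its sensor feed, keeps the historical readings, and applies a naive maintenance rule standing in for the learned models described above; the asset, fields, and thresholds are all invented.

```python
from dataclasses import dataclass, field
from statistics import mean
from typing import List

@dataclass
class PumpTwin:
    """A minimal digital-twin sketch: mirrors one (hypothetical) physical pump."""
    asset_id: str
    temperature_c: float = 0.0
    vibration_mm_s: float = 0.0
    history: List[dict] = field(default_factory=list)

    def ingest(self, reading: dict) -> None:
        """Update the twin's near-real-time state and retain historical data."""
        self.temperature_c = reading["temperature_c"]
        self.vibration_mm_s = reading["vibration_mm_s"]
        self.history.append(reading)

    def needs_maintenance(self) -> bool:
        """Naive stand-in for a learned rule: flag vibration well above its own history."""
        if len(self.history) < 5:
            return False
        baseline = mean(r["vibration_mm_s"] for r in self.history[:-1])
        return self.vibration_mm_s > 1.5 * baseline

twin = PumpTwin("pump-007")
for i in range(6):
    twin.ingest({"temperature_c": 60 + i, "vibration_mm_s": 2.0 + (3.0 if i == 5 else 0.0)})
print(twin.needs_maintenance())  # True: the latest vibration jumped well above its baseline
```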
Walt Disney Animation Studios is set to debut its first VR short film, Cycles, this August in Vancouver, the Association for Computing Machinery announced today. The plan is for it to be a headliner at the ACM’s computer graphics conference (SIGGRAPH), joining other forms of VR, AR and MR entertainment in the conference’s designated Immersive Pavilion.
This film is a first for both Disney and its director, Jeff Gipson, who joined the animation team in 2013 to work as a lighting artist on films like Frozen, Zootopia and Moana. The objective of this film, Gipson said in the statement released by ACM, is to inspire a deep emotional connection with the story.
“We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story,” said Gipson.
Oculus is officially launching Oculus TV, its dedicated hub for watching flatscreen video in virtual reality, on the standalone Oculus Go headset. Oculus TV was announced at last month’s F8 conference, and it ties together a lot of existing VR video options, highlighting Oculus’ attempts to emphasize non-gaming uses of VR. The free app features a virtual home theater with what Oculus claims is the equivalent of a 180-inch TV screen. It offers access to several streaming video services, including subscription-based platforms like Showtime and free web television services like Pluto TV as well as video from Oculus’ parent company Facebook.
Lehi, UT, May 29, 2018 (GLOBE NEWSWIRE) — Today, fast-growing augmented reality startup Seek is launching Seek Studio, the world’s first mobile augmented reality studio, allowing anybody with a phone (no coding expertise required) to create their own AR experiences and publish them for the world to see. With mobile AR now more readily available, average consumers are beginning to discover the magic that AR can bring to the palm of their hand, and Seek Studio turns everyone into a creator.
To make the process incredibly easy, Seek provides templates for users to create their first AR experiences. For example, a user can select a photo on their phone, outline the portion of the image they want turned into a 3D object, and then publish it to Seek. They can then share it with friends through popular social networks or text. A brand could also upload a 3D model of its product and publish it to Seek, giving customers an easy way to view that content in their own home. Seek Studio will launch with 6 templates and will release new ones every few days over the coming months, steadily expanding the complexity and types of experiences that can be created within the platform.
Apple unveiled its new augmented reality file format, as well as ARKit 2.0, at its annual WWDC developer conference today. Both will be available to users later this year with iOS 12.
The tech company partnered with Pixar to develop the AR file format Universal Scene Description (USDZ) to streamline the process of sharing and accessing augmented reality files. USDZ will be compatible with tools from Adobe, Autodesk, Sketchfab, PTC, and Quixel. Adobe CTO Abhay Parasnis spoke briefly on stage about how the file format will have native Adobe Creative Cloud support, and described it as the first time “you’ll be able to have what you see is what you get (WYSIWYG) editing” for AR objects.
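For readers who want to poke at the format underneath USDZ, Pixar’s USD ships with Python bindings (the usd-core package). The short sketch below follows the standard USD “hello world” pattern to author a tiny scene; the file name and prim paths are arbitrary, and packaging the result into a .usdz archive is done with separate tooling.

```python
# Minimal USD authoring sketch using Pixar's Python bindings (pip package: usd-core).
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("hello.usda")            # a plain-text USD layer
xform = UsdGeom.Xform.Define(stage, "/Hello")        # a transform prim
sphere = UsdGeom.Sphere.Define(stage, "/Hello/World")
sphere.GetRadiusAttr().Set(2.0)                      # author a simple attribute

stage.SetDefaultPrim(xform.GetPrim())
stage.GetRootLayer().Save()

# Converting hello.usda into a .usdz archive for AR viewing is typically handled by
# a separate packaging tool rather than in this script.
```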
With a starting focus on University-level education and vocational schools in sectors such as mechanical engineering, VivEdu branched out to K-12 education in 2018, boasting a comprehensive VR approach to learning science, technology, engineering, mathematics, and art for kids.
That roadmap, of course, is just beginning. Which is where the developers—and those arm’s-length iPads—come in. “They’re pushing AR onto phones to make sure they’re a winner when the headsets come around,” Miesnieks says of Apple. “You can’t wait for headsets and then quickly do 10 years’ worth of R&D on the software.”
Fully realizing this potential will require a broad ecosystem. Adobe is partnering with technology leaders to standardize interaction models and file formats in the rapidly growing AR ecosystem. We’re also working with leading platform vendors, open standards efforts like usdz and glTF, as well as media companies and the creative community, to deliver a comprehensive AR offering. usdz is now supported by Apple, Adobe, Pixar and many others, while glTF is supported by Google, Facebook, Microsoft, Adobe and other industry leaders.
A number of professionals would find the ability to quickly and easily create floor plans extremely useful. Estate agents, interior designers and event organisers would all no doubt value such a capability. For those users, the new feature added to iStaging’s VR Maker app might be of considerable interest.
The new VR Maker feature utilises Apple’s ARKit toolset to recognise surfaces such as walls and floors and to provide accurate measurements. By scanning each wall of a space, a floor plan can be produced quickly and easily.
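As a toy illustration of the geometry involved (not iStaging’s code), once a room’s corner points have been measured, floor area and perimeter fall out of simple formulas; the coordinates below are invented.

```python
from math import dist  # Python 3.8+

# Hypothetical corner points of an L-shaped room, in metres, listed in order.
corners = [(0.0, 0.0), (4.2, 0.0), (4.2, 3.1), (1.5, 3.1), (1.5, 5.0), (0.0, 5.0)]

def floor_area(pts):
    """Shoelace formula over the ordered corner points."""
    n = len(pts)
    return abs(sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
                   for i in range(n))) / 2

def perimeter(pts):
    """Sum of wall lengths around the room."""
    return sum(dist(pts[i], pts[(i + 1) % len(pts)]) for i in range(len(pts)))

print(f"area: {floor_area(corners):.1f} m2, perimeter: {perimeter(corners):.1f} m")
```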
I’ve interviewed nine investors who have provided their insights on how far the VR industry has come, as well as the risks and opportunities that exist in 2018 and beyond. We’ve asked them what opportunities are available in the space, and what tips they have for startups.
Augmented reality (AR) hasn’t truly permeated the mainstream consciousness yet, but the technology is swiftly being adopted by global industries. It’ll soon be unsurprising to find a pair of AR glasses strapped to a helmet on the head of a service worker, and RealWear, a company at the forefront of developing these headsets, thinks it’s on the edge of something big.
…
VOICE ACTIVATION
What’s most impressive about the RealWear HMT-1Z1 is how you control it. There are no touch-sensitive gestures you need to learn; it’s all managed with voice, and better yet, there’s no need for a hotword like “Hey Google.” The headset listens for certain commands. For example, from the home screen just say “show my files” to see files downloaded to the device, and you can go back to the home screen by saying “navigate home.” When you’re looking at documents, like schematics, you can say “zoom in” or “zoom out” to change focus. It worked almost flawlessly, even in a noisy environment like the AWE show floor.
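That interaction model is easy to picture in code. Below is a minimal, hypothetical sketch of a fixed-phrase command dispatcher in the spirit of what’s described above; the phrases come from the article, but the handlers and state are invented and this is not RealWear’s implementation.

```python
# Hypothetical command table: recognised phrase -> action on the device state.
COMMANDS = {
    "show my files": lambda ctx: ctx.update(screen="files"),
    "navigate home": lambda ctx: ctx.update(screen="home"),
    "zoom in":       lambda ctx: ctx.update(zoom=ctx["zoom"] * 2),
    "zoom out":      lambda ctx: ctx.update(zoom=max(1, ctx["zoom"] // 2)),
}

def handle_utterance(utterance: str, ctx: dict) -> dict:
    """Match a recognised phrase against the fixed command set; no hotword needed."""
    action = COMMANDS.get(utterance.strip().lower())
    if action:
        action(ctx)
    return ctx

state = {"screen": "home", "zoom": 1}
for phrase in ["show my files", "zoom in", "zoom in", "navigate home"]:
    handle_utterance(phrase, state)
print(state)  # {'screen': 'home', 'zoom': 4}
```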
David Scowsill’s experience in the aviation industry spans more than 30 years. He has worked for British Airways, American Airlines, easyJet, Manchester Airport, and most recently the World Travel and Tourism Council, giving him a unique perspective on how Augmented and Virtual Reality (AVR) can impact the aviation industry.
These technologies have the power to transform the entire aviation industry, providing benefits to companies and consumers. From check-in, baggage drop, ramp operations and maintenance, to pilots and flight attendants, AVR can accelerate training, improve safety, and increase efficiency.
London-based design studio Marshmallow Laser Feast is using VR to let us reconnect with nature. With headsets, you can see a forest through the eyes of different animals and experience the sensations they feel. Creative Director Ersinhan Ersin took the stage at TNW Conference last week to show us how and why they created the project, titled In the Eyes of the Animal.
Have you already taken a side when it comes to XR wearables? Whether you prefer AR glasses or VR headsets likely depends on the application you need. But wouldn’t it be great to have a device that could perform as both? As XR tech advances, we think crossovers will start popping up around the world.
A Beijing startup called AntVR recently rocketed past its Kickstarter goal for an AR/VR visor. Their product, the Mix, uses tinted lenses to toggle between real world overlay and full immersion. It’s an exciting prospect. But rather than digging into the tech (or the controversy surrounding their name, their marketing, and a certain Marvel character) we’re looking at what this means for how XR devices are developed and sold.
Google is bringing AR tech to its Expeditions app with a new update going live today. Last year, the company introduced its Google Expeditions AR Pioneer Program, which brought the app into classrooms across the country; with this launch, the functionality is available to all.
Expeditions will have more than 100 AR tours in addition to the 800 VR tours already available. Examples include experiences that let users explore Leonardo da Vinci’s inventions and interact with the human skeletal system.
At four recent VR conferences and events there was a palpable sense that despite new home VR devices getting the majority of marketing and media attention this year, the immediate promise and momentum is in the location-based VR (LBVR) attractions industry. The VR Arcade Conference (April 29th and 30th), VRLA (May 4th and 5th), the Digital Entertainment Group’s May meeting (May 1), and FoIL (Future of Immersive Leisure, May 16th and 17th) all highlighted a topic that suddenly no one can stop talking about: location-based VR (LBVR). With hungry landlords giving great deals for empty retail locations, VRcades, which are inexpensive to open (like Internet Cafes), are popping up all over the country. As a result, VRcade royalties for developers are on the rise, so they are shifting their attention accordingly to shorter experiences optimized for LBVR, which is much less expensive than building a VR app for the home.
Below are some excerpted slides from Mary Meeker’s presentation…
Also see:
20 important takeaways for learning world from Mary Meeker’s brilliant tech trends – from donaldclarkplanb.blogspot.com by Donald Clark
Excerpt:
Mary Meeker’s slide deck has a reputation for being the Delphic Oracle of tech. But, at 294 slides, it’s a lot to take in. Don’t worry, I’ve been through them all. It has tons of economic stuff that is of marginal interest to education and training, but there’s plenty to get our teeth into. We’re not immune to tech trends; indeed, we tend to follow in lock-step, just a bit later than everyone else. Among the data are lots of fascinating insights that point the way forward in terms of what we’re likely to be doing over the next decade. So here’s a really quick, top-end summary for folk in the learning game.
“Educational content usage online is ramping fast” with over 1 billion daily educational videos watched. There is evidence that use of the Internet for informal and formal learning is taking off.
An AI Bot for the Teacher — with thanks to Karthik Reddy for this resource
Artificial intelligence is the stuff of science fiction: if you are old enough, you will remember those Terminator movies from a good few years ago, in which mankind was systematically being wiped out by computers.
The truth is that AI, though not quite at Terminator level yet, is already a reality and something most of us have encountered. If you have ever used the virtual assistant on your phone or the Ask Google feature, you have used AI.
Some companies are using it as part of their sales and marketing strategies. An interesting example is Lowe’s Home Improvement, which, instead of chatbots, uses actual robots in its physical stores. These robots are capable of helping customers locate products that they’re interested in, taking a lot of the guesswork out of the entire shopping experience.
Of course, there are a lot of different potential applications for AI that are very interesting. Imagine an AI teaching assistant, for example. It could help grade papers, fact-check and assist with lesson planning, all to make our harassed teachers’ lives a little easier.
Chatbots could be programmed as tutors to help kids better understand core topics if they are struggling with them, ensuring that they don’t hold the rest of the class up. And, for kids who have a real affinity for the subject, chatbots could help them learn more about what they are interested in.
They could also help enhance long-distance training. Imagine if your students could get instant answers to basic questions through a simple chatbot. Sure, if they were still not getting it, they would come through to you; the chatbot cannot replace a real, live teacher, after all. But it could save you a lot of time and frustration.
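To make the tutor-bot idea concrete, here is a deliberately tiny sketch in which keyword matching stands in for real natural-language understanding; the questions, answers, and escalation rule are all invented.

```python
import re

# Hypothetical FAQ: if every keyword in a key appears in the question, return that answer.
FAQ = {
    frozenset({"photosynthesis"}): "Photosynthesis is how plants turn light, water and CO2 into sugar and oxygen.",
    frozenset({"mitosis"}): "Mitosis is cell division that produces two identical daughter cells.",
    frozenset({"assignment", "due"}): "The assignment deadline is posted on the course page.",
}

def tutor_bot(question: str) -> str:
    """Answer simple questions; escalate anything unrecognised to the human teacher."""
    words = set(re.findall(r"[a-z]+", question.lower()))
    for keywords, answer in FAQ.items():
        if keywords <= words:
            return answer
    return "I'm not sure about that one. I'll flag it for your teacher."

print(tutor_bot("When is the assignment due?"))
print(tutor_bot("Can you explain quantum tunnelling?"))
```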
Here, of course, we have only skimmed the surface of what artificial intelligence is capable of. Why not look through this infographic to see how different brands have been using this tech, and see what possible applications of it we might expect.
Amazon has a bunch of data on you, but you’ve provided it all over the years.
It has a record of everything you’ve purchased, hundreds of items it thinks you’ll like, everything you’ve asked Amazon Alexa and more.
You can’t download a single file that has all of your data, so we’ll show you how to find everything Amazon knows about you.
Excerpt:
Unlike Facebook, Twitter and Google, Amazon doesn’t offer an easy way to download a file of everything it knows about you. Instead, you’ll need to do some digging.
I did a bit of that for you, to show you an example of the sort of data Amazon might have on you if, like me, you use its products and services frequently.
We’ve already published posts showing the data that Facebook, Google and Twitter have compiled. Before we begin, here’s how to find out what those companies know about you:
As we watch major tech platforms evolve over time, it’s clear that companies like Facebook, Apple, Google and Amazon (among others) have created businesses that are having a huge impact on humanity — sometimes positive and other times not so much.
That suggests that these platforms have to understand how people are using them and when those people, or the companies themselves, are trying to manipulate them or use them for nefarious purposes. We can apply that same responsibility filter to individual technologies like artificial intelligence, and indeed to any advanced technology and the impact it could have on society over time.
…
We can be sure that Twitter’s creators never imagined a world where bots would be launched to influence an election when they created the company more than a decade ago. Over time though, it becomes crystal clear that Twitter, and indeed all large platforms, can be used for a variety of motivations, and the platforms have to react when they think there are certain parties who are using their networks to manipulate parts of the populace.
But it’s up to the companies that are developing the tech to recognize the responsibility that comes with great economic success, or simply with the impact that whatever they are creating could have on society.
5% picked tech when asked which industry had the most power and influence, well behind the U.S. government, Wall Street and Hollywood.
Respondents were much more likely to say sexual harassment was a major issue in Hollywood (49%) and government (35%) than in Silicon Valley (17%).
It is difficult for Americans to escape the technology industry’s influence in everyday life. Facebook Inc. reports that more than 184 million people in the United States log on to the social network daily, or roughly 56 percent of the population. According to the Pew Research Center, nearly three-quarters (73 percent) of all Americans and 94 percent of Americans ages 18-24 use YouTube. Amazon.com Inc.’s market value is now nearly three times that of Walmart Inc.
But when asked which geographic center holds the most power and influence in America, respondents in a recent Morning Consult survey ranked the tech industry in Silicon Valley far behind politics and government in Washington, finance on Wall Street and the entertainment industry in Hollywood.
Artificial Intelligence is the talk of the town. It has evolved past merely being a buzzword in 2016 to being used in a more practical manner in 2017. As 2018 rolls out, we will gradually notice AI transitioning into a necessity. We have prepared a detailed report on what we can expect from AI in the upcoming year. So sit back, relax, and enjoy the ride through the future. (Don’t forget to wear your VR headgear!)
Here are 18 things that will happen in 2018 that are either AI driven or driving AI:
Artificial General Intelligence may gain major traction in research.
We will turn to AI-enabled solutions to solve mission-critical problems.
Machine learning adoption in business will see rapid growth.
Safety, ethics, and transparency will become an integral part of AI application design conversations.
Mainstream adoption of AI on mobile devices.
Major research on data-efficient learning methods.
AI personal assistants will continue to get smarter.
The race to conquer the AI-optimized hardware market will heat up further.
We will see closer AI integration into our everyday lives.
The cryptocurrency hype will normalize and pave the way for AI-powered blockchain applications.
Deep learning will continue to play a significant role in AI development progress.
AI will be on both sides of the cybersecurity challenge.
Augmented reality content will be brought to smartphones.
Reinforcement learning will be applied to a large number of real-world situations.
Robotics development will be powered by deep reinforcement learning and meta-learning.
Rise in immersive media experiences enabled by AI.
A large number of organizations will use digital twins.
Inside AI — from inside.com
The year 2017 has been full of interesting news about Artificial Intelligence, so to close out the year, we’re doing two special retrospective issues covering the highlights.
Excerpt:
A Reality Check For IBM’s A.I. Ambitions. MIT Tech Review. This is a must-read piece about the failures, and continued promise, of Watson. Some of the press about Watson has made IBM appear behind some of the main tech leaders, but keep in mind that Google, Amazon, Facebook, and others don’t do the kinds of customer-facing projects IBM is doing with Watson. When you look at how the tech giants are positioned, I think IBM has been vastly underestimated, given that it has the one thing few others do: large-scale enterprise A.I. projects. Whether it all works today or not doesn’t matter. The experience and expertise IBM is building are a competitive advantage in a very young market where no other companies are doing these types of projects, projects that will soon enough be mainstream.
The Business of Artificial Intelligence. Harvard Business Review. This cover story for the latest edition of HBR explains why artificial intelligence is the most powerful general purpose technology to come around in a long time. It also looks into some of the key ways to think about applying A.I. at work, and how to expect the next phase of this technology to play out.
The Robot Revolution Is Coming. Just Be Patient. Bloomberg. We keep hearing that robots and A.I. are about to make us super productive. But when? Sooner than we think, according to this.
There are few electronic devices with which you cannot order a Domino’s pizza. When the craving hits, you can place an order via Twitter, Slack, Facebook Messenger, SMS, your tablet, your smartwatch, your smart TV, and even your app-enabled Ford. This year, the pizza monger added another ordering tool: If your home is one of the 20 million with a voice assistant, you can place a regular order through Alexa or Google Home. Just ask for a large extra-cheese within earshot, and voila—your pizza is in the works.
Amazon’s Alexa offers more than 25,000 skills—the set of actions that serve as applications for voice technology. Yet Domino’s is one of a relatively small number of brands that has seized the opportunity to enter your home by creating a skill of its own. Now that Amazon Echoes and Google Homes are in kitchens and living rooms across the country, they open a window into user behavior that marketers previously only dreamt of. But brands’ efforts to engage consumers directly via voice have been scattershot. The list of those that have tried is sparse: some banks; a couple of fast food chains; a few beauty companies; retailers here and there. Building a marketing plan for Alexa has been a risky venture. That’s because, when it comes to our virtual assistants, no one knows what the *&^& is going on.
But if 2017 was the year that Alexa hit the mainstream, 2018 will be the year that advertisers begin to take her seriously by investing time and money in figuring out how to make use of her.
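For the technically curious, the plumbing behind a custom skill is fairly simple: Alexa sends the brand’s backend a JSON request describing what was said, and the backend returns the speech to play. The sketch below is a minimal, hedged illustration of that request/response envelope; the intent and slot names (“OrderPizzaIntent”, “Size”) are made up, and this is not Domino’s actual skill.

```python
def handle_alexa_request(event: dict) -> dict:
    """Turn one (simplified) Alexa request envelope into a spoken response."""
    req = event["request"]

    if req["type"] == "LaunchRequest":
        speech = "Welcome back. What would you like to order?"
    elif req["type"] == "IntentRequest" and req["intent"]["name"] == "OrderPizzaIntent":
        size = req["intent"]["slots"].get("Size", {}).get("value", "large")
        speech = f"Okay, one {size} extra-cheese pizza is in the works."
    else:
        speech = "Sorry, I didn't catch that."

    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

# Example: the kind of trimmed-down payload a spoken order might produce.
sample = {"request": {"type": "IntentRequest",
                      "intent": {"name": "OrderPizzaIntent",
                                 "slots": {"Size": {"name": "Size", "value": "large"}}}}}
print(handle_alexa_request(sample)["response"]["outputSpeech"]["text"])
```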
8 emerging AI jobs for IT pros — from enterprisersproject.com by Kevin Casey
What IT jobs will be hot in the age of AI? Take a sneak peek at roles likely to be in demand.
Excerpt:
If you’re watching the impact of artificial intelligence on the IT organization, your interest probably starts with your own job. Can robots do what you do? But more importantly, you want to skate where the puck is headed. What emerging IT roles will AI create? We talked to AI and IT career experts to get a look at some emerging roles that will be valuable in the age of AI.
This past June, Fortune Magazine asked the CEOs of the Fortune 500 what they believed was the biggest challenge facing their companies. Their biggest concern for 2017: “The rapid pace of technological change,” said 73% of those polled, up from 64% in 2016. Cybersecurity came in a distant second, at 61%, even after all the mega hacks of the past year.
But apart from the Big 8 technology companies – Google, Facebook, Microsoft, Amazon, IBM, Baidu, Tencent, and Alibaba – business leaders, especially of earlier generations, may feel they don’t know enough about AI to make informed decisions.
For the first time, artificial intelligence has been used to discover two new exoplanets. One of the discoveries, made by NASA’s Kepler mission, brings the Kepler-90 solar system to a total of eight planets, the first solar system found with the same number of planets as our own.
A new Forrester Research report, Predictions 2018: Automation Alters The Global Workforce, outlines 10 predictions about the impact of AI and automation on jobs, work processes and tasks, business success and failure, and software development, cybersecurity, and regulatory compliance.
We will see a surge in white-collar automation, half a million new digital workers (bots) in the US, and a shift from manual to automated IT and data management. “Companies that master automation will dominate their industries,” Forrester says. Here’s my summary of what Forrester predicts will be the impact of automation in 2018:
Automation will eliminate 9% of US jobs but will create 2% more. In 2018, 9% of US jobs will be lost to automation, partly offset by a 2% growth in jobs supporting the “automation economy.” Specifically impacted will be back-office and administrative, sales, and call center employees. A wide range of technologies, from robotic process automation and AI to customer self-service and physical robots, will impact hiring and staffing strategies as well as create a need for new skills.
Your next entry-level compliance staffer will be a robot.
From DSC:
Are we ready for a net loss of 7% of jobs in our workforce due to automation — *next year*? Last I checked, it was November 2017, and 2018 will be here before we know it.
***Are we ready for this?!***
AS OF TODAY, can we reinvent ourselves fast enough given our current educational systems, offerings, infrastructures, and methods of learning?
My answer: No, we can’t. But we need to be able to — and very soon!
There are all kinds of major issues and ramifications when people lose their jobs, especially this many people and jobs! The ripple effects will be enormous and very negative unless we introduce new ways for people to learn new things, and quickly!
That’s why I’m big on trying to establish a next generation learning platform, such as the one that I’ve been tracking and proposing out at Learning from the Living [Class] Room. It’s meant to provide societies around the globe with a powerful, next generation learning platform — one that can help people reinvent themselves quickly, cost-effectively, conveniently, & consistently! It involves providing relevant, up-to-date streams of content that people can subscribe to — and drop at any time. It involves working in conjunction with subject matter experts who work with teams of specialists, backed up by suites of powerful technologies. It involves learning with others, at any time, from any place, at any pace. It involves more choice, more control. It involves blockchain-based technologies to feed cloud-based learner profiles and more.
But likely, bringing such a vision to fruition will require a significant amount of collaboration. In my mind, some of the organizations that should be at the table here include:
Some of the largest players in the tech world, such as Amazon, Google, Apple, IBM, Microsoft, and/or Facebook
Some of the vendors that already operate within the higher ed space — such as Salesforce.com, Ellucian, and/or Blackboard
Some of the most innovative institutions of higher education — including their faculty members, instructional technologists, instructional designers, members of administration, librarians, A/V specialists, and more
The U.S. Federal Government — for additional funding and the development of policies to make this vision a reality