Over the past two years, RealNetworks has developed a facial recognition tool that it hopes will help schools more accurately monitor who gets past their front doors. Today, the company launched a website where school administrators can download the tool, called SAFR, for free and integrate it with their own camera systems. So far, one school in Seattle, which Glaser’s kids attend, is testing the tool, and the state of Wyoming is designing a pilot program that could launch later this year. “We feel like we’re hitting something there can be a social consensus around: that using facial recognition technology to make schools safer is a good thing,” Glaser says.
From DSC: Personally, I’m very uncomfortable with where facial recognition is going in some societies. What starts off being sold as helpful for this or that application can quickly be abused and used to control citizens. For example, look at what’s happening in China already these days!
The above article talks about these techs being used in schools. Based upon history, I seriously question whether humankind can wisely handle the power of these types of technologies.
Here in the United States, I already sense a ton of cameras watching each of us all the time when we’re out in public spaces (such as when we are in grocery stores, or gas stations, or in restaurants or malls, etc.). What’s the unspoken message behind those cameras? What’s being stated by their very presence around us?
No. I don’t like the idea of facial recognition being in schools. I’m not comfortable with this direction. I can see the counter argument — that this tech could help reduce school shootings. But I think that’s a weak argument, as someone mentally unbalanced enough to be involved with a school shooting likely won’t be swayed/deterred by being on camera. In fact, one could argue that in some cases, being on the national news — with their face being plastered all over the nation — might even put gas on the fire.
Glaser, for one, welcomes federal oversight of this space. He says it’s precisely because of his views on privacy that he wants to be part of what is bound to be a long conversation about the ethical deployment of facial recognition. “This isn’t just sci-fi. This is becoming something we, as a society, have to talk about,” he says. “That means the people who care about these issues need to get involved, not just as hand-wringers but as people trying to provide solutions. If the only people who are providing facial recognition are people who don’t give a &*&% about privacy, that’s bad.”
Per this week’s Next e-newsletter from edsurge.com
Take the University of San Francisco, which deploys facial recognition software in its dormitories. Students still use their I.D. cards to swipe in, according to Edscoop, but the face of every person who enters a dorm is scanned and run through a database, and the system alerts the dorm attendant when an unknown person is detected. Online students are not immune: the technology is also used in many proctoring tools for virtual classes.
The tech raises plenty of tough issues. Facial-recognition systems have been shown to misidentify young people, people of color and women more often than white men. And then there are the privacy risks: “All collected data is at risk of breach or misuse by external and internal actors, and there are many examples of misuse of law enforcement data in other contexts,” a white paper by the Electronic Frontier Foundation reads.
It’s unclear whether such facial-scanners will become common at the gates of campus. But now that cost is no longer much of an issue for what used to be an idea found only in science fiction, it’s time to weigh the pros and cons of what such a system really means in practice.
Also see:
As facial recognition technology becomes pervasive, Microsoft (yes, Microsoft) issues a call for regulation — from techcrunch.com by Jonathan Shieber
Excerpt:
Technology companies have a privacy problem. They’re terribly good at invading ours and terribly negligent at protecting their own. And with the push by technologists to map, identify and index our physical as well as virtual presence with biometrics like face and fingerprint scanning, the increasing digital surveillance of our physical world is causing some of the companies that stand to benefit the most to call out to government to provide some guidelines on how they can use the incredibly powerful tools they’ve created. That’s what’s behind today’s call from Microsoft President Brad Smith for government to start thinking about how to oversee the facial recognition technology that’s now at the disposal of companies like Microsoft, Google, Apple and government security and surveillance services across the country and around the world.
In August, the U.S. Customs and Border Protection will roll out new technology that will scan the faces of drivers as they enter and leave the United States. For years, accomplishing that kind of surveillance through a car windshield has been difficult. But technology is quickly advancing. This system, activated by ambient light sensors, range finders and remote speedometers, uses smart cameras and AI-powered facial recognition technology to compare images in government files with people behind the wheel.
Biometric borders are just the beginning. Faceprints are quickly becoming our new fingerprints, and this technology is marching forward with haste. Faceprints are now so advanced that machine learning algorithms can recognize your unique musculatures and bone structures, capillary systems, and expressions using thousands of data points. All the features that make up a unique face are being scanned, captured and analyzed to accurately verify identities. New hairstyle? Plastic surgery? They don’t interfere with the technology’s accuracy.
Why you should care. Faceprints are already being used across China for secure payments. Soon, they will be used to customize and personalize your digital experiences. Our Future Today Institute modeling shows myriad near-future applications, including the ability to unlock your smart TV with your face. Retailers will use your face to personalize your in-store shopping experience. Auto manufacturers will start using faceprints to detect if drivers are under the influence of drugs or alcohol and prevent them from driving. It’s plausible that cars will soon detect if a driver is distracted and take the wheel using an auto-pilot feature. On a diet but live with others? Stash junk food in a drawer and program the lock to restrict your access. Faceprints will soon create opportunities for a wide range of sectors, including military, law enforcement, retail, manufacturing and security. But as with all technology, faceprints could lead to the loss of privacy and widespread surveillance.
It’s possible for both risk and opportunity to coexist. The point here is not alarmist hand-wringing, or pointless calls for cease-and-desist demands on the development and use of faceprint technology. Instead, it’s to acknowledge an important emerging trend––faceprints––and to think about the associated risks and opportunities for you and your organization well in advance. Approach biometric borders and faceprints with your (biometrically unique) eyes wide open.
Near-Futures Scenarios (2018 – 2028):
Optimistic: Faceprints make us safer, and they bring us back to physical offices and stores.
Pragmatic: As faceprint adoption grows, legal challenges mount.
In April, a U.S. federal judge ruled that Facebook must confront a class-action lawsuit alleging that its faceprint technology violates Illinois state privacy laws. Last year, a U.S. federal judge allowed a class-action suit to proceed against Shutterfly, alleging the company violated the Illinois Biometric Information Privacy Act, which requires companies to obtain written releases before collecting biometric data, including faces. Companies and device manufacturers, who are early developers but late to analyzing legal outcomes, are challenged to balance consumer privacy with new security benefits.
Catastrophic: Faceprints are used for widespread surveillance and authoritative control.
How AI is helping sports teams scout star players — from nbcnews.com by Edd Gent
Professional baseball, basketball and hockey are among the sports now using AI to supplement traditional coaching and scouting.
The workplace of the future will be marked by unprecedentedly advanced technologies, as well as a focus on incorporating artificial intelligence to drive higher levels of production with fewer resources. Employers and education stakeholders, noting the reality of this trend, are turning a reflective eye toward current students and questioning whether they will be workforce ready in the years to come.
This has become a significant concern for higher education executives, who find their business models could be disrupted as they fail to meet workforce demands. A 2018 Gallup-Northeastern University survey shows that of 3,297 U.S. citizens interviewed, only 22% with a bachelor’s degree said their education left them “well” or “very well prepared” to use AI in their jobs.
…
In his book “Robot-Proof: Higher Education in the Age of Artificial Intelligence,” Northeastern University President Joseph Aoun argued that for higher education to adapt to advanced technologies, it has to focus on lifelong learning, which he says prepares students for the future by fostering purposeful integration of technical literacies, such as coding and data literacy, with human literacies, such as creativity, ethics, cultural agility and entrepreneurship.
“When students combine these literacies with experiential components, they integrate their knowledge with real life settings, leading to deep learning,” Aoun told Forbes.
We’re going to talk to our technology, and everyone else’s too. Google proved that earlier this month with a demonstration of artificial intelligence that can hop on the phone to book a restaurant reservation or appointment at the hair salon.
Now it’s just a matter of who can build that technology fastest. To reach that goal, Microsoft has acquired conversational AI startup Semantic Machines for an undisclosed amount. Founded in 2014, the startup’s goal was to build AI that can converse with humans through speech or text, with the ability to be trained to converse in any language or on any subject.
A team of researchers from the State University of New York (SUNY) recently developed a method for detecting whether the people in a video are AI-generated. It looks like DeepFakes could meet its match.
What it means: Fear over whether computers will soon be able to generate videos that are indistinguishable from real footage may be much ado about nothing, at least with the currently available methods.
The SUNY team observed that the training method for creating AI that makes fake videos involves feeding it images – not video. This means that certain human physiological quirks – like breathing and blinking – don’t show up in computer-generated videos. So they decided to build an AI that uses computer vision to detect blinking in fake videos.
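The SUNY detector itself is a trained neural network, but the underlying signal is easy to illustrate. Below is a hypothetical pure-Python sketch (not the SUNY code) that flags clips in which nobody ever blinks, using the classic eye-aspect-ratio heuristic over per-frame eye landmarks; the thresholds and landmark ordering are illustrative assumptions.

```python
# Sketch: flag videos whose subject never blinks.
# The eye aspect ratio (EAR) drops sharply when the eye closes.

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered p1..p6
    around the eye contour. EAR = (|p2-p6| + |p3-p5|) / (2*|p1-p4|)."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, closed_thresh=0.2, min_frames=2):
    """Count blinks in a per-frame EAR series: a blink is a run of
    at least `min_frames` consecutive frames below the threshold."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < closed_thresh:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

def looks_synthetic(ear_series, fps=30):
    """A real person blinks roughly 15-20 times per minute, so zero
    blinks over a longer clip is a red flag."""
    seconds = len(ear_series) / fps
    return seconds > 10 and count_blinks(ear_series) == 0
```

In practice the landmarks would come from a face-landmark detector run on each frame; the heuristic fails gracefully, since a generator trained on blinking footage would simply pass this check.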
Without even knowing it, we are interacting with pragmatic AI day in and day out. It is used in the automated chatbots that answer our calls and questions and the customer service rep that texts with us on a retail site, providing a better and faster customer experience.
Below are four key categories of pragmatic AI and ways they are being applied today.
1. Speech Recognition And Natural Language Processing (NLP)
2. Predictive Analytics
3. Image Recognition And Computer Vision
4. Self-Driving Cars And Robots
Artificial intelligence (AI) is transforming the practice of law, and “data is the new oil” of the legal industry, panelist Dennis Garcia said at a recent American Bar Association conference. Garcia is an assistant general counsel for Microsoft in Chicago. Robert Ambrogi, a Massachusetts lawyer and blogger who focuses on media, technology, and employment law, moderated the program.
“The next generation of lawyers is going to have to understand how AI works” as part of the duty of competence, panelist Anthony E. Davis told the audience. Davis is a partner with Hinshaw & Culbertson LLP in New York.
…
Davis said AI will result in dramatic changes in law firms’ hiring and billing, among other things. The hourly billing model, he said, “makes no sense in a universe where what clients want is judgment.” Law firms should begin to concern themselves not with the degrees or law schools attended by candidates for employment but with whether they are “capable of developing judgment, have good emotional intelligence, and have a technology background so they can be useful” for long enough to make hiring them worthwhile, he said.
HomeCourt is built on tools announced by Federighi last summer, when he launched Apple’s bid to become a preferred playground for AI-curious developers. Known as Core ML, those tools help developers who’ve trained machine learning algorithms deploy them on Apple’s mobile devices and PCs.
At Apple’s Worldwide Developer Conference on Monday, Federighi revealed the next phase of his plan to enliven the app store with AI. It’s a tool called Create ML that’s something like a set of training wheels for building machine learning models in the first place. In a demo, training an image-recognition algorithm to distinguish different flavors of ice cream was as easy as dragging and dropping a folder containing a few dozen images and waiting a few seconds. In a session for developers, Apple engineers suggested Create ML could teach software to detect whether online comments are happy or angry, or predict the quality of wine from characteristics such as acidity and sugar content. Developers can use Create ML now but can’t ship apps using the technology until Apple’s latest operating systems arrive later this year.
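Create ML itself is Apple-only and Swift-based, but the train-from-labeled-examples workflow it wraps can be sketched in plain Python. The toy nearest-centroid text classifier below is entirely invented (it is not Apple's API) and mirrors the happy-vs-angry comment example: hand it labeled examples, it builds a model, and it predicts labels for new text.

```python
# Toy "train from labeled examples" classifier: bag-of-words
# features, one centroid per label, predict by best dot-product match.

from collections import Counter

def featurize(text):
    """Bag-of-words feature vector as a word-count mapping."""
    return Counter(text.lower().split())

def centroid(examples):
    """Average word counts across all examples of one label."""
    total = Counter()
    for ex in examples:
        total.update(featurize(ex))
    n = len(examples)
    return {w: c / n for w, c in total.items()}

def similarity(vec, cent):
    """Dot product between a feature vector and a centroid."""
    return sum(vec[w] * cent.get(w, 0.0) for w in vec)

class TinyTextClassifier:
    def __init__(self, labeled):
        # labeled: {"happy": [texts...], "angry": [texts...]}
        self.centroids = {lbl: centroid(txts) for lbl, txts in labeled.items()}

    def predict(self, text):
        vec = featurize(text)
        return max(self.centroids, key=lambda lbl: similarity(vec, self.centroids[lbl]))
```

Create ML's drag-and-drop flow is conceptually the same loop, with transfer-learned image or text features standing in for raw word counts.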
Lehi, UT, May 29, 2018 (GLOBE NEWSWIRE) — Today, fast-growing augmented reality startup Seek is launching Seek Studio, the world’s first mobile augmented reality studio, allowing anybody with a phone, and no coding expertise, to create their own AR experiences and publish them for the world to see. With mobile AR now more readily available, average consumers are beginning to discover the magic that AR can bring to the palm of their hand, and Seek Studio turns everyone into a creator.
To make the process incredibly easy, Seek provides templates for users to create their first AR experiences. As an example, a user can select a photo on their phone, outline the portion of the image they want turned into a 3D object and then publish it to Seek. They will then be able to share it with their friends through popular social networks or text. A brand could additionally upload a 3D model of their product and publish it to Seek, providing an experience for their customers to easily view that content in their own home. Seek Studio will launch with 6 templates and will release new ones every few days over the coming months to constantly improve the complexity and types of experiences possible to create within the platform.
Apple unveiled its new augmented reality file format, as well as ARKit 2.0, at its annual WWDC developer conference today. Both will be available to users later this year with iOS 12.
The tech company partnered with Pixar to develop the AR file format Universal Scene Description (USDZ) to streamline the process of sharing and accessing augmented reality files. USDZ will be compatible with tools like Adobe, Autodesk, Sketchfab, PTC, and Quixel. Adobe CTO Abhay Parasnis spoke briefly on stage about how the file format will have native Adobe Creative Cloud support, and described it as the first time “you’ll be able to have what you see is what you get (WYSIWYG) editing” for AR objects.
With a starting focus on University-level education and vocational schools in sectors such as mechanical engineering, VivEdu branched out to K-12 education in 2018, boasting a comprehensive VR approach to learning science, technology, engineering, mathematics, and art for kids.
That roadmap, of course, is just beginning. Which is where the developers—and those arm’s-length iPads—come in. “They’re pushing AR onto phones to make sure they’re a winner when the headsets come around,” Miesnieks says of Apple. “You can’t wait for headsets and then quickly do 10 years’ worth of R&D on the software.”
To fully realize the potential will require a broad ecosystem. Adobe is partnering with technology leaders to standardize interaction models and file formats in the rapidly growing AR ecosystem. We’re also working with leading platform vendors, open standards efforts like usdz and glTF, as well as media companies and the creative community, to deliver a comprehensive AR offering. usdz is now supported by Apple, Adobe, Pixar and many others, while glTF is supported by Google, Facebook, Microsoft, Adobe and other industry leaders.
There are a number of professionals who would find the ability to quickly and easily create floor plans to be extremely useful. Estate agents, interior designers and event organisers would all no doubt find such a capability to be extremely valuable. For those users, the new feature added to iStaging’s VR Maker app might be of considerable interest.
The new VR Maker feature utilises Apple’s ARKit toolset to recognise spaces, such as walls and floors, and can provide accurate measurements. By scanning each wall of a space, a floor plan can be produced quickly and easily.
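Once each wall has been measured, turning the scanned corners into a floor plan's area is simple geometry. A small sketch (the coordinates are invented; this is not iStaging's code) using the shoelace formula:

```python
# Compute a room's floor area from its corner coordinates
# (e.g. as measured by an ARKit wall scan), via the shoelace formula.

def floor_area(corners):
    """corners: list of (x, y) points in meters, in order around the room.
    Works for any simple (non-self-intersecting) polygon."""
    n = len(corners)
    total = 0.0
    for i in range(n):
        x1, y1 = corners[i]
        x2, y2 = corners[(i + 1) % n]  # wrap around to close the polygon
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0
```

For example, a 4 m by 3 m rectangular room measured at its four corners yields 12 square meters, and the same formula handles L-shaped or irregular rooms without modification.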
I’ve interviewed nine investors who have provided their insights on where the VR industry has come, as well as the risks and opportunities that exist in 2018 and beyond. We’ve asked them what opportunities are available in the space — and what tips they have for startups.
Augmented reality (AR) hasn’t truly permeated the mainstream consciousness yet, but the technology is swiftly being adopted by global industries. It’ll soon be unsurprising to find a pair of AR glasses strapped to a helmet sitting on the heads of service workers, and RealWear, a company at the forefront on developing these headsets, thinks it’s on the edge of something big.
…
VOICE ACTIVATION
What’s most impressive about the RealWear HMT-1Z1 is how you control it. There are no touch-sensitive gestures you need to learn — it’s all managed with voice, and better yet, there’s no need for a hotword like “Hey Google.” The headset listens for certain commands. For example, from the home screen just say “show my files” to see files downloaded to the device, and you can go back to the home screen by saying “navigate home.” When you’re looking at documents — like schematics — you can say “zoom in” or “zoom out” to change focus. It worked almost flawlessly, even in a noisy environment like the AWE show floor.
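Under the hood, this style of control reduces to dispatching recognized phrases to handlers. A hypothetical sketch, using the command names mentioned in the review but with entirely invented handler logic:

```python
# Dispatch recognized voice phrases to UI actions. Each handler
# returns a new state dict rather than mutating the old one.

COMMANDS = {
    "show my files": lambda state: dict(state, screen="files"),
    "navigate home": lambda state: dict(state, screen="home"),
    "zoom in": lambda state: dict(state, zoom=state["zoom"] + 1),
    "zoom out": lambda state: dict(state, zoom=max(1, state["zoom"] - 1)),
}

def handle(transcript, state):
    """Dispatch a recognized phrase; unknown phrases leave state unchanged."""
    action = COMMANDS.get(transcript.lower().strip())
    return action(state) if action else state
```

Because the command set is small and fixed, the recognizer only has to distinguish a handful of phrases, which is part of why this approach holds up on a noisy show floor where open-ended dictation would struggle.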
David Scowsill’s experience in the aviation industry spans over 30 years. He has worked for British Airways, American Airlines, Easy Jet, Manchester Airport, and most recently the World Travel and Tourism Council, giving him a unique perspective on how Augmented and Virtual Reality (AVR) can impact the aviation industry.
These technologies have the power to transform the entire aviation industry, providing benefits to companies and consumers. From check-in, baggage drop, ramp operations and maintenance, to pilots and flight attendants, AVR can accelerate training, improve safety, and increase efficiency.
London-based design studio Marshmallow Laser Feast is using VR to let us reconnect with nature. With headsets, you can see a forest through the eyes of different animals and experience the sensations they feel. Creative Director Ersinhan Ersin took the stage at TNW Conference last week to show us how and why they created the project, titled In the Eyes of the Animal.
Have you already taken a side when it comes to XR wearables? Whether you prefer AR glasses or VR headsets likely depends on the application you need. But wouldn’t it be great to have a device that could perform as both? As XR tech advances, we think crossovers will start popping up around the world.
A Beijing startup called AntVR recently rocketed past its Kickstarter goal for an AR/VR visor. Their product, the Mix, uses tinted lenses to toggle between real world overlay and full immersion. It’s an exciting prospect. But rather than digging into the tech (or the controversy surrounding their name, their marketing, and a certain Marvel character) we’re looking at what this means for how XR devices are developed and sold.
Google is bringing AR tech to its Expeditions app with a new update going live today. Last year, the company introduced its Google Expeditions AR Pioneer Program, which brought the app into classrooms across the country; with this launch, the functionality is available to all.
Expeditions will have more than 100 AR tours in addition to the 800 VR tours already available. Examples include experiences that let users explore Leonardo Da Vinci’s inventions and ones that let you interact with the human skeletal system.
At four recent VR conferences and events there was a palpable sense that despite new home VR devices getting the majority of marketing and media attention this year, the immediate promise and momentum is in the location-based VR (LBVR) attractions industry. The VR Arcade Conference (April 29th and 30th), VRLA (May 4th and 5th), the Digital Entertainment Group’s May meeting (May 1st), and FoIL (Future of Immersive Leisure, May 16th and 17th) all highlighted a topic that suddenly no one can stop talking about: LBVR. With hungry landlords giving great deals for empty retail locations, VRcades, which are inexpensive to open (like Internet cafes), are popping up all over the country. As a result, VRcade royalties for developers are on the rise, so developers are shifting their attention to shorter experiences optimized for LBVR, which is much less expensive than building a VR app for the home.
Below are some excerpted slides from her presentation…
Also see:
20 important takeaways for learning world from Mary Meeker’s brilliant tech trends – from donaldclarkplanb.blogspot.com by Donald Clark
Excerpt:
Mary Meeker’s slide deck has a reputation of being the Delphic Oracle of tech. But, at 294 slides, it’s a lot to take in. Don’t worry, I’ve been through them all. It has tons of economic stuff that is of marginal interest to education and training, but there’s plenty to get our teeth into. We’re not immune to tech trends; indeed we tend to follow in lock-step, just a bit later than everyone else. Among the data are lots of fascinating insights that point the way forward in terms of what we’re likely to be doing over the next decade. So here’s a really quick, top-end summary for folk in the learning game.
“Educational content usage online is ramping fast” with over 1 billion daily educational videos watched. There is evidence that use of the Internet for informal and formal learning is taking off.
Google’s virtual assistant can now make phone calls on your behalf to schedule appointments, make reservations in restaurants and get holiday hours.
The robotic assistant uses a very natural speech pattern that includes hesitations and affirmations such as “er” and “mmm-hmm” so that it is extremely difficult to distinguish from an actual human phone call.
The unsettling feature, which will be available to the public later this year, is enabled by a technology called Google Duplex, which can carry out “real world” tasks on the phone, without the other person realising they are talking to a machine. The assistant refers to the person’s calendar to find a suitable time slot and then notifies the user when an appointment is scheduled.
About a dozen Google employees reportedly left the company over its insistence on developing AI for the US military through a program called Project Maven. Meanwhile 4,000 others signed a petition demanding the company stop.
It looks like there’s some internal confusion over whether the company’s “Don’t Be Evil” motto covers making machine learning systems to aid warfare.
For months, a growing faction of Google employees has tried to force the company to drop out of a controversial military program called Project Maven. More than 4,000 employees, including dozens of senior engineers, have signed a petition asking Google to cancel the contract. Last week, Gizmodo reported that a dozen employees resigned over the project. “There are a bunch more waiting for job offers (like me) before we do so,” one engineer says. On Friday, employees communicating through an internal mailing list discussed refusing to interview job candidates in order to slow the project’s progress.
Other tech giants have recently secured high-profile contracts to build technology for defense, military, and intelligence agencies. In March, Amazon expanded its newly launched “Secret Region” cloud services supporting top-secret work for the Department of Defense. The same week that news broke of the Google resignations, Bloomberg reported that Microsoft locked down a deal with intelligence agencies. But there’s little sign of the same kind of rebellion among Amazon and Microsoft workers.
SEATTLE (AP) – The American Civil Liberties Union and other privacy advocates are asking Amazon to stop marketing a powerful facial recognition tool to police, saying law enforcement agencies could use the technology to “easily build a system to automate the identification and tracking of anyone.”
The tool, called Rekognition, is already being used by at least one agency – the Washington County Sheriff’s Office in Oregon – to check photographs of unidentified suspects against a database of mug shots from the county jail, which is a common use of such technology around the country.
From DSC: Google’s C-Suite — as well as the C-Suites at Microsoft, Amazon, and other companies — needs to be very careful these days, as they could end up losing the support/patronage of a lot of people — including more of their own employees. It’s not an easy task to know how best to build and use technologies in order to make the world a better place…to create a dream vs. a nightmare for our future. But just because we can build something, doesn’t mean we should.
What is conversational commerce? Why is it such a big opportunity? How does it work? What does the future look like? How can I get started? These are the questions I’m going to answer for you right now.
…
The guide covers:
An introduction to conversational commerce.
Why conversational commerce is such a big opportunity.
Complete breakdown of how conversational commerce works.
Extensive examples of conversational commerce using chatbots and voicebots.
How artificial intelligence impacts conversational commerce.
What the future of conversational commerce will look like.
Definition: Conversational commerce is an automated technology, powered by rules and sometimes artificial intelligence, that enables online shoppers and brands to interact with one another via chat and voice interfaces.
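The "powered by rules" half of that definition can be illustrated in a few lines of Python. This keyword-matching shopping bot is a deliberately minimal, invented example (the products, order number, and replies are made up; real systems layer NLP and AI on top of such rules):

```python
# Minimal rule-based conversational commerce bot: match keywords
# in the shopper's message to a canned intent and reply.

RULES = [
    ({"order", "buy", "purchase"}, "Sure - what would you like to order?"),
    ({"track", "shipping", "delivery"}, "Your order #1234 arrives Tuesday."),
    ({"return", "refund"}, "I can start a return. Which item?"),
]

FALLBACK = "Sorry, I didn't catch that. Try 'order', 'track', or 'return'."

def reply(message):
    """Return the response for the first rule whose keywords appear
    in the message; fall back to a help prompt otherwise."""
    words = set(message.lower().split())
    for keywords, response in RULES:
        if words & keywords:
            return response
    return FALLBACK
```

The AI-powered variant described in the guide replaces the keyword sets with an intent classifier, but the interaction loop, mapping a shopper's utterance to an action and a reply, stays the same.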
Artificial intelligence (AI) stands out as a transformational technology of our digital age—and its practical application throughout the economy is growing apace. For this briefing, Notes from the AI frontier: Insights from hundreds of use cases (PDF–446KB), we mapped both traditional analytics and newer “deep learning” techniques and the problems they can solve to more than 400 specific use cases in companies and organizations. Drawing on McKinsey Global Institute research and the applied experience with AI of McKinsey Analytics, we assess both the practical applications and the economic potential of advanced AI techniques across industries and business functions. Our findings highlight the substantial potential of applying deep learning techniques to use cases across the economy, but we also see some continuing limitations and obstacles—along with future opportunities as the technologies continue their advance. Ultimately, the value of AI is not to be found in the models themselves, but in companies’ abilities to harness them.
It is important to highlight that, even as we see economic potential in the use of AI techniques, the use of data must always take into account concerns including data security, privacy, and potential issues of bias.
AI for Good — from re-work.co by Ali Shah, Head of Emerging Technology and Strategic Direction – BBC
Excerpt:
What AI for good is really trying to ask is how we might develop and apply AI so that it makes a positive difference to society. Since the material question is about the change in society we would like to see, then we must first define the change we are hoping for before we can judge how AI might help. There are many areas of society that we might choose to consider, but I will focus on two interrelated issues.
Microsoft just demonstrated a meeting room of the future at the company’s Build developer conference.
…
It all starts with a 360-degree camera and microphone array that can detect anyone in a meeting room, greet them, and even transcribe exactly what they say in a meeting regardless of language.
…
Microsoft takes the meeting room scenario even further, though. The company is using its artificial intelligence tools to then act on what meeting participants say.
From DSC: Whoa! Many things to think about here. Consider the possibilities for global/blended/online-based learning (including MOOCs) with technologies associated with translation, transcription, and identification.
But as with any new technology, there are inherent risks we should acknowledge, anticipate, and deal with as soon as possible. If we do so, these technologies are likely to continue to thrive.
…
As wonderful as AR is and will continue to be, there are some serious privacy and security pitfalls, including dangers to physical safety, that as an industry we need to collectively avoid. There are also ongoing threats from cyber criminals and nation states bent on political chaos and worse — to say nothing of teenagers who can be easily distracted and fail to exercise judgement — all creating virtual landmines that could slow or even derail the success of AR. We love AR, and that’s why we’re calling out these issues now to raise awareness.
Microsoft Remote Assist — Collaborate in mixed reality to solve problems faster
With Microsoft Remote Assist we set out to create a HoloLens app that would help our customers collaborate remotely with heads-up, hands-free video calling, image sharing, and mixed-reality annotations. During the design process, we spent a lot of time with Firstline Workers. We asked ourselves, “How can we help Firstline Workers share what they see with an expert while staying hands-on to solve problems and complete tasks together, faster?” It was important to us that Firstline Workers are able to reach experts on whatever device they are using at the time, including PCs, phones, or tablets.
Microsoft Layout — Design spaces in context with mixed reality
With Microsoft Layout our goal was to build an app that would help people use HoloLens to bring designs from concept to completion using some of the superpowers mixed reality makes possible. With Microsoft Layout customers can import 3-D models to easily create and edit room layouts in real-world scale. Further, you can experience designs as high-quality holograms in physical space or in virtual reality and share and edit with stakeholders in real time.
From DSC: Those involved with creating/enhancing learning spaces may want to experiment with Microsoft Layout.
The new updates allow for collaborative AR experiences, such as playing multiplayer games or painting an AR community mural, using a capability called Cloud Anchors.
“You can see what you can’t imagine,” said Aaron Herridge, a graduate student in Creighton’s medical physics master’s program and a RaD Lab intern who is helping develop the lab’s virtual reality program. “It’s an otherworldly experience,” Herridge says. “But that’s the great plus of virtual reality. It can take you places that you couldn’t possibly go in real life. And in physics, we always say that if you can’t visualize it, you can’t do the math. It’s going to be a huge educational leap.”
“We’re always looking for ways to help students get the real feeling for astronomy,” Gabel said. “Visualizing space from another planet, like Mars, or from Earth’s moon, is a unique experience that goes beyond pencil and paper or a two-dimensional photograph in a textbook.”
BAE created a guided step-by-step training solution for HoloLens to teach workers how to assemble a green energy bus battery.
From DSC: How long before items that need some assembling come with such experiences/training-related resources?
VR and AR: The Ethical Challenges Ahead — from er.educause.edu by Emory Craig and Maya Georgieva
Immersive technologies will raise new ethical challenges, from issues of access, privacy, consent, and harassment to future scenarios we are only now beginning to imagine.
Excerpt:
As immersive technologies become ever more realistic with graphics, haptic feedback, and social interactions that closely align with our natural experience, we foresee the ethical debates intensifying. What happens when the boundaries between the virtual and physical world are blurred? Will VR be a tool for escapism, violence, and propaganda? Or will it be used for social good, to foster empathy, and as a powerful new medium for learning?
Augmented reality might not be able to cure cancer (yet), but when combined with a machine learning algorithm, it can help doctors diagnose the disease. Researchers at Google have developed an augmented reality microscope (ARM) that takes real-time data from a neural network trained to detect cancerous cells and displays it in the field of view of the pathologist viewing the images.
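The loop the article describes — grab a frame from the microscope camera, run it through a trained network, and paint the result back into the pathologist's field of view — can be sketched roughly as follows. This is purely illustrative: the class and function names here are stand-ins, not Google's actual ARM code, and the "classifier" is a placeholder for a real trained model.

```python
# Illustrative sketch of a real-time inference-and-overlay loop, as described
# above. TumorClassifier and overlay() are hypothetical names, not a real API.
import numpy as np

class TumorClassifier:
    """Stand-in for a trained convolutional network."""
    def predict_heatmap(self, frame: np.ndarray) -> np.ndarray:
        # A real model would return per-patch cancer probabilities in [0, 1];
        # here we simply return zeros of the same spatial shape.
        return np.zeros(frame.shape[:2], dtype=np.float32)

def overlay(frame: np.ndarray, heatmap: np.ndarray, alpha: float = 0.4) -> np.ndarray:
    """Blend the probability heatmap (shown as green) onto the camera frame."""
    out = frame.astype(np.float32).copy()
    out[..., 1] = (1 - alpha) * out[..., 1] + alpha * 255.0 * heatmap
    return out.clip(0, 255).astype(np.uint8)

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # one camera frame
model = TumorClassifier()
augmented = overlay(frame, model.predict_heatmap(frame))
```

In the real system this loop would run continuously, so the highlighted regions track the slide as the pathologist pans and zooms.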
Click on the image to get a larger image in a PDF file format.
From DSC: So regardless of what was being displayed on any given screen at the time, once a learner was invited to use their device to share information, a graphical layer would appear on the learner’s mobile device — as well as on the screens themselves — showing the code to enter in order to wirelessly share content to a particular screen. (On the screens, the image currently being projected would be pulled back to roughly 25% opacity so the code would “pop” visually.) This could be extra helpful when you have multiple screens in a room.
For folks at Microsoft: I could have said Mixed Reality here as well.
From DSC:
This application looks to be very well done and thought out! Wow!
Check out the video entitled “Interactive Ink – Enables digital handwriting” — and you may also wonder whether this could be a great medium/method for “writing things down” for better information processing in our minds, while also producing digital work for easier distribution and sharing!
Wow! Talk about solid user experience design and interface design! Nicely done.
Below is an excerpt of the information from Bella Pietsch from anthonyBarnum Public Relations
Imagine a world where users interact with their digital devices seamlessly, and don’t suffer from lag and delayed response time. I work with MyScript, a company whose Interactive Ink tech creates that world of seamless handwritten interactivity by combining the flexibility of pen and paper with the power and productivity of digital processing.
According to a recent forecast, the global handwriting recognition market is valued at a trillion-plus dollars and is expected to grow at an almost 16 percent compound annual growth rate by 2025. To add additional context, the new affordable iPad with stylus support was just released, allowing users to work with the $99 Apple Pencil, which was previously only supported by the iPad Pro.
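To make the quoted growth rate concrete: compound annual growth means the value is multiplied by (1 + rate) each year. A quick sketch (the starting figure and horizon are illustrative, not taken from the forecast):

```python
# Compound annual growth: value_n = value_0 * (1 + rate) ** years.
# Figures below are illustrative, not from the cited forecast.
def project(value0: float, rate: float, years: int) -> float:
    return value0 * (1 + rate) ** years

# A market growing at 16% per year nearly triples in 7 years:
print(round(project(1.0, 0.16, 7), 2))  # ≈ 2.83
```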
Check out the demo of Interactive Ink using an Apple Pencil, Microsoft Surface Pen, Samsung S Pen or Google Pixelbook Pen here.
Interactive Ink’s proficiencies are the future of writing and equating. Developed by MyScript Labs, Interactive Ink is a form of digital ink technology which allows ink editing via simple gestures and provides device reflow flexibility. Interactive Ink relies on real-time predictive handwriting recognition, driven by artificial intelligence and neural network architectures.
Here’s something every tech company agrees on: the world needs more AI engineers. Microsoft is the latest firm to try to answer this demand, and this week, it launched a new course on its tech accreditation scheme (known as the Microsoft Professional Program) dedicated to artificial intelligence.
The course has 10 modules, each taking between eight and 16 hours to complete online. They cover a range of sub-disciplines, including computer vision, data analysis, speech recognition, and natural language processing. Interestingly, there’s also an ethics course (a topic Microsoft is paying close attention to as it pivots to focus on AI) as well as a module on machine learning in Azure, the company’s cloud platform.
As we watch major tech platforms evolve over time, it’s clear that companies like Facebook, Apple, Google and Amazon (among others) have created businesses that are having a huge impact on humanity — sometimes positive and other times not so much.
That suggests that these platforms have to understand how people are using them and when people are trying to manipulate them or use them for nefarious purposes — or when the companies themselves are. We can apply that same responsibility filter to individual technologies like artificial intelligence and indeed any advanced technologies and the impact they could possibly have on society over time.
…
We can be sure that Twitter’s creators never imagined a world where bots would be launched to influence an election when they created the company more than a decade ago. Over time though, it becomes crystal clear that Twitter, and indeed all large platforms, can be used for a variety of motivations, and the platforms have to react when they think there are certain parties who are using their networks to manipulate parts of the populace.
But it’s up to the companies who are developing the tech to recognize the responsibility that comes with great economic success, or simply with the impact that whatever they are creating could have on society.
5% picked tech when asked which industry had the most power and influence, well behind the U.S. government, Wall Street and Hollywood.
Respondents were much more likely to say sexual harassment was a major issue in Hollywood (49%) and government (35%) than in Silicon Valley (17%).
It is difficult for Americans to escape the technology industry’s influence in everyday life. Facebook Inc. reports that more than 184 million people in the United States log on to the social network daily, or roughly 56 percent of the population. According to the Pew Research Center, nearly three-quarters (73 percent) of all Americans and 94 percent of Americans ages 18-24 use YouTube. Amazon.com Inc.’s market value is now nearly three times that of Walmart Inc.
But when asked which geographic center holds the most power and influence in America, respondents in a recent Morning Consult survey ranked the tech industry in Silicon Valley far behind politics and government in Washington, finance on Wall Street and the entertainment industry in Hollywood.
The path to opportunity is changing
The short shelf life of skills and a tightening labor market are giving rise to a multitude of skill gaps. Businesses are fighting to stay ahead of the curve, trying to hold onto their best talent and struggling to fill key positions. Individuals are conscious of staying relevant in the age of automation.
Enter the talent development function.
These organizational leaders create learning opportunities to enable employee growth and achievement. They have the ability to guide their organizations to success in tomorrow’s labor market, but they can’t do it alone.
… Our research answers the talent developer’s most pressing questions:
* How are savvy talent development leaders adapting to the pace of change in today’s dynamic world of work?
* Why do employees demand learning and development resources, but don’t make the time to learn?
* How do executives think about learning and development?
* Are managers the missing link to successful learning programs?
From DSC: Even though this piece is a bit of a sales pitch for Lynda.com — a great service I might add — it’s still worth checking out. I say this because it brings up a very real trend that I’m trying to bring more awareness to — i.e., the pace of change has changed. Our society is not ready for this new, exponential pace of change. Technologies are impacting jobs and how we do our jobs, and will likely do so for the next several decades. Skills gaps are real and likely growing larger. Corporations need to do their part in helping higher education revise/develop curriculum and they need to offer funds to create new types of learning labs/environments. They need to offer more internships and opportunities to learn new skills.
Over 143 million workers in the U.S. have LinkedIn profiles; over 20,000 companies in the U.S. use LinkedIn to recruit; over 3 million jobs are posted on LinkedIn in the U.S. every month; and members can add over 50,000 skills to their profiles to showcase their professional brands. That gives us unique and valuable insight into U.S. workforce trends.
The LinkedIn Workforce Report is a monthly report on employment trends in the U.S. workforce, and this month’s report looks at our latest data through December 2017. It’s divided into two sections: a National section that provides insights into hiring, skills gaps, and migration trends across the country, and a City section that provides insights into localized employment trends in 20 of the largest U.S. metro areas: Atlanta, Austin, Boston, Chicago, Cleveland-Akron, Dallas-Ft. Worth, Denver, Detroit, Houston, Los Angeles, Miami-Ft. Lauderdale, Minneapolis-St. Paul, Nashville, New York City, Philadelphia, Phoenix, San Francisco Bay Area, Seattle, St. Louis, and Washington, D.C.