‘The Beginning of a Wave’: A.I. Tiptoes Into the Workplace — from nytimes.com by Steve Lohr

Excerpt:

There is no shortage of predictions about how artificial intelligence is going to reshape where, how and if people work in the future.

But the grand work-changing projects of A.I., like self-driving cars and humanoid robots, are not yet commercial products. Instead, a humbler version of the technology is making its presence felt in a less glamorous place: the back office.

New software is automating mundane office tasks in operations like accounting, billing, payments and customer service. The programs can scan documents, enter numbers into spreadsheets, check the accuracy of customer records and make payments with a few automated computer keystrokes.

The technology is still in its infancy, but it will get better, learning as it goes. So far, often in pilot projects focused on menial tasks, artificial intelligence is freeing workers from drudgery far more often than it is eliminating jobs.
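To make that concrete, here is a toy Python sketch of the kind of rule-based back-office automation the article describes: checking a customer invoice record and routing it for payment or human review. The file name and field names are hypothetical, purely for illustration.

```python
import csv

# Hypothetical rule: an invoice is payable only if its total matches
# the sum of its semicolon-separated line items.
def validate_record(record):
    line_total = sum(float(x) for x in record["line_items"].split(";"))
    return abs(line_total - float(record["invoice_total"])) < 0.01

# Read scanned-and-digitized invoices and route each one automatically.
with open("invoices.csv", newline="") as f:
    for record in csv.DictReader(f):
        if validate_record(record):
            print(f"PAY    {record['invoice_id']} for {record['invoice_total']}")
        else:
            print(f"REVIEW {record['invoice_id']}: totals do not match")
```

A production system would put OCR in front of this loop for the document-scanning step; the point is that each automated decision is simple and rule-like, which is why the back office is where this technology has landed first.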

AI for Virtual Medical Assistants – 4 Current Applications — from techemergence.com by Kumba Sennaar

Excerpt:

In an effort to reduce the administrative burden of medical transcription and clinical documentation, researchers are developing AI-driven virtual assistants for the healthcare industry.

This article sets out to answer the following questions:

  • What types of AI applications are emerging to improve management of administrative tasks, such as logging medical information and appointment notes, in the medical environment?
  • How is the healthcare market implementing these AI applications?

 

Amazon’s Facial Recognition Wrongly Identifies 28 Lawmakers, A.C.L.U. Says — from nytimes.com by Natasha Singer

Excerpt:

In the test, the Amazon technology incorrectly matched 28 members of Congress with people who had been arrested, amounting to a 5 percent error rate among legislators.

The test disproportionately misidentified African-American and Latino members of Congress as the people in mug shots.

“This test confirms that facial recognition is flawed, biased and dangerous,” said Jacob Snow, a technology and civil liberties lawyer with the A.C.L.U. of Northern California.

On Thursday afternoon, three of the misidentified legislators — Senator Edward J. Markey of Massachusetts, Representative Luis V. Gutiérrez of Illinois and Representative Mark DeSaulnier of California, all Democrats — followed up with a letter to Jeff Bezos, the chief executive of Amazon, saying there are “serious questions regarding whether Amazon should be selling its technology to law enforcement at this time.”


Campus Technology recently announced the recipients of the 2018 Campus Technology Impact Awards.


Categories include:

  • Teaching and Learning
  • Education Futurists
  • Student Systems & Services
  • Administration
  • IT Infrastructure & Systems

 

From DSC:
Having served as one of the judges for these competitions during the last several years, I really appreciate the level of innovation that’s been displayed by many of the submissions and the individuals/institutions behind them. 

Vanguard Projects: Expanding Teaching and Learning Horizons — from er.educause.edu by Malcolm Brown
Members of the higher education community provided a long list of projects that exemplify the six developments in educational technology from this year’s Horizon Report. A few of the projects are highlighted to convey a sense of the range and direction of current innovations in teaching and learning in higher education.

Excerpt:

This list summarizes the distribution across the six developments (as laid out in the preview):

  • Mixed reality: 31.4%
  • Makerspaces: 28.6%
  • Adaptive learning technology: 16.2%
  • Analytics technologies: 15.2%
  • Artificial intelligence: 4.8%
  • Robotics: 3.8%


Augmented and virtual reality mean business: Everything you need to know — from zdnet by Greg Nichols
An executive guide to the technology and market drivers behind the hype in AR, VR, and MR.

Excerpt:

Overhyped by some, drastically underestimated by others, few emerging technologies have generated the digital ink like virtual reality (VR), augmented reality (AR), and mixed reality (MR).  Still lumbering through the novelty phase and roller coaster-like hype cycles, the technologies are only just beginning to show signs of real world usefulness with a new generation of hardware and software applications aimed at the enterprise and at end users like you. On the line is what could grow to be a $108 billion AR/VR industry as soon as 2021. Here’s what you need to know.

 

The reason is that VR environments by nature demand a user’s full attention, which makes the technology poorly suited to real-life social interaction outside a digital world. AR, on the other hand, has the potential to act as an on-call co-pilot to everyday life, seamlessly integrating into daily real-world interactions. This will become increasingly true with the development of the AR Cloud.

The AR Cloud
Described by some as the world’s digital twin, the AR Cloud is essentially a digital copy of the real world that can be accessed by any user at any time.

For example, it won’t be long before whatever device I have on me at a given time (a smartphone or wearable, for example) will be equipped to tell me all I need to know about a building just by training a camera at it (GPS is operating as a poor-man’s AR Cloud at the moment).

What the internet is for textual information, the AR Cloud will be for the visible world. Whether it will be open source or controlled by a company like Google is a hotly contested issue.

 

Augmented reality will have a bigger impact on the market and our daily lives than virtual reality — and by a long shot. That’s the consensus of just about every informed commentator on the subject.


Mixed reality will transform learning (and Magic Leap joins act one) — from edsurge.com by Maya Georgieva

Excerpt:

Despite all the hype in recent years about the potential for virtual reality in education, an emerging technology known as mixed reality has far greater promise in and beyond the classroom.

Unlike experiences in virtual reality, mixed reality interacts with the real world that surrounds us. Digital objects become part of the real world. They’re not just digital overlays, but interact with us and the surrounding environment.

If all that sounds like science fiction, a much-hyped device promises some of those features later this year. The device is by a company called Magic Leap, and it uses a pair of goggles to project what the company calls a “lightfield” in front of the user’s face to make it look like digital elements are part of the real world. The expectation is that Magic Leap will bring digital objects to life in a much more vivid, dynamic and fluid way than other mixed-reality devices such as Microsoft’s HoloLens.


Now think about all the other things you wished you had learned this way and imagine a dynamic digital display that transforms your environment and even your living room or classroom into an immersive learning lab. It is learning within a highly dynamic and visual context infused with spatial audio cues reacting to your gaze, gestures, gait, voice and even your heartbeat, all referenced with your geo-location in the world. Unlike what happens with VR, where our brain is tricked into believing the world and the objects in it are real, MR recognizes and builds a map of your actual environment.


Also see:

Exploring the Potential for the Vive Focus in Education — from virtualiteach.com


Digital Twins Doing Real World Work — from stambol.com

Excerpt:

On the big screen it’s become commonplace to see a 3D rendering or holographic projection of an industrial floor plan or a mechanical schematic. Casual viewers might take for granted that the technology is science fiction and many years away from reality. But today we’re going to outline where these sophisticated virtual replicas – Digital Twins – are found in the real world, here and now. Essentially, we’re talking about a responsive simulated duplicate of a physical object or system. When we first wrote about Digital Twin technology, we mainly covered industrial applications and urban infrastructure like transit and sewers. However, the full scope of their presence is much broader, so now we’re going to break it up into categories.


Digital twin — from Wikipedia

Digital twin refers to a digital replica of physical assets (physical twin), processes and systems that can be used for various purposes.[1] The digital representation provides both the elements and the dynamics of how an Internet of Things device operates and lives throughout its life cycle.[2]

Digital twins integrate artificial intelligence, machine learning and software analytics with data to create living digital simulation models that update and change as their physical counterparts change. A digital twin continuously learns and updates itself from multiple sources to represent its near real-time status, working condition or position. This learning system learns from itself, using sensor data that conveys various aspects of its operating condition; from human experts, such as engineers with deep and relevant industry domain knowledge; from other similar machines; from other similar fleets of machines; and from the larger systems and environment of which it may be a part. A digital twin also integrates historical data from past machine usage to factor into its digital model.

In various industrial sectors, twins are being used to optimize the operation and maintenance of physical assets, systems and manufacturing processes.[3] They are a formative technology for the Industrial Internet of Things, where physical objects can live and interact with other machines and people virtually.[4]
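As a rough illustration of that continuous-update loop, here is a minimal Python sketch of a digital twin as state revised by incoming sensor readings. The class name, fields and maintenance thresholds are hypothetical, not any vendor's API.

```python
from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    """A toy digital twin mirroring one physical pump."""
    temperature_c: float = 20.0
    vibration_mm_s: float = 0.0
    history: list = field(default_factory=list)

    def ingest(self, reading: dict) -> None:
        # Revise near real-time state from one sensor message.
        self.temperature_c = reading["temperature_c"]
        self.vibration_mm_s = reading["vibration_mm_s"]
        self.history.append(reading)  # keep historical data for the model

    def needs_maintenance(self) -> bool:
        # Stand-in for rules a real twin would learn from fleet data.
        return self.temperature_c > 80.0 or self.vibration_mm_s > 7.1

twin = PumpTwin()
twin.ingest({"temperature_c": 85.2, "vibration_mm_s": 3.4})
print(twin.needs_maintenance())  # True: temperature is out of range
```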


Disney to debut its first VR short next month — from techcrunch.com by Sarah Wells

Excerpt:

Walt Disney Animation Studios is set to debut its first VR short film, Cycles, this August in Vancouver, the Association for Computing Machinery announced today. The plan is for it to be a headliner at the ACM’s computer graphics conference (SIGGRAPH), joining other forms of VR, AR and MR entertainment in the conference’s designated Immersive Pavilion.

This film is a first for both Disney and its director, Jeff Gipson, who joined the animation team in 2013 to work as a lighting artist on films like Frozen, Zootopia and Moana. The objective of this film, Gipson said in the statement released by ACM, is to inspire a deep emotional connection with the story.

“We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story,” said Gipson.


Schools can now get facial recognition tech for free. Should they? — from wired.com by Issie Lapowsky

Excerpt:

Over the past two years, RealNetworks has developed a facial recognition tool that it hopes will help schools more accurately monitor who gets past their front doors. Today, the company launched a website where school administrators can download the tool, called SAFR, for free and integrate it with their own camera systems. So far, one school in Seattle, which [RealNetworks CEO Rob] Glaser’s kids attend, is testing the tool, and the state of Wyoming is designing a pilot program that could launch later this year. “We feel like we’re hitting something there can be a social consensus around: that using facial recognition technology to make schools safer is a good thing,” Glaser says.

 

From DSC:
Personally, I’m very uncomfortable with where facial recognition is going in some societies. What starts off being sold as helpful for this or that application can quickly be abused by a government to control its citizens. For example, look at what’s happening in China already these days!

The above article talks about these technologies being used in schools. Based upon history, I seriously question whether humankind can wisely handle the power of these types of technologies.

Here in the United States, I already sense a ton of cameras watching each of us all the time when we’re out in public spaces (such as when we are in grocery stores, or gas stations, or in restaurants or malls, etc.).  What’s the unspoken message behind those cameras?  What’s being stated by their very presence around us?

No. I don’t like the idea of facial recognition being in schools. I’m not comfortable with this direction. I can see the counterargument — that this tech could help reduce school shootings. But I think that’s a weak argument, as someone unbalanced enough to carry out a school shooting likely won’t be swayed/deterred by being on camera. In fact, one could argue that in some cases, being on the national news — with their face plastered all over the nation — might even add fuel to the fire.


Glaser, for one, welcomes federal oversight of this space. He says it’s precisely because of his views on privacy that he wants to be part of what is bound to be a long conversation about the ethical deployment of facial recognition. “This isn’t just sci-fi. This is becoming something we, as a society, have to talk about,” he says. “That means the people who care about these issues need to get involved, not just as hand-wringers but as people trying to provide solutions. If the only people who are providing facial recognition are people who don’t give a &*&% about privacy, that’s bad.”


Inside China’s Dystopian Dreams: A.I., Shame and Lots of Cameras — from nytimes.com
Per this week’s Next e-newsletter from edsurge.com:

Take the University of San Francisco, which deploys facial recognition software in its dormitories. Students still use their I.D. card to swipe in, according to EdScoop, but the face of every person who enters a dorm is scanned and run through a database, which alerts the dorm attendant when an unknown person is detected. Online students are not immune: the technology is also used in many proctoring tools for virtual classes.

The tech raises plenty of tough issues. Facial-recognition systems have been shown to misidentify young people, people of color and women more often than white men. And then there are the privacy risks: “All collected data is at risk of breach or misuse by external and internal actors, and there are many examples of misuse of law enforcement data in other contexts,” a white paper by the Electronic Frontier Foundation reads.

It’s unclear whether such facial-scanners will become common at the gates of campus. But now that cost is no longer much of an issue for what used to be an idea found only in science fiction, it’s time to weigh the pros and cons of what such a system really means in practice.


Also see:

  • As facial recognition technology becomes pervasive, Microsoft (yes, Microsoft) issues a call for regulation — from techcrunch.com by Jonathan Shieber
    Excerpt:
    Technology companies have a privacy problem. They’re terribly good at invading ours and terribly negligent at protecting their own. And with the push by technologists to map, identify and index our physical as well as virtual presence with biometrics like face and fingerprint scanning, the increasing digital surveillance of our physical world is causing some of the companies that stand to benefit the most to call out to government to provide some guidelines on how they can use the incredibly powerful tools they’ve created. That’s what’s behind today’s call from Microsoft President Brad Smith for government to start thinking about how to oversee the facial recognition technology that’s now at the disposal of companies like Microsoft, Google, Apple and government security and surveillance services across the country and around the world.


Guiding faculty into immersive environments — from campustechnology.com by David Raths
What’s the best way to get faculty to engage with emerging technologies and incorporate new learning spaces into their teaching? Five institutions share their experiences.


Excerpt:

One of the biggest hurdles for universities has been the high cost of VR-enabled computers and headsets, and some executives say prices must continue to drop before we’ll see more widespread usage. But John Bowditch, director of the Game Research and Immersive Design Lab at Ohio University’s Scripps College of Communication, is already seeing promising developments on that front as he prepares to open a new 20-seat VR classroom. “Probably the best thing about VR in 2018 is that it is a lot more affordable now and that democratizes it,” he said. “We purchased a VR helmet 13 years ago, and it was $12,000 just for the headset. The machine that ran it cost about $20,000. That would be a nonstarter beyond purchasing just one or two. Today, you can get a VR-enabled laptop and headset for under $2,000. That makes it much easier to think about integrating it into classes.”


Colleges and universities face several hurdles in getting faculty to incorporate virtual reality or immersive experiences in their courses. For one, instructional designers, instructional technologists and directors of teaching and learning centers may not have access to these tools yet, and the budgets aren’t always there to get the labs off the ground, noted Daniel Christian, instructional services director at Western Michigan University‘s Cooley Law School. “Many faculty members’ job plates are already jam-packed — allowing little time to even look at emerging technologies,” he said. “Even if they wanted to experiment with such technologies and potential learning experiences, they don’t have the time to do so. Tight budgets are impacting this situation even further.”


Computing in the Camera — from blog.torch3d.com by Paul Reynolds
Mobile AR, with its ubiquitous camera, is set to transform what and how human experience designers create.

One of the points Allison [Wood, CEO, Camera IQ] made repeatedly on that call (and in this wonderful blog post of the same time period) was that the camera is going to be at the center of computing going forward, an indispensable element. Spatial computing could not exist without it. Simple, obvious, straightforward, but not earth-shaking. We all heard what she had to say, but I don’t think any of us really understood just how profound or prophetic that statement would turn out to be.

 

“[T]he camera will bring the internet and the real world into a single time and space.”

— Allison Wood, CEO, Camera IQ


The Camera As Platform — from shift.newco.co by Allison Wood
When the operating system moves to the viewfinder, the world will literally change

“Every day two billion people carry around an optical data input device — the smartphone Camera — connected to supercomputers and informed by massive amounts of data that can have nearly limitless context, position, recognition and direction to accomplish tasks.”

– Jacob Mullins, Shasta Ventures


The State Of The ARt At AWE 18 — from forbes.com by Charlie Fink

Excerpt:

The bigger story, however, is how fast the enterprise segment is growing as applications as straightforward as schematics on a head-mounted monocular microdisplay are transforming manufacturing, assembly, and warehousing. Use cases abounded.

After traveling the country and most recently to Europe, I’ve now experienced almost every major VR/AR/MR/XR related conference out there. AWE’s exhibit area was by far the largest display of VR and AR companies to date (with the exception of CES).

 

AR is being used to identify features and parts within cars


Student Learning and Virtual Reality: The Embodied Experience — from er.educause.edu by Jaime Hannans, Jill Leafstedt and Talya Drescher

Excerpts:

Specifically, we explored the potential for how virtual reality can help create a more empathetic nurse, which, we hypothesize, will lead to increased development of nursing students’ knowledge, skills, and attitudes. We aim to integrate these virtual experiences into early program coursework, with the intent of changing nursing behavior by providing a deeper understanding of the patient’s perspective during clinical interactions.

In addition to these compelling student reflections and the nearly immediate change in reporting practice, survey findings show that students unanimously felt that this type of patient-perspective VR experience should be integrated and become a staple of the nursing curriculum. Seeing, hearing, and feeling these moments results in significant and memorable learning experiences compared to traditional classroom learning alone. The potential that this type of immersive experience can have in the field of nursing and beyond is only limited by the imagination and creation of other virtual experiences to explore. We look forward to continued exploration of the impact of VR on student learning and to establishing ongoing partnerships with developers.

 

Also see:

Reimagining the Higher Education Ecosystem — from edu2030.agorize.com
How might we empower people to design their own learning journeys so they can lead purposeful and economically stable lives?

Excerpts:

The problem
Technology is rapidly transforming the way we live, learn, and work. Entirely new jobs are emerging as others are lost to automation. People are living longer, yet switching jobs more often. These dramatic shifts call for a reimagining of the way we prepare for work and life—specifically, how we learn new skills and adapt to a changing economic landscape.

The changes ahead are likely to hurt most those who can least afford to manage them: low-income and first generation learners already ill-served by our existing postsecondary education system. Our current system stifles economic mobility and widens income and achievement gaps; we must act now to ensure that we have an educational ecosystem flexible and fair enough to help all people live purposeful and economically stable lives. And if we are to design solutions proportionate to this problem, new technologies must be called on to scale approaches that reach the millions of vulnerable people across the country.

 

The challenge
How might we empower people to design their own learning journeys so they can lead purposeful and economically stable lives?

The Challenge—Reimagining the Higher Education Ecosystem—seeks bold ideas for how our postsecondary education system could be reimagined to foster equity and encourage learner agency and resilience. We seek specific pilots to move us toward a future in which all learners can achieve economic stability and lead purposeful lives. This Challenge invites participants to articulate a vision and then design pilot projects for a future ecosystem that has the following characteristics:

Expands access: The educational system must ensure that all people—including low-income learners who are disproportionately underserved by the current higher education system—can leverage education to live meaningful and economically stable lives.

Draws on a broad postsecondary ecosystem: While college and universities play a vital role in educating students, there is a much larger ecosystem in which students learn. This ecosystem includes non-traditional “classes” or alternative learning providers, such as MOOCs, bootcamps, and online courses as well as on-the-job training and informal learning. Our future learning system must value the learning that happens in many different environments and enable seamless transitions between learning, work, and life.

 

From DSC:
This is where I could see a vision similar to Learning from the Living [Class] Room come into play. It would provide a highly affordable, accessible platform, that would offer more choice, and more control to learners of all ages. It would be available 24×7 and would be a platform that supports lifelong learning. It would combine a variety of AI-enabled functionalities with human expertise, teaching, training, motivation, and creativity.

It could be that what comes out of this challenge will lay the groundwork for a future, massive new learning platform.

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

Also see:


AR & VR — Education’s marvelous revolution — from verizoninternet.com

Excerpt:

Virtual Reality (VR) and Augmented Reality (AR), often used in video games and mobile apps, are transforming the world—and with that, the way we learn. These technologies have the capability to change students’ outlook on the world and the way they engage with it. After all, why would you learn about outer space from a classroom when you could learn about it from the International Space Station?

As AR and VR technology becomes more widely available and user-friendly, interest and market value have spiked across the world. In 2017, interest in VR hardware such as PlayStation VR, HTC Vive, Oculus Rift, and Samsung Gear VR spiked around the globe. In China in particular, AR and VR are booming.

With AR and VR, geographical distances are no longer an obstacle. Interactive experiences, tutorial videos, and learning apps work just as well, whether the teacher and student are in the same room, or across the world from each other.


Study shows VR increases learning — from Donald Clark

 Excerpt:

I have argued that several conditions for good learning are likely to be enhanced by VR. First there’s increased attention, where the learner is literally held fast within the created environment and cannot be distracted by external stimuli. Second is experiential learning, where one has to ‘do’ something where that active component leads to higher retention. Third is emotion, the affective component in learning, which is better achieved where the power to induce empathy, excitement, calm and so on is easier. Fourth is context, where providing an albeit simulated context aids retention and recall. Fifth is transfer, where all of these conditions lead to greater transfer of knowledge and skills to the real world.


Example Use Cases of How to Use Virtual Reality (VR) for Training — from instavr.co

Some of the topics covered include:

  • Employee Onboarding (and Cross-Training)
  • Preparing for Rare or Unexpected Events
  • Employee Testing
  • Customer/Client Interaction Practice

 
8 of the Wildest Augmented Reality Glasses You Haven’t Seen Yet — from next.reality.news by Adario Strange


Computers that never forget a face — from Future Today Institute

Excerpts:

In August, U.S. Customs and Border Protection will roll out new technology that will scan the faces of drivers as they enter and leave the United States. For years, accomplishing that kind of surveillance through a car windshield has been difficult. But technology is quickly advancing. This system, activated by ambient light sensors, range finders and remote speedometers, uses smart cameras and AI-powered facial recognition technology to compare images in government files with people behind the wheel.

Biometric borders are just the beginning. Faceprints are quickly becoming our new fingerprints, and this technology is marching forward with haste. Faceprints are now so advanced that machine learning algorithms can recognize your unique musculatures and bone structures, capillary systems, and expressions using thousands of data points. All the features that make up a unique face are being scanned, captured and analyzed to accurately verify identities. New hairstyle? Plastic surgery? They don’t interfere with the technology’s accuracy.

Why you should care. Faceprints are already being used across China for secure payments. Soon, they will be used to customize and personalize your digital experiences. Our Future Today Institute modeling shows myriad near-future applications, including the ability to unlock your smart TV with your face. Retailers will use your face to personalize your in-store shopping experience. Auto manufacturers will start using faceprints to detect if drivers are under the influence of drugs or alcohol and prevent them from driving. It’s plausible that cars will soon detect if a driver is distracted and take the wheel using an auto-pilot feature. On a diet but live with others? Stash junk food in a drawer and program the lock to restrict your access. Faceprints will soon create opportunities for a wide range of sectors, including military, law enforcement, retail, manufacturing and security. But as with all technology, faceprints could lead to the loss of privacy and widespread surveillance.

It’s possible for both risk and opportunity to coexist. The point here is not alarmist hand-wringing, or pointless calls for cease-and-desist demands on the development and use of faceprint technology. Instead, it’s to acknowledge an important emerging trend––faceprints––and to think about the associated risks and opportunities for you and your organization well in advance. Approach biometric borders and faceprints with your (biometrically unique) eyes wide open.

Near-Futures Scenarios (2018 – 2028):

Optimistic: Faceprints make us safer, and they bring us back to physical offices and stores.

Pragmatic: As faceprint adoption grows, legal challenges mount. 
In April, a U.S. federal judge ruled that Facebook must confront a class-action lawsuit that alleges its faceprint technology violates Illinois state privacy laws. Last year, a U.S. federal judge allowed a class-action suit to go forward against Shutterfly, claiming the company violated the Illinois Biometric Information Privacy Act, which ensures companies receive written releases before collecting biometric data, including faces. Companies and device manufacturers, who are early developers but late to analyzing legal outcomes, are challenged to balance consumer privacy with new security benefits.

Catastrophic: Faceprints are used for widespread surveillance and authoritarian control.
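Mechanically, faceprint verification usually reduces to comparing numeric embeddings of faces against a distance threshold. The sketch below illustrates only that comparison step, with made-up four-dimensional vectors standing in for the high-dimensional embeddings a trained network would produce.

```python
import numpy as np

# Hypothetical embeddings; real systems use ~128-512 dimensions
# produced by a deep network trained on millions of faces.
enrolled_faceprint = np.array([0.11, -0.42, 0.73, 0.25])
camera_capture = np.array([0.09, -0.40, 0.75, 0.27])

def same_person(a, b, threshold=0.6):
    # Verify identity via Euclidean distance between embeddings.
    return float(np.linalg.norm(a - b)) < threshold

print(same_person(enrolled_faceprint, camera_capture))  # True: distance ~0.04
```

The threshold is where the policy questions live: loosening it raises false matches (the kind of error in the A.C.L.U. test above), while tightening it raises false rejections.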


How AI is helping sports teams scout star players — from nbcnews.com by Edd Gent
Professional baseball, basketball and hockey are among the sports now using AI to supplement traditional coaching and scouting.


Preparing students for workplace of the future  — from educationdive.com by Shalina Chatlani

Excerpt:

The workplace of the future will be marked by unprecedentedly advanced technologies, as well as a focus on incorporating artificial intelligence to drive higher levels of production with fewer resources. Employers and education stakeholders, noting the reality of this trend, are turning a reflective eye toward current students and questioning whether they will be workforce ready in the years to come.

This has become a significant concern for higher education executives, who find their business models could be disrupted as they fail to meet workforce demands. A 2018 Gallup-Northeastern University survey shows that of 3,297 U.S. citizens interviewed, only 22% with a bachelor’s degree said their education left them “well” or “very well prepared” to use AI in their jobs.

In his book “Robot-Proof: Higher Education in the Age of Artificial Intelligence,” Northeastern University President Joseph Aoun argued that for higher education to adapt to advanced technologies, it has to focus on lifelong learning, which he says prepares students for the future by fostering purposeful integration of technical literacies, such as coding and data literacy, with human literacies, such as creativity, ethics, cultural agility and entrepreneurship.

“When students combine these literacies with experiential components, they integrate their knowledge with real life settings, leading to deep learning,” Aoun told Forbes.


Amazon’s A.I. camera could help people with memory loss recognize old friends and family — from cnbc.com by Christina Farr

  • Amazon’s DeepLens is a smart camera that can recognize objects in front of it.
  • One software engineer, Sachin Solkhan, is trying to figure out how to use it to help people with memory loss.
  • Users would carry the camera to help them recognize people they know.


Microsoft acquired an AI startup that helps it take on Google Duplex — from qz.com by Dave Gershgorn

Excerpt:

We’re going to talk to our technology, and everyone else’s too. Google proved that earlier this month with a demonstration of artificial intelligence that can hop on the phone to book a restaurant reservation or appointment at the hair salon.

Now it’s just a matter of who can build that technology fastest. To reach that goal, Microsoft has acquired conversational AI startup Semantic Machines for an undisclosed amount. Founded in 2014, the startup’s goal was to build AI that can converse with humans through speech or text, with the ability to be trained to converse in any language or on any subject.


Researchers developed an AI to detect DeepFakes — from thenextweb.com by Tristan Greene

Excerpt:

A team of researchers from the State University of New York (SUNY) recently developed a method for detecting whether the people in a video are AI-generated. It looks like DeepFakes could meet its match.

What it means: Fear over whether computers will soon be able to generate videos that are indistinguishable from real footage may be much ado about nothing, at least with the currently available methods.

The SUNY team observed that the training method for creating AI that makes fake videos involves feeding it images – not video. This means that certain human physiological quirks – like breathing and blinking – don’t show up in computer-generated videos. So they decided to build an AI that uses computer vision to detect blinking in fake videos.
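The SUNY team's actual pipeline is more involved, but the core idea can be sketched simply: track a per-frame eye-openness signal (which a facial-landmark detector would supply; the values below are hypothetical) and flag clips that never show a blink.

```python
# Toy blink check: a genuine clip of a person should show the eye-openness
# signal dipping every few seconds; many generated videos never do.
def count_blinks(eye_openness, closed_threshold=0.2):
    blinks, eyes_were_open = 0, True
    for value in eye_openness:
        if eyes_were_open and value < closed_threshold:
            blinks += 1  # an open-to-closed transition counts as one blink
            eyes_were_open = False
        elif value >= closed_threshold:
            eyes_were_open = True
    return blinks

real_clip = [0.35, 0.34, 0.12, 0.08, 0.33, 0.36, 0.10, 0.34]  # two blinks
fake_clip = [0.35, 0.34, 0.33, 0.35, 0.34, 0.36, 0.35, 0.34]  # no blinks

for name, signal in [("real", real_clip), ("fake", fake_clip)]:
    verdict = "suspicious" if count_blinks(signal) == 0 else "plausible"
    print(f"{name}: {count_blinks(signal)} blink(s) -> {verdict}")
```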


Bringing It Down To Earth: Four Ways Pragmatic AI Is Being Used Today — from forbes.com by Carlos Melendez

Excerpt:

Without even knowing it, we are interacting with pragmatic AI day in and day out. It is used in the automated chatbots that answer our calls and questions and the customer service rep that texts with us on a retail site, providing a better and faster customer experience.

Below are four key categories of pragmatic AI and ways they are being applied today.

1. Speech Recognition And Natural Language Processing (NLP)
2. Predictive Analytics
3. Image Recognition And Computer Vision
4. Self-Driving Cars And Robots


Billable Hour ‘Makes No Sense’ in an AI World — from biglawbusiness.com by Helen Gunnarsson

Excerpt:

Artificial intelligence (AI) is transforming the practice of law, and “data is the new oil” of the legal industry, panelist Dennis Garcia said at a recent American Bar Association conference. Garcia is an assistant general counsel for Microsoft in Chicago. Robert Ambrogi, a Massachusetts lawyer and blogger who focuses on media, technology, and employment law, moderated the program. “The next generation of lawyers is going to have to understand how AI works” as part of the duty of competence, panelist Anthony E. Davis told the audience. Davis is a partner with Hinshaw & Culbertson LLP in New York.

Davis said AI will result in dramatic changes in law firms’ hiring and billing, among other things. The hourly billing model, he said, “makes no sense in a universe where what clients want is judgment.” Law firms should begin to concern themselves not with the degrees or law schools attended by candidates for employment but with whether they are “capable of developing judgment, have good emotional intelligence, and have a technology background so they can be useful” for long enough to make hiring them worthwhile, he said.


Deep Learning Tool Tops Dermatologists in Melanoma Detection — from healthitanalytics.com
A deep learning tool achieved greater accuracy than dermatologists when detecting melanoma in dermoscopic images.


Apple’s plans to bring AI to your phone — from wired.com by Tom Simonite

Excerpt:

HomeCourt is built on tools announced by Federighi last summer, when he launched Apple’s bid to become a preferred playground for AI-curious developers. Known as Core ML, those tools help developers who’ve trained machine learning algorithms deploy them on Apple’s mobile devices and PCs.

At Apple’s Worldwide Developer Conference on Monday, Federighi revealed the next phase of his plan to enliven the app store with AI. It’s a tool called Create ML that’s something like a set of training wheels for building machine learning models in the first place. In a demo, training an image-recognition algorithm to distinguish different flavors of ice cream was as easy as dragging and dropping a folder containing a few dozen images and waiting a few seconds. In a session for developers, Apple engineers suggested Create ML could teach software to detect whether online comments are happy or angry, or predict the quality of wine from characteristics such as acidity and sugar content. Developers can use Create ML now but can’t ship apps using the technology until Apple’s latest operating systems arrive later this year.
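Create ML itself is a Swift and Xcode tool, so the following Python/Keras sketch is only an analogue of the drag-and-drop workflow the excerpt describes: point a pretrained model at a folder of labeled images and train a small classifier head. The folder layout and class names are hypothetical.

```python
import tensorflow as tf

# Labeled images arranged one folder per class, e.g.
#   ice_cream/chocolate/*.jpg, ice_cream/vanilla/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "ice_cream", image_size=(224, 224), batch_size=16)
num_classes = len(train_ds.class_names)

# Reuse a pretrained feature extractor and train only a small head,
# which is why a few dozen images per class can be enough.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # pixels to [-1, 1]
    base,
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```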



© 2018 | Daniel Christian