5 influencers predict AI’s impact on business in 2019 — from martechadvisor.com by Christine Crandell

Excerpt:

With Artificial Intelligence (AI) already proving its worth to adopters, it’s not surprising that an increasing number of companies will implement and leverage AI in 2019. Now, it’s no longer a question of whether AI will take off. Instead, it’s a question of which companies will keep up. Here are five predictions from five influencers on the impact AI will have on businesses in 2019, writes Christine Crandell, President, New Business Strategies.

 

 

Should we be worried about computerized facial recognition? — from newyorker.com by David Owen
The technology could revolutionize policing, medicine, even agriculture—but its applications can easily be weaponized.

 

Facial-recognition technology is advancing faster than the people who worry about it have been able to think of ways to manage it. Indeed, in any number of fields the gap between what scientists are up to and what nonscientists understand about it is almost certainly greater now than it has been at any time since the Manhattan Project. 

 

From DSC:
This is why law schools, legislatures, and the federal government need to become much more responsive to emerging technologies. The pace of technological change has accelerated. But have the other important institutions of our society adapted to this new pace of change?

 

 

Andrew Ng sees an eternal springtime for AI — from zdnet.com by Tiernan Ray
Former Google Brain leader and Baidu chief scientist Andrew Ng lays out the steps companies should take to succeed with artificial intelligence, and explains why there’s unlikely to be another “AI winter” like in times past.

 

 

Google Lens now recognizes over 1 billion products — from venturebeat.com by Kyle Wiggers with thanks to Marie Conway for her tweet on this

Excerpt:

Google Lens, Google’s AI-powered analysis tool, can now recognize over 1 billion products from Google’s retail and price comparison portal, Google Shopping. That’s four times the number of objects Lens covered in October 2017, when it made its debut.

Aparna Chennapragada, vice president of Google Lens and augmented reality at Google, revealed the tidbit in a retrospective blog post about Google Lens’ milestones.

 

Amazon Customer Receives 1,700 Audio Files Of A Stranger Who Used Alexa — from npr.org by Sasha Ingber

Excerpt:

When an Amazon customer in Germany contacted the company to review his archived data, he wasn’t expecting to receive recordings of a stranger speaking in the privacy of a home.

The man requested to review his data in August under a European Union data protection law, according to a German trade magazine called c’t. Amazon sent him a download link to tracked searches on the website — and 1,700 audio recordings by Alexa that were generated by another person.

“I was very surprised about that because I don’t use Amazon Alexa, let alone have an Alexa-enabled device,” the customer, who was not named, told the magazine. “So I randomly listened to some of these audio files and could not recognize any of the voices.”

 

 

Why should anyone believe Facebook anymore? — from wired.com by Fred Vogelstein

Excerpt:

Just since the end of September, Facebook announced the biggest security breach in its history, affecting more than 30 million accounts. Meanwhile, investigations in November revealed that, among other things, the company had hired a Washington firm to spread its own brand of misinformation on other platforms, including borderline anti-Semitic stories about financier George Soros. Just two weeks ago, a cache of internal emails dating back to 2012 revealed that at times Facebook thought a lot more about how to make money off users’ data than about how to protect it.

Now, according to a New York Times investigation into Facebook’s data practices published Tuesday, long after Facebook said it had taken steps to protect user data from the kinds of leakages that made Cambridge Analytica possible, the company continued to sustain special, undisclosed data-sharing arrangements with more than 150 companies—some into this year. Unlike with Cambridge Analytica, the Times says, Facebook provided access to its users’ data knowingly and on a greater scale.

 

What has enabled them to deliver these apologies, year after year, was that these sycophantic monologues were always true enough to be believable. The Times’ story calls into question every one of those apologies—especially the ones issued this year.

There’s a simple takeaway from all this, and it’s not a pretty one: Facebook is either a mendacious, arrogant corporation in the mold of a 1980s-style Wall Street firm, or it is a company in much more disarray than it has been letting on. 

It’s hard to process this without finally realizing what it is that’s made us so angry with Silicon Valley, and Facebook in particular, in 2018: We feel lied to, like these companies are playing us, their users, for chumps, and they’re also laughing at us for being so naive.

 

 

Also related/see:

‘We’ve hit an inflection point’: Big Tech failed big-time in 2018 — from finance.yahoo.com by JP Mangalindan

Excerpt:

2018 will be remembered as the year the public’s big soft-hearted love affair with Big Tech came to a screeching halt.

For years, lawmakers and the public let massive companies like Facebook, Google, and Amazon run largely unchecked. Billions of people handed them their data — photos, locations, and other status-rich updates — with little scrutiny or question. Then came revelations around several high-profile data breaches from Facebook: a back-to-back series of rude awakenings that taught casual web-surfing, smartphone-toting citizens that uploading their data into the digital ether could have consequences. Google reignited the conversation around sexual harassment, spurring thousands of employees to walk out, while Facebook reminded some corners of the U.S. that racial bias, even in supposedly egalitarian Silicon Valley, remained alive and well. And Amazon courted well over 200 U.S. cities in its gaudy and protracted search for a second headquarters.

“I think 2018 was the year that people really called tech companies on the carpet about the way that they’ve been behaving conducting their business,” explained Susan Etlinger, an analyst at the San Francisco-based Altimeter Group. “We’ve hit an inflection point where people no longer feel comfortable with the ways businesses are conducting themselves. At the same time, we’re also at a point, historically, where there’s just so much more willingness to call out businesses and institutions on bigotry, racism, sexism and other kinds of bias.”

 

The public’s love affair with Facebook hit its first major rough patch in 2016 when Russian trolls attempted to meddle with the 2016 U.S. presidential election using the social media platform. But it was the Cambridge Analytica controversy that may go down in internet history as the start of a series of back-to-back, bruising controversies for the social network, which for years, served as the Silicon Valley poster child of the nouveau American Dream. 

 

 

 

 

From DSC:
When a professor walks into the room, the mobile device that the professor is carrying notifies the system to automatically establish his or her preferred settings for the room — and/or voice recognition allows a voice-based interface to adjust the room’s settings:

  • The lights dim to 50%
  • The projector comes on
  • The screen comes down
  • The audio is turned up to his/her liking
  • The LMS is logged into with his/her login info and launches the class that he/she is teaching at that time of day
  • The temperature is checked and adjusted if too high or low
  • Etc.
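A minimal sketch of how such presence-triggered automation might work. All of the names here (the profile fields, the command keys) are hypothetical, invented for illustration; a real system would talk to actual room-control and LMS APIs:

```python
from dataclasses import dataclass

@dataclass
class ProfessorProfile:
    """Hypothetical per-professor preferences, keyed to a detected mobile device."""
    name: str
    light_level: int       # percent brightness
    audio_level: int       # percent volume
    lms_course_id: str     # course to auto-launch in the LMS
    preferred_temp_f: int

def on_professor_detected(profile: ProfessorProfile, current_temp_f: int) -> dict:
    """Build the room-state commands fired when the professor's device is detected."""
    commands = {
        "lights": profile.light_level,        # e.g., dim to 50%
        "projector": "on",
        "screen": "down",
        "audio": profile.audio_level,
        "lms_session": profile.lms_course_id, # auto-login and launch the class
    }
    # Adjust the temperature only if it drifts outside a small comfort band
    if abs(current_temp_f - profile.preferred_temp_f) > 2:
        commands["thermostat"] = profile.preferred_temp_f
    return commands

prof = ProfessorProfile("Dr. Smith", light_level=50, audio_level=70,
                        lms_course_id="BUS-301", preferred_temp_f=70)
print(on_professor_detected(prof, current_temp_f=75))
```

A voice-driven interface would simply be a second trigger calling the same handler.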
 

The WT2 in-ear translator will be available in January, real-time feedback soon — from wearable-technologies.com by Cathy Russey

Excerpt:

Shenzhen, China & Pasadena, CA-based startup Timekettle wants to solve the language barrier problem. So, the company developed WT2 translator – an in-ear translator for real-time, natural and hands-free communication. The company just announced they’ll be shipping the new translator in January, 2019.

 

 

 

From DSC:
How long before voice drives most appliances, thermostats, etc?

Hisense is bringing Android and AI smarts to its 2019 TV range — from techradar.com by Stephen Lambrechts
Some big announcements planned for CES 2019

Excerpt (emphasis DSC):

Hisense has announced that it will unveil the next evolution of its VIDAA smart TV platform at CES 2019 next month, promising to take full advantage of artificial intelligence with version 3.0.

Each television in Hisense’s 2019 ULED TV lineup will boast the updated VIDAA 3.0 AI platform, with Amazon Alexa functionality fully integrated into the devices, meaning you won’t need an Echo device to use Alexa voice control features.

 

 

 

5 things you will see in the future “smart city” — from interestingengineering.com by Taylor Donovan Barnett
The smart city is on the horizon, and here are some of the crucial technologies that are part of it.


Excerpt:

A New Framework: The Smart City
So, what exactly is a smart city? A smart city is an urban center that hosts a wide range of digital technology across its ecosystem. However, smart cities go far beyond just this definition.

Smart cities use technology to better their populations’ living experiences, operating as one big data-driven ecosystem.

The smart city uses that data from people, vehicles, buildings, etc. to not only improve citizens’ lives but also minimize the environmental impact of the city itself, constantly communicating with itself to maximize efficiency.

So what are some of the crucial components of the future smart city? Here is what you should know.

 

 

 

Google Glass wasn’t a failure. It raised crucial concerns. — from wired.com by Rose Eveleth

Excerpts:

So when Google ultimately retired Glass, it was in reaction to an important act of line drawing. It was an admission of defeat not by design, but by culture.

These kinds of skirmishes on the front lines of surveillance might seem inconsequential — but they can not only change the behavior of tech giants like Google, they can also change how we’re protected under the law. Each time we invite another device into our lives, we open up a legal conversation over how that device’s capabilities change our right to privacy. To understand why, we have to get wonky for a bit, but it’s worth it, I promise.

 

But where many people see Google Glass as a cautionary tale about tech adoption failure, I see a wild success. Not for Google of course, but for the rest of us. Google Glass is a story about human beings setting boundaries and pushing back against surveillance…

 

In the United States, the laws that dictate when you can and cannot record someone have several layers. But most of these laws were written when smartphones and digital home assistants weren’t even a glimmer in Google’s eye. As a result, they are mostly concerned with issues of government surveillance, not individuals surveilling each other or companies surveilling their customers. Which means that as cameras and microphones creep further into our everyday lives, there are more and more legal gray zones.

 

From DSC:
We need to be aware of the emerging technologies around us. Just because we can, doesn’t mean we should. People need to be aware of — and involved with — which emerging technologies get rolled out (or not) and/or which features are beneficial to roll out (or not).

One of the things that’s beginning to alarm me these days is how the United States has turned over the keys to the Maserati — that is, to an expensive, powerful machine — to youth who lack the life experience to know how to handle such power and, often, the proper respect for it. Many of these youthful members of our society don’t own the responsibility for the positive and negative impacts that such powerful technologies can have.

If you owned the car below, would you turn the keys of this ~$137,000+ car over to your 16-25 year old? Yet that’s what America has been doing for years. And, in some areas, we’re now paying the price.

 


 

The corporate world continues to discard the hard-earned experience that age brings…as it shoves older people out of the workforce. (I hesitate to use the word wisdom…but in some cases, that’s also relevant here.) Then we, as a society, sit back and wonder: how did we get to this place?

Even technologists and programmers in their 20s and 30s are beginning to step back and ask…WHY did we develop this application or that feature? Was it — is it — good for society? Is it beneficial? Or should it be tabled or revised into something else?

Below is but one example — though I don’t mean to pick on Microsoft, as they likely have more older workers than the Facebooks, Googles, or Amazons of the world. I fully realize that all of these companies have some older employees. But the youth-oriented culture in America today has almost become an obsession — and not just in the tech world. Turn on the TV, check out the new releases on Netflix, go see a movie in a theater, listen to the radio, cast but a glance at the magazines in the checkout lines, etc., and you’ll instantly know what I mean.

In the workplace, there appears to be a bias against older employees as being less innovative or tech-savvy — such a perspective is often completely incorrect. Go check out LinkedIn for items re: age discrimination…it’s a very real thing. But many of us over the age of 30 know this to be true if we’ve lost a job in the last decade or two and have tried to get a job that involves technology.

Microsoft argues facial-recognition tech could violate your rights — from finance.yahoo.com by Rob Pegoraro

Excerpt (emphasis DSC):

On Thursday, the American Civil Liberties Union provided a good reason for us to think carefully about the evolution of facial-recognition technology. In a study, the group used Amazon’s (AMZN) Rekognition service to compare portraits of members of Congress to 25,000 arrest mugshots. The result: 28 members were mistakenly matched with 28 suspects.

The ACLU isn’t the only group raising the alarm about the technology. Earlier this month, Microsoft (MSFT) president Brad Smith posted an unusual plea on the company’s blog asking that the development of facial-recognition systems not be left up to tech companies.

Saying that the tech “raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression,” Smith called for “a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.”

But we may not get new laws anytime soon.

 

just because we can does not mean we should

 

Just because we can…

 


 

Addendum on 12/27/18: — also related/see:

‘We’ve hit an inflection point’: Big Tech failed big-time in 2018 — from finance.yahoo.com by JP Mangalindan


 

 

AI Now Report 2018 | December 2018  — from ainowinstitute.org

Meredith Whittaker, AI Now Institute, New York University, Google Open Research
Kate Crawford, AI Now Institute, New York University, Microsoft Research
Roel Dobbe, AI Now Institute, New York University
Genevieve Fried, AI Now Institute, New York University
Elizabeth Kaziunas, AI Now Institute, New York University
Varoon Mathur, AI Now Institute, New York University
Sarah Myers West, AI Now Institute, New York University
Rashida Richardson, AI Now Institute, New York University
Jason Schultz, AI Now Institute, New York University School of Law
Oscar Schwartz, AI Now Institute, New York University

With research assistance from Alex Campolo and Gretchen Krueger (AI Now Institute, New York University)

Excerpt (emphasis DSC):

Building on our 2016 and 2017 reports, the AI Now 2018 Report contends with this central problem, and provides 10 practical recommendations that can help create accountability frameworks capable of governing these powerful technologies.

  1. Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain.
  2. Facial recognition and affect recognition need stringent regulation to protect the public interest.
  3. The AI industry urgently needs new approaches to governance. As this report demonstrates, internal governance structures at most technology companies are failing to ensure accountability for AI systems.
  4. AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector.
  5. Technology companies should provide protections for conscientious objectors, employee organizing, and ethical whistleblowers.
  6. Consumer protection agencies should apply “truth-in-advertising” laws to AI products and services.
  7. Technology companies must go beyond the “pipeline model” and commit to addressing the practices of exclusion and discrimination in their workplaces.
  8. Fairness, accountability, and transparency in AI require a detailed account of the “full stack supply chain.”
  9. More funding and support are needed for litigation, labor organizing, and community participation on AI accountability issues.
  10. University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.

 

Also see:

After a Year of Tech Scandals, Our 10 Recommendations for AI — from medium.com by the AI Now Institute
Let’s begin with better regulation, protecting workers, and applying “truth in advertising” rules to AI

 

Also see:

Excerpt:

As we discussed, this technology brings important and even exciting societal benefits but also the potential for abuse. We noted the need for broader study and discussion of these issues. In the ensuing months, we’ve been pursuing these issues further, talking with technologists, companies, civil society groups, academics and public officials around the world. We’ve learned more and tested new ideas. Based on this work, we believe it’s important to move beyond study and discussion. The time for action has arrived.

We believe it’s important for governments in 2019 to start adopting laws to regulate this technology. The facial recognition genie, so to speak, is just emerging from the bottle. Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up.

In particular, we don’t believe that the world will be best served by a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success. We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition. And a solid floor requires that we ensure that this technology, and the organizations that develop and use it, are governed by the rule of law.

 

From DSC:
This is a major heads up to the American Bar Association (ABA), law schools, governments, legislatures around the country, the courts, the corporate world, as well as colleges, universities, and community colleges. Emerging technologies are moving much faster than society’s ability to deal with them!

The ABA and law schools need to majorly pick up their pace — for the benefit of all within our society.

 

 

 

The information below is from Heather Campbell at Chegg
(emphasis DSC)


 

Chegg Math Solver is an AI-driven tool to help students understand math. It is more than just a calculator – it explains the approach to solving the problem, so students won’t just copy the answer but can understand it and solve similar problems on their own. Most importantly, students can dig deeper into a problem and see why it’s solved that way. Chegg Math Solver.

In every subject, there are many key concepts and terms that are crucial for students to know and understand. Often it can be hard to determine what the most important concepts and terms are for a given subject, and even once you’ve identified them you still need to understand what they mean. To help you learn and understand these terms and concepts, we’ve provided thousands of definitions, written and compiled by Chegg experts. Chegg Definition.

 

 

 

 

 


From DSC:
I see this type of functionality as a piece of a next generation learning platform — a piece of the Living from the Living [Class] Room type of vision. Great work here by Chegg!

Likely, students will also be able to take pictures of their homework, submit it online, and have that image/problem analyzed for correctness and/or where things went wrong with it.

 

 


 

 

Alexa, get me the articles (voice interfaces in academia) — from blog.libux.co by Kelly Dagan

Excerpt:

Credit to Jill O’Neill, who has written an engaging consideration of applications, discussions, and potentials for voice-user interfaces in the scholarly realm. She details a few use case scenarios: finding recent, authoritative biographies of Jane Austen; finding if your closest library has an item on the shelf now (and whether it’s worth the drive based on traffic).

Coming from an undergraduate-focused (and library) perspective, I can think of a few more:

  • asking if there are any group study rooms available at 7 pm and making a booking
  • finding out if [X] is open now (Archives, the Cafe, the Library, etc.)
  • finding three books on the Red Brigades, seeing if they are available, and saving the locations
  • grabbing five research articles on stereotype threat, to read later
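Under the hood, each of those spoken requests reduces to an intent plus a few slots (the facility name, an hour, a topic). A rough sketch of the dispatch layer, with invented intent names and stubbed backends standing in for real library systems:

```python
# Hypothetical intent dispatcher for a library voice interface.
# Intent names, hours data, and backend stubs are all invented for illustration.

def check_open(facility: str) -> str:
    hours = {"archives": (9, 17), "cafe": (8, 20), "library": (7, 23)}  # stub data
    open_h, close_h = hours.get(facility.lower(), (0, 0))
    return f"{facility} is open from {open_h}:00 to {close_h}:00."

def book_study_room(hour: int) -> str:
    # A real skill would call the room-booking system's API here.
    return f"Room booked for {hour}:00."

HANDLERS = {
    "IsOpenIntent": lambda slots: check_open(slots["facility"]),
    "BookRoomIntent": lambda slots: book_study_room(slots["hour"]),
}

def handle(intent: str, slots: dict) -> str:
    handler = HANDLERS.get(intent)
    return handler(slots) if handler else "Sorry, I can't help with that yet."

print(handle("IsOpenIntent", {"facility": "Library"}))
```

The catalog-search use cases (“find three books on the Red Brigades”) would follow the same pattern, with a search backend behind another intent.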

 

Also see:

 

 

 
 
 

Elon Musk receives FCC approval to launch over 7,500 satellites into space — from digitaltrends.com by Kelly Hodgkins

new satellite-based network would cover the entire globe -- is that a good thing?

Excerpt (emphasis DSC):

The FCC this week unanimously approved SpaceX’s ambitious plan to launch 7,518 satellites into low-Earth orbit. These satellites, along with 4,425 previously approved satellites, will serve as the backbone for the company’s proposed Starlink broadband network. As it does with most of its projects, SpaceX is thinking big with its global broadband network. The company is expected to spend more than $10 billion to build and launch a constellation of satellites that will provide high-speed internet coverage to just about every corner of the planet.

 

To put this deployment in perspective, there are only 1,886 active satellites presently in orbit. These new SpaceX satellites will increase the number of active satellites six-fold in less than a decade. 
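The “six-fold” figure checks out as back-of-envelope arithmetic on the approved counts, comparing the full Starlink constellation against today’s active fleet:

```python
approved_new = 7_518      # satellites approved by the FCC this week
approved_earlier = 4_425  # satellites previously approved
active_today = 1_886      # active satellites currently in orbit

starlink_total = approved_new + approved_earlier
print(starlink_total)                 # total Starlink satellites: 11,943
print(starlink_total / active_today)  # roughly 6.3x today's active fleet
```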

 

 

New simulation shows how Elon Musk’s internet satellite network might work — from digitaltrends.com by Luke Dormehl

Excerpt:

From Tesla to Hyperloop to plans to colonize Mars, it’s fair to say that Elon Musk thinks big. Among his many visionary ideas is the dream of building a space internet. Called Starlink, Musk’s ambition is to create a network for conveying a significant portion of internet traffic via thousands of satellites Musk hopes to have in orbit by the mid-2020s. But just how feasible is such a plan? And how do you avoid them crashing into one another?

 



 

From DSC:
Is this even the FCC’s call to make?

On one hand, such a network could be globally helpful, positive, and full of pros. But on the other hand, I wonder…what are the potential drawbacks of this proposal? Will nations across the globe launch their own networks — each consisting of thousands of satellites?

While I love Elon’s big thinking, the nations need to weigh in on this one.

 

 


LinkedIn Learning Opens Its Platform (Slightly) — from edsurge by Jeff Young

Excerpt (emphasis DSC):

A few years ago, in a move toward professional learning, LinkedIn bought Lynda.com for $1.5 billion, adding the well-known library of video-based courses to its professional social network. Today LinkedIn officials announced that they plan to open up their platform to let in educational videos from other providers as well—but with a catch or two.

The plan, announced Friday, is to let companies or colleges who already subscribe to LinkedIn Learning add content from a select group of other providers. The company or college will still have to subscribe to those other services separately, so it’s essentially an integration—but it does mark a change in approach.

For LinkedIn, the goal is to become the front door for employees as they look for micro-courses for professional development.

 

LinkedIn also announced another service for its LinkedIn Learning platform called Q&A, which will give subscribers the ability to pose a question they have about the video lessons they’re taking. The question will first be sent to bots, but if that doesn’t yield an answer the query will be sent on to other learners, and in some cases the instructor who created the videos.
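The routing described here is a simple escalation chain: try an automated answerer first, then fall through to peers, then the instructor. Sketched below with hypothetical tiers and a stubbed FAQ lookup in place of whatever model LinkedIn actually uses:

```python
# Hypothetical escalation chain for a course Q&A feature:
# bot first, then other learners, then the instructor.

def bot_tier(question: str):
    # Stub: a real system would query an FAQ index or ML model here.
    canned = {"what is a pivot table?": "A pivot table summarizes data by grouping rows."}
    return canned.get(question.lower())

def peer_tier(question: str):
    return None  # stub: would post to other learners and await replies

def instructor_tier(question: str):
    return "Routed to the course instructor."

def answer(question: str) -> str:
    # Walk the tiers in order; the first non-None response wins.
    for tier in (bot_tier, peer_tier, instructor_tier):
        response = tier(question)
        if response is not None:
            return response
    return "No answer available."

print(answer("What is a pivot table?"))
print(answer("Why does my macro fail?"))
```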

 

 

Also see:

LinkedIn becomes a serious open learning experience platform — from clomedia.com by Josh Bersin
LinkedIn is becoming a dominant learning solution with some pretty interesting competitive advantages, according to one learning analyst.

Excerpt:

LinkedIn has become quite a juggernaut in the corporate learning market. Last time I checked the company had more than 17 million users, 14,000 corporate customers, more than 3,000 courses and was growing at high double-digit rates. And all this in only about two years.

And the company just threw down the gauntlet; it’s now announcing it has completely opened up its learning platform to external content partners. This is the company’s formal announcement that LinkedIn Learning is not just an amazing array of content, it is a corporate learning platform. The company wants to become a single place for all organizational learning content.

 

LinkedIn now offers skills-based learning recommendations to any user through its machine learning algorithms. 

 

 



Is there demand for staying relevant? For learning new skills? For reinventing oneself?

Well…let’s see.

 

 

 

 

 

 



From DSC:
So…look out, higher ed and traditional forms of accreditation — your window of opportunity may be starting to close. Alternatives to traditional higher ed continue to appear on the scene and gain momentum. LinkedIn — and/or similar organizations in the future — along with blockchain- and big-data-backed efforts may gain traction and start taking away major market share. If employers get solid performance from employees who have gone this route, higher ed had better watch out. 

Microsoft/LinkedIn/Lynda.com are nicely positioned to be a major player who can offer society a next generation learning platform at an incredible price — offering up-to-date, microlearning along with new forms of credentialing. It’s what I’ve been calling the Amazon.com of higher ed (previously the Walmart of Education) for ~10 years. It will take place in a strategy/platform similar to this one.

 



Also, this is what a gorilla on the back looks like:

 

This is what a gorilla on the back looks like!

 



Also see:

  • Meet the 83-Year-Old App Developer Who Says Edtech Should Better Support Seniors — from edsurge.com by Sydney Johnson
    Excerpt (emphasis DSC):
    Now at age 83, Wakamiya beams with excitement when she recounts her journey, which has been featured in news outlets and even at Apple’s developer conference last year. But through learning how to code, she believes that experience offers an even more important lesson to today’s education and technology companies: don’t forget about senior citizens. Today’s education technology products overwhelmingly target young people. And while there’s a growing industry around serving adult learners in higher education, companies largely neglect to consider the needs of the elderly.

 

 
 
© 2024 | Daniel Christian