With great tech success, comes even greater responsibility — from techcrunch.com by Ron Miller


As we watch major tech platforms evolve over time, it’s clear that companies like Facebook, Apple, Google and Amazon (among others) have created businesses that are having a huge impact on humanity — sometimes positive and other times not so much.

That suggests these platforms have to understand how people are using them and recognize when someone, including the companies themselves, is trying to manipulate them or use them for nefarious purposes. We can apply that same responsibility filter to individual technologies such as artificial intelligence, and indeed to any advanced technology, and to the impact it could have on society over time.

We can be sure that Twitter’s creators never imagined a world in which bots would be launched to influence an election when they started the company more than a decade ago. Over time, though, it has become crystal clear that Twitter, and indeed all large platforms, can be used for a variety of purposes, and the platforms have to react when they believe certain parties are using their networks to manipulate parts of the populace.



But it’s up to the companies developing the tech to recognize the responsibility that comes with great economic success, or simply with the impact that whatever they are creating could have on society.





Why the Public Overlooks and Undervalues Tech’s Power — from morningconsult.com by Joanna Piacenza
Some experts say the tech industry is rapidly nearing a day of reckoning


  • 5% picked tech when asked which industry had the most power and influence, well behind the U.S. government, Wall Street and Hollywood.
  • Respondents were much more likely to say sexual harassment was a major issue in Hollywood (49%) and government (35%) than in Silicon Valley (17%).

It is difficult for Americans to escape the technology industry’s influence in everyday life. Facebook Inc. reports that more than 184 million people in the United States log on to the social network daily, or roughly 56 percent of the population. According to the Pew Research Center, nearly three-quarters (73 percent) of all Americans and 94 percent of Americans ages 18-24 use YouTube. Amazon.com Inc.’s market value is now nearly three times that of Walmart Inc.

But when asked which geographic center holds the most power and influence in America, respondents in a recent Morning Consult survey ranked the tech industry in Silicon Valley far behind politics and government in Washington, finance on Wall Street and the entertainment industry in Hollywood.





Tech companies should stop pretending AI won’t destroy jobs — from technologyreview.com / MIT Technology Review by Kai-Fu Lee
No matter what anyone tells you, we’re not ready for the massive societal upheavals on the way.

Excerpt (emphasis DSC):

The rise of China as an AI superpower isn’t a big deal just for China. The competition between the US and China has sparked intense advances in AI that will be impossible to stop anywhere. The change will be massive, and not all of it good. Inequality will widen. As my Uber driver in Cambridge has already intuited, AI will displace a large number of jobs, which will cause social discontent. Consider the progress of Google DeepMind’s AlphaGo software, which beat the best human players of the board game Go in early 2016. It was subsequently bested by AlphaGo Zero, introduced in 2017, which learned by playing games against itself and within 40 days was superior to all the earlier versions. Now imagine those improvements transferring to areas like customer service, telemarketing, assembly lines, reception desks, truck driving, and other routine blue-collar and white-­collar work. It will soon be obvious that half of our job tasks can be done better at almost no cost by AI and robots. This will be the fastest transition humankind has experienced, and we’re not ready for it.

And finally, there are those who deny that AI has any downside at all—which is the position taken by many of the largest AI companies. It’s unfortunate that AI experts aren’t trying to solve the problem. What’s worse, and unbelievably selfish, is that they actually refuse to acknowledge the problem exists in the first place.

These changes are coming, and we need to tell the truth and the whole truth. We need to find the jobs that AI can’t do and train people to do them. We need to reinvent education. These will be the best of times and the worst of times. If we act rationally and quickly, we can bask in what’s best rather than wallow in what’s worst.


From DSC:
If a business has a choice between hiring a human being or having the job done by a piece of software and/or a robot, which do you think it will go with? My guess? It’s all about the money: whichever option is less expensive will get the job.

However, that way of thinking may cause enormous social unrest if the software and robots leave human beings in the (job search) dust. Do we, as a society, win with this way of thinking? To me, it’s capitalism gone astray. We aren’t caring enough for our fellow members of the human race: people who have to put bread and butter on their tables, people who have to support their families, and people who want to make solid contributions to society and/or pursue their vocations/callings, to have and find purpose in their lives.


Others think we’ll be saved by a universal basic income. “Take the extra money made by AI and distribute it to the people who lost their jobs,” they say. “This additional income will help people find their new path, and replace other types of social welfare.” But UBI doesn’t address people’s loss of dignity or meet their need to feel useful. It’s just a convenient way for a beneficiary of the AI revolution to sit back and do nothing.



To Fight Fatal Infections, Hospitals May Turn to Algorithms — from scientificamerican.com by John McQuaid
Machine learning could speed up diagnoses and improve accuracy


The CDI algorithm—based on a form of artificial intelligence called machine learning—is at the leading edge of a technological wave starting to hit the U.S. health care industry. After years of experimentation, machine learning’s predictive powers are well-established, and it is poised to move from labs to broad real-world applications, said Zeeshan Syed, who directs Stanford University’s Clinical Inference and Algorithms Program.

“The implications of machine learning are profound,” Syed said. “Yet it also promises to be an unpredictable, disruptive force—likely to alter the way medical decisions are made and put some people out of work.”



Lawyer-Bots Are Shaking Up Jobs — from technologyreview.com by Erin Winick


Meticulous research, deep study of case law, and intricate argument-building—lawyers have used similar methods to ply their trade for hundreds of years. But they’d better watch out, because artificial intelligence is moving in on the field.

As of 2016, there were over 1,300,000 licensed lawyers and 200,000 paralegals in the U.S. Consultancy group McKinsey estimates that 22 percent of a lawyer’s job and 35 percent of a law clerk’s job can be automated, which means that while humanity won’t be completely overtaken, major business and career adjustments aren’t far off (see “Is Technology About to Decimate White-Collar Work?”). In some cases, they’re already here.


“If I was the parent of a law student, I would be concerned a bit,” says Todd Solomon, a partner at the law firm McDermott Will & Emery, based in Chicago. “There are fewer opportunities for young lawyers to get trained, and that’s the case outside of AI already. But if you add AI onto that, there are ways that is advancement, and there are ways it is hurting us as well.”


So far, AI-powered document discovery tools have had the biggest impact on the field. By training on millions of existing documents, case files, and legal briefs, a machine-learning algorithm can learn to flag the appropriate sources a lawyer needs to craft a case, often more successfully than humans. For example, JPMorgan announced earlier this year that it is using software called Contract Intelligence, or COIN, which can in seconds perform document review tasks that took legal aides 360,000 hours.
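The kind of supervised text classification that these discovery tools build on can be sketched in a few lines. This is a toy illustration only: the documents, labels, and relevance judgments below are invented, and real systems train on millions of labeled case files rather than a handful of sentences (scikit-learn is assumed to be available).

```python
# Toy sketch of the supervised text classification underlying AI
# document-discovery tools: train on labeled examples, then flag
# which new documents look relevant to a case. All data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_docs = [
    "breach of contract damages awarded to plaintiff",
    "indemnification clause governs liability between the parties",
    "quarterly earnings call transcript and revenue guidance",
    "office cafeteria menu for the week",
]
labels = [1, 1, 0, 0]  # 1 = relevant to the case, 0 = not relevant

# TF-IDF turns each document into a weighted word-count vector;
# logistic regression learns which words signal relevance.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_docs, labels)

new_docs = [
    "plaintiff alleges breach of the indemnification contract",
    "new menu items in the cafeteria",
]
flags = model.predict(new_docs)
for doc, flag in zip(new_docs, flags):
    print("FLAG" if flag == 1 else "skip", "-", doc)
```

The point of the sketch is the workflow, not the model: a tool like this ranks or flags documents for a human lawyer to review, which is why the impact so far has been on discovery grunt work rather than on argument-building.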

People fresh out of law school won’t be spared the impact of automation either. Document-based grunt work is typically a key training ground for first-year associate lawyers, and AI-based products are already stepping in. CaseMine, a legal technology company based in India, builds on document discovery software with what it calls its “virtual associate,” CaseIQ. The system takes an uploaded brief and suggests changes to make it more authoritative, while providing additional documents that can strengthen a lawyer’s arguments.



Lessons From Artificial Intelligence Pioneers — from gartner.com by Christy Pettey

CIOs are struggling to accelerate deployment of artificial intelligence (AI). A recent Gartner survey of global CIOs found that only 4% of respondents had deployed AI. However, the survey also found that one-fifth of the CIOs are already piloting or planning to pilot AI in the short term.

Such ambition puts these leaders in a challenging position. AI efforts are already stressing staff, skills, and the readiness of in-house and third-party AI products and services. Without effective strategic plans for AI, organizations risk wasting money, falling short in performance and falling behind their business rivals.

Pursue small-scale plans likely to deliver small-scale payoffs that will offer lessons for larger implementations

“AI is just starting to become useful to organizations but many will find that AI faces the usual obstacles to progress of any unproven and unfamiliar technology,” says Whit Andrews, vice president and distinguished analyst at Gartner. “However, early AI projects offer valuable lessons and perspectives for enterprise architecture and technology innovation leaders embarking on pilots and more formal AI efforts.”

So what lessons can we learn from these early AI pioneers?



Why Artificial Intelligence Researchers Should Be More Paranoid — from wired.com by Tom Simonite


What to do about that? The report’s main recommendation is that people and companies developing AI technology discuss safety and security more actively and openly—including with policymakers. It also asks AI researchers to adopt a more paranoid mindset and consider how enemies or attackers might repurpose their technologies before releasing them.



How to Prepare College Graduates for an AI World — from wsj.com by
Northeastern University President Joseph Aoun says schools need to change their focus, quickly


WSJ: What about adults who are already in the workforce?

DR. AOUN: Society has to provide ways, and higher education has to provide ways, for people to re-educate themselves, reskill themselves or upskill themselves.

That is the part that I see that higher education has not embraced. That’s where there is an enormous opportunity. We look at lifelong learning in higher education as an ancillary operation, as a second-class operation in many cases. We dabble with it, we try to make money out of it, but we don’t embrace it as part of our core mission.



Inside Amazon’s Artificial Intelligence Flywheel — from wired.com by Steven Levy
How deep learning came to power Alexa, Amazon Web Services, and nearly every other division of the company.


Amazon loves to use the word flywheel to describe how various parts of its massive business work as a single perpetual motion machine. It now has a powerful AI flywheel, where machine-learning innovations in one part of the company fuel the efforts of other teams, who in turn can build products or offer services to affect other groups, or even the company at large. Offering its machine-learning platforms to outsiders as a paid service makes the effort itself profitable—and in certain cases scoops up yet more data to level up the technology even more.





10 Breakthrough Technologies 2018 — from MIT Technology Review


Dueling neural networks. Artificial embryos. AI in the cloud. Welcome to our annual list of the 10 technology advances we think will shape the way we work and live now and for years to come.

Every year since 2001 we’ve picked what we call the 10 Breakthrough Technologies. People often ask, what exactly do you mean by “breakthrough”? It’s a reasonable question—some of our picks haven’t yet reached widespread use, while others may be on the cusp of becoming commercially available. What we’re really looking for is a technology, or perhaps even a collection of technologies, that will have a profound effect on our lives.

  1. 3-D Metal Printing
  2. Artificial Embryos
  3. Sensing City
  4. AI for Everybody
  5. Dueling Neural Networks
  6. Babel-Fish Earbuds
    In the cult sci-fi classic The Hitchhiker’s Guide to the Galaxy, you slide a yellow Babel fish into your ear to get translations in an instant. In the real world, Google has come up with an interim solution: a $159 pair of earbuds, called Pixel Buds. These work with its Pixel smartphones and Google Translate app to produce practically real-time translation. One person wears the earbuds, while the other holds a phone. The earbud wearer speaks in his or her language—English is the default—and the app translates the talking and plays it aloud on the phone. The person holding the phone responds; this response is translated and played through the earbuds.
  7. Zero-Carbon Natural Gas
  8. Perfect Online Privacy
  9. Genetic Fortune-Telling
  10. Materials’ Quantum Leap




Fake videos are on the rise. As they become more realistic, seeing shouldn’t always be believing — from latimes.com by David Pierson


It’s not hard to imagine a world in which social media is awash with doctored videos targeting ordinary people to exact revenge, extort or to simply troll.

In that scenario, where Twitter and Facebook are algorithmically flooded with hoaxes, no one could fully believe what they see. Truth, already diminished by Russia’s misinformation campaign and President Trump’s proclivity to label uncomplimentary journalism “fake news,” would be more subjective than ever.

The danger there is not just believing hoaxes, but also dismissing what’s real.

The consequences could be devastating for the notion of evidentiary video, long considered the paradigm of proof given the sophistication required to manipulate it.

“This goes far beyond ‘fake news’ because you are dealing with a medium, video, that we traditionally put a tremendous amount of weight on and trust in,” said David Ryan Polgar, a writer and self-described tech ethicist.





From DSC:
Though I’m typically pro-technology, this is truly disturbing. There are certainly downsides to technology as well as upsides — but it’s how we use a technology that can make the real difference. Again, this is truly disturbing.



AI plus human intelligence is the future of work — from forbes.com by Jeanne Meister


  • 1 in 5 workers will have AI as their co-worker in 2022
  • More job roles will change than will become totally automated, so HR needs to prepare today

As we increase our personal usage of chatbots (defined as software which provides an automated, yet personalized, conversation between itself and human users), employees will soon interact with them in the workplace as well. Forward-looking HR leaders are piloting chatbots now to transform HR and, in the process, re-imagine, re-invent, and re-tool the employee experience.

How does all of this impact HR in your organization? The following ten HR trends will matter most as AI enters the workplace…

The most visible aspect of how HR is being impacted by artificial intelligence is the change in the way companies source and recruit new hires. Most notably, IBM has created a suite of tools that use machine learning to help candidates personalize their job search experience based on the engagement they have with Watson. In addition, Watson is helping recruiters prioritize jobs more efficiently, find talent faster, and match candidates more effectively. According to Amber Grewal, Vice President, Global Talent Acquisition, “Recruiters are focusing more on identifying the most critical jobs in the business and on utilizing data to assist in talent sourcing.”


…as we enter 2018, the next journey for HR leaders will be to leverage artificial intelligence combined with human intelligence and create a more personalized employee experience.



From DSC:
Although I like the possibility of using machine learning to help employees navigate their careers, I have some very real concerns when we talk about using AI for talent acquisition. At this point in time, I would much rather have an experienced human being — one with a solid background in HR — reviewing my resume to see if they believe that there’s a fit for the job and/or determine whether my skills transfer over from a different position/arena or not. I don’t think we’re there yet in terms of developing effective/comprehensive enough algorithms. It may happen, but I’m very skeptical in the meantime. I don’t want to be filtered out just because I didn’t use the right keywords enough times or I used a slightly different keyword than what the algorithm was looking for.

Also, there is definitely age discrimination occurring in today’s workplace, especially in tech-related positions. Folks who are in tech and over the age of 30-35 — don’t lose your job! (Go check out the topic of age discrimination on LinkedIn and similar sites, and you’ll find many postings on it — sometimes with tens of thousands of older employees adding comments/likes.) Although I doubt that any company would allow applicants or the public to see its internally-used algorithms, how difficult would it be to filter out applicants who graduated college prior to ___ (i.e., some year that gets updated on an annual basis)? Answer? Not difficult at all. In fact, that’s at the level of a Programming 101 course.
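To make the point concrete, here is a hypothetical sketch of such a screen. Every name, field, cutoff year, and keyword below is invented for illustration; no real company's system is being described. It shows both concerns at once: how little code a graduation-year cutoff takes, and how brittle keyword matching filters out candidates who simply phrased things differently.

```python
# Hypothetical applicant screen, for illustration only.
# Both rules below are the kind DSC warns about: an age proxy
# (graduation year) and naive keyword matching.

CUTOFF_YEAR = 1995  # illustrative cutoff, the kind "updated on an annual basis"
REQUIRED_KEYWORDS = ["python", "machine learning"]  # illustrative keywords

def passes_screen(applicant: dict) -> bool:
    """Return True if the applicant survives this (ethically dubious) screen."""
    # One line is enough to exclude everyone who graduated before the cutoff,
    # i.e., a trivially implemented proxy for age discrimination.
    if applicant["grad_year"] < CUTOFF_YEAR:
        return False
    # Exact-substring keyword matching: a slightly different phrase fails.
    text = applicant["resume_text"].lower()
    return all(kw in text for kw in REQUIRED_KEYWORDS)

applicants = [
    {"name": "A", "grad_year": 1988, "resume_text": "Python and machine learning expert"},
    {"name": "B", "grad_year": 2010, "resume_text": "python, machine learning, statistics"},
    {"name": "C", "grad_year": 2012, "resume_text": "statistical modelling with scikit-learn"},
]
screened = [a["name"] for a in applicants if passes_screen(a)]
print(screened)  # A is rejected purely by graduation year; C by wording
```

Applicant A has exactly the required skills and still never reaches a human reviewer, which is the scenario described above: rejected not on merit but by a two-line filter.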




Artificial intelligence is going to supercharge surveillance — from theverge.com by James Vincent
What happens when digital eyes get the brains to match?

From DSC:
“Person of Interest” comes to mind after reading this article. Person of Interest is a clever, well-done show, but still…the idea of combining surveillance with a superintelligent AI is a bit unnerving.




Artificial intelligence | 2018 AI predictions — from thomsonreuters.com


  • AI brings a new set of rules to knowledge work
  • Newsrooms embrace AI
  • Lawyers assess the risks of not using AI
  • Deep learning goes mainstream
  • Smart cars demand even smarter humans
  • Accountants audit forward
  • Wealth managers look to AI to compete and grow




Chatbots and Virtual Assistants in L&D: 4 Use Cases to Pilot in 2018 —  from bottomlineperformance.com by Steven Boller


  1. Use a virtual assistant like Amazon Alexa or Google Assistant to answer spoken questions from on-the-go learners.
  2. Answer common learner questions in a chat window or via SMS.
  3. Customize a learning path based on learners’ demographic information.
  4. Use a chatbot to assess learner knowledge.
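Use case 2 above is the easiest to pilot, and a minimal version fits in a few lines. The FAQ entries, wording, and keyword-matching rule below are invented placeholders, not a real L&D product; production chatbots would use intent classification rather than substring matching, but the flow is the same.

```python
# Minimal sketch of use case 2: answering common learner questions
# in a chat window. All entries are invented placeholders.
FAQ = {
    "password": "You can reset your password from the login page.",
    "certificate": "Certificates are issued within 48 hours of course completion.",
    "deadline": "Check the course syllabus page for all assignment deadlines.",
}
FALLBACK = "Sorry, I don't know that one yet. A human will follow up."

def answer(question: str) -> str:
    """Match the first known keyword in the question; otherwise escalate."""
    q = question.lower()
    for keyword, reply in FAQ.items():
        if keyword in q:
            return reply
    return FALLBACK

print(answer("How do I get my certificate?"))
print(answer("What's for lunch?"))  # unmatched questions escalate to a human
```

Even a pilot this simple surfaces the key design decision for all four use cases: what the bot does when it doesn't know, since a confident wrong answer costs more learner trust than a handoff to a human.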




Suncorp looks to augmented reality for insurance claims — from itnews.com.au by Ry Crozier with thanks to Woontack Woo for this resource


Suncorp has revealed it is exploring image recognition and augmented reality-based enhancements for its insurance claims process, adding to the AI systems it deployed last year.

The insurer began testing IBM Watson software last June to automatically determine who is at fault in a vehicle accident.

“We are working on increasing our use of emerging technologies to assist with the insurance claim process, such as using image recognition to assess type and extent of damage, augmented reality that would enable an off-site claims assessor to discuss and assess damage, speech recognition, and obtaining telematic data from increasingly automated vehicles,” the company said.




6 important AI technologies to look out for in 2018 — from itproportal.com by  Olga Egorsheva
Will businesses and individuals finally make AI a part of their daily lives?






The legal and ethical minefield of AI: ‘Tech has the power to do harm as well as good’ — from theguardian.com by Joanna Goodman


Artificial intelligence and machine learning tools are already embedded in our lives, but how should businesses that use such technology manage the associated risks?

As artificial intelligence (AI) penetrates deeper into business operations and services, even supporting judicial decision-making, are we approaching a time when the greatest legal mind could be a machine? According to Prof Dame Wendy Hall, co-author of the report Growing the Artificial Intelligence Industry in the UK, we are just at the beginning of the AI journey and now is the time to set boundaries.

“All tech has the power to do harm as well as good,” Hall says. “So we have to look at regulating companies and deciding what they can and cannot do with the data now.”

AI and robotics professor Noel Sharkey highlights the “legal and moral implications of entrusting human decisions to algorithms that we cannot fully understand”. He explains that the narrow AI systems that businesses currently use (to draw inferences from large volumes of data) apply algorithms that learn from experience and feed back to real-time and historical data. But these systems are far from perfect.

Potential results include flawed outcomes or reasoning, but difficulties also arise from the lack of transparency. This supports Hall’s call for supervision and regulation. Businesses that use AI in their operations need to manage the ethical and legal risks, and the legal profession will have a major role to play in assessing and apportioning risk, responsibility and accountability.








Top 10 Technology Trends for 2018: IEEE Computer Society Predicts the Future of Tech — from computer.org


The top 10 technology trends predicted to reach adoption in 2018 are:

  1. Deep learning (DL)
  2. Digital currencies
  3. Blockchain
  4. Industrial IoT
  5. Robotics
  6. Assisted transportation
  7. Assisted reality and virtual reality (AR/VR)
  8. Ethics, laws, and policies for privacy, security, and liability
  9. Accelerators and 3D
  10. Cybersecurity and AI

Existing Technologies: We did not include the following technologies in our top 10 list as we assume that they have already experienced broad adoption:

A. Data science
B. “Cloudification”
C. Smart cities
D. Sustainability
E. IoT/edge computing








AI: Embracing the promises and realities — from the Allegis Group


What will that future be? When it comes to jobs, the tea leaves are indecipherable as analysts grapple with emerging technologies, new fields of work, and skills that have yet to be conceived. The only certainty is that jobs will change. Consider the conflicting predictions put forth by the analyst community:

  • According to the Organisation for Economic Co-operation and Development, only 5-10% of labor would be displaced by intelligent automation, and new job creation will offset losses.  (Inserted comment from DSC: Hmmm. ONLY 5-10%!? What?! That’s huge! And don’t count on the majority of those people becoming experts in robotics, algorithms, big data, AI, etc.)
  • The World Economic Forum said in 2016 that 60% of children entering school today will work in jobs that do not yet exist.
  • 47% of all American job functions could be automated within 20 years, according to the Oxford Martin School on Economics in a 2013 report.
  • In 2016, a KPMG study estimated that 100 million global knowledge workers could be affected by robotic process automation by 2025.

Despite the conflicting views, most analysts agree on one thing: big change is coming. Venture capitalist David Vandergrift has some words of advice: “Anyone not planning to retire in the next 20 years should be paying pretty close attention to what’s going on in the realm of AI. The supplanting (of jobs) will not happen overnight: the trend over the next couple of decades is going to be towards more and more automation.”

While analysts may not agree on the timing of AI’s development in the economy, many companies are already seeing its impact on key areas of talent and business strategy. AI is replacing jobs, changing traditional roles, applying pressure on knowledge workers, creating new fields of work, and raising the demand for certain skills.






The emphasis on learning is a key change from previous decades and rounds of automation. Advanced AI is, or will soon be, capable of displacing a very wide range of labor, far beyond the repetitive, low-skill functions traditionally thought to be at risk from automation. In many cases, the pressure on knowledge workers has already begun.





Regardless of industry, however, AI is a real challenge to today’s way of thinking about work, value, and talent scarcity. AI will expand and eventually force many human knowledge workers to reinvent their roles to address issues that machines cannot process. At the same time, AI will create a new demand for skills to guide its growth and development. These emerging areas of expertise will likely be technical or knowledge-intensive fields. In the near term, the competition for workers in these areas may change how companies focus their talent strategies.





The Ivory Tower Can’t Keep Ignoring Tech — from nytimes.com by Cathy O’Neil


We need academia to step up to fill in the gaps in our collective understanding about the new role of technology in shaping our lives. We need robust research on hiring algorithms that seem to filter out people with mental health disorders, sentencing algorithms that fail twice as often for black defendants as for white defendants, statistically flawed public teacher assessments or oppressive scheduling algorithms. And we need research to ensure that the same mistakes aren’t made again and again. It’s absolutely within the abilities of academic research to study such examples and to push against the most obvious statistical, ethical or constitutional failures and dedicate serious intellectual energy to finding solutions. And whereas professional technologists working at private companies are not in a position to critique their own work, academics theoretically enjoy much more freedom of inquiry.



There is essentially no distinct field of academic study that takes seriously the responsibility of understanding and critiquing the role of technology — and specifically, the algorithms that are responsible for so many decisions — in our lives.



There’s one solution for the short term. We urgently need an academic institute focused on algorithmic accountability. First, it should provide a comprehensive ethical training for future engineers and data scientists at the undergraduate and graduate levels, with case studies taken from real-world algorithms that are choosing the winners from the losers. Lecturers from humanities, social sciences and philosophy departments should weigh in.








Cameras are Watching and Machines are Learning: The Beginning — from medium.com by Brian Brackeen
You better believe their eyes

This is a new series about cameras and their relationship to face recognition, machine learning, and how, in the future, the ways in which we interact with technology will be radically different.

Excerpt (emphasis DSC):

First, the data.
LDV Capital, a venture capital firm focused on visual technologies, recently published a 19-page report thick with some pretty eye-opening data about cameras.

Specifically, how many cameras we can expect to have watching us, what they are watching us for, and how those insights will be used.

According to their study, by 2022 there will be more than 44,354,881,622 (that’s 44 BILLION) cameras in use globally, collecting billions more images as visual collateral. This is incredible, but what’s interesting is that most of these images will never be seen by human eyes.




From DSC:
Though the author asserts there will be great business opportunities surrounding this trend, I’m not sure that I’m comfortable with it. Embedded cameras everywhere…hmmm…what he calls a privilege (in the quote below), I see as an overstepping of boundaries.

We have the privilege of experiencing the actual evolution of a device that we have come to know as one thing, for all of our lives to this point, into something completely different, to the extent that the word “camera”, itself, is becoming outdated.

How do you feel about this trend?





© 2017 | Daniel Christian