Police across the US are training crime-predicting AIs on falsified data — from technologyreview.com by Karen Hao
A new report shows how supposedly objective systems can perpetuate corrupt policing practices.

Excerpts (emphasis DSC):

Despite the disturbing findings, the city entered a secret partnership only a year later with data-mining firm Palantir to deploy a predictive policing system. The system used historical data, including arrest records and electronic police reports, to forecast crime and help shape public safety strategies, according to company and city government materials. At no point did those materials suggest any effort to clean or amend the data to address the violations revealed by the DOJ. In all likelihood, the corrupted data was fed directly into the system, reinforcing the department’s discriminatory practices.


But new research suggests it’s not just New Orleans that has trained these systems with “dirty data.” In a paper released today, to be published in the NYU Law Review, researchers at the AI Now Institute, a research center that studies the social impact of artificial intelligence, found the problem to be pervasive among the jurisdictions it studied. This has significant implications for the efficacy of predictive policing and other algorithms used in the criminal justice system.

“Your system is only as good as the data that you use to train it on,” says Kate Crawford, cofounder and co-director of AI Now and an author on the study.
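From DSC:
The feedback loop described above is easy to demonstrate. Below is a toy sketch of my own (written in Python, and emphatically not any vendor's actual system): one district starts with inflated arrest numbers, the "model" allocates patrols from that history, the extra patrols generate extra recorded arrests, and the bias compounds year after year even though the underlying crime rate is identical everywhere.

```python
# A toy illustration of how biased historical records can create a
# self-reinforcing loop in a predictive policing model. Not any vendor's code.
import numpy as np

rng = np.random.default_rng(seed=0)

n_districts = 5
true_crime_rate = np.full(n_districts, 0.10)         # underlying crime is identical everywhere
recorded_arrests = np.array([50.0, 10, 10, 10, 10])  # district 0 was historically over-policed

for year in range(5):
    # "Predict" risk from the historical record and allocate 100 patrols proportionally.
    risk = recorded_arrests / recorded_arrests.sum()
    patrols = (risk * 100).astype(int)

    # More patrols produce more recorded arrests, even though true crime is equal.
    new_arrests = rng.binomial(patrols * 10, true_crime_rate)
    recorded_arrests += new_arrests

share = np.round(recorded_arrests / recorded_arrests.sum(), 2)
print("Share of recorded arrests by district (drives next year's patrols):", share)
# District 0 keeps absorbing patrols purely because of its skewed starting data.
```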

 

How AI is enhancing wearables — from techopedia.com by Claudio Butticev
Takeaway: Wearable devices have been helping people for years now, but the addition of AI to these wearables is giving them capabilities beyond anything seen before.

Excerpt:

Restoring Lost Sight and Hearing – Is That Really Possible?
People with sight or hearing loss must face a lot of challenges every day to perform many basic activities. From crossing the street to ordering food on the phone, even the simplest chore can quickly become a struggle. Things may change for those struggling with sight or hearing loss, however, as some companies have started developing machine learning-based systems to help the blind and visually impaired find their way across cities, and the deaf and hearing impaired enjoy some good music.

German AI company AiServe combined computer vision and wearable hardware (camera, microphone and earphones) with AI and location services to design a system that is able to acquire data over time to help people navigate through neighborhoods and city blocks. Sort of like a car navigation system, but in a much more adaptable form which can “learn how to walk like a human” by identifying all the visual cues needed to avoid common obstacles such as light posts, curbs, benches and parked cars.
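From DSC:
To make that architecture a bit more concrete, here is a rough sketch of the kind of decision loop such a wearable might run: camera frames go in, spoken obstacle cues come out. Everything below is hypothetical; the detect_obstacles() function is a stand-in stub for a trained vision model, and none of it reflects AiServe's actual software.

```python
# A hypothetical sketch of a wearable-navigation loop: camera frames in,
# spoken obstacle warnings out. detect_obstacles() is a stand-in for a real
# computer-vision model; this is not AiServe's code.
from dataclasses import dataclass

@dataclass
class Obstacle:
    label: str          # e.g. "light post", "curb", "parked car"
    bearing_deg: float  # negative = left of the walker, positive = right
    distance_m: float

def detect_obstacles(frame) -> list:
    """Stand-in for a trained vision model running on the headset's camera feed."""
    return [Obstacle("curb", bearing_deg=-15.0, distance_m=2.0)]

def spoken_cue(obstacle: Obstacle) -> str:
    side = "on your left" if obstacle.bearing_deg < 0 else "on your right"
    return f"{obstacle.label} about {obstacle.distance_m:.0f} meters ahead, {side}"

def navigation_step(frame, speak) -> None:
    # Warn only about nearby obstacles, nearest first.
    nearby = sorted((o for o in detect_obstacles(frame) if o.distance_m < 5.0),
                    key=lambda o: o.distance_m)
    for obstacle in nearby:
        speak(spoken_cue(obstacle))

navigation_step(frame=None, speak=print)  # prints: curb about 2 meters ahead, on your left
```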

 

From DSC:
So once again we see the pluses and minuses of a given emerging technology. In fact, most technologies can be used for good or for ill. But I’m left asking the following questions:

  • As citizens, what do we do if we don’t like a direction that’s being taken on a given technology or on a given set of technologies? Or on a particular feature, use, process, or development involved with an emerging technology?

One other reflection here…it will be really interesting to see what happens in the future as some of these emerging technologies are combined…again, for good or for ill.

The question is:
How can we weigh in?

 

Also relevant/see:

AI Now Report 2018 — from ainowinstitute.org, December 2018

Excerpt:

University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.

 

Furthermore, it is long overdue for technology companies to directly address the cultures of exclusion and discrimination in the workplace. The lack of diversity and ongoing tactics of harassment, exclusion, and unequal pay are not only deeply harmful to employees in these companies but also impact the AI products they release, producing tools that perpetuate bias and discrimination.

The current structure within which AI development and deployment occurs works against meaningfully addressing these pressing issues. Those in a position to profit are incentivized to accelerate the development and application of systems without taking the time to build diverse teams, create safety guardrails, or test for disparate impacts. Those most exposed to harm from these systems commonly lack the financial means and access to accountability mechanisms that would allow for redress or legal appeals. This is why we are arguing for greater funding for public litigation, labor organizing, and community participation as more AI and algorithmic systems shift the balance of power across many institutions and workplaces.

 


 

 

India Just Swore in Its First Robot Police Officer — from futurism.com by Dan Robitzski
RoboCop, meet KP-Bot.

Excerpt:

RoboCop
India just swore in its first robotic police officer, which is named KP-Bot.

The animatronic-looking machine was granted the rank of sub-inspector on Tuesday, and it will operate the front desk of Thiruvananthapuram police headquarters, according to India Today.

 

 

From DSC:
Whoa….hmmm…note to the ABA and to the legal education field — and actually to anyone involved in developing laws — we need to catch up. Quickly.

My thoughts go to the governments and to the militaries around the globe. Are we now on a slippery slope? How far along are the militaries of the world in integrating robotics and AI into their weapons of war? Quite far, I think.

Also, at the higher education level, are Computer Science and Engineering departments taking their responsibilities seriously in this regard? What kind of teaching is being done (or not done) about the moral responsibilities that come with the code their students write? With the robots they build?

 

 

 

The real reason tech struggles with algorithmic bias — from wired.com by Yael Eisenstat

Excerpts:

ARE MACHINES RACIST? Are algorithms and artificial intelligence inherently prejudiced? Do Facebook, Google, and Twitter have political biases? Those answers are complicated.

But if the question is whether the tech industry is doing enough to address these biases, the straightforward response is no.

Humans cannot wholly avoid bias, as countless studies and publications have shown. Insisting otherwise is an intellectually dishonest and lazy response to a very real problem.

In my six months at Facebook, where I was hired to be the head of global elections integrity ops in the company’s business integrity division, I participated in numerous discussions about the topic. I did not know anyone who intentionally wanted to incorporate bias into their work. But I also did not find anyone who actually knew what it meant to counter bias in any true and methodical way.

 

But the company has created its own sort of insular bubble in which its employees’ perception of the world is the product of a number of biases that are engrained within the Silicon Valley tech and innovation scene.

 

 

AR will spark the next big tech platform — call it Mirrorworld — from wired.com by Kevin Kelly

Excerpt:

It is already under construction. Deep in the research labs of tech companies around the world, scientists and engineers are racing to construct virtual places that overlay actual places. Crucially, these emerging digital landscapes will feel real; they’ll exhibit what landscape architects call placeness. The Street View images in Google Maps are just facades, flat images hinged together. But in the mirrorworld, a virtual building will have volume, a virtual chair will exhibit chairness, and a virtual street will have layers of textures, gaps, and intrusions that all convey a sense of “street.”

The mirrorworld—a term first popularized by Yale computer scientist David Gelernter—will reflect not just what something looks like but its context, meaning, and function. We will interact with it, manipulate it, and experience it like we do the real world.

 

Also see:
Google Maps in augmented reality points you in the right direction — from mashable.com by Sasha Lekach

 

 

Bobst launches augmented reality helpline — from proprint.com.au by Sheree Young

Excerpt:

Swiss packaging and label equipment supplier Bobst has launched a new augmented reality smart headset to help answer customer questions.

Rapid problem solving thanks to a new augmented reality helpline service introduced by Swiss packaging and label equipment supplier Bobst stands to save printers time and money, the company says.

The Helpline Plus AR innovation provides a remote assistance service to Bobst’s customers using a smart headset with augmented reality glasses. The technology is being gradually rolled out globally, Bobst says.

Customers can use the headset to contact technical experts and iron out any issues they may be having as well as receive real time advice and support.

 

 

 

LinkedIn 2019 Talent Trends: Soft Skills, Transparency and Trust — from linkedin.com by Josh Bersin

Excerpts:

This week LinkedIn released its 2019 Global Talent Trends research, a study that summarizes job and hiring data across millions of people, and the results are quite interesting. (5,165 talent professionals and managers responded, a big sample.)

In an era when automation, AI, and technology have become more pervasive, more important (and more frightening) than ever, the big issue companies face is about people: how we find and develop soft skills, how we create fairness and transparency, and how we make the workplace more flexible, humane, and honest.

The most interesting part of this research is a simple fact: in today’s world of software engineering and ever-more technology, it’s soft skills that employers want. 91% of companies cited this as an issue and 80% of companies are struggling to find better soft skills in the market.

What is a “soft skill?” The term goes back twenty years, to when we had “hard skills” (engineering and science), so we threw everything else into the category of “soft.” In reality, soft skills are all the human skills we have in teamwork, leadership, collaboration, communication, creativity, and person-to-person service. It’s easy to “teach” hard skills, but soft skills must be “learned.”

 

 

Also see:

Employers Want ‘Uniquely Human Skills’ — from campustechnology.com by Dian Schaffhauser

Excerpt:

According to 502 hiring managers and 150 HR decision-makers, the top skills they’re hunting for among new hires are:

  • The ability to listen (74 percent);
  • Attention to detail and attentiveness (70 percent);
  • Effective communication (69 percent);
  • Critical thinking (67 percent);
  • Strong interpersonal abilities (65 percent); and
  • Being able to keep learning (65 percent).
 



 

UIX: When the future comes to West Michigan, will we be ready? — from rapidgrowthmedia.com by Matthew Russell

Excerpts (emphasis DSC):

“Here in the United States, if we were to personify things a bit, it’s almost like society is anxiously calling out to an older sibling (i.e., emerging technologies), ‘Heh! Wait up!!!'” Christian says. “This trend has numerous ramifications.”

Out of those ramifications, Christian names three main points that society will have to address to fully understand, make use of, and make practical, future technologies.

  1. The need for the legal/legislative side of the world to close the gap between what’s possible and what’s legal
  2. The need for lifelong learning and to reinvent oneself
  3. The need to make pulse-checking/futurism an essential tool in the toolbox of every member of the workforce today and in the future

 


 

From DSC:
The key thing that I was trying to relay in my contribution to Matthew’s helpful article was that we are now on an exponential trajectory of technological change. This trend has ramifications for numerous societies around the globe, and it involves the legal realm as well. Hopefully, all of us in the workforce are coming to realize our need to be constantly pulse-checking the relevant landscapes around us. To help make that happen, each of us needs to be tapping into the appropriate “streams of content” that are relevant to our careers so that our knowledge bases are as up-to-date as possible. We’re all into lifelong learning now, right?

Along these lines, increasingly there is a need for futurism to hit the mainstream. That is, when the world is moving at 120+ mph, the skills and methods that futurists follow must be better taught and understood, or many people will be broadsided by the changes brought about by emerging technologies. We need to better pulse-check the relevant landscapes, anticipate the oncoming changes, develop potential scenarios, and then design the strategies to respond to those potential scenarios.

 

 

Online curricula helps teachers tackle AI in the classroom — from educationdive.com by Lauren Barack

Dive Brief:

  • Schools may already use some form of artificial intelligence (AI), but hardly any have curricula designed to teach K-12 students how it works and how to use it, wrote EdSurge. However, organizations such as the International Society for Technology in Education (ISTE) are developing their own sets of lessons that teachers can take to their classrooms.
  • Members of “AI for K-12” — an initiative co-sponsored by the Association for the Advancement of Artificial Intelligence and the Computer Science Teachers Association — wrote in a paper that an AI curriculum should address five basic ideas:
    • Computers use sensors to understand what goes on around them.
    • Computers can learn from data.
    • With this data, computers can create models for reasoning.
    • While computers are smart, it’s hard for them to understand people’s emotions, intentions and natural languages, making interactions less comfortable.
    • AI can be a beneficial tool, but it can also harm society.
  • These kinds of lessons are already at play among groups including the Boys and Girls Club of Western Pennsylvania, which has been using a program from online AI curriculum site ReadyAI. The education company lent its AI-in-a-Box kit, which normally sells for $3,000, to the group so it could teach these concepts.
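From DSC:
The second idea in that list ("computers can learn from data") is the one most easily shown in a classroom. Here is a minimal sketch of that idea in Python, assuming scikit-learn is installed; it is a teaching illustration only, not part of any of the curricula mentioned above.

```python
# A classroom-sized example of "computers can learn from data": train a simple
# model on labeled examples, then test it on flowers it has never seen.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)                        # flower measurements + species labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)   # the "learning from data" step
print("Accuracy on unseen flowers:", model.score(X_test, y_test))
```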

 

AI curriculum is coming for K-12 at last. What will it include? — from edsurge.com by Danielle Dreilinger

Excerpt:

Artificial intelligence powers Amazon’s recommendations engine, Google Translate and Siri, for example. But few U.S. elementary and secondary schools teach the subject, maybe because there are so few curricula available for students. Members of the “AI for K-12” work group wrote in a recent Association for the Advancement of Artificial Intelligence white paper that “unlike the general subject of computing, when it comes to AI, there is little guidance for teaching at the K-12 level.”

But that’s starting to change. Among other advances, ISTE and AI4All are developing separate curricula with support from General Motors and Google, respectively, according to the white paper. Lead author Dave Touretzky of Carnegie Mellon has developed his own curriculum, Calypso. It’s part of the “AI-in-a-Box” kit, which is being used by more than a dozen community groups and school systems, including Carter’s class.

 

 

 

 

Amazon has 10,000 employees dedicated to Alexa — here are some of the areas they’re working on — from businessinsider.com by Avery Hartmans

Summary (emphasis DSC):

  • Amazon’s vice president of Alexa, Steve Rabuchin, has confirmed that yes, there really are 10,000 Amazon employees working on Alexa and the Echo.
  • Those employees are focused on things like machine learning and making Alexa more knowledgeable.
  • Some employees are working on giving Alexa a personality, too.

 

 

From DSC:
How might this trend impact learning spaces? For example, I am interested in using voice to intuitively “drive” smart classroom control systems:

  • “Alexa, turn on the projector”
  • “Alexa, dim the lights by 50%”
  • “Alexa, open Canvas and launch my Constitutional Law I class”
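
For what it's worth, here is a minimal sketch of what the skill behind those commands might look like: an AWS Lambda handler for a hypothetical custom Alexa skill. The intent names and the room-control endpoint below are assumptions made for illustration only; they are not an existing product or API.

```python
# A minimal sketch of a hypothetical "classroom control" Alexa skill handler.
# The intent names and the control-system endpoint are assumptions, not a
# shipping integration.
import json
import urllib.request

CONTROL_API = "https://classroom-controller.example.edu/api"  # hypothetical endpoint

def send_command(device, action, value=None):
    """POST a command to the (hypothetical) room-control system."""
    payload = json.dumps({"device": device, "action": action, "value": value}).encode()
    req = urllib.request.Request(f"{CONTROL_API}/command", data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=5)

def lambda_handler(event, context):
    intent = event["request"].get("intent", {}).get("name", "")

    if intent == "TurnOnProjectorIntent":
        send_command("projector", "power_on")
        speech = "Turning on the projector."
    elif intent == "DimLightsIntent":
        level = event["request"]["intent"]["slots"]["percent"]["value"]
        send_command("lights", "dim", level)
        speech = f"Dimming the lights by {level} percent."
    else:
        speech = "Sorry, I don't know that classroom command yet."

    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

In a real deployment the skill would also need to authenticate to the control system and be scoped to the right room, but the basic voice-to-command plumbing is about that small.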

 

 

 

Gartner survey shows 37% of organizations have implemented AI in some form — from gartner.com
Despite talent shortages, the percentage of enterprises employing AI grew 270% over the past four years

Excerpt:

The number of enterprises implementing artificial intelligence (AI) grew 270 percent in the past four years and tripled in the past year, according to the Gartner, Inc. 2019 CIO Survey. Results showed that organizations across all industries use AI in a variety of applications, but struggle with acute talent shortages.

 

The deployment of AI has tripled in the past year — rising from 25 percent in 2018 to 37 percent today. The reason for this big jump is that AI capabilities have matured significantly, and enterprises are thus more willing to implement the technology. “We still remain far from general AI that can wholly take over complex tasks, but we have now entered the realm of AI-augmented work and decision science — what we call ‘augmented intelligence,’” Mr. Howard added.

 

Key Findings from the “2019 CIO Survey: CIOs Have Awoken to the Importance of AI”

  • The percentage of enterprises deploying artificial intelligence (AI) has tripled in the past year.
  • CIOs picked AI as the top game-changer technology.
  • Enterprises use AI in a wide variety of applications.
  • AI suffers from acute talent shortages.

 

 

From DSC:
In an earlier posting, I discussed an idea for a new TV show — a program that would be both entertaining and educational. So I suppose that this posting is a Part II along those same lines.

The program I had in mind at that time would focus on significant topics and issues within American society, offered up in a debate/presentation-style format.

I had envisioned that different individuals, groups, or organizations would discuss the pros and cons of an issue or topic. The show would provide contact information for helpful resources, groups, organizations, legislators, etc., so that viewers could learn more about a subject or get involved in finding a solution to that problem.

OR

…as I revisit that idea today…perhaps the show could feature humans versus an artificial intelligence such as IBM’s Project Debater:

 

 

Project Debater is the first AI system that can debate humans on complex topics. Project Debater digests massive texts, constructs a well-structured speech on a given topic, delivers it with clarity and purpose, and rebuts its opponent. Eventually, Project Debater will help people reason by providing compelling, evidence-based arguments and limiting the influence of emotion, bias, or ambiguity.

 

 

 

The five most important new jobs in AI, according to KPMG — from qz.com by Cassie Werber

Excerpt:

Perhaps as a counter to the panic that artificial intelligence will destroy jobs, consulting firm KPMG published a list (on 1/8/19) of what it predicts will soon become the five most sought-after AI roles. The predictions are based on the company’s own projects and those on which it advises. They are:

  • AI Architect – Responsible for working out where AI can help a business, measuring performance and—crucially—“sustaining the AI model over time.” Lack of architects “is a big reason why companies cannot successfully sustain AI initiatives,” KPMG notes.
  • AI Product Manager – Liaises between teams, making sure ideas can be implemented, especially at scale. Works closely with architects, and with human resources departments to make sure humans and machines can all work effectively.
  • Data Scientist – Manages the huge amounts of available data and designs algorithms to make it meaningful.
  • AI Technology Software Engineer – “One of the biggest problems facing businesses is getting AI from pilot phase to scalable deployment,” KPMG writes. Software engineers need to be able both to build scalable technology and understand how AI actually works.
  • AI Ethicist – AI presents a host of ethical challenges which will continue to unfold as the technology develops. Creating guidelines and ensuring they’re upheld will increasingly become a full-time job.

 

While it’s all very well to list the jobs people should be training and hiring for, it’s another matter to actually create a pipeline of people ready to enter those roles. Brad Fisher, KPMG’s US lead on data and analytics and the lead author of the predictions, tells Quartz there aren’t enough people getting ready for these roles.

 

Fisher has a steer for those who are eyeing AI jobs but have yet to choose an academic path: business process skills can be “trained,” he said, but “there is no substitute for the deep technical skillsets, such as mathematics, econometrics, or computer science, which would prepare someone to be a data scientist or a big-data software engineer.”

 

From DSC:
I don’t think institutions of higher education (as well as several other types of institutions in our society) recognize that the pace of technological change has shifted, and that this shift has significant ramifications for society. If these institutions have picked up on it, you can hardly tell. We simply aren’t used to this pace of change.

Technologies change quickly. People change slowly. And, by the way, that is not a comment on how old someone is…change is hard at almost any age.

 

 

 

 

 

 

Presentation Translator for PowerPoint — from Microsoft (emphasis below from DSC:)

Presentation Translator breaks down the language barrier by allowing users to offer live, subtitled presentations straight from PowerPoint. As you speak, the add-in, powered by the Microsoft Translator live feature, allows you to display subtitles directly on your PowerPoint presentation in any one of more than 60 supported text languages. This feature can also be used for audiences who are deaf or hard of hearing.

 

Additionally, up to 100 audience members in the room can follow along with the presentation in their own language, including the speaker’s language, on their phone, tablet or computer.

 

From DSC:
Up to 100 audience members in the room can follow along with the presentation in their own language! Wow!

Are you thinking what I’m thinking?! If this could also reach learners and/or employees outside the room, it could be an incredibly powerful piece of a next generation, global learning platform!

Automatic translation with subtitles — per the learner’s or employee’s primary language setting as established in their cloud-based learner profile. Though this posting is not about blockchain, the idea of a cloud-based learner profile reminds me of a graphic I created in January 2017.
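
Sketched in code, that idea might look something like the following. The LearnerProfile store and the translate() helper are hypothetical stand-ins (a real build would call an actual machine-translation service); the point is simply that a cloud-based profile could drive each person's subtitle language.

```python
# A rough sketch of profile-driven subtitles: each remote learner receives the
# caption in the language stored in their (hypothetical) cloud-based profile.
from dataclasses import dataclass

@dataclass
class LearnerProfile:
    learner_id: str
    preferred_language: str  # e.g. "es", "zh-Hans", "ar"

def translate(text, target_language):
    """Placeholder for a real machine-translation call (e.g. a cloud MT service)."""
    return f"[{target_language}] {text}"  # stand-in so the sketch runs end to end

def broadcast_caption(caption, profiles):
    """Return one translated caption per learner, keyed by learner id."""
    return {p.learner_id: translate(caption, p.preferred_language) for p in profiles}

audience = [LearnerProfile("l-001", "es"), LearnerProfile("l-002", "ko")]
print(broadcast_caption("Today we will cover the First Amendment.", audience))
```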

A couple of relevant quotes here:

A number of players and factors are changing the field. Georgia Institute of Technology calls it “at-scale” learning; others call it the “mega-university” — whatever you call it, this is the advent of the very large, 100,000-plus-student-scale online provider. Coursera, edX, Udacity and FutureLearn (U.K.) are among the largest providers. But individual universities such as Southern New Hampshire, Arizona State and Georgia Tech are approaching the “at-scale” mark as well. One could say that’s evidence of success in online learning. And without question it is.

But, with highly reputable programs at this scale and tuition rates at half or below the going rate for regional and state universities, the impact is rippling through higher ed. Georgia Tech’s top 10-ranked computer science master’s with a total expense of less than $10,000 has drawn more than 10,000 qualified majors. That has an impact on the enrollment at scores of online computer science master’s programs offered elsewhere. The overall online enrollment is up, but it is disproportionately centered in affordable scaled programs, draining students from the more expensive, smaller programs at individual universities. The dominoes fall as more and more high-quality at-scale programs proliferate.

— Ray Schroeder

 

 

Education goes omnichannel. In today’s connected world, consumers expect to have anything they want available at their fingertips, and education is no different. Workers expect to be able to learn on-demand, getting the skills and knowledge they need in that moment, to be able to apply it as soon as possible. Moving fluidly between working and learning, without having to take time off to go to – or back to – school will become non-negotiable.

Anant Agarwal

 

From DSC:
Is there major change/disruption ahead? Could be…for many, it can’t come soon enough.

 

 

Ten HR trends in the age of artificial intelligence — from fortune.com by Jeanne Meister
The future of HR is both digital and human as HR leaders focus on optimizing the combination of human and automated work. This is driving a new HR priority: requiring leaders and teams to develop fluency in artificial intelligence while they re-imagine HR to be more personal, human, and intuitive.

Excerpt from 21 More Jobs Of the Future (emphasis DSC):

Voice UX Designer: This role will leverage voice as a platform to deliver an “optimal” dialect and sound that is pleasing to each of the seven billion humans on the planet. The Voice UX Designer will do this by creating a set of AI tools and algorithms to help individuals find their “perfect voice” assistant.

Head of Business Behavior: The head of business behavior will analyze employee behavioral data such as performance data along with data gathered through personal, environmental and spatial sensors to create strategies to improve employee experience, cross company collaboration, productivity and employee well-being.

The question for HR leaders is: What are new job roles in HR that are on the horizon as A.I. becomes integrated into the workplace?

Chief Ethical and Humane Use Officer: This job role is already being filled; Salesforce announced its first Chief Ethical and Humane Use Officer this month. This new role will focus on developing strategies to use technology in an ethical and humane way. As practical uses of AI have exploded in recent years, we look for more companies to establish new jobs focusing on ethical uses of AI to ensure AI’s trustworthiness, while also helping to defuse fears about it.

A.I. Trainer: This role readies the existing knowledge you have about a job for A.I. to use. Creating knowledge for an A.I.-supported workplace requires individuals to tag or “annotate” discrete knowledge nuggets so the correct data is served up in a conversational interface. This role is increasingly important as the role of a recruiter is augmented by AI.
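
From DSC:
As a rough illustration of that "tag or annotate discrete knowledge nuggets" idea, the sketch below stores a few tagged snippets and serves back the one whose tags best match a question asked through a conversational interface. The tags and snippets are invented for the example; a production system would use far richer retrieval.

```python
# A toy sketch of the "A.I. Trainer" idea: experts tag small knowledge nuggets,
# and a conversational interface serves up the nugget whose tags best match
# the question. All content below is invented for illustration.
import re

KNOWLEDGE_NUGGETS = [
    {"tags": {"vacation", "pto", "leave"},  "text": "Submit PTO requests at least two weeks in advance."},
    {"tags": {"expense", "reimbursement"},  "text": "Expense reports are due by the 5th of each month."},
    {"tags": {"interview", "recruiting"},   "text": "Panel interviews should include at least one peer."},
]

def answer(question):
    words = set(re.findall(r"[a-z']+", question.lower()))
    # Pick the nugget whose tags overlap most with the words in the question.
    best = max(KNOWLEDGE_NUGGETS, key=lambda nugget: len(nugget["tags"] & words))
    if not best["tags"] & words:
        return "No tagged nugget matches that yet; an A.I. trainer would need to add one."
    return best["text"]

print(answer("How do I request vacation leave?"))
# -> Submit PTO requests at least two weeks in advance.
```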

 

 

Also see:

  • Experts Weigh in on Merits of AI in Education — by Dian Schaffhauser
    Excerpt:
    Will artificial intelligence make most people better off over the next decade, or will it redefine what free will means or what a human being is? A new report by the Pew Research Center has weighed in on the topic by conferring with some 979 experts, who have, in summary, predicted that networked AI “will amplify human effectiveness but also threaten human autonomy, agency and capabilities.”

    These same experts also weighed in on the expected changes in formal and informal education systems. Many mentioned seeing “more options for affordable adaptive and individualized learning solutions,” such as the use of AI assistants to enhance learning activities and their effectiveness.

 

 

Top six AI and automation trends for 2019 — from forbes.com by Daniel Newman

Excerpt:

If your company hasn’t yet created a plan for AI and automation throughout your enterprise, you have some work to do. Experts believe AI will add nearly $16 trillion to the global economy by 2030, and 20% of companies surveyed are already planning to incorporate AI throughout their companies next year. As 2018 winds down, now is the time to take a look at some trends and predictions for AI and automation that I believe will dominate the headlines in 2019—and to think about how you may incorporate them into your own company.

 

Also see the following talk — along with an insert here from DSC:

Kai-Fu has a rosier picture than I do regarding how humanity will be impacted by AI. One simply needs to check out today’s news to see that humans have a very hard time creating unity, thinking about why businesses exist in the first place, and being kind to one another…

 

 

 

How AI can save our humanity — a TED Talk by Kai-Fu Lee

 

 

 

Big tech may look troubled, but it’s just getting started — from nytimes.com by David Streitfeld

Excerpt:

SAN JOSE, Calif. — Silicon Valley ended 2018 somewhere it had never been: embattled.

Lawmakers across the political spectrum say Big Tech, for so long the exalted embodiment of American genius, has too much power. Once seen as a force for making our lives better and our brains smarter, tech is now accused of inflaming, radicalizing, dumbing down and squeezing the masses. Tech company stocks have been pummeled from their highs. Regulation looms. Even tech executives are calling for it.

The expansion underlines the dizzying truth of Big Tech: It is barely getting started.

 

“For all intents and purposes, we’re only 35 years into a 75- or 80-year process of moving from analog to digital,” said Tim Bajarin, a longtime tech consultant to companies including Apple, IBM and Microsoft. “The image of Silicon Valley as Nirvana has certainly taken a hit, but the reality is that we the consumers are constantly voting for them.”

 

Big Tech needs to be regulated, many are beginning to argue, and yet there are worries about giving that power to the government.

Which leaves regulation up to the companies themselves, always a dubious proposition.

 

 

 