Google and Microsoft warn that AI may do dumb things — from wired.com by Tom Simonite

Excerpt:

Alphabet likes to position itself as a leader in AI research, but it was six months behind rival Microsoft in warning investors about the technology’s ethical risks. The AI disclosure in Google’s latest filing reads like a trimmed down version of much fuller language Microsoft put in its most recent annual SEC report, filed last August:

“AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions. These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm.”

 

Chinese company leaves Muslim-tracking facial recognition database exposed online — from zdnet.com by Catalin Cimpanu
Researcher finds one of the databases used to track Uyghur Muslim population in Xinjiang.

Excerpt:

One of the facial recognition databases that the Chinese government is using to track the Uyghur Muslim population in the Xinjiang region has been left open on the internet for months, Dutch security researcher Victor Gevers told ZDNet.

The database belongs to a Chinese company named SenseNets, which according to its website provides video-based crowd analysis and facial recognition technology.

The user data wasn’t just benign usernames, but highly detailed and highly sensitive information that someone would usually find on an ID card, Gevers said. The researcher saw user profiles with information such as names, ID card numbers, ID card issue date, ID card expiration date, sex, nationality, home addresses, dates of birth, photos, and employer.

Some of the descriptive names associated with the “trackers” contained terms such as “mosque,” “hotel,” “police station,” “internet cafe,” “restaurant,” and other places where public cameras would normally be found.

 

From DSC:
Readers of this blog will know that I’m generally pro-technology. But focusing on that last article in particular: to me, privacy is the key issue here. Which group of people, in which nation, is next? Will Country A next be tracking Christians? Will Country B be tracking people of a given sexual orientation? Will Country C be tracking people with some other characteristic?

Where does it end? Who gets to decide? What will it cost to be tracked, or to have whatever characteristic one’s government has decided to track? What forums exist for pushing back on technologies, or on features of technologies, that we don’t like or want?

We need forums/channels for raising awareness of, and voting on, these emerging technologies. We need informed legislators, senators, lawyers, and citizens…and we need new laws here…ASAP.

 


The real reason tech struggles with algorithmic bias — from wired.com by Yael Eisenstat

Excerpts:

ARE MACHINES RACIST? Are algorithms and artificial intelligence inherently prejudiced? Do Facebook, Google, and Twitter have political biases? Those answers are complicated.

But if the question is whether the tech industry is doing enough to address these biases, the straightforward response is no.

Humans cannot wholly avoid bias, as countless studies and publications have shown. Insisting otherwise is an intellectually dishonest and lazy response to a very real problem.

In my six months at Facebook, where I was hired to be the head of global elections integrity ops in the company’s business integrity division, I participated in numerous discussions about the topic. I did not know anyone who intentionally wanted to incorporate bias into their work. But I also did not find anyone who actually knew what it meant to counter bias in any true and methodical way.

 

But the company has created its own sort of insular bubble, in which its employees’ perception of the world is the product of a number of biases that are ingrained within the Silicon Valley tech and innovation scene.

 

Google is bringing translation to its Home speakers — from businessinsider.com by Peter Newman

Excerpt:

Google has added real-time translation capabilities to its Google Home smart speakers, its Home Hub smart display, and other screen-equipped devices from third parties, according to Android Police.

 


AI bias: 9 questions leaders should ask — from enterprisersproject.com by Kevin Casey
Artificial intelligence bias can create problems ranging from bad business decisions to injustice. Use these questions to fight off potential biases in your AI systems.

Excerpt:

People questions to ask about AI bias
1. Who is building the algorithms?
2. Do your AI & ML teams take responsibility for how their work will be used?
3. Who should lead an organization’s effort to identify bias in its AI systems?
4. How is my training data constructed?

Data questions to ask about AI bias
5. Is the data set comprehensive?
6. Do you have multiple sources of data?

Management questions to ask about AI bias
7. What proportion of resources is appropriate for an organization to devote to assessing potential bias?
8. Have you thought deeply about what metrics you use to evaluate your work?
9. How can we test for bias in training data?
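As a concrete starting point for questions 5 and 9 above, here is a minimal sketch (Python with pandas) of one simple audit: compare each group’s share of the training data with its rate of positive labels. The file name and the “group”/“label” column names are hypothetical placeholders, not a prescribed schema.

```python
# A minimal sketch of one bias check on training data: how well is each
# group represented, and how often does each group receive a positive label?
# The file name and the "group"/"label" column names are hypothetical.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str, label_col: str) -> pd.DataFrame:
    """Summarize each group's share of the data and its positive-label rate."""
    report = df.groupby(group_col).agg(
        count=(label_col, "size"),
        positive_rate=(label_col, "mean"),  # assumes a binary 0/1 label
    )
    report["share_of_data"] = report["count"] / len(df)
    return report

df = pd.read_csv("training_data.csv")  # hypothetical training set
print(representation_report(df, group_col="group", label_col="label"))
```

Large gaps in either column don’t prove bias on their own, but they flag where to look more closely, which is exactly what questions 5 and 9 are driving at.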

 


Tech Trends 2019: Beyond the digital frontier — from deloitte.com

Excerpt:

Emerging technology trends can seem both elusive and ephemeral, but some become integral to business and IT strategies—and form the backbone of tomorrow’s technology innovation. The eight chapters of Tech Trends 2019 look to guide CIOs through today’s most promising trends, with an eye toward innovation and growth and a spotlight on emerging trends that may well offer new avenues for pursuing strategic ambitions.

 

Online curricula help teachers tackle AI in the classroom — from educationdive.com by Lauren Barack

Dive Brief:

  • Schools may already use some form of artificial intelligence (AI), but hardly any have curricula designed to teach K-12 students how it works and how to use it, EdSurge reported. However, organizations such as the International Society for Technology in Education (ISTE) are developing their own sets of lessons that teachers can take to their classrooms.
  • Members of “AI for K-12” — an initiative co-sponsored by the Association for the Advancement of Artificial Intelligence and the Computer Science Teachers Association — wrote in a paper that an AI curriculum should address five basic ideas:
    • Computers use sensors to understand what goes on around them.
    • Computers can learn from data.
    • With this data, computers can create models for reasoning.
    • While computers are smart, it’s hard for them to understand people’s emotions, intentions and natural languages, making interactions less comfortable.
    • AI can be a beneficial tool, but it can also harm society.
  • These kinds of lessons are already at play among groups including the Boys and Girls Club of Western Pennsylvania, which has been using a program from online AI curriculum site ReadyAI. The education company lent its AI-in-a-Box kit, which normally sells for $3,000, to the group so it could teach these concepts.
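The second and third ideas on that list (computers can learn from data, and then use the resulting model to reason) can be demonstrated in just a few lines. Below is a minimal sketch using scikit-learn; the toy study-habits data is invented purely for illustration.

```python
# A toy illustration of "computers can learn from data" and "with this data,
# computers can create models for reasoning": train on labeled examples,
# then predict unseen cases. The data is invented for demonstration only.
from sklearn.tree import DecisionTreeClassifier

# Each example: [hours studied, hours slept]; label 1 = passed the quiz
X = [[0, 4], [1, 5], [2, 7], [3, 6], [4, 8], [5, 7]]
y = [0, 0, 0, 1, 1, 1]

model = DecisionTreeClassifier().fit(X, y)   # learn from data
print(model.predict([[2, 8], [5, 5]]))       # reason about new cases
```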

 

AI curriculum is coming for K-12 at last. What will it include? — from edsurge.com by Danielle Dreilinger

Excerpt:

Artificial intelligence powers Amazon’s recommendations engine, Google Translate and Siri, for example. But few U.S. elementary and secondary schools teach the subject, maybe because there are so few curricula available for students. Members of the “AI for K-12” work group wrote in a recent Association for the Advancement of Artificial Intelligence white paper that “unlike the general subject of computing, when it comes to AI, there is little guidance for teaching at the K-12 level.”

But that’s starting to change. Among other advances, ISTE and AI4All are developing separate curricula with support from General Motors and Google, respectively, according to the white paper. Lead author Dave Touretzky of Carnegie Mellon has developed his own curriculum, Calypso. It’s part of the “AI-in-a-Box” kit, which is being used by more than a dozen community groups and school systems, including Carter’s class.

 


What does it say when a legal blockchain eBook has 1.7M views? — from legalmosaic.com by Mark A. Cohen

Excerpts (emphasis DSC):

“Blockchain For Lawyers,” a recently released eBook by Australian legal tech company Legaler, drew 1.7M views in two weeks. What does that staggering number say about blockchain, legal technology, and the legal industry? Clearly, blockchain is a hot legal topic, along with artificial intelligence (AI) and legal tech generally.

Legal practice and delivery are each changing. New practice areas like cryptocurrency, cybersecurity, and Internet law are emerging as law struggles to keep pace with the speed of business change in the digital age. Concurrently, several staples of traditional practice (research, document review, etc.) are becoming automated and/or are no longer performed by law firm associates. There is more “turnover” of practice tasks, more reliance on machines and on professionals who are not licensed attorneys to mine data and provide domain expertise used by lawyers, and more collaboration than ever before. The emergence of new industries demands that lawyers not only provide legal expertise in support of new areas but also possess the intellectual agility to master them quickly. Many practice areas that law students will encounter have yet to be created. That means all lawyers will need to be more agile than their predecessors and engage in ongoing training.

 


Amazon has 10,000 employees dedicated to Alexa — here are some of the areas they’re working on — from businessinsider.com by Avery Hartmans

Summary (emphasis DSC):

  • Amazon’s vice president of Alexa, Steve Rabuchin, has confirmed that yes, there really are 10,000 Amazon employees working on Alexa and the Echo.
  • Those employees are focused on things like machine learning and making Alexa more knowledgeable.
  • Some employees are working on giving Alexa a personality, too.

 

From DSC:
How might this trend impact learning spaces? For example, I am interested in using voice to intuitively “drive” smart classroom control systems:

  • “Alexa, turn on the projector”
  • “Alexa, dim the lights by 50%”
  • “Alexa, open Canvas and launch my Constitutional Law I class”
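To make that concrete, here is a hypothetical sketch of the skill-side plumbing: an AWS Lambda handler that receives an Alexa intent for a command like those above and forwards it to a room-control REST API. The intent name, slot names, and endpoint are all assumptions for illustration, not an existing integration.

```python
# Hypothetical Alexa skill backend (AWS Lambda): route a spoken command
# such as "turn on the projector" to a classroom control system's REST API.
# The intent/slot names and the endpoint URL are illustrative assumptions.
import json
import urllib.request

CONTROL_API = "https://classroom-controls.example.edu/api"  # hypothetical

def send_command(device: str, action: str) -> None:
    """POST a device command to the (hypothetical) room-control API."""
    payload = json.dumps({"device": device, "action": action}).encode("utf-8")
    req = urllib.request.Request(
        f"{CONTROL_API}/commands",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

def lambda_handler(event, context):
    """Entry point: inspect the Alexa request and build a spoken reply."""
    request = event["request"]
    if request["type"] == "IntentRequest" and request["intent"]["name"] == "ControlDeviceIntent":
        slots = request["intent"]["slots"]
        device = slots["device"]["value"]  # e.g. "projector"
        action = slots["action"]["value"]  # e.g. "turn on"
        send_command(device, action)
        speech = f"OK, asking the room system to {action} the {device}."
    else:
        speech = "Sorry, I can't control that yet."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

The same pattern would extend to the Canvas example: the skill resolves a course-name slot and calls whatever API the control system or LMS actually exposes.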

 


Gartner survey shows 37% of organizations have implemented AI in some form — from gartner.com
Despite talent shortages, the percentage of enterprises employing AI grew 270% over the past four years

Excerpt:

The number of enterprises implementing artificial intelligence (AI) grew 270 percent in the past four years and tripled in the past year, according to the Gartner, Inc. 2019 CIO Survey. Results showed that organizations across all industries use AI in a variety of applications, but struggle with acute talent shortages.

 

The deployment of AI has tripled in the past year — rising from 25 percent in 2018 to 37 percent today. The reason for this big jump is that AI capabilities have matured significantly, and thus enterprises are more willing to implement the technology. “We still remain far from general AI that can wholly take over complex tasks, but we have now entered the realm of AI-augmented work and decision science — what we call ‘augmented intelligence,’” Mr. Howard added.

 

Key Findings from the “2019 CIO Survey: CIOs Have Awoken to the Importance of AI”

  • The percentage of enterprises deploying artificial intelligence (AI) has tripled in the past year.
  • CIOs picked AI as the top game-changer technology.
  • Enterprises use AI in a wide variety of applications.
  • AI suffers from acute talent shortages.

 

From DSC:
In a previous posting, I discussed an idea for a new TV show — a program that would be both entertaining and educational. So I suppose this posting is a Part II along those same lines.

The program I had in mind at that time would focus on significant topics and issues within American society, offered up in a debate/presentation-style format.

I envisioned different individuals, groups, or organizations discussing the pros and cons of an issue or topic. The show would provide contact information for helpful resources, groups, organizations, legislators, etc., so that viewers could learn more about a subject or get involved in finding a solution to the problem.

OR

…as I revisit that idea today…perhaps the show could feature humans versus an artificial intelligence such as IBM’s Project Debater:

 

Project Debater is the first AI system that can debate humans on complex topics. Project Debater digests massive texts, constructs a well-structured speech on a given topic, delivers it with clarity and purpose, and rebuts its opponent. Eventually, Project Debater will help people reason by providing compelling, evidence-based arguments and limiting the influence of emotion, bias, or ambiguity.

 
