Per Jane Hart on LinkedIn:

Top 200 Tools for Learning 2019 is now published, together with:

PLUS analysis of how these tools are being used in different contexts, new graphics, and updated comments on the tools’ pages that show how people are using the tools.

Someone is always listening — from Future Today Institute

Excerpt:

Very Near-Futures Scenarios (2020–2022):

  • Optimistic: Big tech and consumer device industries agree to a single set of standards to inform people when they are being listened to. Devices now emit an audible ping and/or a visible light anytime they are actively recording sound. While they need to store data in order to improve natural language understanding and other important AI systems, consumers now have access to a portal and can see, listen to, and erase their data at any time. In addition, consumers can choose to opt out of storing their data to help improve AI systems.
  • Pragmatic: Big tech and consumer device industries preserve the status quo, which leads to more cases of machine eavesdropping and erodes public trust. Federal agencies open investigations into eavesdropping practices, which leads to a drop in share prices and a concern that more advanced biometric technologies could face debilitating regulation.
  • Catastrophic: Big tech and consumer device industries collect and store our conversations surreptitiously while developing new ways to monetize that data. They anonymize and sell it to developers wanting to create their own voice apps or to research institutions wanting to do studies using real-world conversation. Some platforms develop lucrative fee structures allowing others access to our voice data: business intelligence firms, market research agencies, polling agencies, political parties and individual law enforcement organizations. Consumers have little to no ability to see and understand how their voice data are being used and by whom. Opting out of collection systems is intentionally opaque. Trust erodes. Civil unrest grows.

Watchlist:

  • Google; Apple; Amazon; Microsoft; Salesforce; BioCatch; CrossMatch; ThreatMetrix; Electronic Frontier Foundation; World Privacy Forum; American Civil Liberties Union; IBM; Baidu; Tencent; Alibaba; Facebook; European Union; government agencies worldwide.

Microsoft President: Democracy Is At Stake. Regulate Big Tech — from npr.org by Aarti Shahani

Excerpts:

Regulate us. That’s the unexpected message from one of the country’s leading tech executives. Microsoft President Brad Smith argues that governments need to put some “guardrails” around engineers and the tech titans they serve.

If public leaders don’t, he says, the Internet giants will cannibalize the very fabric of this country.

“We need to work together; we need to work with governments to protect, frankly, something that is far more important than technology: democracy. It was here before us. It needs to be here and healthy after us,” Smith says.

“Almost no technology has gone so entirely unregulated, for so long, as digital technology,” Smith says.

Uh-oh: Silicon Valley is building a Chinese-style social credit system — from fastcompany.com by Mike Elgan
In China, scoring citizens’ behavior is official government policy. U.S. companies are increasingly doing something similar, outside the law.

Excerpts (emphasis DSC):

Have you heard about China’s social credit system? It’s a technology-enabled, surveillance-based nationwide program designed to nudge citizens toward better behavior. The ultimate goal is to “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step,” according to the Chinese government.

In place since 2014, the social credit system is a work in progress that could evolve by next year into a single, nationwide point system for all Chinese citizens, akin to a financial credit score. It aims to punish for transgressions that can include membership in or support for the Falun Gong or Tibetan Buddhism, failure to pay debts, excessive video gaming, criticizing the government, late payments, failing to sweep the sidewalk in front of your store or house, smoking or playing loud music on trains, jaywalking, and other actions deemed illegal or unacceptable by the Chinese government.

IT CAN HAPPEN HERE
Many Westerners are disturbed by what they read about China’s social credit system. But such systems, it turns out, are not unique to China. A parallel system is developing in the United States, in part as the result of Silicon Valley and technology-industry user policies, and in part by surveillance of social media activity by private companies.

Here are some of the elements of America’s growing social credit system.

If current trends hold, it’s possible that in the future a majority of misdemeanors and even some felonies will be punished not by Washington, D.C., but by Silicon Valley. It’s a slippery slope away from democracy and toward corporatocracy.

From DSC:
Who’s to say what gains a citizen points and what subtracts from their score? If one believes a certain thing, is that a plus or a minus? And what might be tied to someone’s score? The ability to obtain food? Medicine/healthcare? Clothing? Social Security payments? Other?

We are giving a huge amount of power to a handful of corporations…so trust comes into play, at least for me. Even internally, the big tech companies seem to be struggling with the ethical ramifications of what they’re working on (in a variety of areas).

Is the stage being set for a “Person of Interest” Version 2.0?

Amazon, Microsoft, ‘putting world at risk of killer AI’: study — from news.yahoo.com by Issam Ahmed

Excerpt:

Washington (AFP) – Amazon, Microsoft and Intel are among leading tech companies putting the world at risk through killer robot development, according to a report that surveyed major players from the sector about their stance on lethal autonomous weapons.

Dutch NGO Pax ranked 50 companies by three criteria: whether they were developing technology that could be relevant to deadly AI, whether they were working on related military projects, and whether they had committed to abstaining from contributing in the future.

“Why are companies like Microsoft and Amazon not denying that they’re currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?” said Frank Slijper, lead author of the report published this week.

Addendum on 8/23/19:

AI is in danger of becoming too male — new research — from singularityhub.com by Juan Mateos-Garcia and Joysy John

Excerpts (emphasis DSC):

But current AI systems are far from perfect. They tend to reflect the biases of the data used to train them and to break down when they face unexpected situations.

So do we really want to turn these bias-prone, brittle technologies into the foundation stones of tomorrow’s economy?

One way to minimize AI risks is to increase the diversity of the teams involved in their development. As research on collective decision-making and creativity suggests, groups that are more cognitively diverse tend to make better decisions. Unfortunately, this is a far cry from the situation in the community currently developing AI systems. And a lack of gender diversity is one important (although not the only) dimension of this.

A review published by the AI Now Institute earlier this year showed that less than 20 percent of the researchers applying to prestigious AI conferences are women, and that only a quarter of undergraduates studying AI at Stanford and the University of California at Berkeley are female.

From DSC:
My niece just left a very lucrative programming job and managerial role at Microsoft after working there for several years. As a single woman, she got tired of fighting the culture there. 

It was another reminder that the cultures of the big tech companies carry significant ramifications…especially given the power of these emerging technologies and the growing influence they have on our own culture.


Addendum on 8/20/19:

  • Google’s Hate Speech Detection A.I. Has a Racial Bias Problem — from fortune.com by Jonathan Vanian
    Excerpt:
    A Google-created tool that uses artificial intelligence to police hate speech in online comments on sites like the New York Times has become racially biased, according to a new study. The tool, developed by Google and a subsidiary of its parent company, often classified comments written in the African-American vernacular as toxic, researchers from the University of Washington, Carnegie Mellon, and the Allen Institute for Artificial Intelligence said in a paper presented in early August at the Association for Computational Linguistics conference in Florence, Italy.
  • On the positive side of things:
    Number of Female Students, Students of Color Tackling Computer Science AP on the Rise — from thejournal.com

From DSC: Holy smokes!!! How might this be applied to education/learning/training in the 21st century!?!

“What if neither distance nor language mattered? What if technology could help you be anywhere you need to be and speak any language? Using AI technology and holographic experiences this is possible, and it is revolutionary.”

Also see:

Microsoft has a wild hologram that translates HoloLens keynotes into Japanese — from theverge.com
Azure and HoloLens combine for a hint at the future

Excerpt:

Microsoft has created a hologram that will transform someone into a digital speaker of another language. The software giant unveiled the technology during a keynote at the Microsoft Inspire partner conference [on 7/17/19] in Las Vegas. Microsoft recently scanned Julia White, a company executive for Azure, at a Mixed Reality capture studio to transform her into an exact hologram replica.

The digital version appeared onstage to translate the keynote into Japanese. Microsoft has used its Azure AI technologies and neural text-to-speech to make this possible. It works by taking recordings of White’s voice to create a personalized voice signature, which makes it sound like she’s speaking Japanese.
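
The article doesn’t give implementation details, but the flow it describes decomposes into three stages: speech recognition, machine translation, and text-to-speech in a personalized voice. Here is a minimal Python sketch of that pipeline in which every function is a hypothetical stub (the names and canned data are mine, standing in for real cloud services such as the Azure Speech APIs):

```python
# Hypothetical sketch of the recognize -> translate -> synthesize pipeline
# described above. All three functions are stubs; a real build would call
# cloud speech services instead.

def recognize_speech(audio: bytes) -> str:
    """Stub for speech-to-text."""
    return "Welcome to the keynote."

def translate_text(text: str, target: str) -> str:
    """Stub for machine translation."""
    canned = {("Welcome to the keynote.", "ja"): "基調講演へようこそ。"}
    return canned.get((text, target), text)

def synthesize(text: str, voice_signature: str) -> bytes:
    """Stub for neural text-to-speech conditioned on a personalized
    voice signature built from recordings of the speaker."""
    return f"[{voice_signature}]: {text}".encode("utf-8")

def translate_keynote(audio: bytes, target: str, voice_signature: str) -> bytes:
    text = recognize_speech(audio)
    translated = translate_text(text, target=target)
    return synthesize(translated, voice_signature=voice_signature)

print(translate_keynote(b"...", target="ja", voice_signature="presenter-voice").decode("utf-8"))
```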

Microsoft’s new AI wants to help you crush your next presentation — from pcmag.com by Jake Leary
PowerPoint is receiving a slew of updates, including one that aims to help you improve your public speaking.

Excerpt:

Microsoft [on 6/18/19] announced several PowerPoint upgrades, the most notable of which is an artificial intelligence tool that aims to help you overcome pre-presentation jitters.

The Presenter Coach AI listens to you practice and offers real-time feedback on your pace, word choice, and more. It will, for instance, warn you if you’re using filler words like “umm” and “ahh,” profanities, non-inclusive language, or reading directly from your slides. At the end of your rehearsal, it provides a report with tips for future attempts. Presenter Coach arrives later this summer.
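
Microsoft hasn’t said how Presenter Coach works under the hood, but the filler-word piece of that feedback is easy to picture: transcribe the rehearsal, then scan for flagged words. A toy Python sketch (the word list and report format are my own assumptions, not Microsoft’s):

```python
import re
from collections import Counter

# Hypothetical filler list; Presenter Coach's internals are not public.
FILLERS = {"um", "umm", "uh", "ahh", "er", "basically", "actually"}

def coach_report(transcript: str) -> dict:
    """Summarize a rehearsal transcript: filler-word counts and rate."""
    words = re.findall(r"[a-z']+", transcript.lower())
    fillers = Counter(w for w in words if w in FILLERS)
    return {
        "total_words": len(words),
        "filler_counts": dict(fillers),
        "filler_rate": round(sum(fillers.values()) / max(len(words), 1), 3),
    }

print(coach_report("Umm, so this quarter we, uh, basically doubled revenue."))
# {'total_words': 9, 'filler_counts': {'umm': 1, 'uh': 1, 'basically': 1}, 'filler_rate': 0.333}
```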

 Also see:

Microsoft is building a virtual assistant for work. Google is building one for everything else — from qz.com by Dave Gershgorn

Excerpts:

In the early days of virtual personal assistants, the goal was to create a multipurpose digital buddy—always there, ready to take on any task. Now, tech companies are realizing that doing it all is too much, and instead doubling down on what they know best.

Since the company has a deep understanding of how organizations work, Microsoft is focusing on managing your workday with voice, rearranging meetings and turning the dials on the behemoth of bureaucracy in concert with your phone.

Voice is the next major platform, and being first to it is an opportunity to make the category as popular as Apple made touchscreens. To dominate even one aspect of voice technology is to tap into the next iteration of how humans use computers.

From DSC:
What affordances might these developments provide for our future learning spaces?

Will faculty members’ voices be recognized to:

  • Sign onto the LMS?
  • Dim the lights?
  • Turn on the projector(s) and/or display(s)?
  • Other?

Will students be able to send the contents of their mobile devices to particular displays via their voices?

Will voice be mixed in with augmented reality (i.e., the students and their devices can “see” which device to send their content to)?

Hmmm…time will tell.
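
To make those questions concrete, the plumbing behind such a room could be little more than a speech recognizer feeding an intent router. A toy sketch of that routing step; every phrase, device name, and stub function here is hypothetical, standing in for real lighting/AV/LMS integrations:

```python
# Toy voice-driven classroom router. The stubs below stand in for
# whatever APIs a real lighting, display, or LMS integration exposes.

def dim_lights(level: int) -> str:
    return f"Lights dimmed to {level}%"        # stub for a lighting API

def power_on(device: str) -> str:
    return f"{device} powered on"              # stub for an AV control API

def sign_on_lms(user: str) -> str:
    return f"{user} signed onto the LMS"       # stub for voice-authenticated sign-on

INTENTS = [
    ("dim the lights", lambda user: dim_lights(30)),
    ("turn on the projector", lambda user: power_on("Projector")),
    ("sign me in", lambda user: sign_on_lms(user)),
]

def handle_utterance(text: str, speaker: str) -> str:
    """Route a recognized phrase (with an identified speaker) to an action."""
    lowered = text.lower()
    for phrase, action in INTENTS:
        if phrase in lowered:
            return action(speaker)
    return "Unrecognized command"

print(handle_utterance("Please dim the lights", speaker="Prof. Smith"))
print(handle_utterance("Sign me in", speaker="Prof. Smith"))
```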

Blockchain: The move from freedom to the rigid, dominant system in learning — from oeb.global by Inge de Waard
In this post, Inge de Waard gives an overview of current Blockchain options from industry, looks at their impact on universities, and philosophises on Blockchain’s future.

Excerpt:

I mentioned a couple of Blockchain certification options already, but an even more advanced example of blockchain in learning has entered my radar too. It is a Russian implementation called Disciplina. This platform combines education (including vocational training), recruiting (comparable with what LinkedIn is doing with its economic graph) and careers for professionals. All of this is combined into a blockchain solution that keeps track of each learner’s journey. The platform includes not only online courses as we know them but also coaching. After each training, you get a certificate.

TeachMePlease, which is a partner of Disciplina, enables teachers and students to find each other for specific professional training as well as curriculum-related children’s schooling. Admittedly, these initiatives are still being rolled out in terms of courses, but it clearly shows where the next learning will be located: in an umbrella above all the universities and professional academies. At present, the university courses are being embedded into course offerings by corporations that roll out a layer post-university, or post-vocational schooling.

Europe embraces blockchain, as can be seen with the EU Blockchain Observatory and Forum. And in a more national action, Malta is storing its certifications on a blockchain nationwide as well. We cannot deny that blockchain is getting picked up by both companies and governments. Universities have been piloting several blockchain certification options, and they also harbour some of the leading voices in the debate on blockchain certification.
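
The property these certification platforms lean on is a tamper-evident chain of records: each entry embeds a hash of its predecessor, so altering any past credential becomes detectable. A toy Python sketch of that hash-linking (the fields and data are invented for illustration; a production system such as Disciplina adds signatures, consensus, and distribution):

```python
import hashlib, json, time

def make_record(prev_hash: str, payload: dict) -> dict:
    """Append-only credential record, linked to its predecessor by hash."""
    body = {"prev_hash": prev_hash, "timestamp": time.time(), "payload": payload}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def verify_chain(chain: list) -> bool:
    """Recompute every hash; any edited record breaks verification."""
    for i, rec in enumerate(chain):
        body = {k: v for k, v in rec.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["hash"] != expected:
            return False
        if i > 0 and rec["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_record("0" * 64, {"learner": "A. Student", "course": "Intro course", "result": "pass"})]
chain.append(make_record(chain[-1]["hash"], {"learner": "A. Student", "course": "Coaching module"}))
print(verify_chain(chain))               # True
chain[0]["payload"]["result"] = "fail"   # tamper with an old certificate
print(verify_chain(chain))               # False: tampering detected
```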

Also see:

AI in education — April 2019, by Inge de Waard

Future proof learning — the Skills 3.0 project

Also see:

  • 7 blockchain mistakes and how to avoid them — from computerworld.com by Lucas Mearian
    The blockchain industry is still something of a wild west, with many cloud service offerings and a large universe of platforms that can vary greatly in their capabilities. So enterprises should beware jumping to conclusions about the technology.

10 things we should all demand from Big Tech right now — from vox.com by Sigal Samuel
We need an algorithmic bill of rights. AI experts helped us write one.

Excerpts:

  1. Transparency: We have the right to know when an algorithm is making a decision about us, which factors are being considered by the algorithm, and how those factors are being weighted.
  2. Explanation: We have the right to be given explanations about how algorithms affect us in a specific situation, and these explanations should be clear enough that the average person will be able to understand them.
  3. Consent: We have the right to give or refuse consent for any AI application that has a material impact on our lives or uses sensitive data, such as biometric data.
  4. Freedom from bias: We have the right to evidence showing that algorithms have been tested for bias related to race, gender, and other protected characteristics — before they’re rolled out. The algorithms must meet standards of fairness and nondiscrimination and ensure just outcomes. (Inserted comment from DSC: Is this even possible? I hope so, but I have my doubts, especially given the enormous lack of diversity within the large tech companies. A sketch of one such bias test appears after this list.)
  5. Feedback mechanism: We have the right to exert some degree of control over the way algorithms work.
  6. Portability: We have the right to easily transfer all our data from one provider to another.
  7. Redress: We have the right to seek redress if we believe an algorithmic system has unfairly penalized or harmed us.
  8. Algorithmic literacy: We have the right to free educational resources about algorithmic systems.
  9. Independent oversight: We have the right to expect that an independent oversight body will be appointed to conduct retrospective reviews of algorithmic systems gone wrong. The results of these investigations should be made public.
  10. Federal and global governance: We have the right to robust federal and global governance structures with human rights at their center. Algorithmic systems don’t stop at national borders, and they are increasingly used to decide who gets to cross borders, making international governance crucial.
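
Point 4 above raises the question of what “tested for bias” might mean in practice. One common, admittedly crude, check is demographic parity: compare the algorithm’s positive-decision rate across groups and flag large gaps. A minimal Python sketch with invented data:

```python
# Demographic parity check: compare positive-decision rates across groups.
# Data and threshold are invented; real audits use many metrics and
# far larger samples.

def selection_rates(decisions, groups):
    """Positive-decision rate for each group."""
    rates = {}
    for g in set(groups):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return rates

def parity_gap(decisions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

decisions = [1, 0, 1, 1, 0, 1, 0, 0]        # 1 = approved, 0 = denied
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(selection_rates(decisions, groups))   # {'a': 0.75, 'b': 0.25}
print(parity_gap(decisions, groups))        # 0.5 -> would fail a 0.1 threshold
```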

This raises the question: Who should be tasked with enforcing these norms? Government regulators? The tech companies themselves?

To attract talent, corporations turn to MOOCs — from edsurge.com by Wade Tyler Millward

Excerpt:

When executives at tech giants Salesforce and Microsoft decided in fall 2017 to turn to an online education platform to help train potential users of their products, they turned to Pierre Dubuc and his team.

Two years later, Dubuc’s company, OpenClassrooms, has closed deals with both of them. Salesforce has worked with OpenClassrooms to create and offer a developer-training course to help people learn how to use the Salesforce platform. In a similar vein, Microsoft will use the OpenClassrooms platform for a six-month course in artificial intelligence. If students complete the AI program, they are guaranteed a job within six months or get their money back. They also earn a master’s-level diploma accredited in Europe.

San Francisco becomes first city to bar police from using facial recognition — from cnet.com by Laura Hautala
It won’t be the last city to consider a similar law.

Excerpt:

The city of San Francisco approved an ordinance on Tuesday [5/14/19] barring the police department and other city agencies from using facial recognition technology on residents. It’s the first such ban of the technology in the country.

The ordinance, which passed by a vote of 8 to 1, also creates a process for the police department to disclose what surveillance technology they use, such as license plate readers and cell-site simulators that can track residents’ movements over time. But it singles out facial recognition as too harmful to residents’ civil liberties to even consider using.

“Facial surveillance technology is a huge legal and civil liberties risk now due to its significant error rate, and it will be worse when it becomes perfectly accurate mass surveillance tracking us as we move about our daily lives,” said Brian Hofer, the executive director of privacy advocacy group Secure Justice.

For example, Microsoft asked the federal government in July to regulate facial recognition technology before it gets more widespread, and said it declined to sell the technology to law enforcement. As it is, the technology is on track to become pervasive in airports and shopping centers, and other tech companies like Amazon are selling it to police departments.

Also see:

Microsoft debuts Ideas in Word, a grammar and style suggestions tool powered by AI — from venturebeat.com by Kyle Wiggers; with thanks to Mr. Jack Du Mez for his posting on this over on LinkedIn

Excerpt:

The first day of Microsoft’s Build developer conference is typically chock-full of news, and this year was no exception. During a keynote headlined by CEO Satya Nadella, the Seattle company took the wraps off a slew of updates to Microsoft 365, its lineup of productivity-focused, cloud-hosted software and subscription services. Among the highlights were a new AI-powered grammar and style checker in Word Online, dubbed Ideas in Word, and dynamic email messages in Outlook Mobile.

Ideas in Word builds on Editor, an AI-powered proofreader for Office 365 that was announced in July 2016 and replaced the Spelling & Grammar pane in Office 2016 later that year. Ideas in Word similarly taps natural language processing and machine learning to deliver intelligent, contextually aware suggestions that could improve a document’s readability. For instance, it’ll recommend ways to make phrases more concise, clear, and inclusive, and when it comes across a particularly tricky snippet, it’ll put forward synonyms and alternative phrasings.
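
Microsoft hasn’t published the models behind Ideas in Word, but the “more concise” suggestions can be imitated crudely with a lookup table of wordy constructions (a real system learns such patterns from data rather than hard-coding them). A toy Python sketch; the phrase table is my own, not Microsoft’s:

```python
import re

# Hypothetical rewrite table; a real system learns such patterns from data.
WORDY = {
    "in order to": "to",
    "at this point in time": "now",
    "due to the fact that": "because",
    "in the event that": "if",
}

def suggest_concise(text: str):
    """Yield (original, suggestion) pairs for wordy phrases found in text."""
    for phrase, shorter in WORDY.items():
        for match in re.finditer(re.escape(phrase), text, re.IGNORECASE):
            yield match.group(0), shorter

doc = "In order to proceed, and due to the fact that time is short, decide now."
for original, suggestion in suggest_concise(doc):
    print(f"Consider replacing '{original}' with '{suggestion}'")
```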
