Per Jane Hart on LinkedIn:

Top 200 Tools for Learning 2019 is now published, together with:

PLUS analysis of how these tools are being used in different contexts, new graphics, and updated comments on the tools’ pages that show how people are using them.

 

 

 

Someone is always listening — from Future Today Institute

Excerpt:

Very Near-Futures Scenarios (2020 – 2022):

  • Optimistic: Big tech and consumer device industries agree to a single set of standards to inform people when they are being listened to. Devices now emit an audible ping and/or a visible light anytime they are actively recording sound. While they need to store data in order to improve natural language understanding and other important AI systems, consumers now have access to a portal and can see, listen to, and erase their data at any time. In addition, consumers can choose to opt out of storing their data to help improve AI systems.
  • Pragmatic: Big tech and consumer device industries preserve the status quo, which leads to more cases of machine eavesdropping and erodes public trust. Federal agencies open investigations into eavesdropping practices, which leads to a drop in share prices and a concern that more advanced biometric technologies could face debilitating regulation.
  • Catastrophic: Big tech and consumer device industries collect and store our conversations surreptitiously while developing new ways to monetize that data. They anonymize and sell it to developers wanting to create their own voice apps or to research institutions wanting to do studies using real-world conversation. Some platforms develop lucrative fee structures allowing others access to our voice data: business intelligence firms, market research agencies, polling agencies, political parties and individual law enforcement organizations. Consumers have little to no ability to see and understand how their voice data are being used and by whom. Opting out of collection systems is intentionally opaque. Trust erodes. Civil unrest grows.

Action Meter:

 

Watchlist:

  • Google; Apple; Amazon; Microsoft; Salesforce; BioCatch; CrossMatch; ThreatMetrix; Electronic Frontier Foundation; World Privacy Forum; American Civil Liberties Union; IBM; Baidu; Tencent; Alibaba; Facebook; European Union; government agencies worldwide.

 

 

Microsoft President: Democracy Is At Stake. Regulate Big Tech — from npr.org by Aarti Shahani

Excerpts:

Regulate us. That’s the unexpected message from one of the country’s leading tech executives. Microsoft President Brad Smith argues that governments need to put some “guardrails” around engineers and the tech titans they serve.

If public leaders don’t, he says, the Internet giants will cannibalize the very fabric of this country.

“We need to work together; we need to work with governments to protect, frankly, something that is far more important than technology: democracy. It was here before us. It needs to be here and healthy after us,” Smith says.

“Almost no technology has gone so entirely unregulated, for so long, as digital technology,” Smith says.

 

Uh-oh: Silicon Valley is building a Chinese-style social credit system — from fastcompany.com by Mike Elgan
In China, scoring citizens’ behavior is official government policy. U.S. companies are increasingly doing something similar, outside the law.

Excerpts (emphasis DSC):

Have you heard about China’s social credit system? It’s a technology-enabled, surveillance-based nationwide program designed to nudge citizens toward better behavior. The ultimate goal is to “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step,” according to the Chinese government.

In place since 2014, the social credit system is a work in progress that could evolve by next year into a single, nationwide point system for all Chinese citizens, akin to a financial credit score. It aims to punish for transgressions that can include membership in or support for the Falun Gong or Tibetan Buddhism, failure to pay debts, excessive video gaming, criticizing the government, late payments, failing to sweep the sidewalk in front of your store or house, smoking or playing loud music on trains, jaywalking, and other actions deemed illegal or unacceptable by the Chinese government.

IT CAN HAPPEN HERE
Many Westerners are disturbed by what they read about China’s social credit system. But such systems, it turns out, are not unique to China. A parallel system is developing in the United States, in part as the result of Silicon Valley and technology-industry user policies, and in part by surveillance of social media activity by private companies.

Here are some of the elements of America’s growing social credit system.

 

If current trends hold, it’s possible that in the future a majority of misdemeanors and even some felonies will be punished not by Washington, D.C., but by Silicon Valley. It’s a slippery slope away from democracy and toward corporatocracy.

 

From DSC:
Who’s to say what gains a citizen points and what subtracts from their score? If one believes a certain thing, is that a plus or a minus? And what might be tied to someone’s score? The ability to obtain food? Medicine/healthcare? Clothing? Social Security payments? Other?

We are giving a huge amount of power to a handful of corporations…trust comes into play…at least for me. Even internally, the big tech co’s seem to be struggling with the ethical ramifications of what they’re working on (in a variety of areas).

Is the stage being set for a “Person of Interest” Version 2.0?

 

Google brings AI to studying with Socratic — from zdnet.com by Stephanie Condon
Ahead of the new school year, Google is re-launching a mobile learning app it acquired last year.

Excerpt:

Google this week started rolling out a revamped version of a mobile learning app, called Socratic, that the tech giant acquired last year. The updated app, with new machine learning-powered features, coincides with the start of the school year, as well as other Google for Education initiatives.

Socratic aims to help both high school and university students in their studies outside of the classroom. If students need help answering a study question, they can now use the Socratic app to ask a question with their voice, or to take a picture of a question in their study materials. The app will then find relevant material from across the web.

 

Also see:

  • The School of Tomorrow Will Revolve Around AI — from datafloq.com
    Excerpt:
    We live in exponential times, and merely having a digital strategy focused on continuous innovation is no longer enough to thrive in a constantly changing world. To transform an organisation and contribute to building a secure and rewarding networked society, collaboration among employees, customers, business units and even things is increasingly becoming key. Especially with the availability of new technologies such as artificial intelligence, organisations now, more than ever before, need to focus on bringing together the different stakeholders to co-create the future. Big data empowers customers and employees, the Internet of Things will create vast amounts of data and connects all devices, while artificial intelligence creates new human-machine interactions. In today’s world, every organisation is a data organisation, and AI is required to make sense of it all.

Addendum on 8/23/19

 

6 basic YouTube tips everyone should know — from hongkiat.com by Kelvon Yeezy

Example tips:

1. Share video starting at a specific point. <– A brief insert from DSC: This is especially helpful to teachers, trainers, and professors
If you want to share a YouTube video in a way that it starts from a certain point, you can do so in a couple of simple steps.

Just pause the video at the point from where you want the other user to start watching it and right-click on the video screen. A menu will appear from which you can choose Copy video URL at the current time. The copied link will open the video starting from that specific time.
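Under the hood, that “Copy video URL at current time” command just appends a time offset to the video’s short link. A minimal sketch in Python (the video ID below is a made-up placeholder):

```python
def share_at_time(video_id: str, minutes: int = 0, seconds: int = 0) -> str:
    """Build a YouTube share link that starts playback at a given offset.

    YouTube's "Copy video URL at current time" produces a link of the form
    https://youtu.be/<video-id>?t=<offset-in-seconds>.
    """
    offset = minutes * 60 + seconds
    return f"https://youtu.be/{video_id}?t={offset}"

# Hypothetical video ID, starting playback at 1:30:
print(share_at_time("abc123", minutes=1, seconds=30))
```

So teachers can also hand-edit the `?t=` parameter on an existing link rather than re-copying it from the player.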

 

 

4. More accurate video search
There are millions of videos on YouTube, so trying to find the specific video you want to watch is an adventure in itself. In this quest, you might find yourself crawling through dozens of pages of results hoping to find the video you actually want.

If you don’t want to go through all this hassle, simply add allintitle: before your search keywords. This restricts the results to videos whose titles contain all of the chosen keywords.
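For the curious, the same operator can be baked into a search URL programmatically; a small sketch (the keywords are arbitrary examples):

```python
from urllib.parse import quote_plus

def title_search_url(*keywords: str) -> str:
    """Build a YouTube search URL that uses the allintitle: operator,
    restricting results to videos whose titles contain all keywords."""
    query = "allintitle: " + " ".join(keywords)
    return "https://www.youtube.com/results?search_query=" + quote_plus(query)

print(title_search_url("python", "tutorial"))
```

Pasting the resulting URL into a browser performs the filtered search directly, which is handy for sharing a pre-filtered search with students.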

 

 

 

AI is in danger of becoming too male — new research — from singularityhub.com by Juan Mateos-Garcia and Joysy John

Excerpts (emphasis DSC):

But current AI systems are far from perfect. They tend to reflect the biases of the data used to train them and to break down when they face unexpected situations.

So do we really want to turn these bias-prone, brittle technologies into the foundation stones of tomorrow’s economy?

One way to minimize AI risks is to increase the diversity of the teams involved in their development. As research on collective decision-making and creativity suggests, groups that are more cognitively diverse tend to make better decisions. Unfortunately, this is a far cry from the situation in the community currently developing AI systems. And a lack of gender diversity is one important (although not the only) dimension of this.

A review published by the AI Now Institute earlier this year showed that less than 20 percent of the researchers applying to prestigious AI conferences are women, and that only a quarter of undergraduates studying AI at Stanford and the University of California at Berkeley are female.

 


From DSC:
My niece just left a very lucrative programming job and managerial role at Microsoft after working there for several years. As a single woman, she got tired of fighting the culture there. 

It was again a reminder to me that there are significant ramifications to the cultures of the big tech companies…especially given the power of these emerging technologies and the growing influence they are having on our culture.


Addendum on 8/20/19:

  • Google’s Hate Speech Detection A.I. Has a Racial Bias Problem — from fortune.com by Jonathan Vanian
    Excerpt:
    A Google-created tool that uses artificial intelligence to police hate speech in online comments on sites like the New York Times has become racially biased, according to a new study. The tool, developed by Google and a subsidiary of its parent company, often classified comments written in the African-American vernacular as toxic, researchers from the University of Washington, Carnegie Mellon, and the Allen Institute for Artificial Intelligence said in a paper presented in early August at the Association for Computational Linguistics conference in Florence, Italy.
  • On the positive side of things:
    Number of Female Students, Students of Color Tackling Computer Science AP on the Rise — from thejournal.com
 

Take a tour of Google Earth with speakers of 50 different indigenous languages — from fastcompany.com by Melissa Locker

Excerpt:

After the United Nations declared 2019 the International Year of Indigenous Languages, Google decided to help draw attention to the indigenous languages spoken around the globe and perhaps help preserve some of the endangered ones too. To that end, the company recently launched its first audio-driven collection, a new Google Earth tour complete with audio recordings from more than 50 indigenous language speakers from around the world.

 

 

 
 

Penn State World Campus Taps Google Cloud to Build Virtual Advising Assistant — from campustechnology.com by Rhea Kelly

Excerpt:

At the start of the spring 2020 semester this January, Penn State World Campus will have a new artificial intelligence tool for answering the most common requests from its undergraduate students. A virtual assistant will help academic advisers at the online institution screen student e-mails for certain keywords and phrases, and then automatically pull relevant information for the advisers to send to students. For instance, the AI will be trained to assist advisers when students inquire how to change their major, change their Penn State campus, re-enroll in the university or defer their semester enrollment date, according to a news announcement.
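The keyword-and-phrase screening step described above can be sketched in a few lines. To be clear, the intent categories and trigger phrases below are invented for illustration; Penn State’s actual configuration is not public:

```python
# Toy sketch of keyword-based triage of student emails.
# Intent labels and trigger phrases are hypothetical examples only.
INTENTS = {
    "change_major": ["change my major", "switch majors"],
    "re_enroll": ["re-enroll", "reenroll", "return to the university"],
    "defer": ["defer", "postpone my enrollment"],
}

def classify(email_text: str) -> list[str]:
    """Return the intent labels whose trigger phrases appear in the email."""
    text = email_text.lower()
    return [intent for intent, phrases in INTENTS.items()
            if any(phrase in text for phrase in phrases)]

print(classify("Hi, I would like to change my major to statistics."))
```

A production system would add machine-learned intent classification on top, but even this crude matching shows how an adviser’s inbox could be pre-sorted so that relevant boilerplate answers are queued up automatically.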

 

Reflections on “Clay Shirky on Mega-Universities and Scale” [Christian]

Clay Shirky on Mega-Universities and Scale — from philonedtech.com by Clay Shirky
[This was a guest post by Clay Shirky that grew out of a conversation that Clay and Phil had about IPEDS enrollment data. Most of the graphs are provided by Phil.]

Excerpts:

Were half a dozen institutions to dominate the online learning landscape with no end to their expansion, or shift what Americans seek in a college degree, that would indeed be one of the greatest transformations in the history of American higher education. The available data, however, casts doubt on that idea.

Though much of the conversation around mega-universities is speculative, we already know what a mega-university actually looks like, one much larger than any university today. It looks like the University of Phoenix, or rather it looked like Phoenix at the beginning of this decade, when it had 470,000 students, the majority of whom took some or all of their classes online. Phoenix back then was six times the size of the next-largest school, Kaplan, with 78,000 students, and nearly five times the size of any university operating today.

From that high-water mark, Phoenix has lost an average of 40,000 students every year of this decade.

 

From DSC:
First of all, I greatly appreciate both Clay’s and Phil’s thought leadership and their respective contributions to education and learning through the years. I value their perspectives and their work.  Clay and Phil offer up a great article here — one worth your time to read.  

The article made me reflect on what I’ve been building upon and tracking for the last decade — a next generation ***PLATFORM*** that I believe will represent a powerful piece of a global learning ecosystem. I call this vision “Learning from the Living [Class] Room.” Though the artificial intelligence-backed platform that I’m envisioning doesn’t yet fully exist, this new era and type of learning-based platform ARE coming. The emerging signs, technologies, trends — and “fingerprints” of it, if you will — are beginning to develop all over the place.

Such a platform will:

  • Be aimed at the lifelong learner.
  • Offer up major opportunities to stay relevant and up-to-date with one’s skills.
  • Offer access to the program offerings from many organizations — including the mega-universities, but also, from many other organizations that are not nearly as large as the mega-universities.
  • Be reliant upon human teachers, professors, trainers, subject matter experts, but will be backed up by powerful AI-based technologies/tools. For example, AI-based tools will pulse-check the open job descriptions and the needs of business and present the top ___ areas to go into (how long those areas/jobs last is anyone’s guess, given the exponential pace of technological change).

Below are some quotes that I want to comment on:

Not nothing, but not the kind of environment that will produce an educational Amazon either, especially since the top 30 actually shrank by 0.2% a year.

 

Instead of an “Amazon vs. the rest” dynamic, online education is turning into something much more widely adopted, where the biggest schools are simply the upper end of a continuum, not so different from their competitors, and not worth treating as members of a separate category.

 

Since the founding of William and Mary, the country’s second college, higher education in the U.S. hasn’t been a winner-take-all market, and it isn’t one today. We are not entering a world where the largest university operates at outsized scale, we’re leaving that world; 

 

From DSC:
I don’t see us leaving that world at all…but that’s not my main reflection here. Instead, I’m not focusing on how large the mega-universities will become. When I speak of a forthcoming Walmart of Education or Amazon of Education, what I have in mind is a platform…not one particular organization.

Consider that the vast majority of Amazon’s revenues come from products that other organizations produce. They are a platform, if you will. And in the world of platforms (i.e., software), it IS a winner-take-all market.

Bill Gates reflects on this as well in this recent article from The Verge:

“In the software world, particularly for platforms, these are winner-take-all markets.”

So it’s all about a forthcoming platform — or platforms. (It could be more than one platform. Consider Apple. Consider Microsoft. Consider Google. Consider Facebook.)

But then the question becomes…would a large number of universities (and other types of organizations) be willing to offer up their courses on a platform? Well, consider what’s ALREADY happening with FutureLearn.

Finally…one more excerpt from Clay’s article:

Eventually the new ideas lose their power to shock, and end up being widely copied. Institutional transformation starts as heresy and ends as a section in the faculty handbook. 

From DSC:
This is a great point. Reminds me of this tweet from Fred Steube (and I added a piece about Western Telegraph):

 

Some things to reflect upon…for sure.

 

 

 Also see:

Microsoft is building a virtual assistant for work. Google is building one for everything else — from qz.com by Dave Gershgorn

Excerpts:

In the early days of virtual personal assistants, the goal was to create a multipurpose digital buddy—always there, ready to take on any task. Now, tech companies are realizing that doing it all is too much, and instead doubling down on what they know best.

Since the company has a deep understanding of how organizations work, Microsoft is focusing on managing your workday with voice, rearranging meetings and turning the dials on the behemoth of bureaucracy in concert with your phone.

 

Voice is the next major platform, and being first to it is an opportunity to make the category as popular as Apple made touchscreens. To dominate even one aspect of voice technology is to tap into the next iteration of how humans use computers.

 

 

From DSC:
What affordances might these developments provide for our future learning spaces?

Will faculty members’ voices be recognized to:

  • Sign onto the LMS?
  • Dim the lights?
  • Turn on the projector(s) and/or display(s)?
  • Other?

Will students be able to send the contents of their mobile devices to particular displays via their voices?

Will voice be mixed in with augmented reality (i.e., the students and their devices can “see” which device to send their content to)?

Hmmm…time will tell.

 

 

10 important Google Drive tips for teachers and educators — from educatorstechnology.com

Excerpt:

As the creator and owner of a folder, you have different sharing settings at hand. You can share your folders with specific people via email and allow them either to view the folder or to view and edit it. Drive also enables you to prevent collaborators from changing access and adding new people by simply checking the box next to ‘prevent editors from changing access and adding new people’. Here is how to access the sharing options of your folders…
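For anyone scripting these settings rather than clicking through the UI, the same options map onto Google Drive’s v3 API. A minimal sketch that only builds the request bodies — the authenticated `service` object from google-api-python-client is assumed and omitted here, and `FOLDER_ID` is a placeholder:

```python
def viewer_permission(email: str) -> dict:
    """Permission body granting view-only access to one person (Drive v3)."""
    return {"type": "user", "role": "reader", "emailAddress": email}

def lock_down_sharing() -> dict:
    """File metadata corresponding to the 'prevent editors from changing
    access and adding new people' checkbox (Drive v3 writersCanShare)."""
    return {"writersCanShare": False}

# With an authenticated Drive service, these would be used roughly as:
#   service.permissions().create(fileId=FOLDER_ID,
#                                body=viewer_permission("a@b.c")).execute()
#   service.files().update(fileId=FOLDER_ID,
#                          body=lock_down_sharing()).execute()
print(viewer_permission("colleague@example.com"))
```

Swapping `"reader"` for `"writer"` in the permission body gives the view-and-edit option the excerpt describes.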

 

10 things we should all demand from Big Tech right now — from vox.com by Sigal Samuel
We need an algorithmic bill of rights. AI experts helped us write one.


Excerpts:

  1. Transparency: We have the right to know when an algorithm is making a decision about us, which factors are being considered by the algorithm, and how those factors are being weighted.
  2. Explanation: We have the right to be given explanations about how algorithms affect us in a specific situation, and these explanations should be clear enough that the average person will be able to understand them.
  3. Consent: We have the right to give or refuse consent for any AI application that has a material impact on our lives or uses sensitive data, such as biometric data.
  4. Freedom from bias: We have the right to evidence showing that algorithms have been tested for bias related to race, gender, and other protected characteristics — before they’re rolled out. The algorithms must meet standards of fairness and nondiscrimination and ensure just outcomes. (Inserted comment from DSC: Is this even possible? I hope so, but I have my doubts, especially given the enormous lack of diversity within the large tech companies.)
  5. Feedback mechanism: We have the right to exert some degree of control over the way algorithms work.
  6. Portability: We have the right to easily transfer all our data from one provider to another.
  7. Redress: We have the right to seek redress if we believe an algorithmic system has unfairly penalized or harmed us.
  8. Algorithmic literacy: We have the right to free educational resources about algorithmic systems.
  9. Independent oversight: We have the right to expect that an independent oversight body will be appointed to conduct retrospective reviews of algorithmic systems gone wrong. The results of these investigations should be made public.
  10. Federal and global governance: We have the right to robust federal and global governance structures with human rights at their center. Algorithmic systems don’t stop at national borders, and they are increasingly used to decide who gets to cross borders, making international governance crucial.

 

This raises the question: Who should be tasked with enforcing these norms? Government regulators? The tech companies themselves?

 

 


© 2019 | Daniel Christian