Remote collaboration and virtual conferences, the future of work — from forbes.com by Charlie Fink

Excerpts:

Ten weeks ago, Jesse Damiani, writing on Forbes.com, told the story of a college professor who turned his course about XR into a research project about remote collaboration and virtual conferences.

He and his students reimagined the course as an eight-week research sprint exploring how XR tools will contribute to the future of remote work—and the final product will be a book, tentatively titled “Remote Collaboration & Virtual Conferences: The End of Distance and the Future of Work.”

This is a chapter of that book, which will be available on June 15.

The thing everyone wants is not a technology; it’s engagement. The same kind of engagement that you would have in real life, but better, faster, cheaper *and safer* than it was before.


From DSC:
I saw the piece below from Graham Brown-Martin’s solid, thought-provoking posting entitled “University as a Service (UaaS)” out at medium.com. My question is: What happens if Professor Scott Galloway is right?!

Excerpt:

Prof Scott Galloway predicts lucrative future partnerships between the FAANG mega-corporations and major higher education brands emerging as a result of current disruptions. Galloway wonders what a partnership between MIT and Apple would look like.

 

The education conveyor belt of the last century, which went from school to university to work and a job for life, just doesn’t work in an era of rapid transformation. Suppose we truly embrace the notion of continuous or lifelong learning and apply it to the university model. It wouldn’t just stop in your twenties, would it?

University as a Service (UaaS), where higher education course and degree modules are unbundled and accessed via a monthly subscription, could be a landing spot for the future of higher education and lifelong learners. 

 


Below are some other items regarding the future of higher education.


Also relevant/see:

https://info.destinysolutions.com/lp-updating-the-higher-education-playbook-to-stay-relevant-in-2020


Also relevant/see:

  • Fast Forward: Looking to the Future Workforce and Online Learning — from evolllution.com by Joann Kozyrev (VP Design and Development, Western Governors University) and Amrit Ahluwalia
    Excerpt:
    With employers and students looking to close the gap in workforce skills, it’s critical for them to know which skills are most in demand. Postsecondary institutions need to be the resource that provides learners with the education the workforce needs, and to help both parties understand the value of the students’ education. The shift to remote and online delivery is new territory for institutions to handle. In this interview, Joann Kozyrev discusses the impact remote learning has on an online institution, concerns about the future of online learning, and how to get people back into the workforce quickly and efficiently.

 

 

How to block Facebook and Google from identifying your face — from cnbc.com by Todd Haselton

Excerpt:

  • A New York Times report over the weekend discussed a company named Clearview AI that can easily recognize people’s faces when someone uploads a picture.
  • Clearview AI scrapes this data from the internet and sites people commonly use, such as Facebook and YouTube, according to the report.
  • You can stop Facebook and Google from recognizing your face in their systems, which is one step toward regaining your privacy.
  • Still, it probably won’t entirely stop companies like Clearview AI from recognizing you, since they’re not using the systems developed by Google or Facebook.
 

From DSC:
I’ll say it again: just because we can doesn’t mean we should.

From the article below, we can see another unintended consequence developing on society’s landscape. I really wish the 20- and 30-somethings being hired by the big tech companies — especially at Amazon, Facebook, Google, Apple, and Microsoft — who are developing these things would ask themselves:

  • “Just because we can develop this system/software/application/etc., SHOULD we be developing it?”
  • What might the negative consequences be? 
  • Do the positive contributions outweigh the negative impacts…or not?

To college professors and teachers:
Please pass these thoughts on to your students now, so that this internal questioning and these conversations begin to take place in K-16.


Report: Colleges Must Teach ‘Algorithm Literacy’ to Help Students Navigate Internet — from edsurge.com by Rebecca Koenig

Excerpt (emphasis DSC):

If the Ancient Mariner were sailing on the internet’s open seas, he might conclude there’s information everywhere, but nary a drop to drink.

That’s how many college students feel, anyway. A new report published this week about undergraduates’ impressions of internet algorithms reveals students are skeptical of and unnerved by tools that track their digital travels and serve them personalized content like advertisements and social media posts.

And some students feel like they’ve largely been left to navigate the internet’s murky waters alone, without adequate guidance from teachers and professors.

Researchers set out to learn “how aware students are about their information being manipulated, gathered and interacted with,” said Alison Head, founder and director of Project Information Literacy, in an interview with EdSurge. “Where does that awareness drop off?”

They found that many students not only have personal concerns about how algorithms compromise their own data privacy but also recognize the broader, possibly negative implications of tools that segment and customize search results and news feeds.

 

30 influential AI presentations from 2019 — from re-work.co

Excerpt:

It feels as though 2019 has gone by in a flash. That said, it has been a year in which we have seen great advancement in AI application methods and technical discovery, paving the way for future development. We are incredibly grateful to have had the leading minds in AI & Deep Learning present their latest work at our summits in San Francisco, Boston, Montreal and more, so we thought we would share thirty of our highlight videos with you, as we think everybody needs to see them. (Some are hosted on our Videohub and some on our YouTube, but all are free to view!)

Example presenters:

  • Dawn Song, Professor, UC Berkeley.
  • Doina Precup, Research Team Lead, DeepMind.
  • Natalie Jakomis, Group Director of Data, goCompare.
  • Ian Goodfellow, Director, Apple.
  • Timnit Gebru, Ethical AI Team, Google.
  • Cathy Pearl, Head of Conversation Design Outreach, Google.
  • Zoya Bylinskii, Research Scientist, Adobe Research.
  • …and many others
 


FTI 2020 Trend Report for Entertainment, Media, & Technology — from futuretodayinstitute.com

Our 3rd annual industry report on emerging entertainment, media and technology trends is now available.

  • 157 trends
  • 28 optimistic, pragmatic and catastrophic scenarios
  • 10 non-technical primers and glossaries
  • Overview of what events to anticipate in 2020
  • Actionable insights to use within your organization

KEY TAKEAWAYS

  • Synthetic media offers new opportunities and challenges.
  • Authenticating content is becoming more difficult.
  • Regulation is coming.
  • We’ve entered the post-fixed screen era.
  • Voice Search Optimization (VSO) is the new Search Engine Optimization (SEO).
  • Digital subscription models aren’t working.
  • Advancements in AI will mean greater efficiencies.

 

 

Google’s war on deepfakes: As election looms, it shares a ton of AI-faked videos — from zdnet.com by Liam Tung
Google has created 3,000 videos using actors and manipulation software to help improve detection.

Excerpt:

Google has released a huge database of deepfake videos that it’s created using paid actors. It hopes the database will bolster systems designed to detect AI-generated fake videos.

With the 2020 US Presidential elections looming, the race is on to build better systems to detect deepfake videos that could be used to manipulate and divide public opinion.

Earlier this month, Facebook and Microsoft announced a $10m project to create deepfake videos to help build systems for detecting them.
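
To make the mechanics a bit more concrete, here is a toy sketch (emphatically not Google’s or Facebook’s actual pipeline) of how a labeled corpus like this typically gets used: each video is summarized as a feature vector, and a binary real-vs-fake classifier is trained on those vectors. The features and data below are random placeholders.

```python
# Toy sketch of deepfake-detection training. NOT Google's or Facebook's
# actual method; the 10-dim "features" stand in for real signals such as
# blending artifacts or color statistics around the face region.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_real = rng.normal(0.0, 1.0, size=(100, 10))  # placeholder real videos
X_fake = rng.normal(0.5, 1.0, size=(100, 10))  # fakes shifted slightly
X = np.vstack([X_real, X_fake])
y = np.array([0] * 100 + [1] * 100)            # 0 = real, 1 = fake

clf = LogisticRegression().fit(X, y)
print("training accuracy:", clf.score(X, y))
```

The value of a database like Google’s is exactly this supervision signal: without a large set of videos known to be fake, there is nothing for such a classifier to learn from.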

 

Someone is always listening — from Future Today Institute

Excerpt:

Very Near-Futures Scenarios (2020 – 2022):

  • Optimistic: Big tech and consumer device industries agree to a single set of standards to inform people when they are being listened to. Devices now emit an audible ping and/or a visible light anytime they are actively recording sound. While they need to store data in order to improve natural language understanding and other important AI systems, consumers now have access to a portal and can see, listen to, and erase their data at any time. In addition, consumers can choose to opt out of storing their data to help improve AI systems.
  • Pragmatic: Big tech and consumer device industries preserve the status quo, which leads to more cases of machine eavesdropping and erodes public trust. Federal agencies open investigations into eavesdropping practices, which leads to a drop in share prices and a concern that more advanced biometric technologies could face debilitating regulation.
  • Catastrophic: Big tech and consumer device industries collect and store our conversations surreptitiously while developing new ways to monetize that data. They anonymize and sell it to developers wanting to create their own voice apps or to research institutions wanting to do studies using real-world conversation. Some platforms develop lucrative fee structures allowing others access to our voice data: business intelligence firms, market research agencies, polling agencies, political parties and individual law enforcement organizations. Consumers have little to no ability to see and understand how their voice data are being used and by whom. Opting out of collection systems is intentionally opaque. Trust erodes. Civil unrest grows.

 

Watchlist:

  • Google; Apple; Amazon; Microsoft; Salesforce; BioCatch; CrossMatch; ThreatMetrix; Electronic Frontier Foundation; World Privacy Forum; American Civil Liberties Union; IBM; Baidu; Tencent; Alibaba; Facebook; European Union; government agencies worldwide.

 

 

Microsoft President: Democracy Is At Stake. Regulate Big Tech — from npr.org by Aarti Shahani

Excerpts:

Regulate us. That’s the unexpected message from one of the country’s leading tech executives. Microsoft President Brad Smith argues that governments need to put some “guardrails” around engineers and the tech titans they serve.

If public leaders don’t, he says, the Internet giants will cannibalize the very fabric of this country.

“We need to work together; we need to work with governments to protect, frankly, something that is far more important than technology: democracy. It was here before us. It needs to be here and healthy after us,” Smith says.

“Almost no technology has gone so entirely unregulated, for so long, as digital technology,” Smith says.

 

Uh-oh: Silicon Valley is building a Chinese-style social credit system — from fastcompany.com by Mike Elgan
In China, scoring citizens’ behavior is official government policy. U.S. companies are increasingly doing something similar, outside the law.

Excerpts (emphasis DSC):

Have you heard about China’s social credit system? It’s a technology-enabled, surveillance-based nationwide program designed to nudge citizens toward better behavior. The ultimate goal is to “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step,” according to the Chinese government.

In place since 2014, the social credit system is a work in progress that could evolve by next year into a single, nationwide point system for all Chinese citizens, akin to a financial credit score. It aims to punish for transgressions that can include membership in or support for the Falun Gong or Tibetan Buddhism, failure to pay debts, excessive video gaming, criticizing the government, late payments, failing to sweep the sidewalk in front of your store or house, smoking or playing loud music on trains, jaywalking, and other actions deemed illegal or unacceptable by the Chinese government.

IT CAN HAPPEN HERE
Many Westerners are disturbed by what they read about China’s social credit system. But such systems, it turns out, are not unique to China. A parallel system is developing in the United States, in part as the result of Silicon Valley and technology-industry user policies, and in part by surveillance of social media activity by private companies.

Here are some of the elements of America’s growing social credit system.

 

If current trends hold, it’s possible that in the future a majority of misdemeanors and even some felonies will be punished not by Washington, D.C., but by Silicon Valley. It’s a slippery slope away from democracy and toward corporatocracy.

 

From DSC:
Who’s to say what gains a citizen points and what subtracts from their score? If one believes a certain thing, is that a plus or a minus? And what might be tied to someone’s score? The ability to obtain food? Medicine/healthcare? Clothing? Social Security payments? Other?

We are giving a huge amount of power to a handful of corporations…trust comes into play…at least for me. Even internally, the big tech companies seem to be struggling with the ethical ramifications of what they’re working on (in a variety of areas).

Is the stage being set for a “Person of Interest” Version 2.0?

 

FTC reportedly hits Facebook with record $5 billion settlement — from wired.com by Issie Lapowsky and Caitlin Kelly

Excerpt:

After months of negotiations, the Federal Trade Commission fined Facebook a record-setting $5 billion on Friday for privacy violations, according to multiple reports. The penalty comes after an investigation that lasted over a year, and marks the largest in the agency’s history by an order of magnitude. If approved by the Justice Department’s civil division, it will also be the first substantive punishment for Facebook in the US, where the tech industry has gone largely unregulated. But Washington has taken a harsher stance toward Silicon Valley lately, and Friday’s announcement marks its most aggressive action yet to curb its privacy overreaches.

 


 

Reflections on “Clay Shirky on Mega-Universities and Scale” [Christian]

Clay Shirky on Mega-Universities and Scale — from philonedtech.com by Clay Shirky
[This was a guest post by Clay Shirky that grew out of a conversation that Clay and Phil had about IPEDS enrollment data. Most of the graphs are provided by Phil.]

Excerpts:

Were half a dozen institutions to dominate the online learning landscape with no end to their expansion, or shift what Americans seek in a college degree, that would indeed be one of the greatest transformations in the history of American higher education. The available data, however, casts doubt on that idea.

Though much of the conversation around mega-universities is speculative, we already know what a mega-university actually looks like, one much larger than any university today. It looks like the University of Phoenix, or rather it looked like Phoenix at the beginning of this decade, when it had 470,000 students, the majority of whom took some or all of their classes online. Phoenix back then was six times the size of the next-largest school, Kaplan, with 78,000 students, and nearly five times the size of any university operating today.

From that high-water mark, Phoenix has lost an average of 40,000 students every year of this decade.

 

From DSC:
First of all, I greatly appreciate both Clay’s and Phil’s thought leadership and their respective contributions to education and learning through the years. I value their perspectives and their work.  Clay and Phil offer up a great article here — one worth your time to read.  

The article made me reflect on what I’ve been building upon and tracking for the last decade — a next generation ***PLATFORM*** that I believe will represent a powerful piece of a global learning ecosystem. I call this vision “Learning from the Living [Class] Room.” Though the artificial intelligence-backed platform that I’m envisioning doesn’t yet fully exist, this new era and type of learning-based platform ARE coming. The emerging signs, technologies, trends — and “fingerprints” of it, if you will — are beginning to develop all over the place.

Such a platform will:

  • Be aimed at the lifelong learner.
  • Offer up major opportunities to stay relevant and up-to-date with one’s skills.
  • Offer access to the program offerings from many organizations — including the mega-universities, but also, from many other organizations that are not nearly as large as the mega-universities.
  • Be reliant upon human teachers, professors, trainers, and subject matter experts, but backed up by powerful AI-based technologies/tools. For example, AI-based tools will pulse-check the open job descriptions and the needs of business and present the top ___ areas to go into (how long those areas/jobs last is anyone’s guess, given the exponential pace of technological change). A rough sketch of such a pulse-check follows this list.
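
For a rough flavor of what such a pulse-check could look like, here is a tiny, hypothetical sketch. The skill list, postings, and function name are invented for illustration; a real system would pull postings from a jobs API and use far richer language analysis.

```python
# Hypothetical sketch: surface the most in-demand skills by counting
# mentions across job postings. All data here is made up.
from collections import Counter

SKILLS = ["python", "sql", "machine learning", "cloud", "data analysis"]

def top_skills(postings: list[str], n: int = 3) -> list[tuple[str, int]]:
    """Count how many postings mention each known skill; return the top n."""
    counts = Counter()
    for text in postings:
        lowered = text.lower()
        for skill in SKILLS:
            if skill in lowered:
                counts[skill] += 1
    return counts.most_common(n)

postings = [
    "Seeking analyst with SQL and data analysis experience.",
    "Machine learning engineer; Python and cloud deployment required.",
    "Data engineer: Python, SQL, cloud pipelines.",
]
print(top_skills(postings))  # [('sql', 2), ('python', 2), ('cloud', 2)]
```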

Below are some quotes that I want to comment on:

Not nothing, but not the kind of environment that will produce an educational Amazon either, especially since the top 30 actually shrank by 0.2% a year.

 

Instead of an “Amazon vs. the rest” dynamic, online education is turning into something much more widely adopted, where the biggest schools are simply the upper end of a continuum, not so different from their competitors, and not worth treating as members of a separate category.

 

Since the founding of William and Mary, the country’s second college, higher education in the U.S. hasn’t been a winner-take-all market, and it isn’t one today. We are not entering a world where the largest university operates at outsized scale, we’re leaving that world; 

 

From DSC:
I don’t see us leaving that world at all…but that’s not my main reflection here. My focus isn’t on how large the mega-universities will become. When I speak of a forthcoming Walmart of Education or Amazon of Education, what I have in mind is a platform…not one particular organization.

Consider that the vast majority of Amazon’s revenues come from products that other organizations produce. They are a platform, if you will. And in the world of platforms (i.e., software), it IS a winner-take-all market.

Bill Gates reflects on this as well in this recent article from The Verge:

“In the software world, particularly for platforms, these are winner-take-all markets.”

So it’s all about a forthcoming platform — or platforms. (It could be more than one platform. Consider Apple. Consider Microsoft. Consider Google. Consider Facebook.)

But then the question becomes…would a large number of universities (and other types of organizations) be willing to offer up their courses on a platform? Well, consider what’s ALREADY happening with FutureLearn:

Finally…one more excerpt from Clay’s article:

Eventually the new ideas lose their power to shock, and end up being widely copied. Institutional transformation starts as heresy and ends as a section in the faculty handbook. 

From DSC:
This is a great point. Reminds me of this tweet from Fred Steube (and I added a piece about Western Telegraph):

 

Some things to reflect upon…for sure.

 

10 things we should all demand from Big Tech right now — from vox.com by Sigal Samuel
We need an algorithmic bill of rights. AI experts helped us write one.


Excerpts:

  1. Transparency: We have the right to know when an algorithm is making a decision about us, which factors are being considered by the algorithm, and how those factors are being weighted.
  2. Explanation: We have the right to be given explanations about how algorithms affect us in a specific situation, and these explanations should be clear enough that the average person will be able to understand them.
  3. Consent: We have the right to give or refuse consent for any AI application that has a material impact on our lives or uses sensitive data, such as biometric data.
  4. Freedom from bias: We have the right to evidence showing that algorithms have been tested for bias related to race, gender, and other protected characteristics — before they’re rolled out. The algorithms must meet standards of fairness and nondiscrimination and ensure just outcomes. (Inserted comment from DSC: Is this even possible? I hope so, but I have my doubts, especially given the enormous lack of diversity within the large tech companies. See the sketch of one such test after this list.)
  5. Feedback mechanism: We have the right to exert some degree of control over the way algorithms work.
  6. Portability: We have the right to easily transfer all our data from one provider to another.
  7. Redress: We have the right to seek redress if we believe an algorithmic system has unfairly penalized or harmed us.
  8. Algorithmic literacy: We have the right to free educational resources about algorithmic systems.
  9. Independent oversight: We have the right to expect that an independent oversight body will be appointed to conduct retrospective reviews of algorithmic systems gone wrong. The results of these investigations should be made public.
  10. Federal and global governance: We have the right to robust federal and global governance structures with human rights at their center. Algorithmic systems don’t stop at national borders, and they are increasingly used to decide who gets to cross borders, making international governance crucial.
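
Regarding point 4 above, here is a minimal sketch of one very simple bias check, demographic parity: whether a model’s positive-outcome rate differs across groups. This is illustration only; the data, threshold, and function name are invented, and a real fairness audit involves far more than a single metric.

```python
# Minimal demographic-parity check (illustrative only).
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between groups.
    predictions: 0/1 decisions; groups: group label per decision."""
    rates = {}
    for pred, group in zip(predictions, groups):
        positives, total = rates.get(group, (0, 0))
        rates[group] = (positives + pred, total + 1)
    shares = [positives / total for positives, total in rates.values()]
    return max(shares) - min(shares)

# Toy example: a loan-approval model's decisions for two groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)
print(f"parity gap: {gap:.2f}")   # group A: 3/4, group B: 1/4 -> 0.50
if gap > 0.2:                     # threshold is illustrative
    print("Warning: fails this (very crude) fairness check.")
```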

 

This raises the question: Who should be tasked with enforcing these norms? Government regulators? The tech companies themselves?

 

 

From LinkedIn.com today:

 


 

From DSC:
I don’t like this at all. If this foot gets in the door, vendor after vendor will launch their own hordes of drones. In the future, where will we go if we want some peace and quiet? Will the air be filled with swarms of noisy drones? Will we be able to clearly see the sun? An exaggeration…? Maybe…maybe not.

But now what? What recourse do citizens have? Readers of this blog know that I’m generally pro-technology. But the folks — especially the youth — working within the FAANG companies (and the like) need to do a far better job asking, “Just because we can do something, should we do it?”

As I’ve said before, we’ve turned over the keys to the $137,000 Maserati to drivers who are just getting out of driving school. Then we wonder…“How did we get to this place?”

 

If you owned this $137,000+ car, would you turn its keys over to your 16-25 year old?!

 

As another example, just because we can…


 

…doesn’t mean we should.

 


 

 

Addendum on 4/20/19:

Amazon is now making its delivery drivers take selfies — from theverge.com by Shannon Liao
It will then use facial recognition to double-check

From DSC:
I don’t like this use of facial recognition by Amazon at all. An organization like Amazon asserts that it needs facial recognition to deliver services to its customers, and then, the next thing we know, facial recognition gets its foot in the door…sneaks in the back way into society’s house. By then, it’s much harder to get rid of. We end up with what’s currently happening in China. I don’t want to pay for anything with my face. Ever. As Mark Zuckerberg has demonstrated time and again, I don’t trust humankind to handle this kind of power. Plus, the surveillance states being developed by several governments are a chilling thing indeed. China is using facial recognition to identify/track Muslims.

China using AI to track Muslims

Can you think of some “groups” that people might be in that could be banned from receiving goods and services? I can. 

The appalling erosion of privacy that’s going on in several societies throughout the globe has got to be stopped.

 

 