Penn State World Campus Taps Google Cloud to Build Virtual Advising Assistant — from campustechnology.com by Rhea Kelly

Excerpt:

At the start of the spring 2020 semester this January, Penn State World Campus will have a new artificial intelligence tool for answering the most common requests from its undergraduate students. A virtual assistant will help academic advisers at the online institution screen student e-mails for certain keywords and phrases, and then automatically pull relevant information for the advisers to send to students. For instance, the AI will be trained to assist advisers when students inquire how to change their major, change their Penn State campus, re-enroll in the university or defer their semester enrollment date, according to a news announcement.
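The announcement doesn't detail how Penn State's assistant works under the hood, but the described behavior — screen e-mails for keywords and phrases, then surface relevant information for advisers — can be illustrated with a minimal sketch. All phrases and canned responses below are hypothetical, not Penn State's actual data:

```python
# Minimal, illustrative keyword-triage sketch -- NOT Penn State's actual
# implementation. Maps trigger phrases to canned advising information.
CANNED_RESPONSES = {
    "change my major": "Steps for changing your major: ...",
    "change campus": "How to change your Penn State campus: ...",
    "re-enroll": "Re-enrollment instructions: ...",
    "defer": "How to defer your semester enrollment date: ...",
}

def suggest_responses(email_body: str) -> list[str]:
    """Return canned advising info whose trigger phrase appears in the e-mail."""
    body = email_body.lower()
    return [info for phrase, info in CANNED_RESPONSES.items() if phrase in body]

print(suggest_responses("Hi, how do I change my major to IST?"))
# → ['Steps for changing your major: ...']
```

A production system would more likely use intent classification than literal substring matching, but the adviser-in-the-loop workflow — the tool suggests, the human sends — is the part the announcement emphasizes.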

 

Reflections on “Clay Shirky on Mega-Universities and Scale” [Christian]

Clay Shirky on Mega-Universities and Scale — from philonedtech.com by Clay Shirky
[This was a guest post by Clay Shirky that grew out of a conversation that Clay and Phil had about IPEDS enrollment data. Most of the graphs are provided by Phil.]

Excerpts:

Were half a dozen institutions to dominate the online learning landscape with no end to their expansion, or shift what Americans seek in a college degree, that would indeed be one of the greatest transformations in the history of American higher education. The available data, however, casts doubt on that idea.

Though much of the conversation around mega-universities is speculative, we already know what a mega-university actually looks like, one much larger than any university today. It looks like the University of Phoenix, or rather it looked like Phoenix at the beginning of this decade, when it had 470,000 students, the majority of whom took some or all of their classes online. Phoenix back then was six times the size of the next-largest school, Kaplan, with 78,000 students, and nearly five times the size of any university operating today.

From that high-water mark, Phoenix has lost an average of 40,000 students every year of this decade.

 

From DSC:
First of all, I greatly appreciate both Clay’s and Phil’s thought leadership and their respective contributions to education and learning through the years. I value their perspectives and their work.  Clay and Phil offer up a great article here — one worth your time to read.  

The article made me reflect on what I’ve been building upon and tracking for the last decade — a next generation ***PLATFORM*** that I believe will represent a powerful piece of a global learning ecosystem. I call this vision “Learning from the Living [Class] Room.” Though the artificial intelligence-backed platform that I’m envisioning doesn’t yet fully exist — this new era and type of learning-based platform ARE coming. The emerging signs, technologies, trends — and “fingerprints” of it, if you will — are beginning to develop all over the place.

Such a platform will:

  • Be aimed at the lifelong learner.
  • Offer up major opportunities to stay relevant and up-to-date with one’s skills.
  • Offer access to the program offerings from many organizations — including the mega-universities, but also from many other organizations that are not nearly as large as the mega-universities.
  • Be reliant upon human teachers, professors, trainers, and subject matter experts, but will be backed up by powerful AI-based technologies/tools. For example, AI-based tools will pulse-check open job descriptions and the needs of business, and present the top ___ areas to go into (how long those areas/jobs last is anyone’s guess, given the exponential pace of technological change).

Below are some quotes that I want to comment on:

Not nothing, but not the kind of environment that will produce an educational Amazon either, especially since the top 30 actually shrank by 0.2% a year.

 

Instead of an “Amazon vs. the rest” dynamic, online education is turning into something much more widely adopted, where the biggest schools are simply the upper end of a continuum, not so different from their competitors, and not worth treating as members of a separate category.

 

Since the founding of William and Mary, the country’s second college, higher education in the U.S. hasn’t been a winner-take-all market, and it isn’t one today. We are not entering a world where the largest university operates at outsized scale, we’re leaving that world; 

 

From DSC:
I don’t see us leaving that world at all…but that’s not my main reflection here. My focus isn’t on how large the mega-universities will become. When I speak of a forthcoming Walmart of Education or Amazon of Education, what I have in mind is a platform…not one particular organization.

Consider that the vast majority of Amazon’s revenues come from products that other organizations produce. Amazon is a platform, if you will. And in the world of platforms (i.e., software), it IS a winner-take-all market.

Bill Gates reflects on this as well in this recent article from The Verge:

“In the software world, particularly for platforms, these are winner-take-all markets.”

So it’s all about a forthcoming platform — or platforms. (It could be more than one platform. Consider Apple. Consider Microsoft. Consider Google. Consider Facebook.)

But then the question becomes…would a large number of universities (and other types of organizations) be willing to offer up their courses on a platform? Well, consider what’s ALREADY happening with FutureLearn.

Finally…one more excerpt from Clay’s article:

Eventually the new ideas lose their power to shock, and end up being widely copied. Institutional transformation starts as heresy and ends as a section in the faculty handbook. 

From DSC:
This is a great point. It reminds me of this tweet from Fred Steube (and I added a piece about Western Telegraph):

 

Some things to reflect upon…for sure.

 

 

 Also see:

Microsoft is building a virtual assistant for work. Google is building one for everything else — from qz.com by Dave Gershgorn

Excerpts:

In the early days of virtual personal assistants, the goal was to create a multipurpose digital buddy—always there, ready to take on any task. Now, tech companies are realizing that doing it all is too much, and instead doubling down on what they know best.

Since the company has a deep understanding of how organizations work, Microsoft is focusing on managing your workday with voice, rearranging meetings and turning the dials on the behemoth of bureaucracy in concert with your phone.

 

Voice is the next major platform, and being first to it is an opportunity to make the category as popular as Apple made touchscreens. To dominate even one aspect of voice technology is to tap into the next iteration of how humans use computers.

 

 

From DSC:
What affordances might these developments provide for our future learning spaces?

Will faculty members’ voices be recognized to:

  • Sign onto the LMS?
  • Dim the lights?
  • Turn on the projector(s) and/or display(s)?
  • Other?

Will students be able to send the contents of their mobile devices to particular displays via their voices?

Will voice be mixed in with augmented reality (i.e., the students and their devices can “see” which device to send their content to)?

Hmmm…time will tell.

 

 

10 important Google Drive tips for teachers and educators — from educatorstechnology.com

Excerpt:

As the creator and owner of a folder, you have different sharing settings at your hand. You can share your folders with specific people via email and allow them to either view the folder or view and edit it. Drive also enables you to prevent collaborators from changing access and adding new people by simply checking the box next to ‘prevent editors from changing access and adding new people’. Here is how to access sharing options of your folders…
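The same sharing settings the article walks through in the web UI are exposed programmatically in the Drive API v3: a permission body passed to `permissions.create`, and the file-level `writersCanShare` flag for the “prevent editors from changing access” checkbox. A minimal sketch (the e-mail address is a placeholder; an actual call would also require an authenticated `googleapiclient` service object):

```python
# Sketch of Drive v3 sharing settings as request bodies (no network calls
# here -- an authenticated Drive service would consume these dicts).
def build_share_permission(email: str, can_edit: bool) -> dict:
    """Permission body for Drive v3 permissions.create: viewer or editor."""
    return {
        "type": "user",
        "role": "writer" if can_edit else "reader",
        "emailAddress": email,
    }

# files.update body mirroring the checkbox
# "prevent editors from changing access and adding new people".
folder_update = {"writersCanShare": False}

print(build_share_permission("colleague@example.com", can_edit=False))
```

With a real service object, these would be passed as `service.permissions().create(fileId=folder_id, body=...)` and `service.files().update(fileId=folder_id, body=folder_update)`.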

 

10 things we should all demand from Big Tech right now — from vox.com by Sigal Samuel
We need an algorithmic bill of rights. AI experts helped us write one.

Excerpts:

  1. Transparency: We have the right to know when an algorithm is making a decision about us, which factors are being considered by the algorithm, and how those factors are being weighted.
  2. Explanation: We have the right to be given explanations about how algorithms affect us in a specific situation, and these explanations should be clear enough that the average person will be able to understand them.
  3. Consent: We have the right to give or refuse consent for any AI application that has a material impact on our lives or uses sensitive data, such as biometric data.
  4. Freedom from bias: We have the right to evidence showing that algorithms have been tested for bias related to race, gender, and other protected characteristics — before they’re rolled out. The algorithms must meet standards of fairness and nondiscrimination and ensure just outcomes. (Inserted comment from DSC: Is this even possible? I hope so, but I have my doubts especially given the enormous lack of diversity within the large tech companies.)
  5. Feedback mechanism: We have the right to exert some degree of control over the way algorithms work.
  6. Portability: We have the right to easily transfer all our data from one provider to another.
  7. Redress: We have the right to seek redress if we believe an algorithmic system has unfairly penalized or harmed us.
  8. Algorithmic literacy: We have the right to free educational resources about algorithmic systems.
  9. Independent oversight: We have the right to expect that an independent oversight body will be appointed to conduct retrospective reviews of algorithmic systems gone wrong. The results of these investigations should be made public.
  10. Federal and global governance: We have the right to robust federal and global governance structures with human rights at their center. Algorithmic systems don’t stop at national borders, and they are increasingly used to decide who gets to cross borders, making international governance crucial.

 

This raises the question: Who should be tasked with enforcing these norms? Government regulators? The tech companies themselves?

 

 

From Google: New AR features in Search rolling out later this month.

 

 


 

 
 

From LinkedIn.com today:

 


 

From DSC:
I don’t like this at all. If this foot gets in the door, vendor after vendor will launch their own hordes of drones. In the future, where will we go if we want some peace and quiet? Will the air be filled with swarms of noisy drones? Will we be able to clearly see the sun? An exaggeration…? Maybe…maybe not.

But, now what? What recourse do citizens have? Readers of this blog know that I’m generally pro-technology. But the folks — especially the youth — working within the FAANG companies (and the like) need to do a far better job asking, “Just because we can do something, should we do it?”

As I’ve said before, we’ve turned over the keys to the $137,000 Maserati to drivers who are just getting out of driving school. Then we wonder…“How did we get to this place?”

 

If you owned this $137,000+ car, would you turn the keys over to a 16-to-25-year-old?!

 

As another example, just because we can…


 

…doesn’t mean we should.

 


 

We Built an ‘Unbelievable’ (but Legal) Facial Recognition Machine — from nytimes.com by Sahil Chinoy

“The future of human flourishing depends upon facial recognition technology being banned,” wrote Woodrow Hartzog, a professor of law and computer science at Northeastern, and Evan Selinger, a professor of philosophy at the Rochester Institute of Technology, last year. “Otherwise, people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited.” Facial recognition is categorically different from other forms of surveillance, Mr. Hartzog said, and uniquely dangerous. Faces are hard to hide and can be observed from far away, unlike a fingerprint. Name and face databases of law-abiding citizens, like driver’s license records, already exist. And for the most part, facial recognition surveillance can be set up using cameras already on the streets. — Sahil Chinoy; per a weekly e-newsletter from Sam DeBrule at Machine Learnings in Berkeley, CA

Excerpt:

Most people pass through some type of public space in their daily routine — sidewalks, roads, train stations. Thousands walk through Bryant Park every day. But we generally think that a detailed log of our location, and a list of the people we’re with, is private. Facial recognition, applied to the web of cameras that already exists in most cities, is a threat to that privacy.

To demonstrate how easy it is to track people without their knowledge, we collected public images of people who worked near Bryant Park (available on their employers’ websites, for the most part) and ran one day of footage through Amazon’s commercial facial recognition service. Our system detected 2,750 faces from a nine-hour period (not necessarily unique people, since a person could be captured in multiple frames). It returned several possible identifications, including one frame matched to a head shot of Richard Madonna, a professor at the SUNY College of Optometry, with an 89 percent similarity score. The total cost: about $60.
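The Times used Amazon’s commercial facial recognition service, whose `CompareFaces` operation returns candidate matches with a `Similarity` score — which is where the article’s 89 percent figure comes from. A hedged sketch of that post-processing step (the response dict is shaped like Rekognition’s output but fabricated for illustration; a real call would go through `boto3.client("rekognition").compare_faces(SourceImage=..., TargetImage=..., SimilarityThreshold=...)`):

```python
# Sketch: picking the strongest match out of a Rekognition CompareFaces
# response. The demo response below is illustrative, not real data.
def best_similarity(response: dict, threshold: float = 80.0):
    """Return (best similarity score, whether it clears the threshold)."""
    scores = [m["Similarity"] for m in response.get("FaceMatches", [])]
    best = max(scores, default=0.0)
    return best, best >= threshold

# Response shaped like the article's result (one 89% similarity match):
demo = {"FaceMatches": [{"Similarity": 89.0}]}
print(best_similarity(demo))  # → (89.0, True)
```

The article’s larger point survives any implementation detail: at roughly $60 for nine hours of footage, the barrier is cost and consent, not technical sophistication.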

 

 

 

 

From DSC:
What do you think about this emerging technology and its potential impact on our society — and on other societies, such as China’s? Again I ask…what kind of future do we want?

As for me, my face is against the use of facial recognition technology in the United States — as I don’t trust where this could lead.

This wild, wild west situation continues to develop. For example, note how AI and facial recognition get their foot in the door via technologies installed years ago:

The cameras in Bryant Park were installed more than a decade ago so that people could see whether the lawn was open for sunbathing, for example, or check how busy the ice skating rink was in the winter. They are not intended to be a security device, according to the corporation that runs the park.

So Amazon’s use of facial recognition is but another foot in the door. 

This needs to be stopped. Now.

 

Facial recognition technology is a menace disguised as a gift. It’s an irresistible tool for oppression that’s perfectly suited for governments to display unprecedented authoritarian control and an all-out privacy-eviscerating machine.

We should keep this Trojan horse outside of the city. (source)

 


Example articles from the Privacy Project:

  • James Bennet: Do You Know What You’ve Given Up?
  • A. G. Sulzberger: How The Times Thinks About Privacy
  • Samantha Irby: I Don’t Care. I Love My Phone.
  • Tim Wu: How Capitalism Betrayed Privacy

 

 

26 incredibly useful things you didn’t know Google Calendar could do — from fastcompany.com by Jr Raphael
Upgrade your agenda with this cornucopia of advanced options, shortcuts, and features for Google Calendar.

Excerpt:

If you rely on Google Calendar like I do—or even if you just use it casually to keep track of occasional appointments—you’ll get more out of it once you’ve discovered all of its advanced tricks and time-saving possibilities. And if you’re too busy to tackle this right now, no worries: I happen to know a spectacular tool for setting reminders and making sure you never forget anything on your agenda.

(Unless otherwise noted, all the instructions mentioned below are specific to Calendar’s web version.)

 

27 Incredibly Useful Things You Didn’t Know Chrome Could Do — from fastcompany.com by Jr Raphael
Give your internet experience a jolt of fresh energy with these easily overlooked features, options, and shortcuts for Google’s browser.

 

Microsoft rolls out healthcare bot: How it will change healthcare industry — from yourtechdiet.com by Brian Curtis

Excerpt:

AI and the Healthcare Industry
This technology is evidently the game changer in the healthcare industry. According to the reports by Frost & Sullivan, the AI market for healthcare is likely to experience a CAGR of 40% by 2021, and has the potential to change industry outcomes by 30-40%, while cutting treatment costs in half.

In the words of Satya Nadella, “AI is the runtime that is going to shape all of what we do going forward in terms of the applications as well as the platform advances”.

Here are a few ways Microsoft’s Healthcare Bot will shape the Healthcare Industry…

 

Also see:

  • Why AI will make healthcare personal — from weforum.org by Peter Schwartz
    Excerpt:
    Digital assistants to provide a 24/7 helping hand
    The digital assistants of the future will be full-time healthcare companions, able to monitor a patient’s condition, transmit results to healthcare providers, and arrange virtual and face-to-face appointments. They will help manage the frequency and dosage of medication, and provide reliable medical advice around the clock. They will remind doctors of patients’ details, ranging from previous illnesses to past drug reactions. And they will assist older people to access the care they need as they age, including hospice care, and help to mitigate the fear and loneliness many elderly people feel.

 

  • Introducing New Alexa Healthcare Skills — from developer.amazon.com by Rachel Jiang
    Excerpts:
    The new healthcare skills that launched today are:
      • Express Scripts (a leading Pharmacy Services Organization)
      • Cigna Health Today (by Cigna, the global health service company)
      • My Children’s Enhanced Recovery After Surgery (ERAS) (by Boston Children’s Hospital, a leading children’s hospital)
      • Swedish Health Connect (by Providence St. Joseph Health, a healthcare system with 51 hospitals across 7 states and 829 clinics)
      • Atrium Health (a healthcare system with more than 40 hospitals and 900 care locations throughout North and South Carolina and Georgia)
      • Livongo (a leading consumer digital health company that creates new and different experiences for people with chronic conditions)

Voice as the Next Frontier for Conveniently Accessing Healthcare Services

 

  • Got health care skills? Big Tech wants to hire you — from linkedin.com by Jaimy Lee
    Excerpt:
    As tech giants like Amazon, Apple and Google place bigger and bigger bets on the U.S. health care system, it should come as no surprise that the rate at which they are hiring workers with health care skills is booming. We took a deep dive into the big tech companies on this year’s LinkedIn Top Companies list in the U.S., uncovering the most popular health care skills among their workers — and what that says about the future of health care in America.
 

Check out the top 10:

1) Alphabet (Google); Internet
2) Facebook; Internet
3) Amazon; Internet
4) Salesforce; Internet
5) Deloitte; Management Consulting
6) Uber; Internet
7) Apple; Consumer Electronics
8) Airbnb; Internet
9) Oracle; Information Technology & Services
10) Dell Technologies; Information Technology & Services

 

The growing marketplace for AI ethics — from forbes.com by Forbes Insights with Intel AI

Excerpt:

As companies have raced to adopt artificial intelligence (AI) systems at scale, they have also sped through, and sometimes spun out, in the ethical obstacle course AI often presents.

AI-powered loan and credit approval processes have been marred by unforeseen bias. Same with recruiting tools. Smart speakers have secretly turned on and recorded thousands of minutes of audio of their owners.

Unfortunately, there’s no industry-standard, best-practices handbook on AI ethics for companies to follow*—at least not yet. Some large companies, including Microsoft and Google, are developing their own internal ethical frameworks.

A number of think tanks, research organizations, and advocacy groups, meanwhile, have been developing a wide variety of ethical frameworks and guidelines for AI.

 

*Insert DSC:
Read this as a very powerful, chaotic, massive WILD, WILD WEST. Can law schools, legislatures, governments, businesses, and more keep up with this new pace of technological change?

 

© 2024 | Daniel Christian