Is your college future-ready? — from jisc.ac.uk by Robin Ghurbhurun

Excerpt:

Artificial intelligence (AI) is increasingly becoming science fact rather than science fiction. Alexa is everywhere from the house to the car, Siri is in the palm of your hand and students and the wider community can now get instant responses to their queries. We as educators have a duty to make sense of the information out there, working alongside AI to facilitate students’ curiosities.

Instead of banning mobile phones on campus, let’s manage our learning environments differently

We need to plan strategically to avoid a future where only the wealthy have access to human teachers, whilst others are taught with AI. We want all students to benefit from both. We should have teacher-approved content from VLEs and AI assistants supporting learning and discussion, everywhere from the classroom to the workplace. Let’s learn from the domestic market; witness the increasing rise of co-bot workers coming to an office near you.

 

 

Stanford team aims at Alexa and Siri with a privacy-minded alternative — from nytimes.com by John Markoff

Excerpt:

Now computer scientists at Stanford University are warning about the consequences of a race to control what they believe will be the next key consumer technology market — virtual assistants like Amazon’s Alexa and Google Assistant.

The group at Stanford, led by Monica Lam, a computer systems designer, last month received a $3 million grant from the National Science Foundation. The grant is for an internet service they hope will serve as a Switzerland of sorts for systems that use human language to control computers, smartphones and internet devices in homes and offices.

The researchers’ biggest concern is that virtual assistants, as they are designed today, could have a far greater impact on consumer information than today’s websites and apps. Putting that information in the hands of one big company or a tiny clique, they say, could erase what is left of online privacy.

 

Amazon sends Alexa developers on quest for ‘holy grail of voice science’ — from venturebeat.com by Khari Johnson

Excerpt:

At Amazon’s re:Mars conference last week, the company rolled out Alexa Conversations in preview. Conversations is a module within the Alexa Skills Kit that stitches together Alexa voice apps into experiences that help you accomplish complex tasks.

Alexa Conversations may be Amazon’s most intriguing and substantial pitch to voice developers in years. Conversations will make creating skills possible with fewer lines of code. It will also do away with the need to understand the many different ways a person can ask to complete an action, as a recurrent neural network will automatically generate dialogue flow.

For users, Alexa Conversations will make it easier to complete tasks that require the incorporation of multiple skills and will cut down on the number of interactions needed to do things like reserve a movie ticket or order food.

 

 

Facial recognition smart glasses could make public surveillance discreet and ubiquitous — from theverge.com by James Vincent; with thanks to Mr. Paul Czarapata, Ed.D. out on Twitter for this resource
A new product from UAE firm NNTC shows where this tech is headed next. <– From DSC: though hopefully not!!!

Excerpt:

From train stations and concert halls to sport stadiums and airports, facial recognition is slowly becoming the norm in public spaces. But new hardware formats like these facial recognition-enabled smart glasses could make the technology truly ubiquitous, able to be deployed by law enforcement and private security any time and any place.

The glasses themselves are made by American company Vuzix, while Dubai-based firm NNTC is providing the facial recognition algorithms and packaging the final product.

 

From DSC…I commented out on Twitter:

Thanks, Paul, for this posting – though I find it very troubling. Emerging technologies race out ahead of society. I'd be interested in knowing the age of the people developing these technologies and whether they care about asking the tough questions…like “Just because we can, should we be doing this?”

 

Addendum on 6/12/19:

 

‘Robots’ Are Not ‘Coming for Your Job’—Management Is — from gizmodo.com by Brian Merchant; with a special thanks going out to Keesa Johnson for her posting this out on LinkedIn

A robot is not ‘coming for’, or ‘stealing’ or ‘killing’ or ‘threatening’ to take away your job. Management is.

Excerpt (emphasis DSC):

At first glance, this might seem like a nitpicky semantic complaint, but I assure you it’s not—this phrasing helps, and has historically helped, mask the agency behind the *decision* to automate jobs. And this decision is not made by ‘robots,’ but management. It is a decision most often made with the intention of saving a company or institution money by reducing human labor costs (though it is also made in the interests of bolstering efficiency and improving operations and safety). It is a human decision that ultimately eliminates the job.

 

From DSC:
I’ve often said that if all the C-Suite cares about is maximizing profits — instead of thinking about their fellow humankind and society as a whole — we’re in big trouble.

If the thinking goes, “Heh — it’s just business!” <– Again, then we’re in big trouble here.

Just because we can, should we? Many people should be reflecting upon this question…and not just members of the C-Suite.

 

 

 

10 things we should all demand from Big Tech right now — from vox.com by Sigal Samuel
We need an algorithmic bill of rights. AI experts helped us write one.


Excerpts:

  1. Transparency: We have the right to know when an algorithm is making a decision about us, which factors are being considered by the algorithm, and how those factors are being weighted.
  2. Explanation: We have the right to be given explanations about how algorithms affect us in a specific situation, and these explanations should be clear enough that the average person will be able to understand them.
  3. Consent: We have the right to give or refuse consent for any AI application that has a material impact on our lives or uses sensitive data, such as biometric data.
  4. Freedom from bias: We have the right to evidence showing that algorithms have been tested for bias related to race, gender, and other protected characteristics — before they’re rolled out. The algorithms must meet standards of fairness and nondiscrimination and ensure just outcomes. (Inserted comment from DSC: Is this even possible? I hope so, but I have my doubts especially given the enormous lack of diversity within the large tech companies.)
  5. Feedback mechanism: We have the right to exert some degree of control over the way algorithms work.
  6. Portability: We have the right to easily transfer all our data from one provider to another.
  7. Redress: We have the right to seek redress if we believe an algorithmic system has unfairly penalized or harmed us.
  8. Algorithmic literacy: We have the right to free educational resources about algorithmic systems.
  9. Independent oversight: We have the right to expect that an independent oversight body will be appointed to conduct retrospective reviews of algorithmic systems gone wrong. The results of these investigations should be made public.
  10. Federal and global governance: We have the right to robust federal and global governance structures with human rights at their center. Algorithmic systems don’t stop at national borders, and they are increasingly used to decide who gets to cross borders, making international governance crucial.

 

This raises the question: Who should be tasked with enforcing these norms? Government regulators? The tech companies themselves?

 

 

From DSC:
I’m wondering to what extent artificial intelligence will be used to write code in the future…and/or to review/tweak/correct code…? Along these lines, see: “Introducing AI-Assisted Development to Elevate Low-Code Platforms to the Next Level.”

Excerpt:

Mendix was founded on the belief that software development could only be significantly improved if we introduced a paradigm shift. And that’s what we did. We fundamentally changed how software is created. With the current generation of the Mendix Platform, business applications can be created 10 times faster in close collaboration or even owned by the business, with IT being in control. Today we announce the next innovation, the introduction of AI-assisted development, which gives everyone the equivalent of a world-class coach looking over their shoulder.

 

 

San Francisco becomes first city to bar police from using facial recognition — from cnet.com by Laura Hautala
It won’t be the last city to consider a similar law.


Excerpt:

The city of San Francisco approved an ordinance on Tuesday [5/14/19] barring the police department and other city agencies from using facial recognition technology on residents. It’s the first such ban of the technology in the country.

The ordinance, which passed by a vote of 8 to 1, also creates a process for the police department to disclose what surveillance technology they use, such as license plate readers and cell-site simulators that can track residents’ movements over time. But it singles out facial recognition as too harmful to residents’ civil liberties to even consider using.

“Facial surveillance technology is a huge legal and civil liberties risk now due to its significant error rate, and it will be worse when it becomes perfectly accurate mass surveillance tracking us as we move about our daily lives,” said Brian Hofer, the executive director of privacy advocacy group Secure Justice.

For example, Microsoft asked the federal government in July to regulate facial recognition technology before it gets more widespread, and said it declined to sell the technology to law enforcement. As it is, the technology is on track to become pervasive in airports and shopping centers, and other tech companies like Amazon are selling the technology to police departments.

 

Also see:

 

From Google: New AR features in Search rolling out later this month.

 

 

Along these lines, see:

 

 

How blockchain, virtual assistants and AI are changing higher ed — from educationdive.com by Ben Unglesbee

Dive Brief:

  • In the coming years, advanced technologies like mixed reality, artificial intelligence (AI), blockchain and virtual assistants could play a bigger role at colleges and universities, according to a new report from Educause, a nonprofit focused on IT’s role in higher ed.
  • The 2019 Horizon Report, based on a panel of higher ed experts, zeroes in on trends, challenges and developments in educational technology. Challenges range from the “solvable,” such as improving digital fluency and increasing demand for digital learning experiences, to the “wicked.” The latter includes rethinking teaching and advancing digital equity.
  • The panel contemplated blockchain’s use in higher ed for the first time in the 2019 report. Specifically, the authors looked at its potential for creating alternative forms of academic records that “could follow students from one institution to another, serving as verifiable evidence of learning and enabling simpler transfer of credits across institutions.”

 

 

An algorithm wipes clean the criminal pasts of thousands — from bbc.com by Dave Lee

Excerpt:

This month, a judge in California cleared thousands of criminal records with one stroke of his pen. He did it thanks to a ground-breaking new algorithm that reduces a process that took months to mere minutes. The programmers behind it say: we’re just getting started solving America’s urgent problems.

 

Walmart unveils an AI-powered store of the future, now open to the public — from techcrunch.com by Sarah Perez

Excerpts:

Walmart this morning unveiled a new “store of the future” and test grounds for emerging technologies, including AI-enabled cameras and interactive displays. The store, a working concept called the Intelligent Retail Lab — or “IRL” for short — operates out of a Walmart Neighborhood Market in Levittown, N.Y.

Similar to Amazon Go’s convenience stores, the store has a suite of cameras mounted in the ceiling. But unlike Amazon Go, which is a grab-and-go store with smaller square footage, Walmart’s IRL spans 50,000 square feet of retail space and is staffed by more than 100 employees.

The cameras and other sensors in the store pump out 1.6 TB of data per second, or the equivalent of three years’ worth of music, which necessitates a big data center on site.
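That “three years’ worth of music” comparison is easy to sanity-check. The back-of-the-envelope calculation below is my own, not from the article, and it assumes music stored as 128 kbps MP3 (the figure shifts with the assumed bitrate):

```python
# Sanity check of the comparison: 1.6 TB of sensor data per second
# vs. "three years' worth of music".
# Assumption (mine, not Walmart's): music encoded as 128 kbps MP3.

MP3_BITRATE_BPS = 128_000                          # 128 kilobits per second
BYTES_PER_SECOND_OF_MUSIC = MP3_BITRATE_BPS / 8    # 16,000 bytes of audio per second

SECONDS_PER_YEAR = 365 * 24 * 3600
store_output_bytes = 1.6e12                        # 1.6 TB produced each second

years_of_music = store_output_bytes / BYTES_PER_SECOND_OF_MUSIC / SECONDS_PER_YEAR
print(f"{years_of_music:.1f} years of music per second of sensor data")
# prints: 3.2 years of music per second of sensor data
```

So at a typical streaming bitrate, each second of the store’s sensor output does indeed correspond to roughly three years of continuous music.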

 

From DSC:
I was pleased to see that 100+ human beings were still employed/utilized in that store location.

 

The finalized 2019 Horizon Report Higher Education Edition (from library.educause.edu) was just released on 4/23/19.

Excerpt:

Key Trends Accelerating Technology Adoption in Higher Education:

Short-Term: Driving technology adoption in higher education for the next one to two years

  • Redesigning Learning Spaces
  • Blended Learning Designs

Mid-Term: Driving technology adoption in higher education for the next three to five years

  • Advancing Cultures of Innovation
  • Growing Focus on Measuring Learning

Long-Term: Driving technology adoption in higher education for five or more years

  • Rethinking How Institutions Work
  • Modularized and Disaggregated Degrees

 

 

We Built an ‘Unbelievable’ (but Legal) Facial Recognition Machine — from nytimes.com by Sahil Chinoy

“‘The future of human flourishing depends upon facial recognition technology being banned,’ wrote Woodrow Hartzog, a professor of law and computer science at Northeastern, and Evan Selinger, a professor of philosophy at the Rochester Institute of Technology, last year. ‘Otherwise, people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited.’ Facial recognition is categorically different from other forms of surveillance, Mr. Hartzog said, and uniquely dangerous. Faces are hard to hide and can be observed from far away, unlike a fingerprint. Name and face databases of law-abiding citizens, like driver’s license records, already exist. And for the most part, facial recognition surveillance can be set up using cameras already on the streets.” — Sahil Chinoy; per a weekly e-newsletter from Sam DeBrule at Machine Learnings in Berkeley, CA

Excerpt:

Most people pass through some type of public space in their daily routine — sidewalks, roads, train stations. Thousands walk through Bryant Park every day. But we generally think that a detailed log of our location, and a list of the people we’re with, is private. Facial recognition, applied to the web of cameras that already exists in most cities, is a threat to that privacy.

To demonstrate how easy it is to track people without their knowledge, we collected public images of people who worked near Bryant Park (available on their employers’ websites, for the most part) and ran one day of footage through Amazon’s commercial facial recognition service. Our system detected 2,750 faces from a nine-hour period (not necessarily unique people, since a person could be captured in multiple frames). It returned several possible identifications, including one frame matched to a head shot of Richard Madonna, a professor at the SUNY College of Optometry, with an 89 percent similarity score. The total cost: about $60.
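For readers curious about the mechanics, the matching step in a system like this boils down to comparing a face embedding against a gallery of known faces and accepting the best match above a similarity threshold. The sketch below is my own illustration, not the Times’ actual pipeline (they called Amazon’s commercial Rekognition service); the toy vectors and the 0.89 threshold are stand-ins echoing the 89 percent similarity score mentioned above:

```python
# Illustrative sketch only: threshold-based matching of face embeddings.
# Real systems use 128- to 512-dimensional vectors produced by a neural
# network; the 3-D vectors and names here are hypothetical.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def match_face(probe, gallery, threshold=0.89):
    """Return (name, score) of the best gallery match above threshold, else None."""
    best = max(((name, cosine_similarity(probe, emb)) for name, emb in gallery.items()),
               key=lambda t: t[1])
    return best if best[1] >= threshold else None

gallery = {"alice": [0.9, 0.1, 0.3], "bob": [0.1, 0.95, 0.2]}
print(match_face([0.88, 0.12, 0.28], gallery))  # matches "alice" (similarity well above 0.89)
print(match_face([0.0, 0.0, 1.0], gallery))     # no match above threshold: None
```

The unsettling part is how little else is needed: a public photo becomes a gallery entry, each camera frame becomes a probe, and the threshold is just a dial the operator sets.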

 

 

 

 

From DSC:
What do you think about this emerging technology and its potential impact on our society — and on other societies like China? Again I ask…what kind of future do we want?

As for me, my face is against the use of facial recognition technology in the United States — as I don’t trust where this could lead.

This wild, wild west situation continues to develop. For example, note how AI and facial recognition get their foot in the door via technologies installed years ago:

The cameras in Bryant Park were installed more than a decade ago so that people could see whether the lawn was open for sunbathing, for example, or check how busy the ice skating rink was in the winter. They are not intended to be a security device, according to the corporation that runs the park.

So Amazon’s use of facial recognition is but another foot in the door. 

This needs to be stopped. Now.

 

Facial recognition technology is a menace disguised as a gift. It’s an irresistible tool for oppression that’s perfectly suited for governments to display unprecedented authoritarian control and an all-out privacy-eviscerating machine.

We should keep this Trojan horse outside of the city. (source)

 

AI’s white guy problem isn’t going away — from technologyreview.com by Karen Hao
A new report says current initiatives to fix the field’s diversity crisis are too narrow and shallow to be effective.

Excerpt:

The numbers tell the tale of the AI industry’s dire lack of diversity. Women account for only 18% of authors at leading AI conferences, 20% of AI professorships, and 15% and 10% of research staff at Facebook and Google, respectively. Racial diversity is even worse: black workers represent only 2.5% of Google’s entire workforce and 4% of Facebook’s and Microsoft’s. No data is available for transgender people and other gender minorities—but it’s unlikely the trend is being bucked there either.

This is deeply troubling when the influence of the industry has dramatically grown to affect everything from hiring and housing to criminal justice and the military. Along the way, the technology has automated the biases of its creators to alarming effect: devaluing women’s résumés, perpetuating employment and housing discrimination, and enshrining racist policing practices and prison convictions.

 

Along these lines, also see:

‘Disastrous’ lack of diversity in AI industry perpetuates bias, study finds — from theguardian.com by Kari Paul
Report says an overwhelmingly white and male field has reached ‘a moment of reckoning’ over discriminatory systems

Excerpt:

Lack of diversity in the artificial intelligence field has reached “a moment of reckoning”, according to new findings published by a New York University research center. A “diversity disaster” has contributed to flawed systems that perpetuate gender and racial biases, found the survey, published by the AI Now Institute, of more than 150 studies and reports.

The AI field, which is overwhelmingly white and male, is at risk of replicating or perpetuating historical biases and power imbalances, the report said. Examples cited include image recognition services making offensive classifications of minorities, chatbots adopting hate speech, and Amazon technology failing to recognize users with darker skin colors. The biases of systems built by the AI industry can be largely attributed to the lack of diversity within the field itself, the report said.

 

 


Example articles from the Privacy Project:

  • James Bennet: Do You Know What You’ve Given Up?
  • A. G. Sulzberger: How The Times Thinks About Privacy
  • Samantha Irby: I Don’t Care. I Love My Phone.
  • Tim Wu: How Capitalism Betrayed Privacy

 

 


© 2019 | Daniel Christian