Artificial Intelligence in Higher Education: Applications, Promise and Perils, and Ethical Questions — from er.educause.edu by Elana Zeide
What are the benefits and challenges of using artificial intelligence to promote student success, improve retention, streamline enrollment, and better manage resources in higher education?

Excerpt:

The promise of AI applications lies partly in their efficiency and partly in their efficacy. AI systems can capture a much wider array of data, at more granularity, than can humans. And these systems can do so in real time. They can also analyze many, many students—whether those students are in a classroom or in a student body or in a pool of applicants. In addition, AI systems offer excellent observations and inferences very quickly and at minimal cost. These efficiencies will lead, we hope, to increased efficacy—to more effective teaching, learning, institutional decisions, and guidance. So this is one promise of AI: that it will show us things we can’t assess or even envision given the limitations of human cognition and the difficulty of dealing with many different variables and a wide array of students.

A second peril in the use of artificial intelligence in higher education consists of the various legal considerations, mostly involving different bodies of privacy and data-protection law. Federal student-privacy legislation is focused on ensuring that institutions (1) get consent to disclose personally identifiable information and (2) give students the ability to access their information and challenge what they think is incorrect. The first is not much of an issue if institutions are not sharing the information with outside parties or if they are sharing it under one of the exceptions in the Family Educational Rights and Privacy Act (FERPA), in which case an institution does not have to get explicit consent from students. The second requirement—providing students with access to the information that is being used about them—is going to be an increasingly interesting issue. I believe that as the decisions being made by artificial intelligence become much more significant and as students become more aware of what is happening, colleges and universities will be pressured to show students this information. People are starting to want to know how algorithmic and AI decisions are impacting their lives.

My short advice about legal considerations? Talk to your lawyers. The circumstances vary considerably from institution to institution.

 

Technology as Part of the Culture for Legal Professionals: A Q&A with Daniel Christian — from campustechnology.com by Mary Grush and Daniel Christian

Excerpt (emphasis DSC):

Mary Grush: Why should new technologies be part of a legal education?

Daniel Christian: I think it’s a critical point because our society, at least in the United States — and many other countries as well — is being faced with a dramatic influx of emerging technologies. Whether we are talking about artificial intelligence, blockchain, Bitcoin, chatbots, facial recognition, natural language processing, big data, the Internet of Things, advanced robotics — any of dozens of new technologies — this is the environment that we are increasingly living in, and being impacted by, day to day.

It is so important for our nation that legal professionals — lawyers, judges, attorneys general, state representatives, and legislators among them — be as up to speed as possible on the technologies that surround us: What are the issues their clients and constituents face? It’s important that legal professionals regularly pulse-check the relevant landscapes to be sure that they are aware of the technologies that are coming down the pike. To help facilitate this habit, technology should be part of the culture for those who choose a career in law. (And what better time to help people start to build that habit than within the law schools of our nation?)

 

There is a real need for the legal realm to catch up with some of these emerging technologies, because right now, there aren’t many options for people to pursue. If the lawyers, and the legislators, and the judges don’t get up to speed, the “wild wests” out there will continue until they do.

 


 

The Age of AI: How Will In-house Law Departments Run in 10 Years? — from accdocket.com by Elizabeth Colombo

Excerpt:

2029 may feel far away right now, but all of this makes me wonder what in-house law might look like in 10 years. What will in-house law be like in an age of artificial intelligence (AI)? This article will look at how in-house law may be different in 10 years, focusing largely on anticipated changes to contract review and negotiation, and the workplace.

 

Also see:
A Primer on Using Artificial Intelligence in the Legal Profession — from jolt.law.harvard.edu by Lauri Donahue (2018)

Excerpt (emphasis DSC):

How Are Lawyers Using AI?
Lawyers are already using AI to do things like reviewing documents during litigation and due diligence, analyzing contracts to determine whether they meet pre-determined criteria, performing legal research, and predicting case outcomes.
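To make the contract-analysis use case concrete, here is a deliberately simple sketch of criteria-based review: scan a contract's text for required clauses and flag what is missing. The clause names and patterns are invented for illustration; commercial tools use trained models rather than keyword lists.

```python
# Toy sketch of criteria-based contract review: flag whether required
# clauses appear in a document. Clause names and regex patterns are
# invented; real tools use trained models, not keyword lists.
import re

REQUIRED_CLAUSES = {
    "governing law": r"governed by the laws of",
    "limitation of liability": r"limitation of liability",
    "termination": r"terminat(e|ion)",
    "confidentiality": r"confidential",
}

def review(contract_text: str) -> dict:
    """Return a mapping of clause name -> whether it was found."""
    text = contract_text.lower()
    return {name: bool(re.search(pattern, text))
            for name, pattern in REQUIRED_CLAUSES.items()}

sample = """This agreement shall be governed by the laws of Michigan.
Either party may terminate this agreement with 30 days' notice."""

for clause, present in review(sample).items():
    print(f"{clause:25s} {'found' if present else 'MISSING'}")
```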


Document Review

Analyzing Contracts

Legal Research

Predicting Results
Lawyers are often called upon to predict the future: If I bring this case, how likely is it that I’ll win — and how much will it cost me? Should I settle this case (or take a plea), or take my chances at trial? More experienced lawyers are often better at making accurate predictions, because they have more years of data to work with.

However, no lawyer has complete knowledge of all the relevant data.

Because AI can access more of the relevant data, it can be better than lawyers at predicting the outcomes of legal disputes and proceedings, and thus helping clients make decisions. For example, a London law firm used data on the outcomes of 600 cases over 12 months to create a model for the viability of personal injury cases. Indeed, trained on 200 years of Supreme Court records, an AI is already better than many human experts at predicting SCOTUS decisions.
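Under the hood, such outcome predictors are ordinary classifiers fit to historical case data. Here is a minimal, hypothetical sketch using scikit-learn; every feature name and value below is invented for illustration and has no connection to the London firm's actual model.

```python
# Hypothetical sketch: predict the viability of personal injury cases
# from historical outcomes. All features and data are synthetic; a real
# model would be trained on a firm's own closed-case records.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_cases = 600  # e.g., one year of closed cases

# Each row is one case: claim amount ($k), injury severity (1-5),
# defendant insured (0/1), months elapsed since the incident.
X = np.column_stack([
    rng.lognormal(3.0, 1.0, n_cases),
    rng.integers(1, 6, n_cases),
    rng.integers(0, 2, n_cases),
    rng.uniform(1, 36, n_cases),
])
y = rng.integers(0, 2, n_cases)  # 1 = won or settled favorably

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Estimated viability of a new matter, given its features
new_case = [[45.0, 3, 1, 6.0]]
print(f"Estimated viability: {model.predict_proba(new_case)[0, 1]:.0%}")
print(f"Held-out AUC: {roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]):.2f}")
```

On synthetic labels the AUC will hover near 0.5; the point is the shape of the pipeline, not the numbers.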

 

5 important artificial intelligence predictions (for 2019) everyone should read — from forbes.com by Bernard Marr

Excerpts:

  1. AI increasingly becomes a matter of international politics
  2. A move towards “transparent AI”
  3. AI and automation drilling deeper into every business
  4. More jobs will be created by AI than will be lost to it (for the next year at least)
  5. AI assistants will become truly useful

 

“…these tensions could compromise the spirit of cooperation between academic and industrial organizations across the world.”

 

“AI solutions for managing compliance and legal issues are also likely to be increasingly adopted. As these tools will often be fit-for-purpose across a number of organizations, they will increasingly be offered as-a-service…”

 

 

Governments take first, tentative steps at regulating AI — from heraldnet.com by James McCusker
Can we control artificial intelligence’s potential for disrupting markets? Time will tell.

Excerpt:

State legislatures in New York and New Jersey have proposed legislation that represents the first, tentative steps at regulation. While the two proposed laws are different, both include elements of information gathering about risks to privacy, security, and economic fairness.

 

 

You’re already being watched by facial recognition tech. This map shows where — from fastcompany.com by Katharine Schwab
Digital rights nonprofit Fight for the Future has mapped out the physical footprint of the controversial technology, which is in use in cities across the country.

 

 

A new immersive classroom uses AI and VR to teach Mandarin Chinese — from technologyreview.com by Karen Hao
Students will learn the language by ordering food or haggling with street vendors on a virtual Beijing street.

Excerpt:

Often the best way to learn a language is to immerse yourself in an environment where people speak it. The constant exposure, along with the pressure to communicate, helps you swiftly pick up and practice new vocabulary. But not everyone gets the opportunity to live or study abroad.

In a new collaboration with IBM Research, Rensselaer Polytechnic Institute (RPI), a university based in Troy, New York, now offers its students studying Chinese another option: a 360-degree virtual environment that teleports them to the busy streets of Beijing or a crowded Chinese restaurant. Students get to haggle with street vendors or order food, and the environment is equipped with different AI capabilities to respond to them in real time.

 

 

10 things we should all demand from Big Tech right now — from vox.com by Sigal Samuel
We need an algorithmic bill of rights. AI experts helped us write one.


Excerpts:

  1. Transparency: We have the right to know when an algorithm is making a decision about us, which factors are being considered by the algorithm, and how those factors are being weighted.
  2. Explanation: We have the right to be given explanations about how algorithms affect us in a specific situation, and these explanations should be clear enough that the average person will be able to understand them.
  3. Consent: We have the right to give or refuse consent for any AI application that has a material impact on our lives or uses sensitive data, such as biometric data.
  4. Freedom from bias: We have the right to evidence showing that algorithms have been tested for bias related to race, gender, and other protected characteristics — before they’re rolled out. The algorithms must meet standards of fairness and nondiscrimination and ensure just outcomes. (Inserted comment from DSC: Is this even possible? I hope so, but I have my doubts, especially given the enormous lack of diversity within the large tech companies. A toy sketch of what an automated bias check can look like appears after this list.)
  5. Feedback mechanism: We have the right to exert some degree of control over the way algorithms work.
  6. Portability: We have the right to easily transfer all our data from one provider to another.
  7. Redress: We have the right to seek redress if we believe an algorithmic system has unfairly penalized or harmed us.
  8. Algorithmic literacy: We have the right to free educational resources about algorithmic systems.
  9. Independent oversight: We have the right to expect that an independent oversight body will be appointed to conduct retrospective reviews of algorithmic systems gone wrong. The results of these investigations should be made public.
  10. Federal and global governance: We have the right to robust federal and global governance structures with human rights at their center. Algorithmic systems don’t stop at national borders, and they are increasingly used to decide who gets to cross borders, making international governance crucial.
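On the bias testing called for in item 4 above, parts of such an audit can be automated. The sketch below computes two common checks on synthetic data: the demographic parity gap and the disparate impact ratio between groups defined by a protected attribute. It is a toy, not a complete fairness audit, and passing it would not by itself establish "just outcomes."

```python
# Illustrative fairness check: compare a model's favorable-decision
# rates across groups defined by a protected attribute. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
decisions = rng.integers(0, 2, 1000)       # 1 = favorable outcome
group = rng.choice(["A", "B"], size=1000)  # protected attribute

rate_a = decisions[group == "A"].mean()
rate_b = decisions[group == "B"].mean()

parity_gap = abs(rate_a - rate_b)                         # demographic parity
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)  # disparate impact

print(f"Favorable rate, group A: {rate_a:.2%}; group B: {rate_b:.2%}")
print(f"Demographic parity gap: {parity_gap:.3f}")
# The common "four-fifths rule" flags ratios below 0.8 for review.
print(f"Disparate impact ratio: {impact_ratio:.2f} "
      f"({'flag for review' if impact_ratio < 0.8 else 'within threshold'})")
```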

 

This raises the question: Who should be tasked with enforcing these norms? Government regulators? The tech companies themselves?

 

 

San Francisco becomes first city to bar police from using facial recognition — from cnet.com by Laura Hautala
It won’t be the last city to consider a similar law.


Excerpt:

The city of San Francisco approved an ordinance on Tuesday [5/14/19] barring the police department and other city agencies from using facial recognition technology on residents. It’s the first such ban of the technology in the country.

The ordinance, which passed by a vote of 8 to 1, also creates a process for the police department to disclose what surveillance technology they use, such as license plate readers and cell-site simulators that can track residents’ movements over time. But it singles out facial recognition as too harmful to residents’ civil liberties to even consider using.

“Facial surveillance technology is a huge legal and civil liberties risk now due to its significant error rate, and it will be worse when it becomes perfectly accurate mass surveillance tracking us as we move about our daily lives,” said Brian Hofer, the executive director of privacy advocacy group Secure Justice.

For example, Microsoft asked the federal government in July to regulate facial recognition technology before it becomes more widespread, and said it had declined to sell the technology to law enforcement. As it is, the technology is on track to become pervasive in airports and shopping centers, and other tech companies, such as Amazon, are selling it to police departments.

 

Also see:

 

Introduction: Leading the social enterprise—Reinvent with a human focus (2019 Global Human Capital Trends) — from deloitte.com by Volini, Schwartz, Roy, Hauptmann, Van Durme, Denny, and Bersin

Excerpt (emphasis DSC):

Learning in the flow of life. The number-one trend for 2019 is the need for organizations to change the way people learn; 86 percent of respondents cited this as an important or very important issue. It’s not hard to understand why. Evolving work demands and skills requirements are creating an enormous demand for new skills and capabilities, while a tight labor market is making it challenging for organizations to hire people from outside. Within this context, we see three broader trends in how learning is evolving: It is becoming more integrated with work; it is becoming more personal; and it is shifting—slowly—toward lifelong models. Effective reinvention along these lines requires a culture that supports continuous learning, incentives that motivate people to take advantage of learning opportunities, and a focus on helping individuals identify and develop new, needed skills.

 

People, Power and Technology: The Tech Workers’ View — from doteveryone.org.uk

Excerpt:

People, Power and Technology: The Tech Workers’ View is the first in-depth research into the attitudes of the people who design and build digital technologies in the UK. It shows that workers are calling for an end to the era of moving fast and breaking things.

Significant numbers of highly skilled people are voting with their feet and leaving jobs they feel could have negative consequences for people and society. This is heightening the UK’s tech talent crisis and running up employers’ recruitment and retention bills. Organisations and teams that can understand and meet their teams’ demands to work responsibly will have a new competitive advantage.

While Silicon Valley CEOs have tried to reverse the “techlash” by showing their responsible credentials in the media, this research shows that workers:

    • need guidance and skills to help navigate new dilemmas
    • have an appetite for more responsible leadership
    • want clear government regulation so they can innovate with awareness

Also see:

  • U.K. Tech Staff Quit Over Work On ‘Harmful’ AI Projects — from forbes.com by Sam Shead
    Excerpt:
    An alarming number of technology workers operating in the rapidly advancing field of artificial intelligence say they are concerned about the products they’re building. Some 59% of U.K. tech workers focusing on AI have experience of working on products that they felt might be harmful for society, according to a report published on Monday by Doteveryone, the think tank set up by lastminute.com cofounder and Twitter board member Martha Lane Fox.

 

 

Microsoft debuts Ideas in Word, a grammar and style suggestions tool powered by AI — from venturebeat.com by Kyle Wiggers; with thanks to Mr. Jack Du Mez for his posting on this over on LinkedIn

Excerpt:

The first day of Microsoft’s Build developer conference is typically chock-full of news, and this year was no exception. During a keynote headlined by CEO Satya Nadella, the Seattle company took the wraps off a slew of updates to Microsoft 365, its lineup of productivity-focused, cloud-hosted software and subscription services. Among the highlights were a new AI-powered grammar and style checker in Word Online, dubbed Ideas in Word, and dynamic email messages in Outlook Mobile.

Ideas in Word builds on Editor, an AI-powered proofreader for Office 365 that was announced in July 2016 and replaced the Spelling & Grammar pane in Office 2016 later that year. Ideas in Word similarly taps natural language processing and machine learning to deliver intelligent, contextually aware suggestions that could improve a document’s readability. For instance, it’ll recommend ways to make phrases more concise, clear, and inclusive, and when it comes across a particularly tricky snippet, it’ll put forward synonyms and alternative phrasings.
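For the curious, tools like this typically pair statistical language models with curated rewrite rules. The snippet below is a toy illustration of the rule-based half only; the phrase table is invented, and this is not Microsoft's implementation.

```python
# Toy illustration of rule-based conciseness suggestions, the curated
# half of what a tool like Ideas in Word layers machine learning onto.
# The phrase table is invented for this example.
import re

REWRITES = {
    r"\bin order to\b": "to",
    r"\bdue to the fact that\b": "because",
    r"\bat this point in time\b": "now",
    r"\bhas the ability to\b": "can",
}

def suggest(sentence: str) -> list:
    """Return (matched phrase, suggested replacement) pairs."""
    found = []
    for pattern, replacement in REWRITES.items():
        for match in re.finditer(pattern, sentence, flags=re.IGNORECASE):
            found.append((match.group(0), replacement))
    return found

text = "In order to proceed, the committee has the ability to vote today."
for phrase, better in suggest(text):
    print(f'Consider replacing "{phrase}" with "{better}"')
```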

 


 

 

5 Myths of AI — from thejournal.com by Dian Schaffhauser

Excerpt:

No, artificial intelligence can’t replace the human brain, and no, we’ll never really be able to make AI bias-free. Those are two of the 10 myths IT analyst and consulting firm Gartner tackled in its recent report, “Debunking Myths and Misconceptions About Artificial Intelligence.”

 

 

We Built an ‘Unbelievable’ (but Legal) Facial Recognition Machine — from nytimes.com by Sahil Chinoy

“‘The future of human flourishing depends upon facial recognition technology being banned,’ wrote Woodrow Hartzog, a professor of law and computer science at Northeastern, and Evan Selinger, a professor of philosophy at the Rochester Institute of Technology, last year. ‘Otherwise, people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited.’ Facial recognition is categorically different from other forms of surveillance, Mr. Hartzog said, and uniquely dangerous. Faces are hard to hide and can be observed from far away, unlike a fingerprint. Name and face databases of law-abiding citizens, like driver’s license records, already exist. And for the most part, facial recognition surveillance can be set up using cameras already on the streets.” — Sahil Chinoy; per a weekly e-newsletter from Sam DeBrule at Machine Learnings in Berkeley, CA

Excerpt:

Most people pass through some type of public space in their daily routine — sidewalks, roads, train stations. Thousands walk through Bryant Park every day. But we generally think that a detailed log of our location, and a list of the people we’re with, is private. Facial recognition, applied to the web of cameras that already exists in most cities, is a threat to that privacy.

To demonstrate how easy it is to track people without their knowledge, we collected public images of people who worked near Bryant Park (available on their employers’ websites, for the most part) and ran one day of footage through Amazon’s commercial facial recognition service. Our system detected 2,750 faces from a nine-hour period (not necessarily unique people, since a person could be captured in multiple frames). It returned several possible identifications, including one frame matched to a head shot of Richard Madonna, a professor at the SUNY College of Optometry, with an 89 percent similarity score. The total cost: about $60.
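For a sense of how little engineering the Times' experiment required, here is a rough sketch of such a pipeline, assuming Amazon Rekognition's CompareFaces API via boto3 (the commercial service the article describes). The file names and the head-shot list are placeholders; this illustrates the general approach, not the Times' actual code.

```python
# Rough sketch of the pipeline the article describes: match faces in a
# frame of public footage against known head shots using Amazon
# Rekognition's CompareFaces API. File names are placeholders; running
# this requires AWS credentials and incurs per-call charges.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def load_bytes(path: str) -> bytes:
    with open(path, "rb") as f:
        return f.read()

# One frame pulled from the public webcam feed
frame = load_bytes("park_frame_0001.jpg")

# Head shots gathered from public employer websites (placeholder list)
known_faces = {"example_person": "headshots/example_person.jpg"}

for name, headshot in known_faces.items():
    response = rekognition.compare_faces(
        SourceImage={"Bytes": load_bytes(headshot)},
        TargetImage={"Bytes": frame},
        SimilarityThreshold=80,  # report only matches above 80 percent
    )
    for match in response["FaceMatches"]:
        print(f"{name}: {match['Similarity']:.0f}% similarity")
```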

 

 

 

 

From DSC:
What do you think about this emerging technology and its potential impact on our society — and on other societies, such as China’s? Again I ask: what kind of future do we want?

As for me, my face is against the use of facial recognition technology in the United States — as I don’t trust where this could lead.

This wild, wild west situation continues to develop. For example, note how AI and facial recognition get their foot in the door via technologies installed years ago:

The cameras in Bryant Park were installed more than a decade ago so that people could see whether the lawn was open for sunbathing, for example, or check how busy the ice skating rink was in the winter. They are not intended to be a security device, according to the corporation that runs the park.

So Amazon’s use of facial recognition is but another foot in the door. 

This needs to be stopped. Now.

 

Facial recognition technology is a menace disguised as a gift. It’s an irresistible tool for oppression that’s perfectly suited for governments to display unprecedented authoritarian control and an all-out privacy-eviscerating machine.

We should keep this Trojan horse outside of the city. (source)

 

AI’s white guy problem isn’t going away — from technologyreview.com by Karen Hao
A new report says current initiatives to fix the field’s diversity crisis are too narrow and shallow to be effective.

Excerpt:

The numbers tell the tale of the AI industry’s dire lack of diversity. Women account for only 18% of authors at leading AI conferences, 20% of AI professorships, and 15% and 10% of research staff at Facebook and Google, respectively. Racial diversity is even worse: black workers represent only 2.5% of Google’s entire workforce and 4% of Facebook’s and Microsoft’s. No data is available for transgender people and other gender minorities—but it’s unlikely the trend is being bucked there either.

This is deeply troubling when the influence of the industry has dramatically grown to affect everything from hiring and housing to criminal justice and the military. Along the way, the technology has automated the biases of its creators to alarming effect: devaluing women’s résumés, perpetuating employment and housing discrimination, and enshrining racist policing practices and prison convictions.

 

Along these lines, also see:

‘Disastrous’ lack of diversity in AI industry perpetuates bias, study finds — from theguardian.com by Kari Paul
Report says an overwhelmingly white and male field has reached ‘a moment of reckoning’ over discriminatory systems

Excerpt:

Lack of diversity in the artificial intelligence field has reached “a moment of reckoning”, according to new findings published by a New York University research center. A “diversity disaster” has contributed to flawed systems that perpetuate gender and racial biases, found the survey of more than 150 studies and reports published by the AI Now Institute.

The AI field, which is overwhelmingly white and male, is at risk of replicating or perpetuating historical biases and power imbalances, the report said. Examples cited include image recognition services making offensive classifications of minorities, chatbots adopting hate speech, and Amazon technology failing to recognize users with darker skin colors. The biases of systems built by the AI industry can be largely attributed to the lack of diversity within the field itself, the report said.

 

 
