AI’s white guy problem isn’t going away — from technologyreview.com by Karen Hao
A new report says current initiatives to fix the field’s diversity crisis are too narrow and shallow to be effective.

Excerpt:

The numbers tell the tale of the AI industry’s dire lack of diversity. Women account for only 18% of authors at leading AI conferences, 20% of AI professorships, and 15% and 10% of research staff at Facebook and Google, respectively. Racial diversity is even worse: black workers represent only 2.5% of Google’s entire workforce and 4% of Facebook’s and Microsoft’s. No data is available for transgender people and other gender minorities—but it’s unlikely the trend is being bucked there either.

This is deeply troubling when the influence of the industry has dramatically grown to affect everything from hiring and housing to criminal justice and the military. Along the way, the technology has automated the biases of its creators to alarming effect: devaluing women’s résumés, perpetuating employment and housing discrimination, and enshrining racist policing practices and prison convictions.

 

Along these lines, also see:

‘Disastrous’ lack of diversity in AI industry perpetuates bias, study finds — from theguardian.com by Kari Paul
Report says an overwhelmingly white and male field has reached ‘a moment of reckoning’ over discriminatory systems

Excerpt:

Lack of diversity in the artificial intelligence field has reached “a moment of reckoning”, according to new findings published by a New York University research center. A “diversity disaster” has contributed to flawed systems that perpetuate gender and racial biases, found the survey of more than 150 studies and reports published by the AI Now Institute.

The AI field, which is overwhelmingly white and male, is at risk of replicating or perpetuating historical biases and power imbalances, the report said. Examples cited include image recognition services making offensive classifications of minorities, chatbots adopting hate speech, and Amazon technology failing to recognize users with darker skin colors. The biases of systems built by the AI industry can be largely attributed to the lack of diversity within the field itself, the report said.

Example articles from the Privacy Project:

  • James Bennet: Do You Know What You’ve Given Up?
  • A. G. Sulzberger: How The Times Thinks About Privacy
  • Samantha Irby: I Don’t Care. I Love My Phone.
  • Tim Wu: How Capitalism Betrayed Privacy

Collaboration technology is fueling enterprise transformation – increasing agility, driving efficiency and improving productivity. Join Amy Chang at Enterprise Connect where she will share Cisco’s vision for the future of collaboration, the foundations we have in place and the amazing work we’re driving to win our customers’ hearts and minds. Cognitive collaboration – technology that weaves context and intelligence across applications, devices and workflows, connecting people with customers & colleagues, to deliver unprecedented experiences and transform how we work – is at the heart of our efforts. Join this session to see our technology in action and hear how our customers are using our portfolio of products today to transform the way they work.

Is Thomas Frey right? “…by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet.”

From a fairly recent e-newsletter from edsurge.com — though I don’t recall the exact date (emphasis DSC):

New England is home to some of the most famous universities in the world. But the region has also become ground zero for the demographic shifts that promise to disrupt higher education.

This week saw two developments that fit the narrative. On Monday, Southern Vermont College announced that it would shut its doors, becoming the latest small rural private college to do so. Later that same day, the University of Massachusetts said it would start a new online college aimed at a national audience, noting that it expects campus enrollments to erode as the number of traditional college-age students declines in the coming years.

“Make no mistake—this is an existential threat to entire sectors of higher education,” said UMass president Marty Meehan in announcing the online effort.

The approach seems to parallel the U.S. retail sector, where, as a New York Times piece outlines this week, stores like Target and WalMart have thrived by building online strategies aimed at competing with Amazon, while stores like Gap and Payless, which did little to move online, are closing stores. Of course, college is not like any other product or service, and plenty of campuses are touting the richness of the experience that students get by actually coming to a campus. And it’s not clear how many colleges can grow online to a scale that makes their investments pay off.

 

“It’s predicted that over the next several years, four to five major national players with strong regional footholds will be established. We intend to be one of them.”

University of Massachusetts President Marty Meehan


From DSC:
That last quote from UMass President Marty Meehan made me reflect upon the idea of having one or more enormous entities that will provide “higher education” in the future. I wonder if it will turn out that we’ll have more lifelong learning providers and platforms in the future — with the 60-year curriculum being an interesting idea that may come to fruition.

Long have I predicted that such an enormous entity would come to pass. Back in 2008, I named it the Forthcoming Walmart of Education. But then as the years went by, I got bummed out by some things that Walmart was doing, and re-branded it the Forthcoming Amazon.com of Higher Education. We’ll see how long that updated title lasts — but you get the point. In fact, the point aligns very nicely with what futurist Thomas Frey has been predicting for years as well:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider. (source)

I realize that education doesn’t always scale well…but I’m thinking that how people learn in the future may be different than how we did things in the past…communities of practice come to mind…as do new forms of credentialing…as do cloud-based learner profiles…as does the need for highly efficient, cost-effective, and constant opportunities/means to reinvent oneself.

Addendum:

74% of consumers go to Amazon when they’re ready to buy something. That should be keeping retailers up at night. — from cnbc.com

Key points (emphasis DSC)

  • Amazon remains a looming threat for some of the biggest retailers in the country — like Walmart, Target and Macy’s.
  • When consumers are ready to buy a specific product, nearly three-quarters of them, or 74 percent, are going straight to Amazon to do it, according to a new study by Feedvisor.
  • By the end of this year, Amazon is expected to account for 52.4 percent of the e-commerce market in the U.S., up from 48 percent in 2018.

 

“In New England, there will be between 32,000 and 54,000 fewer college-aged students just seven years from now,” Meehan said. “That means colleges and universities will have too much capacity and not enough demand at a time when the economic model in higher education is already straining under its own weight.” (Marty Meehan at WBUR)

The Future Today Institute’s 12th Annual Emerging Tech Trends Report — from futuretodayinstitute.com

Excerpts:

At the Future Today Institute, we identify emerging tech trends and map the future for our clients. This is FTI’s 12th annual Tech Trends Report, and in it we identify 315 tantalizing advancements in emerging technologies — artificial intelligence, biotech, autonomous robots, green energy and space travel — that will begin to enter the mainstream and fundamentally disrupt business, geopolitics and everyday life around the world. As of the publication date, the annual FTI Tech Trends Report has garnered more than 7.5 million cumulative views.

Key findings for 2019 (emphasis DSC)

  • Privacy is dead. (DC: NOT GOOD!!! If this is true, can the situation be reversed?)
  • Voice Search Optimization (VSO) is the new SEO.
  • The Big Nine.
  • Personal data records are coming. (DC: Including cloud-based learner profiles I hope.)
  • China continues to ascend, and not just in artificial intelligence.
  • Lawmakers around the world are not prepared to deal with new challenges that arise from emerging science and technology.
  • Consolidation continues as a key theme for 2019.

Why AI is a threat to democracy — and what we can do to stop it — from technologyreview.com by Karen Hao and Amy Webb

Excerpt:

Universities must create space in their programs for hybrid degrees. They should incentivize CS students to study comparative literature, world religions, microeconomics, cultural anthropology and similar courses in other departments. They should champion dual degree programs in computer science and international relations, theology, political science, philosophy, public health, education and the like. Ethics should not be taught as a stand-alone class, something to simply check off a list. Schools must incentivize even tenured professors to weave complicated discussions of bias, risk, philosophy, religion, gender, and ethics into their courses.

One of my biggest recommendations is the formation of GAIA, what I call the Global Alliance on Intelligence Augmentation. At the moment people around the world have very different attitudes and approaches when it comes to data collection and sharing, what can and should be automated, and what a future with more generally intelligent systems might look like. So I think we should create some kind of central organization that can develop global norms and standards, some kind of guardrails to imbue not just American or Chinese ideals inside AI systems, but worldviews that are much more representative of everybody.

Most of all, we have to be willing to think about this much longer-term, not just five years from now. We need to stop saying, “Well, we can’t predict the future, so let’s not worry about it right now.” It’s true, we can’t predict the future. But we can certainly do a better job of planning for it.

81% of legal departments aren’t ready for digitization: Gartner — from mitratech.com by The Mitratech Team

Excerpt:

Despite the efforts of Legal Operations legal tech adopters and advocates, and the many expert voices raised about the need to evolve the legal industry, a Gartner, Inc. report finds the vast majority of in-house legal departments are unprepared for digital transformation.

In compiling the report, Gartner reviewed the roles of legal departments in no less than 1,715 digital business projects. They also conducted interviews with over 100 general counsel and privacy officers, and another 100 legal stakeholders at large companies.

The reveal? That 81% of legal departments weren’t prepared for the oncoming tide of digitization at their companies. That leaves them at a disadvantage when one considers the results of Gartner’s CEO Survey.  Two-thirds of its CEO respondents predicted their business models would change in the next three years, with digitization as a major factor.

 

Also relevant here/see:
AI Pre-Screening Technology: A New Era for Contracts? — by Tim Pullan, CEO and Founder, ThoughtRiver

Excerpt:

However, enterprises are beginning to understand the tangible value that can be delivered by automated contract pre-screening solutions. Such technology can ask thousands of questions defined by legal experts, and within minutes deliver an output weighing up the risks and advising next steps. Legal resources are then only required to follow up on these recommendations, whether they be a change to a clause, removing common bottlenecks altogether, or acting quickly to monetise a business opportunity.

There are clear benefits for both the legal team and the business. The GC’s team spends more time on enterprise-wide strategy and supporting other departments, while the business can move at pace and gain considerable competitive advantage.
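The pre-screening idea above can be sketched in miniature as a hypothetical, rule-based toy (this is not ThoughtRiver's actual technology; the checks, patterns, and wording below are invented purely for illustration):

```python
# Hypothetical sketch of automated contract pre-screening: run a bank of
# expert-defined checks over contract text and return flagged risks with
# suggested next steps for the legal team to follow up on.
import re

# Each check: (name, regex pattern or None for an absence check, next step).
CHECKS = [
    ("unlimited liability", r"\bunlimited liability\b", "Cap liability in clause."),
    ("auto-renewal", r"\bautomatically renew", "Add a renewal notice period."),
    ("missing governing law", None, "Insert a governing-law clause."),
]

def prescreen(contract_text):
    """Return a list of (risk name, recommended next step) findings."""
    text = contract_text.lower()
    findings = []
    for name, pattern, next_step in CHECKS:
        if pattern is None:
            # Absence check: flag if no governing-law language appears at all.
            if "governing law" not in text:
                findings.append((name, next_step))
        elif re.search(pattern, text):
            findings.append((name, next_step))
    return findings

sample = ("This agreement shall automatically renew each year. "
          "Supplier accepts unlimited liability.")
for name, step in prescreen(sample):
    print(name, "->", step)
```

A real system would ask thousands of such questions, many learned rather than hand-written, but the workflow is the same: the machine flags, and legal resources follow up only on the recommendations.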

Horizon Report Preview 2019 — from library.educause.edu
Analytics, Artificial Intelligence (AI), Badges and Credentialing, Blended Learning, Blockchain, Digital Learning, Digital Literacy, Extended Reality (XR), Instructional Design, Instructional Technologies, Learning Analytics, Learning Space, Mobile Learning, Student Learning Support, Teaching and Learning

Abstract
The EDUCAUSE Horizon Report Preview provides summaries of each of the upcoming edition’s trends, challenges, and important developments in educational technology, which were ranked most highly by the expert panel. This year’s trends include modularized and disaggregated degrees, the advancing of digital equity, and blockchain.

For more than a decade, EDUCAUSE has partnered with the New Media Consortium (NMC) to publish the annual Horizon Report – Higher Education Edition. In 2018, EDUCAUSE acquired the rights to the NMC Horizon project.

Police across the US are training crime-predicting AIs on falsified data — from technologyreview.com by Karen Hao
A new report shows how supposedly objective systems can perpetuate corrupt policing practices.

Excerpts (emphasis DSC):

Despite the disturbing findings, the city entered a secret partnership only a year later with data-mining firm Palantir to deploy a predictive policing system. The system used historical data, including arrest records and electronic police reports, to forecast crime and help shape public safety strategies, according to company and city government materials. At no point did those materials suggest any effort to clean or amend the data to address the violations revealed by the DOJ. In all likelihood, the corrupted data was fed directly into the system, reinforcing the department’s discriminatory practices.


But new research suggests it’s not just New Orleans that has trained these systems with “dirty data.” In a paper released today, to be published in the NYU Law Review, researchers at the AI Now Institute, a research center that studies the social impact of artificial intelligence, found the problem to be pervasive among the jurisdictions it studied. This has significant implications for the efficacy of predictive policing and other algorithms used in the criminal justice system.

“Your system is only as good as the data that you use to train it on,” says Kate Crawford, cofounder and co-director of AI Now and an author on the study.
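Crawford's point, that a system is only as good as its training data, can be seen in a toy feedback loop. The numbers below are entirely hypothetical and the "model" is deliberately naive; the point is only that skewed records reproduce themselves when fed back into the system that generated them:

```python
# Toy illustration (not any real policing system) of "dirty data" feedback:
# if past arrest records over-represent one neighborhood because it was
# over-patrolled, a model trained on those records keeps sending patrols
# there, which generates still more skewed arrests.
from collections import Counter

# Hypothetical ground truth: crime is actually evenly distributed.
# Note that the model below never consults this -- that is the problem.
true_crime_rate = {"south": 0.5, "north": 0.5}

# Historical records reflect past patrol allocation, not true crime:
# 80% of past patrols went to "south", so 80% of recorded arrests are there.
log = Counter(south=80, north=20)

def allocate_patrols(arrest_log, total_patrols=100):
    """Naive 'predictive' model: patrol in proportion to past arrests."""
    total = sum(arrest_log.values())
    return {area: round(total_patrols * n / total)
            for area, n in arrest_log.items()}

def simulate_round(arrest_log):
    """Recorded arrests scale with patrol presence, not with true crime."""
    patrols = allocate_patrols(arrest_log)
    for area, p in patrols.items():
        arrest_log[area] += p  # more patrols -> more recorded arrests
    return patrols

for _ in range(3):
    patrols = simulate_round(log)

# The 80/20 skew persists indefinitely, even though true crime is 50/50.
print(patrols)  # → {'south': 80, 'north': 20}
```

No step in the loop is malicious; the corruption enters once, through the historical records, and the system faithfully preserves it.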

 

How AI is enhancing wearables — from techopedia.com by Claudio Butticev
Takeaway: Wearable devices have been helping people for years now, but the addition of AI to these wearables is giving them capabilities beyond anything seen before.

Excerpt:

Restoring Lost Sight and Hearing – Is That Really Possible?
People with sight or hearing loss must face a lot of challenges every day to perform many basic activities. From crossing the street to ordering food on the phone, even the simplest chore can quickly become a struggle. Things may change for those struggling with sight or hearing loss, however, as some companies have started developing machine learning-based systems to help the blind and visually impaired find their way across cities, and the deaf and hearing impaired enjoy some good music.

German AI company AiServe combined computer vision and wearable hardware (camera, microphone and earphones) with AI and location services to design a system that is able to acquire data over time to help people navigate through neighborhoods and city blocks. Sort of like a car navigation system, but in a much more adaptable form which can “learn how to walk like a human” by identifying all the visual cues needed to avoid common obstacles such as light posts, curbs, benches and parked cars.

 

From DSC:
So once again we see the pluses and minuses of a given emerging technology. In fact, most technologies can be used for good or for ill. But I’m left asking the following questions:

  • As citizens, what do we do if we don’t like a direction that’s being taken on a given technology or on a given set of technologies? Or on a particular feature, use, process, or development involved with an emerging technology?

One other reflection here…it will be really interesting to see what happens when some of these emerging technologies are combined in the future…again, for good or for ill.

The question is:
How can we weigh in?

 

Also relevant/see:

AI Now Report 2018 — from ainowinstitute.org, December 2018

Excerpt:

University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.

 

Furthermore, it is long overdue for technology companies to directly address the cultures of exclusion and discrimination in the workplace. The lack of diversity and ongoing tactics of harassment, exclusion, and unequal pay are not only deeply harmful to employees in these companies but also impacts the AI products they release, producing tools that perpetuate bias and discrimination.

The current structure within which AI development and deployment occurs works against meaningfully addressing these pressing issues. Those in a position to profit are incentivized to accelerate the development and application of systems without taking the time to build diverse teams, create safety guardrails, or test for disparate impacts. Those most exposed to harm from these systems commonly lack the financial means and access to accountability mechanisms that would allow for redress or legal appeals. This is why we are arguing for greater funding for public litigation, labor organizing, and community participation as more AI and algorithmic systems shift the balance of power across many institutions and workplaces.

Learning and Student Success: Presenting the Results of the 2019 Key Issues Survey — from er.educause.edu by Malcolm Brown

Excerpts:

Here are some results that caught (Malcolm’s) eye, with a few speculations tossed in:

  • The issue of faculty development reclaimed the top spot.
  • Academic transformation, previously a consistent top-three finisher, took a tumble in 2019 down to 10th.
  • After falling to 16th last year, the issue of competency-based education and new methods of learning assessment jumped up to 6th for 2019.
  • The issues of accessibility and universal design for learning (UDL) and of digital and information literacy held more or less steady.
  • Online and blended learning has rebounded significantly.

The real reason tech struggles with algorithmic bias — from wired.com by Yael Eisenstat

Excerpts:

ARE MACHINES RACIST? Are algorithms and artificial intelligence inherently prejudiced? Do Facebook, Google, and Twitter have political biases? Those answers are complicated.

But if the question is whether the tech industry is doing enough to address these biases, the straightforward response is no.

Humans cannot wholly avoid bias, as countless studies and publications have shown. Insisting otherwise is an intellectually dishonest and lazy response to a very real problem.

In my six months at Facebook, where I was hired to be the head of global elections integrity ops in the company’s business integrity division, I participated in numerous discussions about the topic. I did not know anyone who intentionally wanted to incorporate bias into their work. But I also did not find anyone who actually knew what it meant to counter bias in any true and methodical way.

 

But the company has created its own sort of insular bubble in which its employees’ perception of the world is the product of a number of biases that are engrained within the Silicon Valley tech and innovation scene.

AI bias: 9 questions leaders should ask — from enterprisersproject.com by Kevin Casey
Artificial intelligence bias can create problems ranging from bad business decisions to injustice. Use these questions to fight off potential biases in your AI systems.

Excerpt:

People questions to ask about AI bias
1. Who is building the algorithms?
2. Do your AI & ML teams take responsibility for how their work will be used?
3. Who should lead an organization’s effort to identify bias in its AI systems?
4. How is my training data constructed?

Data questions to ask about AI bias
5. Is the data set comprehensive?
6. Do you have multiple sources of data?

Management questions to ask about AI bias
7. What proportion of resources is appropriate for an organization to devote to assessing potential bias?
8. Have you thought deeply about what metrics you use to evaluate your work?
9. How can we test for bias in training data?
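Question 5 above ("Is the data set comprehensive?") can be made concrete with a small audit helper. The function name, tolerance, and data below are invented for illustration; real audits would compare against census or domain-specific reference figures:

```python
# A minimal sketch of a pre-training data audit: compare each group's
# share of the training records against its share of a reference
# population, and flag groups that deviate beyond a tolerance.
from collections import Counter

def representation_gaps(records, group_key, reference_shares, tolerance=0.05):
    """Return {group: (observed share, expected share)} for every group
    whose representation deviates from the reference by > tolerance."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = (observed, expected)
    return gaps

# Hypothetical résumé-screening training set, heavily skewed to one group.
training = [{"gender": "m"}] * 90 + [{"gender": "f"}] * 10
print(representation_gaps(training, "gender", {"m": 0.5, "f": 0.5}))
```

A check like this catches only the crudest imbalance, which is exactly why the remaining questions on the list (multiple data sources, metrics, ongoing bias testing) matter.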

2019 Top 10 IT Issues — from educause.edu

2019 Reveals Focus on “Student Genome”
In 2019, after a decade of preparing, higher education stands on a threshold. A new era of technology is ushering in myriad opportunities to apply data that supports and advances our higher ed mission. This threshold is similar to the one science stood on in the late 20th century: the prospect of employing technology to put genetic information to use meaningfully and ethically. Much in the same way, higher education must first “sequence” the data before we can apply it with any reliability or precision.

Our focus in 2019, then, is to organize, standardize, and safeguard data before applying it to our most pressing priority: student success.

The issues cluster into three themes:

  • Empowered Students: In their drive to improve student outcomes, institutions are increasingly focused on individual students, on their life circumstances, and on their entire academic journey. Leaders are relying on analytics and technology to make progress. Related issues: 2 and 4
  • Trusted Data: This is the work of the Student Genome Project, where the “sequencing” is taking place. Institutions are collecting, securing, integrating, and standardizing data and preparing the institution to use data meaningfully and ethically. Related issues: 1, 3, 5, 6 and 8
  • 21st Century Business Strategies: This is the leadership journey, in which institutions address today’s funding challenges and prepare for tomorrow’s more competitive ecosystem. Technology is now embedded into teaching and learning, research, and business operations and so must be embedded into the institutional strategy and business model. Related issues: 7, 9 and 10

Emerging technology trends can seem both elusive and ephemeral, but some become integral to business and IT strategies—and form the backbone of tomorrow’s technology innovation. The eight chapters of Tech Trends 2019 look to guide CIOs through today’s most promising trends, with an eye toward innovation and growth and a spotlight on emerging trends that may well offer new avenues for pursuing strategic ambitions.

© 2024 | Daniel Christian