AI Foundation Models for the Rest of Us — from future.a16z.com by Elliot Turner

Excerpt:

While 2012’s state-of-the-art systems could be trained on a $700 video game card, today’s state-of-the-art systems – often referred to as foundation models – likely require tens of millions of dollars in computation to train.

The emergence of these massive-scale, high-cost foundation models brings opportunities, risks, and limitations for startups and others that want to innovate in artificial intelligence and machine learning.

 

Data-Informed Learning Design and the Shift to Online — from campustechnology.com by Rhea Kelly

Excerpt:

The pandemic has been a testament to the progress that has been made in the use of technology to support online learning, but it has also revealed how poorly traditional course design translates to a digital experience. And that’s an opportunity for institutions to become more sophisticated in leveraging digital learning environments to go beyond what’s possible in a brick-and-mortar classroom. That’s according to Luyen Chou, chief learning officer at 2U. Here, we talk about transforming online pedagogy, the potential of emerging technologies, the beauty of simple data, essential human skills and more.

 

The Future Of Law? Legal Data — from techlaw.co.il by Adv. Zohar Fisher

Excerpt:

In an increasingly digital world, data is being leveraged – and collected – in novel ways. The legal field, known for being extremely risk-averse, has a real opportunity to utilize digital tools for many aspects of legal practice to achieve a wide variety of goals, including increasing revenue and expediency.

The legal technology industry, colloquially dubbed ‘Legal Tech,’ has been fueled by a flood of recent investments, including $1.5 billion in 2020 alone. As part of this tidal wave, legal data analysis is an integral part of this rapid growth. In fact, according to the American Bar Association’s Legal Technology Survey, 58% of participants used web-based software to aid their practice.

These programs rely on different types of data: internal data derived from a firm’s activity (such as tracked billable hours), individual data collected from cookies and web traffic, and industry data from publicly available legal sources. These can be combined to generate various forecasts and insights.

 

Feds’ spending on facial recognition tech expands, despite privacy concerns — by Tonya Riley

Excerpt:

The FBI on Dec. 30 signed a deal with Clearview AI for an $18,000 subscription license to the company’s facial recognition technology. While the value of the contract might seem just a drop in the bucket for the agency’s nearly $10 billion budget, the contract was significant in that it cemented the agency’s relationship with the controversial firm. The FBI previously acknowledged using Clearview AI to the Government Accountability Office but did not specify if it had a contract with the company.

From DSC:
What?!? Isn’t this yet another foot in the door for Clearview AI and the like? Is this the kind of world that we want to create for our kids?! Will our kids have any privacy whatsoever? I feel so powerless to effect change here. This technology, like other techs, will have a life of its own. Don’t think it will stop at finding criminals. 

AI being used in the hit series called Person of Interest

This is a snapshot from the series entitled, “Person of Interest.”
Will this show prove to be right on the mark?

Addendum on 1/18/22:
As an example, check out this article:

Tencent is set to ramp up facial recognition on Chinese children who log into its gaming platform. The increased surveillance comes as the tech giant caps how long kids spend gaming on its platform. In August 2021, China imposed strict limits on how much time children could spend gaming online.

 

A Move for ‘Algorithmic Reparation’ Calls for Racial Justice in AI — from wired.com by Khari Johnson
Researchers are encouraging those who work in AI to explicitly consider racism, gender, and other structural inequalities.

Excerpt:

FORMS OF AUTOMATION such as artificial intelligence increasingly inform decisions about who gets hired, is arrested, or receives health care. Examples from around the world articulate that the technology can be used to exclude, control, or oppress people and reinforce historic systems of inequality that predate AI.

“Algorithms are animated by data, data comes from people, people make up society, and society is unequal,” the paper reads. “Algorithms thus arc towards existing patterns of power and privilege, marginalization, and disadvantage.”

 

 
 

Python, Java, Linux and SQL: These are the hot tech skills employers are looking for — from zdnet.com by Owen Hughes
Programming languages continue to dominate job specs, with employers keen to build up their tech teams and deliver on expanding digital programmes.

In Q3, job listings in the tech industry suggest that organizations are on the lookout for technology professionals “who understand the core concepts of software development and project management” and possess technical skills in Linux, as well as programming languages Java, Python and SQL.

 

AI bots to user data: Is there space for rights in the metaverse? — from reuters.com by Sonia Elks

Summary

  • Facebook’s ‘metaverse’ plans fuel debate on virtual world
  • Shared digital spaces raise privacy, ownership questions
  • Rights campaigners urge regulators to widen safeguards

 

 

Americans Need a Bill of Rights for an AI-Powered World — from wired.com by Eric Lander & Alondra Nelson
The White House Office of Science and Technology Policy is developing principles to guard against powerful technologies—with input from the public.

Excerpt (emphasis DSC):

Soon after ratifying our Constitution, Americans adopted a Bill of Rights to guard against the powerful government we had just created—enumerating guarantees such as freedom of expression and assembly, rights to due process and fair trials, and protection against unreasonable search and seizure. Throughout our history we have had to reinterpret, reaffirm, and periodically expand these rights. In the 21st century, we need a “bill of rights” to guard against the powerful technologies we have created.

Our country should clarify the rights and freedoms we expect data-driven technologies to respect. What exactly those are will require discussion, but here are some possibilities: your right to know when and how AI is influencing a decision that affects your civil rights and civil liberties; your freedom from being subjected to AI that hasn’t been carefully audited to ensure that it’s accurate, unbiased, and has been trained on sufficiently representative data sets; your freedom from pervasive or discriminatory surveillance and monitoring in your home, community, and workplace; and your right to meaningful recourse if the use of an algorithm harms you. 

In the coming months, the White House Office of Science and Technology Policy (which we lead) will be developing such a bill of rights, working with partners and experts across the federal government, in academia, civil society, the private sector, and communities all over the country.

Technology can only work for everyone if everyone is included, so we want to hear from and engage with everyone. You can email us directly at ai-equity@ostp.eop.gov.

 

Higher Education Needs to Move Toward Mass-Personalization — from fierceeducation.com by Susan Fourtané

Excerpt:

Every industry, from health sciences to marketing to manufacturing, is using Artificial Intelligence (AI) to facilitate the delivery of mass-personalization, yet education has been slow in its adoption. These smart systems create personalized solutions targeted to meet the unique needs of every individual.

Artificial Intelligence-based technologies have the potential of serving as tools for educators to provide personalized learning. However, for mass-personalization to work, institutions first need to align their leadership. In his feature session, “What Is It Going to Take to Move from Mass-Production to Mass-Personalization?”, during the recent Online Learning Consortium virtual event, Dale Johnson, Director of Digital Innovation at Arizona State University, addressed the issue of mass-personalization in higher education.

Johnson reinforced the idea that with mass-personalization professors can deliver the right lesson to the right student at the right time.

Mass-personalization software does not replace the professor. It makes the professor better, more focused on the students.

 
 

How tech will change the way we work by 2030 — from techradar.com by Rob Lamb
Everyday jobs will be augmented by technology

Technologies like Artificial Intelligence (AI), Machine Learning (ML) and Blockchain will have a significant impact on work in the next decade and beyond. But if you believe the sci-fi hype or get bogged down in the technology, it can be difficult to relate them to today’s workplaces and jobs.

Excerpt:

Here are three everyday examples of their potential, expressed in terms of the business challenge they are addressing or how consumers will experience them. I don’t mean to oversimplify – these are powerful tools – but I think their potential shines through best when they’re expressed in their simplest terms.

From DSC:
I have an issue with one of their examples that involves positioning AI as the savior of the hiring process:

The problem with this process isn’t just that it’s hugely time-consuming, it’s that all kinds of unconscious biases can creep in, potentially even into the job advert. These issues can be eradicated with the use of AI, which can vet ads for gendered language, sort through applicants and pick out the most suitable ones in a fraction of the time it would take an HR professional or any other human being.

AI can be biased as well — just like humans. Lamb recognizes this as well when he states that “Provided the algorithms are written correctly, AI will be key to organisations addressing issues around diversity and inclusion in the workplace.” But that’s a big IF in my mind. So far, it appears that the track record isn’t great in this area. Also, as Cathy O’Neil asserts: “Algorithms are opinions embedded in code.”

AI may not be as adept as a skilled HR professional in seeing the possibilities for someone. Can person A’s existing skillsets be leveraged in this new/other position?
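From DSC (continued):
To make concrete what “vetting ads for gendered language” might even mean, here is a toy sketch. The word lists and the scan are purely illustrative assumptions on my part, not any HR vendor’s actual methodology — real tools use far larger lexicons and statistical models, and, as noted above, carry their own biases.

```python
# Toy sketch: flag gender-coded words in a job ad.
# The word lists below are illustrative assumptions, not a real
# HR vendor's lexicon or methodology.

MASCULINE_CODED = {"ninja", "rockstar", "dominant", "competitive", "fearless"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "interpersonal"}

def vet_ad(text: str) -> dict:
    """Return the gender-coded words found in a job ad's text."""
    # Normalize: strip surrounding punctuation, lowercase each word.
    words = {w.strip(".,!?;:").lower() for w in text.split()}
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine": sorted(words & FEMININE_CODED),
    }

ad = "We need a competitive, fearless rockstar with strong interpersonal skills."
print(vet_ad(ad))
# {'masculine': ['competitive', 'fearless', 'rockstar'], 'feminine': ['interpersonal']}
```

Even this trivial version shows where opinions enter the code: someone chose which words count as “coded,” and that choice is exactly the kind of embedded opinion O’Neil warns about.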

 

Gartner: 4 Key Trends Speeding AI Innovation — from campustechnology.com by Rhea Kelly

Excerpt:

Research firm Gartner has identified four trends that are driving artificial intelligence innovation in the near term. These technologies and approaches will be key to scaling AI initiatives, the company emphasized in a news announcement…

 

In the US, the AI Industry Risks Becoming Winner-Take-Most — from wired.com by Khari Johnson
A new study illustrates just how geographically concentrated AI activity has become.

Excerpt:

A NEW STUDY warns that the American AI industry is highly concentrated in the San Francisco Bay Area and that this could prove to be a weakness in the long run. The Bay leads all other regions of the country in AI research and investment activity, accounting for about one-quarter of AI conference papers, patents, and companies in the US. Bay Area metro areas see levels of AI activity four times higher than other top cities for AI development.

“When you have a high percentage of all AI activity in Bay Area metros, you may be overconcentrating, losing diversity, and getting groupthink in the algorithmic economy. It locks in a winner-take-most dimension to this sector, and that’s where we hope that federal policy will begin to invest in new and different AI clusters in new and different places to provide a balance or counter,” Mark Muro, policy director at the Brookings Institution and the study’s coauthor, told WIRED.


 

“Algorithms are opinions embedded in code.” — Cathy O’Neil

 
© 2022 | Daniel Christian