From DSC:
Should Economics classes be looking at the idea of a digital dollar?


The battle for control of the digital dollar — from protocol.com

Excerpt:

After months of delay, the Federal Reserve’s much-awaited report on a digital dollar could be out soon. Federal Reserve Chairman Jerome Powell told the Senate Banking Committee Tuesday that the white paper on a planned central bank digital currency (CBDC) is “ready to go,” possibly in weeks.

But don’t expect a detailed blueprint for an American CBDC. The white paper will be more of “an exercise in asking questions” about how the digital dollar should work, Powell said.

And there are plenty of questions. Some have already sparked a heated debate: How should consumers gain access to the digital dollar? And how much control should the Fed have?

Will a digital dollar lead to “digital authoritarianism”? That’s one of the biggest fears about a digital currency: a Big Brother future where the Fed has the power to track how people spend their money.

 

Feds’ spending on facial recognition tech expands, despite privacy concerns — by Tonya Riley

Excerpt:

The FBI on Dec. 30 signed a deal with Clearview AI for an $18,000 subscription license to the company’s facial recognition technology. While the value of the contract might seem just a drop in the bucket for the agency’s nearly $10 billion budget, the contract was significant in that it cemented the agency’s relationship with the controversial firm. The FBI previously acknowledged using Clearview AI to the Government Accountability Office but did not specify if it had a contract with the company.

From DSC:
What?!? Isn’t this yet another foot in the door for Clearview AI and the like? Is this the kind of world that we want to create for our kids?! Will our kids have any privacy whatsoever? I feel so powerless to effect change here. This technology, like other technologies, will take on a life of its own. Don’t think it will stop at finding criminals.

A snapshot from the hit TV series “Person of Interest,” which depicts AI being used for surveillance.
Will this show prove to be right on the mark?

Addendum on 1/18/22:
As an example, check out this article:

Tencent is set to ramp up facial recognition on Chinese children who log into its gaming platform. The increased surveillance comes as the tech giant caps how long kids spend gaming on its platform. In August 2021, China imposed strict limits on how much time children could spend gaming online.

 

Autonomous Weapons Are Here, but the World Isn’t Ready for Them — from wired.com by Will Knight
A UN report says a drone, operating without human control, attacked people in Libya. International efforts to restrict such weapons have so far failed.

This may be remembered as the year when the world learned that lethal autonomous weapons had moved from a futuristic worry to a battlefield reality. It’s also the year when policymakers failed to agree on what to do about it.

On Friday, 120 countries participating in the United Nations’ Convention on Certain Conventional Weapons could not agree on whether to limit the development or use of lethal autonomous weapons. Instead, they pledged to continue and “intensify” discussions.

 

 

From DSC:
As I looked at the article below, I couldn’t help but wonder…what is the role of the American Bar Association (ABA) in this type of situation? How can the ABA help the United States deal with the impact/place of emerging technologies?


Clearview AI will get a US patent for its facial recognition tech — from engadget.com by J. Fingas
Critics are worried the company is patenting invasive tech.

Excerpt:

Clearview AI is about to get formal acknowledgment for its controversial facial recognition technology. Politico reports Clearview has received a US Patent and Trademark Office “notice of allowance” indicating officials will approve a filing for its system, which scans faces across public internet data to find people from government lists and security camera footage. The company just has to pay administrative fees to secure the patent.

In a Politico interview, Clearview founder Hoan Ton-That claimed this was the first facial recognition patent involving “large-scale internet data.” The firm sells its tool to government clients (including law enforcement) hoping to accelerate searches.

As you might imagine, there’s a concern the USPTO is effectively blessing Clearview’s technology and giving the company a chance to grow despite widespread objections to its technology’s very existence. 


 


Top Resources For Students To Discover Real World Problems & Issues — from edtechreview.in by Saniya Khan

Are you looking for ways to help students learn about world issues: climate change, cultural diversity, biodiversity, education, water crisis, [homelessness] and more to build awareness about global issues and develop global competence?

 
 

Timnit Gebru Says Artificial Intelligence Needs to Slow Down — from wired.com by Max Levy
The AI researcher, who left Google last year, says the incentives around AI research are all wrong.

Excerpt:

ARTIFICIAL INTELLIGENCE RESEARCHERS are facing a problem of accountability: How do you try to ensure decisions are responsible when the decision maker is not a responsible person, but rather an algorithm? Right now, only a handful of people and organizations have the power—and resources—to automate decision-making.

Since leaving Google, Gebru has been developing an independent research institute to show a new model for responsible and ethical AI research. The institute aims to answer similar questions as her Ethical AI team, without fraught incentives of private, federal, or academic research—and without ties to corporations or the Department of Defense.

“Our goal is not to make Google more money; it’s not to help the Defense Department figure out how to kill more people more efficiently,” she said.

From DSC:
What does our society need to do to respond to this exponential pace of technological change? And where is the legal realm here?

Speaking of the pace of change…the following quote from The Future Direction And Vision For AI (from marktechpost.com by Imtiaz Adam) speaks to massive changes in this decade as well:

The next generation will feature 5G alongside AI and will lead to a new generation of Tech superstars in addition to some of the existing ones.

In the future, the variety, volume, and velocity of data are likely to increase substantially as we move to the era of 5G and devices at the Edge of the network. The author argues that our experience of AI development through the arrival of 3G and then 4G networks will be dramatically overshadowed when AI meets 5G and the IoT, leading to the rise of the AIoT, where the Edge of the network will become key for product and service innovation and business growth.

Also related/see:

 

Americans Need a Bill of Rights for an AI-Powered World — from wired.com by Eric Lander & Alondra Nelson
The White House Office of Science and Technology Policy is developing principles to guard against powerful technologies—with input from the public.

Excerpt (emphasis DSC):

Soon after ratifying our Constitution, Americans adopted a Bill of Rights to guard against the powerful government we had just created—enumerating guarantees such as freedom of expression and assembly, rights to due process and fair trials, and protection against unreasonable search and seizure. Throughout our history we have had to reinterpret, reaffirm, and periodically expand these rights. In the 21st century, we need a “bill of rights” to guard against the powerful technologies we have created.

Our country should clarify the rights and freedoms we expect data-driven technologies to respect. What exactly those are will require discussion, but here are some possibilities: your right to know when and how AI is influencing a decision that affects your civil rights and civil liberties; your freedom from being subjected to AI that hasn’t been carefully audited to ensure that it’s accurate, unbiased, and has been trained on sufficiently representative data sets; your freedom from pervasive or discriminatory surveillance and monitoring in your home, community, and workplace; and your right to meaningful recourse if the use of an algorithm harms you. 

In the coming months, the White House Office of Science and Technology Policy (which we lead) will be developing such a bill of rights, working with partners and experts across the federal government, in academia, civil society, the private sector, and communities all over the country.

Technology can only work for everyone if everyone is included, so we want to hear from and engage with everyone. You can email us directly at ai-equity@ostp.eop.gov

 

Constitution Day 2021: An Inspiring and Incomplete Civil Justice System — from legaltechmonitor.com by David Yellen

Excerpts:

Constitution Day, September 17, commemorates that date in 1787 when the delegates to the Constitutional Convention signed the U.S. Constitution in Philadelphia. Clearly, this date and this holiday do not attract nearly as much attention as July 4, the holiday marking the signing of the other foundational document of the United States, the Declaration of Independence. That is somewhat lamentable, however, because all Americans should take the time to celebrate and reflect on the Constitution each year.

I say celebrate because the Constitution was remarkable when it was written, and has served as the basic architecture for our system of government, law, and individual liberty for over 200 years. Not only has it remained the steadfast backbone of our country’s system of government, it has influenced new and evolving democracies around the world.

Despite the challenges, we believe the American legal system is still the best approach—and we believe all of us share a responsibility to embrace its ideals, expand access, and improve delivery. 

 

Defining the skills citizens will need in the future world of work — from McKinsey & Company; with thanks to Ryan Craig for this resource

Excerpts:

Our findings help define the particular skills citizens are likely to require in the future world of work and suggest how proficiency in them can influence work-related outcomes, namely employment, income, and job satisfaction. This, in turn, suggests three actions governments may wish to take.

  1. Reform education systems
  2. Reform adult-training systems
  3. Ensure affordability of lifelong education

Establish an AI aggregator of training programs to attract adult learners and encourage lifelong learning. AI algorithms could guide users on whether they need to upskill or reskill for a new profession and shortlist relevant training programs. 
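To make that idea a bit more concrete, here is a minimal sketch of the matching step such an aggregator might perform: compare a learner’s current skills against a target role, then shortlist the programs that cover the remaining gap. The role, skill names, and course catalog are invented for illustration; nothing here comes from the McKinsey report.

    # Hypothetical sketch of an "AI aggregator" matching step: rank training
    # programs by how much of a learner's skill gap each one covers.
    # All skills, roles, and courses below are made-up example data.

    TARGET_ROLE_SKILLS = {"data analysis", "communication", "critical thinking", "python"}

    COURSE_CATALOG = {
        "Intro to Python": {"python"},
        "Data Storytelling": {"communication", "data analysis"},
        "Critical Thinking 101": {"critical thinking"},
    }

    def shortlist_programs(learner_skills, top_n=3):
        """Return up to top_n programs, ranked by how many missing skills they cover."""
        gap = TARGET_ROLE_SKILLS - set(learner_skills)
        ranked = sorted(COURSE_CATALOG.items(),
                        key=lambda item: len(item[1] & gap),
                        reverse=True)
        return [name for name, skills in ranked[:top_n] if skills & gap]

    print(shortlist_programs({"python", "critical thinking"}))
    # -> ['Data Storytelling'], the one program that covers the remaining gap

A real system would draw on labor-market data and a much richer skill taxonomy; this only illustrates the kind of guidance the report describes.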

Foundational skills that will help citizens thrive in the future of work


From DSC:
No one will have all 56 of the skills that McKinsey recommends here. So, (HR) managers, please don’t load up your job postings with every single skill on the list. The search for purple unicorns gets tiring, old, and discouraging for those who are looking for work.

That said, much of what McKinsey’s research/data shows — and what their recommendations are — resonates with me. And that’s why I keep adding to the developments out at:

Learning from the living class room

A powerful, global, next-generation learning platform — meant to help people reinvent themselves quickly, safely, cost-effectively, conveniently, & consistently!!!

 

 

Watch a Drone Swarm Fly Through a Fake Forest Without Crashing — from wired.com by Max Levy
Each copter doesn’t just track where the others are. It constantly predicts where they’ll go.
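To make the prediction idea concrete, here is a minimal sketch of one way a drone could use constant-velocity extrapolation of its neighbors’ motion for collision avoidance. The look-ahead horizon, safety distance, and simple repulsion rule are illustrative assumptions, not the researchers’ actual trajectory planner.

    # Hypothetical sketch: each drone extrapolates its neighbors' motion a short
    # horizon ahead (constant-velocity assumption) and steers away from the
    # predicted positions rather than only the current ones.

    import numpy as np

    HORIZON = 0.5     # seconds to look ahead (assumed value)
    SAFE_DIST = 1.0   # minimum separation in meters (assumed value)

    def avoidance_velocity(my_pos, neighbors):
        """neighbors: list of (position, velocity) arrays for nearby drones."""
        push = np.zeros(3)
        for pos, vel in neighbors:
            predicted = pos + vel * HORIZON            # where the neighbor will likely be
            offset = my_pos - predicted
            dist = np.linalg.norm(offset)
            if 0 < dist < SAFE_DIST:
                push += offset / dist * (SAFE_DIST - dist)   # push away, harder when closer
        return push

    # Example: a neighbor 2 m away but flying toward us triggers an avoidance push.
    me = np.array([0.0, 0.0, 1.0])
    neighbor = (np.array([2.0, 0.0, 1.0]), np.array([-3.0, 0.0, 0.0]))
    print(avoidance_velocity(me, [neighbor]))   # -> [-0.5  0.   0. ]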

From DSC:
I’m not too crazy about this drone swarm…in fact, the more I think about it, the more alarming and nerve-racking I find it. It doesn’t take much imagination to picture what the militaries of the world are already doing with this kind of thing. And our son is now in the Marines. So forgive me if I’m a bit biased here…but I can’t help wondering what the role/impact of foot soldiers will be in the next war. I hope we don’t have one.

Anyway, just because we can…

 

21 jobs of the future: A guide to getting — and staying — employed over the next 10 years — from cognizant.com and the Center for the Future of Work

Excerpt:

WHAT THE NEXT 10 YEARS WILL BRING: NEW JOBS
In this report, we propose 21 new jobs that will emerge over the next 10 years and will become cornerstones of the future of work. In producing this report, we imagined hundreds of jobs that could emerge within the major macroeconomic, political, demographic, societal, cultural, business and technology trends observable today, e.g., growing populations, aging populations, populism, environmentalism, migration, automation, arbitrage, quantum physics, AI, biotechnology, space exploration, cybersecurity, virtual reality.

Among the jobs we considered, some seemed further out on the horizon and are not covered here: carbon farmers, 3-D printing engineers, avatar designers, cryptocurrency arbitrageurs, drone jockeys, human organ developers, teachers of English as a foreign language for robots, robot spa owners, algae farmers, autonomous fleet valets, Snapchat addiction therapists, urban vertical farmers and Hyperloop construction managers. These are jobs that younger generations may do in the further off future.

A chart of the 21 jobs, with tech-centricity on the vertical axis and the time horizon on the horizontal axis.

Also see:

Here are the top 10 jobs of the future — from bigthink.com by Robert Brown
Say hello to your new colleague, the Workplace Environment Architect.

Excerpt:

6. Algorithm Bias Auditor – “All online, all the time” lifestyles for work and leisure accelerated the competitive advantage derived from algorithms by digital firms everywhere. But from Brussels to Washington, given the increasing statutory scrutiny on data, it’s a near certainty that when it comes to how they’re built, verification through audits will help ensure the future workforce is also the fair workforce.
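To give a sense of what such an audit involves, here is a minimal sketch of one common check, a demographic-parity comparison of positive-outcome rates across groups. The data and the idea of flagging the gap are illustrative assumptions; real audits examine many more metrics.

    # Hypothetical sketch of one basic fairness check an algorithm auditor might
    # run: compare a model's positive-outcome rate across demographic groups
    # (demographic parity) and report the largest gap.

    from collections import defaultdict

    def positive_rates(records):
        """records: iterable of (group, decision) pairs, where decision is True/False."""
        totals, positives = defaultdict(int), defaultdict(int)
        for group, decision in records:
            totals[group] += 1
            positives[group] += int(decision)
        return {g: positives[g] / totals[g] for g in totals}

    def parity_gap(records):
        rates = positive_rates(records)
        return max(rates.values()) - min(rates.values())

    # Toy example: approval decisions tagged by group.
    decisions = [("A", True), ("A", True), ("A", False),
                 ("B", True), ("B", False), ("B", False)]
    print(positive_rates(decisions))   # A is about 0.67, B is about 0.33
    print(parity_gap(decisions))       # about 0.33, a gap an auditor would investigate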

 

The metaverse: real world laws give rise to virtual world problems — from cityam.com by Gregor Pryor

Legal questions
Like many technological advances, from the birth of the internet to more modern-day phenomena such as the use of big data and artificial intelligence (AI), the metaverse will in some way challenge the legal status quo.

Whilst the growth and adoption of the metaverse will raise age-old legal questions, it will also generate a number of unique legal and regulatory obstacles that need to be overcome.

From DSC:
I’m posting this because it’s another example of why the legal realm has to pick up the pace. Organizations like the American Bar Association (ABA) are going to have to move much faster. Society is being impacted by a variety of emerging technologies such as these, and such changes are far from over. Law schools need to assess their roles and responsibilities in this new world as well.

Addendum on 3/29/21:
Below are some more examples from Jason Tashea’s “The Justice Tech Download” e-newsletter:

  • Florida prisons buy up location data from data brokers. (Techdirt) A prison mail surveillance company keeps tabs on those on the outside, too. (VICE)
  • Police reform requires regulating surveillance tech. (Patch) (h/t Rebecca Williams) A police camera that never tires stirs unease at the US First Circuit Court of Appeals. (Courthouse News)
  • A Florida sheriff’s office was sued for using its predictive policing program to harass residents. (Techdirt)
  • A map of e-carceration in the US. (Media Justice) (h/t Upturn)
  • This is what happens when ICE asks Google for your user information. (Los Angeles Times)
  • Data shows the NYPD seized 55,000 phones in 2020, and it returned less than 35,000 of them. (Techdirt)
  • The SAFE TECH Act will make the internet less safe for sex workers. (OneZero)
  • A New York lawmaker wants to ban the use of armed robots by police. (Wired)
  • A look at the first wave of government accountability of algorithms. (AI Now Institute) The algorithmic auditing trap. (OneZero)
  • The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making. (Association for Computing Machinery)
  • A new open dataset has 510 commercial legal contracts with 13,000+ labels. (Atticus Project)
  • JusticeText co-founder shares her experience building tech for public defenders. (Law360)

 

 

A New York Lawmaker Wants to Ban Police Use of Armed Robots — from wired.com by Sidney Fussell
Officers’ use of Boston Dynamics’ Digidog intensifies concerns about militarization of the police.

A robot dog is pictured here.

Excerpt:

NEW YORK CITY councilmember Ben Kallos says he “watched in horror” last month when city police responded to a hostage situation in the Bronx using Boston Dynamics’ Digidog, a remotely operated robotic dog equipped with surveillance cameras. Pictures of the Digidog went viral on Twitter, in part due to their uncanny resemblance with world-ending machines in the Netflix sci-fi series Black Mirror.

 