U.S. issues charges in first criminal cryptocurrency sanctions case — from washingtonpost.com by Spencer S. Hsu
Federal judge finds U.S. sanctions laws apply to $10 million in Bitcoin sent by American citizen to a country blacklisted by Washington

Excerpt:

The Justice Department has launched its first criminal prosecution involving the alleged use of cryptocurrency to evade U.S. economic sanctions, a federal judge disclosed Friday.

 

Ransomware is already out of control. AI-powered ransomware could be ‘terrifying.’ — from protocol.com by Kyle Alspach
Hiring AI experts to automate ransomware could be the next step for well-endowed ransomware groups that are seeking to scale up their attacks.

Excerpt:

In the perpetual battle between cybercriminals and defenders, the latter have always had one largely unchallenged advantage: The use of AI and machine learning allows them to automate a lot of what they do, especially around detecting and responding to attacks. This leg-up hasn’t been nearly enough to keep ransomware at bay, but it has still been far more than what cybercriminals have ever been able to muster in terms of AI and automation.

That’s because deploying AI-powered ransomware would require AI expertise. And the ransomware gangs don’t have it. At least not yet.

But given the wealth accumulated by a number of ransomware gangs in recent years, it may not be long before attackers do bring aboard AI experts of their own, prominent cybersecurity authority Mikko Hyppönen said.

Also re: AI, see:

Nuance partners with The Academy to launch The AI Collaborative — from artificialintelligence-news.com by Ryan Daws

Excerpt:

Nuance has partnered with The Health Management Academy (The Academy) to launch The AI Collaborative, an industry group focused on advancing healthcare using artificial intelligence and machine learning.

Nuance became a household name for creating the speech recognition engine behind Siri. In recent years, the company has put a strong focus on AI solutions for healthcare; it is now a full-service partner of 77 percent of US hospitals and is trusted by over 500,000 physicians daily.

Inflection AI, led by LinkedIn and DeepMind co-founders, raises $225M to transform computer-human interactions — from techcrunch.com by Kyle Wiggers

Excerpts:

Inflection AI, the machine learning startup headed by LinkedIn co-founder Reid Hoffman and founding DeepMind member Mustafa Suleyman, has secured $225 million in equity financing, according to a filing with the U.S. Securities and Exchange Commission.

“[Programming languages, mice, and other interfaces] are ways we simplify our ideas and reduce their complexity and in some ways their creativity and their uniqueness in order to get a machine to do something,” Suleyman told the publication. “It feels like we’re on the cusp of being able to generate language to pretty much human-level performance. It opens up a whole new suite of things that we can do in the product space.”

 

AI research is a dumpster fire and Google’s holding the matches — from thenextweb.com by Tristan Greene
Scientific endeavor is no match for corporate greed

Excerpts:

The world of AI research is in shambles. From the academics prioritizing easy-to-monetize schemes over breaking novel ground, to the Silicon Valley elite using the threat of job loss to encourage corporate-friendly hypotheses, the system is a broken mess.

And Google deserves a lion’s share of the blame.

Google, more than any other company, bears responsibility for the modern AI paradigm. That means we need to give big G full marks for bringing natural language processing and image recognition to the masses.

It also means we can credit Google with creating the researcher-eat-researcher environment that has some college students and their big-tech-partnered professors treating research papers as little more than bait for venture capitalists and corporate headhunters.

But the system’s set up to encourage the monetization of algorithms first, and to further the field second. In order for this to change, big tech and academia both need to commit to wholesale reform in how research is presented and reviewed.

Also relevant/see:

Every month, Essentials publishes an Industry Trend Report on AI in general and on the following related topics:

  • AI Research
  • AI Applied Use Cases
  • AI Ethics
  • AI Robotics
  • AI Marketing
  • AI Cybersecurity
  • AI Healthcare

It’s never too early to get your AI ethics right — from protocol.com by Veronica Irwin
The Ethical AI Governance Group wants to give startups a framework for avoiding scandals and blunders while deploying new technology.

Excerpt:

To solve this problem, a group of consultants, venture capitalists and executives in AI created the Ethical AI Governance Group last September. In March, it went public, and published a survey-style “continuum” for investors to use in advising the startups in their portfolio.

The continuum conveys clear guidance for startups at various growth stages, recommending that startups have people in charge of AI governance and data privacy strategy, for example. EAIGG leadership argues that using the continuum will protect VC portfolios from value-destroying scandals.

 

12 examples of artificial intelligence in everyday life — from itproportal.com by Christopher Oldman

Excerpt:

4. Plagiarism
The college students’ (or is it professor’s?) nightmare. Whether you are a content manager or a teacher grading essays, you have the same problem – the internet makes plagiarism easier.

There is a nigh unlimited amount of information and data out there, and less-than-scrupulous students and employees will readily take advantage of that.

Indeed, no human could compare and contrast somebody’s essay with all the data out there. AIs are a whole different beast.

They can sift through an insane amount of information, compare it with the relevant text, and see if there is a match or not.

Furthermore, thanks to advancement and growth in this area, some tools can actually check sources in foreign languages, as well as images and audio.
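
The matching step described above is, at heart, a text-similarity computation. As a minimal, hypothetical sketch (real plagiarism detectors index billions of documents and handle paraphrasing and fingerprinting, which this does not), cosine similarity over simple word counts shows the core idea:

```python
from collections import Counter
import math

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between two texts using bag-of-words counts."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

essay = "the quick brown fox jumps over the lazy dog"
source = "a quick brown fox jumped over a lazy dog"
print(round(cosine_similarity(essay, source), 2))  # → 0.55
```

Scores near 1.0 flag near-duplicates; the multilingual and image/audio checking mentioned in the excerpt would layer far more sophisticated models on top of this basic comparison.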

Intel calls its AI that detects student emotions a teaching tool. Others call it ‘morally reprehensible.’ — from protocol.com by Kate Kaye
Virtual school software startup Classroom Technologies will test the controversial “emotion AI” technology.

Excerpts:

But Intel and Classroom Technologies, which sells virtual school software called Class, think there might be a better way. The companies have partnered to integrate an AI-based technology developed by Intel with Class, which runs on top of Zoom. Intel claims its system can detect whether students are bored, distracted or confused by assessing their facial expressions and how they’re interacting with educational content.

But critics argue that it is not possible to accurately determine whether someone is feeling bored, confused, happy or sad based on their facial expressions or other external signals.

The classroom is just one arena where controversial “emotion AI” is finding its way into everyday tech products and generating investor interest. It’s also seeping into delivery and passenger vehicles and virtual sales and customer service software.

MIT’s FutureMakers programs help kids get their minds around — and hands on — AI — from news.mit.edu by Kim Patch
The programs are designed to foster an understanding of how artificial intelligence technologies work, including their social implications.

Excerpt:

During one-week, themed FutureMakers Workshops organized around key topics related to AI, students learn how AI technologies work, including social implications, then build something that uses AI.

“AI is shaping our behaviors, it’s shaping the way we think, it’s shaping the way we learn, and a lot of people aren’t even aware of that,” says Breazeal. “People now need to be AI literate given how AI is rapidly changing digital literacy and digital citizenship.”

AI can now kill those annoying cookie pop-ups — from thenextweb.com by Thomas Macaulay
The notifications have been put on notice

Excerpt:

After years of suffering this digital torture, a new AI tool has finally offered hope of an escape.

Named CookieEnforcer, the system was created by researchers from Google and the University of Wisconsin-Madison.

The system was created to stop cookies from manipulating people into making website-friendly choices that put their privacy at risk. Yet it could also end the constant hassle of navigating the notices.

Using machine learning to improve student success in higher education — from mckinsey.com
Deploying machine learning and advanced analytics thoughtfully and to their full potential may support improvements in student access, success, and the overall student experience.

Excerpt:

Yet higher education is still in the early stages of data capability building. With universities facing many challenges (such as financial pressures, the demographic cliff, and an uptick in student mental-health issues) and a variety of opportunities (including reaching adult learners and scaling online learning), expanding use of advanced analytics and machine learning may prove beneficial.

Below, we share some of the most promising use cases for advanced analytics in higher education to show how universities are capitalizing on those opportunities to overcome current challenges, both enabling access for many more students and improving the student experience.
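
To make the early-warning use case concrete: an institution might combine a few engagement signals into a single risk score. The sketch below is purely illustrative — the signals and weights are invented for this example — whereas a real deployment would train and validate a model on the institution's own data:

```python
def dropout_risk_score(gpa: float, lms_logins_per_week: float, credits_attempted: int) -> float:
    """Toy early-warning score in [0, 1]; higher means more at risk.
    Signals and weights are illustrative, not derived from real data."""
    gpa_risk = max(0.0, (2.5 - gpa) / 2.5)                      # GPA below 2.5 raises risk
    engagement_risk = max(0.0, (5 - lms_logins_per_week) / 5)   # under ~5 LMS logins/week
    load_risk = 1.0 if credits_attempted < 12 else 0.0          # part-time load as a flag
    return min(1.0, 0.5 * gpa_risk + 0.3 * engagement_risk + 0.2 * load_risk)
```

Even a toy score like this surfaces the design questions the article raises: which signals are fair to use, and who reviews the students it flags.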

Artificial intelligence (AI): 7 roles to prioritize now — from enterprisersproject.com by Marc Lewis
Which artificial intelligence (AI) jobs are hottest now? Consider these seven AI/ML roles to prioritize in your organization

Excerpt:

Rather than a Great Resignation, this would suggest a Great Reallocation of the workforce. As a global search consultant, we are seeing this precipitous shift in positions, with great demand for skills in artificial intelligence and machine learning (AI/ML).

With that in mind, here are seven artificial intelligence (AI)-related roles to consider prioritizing right now as the workforce reallocates talent to new jobs that drive economic value for leading companies…

4 ways AI will be a great teaching assistant — from thetechedvocate.org by Matthew Lynch

 

Web3 Security: Attack Types and Lessons Learned — from a16z.com by Riyaz Faizullabhoy and Matt Gleason

Excerpt:

A good deal of web3 security rests on blockchains’ special ability to make commitments and to be resilient to human intervention. But the related feature of finality – where transactions are generally irreversible – makes these software-controlled networks a tempting target for attackers. Indeed, as blockchains – the distributed computer networks that are the foundation of web3 – and their accompanying technologies and applications accrue value, they become increasingly coveted targets for attackers.

Despite web3’s differences from earlier iterations of the internet, we’ve observed commonalities with previous software security trends. In many cases, the biggest problems remain the same as ever. By studying these areas, defenders – whether builders, security teams, or everyday crypto users – can better guard themselves, their projects, and their wallets against would-be thieves. Below we present some common themes and projections based on our experience.

 

Reflections on “Do We Really Want Academic Permanent Records to Live Forever on Blockchain?” [Bohnke]

From DSC:
Christin Bohnke raises a great and timely question out at edsurge.com in her article entitled:
Do We Really Want Academic Permanent Records to Live Forever on Blockchain?

Christin does a wonderful job of addressing the possibilities — but also the challenges — of using blockchain for educational/learning-related applications. She makes a great point that the time to look at this carefully is now:

Yet as much as unchangeable education records offer new chances, they also create new challenges. Setting personal and academic information in stone may actually counter the mission of education to help people evolve over time. The time to assess the benefits and drawbacks of blockchain technology is right now, before adoption in schools and universities is widespread.

As Christin mentions, blockchain technology can be used to store more than formal certification data. It could also store informal certification data such as “research experience, individual projects and skills, mentoring or online learning.”

The keeping of extensive records via blockchain certainly raises numerous questions. Below are a few that come to my mind:

  • Will this type of record-keeping help or hurt in terms of career development and moving to a different job?
  • Will — or should — CMS/LMS vendors enable this type of feature/service in their products?
  • Should credentials from the following sources be considered relevant?
    • Microlearning-based streams of content
    • Data from open courseware/courses
    • Learning that we do via our Personal Learning Networks (PLNs) and social networks
    • Learning that we get from alternatives such as bootcamps, coding schools, etc.
  • Will the keeping of records impact the enjoyment of learning — or vice versa? Or will it depend upon the person?
  • Will there be more choice, more control — or less so?
  • To what (granular) level should we go with competency-based education? Or with project-based learning?
  • Could instructional designers access learners’ profiles to provide more personalized learning experiences?
  • …and I’m certain there are more questions than these.

All that said…

To me, the answers to these questions — and likely other questions as well — lie in:

  1. Giving a person a chance to learn, practice, and then demonstrate the required skills (regardless of the data the potential employer has access to)
  2. Giving each user the right to own their own data — and to release it as they see fit. Each person should have the capability of managing their own information/data without having to have the skills of a software engineer or a database administrator. When something is written to a blockchain, there would be a field for who owns — and can administer — the data.

In the case of finding a good fit/job, a person could use a standardized interface to generate a URL that is sent out to a potential employer. That URL would be good for X days. The URL gives the potential employer the right to access whatever data has been made available to them. It could be full access, in which case the employer is able to run their own queries/searches on the data. Or the learner could restrict the potential employer’s reach to a more limited subset of data.
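
One plausible way to implement such expiring, learner-controlled links — sketched here with a hypothetical domain, field names, and signing key, not a description of any existing credentialing product — is an HMAC-signed URL that bakes the granted scope and an expiry date into the link itself:

```python
import hashlib
import hmac
import time

# Hypothetical signing key held by the learner's profile service, never by the employer.
SECRET = b"learner-held-signing-key"

def make_share_url(learner_id, scope, valid_days, now=None):
    """Create a link granting `scope` access to a learner's profile for `valid_days` days."""
    now = time.time() if now is None else now
    expires = int(now + valid_days * 86400)
    payload = f"{learner_id}|{scope}|{expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"https://profiles.example/{learner_id}?scope={scope}&expires={expires}&sig={sig}"

def verify_share_url(learner_id, scope, expires, sig, now=None):
    """Accept the link only if the signature matches and the expiry has not passed."""
    now = time.time() if now is None else now
    payload = f"{learner_id}|{scope}|{expires}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig) and now < expires
```

Because the scope is part of the signed payload, a learner could hand a “full” link to one employer and a narrower “transcript-only” link to another, and neither link can be altered or reused after the X days expire.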

Visually speaking:


Each learner can say who can access what data from their learner's profile


I still have a lot more thinking to do about this, but that’s where I’m at as of today. Have a good one all!


 

Is Artificial Intelligence Undermining The Legal System? — from lawyer-monthly.com
Globally recognised artificial intelligence expert Dr Lance Eliot explains how AI is undermining the legal system.

Excerpt:

Well, imagine if I told you that the text messages, the emails, and the video clips were all crafted via the use of AI-based deepfake technologies. None of that seeming “evidence” of wrongdoing or at least inappropriate actions of the prosecutor are real. They certainly look to be real. The texts use the same style of text messaging that the prosecutor normally uses. The emails have the same written style as other emails by the prosecutor.

And, the most damning of the materials, those video clips of the prosecutor, are clearly the face of the prosecutor, and the words spoken are of the same voice as the prosecutor. You might have been willing to assume that the texts and the emails could be faked, but the video seems to be the last straw on the camel’s back. This is the prosecutor caught on video saying things that are utterly untoward in this context. All of that could readily be prepared via the use of today’s AI-based deepfake high-tech.

So, be on the watch for getting AI-based deepfake materials produced about you.

 

Meet The Secretive Surveillance Wizards Helping The FBI And ICE Wiretap Facebook And Google Users — from forbes.com by Thomas Brewster
A small Nebraska company is helping law enforcement around the world spy on users of Google, Facebook and other tech giants. A secretly recorded presentation to police reveals how deeply embedded in the U.S. surveillance machine PenLink has become.

Excerpts:

PenLink might be the most pervasive wiretapper you’ve never heard of.

With $20 million revenue every year from U.S. government customers such as the Drug Enforcement Administration, the FBI, Immigration and Customs Enforcement (ICE) and almost every other law enforcement agency in the federal directory, PenLink enjoys a steady stream of income. That doesn’t include its sales to local and state police, where it also does significant business but for which there are no available revenue figures. Forbes viewed contracts across the U.S., including towns and cities in California, Florida, Illinois, Hawaii, North Carolina and Nevada.

 

China Is About to Regulate AI—and the World Is Watching — from wired.com by Jennifer Conrad
Sweeping rules will cover algorithms that set prices, control search results, recommend videos, and filter content.

Excerpt:

On March 1, China will outlaw this kind of algorithmic discrimination as part of what may be the world’s most ambitious effort to regulate artificial intelligence. Under the rules, companies will be prohibited from using personal information to offer users different prices for a product or service.

The sweeping rules cover algorithms that set prices, control search results, recommend videos, and filter content. They will impose new curbs on major ride-hailing, ecommerce, streaming, and social media companies.

 

The US is testing robot patrol dogs on its borders. Should we worry? — from interestingengineering.com by Loukia Papadopoulos
Bow-wow just got darker.

Excerpt:

However, we can’t shake the feeling that their deployment is creating a dystopian future, one we are not sure is a completely safe one especially if the dogs are trained to operate autonomously. That is a capability we are not sure the robots yet have. Will this be the next stop for border control? If so, how can we trust machines with rifles? Time will tell how this situation evolves.
 

Feds’ spending on facial recognition tech expands, despite privacy concerns — by Tonya Riley

Excerpt:

The FBI on Dec. 30 signed a deal with Clearview AI for an $18,000 subscription license to the company’s facial recognition technology. While the value of the contract might seem just a drop in the bucket for the agency’s nearly $10 billion budget, the contract was significant in that it cemented the agency’s relationship with the controversial firm. The FBI previously acknowledged using Clearview AI to the Government Accountability Office but did not specify if it had a contract with the company.

From DSC:
What?!? Isn’t this yet another foot in the door for Clearview AI and the like? Is this the kind of world that we want to create for our kids?! Will our kids have any privacy whatsoever? I feel so powerless to effect change here. This technology, like other techs, will have a life of its own. Don’t think it will stop at finding criminals. 

AI being used in the hit series called Person of Interest

This is a snapshot from the series entitled “Person of Interest.”
Will this show prove to be right on the mark?

Addendum on 1/18/22:
As an example, check out this article:

Tencent is set to ramp up facial recognition on Chinese children who log into its gaming platform. The increased surveillance comes as the tech giant caps how long kids spend gaming on its platform. In August 2021, China imposed strict limits on how much time children could spend gaming online.

 

From DSC:
As with many emerging technologies, there appear to be some significant pros and cons re: the use of NFTs (Non-Fungible Tokens).

The question I wonder about is: How can the legal realm help address the massive impacts of the exponential pace of technological change in our society these days? For example:

Technicians, network engineers, data center specialists, computer scientists, and others also need to be asking themselves how they can help out in these areas.

Emphasis below is mine.


NFTs Are Hot. So Is Their Effect on the Earth’s Climate — from wired.com by Gregory Barber
The sale of a piece of crypto art consumed as much energy as the studio uses in two years. Now the artist is campaigning to reduce the medium’s carbon emissions.

Excerpt:

The works were placed for auction on a website called Nifty Gateway, where they sold out in 10 seconds for thousands of dollars. The sale also consumed 8.7 megawatt-hours of energy, as the artist later learned from a website called Cryptoart.WTF.

NFTs And Their Role In The “Metaverse” — from 101blockchains.com by Georgia Weston

Many people would perceive NFTs as mere images of digital artworks or collectibles which they can sell for massive prices. However, the frenzy surrounding digital art in present times has pointed out many new possibilities with NFTs. For example, the NFT metaverse connection undoubtedly presents a promising use case for NFTs. The road for the future of NFTs brings many new opportunities for investors, enterprises, and hobbyists, which can shape up NFT usage and adoption in the long term. 

NFTs or non-fungible tokens are a new class of digital assets, which are unique, indivisible, and immutable. They help in representing the ownership of digital and physical assets on the blockchain. Starting from digital artwork to the gaming industry, NFTs are making a huge impact everywhere.

The decentralized nature of the blockchain offers the prospects for unlimited business opportunities and social interaction. Metaverse offers extremely versatile, scalable, and interoperable digital environments. Most important of all, the metaverse blends innovative technologies with models of interaction between participants from individual and enterprise perspectives. 

From DSC:
How might the developments occurring with NFTs and the Metaverse impact a next-gen learning platform?

—–

Artist shuts down because people keep stealing their work to make NFTs — from futurism.com by Victor Tangermann
NFT theft is a huge problem

Someone is selling NFTs of Olive Garden locations that they do not own — from futurism.com
And you can mint a breadstick NFT — for free, of course

 

Timnit Gebru Says Artificial Intelligence Needs to Slow Down — from wired.com by Max Levy
The AI researcher, who left Google last year, says the incentives around AI research are all wrong.

Excerpt:

ARTIFICIAL INTELLIGENCE RESEARCHERS are facing a problem of accountability: How do you try to ensure decisions are responsible when the decision maker is not a responsible person, but rather an algorithm? Right now, only a handful of people and organizations have the power—and resources—to automate decision-making.

Since leaving Google, Gebru has been developing an independent research institute to show a new model for responsible and ethical AI research. The institute aims to answer similar questions as her Ethical AI team, without fraught incentives of private, federal, or academic research—and without ties to corporations or the Department of Defense.

“Our goal is not to make Google more money; it’s not to help the Defense Department figure out how to kill more people more efficiently,” she said.

From DSC:
What does our society need to do to respond to this exponential pace of technological change? And where is the legal realm here?

Speaking of the pace of change…the following quote from The Future Direction And Vision For AI (from marktechpost.com by Imtiaz Adam) speaks to massive changes in this decade as well:

The next generation will feature 5G alongside AI and will lead to a new generation of Tech superstars in addition to some of the existing ones.

In future the variety, volume and velocity of data is likely to substantially increase as we move to the era of 5G and devices at the Edge of the network. The author argues that our experience of development with AI and the arrival of 3G followed by 4G networks will be dramatically overshadowed with the arrival of AI meets 5G and the IoT leading to the rise of the AIoT where the Edge of the network will become key for product and service innovation and business growth.


 

Surveillance in Schools Associated With Negative Student Outcomes — from techlearning.com by Erik Ofgang
Surveillance at schools is meant to keep students safe but sometimes it can make them feel like suspects instead.

Excerpt:

“We found that schools that rely heavily on metal detectors, random book bag searches, school resource officers, and other methods of surveillance had a negative impact relative to those schools who relied on those technologies least,” says Odis Johnson Jr., the lead author of the study and the Bloomberg Distinguished Professor of Social Policy & STEM Equity at Johns Hopkins.

The researchers also found that Black students are four times more likely to attend a high- versus low-surveillance school, and students who attend high-surveillance schools are more likely to be poor.

 

Americans Need a Bill of Rights for an AI-Powered World — from wired.com by Eric Lander & Alondra Nelson
The White House Office of Science and Technology Policy is developing principles to guard against powerful technologies—with input from the public.

Excerpt (emphasis DSC):

Soon after ratifying our Constitution, Americans adopted a Bill of Rights to guard against the powerful government we had just created—enumerating guarantees such as freedom of expression and assembly, rights to due process and fair trials, and protection against unreasonable search and seizure. Throughout our history we have had to reinterpret, reaffirm, and periodically expand these rights. In the 21st century, we need a “bill of rights” to guard against the powerful technologies we have created.

Our country should clarify the rights and freedoms we expect data-driven technologies to respect. What exactly those are will require discussion, but here are some possibilities: your right to know when and how AI is influencing a decision that affects your civil rights and civil liberties; your freedom from being subjected to AI that hasn’t been carefully audited to ensure that it’s accurate, unbiased, and has been trained on sufficiently representative data sets; your freedom from pervasive or discriminatory surveillance and monitoring in your home, community, and workplace; and your right to meaningful recourse if the use of an algorithm harms you. 

In the coming months, the White House Office of Science and Technology Policy (which we lead) will be developing such a bill of rights, working with partners and experts across the federal government, in academia, civil society, the private sector, and communities all over the country.

Technology can only work for everyone if everyone is included, so we want to hear from and engage with everyone. You can email us directly at ai-equity@ostp.eop.gov

 
© 2022 | Daniel Christian