Cost-cutting algorithms are making your job search a living hell — from vice.com by Nick Keppler
More companies are using automated job screening systems to vet candidates, forcing jobseekers to learn new and absurd tricks to have their résumés seen by a human.

Excerpts:

Companies are increasingly using automated systems to select who gets ahead and who gets eliminated from pools of applicants. For jobseekers, this can mean a series of bizarre, time-consuming tasks demanded by companies who have not shown any meaningful consideration of them.

Maneuvering around algorithmic gatekeepers to reach an actual person with a say in hiring has become a crucial skill, even if the tasks involved feel duplicitous and absurd. ATS software can also enable a company to discriminate, possibly unwittingly, based on bias-informed data and culling of certain psychological traits.

Until he started as a legal writer for FreeAdvice.com last month, Johnson, 36, said he was at potential employers’ whims. “I can’t imagine I’d move to the next round if I didn’t do what they said,” he told Motherboard.

 

How to block Facebook and Google from identifying your face — from cnbc.com by Todd Haselton

Excerpt:

  • A New York Times report over the weekend discussed a company named Clearview AI that can easily recognize people’s faces when someone uploads a picture.
  • It scrapes this data from the internet and sites people commonly use, such as Facebook and YouTube, according to the report.
  • You can stop Facebook and Google from recognizing your face in their systems, which is one step toward regaining your privacy.
  • Still, it probably won’t entirely stop companies like Clearview AI from recognizing you, since they’re not using the systems developed by Google or Facebook.
 

From DSC:
As some of you may know, I’m now working for the WMU-Thomas M. Cooley Law School. My faith gets involved here, but I believe that the LORD wanted me to get involved with:

  • Using technology to increase access to justice (#A2J)
  • Contributing to leveraging the science of learning for the long-term benefit of our students, faculty, and staff
  • Raising awareness regarding the potential pros and cons of today’s emerging technologies
  • Increasing the understanding that the legal realm has a looooong way to go to get (even somewhat) caught up with the impacts that such emerging technologies can/might have on us
  • Contributing and collaborating with others to help develop a positive future, not a negative one

Along these lines…in regard to what’s been happening with law schools over the last few years, I wanted to share a couple of things:

1) An article from The Chronicle of Higher Education by Benjamin Barton:

The Law School Crash

 

2) A response from our President and Dean, James McGrath: Repositioning a Law School for the New Normal

 

From DSC:
I also wanted to personally say that I arrived at WMU-Cooley Law School in 2018 and have been learning a lot there (which I love about my job!). Cooley employees are very warm, welcoming, experienced, knowledgeable, and professional. Everyone there is mission-driven. My boss, Chris Church, is multi-talented and excellent. Cooley has a great administrative/management team as well.

There have been many exciting new things happening there. That said, it will take time before we see the results of these changes. Perseverance and innovation will be key ingredients in crafting a modern legal education — especially in an industry that is only now beginning to offer online-based courses at the Juris Doctor (J.D.) level, roughly 20 years after this began occurring within undergraduate higher education.

My point in posting this is to say that we should ALL care about what’s happening within the legal realm!  We are all impacted by it, whether we realize it or not. We are all in this together and no one is an island — not as individuals, and not as organizations.

We need:

  • Far more diversity within the legal field
  • More technical expertise within the legal realm — not only among lawyers, but also among legislators, senators, representatives, judges, and others
  • Greater use of teams of specialists within the legal field
  • To offer more courses regarding emerging technologies — not only for legal practice itself but also for society at large
  • To be far more vigilant in crafting a positive world to be handed down to our kids and grandkids — a dream, not a nightmare. Just because we can, doesn’t mean we should.

Still not convinced that you should care? Here are some things from the CURRENT landscape:

  • You go to drop something off at your neighbor’s house. They have a camera that gets activated. What facial recognition database are you now in? Did you give your consent to that? No, you didn’t.
  • Because you posted your photo on Facebook, YouTube, Venmo, and/or millions of other websites, your face could be in Clearview AI’s database. Did you give your consent to that occurring? No, you didn’t.
  • You’re at the airport and facial recognition is used instead of a passport. Whose database was that from, and what gets shared? Did you give your consent to that occurring? Probably not, and it’s not easy to opt out, either.
  • Numerous types of drones, delivery bots, and more are already coming onto the scene. What will the sidewalks, streets, and skies look like — and sound like — in your neighborhood in the near future? Is that how you want it? Did you give your consent to that happening? No, you didn’t.
  • …and on and on it goes.

Addendum — speaking of islands!

Palantir CEO: Silicon Valley can’t be on ‘Palo Alto island’ — Big Tech must play by the rules — from cnbc.com by Jessica Bursztynsky

Excerpt:

Palantir Technologies co-founder and CEO Alex Karp said Thursday the core problem in Silicon Valley is the attitude among tech executives that they want to be separate from United States regulation.

“You cannot create an island called Palo Alto Island,” said Karp, who suggested tech leaders would rather govern themselves. “What Silicon Valley really wants is the canton of Palo Alto. We have the United States of America, not the ‘United States of Canton,’ one of which is Palo Alto. That must change.”

“Consumer tech companies, not Apple, but the other ones, have basically decided we’re living on an island and the island is so far removed from what’s called the United States in every way, culturally, linguistically and in normative ways,” Karp added.

 

 

From DSC:
I’ll say it again, just because we can, doesn’t mean we should.

From the article below…we can see another unintended consequence developing on society’s landscapes. I really wish the 20- and 30-somethings being hired by the big tech companies — especially Amazon, Facebook, Google, Apple, and Microsoft — to develop these things would ask themselves:

  • “Just because we can develop this system/software/application/etc., SHOULD we be developing it?”
  • What might the negative consequences be? 
  • Do the positive contributions outweigh the negative impacts…or not?

To college professors and teachers:
Please pass these thoughts on to your students now, so that this internal questioning and these conversations begin to take place in K-16.


Report: Colleges Must Teach ‘Algorithm Literacy’ to Help Students Navigate Internet — from edsurge.com by Rebecca Koenig

Excerpt (emphasis DSC):

If the Ancient Mariner were sailing on the internet’s open seas, he might conclude there’s information everywhere, but nary a drop to drink.

That’s how many college students feel, anyway. A new report published this week about undergraduates’ impressions of internet algorithms reveals students are skeptical of and unnerved by tools that track their digital travels and serve them personalized content like advertisements and social media posts.

And some students feel like they’ve largely been left to navigate the internet’s murky waters alone, without adequate guidance from teachers and professors.

Researchers set out to learn “how aware students are about their information being manipulated, gathered and interacted with,” said Alison Head, founder and director of Project Information Literacy, in an interview with EdSurge. “Where does that awareness drop off?”

They found that many students not only have personal concerns about how algorithms compromise their own data privacy but also recognize the broader, possibly negative implications of tools that segment and customize search results and news feeds.

 

From DSC:
Very disturbing that citizens had no say in this. Legislators, senators, representatives, lawyers, law schools, politicians, engineers, programmers, professors, teachers, and more…please reflect upon our current situation here. How can we help create the kind of future that we can hand down to our kids and rest well at night…knowing we did all that we could to provide a dream — and not a nightmare — for them?


The Secretive Company That Might End Privacy as We Know It — from nytimes.com by Kashmir Hill
A little-known start-up helps law enforcement match photos of unknown people to their online images — and “might lead to a dystopian future or something,” a backer says.

His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.

 

Excerpts:

“But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year…”

Clearview’s app carries extra risks because law enforcement agencies are uploading sensitive photos to the servers of a company whose ability to protect its data is untested.
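
From DSC:
Clearview’s actual system is proprietary, but face-search tools of this kind generally work by reducing each face to a numeric “embedding” vector and then hunting for the closest stored vectors. Below is a minimal, purely hypothetical sketch of that idea in Python; the embedding model, the vectors, and the file names are all stand-ins, not Clearview’s code:

    import numpy as np

    # Hypothetical face "embeddings": fixed-length vectors that a face-recognition
    # model (typically a deep neural network) produces for each photo. Real systems
    # index billions of these; here we fake a tiny database with random vectors.
    rng = np.random.default_rng(0)
    database = {
        "scraped_photo_001.jpg": rng.random(128),
        "scraped_photo_002.jpg": rng.random(128),
        "scraped_photo_003.jpg": rng.random(128),
    }

    def cosine_similarity(a, b):
        # Closer to 1.0 means the two faces look more alike to the model.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def find_matches(query_vec, db, threshold=0.9):
        # Return every stored photo whose embedding is close to the query's.
        scores = {name: cosine_similarity(query_vec, vec) for name, vec in db.items()}
        return sorted(((s, n) for n, s in scores.items() if s >= threshold), reverse=True)

    # Embedding of the photo someone just uploaded (also faked here).
    query = rng.random(128)
    print(find_matches(query, database))

The unsettling part is in the data, not the math: once a scraped photo is in such a database, matching is cheap, fast, and very hard to undo.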

 

CES 2020: Finding reality in a deluge of utopia — from web-strategist.com by Jeremiah Owyang

Excerpts:

One of my strategies is to look past the products that were announced and instead find the new technologies that will shed light on which products will emerge, such as sensors and data types.

The trick to approaching CES: Look for what creates the data, then analyze how it will be used; therein lies the power/leverage/business model of the future.

Sharp’s augmented windows give us an interesting glimpse of what retail could look like if every window was a transparent screen…

Rivian, the new electric truck company funded by both Ford and Amazon, was featured at the Amazon booth and drew a large crowd. Each wheel has an independent motor, and it’s Alexa-integrated – watch out, Cybertruck.

Caution: “Data leakage” (where your data ends up in places you didn’t expect) is frightening, and people will start to care. The number of devices present that offer data collection to unknown companies in unknown countries is truly astounding. From personal, business, and national security perspectives alike, consumers and businesses really don’t know the ramifications of all of this data sharing.

Also see:

 

From pizza to transplant organs: What drones will be delivering in the 2020s — from digitaltrends.com by Luke Dormehl

Excerpt:

From drone racing to drone photography, quadcopters and other unmanned aerial vehicles rose to prominence in the 2010s. But in the decade to come, they’re going to become an even bigger thing. Case in point: deliveries by drone.

Who should you be watching in this space?


From DSC:

While I appreciate Luke’s reporting on this, I am very much against darkening the skies with noisy machines. Again, we adults need to be very careful about the world that we are developing for our kids! If items could be delivered via a system of underground pipes, that would be a different approach for me: quieter, out of sight, and more agreeable.

Just because we can…

 

Indian police are using facial recognition to identify protesters in Delhi — from fastcompany.com by Kristin Toussaint

Excerpt:

At Modi’s rally on December 22, Delhi police used Automated Facial Recognition System (AFRS) software—which officials there acquired in 2018 as a tool to find and identify missing children—to screen the crowd for faces that match a database of people who have attended other protests around the city, and who officials said could be disruptive.

According to the Indian Express, Delhi police have long filmed these protest events, and the department announced Monday that officials fed that footage through AFRS. Sources told the Indian news outlet that once “identifiable faces” are extracted from that footage, a dataset will point out and retain “habitual protesters” and “rowdy elements.” That dataset was put to use at Modi’s rally to keep away “miscreants who could raise slogans or banners.”

 

From DSC:
Here in the United States…are we paying attention to today’s emerging technologies and collaboratively working to create a future dream — versus a future nightmare?! A vendor or organization might propose a beneficial reason to use its product or technology — and it might even meet the hype at times…but then come other unintended uses and consequences of that technology. For example, in the article above, a technology that was supposed to be used to find and identify missing children (a benefit) was later used to identify protesters (an unintended consequence, and a nightmare of expanded scope, I might add)!

Along these lines, the youth of today have every right to voice their opinions and to have a role in developing (or torpedoing) emerging technologies. What we build and put into place now will impact their lives big time!

 

AI arms race — from insidehighered.com by Lilah Burke
More employers are using applicant tracking systems to hire employees. Some colleges are using new AI-based tools, like VMock, to help students keep up.

Excerpt:

When college students need help with their résumés, some now will be turning to algorithms rather than advisers.

In the last decade, a growing number of large companies have started hiring using applicant tracking systems, AI-based platforms that scan résumés for keywords and rank job candidates.

Similarly, video interviewing platforms that use algorithms to evaluate a candidate’s voice, gestures and emotions have become ubiquitous in some industries. HireVue, the most well-known of these platforms, has drawn accusations of being pseudoscientific and potentially exacerbating bias in hiring.

The frustration many job candidates voice when coming up against these platforms is that they have no way of knowing what they could have done better. The systems give no feedback to candidates.

So what if students, job seekers and career advisers could use the AI for themselves?

Boston University, in a document of VMock tips for students, also advised graphic design or other creative industry students to have two versions of their résumé, one with a conventional layout.

From DSC:
Per my nephew, who works in a recruiting-type position within HR for a Fortune 500 organization:

  • Without a doubt HR recruiting is using AI to help in the selection process.
  • Many companies use keyword scanners, but not all [and, in fact, his company did not]; a toy sketch of the idea follows this list.
  • HireVue is very important to use when it comes to understanding a person’s presentation skills since a lot of presenting is done via Skype/live video these days. So HireVue is not going away anytime soon. I think it’s a great system/product.
  • At the end of the day, a good recruiter will identify the best talent that has applied to a position. I think it’s important for students to really think about what position they’re applying for and be realistic with their applications. I think that’s where a lot of frustration happens with students who apply to positions and never get to the first-round interview. They apply to 20-50 positions that don’t reflect their experience at all…so that’s where coaching and personal advisement are important.
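
To make the keyword-scanner idea above concrete, here is a toy sketch in Python. It is purely illustrative; the keywords, the scoring, and the cutoff are invented and are not any vendor’s actual algorithm:

    import re

    # Hypothetical keywords pulled from a job posting.
    JOB_KEYWORDS = {"litigation", "contracts", "research", "python", "sql"}

    def score_resume(text):
        # Count how many of the posting's keywords appear anywhere in the resume.
        words = set(re.findall(r"[a-z]+", text.lower()))
        return len(JOB_KEYWORDS & words)

    resumes = {
        "candidate_a": "Legal research and litigation support; drafted contracts.",
        "candidate_b": "Python and SQL developer with data-analysis experience.",
        "candidate_c": "Retail management and customer service.",
    }

    # Rank candidates by keyword overlap; below the cutoff, no human ever sees them.
    CUTOFF = 2
    for name, text in sorted(resumes.items(), key=lambda kv: score_resume(kv[1]), reverse=True):
        s = score_resume(text)
        print(name, s, "advances" if s >= CUTOFF else "rejected")

This is also why the “absurd tricks” in the Vice piece above work: the gatekeeper is matching strings, so mirroring the posting’s exact wording is often what gets a résumé in front of a human.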
 

7 Artificial Intelligence Trends to Watch in 2020 — from interestingengineering.com by Christopher McFadden

Excerpts:

Per this article, the following trends were listed:

  1. Computer Graphics will greatly benefit from AI
  2. Deepfakes will only get better, er, worse
  3. Predictive text should get better and better
  4. Ethics will become more important as time goes by
  5. Quantum computing will supercharge AI
  6. Facial recognition will appear in more places
  7. AI will help in the optimization of production pipelines

Also, this article listed several more trends:

According to sources like The Next Web, some of the main AI trends for 2020 include:

  • The use of AI to make healthcare more accurate and less costly
  • Greater attention paid to explainability and trust
  • AI becoming less data-hungry
  • Improved accuracy and efficiency of neural networks
  • Automated AI development
  • Expanded use of AI in manufacturing
  • Geopolitical implications for the uses of AI

Artificial Intelligence offers great potential and great risks for humans in the future. While still in its infancy, it is already being employed in some interesting ways.

According to sources like Forbes, some of the next “big things” in technology include, but are not limited to:

  • Blockchain
  • Blockchain As A Service
  • AI-Led Automation
  • Machine Learning
  • Enterprise Content Management
  • AI For The Back Office
  • Quantum Computing AI Applications
  • Mainstreamed IoT

Also see:

Artificial intelligence predictions for 2020: 16 experts have their say — from verdict.co.uk by Ellen Daniel

Excerpts:

  • Organisations will build in processes and policies to prevent and address potential biases in AI
  • Deepfakes will become a serious threat to corporations
  • Candidate (and employee) care in the world of artificial intelligence
  • AI will augment humans, not replace them
  • Greater demand for AI understanding
  • Ramp up in autonomous vehicles
  • To fully take advantage of AI technologies, you’ll need to retrain your entire organisation
  • Voice technologies will infiltrate the office
  • IT will run itself while data acquires its own DNA
  • The ethics of AI
  • Health data and AI
  • AI to become an intrinsic part of robotic process automation (RPA)
  • BERT will open up a whole new world of deep learning use cases

The hottest trend in the industry right now is in Natural Language Processing (NLP). Over the past year, a new method called BERT (Bidirectional Encoder Representations from Transformers) has been developed for designing neural networks that work with text. Now, we suddenly have models that will understand the semantic meaning of what’s in text, going beyond the basics. This creates a lot more opportunity for deep learning to be used more widely.
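
From DSC:
For readers wondering what “understanding the semantic meaning” looks like in practice, here is a minimal sketch using the open-source Hugging Face transformers library and the publicly released bert-base-uncased model. The sentences are made up, and mean-pooling is just one simple way to turn BERT’s token vectors into a sentence vector:

    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    def embed(sentence):
        # Tokenize, run BERT, and mean-pool the last hidden layer into one vector.
        inputs = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            output = model(**inputs)
        return output.last_hidden_state.mean(dim=1).squeeze()

    a = embed("The attorney reviewed the agreement.")
    b = embed("A lawyer looked over the contract.")
    c = embed("The drone delivered a pizza.")

    cos = torch.nn.functional.cosine_similarity
    print(cos(a, b, dim=0))  # paraphrases: should score noticeably higher
    print(cos(a, c, dim=0))  # unrelated sentence: should score lower

The point of BERT’s bidirectional training is that “attorney/agreement” and “lawyer/contract” end up near each other in vector space even though the surface words differ.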

 

 

Don’t trust AI until we build systems that earn trust — from economist.com
Progress in artificial intelligence belies a lack of transparency that is vital for its adoption, says Gary Marcus, coauthor of “Rebooting AI”

Excerpts:

Mr Marcus argues that it would be foolish of society to put too much stock in today’s AI techniques since they are so prone to failures and lack the transparency that researchers need to understand how algorithms reached their conclusions.

As part of The Economist’s Open Future initiative, we asked Mr Marcus about why AI can’t do more, how to regulate it and what teenagers should study to remain relevant in the workplace of the future.

Trustworthy AI has to start with good engineering practices, mandated by laws and industry standards, both of which are currently largely absent. Too much of AI thus far has consisted of short-term solutions: code that gets a system to work immediately, without a critical layer of engineering guarantees that are often taken for granted in other fields. The kinds of stress tests that are standard in the development of an automobile (such as crash tests and climate challenges), for example, are rarely seen in AI. AI could learn a lot from how other engineers do business.

The assumption in AI has generally been that if it works often enough to be useful, then that’s good enough, but that casual attitude is not appropriate when the stakes are high. It’s fine if autotagging people in photos turns out to be only 90 percent reliable—if it is just about personal photos that people are posting to Instagram—but it better be much more reliable when the police start using it to find suspects in surveillance photos.
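
From DSC:
One concrete form such an engineering guarantee could take is an automated release gate: the system cannot ship unless it clears a stress-test suite at a threshold matched to the stakes. A hypothetical sketch in Python (the numbers and use cases are invented for illustration):

    # Different uses of the same model demand different reliability,
    # so each use case gets its own bar to clear before deployment.
    THRESHOLDS = {
        "photo_autotagging": 0.90,        # low stakes: personal photos
        "police_suspect_search": 0.999,   # high stakes: someone's liberty
    }

    def passes_gate(use_case, correct, total):
        # Measured accuracy on a held-out stress-test suite must meet the bar.
        accuracy = correct / total
        return accuracy >= THRESHOLDS[use_case]

    print(passes_gate("photo_autotagging", correct=905, total=1000))      # True
    print(passes_gate("police_suspect_search", correct=905, total=1000))  # False

Crash tests for cars work the same way: the bar is set by the consequences of failure, not by whatever accuracy the system happens to achieve.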

 

120 AI predictions for 2020 — from forbes.com by Gil Press

Excerpt:

As for the universe, it is an open book for the 120 senior executives featured here, all involved with AI, delivering 2020 predictions for a wide range of topics: Autonomous vehicles, deepfakes, small data, voice and natural language processing, human and augmented intelligence, bias and explainability, edge and IoT processing, and many promising applications of artificial intelligence and machine learning technologies and tools. And there will be even more 2020 AI predictions, in a second installment to be posted here later this month.

 

2019 AI report tracks profound growth — from ide.mit.edu by Paula Klein

Excerpt:

Until now “we’ve been sorely lacking good data about basic questions like ‘How is the technology advancing’ and ‘What is the economic impact of AI?’ ” Brynjolfsson said. The new index, which tracks three times as many data sets as last year’s report, goes a long way toward providing answers.

  1. Education
  • At the graduate level, AI has rapidly become the most popular specialization among computer science PhD students in North America. In 2018, over 21% of graduating Computer Science PhDs specialized in Artificial Intelligence/Machine Learning.
  • Industry is the largest consumer of AI talent. In 2018, over 60% of AI PhD graduates went to industry, up from 20% in 2004.
  • In the U.S., AI faculty leaving academia for industry continues to accelerate, with over 40 departures in 2018, up from 15 in 2012 and none in 2004.

 


 

Why AI is a threat to democracy – and what we can do to stop it — from asumetech.com by Lawrence Cole

Excerpts:

In the US, however, we also have a tragic lack of foresight. Instead of creating a grand strategy for AI or for our long-term futures, the federal government has pulled back funding for scientific and technical research. The money must therefore come from the private sector, but investors also expect a certain return, and that is a problem. You cannot plan your R&D breakthroughs when working on fundamental technology and research. It would be great if the big tech companies had the luxury of working very hard without having to organize an annual conference to show off their newest and best whiz-bang thing. Instead, we now have countless examples of bad decisions made by someone in the G-MAFIA, probably because they worked quickly. We are beginning to see the negative effects of the tension between doing research that is in the interest of humanity and making investors happy.

The problem is that our technology has become increasingly sophisticated, but our thinking about what free speech is and what a free market economy looks like has not become that advanced. We tend to resort to very basic interpretations: free speech means that all speech is free, unless it conflicts with defamation laws, and that’s the end of the story. That is not the end of the story. We need to start a more sophisticated and intelligent conversation about our current laws, our emerging technology, and how we can make the two meet halfway.

 

So I absolutely believe that there is a way forward. But we have to come together and bridge the gap between Silicon Valley and DC, so that we can all steer the boat in the same direction.

— Amy Webb, futurist, NYU professor, founder of the Future Today Institute

 

Also see:

“FRONTLINE investigates the promise and perils of artificial intelligence, from fears about work and privacy to rivalry between the U.S. and China. The documentary traces a new industrial revolution that will reshape and disrupt our lives, our jobs and our world, and allow the emergence of the surveillance society.”

The film has five distinct messages about:

1. China’s AI Plan
2. The Promise of AI
3. The Future of Work
4. Surveillance Capitalism
5. The Surveillance State

 