Dueling neural networks. Artificial embryos. AI in the cloud. Welcome to our annual list of the 10 technology advances we think will shape the way we work and live now and for years to come.
Every year since 2001 we’ve picked what we call the 10 Breakthrough Technologies. People often ask, what exactly do you mean by “breakthrough”? It’s a reasonable question—some of our picks haven’t yet reached widespread use, while others may be on the cusp of becoming commercially available. What we’re really looking for is a technology, or perhaps even a collection of technologies, that will have a profound effect on our lives.
…
3-D Metal Printing
Artificial Embryos
Sensing City
AI for Everybody
Dueling Neural Networks
Babel-Fish Earbuds
In the cult sci-fi classic The Hitchhiker’s Guide to the Galaxy, you slide a yellow Babel fish into your ear to get translations in an instant. In the real world, Google has come up with an interim solution: a $159 pair of earbuds, called Pixel Buds. These work with its Pixel smartphones and Google Translate app to produce practically real-time translation. One person wears the earbuds, while the other holds a phone. The earbud wearer speaks in his or her language—English is the default—and the app translates the speech and plays it aloud on the phone. The person holding the phone responds; this response is translated and played through the earbuds.
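From DSC: The interaction loop described above is simple enough to sketch in code. Below is a rough approximation of the general pipeline such a product implies (speech recognition, then machine translation, then speech synthesis), written with common Python libraries. It is only a sketch of the concept, not Google’s actual implementation, and the particular libraries are just one plausible stack.

```python
# A rough sketch of the recognize -> translate -> speak loop that a product
# like Pixel Buds implies. NOT Google's implementation; just one plausible
# stack built from commonly available Python libraries.
import speech_recognition as sr                      # pip install SpeechRecognition
from google.cloud import translate_v2 as translate   # pip install google-cloud-translate
from gtts import gTTS                                # pip install gTTS

def translate_turn(source_lang="en", target_lang="fr"):
    """Capture one utterance, translate it, and synthesize the reply audio."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as mic:
        audio = recognizer.listen(mic)               # one speaker's turn
    text = recognizer.recognize_google(audio, language=source_lang)
    result = translate.Client().translate(text, target_language=target_lang)
    # In the product, this audio would be played on the phone's speaker.
    gTTS(result["translatedText"], lang=target_lang).save("reply.mp3")
    return result["translatedText"]
```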
It’s not hard to imagine a world in which social media is awash with doctored videos targeting ordinary people to exact revenge, to extort, or simply to troll.
In that scenario, where Twitter and Facebook are algorithmically flooded with hoaxes, no one could fully believe what they see. Truth, already diminished by Russia’s misinformation campaign and President Trump’s proclivity to label uncomplimentary journalism “fake news,” would be more subjective than ever.
The danger there is not just believing hoaxes, but also dismissing what’s real.
…
The consequences could be devastating for the notion of evidentiary video, long considered the paradigm of proof given the sophistication required to manipulate it.
“This goes far beyond ‘fake news’ because you are dealing with a medium, video, that we traditionally put a tremendous amount of weight on and trust in,” said David Ryan Polgar, a writer and self-described tech ethicist.
From DSC: Though I’m typically pro-technology, this is truly disturbing. There are certainly downsides to technology as well as upsides — but it’s how we use a technology that can make the real difference. Again, this is truly disturbing.
1 in 5 workers will have AI as their co-worker in 2022
More job roles will change than will become totally automated, so HR needs to prepare today
…
As we increase our personal usage of chatbots (defined as software which provides an automated, yet personalized, conversation between itself and human users), employees will soon interact with them in the workplace as well. Forward-looking HR leaders are piloting chatbots now to transform HR and, in the process, re-imagine, re-invent, and re-tool the employee experience.
How does all of this impact HR in your organization? The following ten HR trends will matter most as AI enters the workplace…
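From DSC: Regarding the definition of a chatbot above, the underlying pattern is simply intent matching over user input plus a little stored state for personalization. The toy sketch below (all rules and field names are hypothetical) shows that basic shape; production HR chatbots layer trained language models on top of the same loop.

```python
# Toy illustration of the chatbot pattern: match an intent, personalize the
# reply. Rules and fields are hypothetical; real HR chatbots use NLP models.
RESPONSES = {
    "vacation": "You have {days} vacation days remaining, {name}.",
    "payroll":  "Payday is the last business day of the month, {name}.",
    "benefits": "Your benefits summary is on the HR portal, {name}.",
}

def reply(message: str, employee: dict) -> str:
    text = message.lower()
    for intent, template in RESPONSES.items():
        if intent in text:
            return template.format(**employee)
    return "Sorry, I can help with vacation, payroll, or benefits questions."

print(reply("How many vacation days do I have left?",
            {"name": "Sam", "days": 12}))
```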
…
The most visible aspect of how HR is being impacted by artificial intelligence is the change in the way companies source and recruit new hires. Most notably, IBM has created a suite of tools that use machine learning to help candidates personalize their job search experience based on the engagement they have with Watson. In addition, Watson is helping recruiters prioritize jobs more efficiently, find talent faster, and match candidates more effectively. According to Amber Grewal, Vice President, Global Talent Acquisition, “Recruiters are focusing more on identifying the most critical jobs in the business and on utilizing data to assist in talent sourcing.”
…as we enter 2018, the next journey for HR leaders will be to leverage artificial intelligence combined with human intelligence and create a more personalized employee experience.
From DSC: Although I like the possibility of using machine learning to help employees navigate their careers, I have some very real concerns when we talk about using AI for talent acquisition. At this point in time, I would much rather have an experienced human being — one with a solid background in HR — reviewing my resume to see if they believe that there’s a fit for the job and/or determine whether my skills transfer over from a different position/arena or not. I don’t think we’re there yet in terms of developing effective/comprehensive enough algorithms. It may happen, but I’m very skeptical in the meantime. I don’t want to be filtered out just because I didn’t use the right keywords enough times or I used a slightly different keyword than what the algorithm was looking for.
Also, there is definitely age discrimination occurring in today’s workplace, especially in tech-related positions. Folks in tech over the age of 30-35 — don’t lose your job! (Go check out the topic of age discrimination on LinkedIn and similar sites, and you’ll find many postings on this topic — sometimes with tens of thousands of older employees adding comments/likes to a posting.) Although I doubt that any company would allow applicants or the public to see their internally-used algorithms, how difficult would it be to filter out applicants who graduated college prior to ___ (i.e., some year that gets updated on an annual basis)? Answer? Not difficult at all. In fact, that’s at the level of a Programming 101 course.
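To make that point concrete: a graduation-year cutoff really is just a few lines of code. The sketch below is entirely hypothetical (no vendor’s code is being quoted), but it shows how little effort it would take to hide an age proxy inside a screening pipeline.

```python
from datetime import date

# Hypothetical illustration of how trivially an ATS could screen out older
# applicants via a graduation-year proxy. Programming 101, as noted above.
CUTOFF_YEARS_AGO = 15  # "updated on an annual basis"

def passes_grad_year_filter(applicant: dict) -> bool:
    cutoff = date.today().year - CUTOFF_YEARS_AGO
    return applicant["college_grad_year"] >= cutoff

applicants = [
    {"name": "Applicant A", "college_grad_year": 1998},
    {"name": "Applicant B", "college_grad_year": 2015},
]
# Applicant A silently disappears from the pool before any human review.
shortlist = [a for a in applicants if passes_grad_year_filter(a)]
```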
From DSC: “Person of Interest” comes to mind after reading this article. Person of Interest is a clever, well-done show, but still…the idea of combining surveillance w/ a super-intelligent #AI is a bit unnerving.
Suncorp has revealed it is exploring image recognition and augmented reality-based enhancements for its insurance claims process, adding to the AI systems it deployed last year.
The insurer began testing IBM Watson software last June to automatically determine who is at fault in a vehicle accident.
…
“We are working on increasing our use of emerging technologies to assist with the insurance claim process, such as using image recognition to assess type and extent of damage, augmented reality that would enable an off-site claims assessor to discuss and assess damage, speech recognition, and obtaining telematic data from increasingly automated vehicles,” the company said.
Artificial intelligence and machine learning tools are already embedded in our lives, but how should businesses that use such technology manage the associated risks?
As artificial intelligence (AI) penetrates deeper into business operations and services, even supporting judicial decision-making, are we approaching a time when the greatest legal mind could be a machine? According to Prof Dame Wendy Hall, co-author of the report Growing the Artificial Intelligence Industry in the UK, we are just at the beginning of the AI journey and now is the time to set boundaries.
“All tech has the power to do harm as well as good,” Hall says. “So we have to look at regulating companies and deciding what they can and cannot do with the data now.”
AI and robotics professor Noel Sharkey highlights the “legal and moral implications of entrusting human decisions to algorithms that we cannot fully understand”. He explains that the narrow AI systems that businesses currently use (to draw inferences from large volumes of data) apply algorithms that learn from experience and feed back to real-time and historical data. But these systems are far from perfect.
Potential results include flawed outcomes or reasoning, but difficulties also arise from the lack of transparency. This supports Hall’s call for supervision and regulation. Businesses that use AI in their operations need to manage the ethical and legal risks, and the legal profession will have a major role to play in assessing and apportioning risk, responsibility and accountability.
The top 10 technology trends predicted to reach adoption in 2018 are:
Deep learning (DL).
Digital currencies.
Blockchain.
Industrial IoT.
Robotics.
Assisted transportation.
Assisted reality and virtual reality (AR/VR).
Ethics, laws, and policies for privacy, security, and liability.
Accelerators and 3D.
Cybersecurity and AI.
Existing Technologies: We did not include the following technologies in our top 10 list as we assume that they have already experienced broad adoption:
A. Data science
B. “Cloudification”
C. Smart cities
D. Sustainability
E. IoT/edge computing
What will that future be? When it comes to jobs, the tea leaves are indecipherable as analysts grapple with emerging technologies, new fields of work, and skills that have yet to be conceived. The only certainty is that jobs will change. Consider the conflicting predictions put forth by the analyst community:
According to the Organisation for Economic Co-operation and Development, only 5-10% of labor would be displaced by intelligent automation, and new job creation will offset losses. (Inserted comment from DSC: Hmmm. ONLY 5-10%!? What?! That’s huge! And don’t count on the majority of those people becoming experts in robotics, algorithms, big data, AI, etc.)
The World Economic Forum said in 2016 that 60% of children entering school today will work in jobs that do not yet exist.
47% of all American job functions could be automated within 20 years, according to a 2013 report from the Oxford Martin School.
In 2016, a KPMG study estimated that 100 million global knowledge workers could be affected by robotic process automation by 2025.
Despite the conflicting views, most analysts agree on one thing: big change is coming. Venture capitalist David Vandergrift has some words of advice: “Anyone not planning to retire in the next 20 years should be paying pretty close attention to what’s going on in the realm of AI. The supplanting (of jobs) will not happen overnight: the trend over the next couple of decades is going to be towards more and more automation.”
While analysts may not agree on the timing of AI’s development in the economy, many companies are already seeing its impact on key areas of talent and business strategy. AI is replacing jobs, changing traditional roles, applying pressure on knowledge workers, creating new fields of work, and raising the demand for certain skills.
The emphasis on learning is a key change from previous decades and rounds of automation. Advanced AI is, or will soon be, capable of displacing a very wide range of labor, far beyond the repetitive, low-skill functions traditionally thought to be at risk from automation. In many cases, the pressure on knowledge workers has already begun.
Regardless of industry, however, AI is a real challenge to today’s way of thinking about work, value, and talent scarcity. AI will expand and eventually force many human knowledge workers to reinvent their roles to address issues that machines cannot process. At the same time, AI will create a new demand for skills to guide its growth and development. These emerging areas of expertise will likely be technical or knowledge-intensive fields. In the near term, the competition for workers in these areas may change how companies focus their talent strategies.
We need academia to step up to fill in the gaps in our collective understanding about the new role of technology in shaping our lives. We need robust research on hiring algorithms that seem to filter out people with mental health disorders, sentencing algorithms that fail twice as often for black defendants as for white defendants, statistically flawed public teacher assessments or oppressive scheduling algorithms. And we need research to ensure that the same mistakes aren’t made again and again. It’s absolutely within the abilities of academic research to study such examples and to push against the most obvious statistical, ethical or constitutional failures and dedicate serious intellectual energy to finding solutions. And whereas professional technologists working at private companies are not in a position to critique their own work, academics theoretically enjoy much more freedom of inquiry.
There is essentially no distinct field of academic study that takes seriously the responsibility of understanding and critiquing the role of technology — and specifically, the algorithms that are responsible for so many decisions — in our lives.
There’s one solution for the short term. We urgently need an academic institute focused on algorithmic accountability. First, it should provide a comprehensive ethical training for future engineers and data scientists at the undergraduate and graduate levels, with case studies taken from real-world algorithms that are choosing the winners from the losers. Lecturers from humanities, social sciences and philosophy departments should weigh in.
Somewhat related:
More than 50 experts just told DHS that using AI for “extreme vetting” is dangerously misguided — from qz.com by Dave Gershgorn
Excerpt:
A group of experts from Google, Microsoft, MIT, NYU, Stanford, Spotify, and AI Now are urging (pdf) the Department of Homeland Security to reconsider using automated software powered by machine learning to vet immigrants and visitors trying to enter the United States.
This is a new series about cameras and their relationship to face recognition, machine learning, and how, in the future, the ways in which we interact with technology will be radically different.
Excerpt (emphasis DSC):
First, the data. LDV Capital, a venture capital firm focused on visual technologies, recently published a 19-page report thick with some pretty eye-opening data around cameras.
Specifically, how many cameras we can expect to have watching us, what they are watching us for, and how those insights will be used.
According to their study, by 2022 there will be more than 44,354,881,622 (that’s 44 BILLION) cameras in use globally, collecting even more billions of images for visual collateral. This is incredible — but what’s interesting is that most of these images will never be seen by human eyes.
From DSC: Though the author asserts there will be great business opportunities surrounding this trend, I’m not sure that I’m comfortable with it. Embedded cameras everywhere…hmmm…what he calls a privilege (in the quote below), I see as an overstepping of boundaries.
We have the privilege of experiencing the actual evolution of a device that we have come to know as one thing, for all of our lives to this point, into something completely different, to the extent that the word “camera”, itself, is becoming outdated.
Until this month, it seemed that GAN-generated images [where GAN stands for “generative adversarial networks”] that could fool a human viewer were years off. But last week research released by Nvidia, a manufacturer of graphics processing units that has cornered the market on deep learning hardware, shows that this method can now be used to generate high-resolution, believable images of celebrities, scenery, and objects. GAN-created images are also already being sold as replacements for fashion photographers — a startup called Mad Street Den told Quartz earlier this month it’s working with North American retailers to replace clothing images on websites with generated images.
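From DSC: For readers wondering what the “dueling neural networks” phrase at the top of this post actually means in code, here is a minimal sketch of one GAN training step in PyTorch (toy layer sizes, chosen purely for illustration). A generator learns to produce samples and a discriminator learns to flag them as fake; each network’s loss is the other’s training signal. Nvidia’s celebrity-image results use vastly larger, progressively grown networks, but the duel at the core is this one.

```python
# Minimal GAN training step (PyTorch). Toy sizes for illustration only.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(),
                  nn.Linear(128, 784), nn.Tanh())         # noise -> fake image
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1), nn.Sigmoid())        # image -> P(real)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images: torch.Tensor):
    batch = real_images.size(0)
    fake_images = G(torch.randn(batch, 64))

    # Discriminator turn: label real images 1, generated images 0.
    opt_d.zero_grad()
    d_loss = (bce(D(real_images), torch.ones(batch, 1)) +
              bce(D(fake_images.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator turn: try to make the discriminator call the fakes real.
    opt_g.zero_grad()
    g_loss = bce(D(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

train_step(torch.randn(32, 784))  # stand-in for a batch of real images
```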
From DSC: So AI can now generate realistic photos (i.e., image creation/manipulation). And then there’s Adobe’s VoCo project, a sort of Photoshop for audio manipulation, plus other related technologies out there:
The era of easily-faked photos is quickly emerging—much as it did when Photoshop became widely prevalent—so it’s a good time to remember we shouldn’t trust everything we see.
…and perhaps we’ll need to add, “we shouldn’t trust everything we hear either.” But how will the average person with average tools know the real deal? The concept of watermarking visuals/audio may become increasingly important. From the end of the bbc.com article:
For its part, Adobe has talked of its customers using Voco to fix podcast and audio book recordings without having to rebook presenters or voiceover artists.
But a spokeswoman stressed that this did not mean its release was imminent.
“[It] may or may not be released as a product or product feature,” she told the BBC.
“No ship date has been announced.”
In the meantime, Adobe said it was researching ways to detect use of its software.
“Think about watermarking detection,” Mr Jin said at the demo, referring to a method used to hide identifiers in images and other media.
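From DSC: To give a feel for what watermarking means in practice, here is the classic textbook version of the idea: hide an identifier in bits the human eye (or ear) ignores. This least-significant-bit sketch is only an illustration of the concept Mr Jin alludes to (Adobe’s actual detection method has not been made public); real systems use far more robust schemes.

```python
# Classic least-significant-bit (LSB) watermarking, for illustration only.
# Real watermarking schemes are far more robust to compression and editing.
import numpy as np

def embed(pixels: np.ndarray, tag: bytes) -> np.ndarray:
    """Hide `tag` in the lowest bit of the first len(tag)*8 pixels."""
    bits = np.unpackbits(np.frombuffer(tag, dtype=np.uint8))
    out = pixels.flatten().copy()
    out[:bits.size] = (out[:bits.size] & 0xFE) | bits
    return out.reshape(pixels.shape)

def extract(pixels: np.ndarray, n_bytes: int) -> bytes:
    bits = pixels.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
marked = embed(image, b"editor-42")                           # hypothetical ID
assert extract(marked, 9) == b"editor-42"                     # invisible to the eye
```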
But again, we see that technology often races ahead. “Look at what we can do!” But then the rest of society — laws, policies, debates about whether we should roll out such technologies at all — needs time to catch up. Morals and ethics do come into play here, as trust levels are most assuredly at stake.
Another relevant article/topic/example of this is listed below. (Though I’m not trying to say that we shouldn’t pursue self-driving cars. Rather, the topic serves as another example of technologies racing ahead while it takes a while for the rest of us/society to catch up with them).
Nvidia CEO: Expect autonomous vehicles to hit the streets by 2021 — from bizjournals.com by Gina Hall
More than 40 companies, including Intel, Nvidia, Waymo, Tesla, Uber Technologies and Samsung, have secured permits from the California DMV to test self-driving cars on public roads.
Artificial intelligence is predicted to fundamentally alter the nature of society by 2040. Investment in AI start-ups was estimated at US$6-9 billion in 2016, up from US$415 million four years earlier. While futurist Ray Kurzweil argues that AI will help us to address the grand challenges facing humanity, Elon Musk warns us that artificial intelligence will be our “biggest existential threat.” Others argue that artificial intelligence is the future of growth. Everything depends on how we manage the transition to this AI era.
In 2016 the Obama administration released a national strategic plan for artificial intelligence and, while we do not all suddenly need a plan for artificial intelligence, we do need to stay up to date on how AI is being implemented. Much of AI’s potential is yet to be realized, but AI is already running our lives, from Siri to Netflix recommendations to automated air traffic control. We all need to become more aware of how we are algorithmically shaped by our tools.
This Australian discussion paper on the implications of AI, automation and 21st-century skills shows how AI will not just affect blue-collar truck drivers and cleaners; it will also affect white-collar lawyers and doctors. Automated pharmacy systems with robots dispensing medication exist, Domino’s pizza delivery by drone has already occurred, and a fully automated farm is opening in Japan.
Education reformers need to plan for our AI-driven future and its implications for education, both in schools and beyond. The never-ending debate about the sorts of skills needed in the future and the role of schools in teaching and assessing them is becoming a whole lot more urgent and intense.
The right to due process was inscribed into the US constitution with a pen. A new report from leading researchers in artificial intelligence cautions it is now being undermined by computer code.
Public agencies responsible for areas such as criminal justice, health, and welfare increasingly use scoring systems and software to steer or make decisions on life-changing events like granting bail, sentencing, enforcement, and prioritizing services. The report from AI Now, a research institute at NYU that studies the social implications of artificial intelligence, says too many of those systems are opaque to the citizens they hold power over.
The AI Now report calls for agencies to refrain from what it calls “black box” systems opaque to outside scrutiny. Kate Crawford, a researcher at Microsoft and cofounder of AI Now, says citizens should be able to know how systems making decisions about them operate and have been tested or validated. Such systems are expected to get more complex as technologies such as machine learning used by tech companies become more widely available.
“We should have equivalent due-process protections for algorithmic decisions as for human decisions,” Crawford says. She says it can be possible to disclose information about systems and their performance without disclosing their code, which is sometimes protected intellectual property.
“We announce the appointment of a minister for artificial intelligence. The next global wave is artificial intelligence and we want the UAE to be more prepared for it.”
Tech Giants Are Paying Huge Salaries for Scarce A.I. Talent — from nytimes.com by Cade Metz
Nearly all big tech companies have an artificial intelligence project, and they are willing to pay experts millions of dollars to help get it done.
Excerpt:
Tech’s biggest companies are placing huge bets on artificial intelligence, banking on things ranging from face-scanning smartphones and conversational coffee-table gadgets to computerized health care and autonomous vehicles. As they chase this future, they are doling out salaries that are startling even in an industry that has never been shy about lavishing a fortune on its top talent.
Typical A.I. specialists, including both Ph.D.s fresh out of school and people with less education and just a few years of experience, can be paid from $300,000 to $500,000 a year or more in salary and company stock, according to nine people who work for major tech companies or have entertained job offers from them. All of them requested anonymity because they did not want to damage their professional prospects.
With so few A.I. specialists available, big tech companies are also hiring the best and brightest of academia. In the process, they are limiting the number of professors who can teach the technology.
From DSC: I know Quentin Schultze from our years working together at Calvin College, in Grand Rapids, Michigan (USA). I have come to greatly appreciate Quin as a person of faith, as an innovative/entrepreneurial professor, as a mentor to his former students, and as an excellent communicator.
Quin has written a very concise, wisdom-packed book that I would like to recommend to those people who are seeking to be better communicators, leaders, and servants. But I would especially like to recommend this book to the leadership at Google, Amazon, Apple, Microsoft, IBM, Facebook, Nvidia, the major companies developing robots, and other high-tech companies. Why do I list these organizations? Because given the exponential pace of technological change, these organizations — and their leaders — have an enormous responsibility to make sure that the technologies that they are developing result in positive changes for societies throughout the globe. They need wisdom, especially as they are working on emerging technologies such as Artificial Intelligence (AI), personal assistants and bots, algorithms, robotics, the Internet of Things, big data, blockchain and more. These technologies continue to exert an increasingly powerful influence on numerous societies throughout the globe today. And we haven’t seen anything yet! Just because we can develop and implement something, doesn’t mean that we should. Again, we need wisdom here.
But as Quin states, it’s not just about knowledge, the mind and our thoughts. It’s about our hearts as well. That is, we need leaders who care about others, who can listen well to others, who can serve others well while avoiding gimmicks, embracing diversity, building trust, fostering compromise and developing/exhibiting many of the other qualities that Quin writes about in his book. Our societies desperately need leaders who care about others and who seek to serve others well.
I highly recommend you pick up a copy of Quin’s book. There are few people who can communicate as much in as few words as Quin can. In fact, I wish that more writing on the web and more articles/research coming out of academia would be as concisely and powerfully written as Quin’s book, Communicate Like a True Leader: 30 Days of Life-Changing Wisdom.
To lead is to accept responsibility and act responsibly.
— Quentin Schultze
Description:
Algorithms decide who gets a loan, who gets a job interview, who gets insurance and much more — but they don’t automatically make things fair. Mathematician and data scientist Cathy O’Neil coined a term for algorithms that are secret, important and harmful: “weapons of math destruction.” Learn more about the hidden agendas behind the formulas.
David Danks, head of the philosophy department at Carnegie Mellon University, has a message for his colleagues in the CMU robotics department: As they invent and develop the technologies of the future, he encourages them to consider the human dimensions of their work.
His concern? All too often, Danks says, technological innovation ignores the human need for ethical guidelines and moral standards. That’s especially true when it comes to innovations such as artificial intelligence and automation, he says.
“It’s, ‘Look at this cool technology that we’ve got. How can you stand in the way of something like this?’” says Danks. “We should be saying, ‘Wait a second. How is this technology affecting people?’”
As an example, Danks points to AI-powered medical diagnostic systems. Such tools have great potential to parse data for better decision-making, but they lack the social interaction between patient and physician that can be so important to those decisions. It’s one thing to have a technology that can diagnose a patient with strep throat and recommend a certain antibiotic, but what about a patient with cancer who happens to be a professional violinist?
“For most people, you’d just give them the most effective drug,” says Danks. “But what do you do if one of the side effects of that medication is hand tremors? I see a lot of possibilities with AI, but it’s also important to recognize the challenges.”
From DSC: I’ve been thinking about Applicant Tracking Systems (ATSs) for a while now, but the article below made me revisit my reflections on them. (By the way, my thoughts below are not meant to be a slam on Google. I like Google and I use their tools daily.) I’ve included a few items below; I had seen other articles and vendors’ products that focused specifically on ATSs, but I couldn’t locate them all.
In mid-June, Google announced the implementation of an AI-powered search function aimed at connecting job seekers with jobs by sorting through posted recruitment information. The system allows users to search for basic phrases, such as “jobs near me,” or perform searches for industry-specific keywords. The search results can include reviews from Glassdoor or other companies, along with the details of what skills the hiring company is looking to acquire.
As this is a relatively new development, what the system will mean is still an open question. To help, members from the Forbes Coaches Council offer their analysis on how the search system will impact candidates or companies. Here’s what they said…
5. Expect competition to increase. Google jumping into the job search market may make it easier than ever to apply for a role online. For companies, this could likely tax the already-strained ATS systems and, unless fixed, could mean many more resumes falling into that “black hole.” For candidates, competition might be steeper than ever, which means networking will be even more important to job search success. – Virginia Franco
10. Understanding keywords and trending topics will be essential. Since Google’s AI is based on crowd-gathered metrics, the importance of keywords and understanding trending topics is essential for both employers and candidates. Standing out from the crowd or getting relevant results will be determined by how well you speak the expected language of the AI. Optimizing for the search engine’s results pages will make or break your search for a job or candidate. – Maurice Evans, IGROWyourBiz, Inc
Before then, 21-year-old Ms. Jaffer had filled out a job application, played a set of online games and submitted videos of herself responding to questions about how she’d tackle challenges of the job. The reason she found herself in front of a hiring manager? A series of algorithms recommended her.
The debate is on, whether man or machine will win the race, as they are pitted against each other in every walk of life. Experts are already worried about the social disruption that is inevitable as artificial intelligence (AI)-led robots take over the jobs of human beings, leaving them without livelihoods. The same is believed to happen to the HR profession, says a report by CareerBuilder. HR jobs are at threat, like all other jobs out there, as we can expect certain roles in talent acquisition, talent management, and mainstream business being automated over the next 10 years. To delve deeper into the imminent problem, CareerBuilder carried out a study of 719 HR professionals in the private sector, specifically looking at the rate of adoption of emerging technologies in HR and what HR professionals perceived about it.
The change is happening for real, though different companies are adopting technologies at varied paces. Most companies are turning to the new-age technologies to help carry out talent acquisition and management tasks that are time-consuming and labor-intensive.
From DSC: Are you aware that if you apply for a job at many organizations nowadays, your resume has a significant chance of never making it in front of a human’s eyes for review? Were you aware that an Applicant Tracking System (an ATS) will likely siphon off and filter out your resume unless it contains exactly the right keywords, mentioned the optimal number of times?
And were you aware that many advisors assert that you should use a 1 page resume — a 2 page resume at most? Well…assuming that you have to edit big time to get to a 1-2 page resume, how does that editing help you get past the ATSs out there? When you significantly reduce your resume’s size/information, you hack out numerous words that the ATS may be scanning for. (BTW, advisors recommend creating a Wordle from the job description to ascertain the likely keywords; but still, you don’t know which exact keywords the ATS will be looking for in your specific case/job application and how many times to use those keywords. Numerous words can be of similar size in the resulting Wordle graphic…so is that 1-2 page resume helping you or hurting you when you can only submit 1 resume for a position/organization?)
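A crude version of this keyword matching fits in a dozen lines, which is part of why the guessing game is so frustrating. In the hypothetical sketch below, notice that “JavaScript” earns zero credit against a posting that asked for “ECMAScript”: exact-match scoring has no notion of synonyms, let alone transferable skills.

```python
import re
from collections import Counter

# Hypothetical exact-match keyword screening of the kind many ATSs apply.
# Synonyms and transferable skills earn zero credit.
def passes_keyword_screen(resume: str, required: dict) -> bool:
    words = Counter(re.findall(r"[a-z+#.]+", resume.lower()))
    return all(words[kw] >= count for kw, count in required.items())

required = {"ecmascript": 2, "agile": 1}   # whatever the recruiter typed in
resume = "Eight years of JavaScript on agile teams; JavaScript mentor and lead."
print(passes_keyword_screen(resume, required))  # False -- filtered out
```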
Vendors are hailing these ATS systems as being major productivity boosters for their HR departments…and that might be true in some cases. But my question is, at what cost?
At this point in time, I still believe that humans are better than software/algorithms at making judgement calls. Perhaps I’m giving hiring managers too much credit, but I’d rather have a human being make the call at this point. I want a pair of human eyeballs to scan my resume, not a (potentially) narrowly defined algorithm. A human being might see transferable skills better than a piece of code at this point.
Just so you know…in light of these keyword-based means of passing through the first layer of filtering, people are now playing games with their resumes and are often stretching the truth — if not outright lying:
A new study shows huge increase in lies on job applications.
Excerpt (emphasis DSC):
Employer Applicant Tracking Systems Expect an Exact Match
Most companies use some form of applicant tracking system (ATS) to take in résumés, sort through them, and narrow down the applicant pool. With the average job posting getting more than 100 applicants, recruiters don’t want to go bleary-eyed sorting through them. Instead, they let the ATS do the dirty work by telling it to pass along only the résumés that match their specific requirements for things like college degrees, years of experience, and salary expectations. The result? Job seekers have gotten wise to the finicky nature of the technology and are lying on their résumés and applications in hopes of making the cut.
From DSC:
I don’t see this as being very helpful. But perhaps that’s because I don’t like playing games with people and/or with other organizations. I’m not a game player. What you see is what you get. I’ll be honest and transparent about what I can — and can’t — deliver.
But students, you should know that these ATS systems are in place. Those of us in higher education should know about these ATS systems, as many of us are being negatively impacted by the current landscape within higher education.
Is Your Resume ATS Friendly?
Did you know that an ATS (applicant tracking system) will play a major role in whether or not your resume is selected for further review when you’re applying to opportunities through online job boards?
It’s true. When you apply to a position a company has posted online, a human usually isn’t the first to review your resume; a computer program is. Scouring your resume for keywords, terminology and phrases the hiring manager is targeting, the program will toss your resume if it can’t understand the content it’s reading. Basically, your resume doesn’t stand a chance of making it to the next level if it isn’t optimized for ATS.
To ensure your resume makes it past the evil eye of ATS, format your resume correctly for applicant tracking programs, target it to the opportunity and check for spelling errors. If you don’t, you’re wasting your time applying online.
Hackers attempt to hook tens of thousands of people like Mr. Latif through job scams each year, according to U.S. Federal Trade Commission data, aiming to trick them into handing over personal or sensitive information, or to gain access to their corporate networks.
Employment fraud is nothing new, but as more companies shift to entirely digital job application processes, Better Business Bureau director of communications Katherine Hutt said scams targeting job seekers pose a growing threat. Job candidates are now routinely invited to fill out applications, complete skill evaluations and interview—all on their smartphones, as employers seek to cast a wider net for applicants and improve the matchmaking process for entry-level hires.
Young people are a frequent target. The nonprofit has received nearly 3,800 complaints from U.S. consumers on its scam report tracker in the past two years, and people under 34 years old were the most susceptible to such scams, which frequently offer jobs requiring little to no prior experience, Ms. Hutt said.
Hackers are finding new ways to prey on young job seekers.
I was no longer operating in a world circumscribed by lesson plans, problem sets and programming assignments, and intended course outcomes. I also wasn’t coding to specs, because there were no specs. As my teammates and I were building the product, we were also simultaneously defining what it should be, whom it would serve, what behaviors we wanted to incentivize amongst our users, what kind of community it would become, and what kind of value we hoped to create in the world.
I still loved immersing myself in code and falling into a state of flow—those hours-long intensive coding sessions where I could put everything else aside and focus solely on the engineering tasks at hand. But I also came to realize that such disengagement from reality and societal context could only be temporary.
…
At Quora, and later at Pinterest, I also worked on the algorithms powering their respective homefeeds: the streams of content presented to users upon initial login, the default views we pushed to users. It seems simple enough to want to show users “good” content when they open up an app. But what makes for good content? Is the goal to help users to discover new ideas and expand their intellectual and creative horizons? To show them exactly the sort of content that they know they already like? Or, most easily measurable, to show them the content they’re most likely to click on and share, and that will make them spend the most time on the service?
Ruefully—and with some embarrassment at my younger self’s condescending attitude toward the humanities—I now wish that I had strived for a proper liberal arts education. That I’d learned how to think critically about the world we live in and how to engage with it. That I’d absorbed lessons about how to identify and interrogate privilege, power structures, structural inequality, and injustice. That I’d had opportunities to debate my peers and develop informed opinions on philosophy and morality. And even more than all of that, I wish I’d even realized that these were worthwhile thoughts to fill my mind with—that all of my engineering work would be contextualized by such subjects.
It worries me that so many of the builders of technology today are people like me; people who haven’t spent anywhere near enough time thinking about these larger questions of what it is that we are building, and what the implications are for the world.
In a recent Harvard Business Review piece titled “Liberal Arts in the Data Age,” author JM Olejarz writes about the importance of reconnecting a lateral, liberal arts mindset with the sort of rote engineering approach that can lead to myopic creativity. Today’s engineers have been so focused on creating new technologies that their short-term goals risk obscuring unintended long-term outcomes. While a few companies, say Intel, are forward-thinking enough to include ethics professionals on staff, they remain exceptions. At this point all tech companies serious about ethical grounding need to be hiring folks with backgrounds in areas like anthropology, psychology and philosophy.
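From DSC: The homefeed tension the engineer describes above (“what makes for good content?”) shows up even in toy code. Below is a hypothetical sketch, not any company’s actual ranker: the easily measurable objective is a one-liner, while a “broaden horizons” objective has to bolt on a term that is much harder to define, and someone has to choose its weight.

```python
# Hypothetical feed-ranking objectives illustrating the trade-off above.
# p_click and topic_novelty stand in for learned models (assumptions).
def p_click(item, user):
    return item["predicted_ctr"]      # stand-in for a trained click model

def topic_novelty(item, user):
    return 0.0 if item["topic"] in user["seen_topics"] else 1.0

def engagement_score(item, user):
    # The most easily measurable goal: likelihood of a click or share.
    return p_click(item, user)

def discovery_score(item, user, novelty_weight=0.3):
    # The same signal plus a novelty bonus; novelty_weight is a value
    # judgment that no engagement metric will pick for you.
    return p_click(item, user) + novelty_weight * topic_novelty(item, user)

user = {"seen_topics": {"cooking"}}
candidates = [
    {"id": 1, "topic": "cooking",   "predicted_ctr": 0.30},
    {"id": 2, "topic": "astronomy", "predicted_ctr": 0.12},
]
# The two objectives order the very same feed differently.
by_engagement = sorted(candidates, key=lambda i: engagement_score(i, user), reverse=True)
by_discovery  = sorted(candidates, key=lambda i: discovery_score(i, user), reverse=True)
```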