With great tech success, comes even greater responsibility — from techcrunch.com by Ron Miller

Excerpts:

As we watch major tech platforms evolve over time, it’s clear that companies like Facebook, Apple, Google and Amazon (among others) have created businesses that are having a huge impact on humanity — sometimes positive and other times not so much.

That suggests that these platforms have to understand how people are using them and when they are trying to manipulate them or use them for nefarious purposes — or the companies themselves are. We can apply that same responsibility filter to individual technologies like artificial intelligence and indeed any advanced technologies and the impact they could possibly have on society over time.

We can be sure that Twitter’s creators never imagined a world where bots would be launched to influence an election when they created the company more than a decade ago. Over time though, it becomes crystal clear that Twitter, and indeed all large platforms, can be used for a variety of motivations, and the platforms have to react when they think there are certain parties who are using their networks to manipulate parts of the populace.

But it’s up to the companies developing the tech to recognize the responsibility that comes with great economic success, or simply with the impact that whatever they are creating could have on society.

Why the Public Overlooks and Undervalues Tech’s Power — from morningconsult.com by Joanna Piacenza
Some experts say the tech industry is rapidly nearing a day of reckoning

Excerpts:

  • Only 5% picked tech when asked which industry had the most power and influence, putting it well behind the U.S. government, Wall Street and Hollywood.
  • Respondents were much more likely to say sexual harassment was a major issue in Hollywood (49%) and government (35%) than in Silicon Valley (17%).

It is difficult for Americans to escape the technology industry’s influence in everyday life. Facebook Inc. reports that more than 184 million people in the United States log on to the social network daily, or roughly 56 percent of the population. According to the Pew Research Center, nearly three-quarters (73 percent) of all Americans and 94 percent of Americans ages 18-24 use YouTube. Amazon.com Inc.’s market value is now nearly three times that of Walmart Inc.

But when asked which geographic center holds the most power and influence in America, respondents in a recent Morning Consult survey ranked the tech industry in Silicon Valley far behind politics and government in Washington, finance on Wall Street and the entertainment industry in Hollywood.

Tech companies should stop pretending AI won’t destroy jobs — from technologyreview.com / MIT Technology Review by Kai-Fu Lee
No matter what anyone tells you, we’re not ready for the massive societal upheavals on the way.

Excerpt (emphasis DSC):

The rise of China as an AI superpower isn’t a big deal just for China. The competition between the US and China has sparked intense advances in AI that will be impossible to stop anywhere. The change will be massive, and not all of it good. Inequality will widen. As my Uber driver in Cambridge has already intuited, AI will displace a large number of jobs, which will cause social discontent. Consider the progress of Google DeepMind’s AlphaGo software, which beat the best human players of the board game Go in early 2016. It was subsequently bested by AlphaGo Zero, introduced in 2017, which learned by playing games against itself and within 40 days was superior to all the earlier versions. Now imagine those improvements transferring to areas like customer service, telemarketing, assembly lines, reception desks, truck driving, and other routine blue-collar and white-collar work. It will soon be obvious that half of our job tasks can be done better at almost no cost by AI and robots. This will be the fastest transition humankind has experienced, and we’re not ready for it.

And finally, there are those who deny that AI has any downside at all—which is the position taken by many of the largest AI companies. It’s unfortunate that AI experts aren’t trying to solve the problem. What’s worse, and unbelievably selfish, is that they actually refuse to acknowledge the problem exists in the first place.

These changes are coming, and we need to tell the truth and the whole truth. We need to find the jobs that AI can’t do and train people to do them. We need to reinvent education. These will be the best of times and the worst of times. If we act rationally and quickly, we can bask in what’s best rather than wallow in what’s worst.

From DSC:
If a business has a choice between hiring a human being and having the job done by a piece of software and/or a robot, which do you think they’ll go with? My guess? It’s all about the money — whichever/whoever will be less expensive will get the job.

However, that way of thinking may cause enormous social unrest if the software and robots leave human beings in the (job search) dust. Do we, as a society, win with this way of thinking? To me, it’s capitalism gone astray. We aren’t caring enough for our fellow members of the human race, people who have to put bread and butter on their tables. People who have to support their families. People who want to make solid contributions to society and/or to pursue their vocation/callings — to have/find purpose in their lives.

Others think we’ll be saved by a universal basic income. “Take the extra money made by AI and distribute it to the people who lost their jobs,” they say. “This additional income will help people find their new path, and replace other types of social welfare.” But UBI doesn’t address people’s loss of dignity or meet their need to feel useful. It’s just a convenient way for a beneficiary of the AI revolution to sit back and do nothing.

To Fight Fatal Infections, Hospitals May Turn to Algorithms — from scientificamerican.com by John McQuaid
Machine learning could speed up diagnoses and improve accuracy

Excerpt:

The CDI algorithm—based on a form of artificial intelligence called machine learning—is at the leading edge of a technological wave starting to hit the U.S. health care industry. After years of experimentation, machine learning’s predictive powers are well-established, and it is poised to move from labs to broad real-world applications, said Zeeshan Syed, who directs Stanford University’s Clinical Inference and Algorithms Program.

“The implications of machine learning are profound,” Syed said. “Yet it also promises to be an unpredictable, disruptive force—likely to alter the way medical decisions are made and put some people out of work.”
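The article doesn’t describe the CDI algorithm’s internals, but as a generic illustration of the kind of supervised risk model it refers to, here is a minimal sketch that trains a logistic-regression classifier by gradient descent on synthetic “patient” data. Every feature and label here is invented; a real clinical model would be trained and validated on actual patient records.

```python
import numpy as np

# Synthetic stand-in for patient records: each row is one patient,
# columns are made-up risk factors (e.g., age, days hospitalized,
# prior antibiotic exposure). None of this is real clinical data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
# Made-up ground truth: risk driven by the first two factors only.
true_w = np.array([1.5, 2.0, 0.0])
y = (X @ true_w > 0).astype(float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Fit logistic regression with plain gradient descent on the log-loss.
w = np.zeros(3)
b = 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)           # predicted infection probability
    grad_w = X.T @ (p - y) / len(y)  # gradient of the log-loss w.r.t. w
    grad_b = (p - y).mean()
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

preds = sigmoid(X @ w + b) > 0.5
accuracy = (preds == y.astype(bool)).mean()
print(round(accuracy, 2))  # recovers the synthetic pattern
```

The point of the sketch is only the shape of the workflow: historical records in, a probability of infection out, which clinicians can then act on earlier than a lab culture would allow.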

Lawyer-Bots Are Shaking Up Jobs — from technologyreview.com by Erin Winick

Excerpt:

Meticulous research, deep study of case law, and intricate argument-building—lawyers have used similar methods to ply their trade for hundreds of years. But they’d better watch out, because artificial intelligence is moving in on the field.

As of 2016, there were over 1,300,000 licensed lawyers and 200,000 paralegals in the U.S. Consultancy group McKinsey estimates that 22 percent of a lawyer’s job and 35 percent of a law clerk’s job can be automated, which means that while humanity won’t be completely overtaken, major businesses and career adjustments aren’t far off (see “Is Technology About to Decimate White-Collar Work?”). In some cases, they’re already here.

“If I was the parent of a law student, I would be concerned a bit,” says Todd Solomon, a partner at the law firm McDermott Will & Emery, based in Chicago. “There are fewer opportunities for young lawyers to get trained, and that’s the case outside of AI already. But if you add AI onto that, there are ways that is advancement, and there are ways it is hurting us as well.”

So far, AI-powered document discovery tools have had the biggest impact on the field. By training on millions of existing documents, case files, and legal briefs, a machine-learning algorithm can learn to flag the appropriate sources a lawyer needs to craft a case, often more successfully than humans. For example, JPMorgan announced earlier this year that it is using software called Contract Intelligence, or COIN, which can in seconds perform document review tasks that took legal aides 360,000 hours.

People fresh out of law school won’t be spared the impact of automation either. Document-based grunt work is typically a key training ground for first-year associate lawyers, and AI-based products are already stepping in. CaseMine, a legal technology company based in India, builds on document discovery software with what it calls its “virtual associate,” CaseIQ. The system takes an uploaded brief and suggests changes to make it more authoritative, while providing additional documents that can strengthen a lawyer’s arguments.
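Document-discovery tools of the kind described above typically rank candidate documents by textual similarity to a query brief. A minimal sketch of one classic approach, tf-idf weighting with cosine similarity, on a toy corpus; the case names and text are invented, and real systems train on millions of documents rather than three:

```python
import math
from collections import Counter

# Tiny invented stand-in for a corpus of case documents.
docs = {
    "smith_v_jones": "contract breach damages warranty goods delivery",
    "doe_v_acme":    "negligence injury damages duty of care premises",
    "initech_ipo":   "securities disclosure shareholder filing prospectus",
}
query = "breach of contract warranty damages"

# Document frequency of each term across the corpus.
df = Counter(t for text in docs.values() for t in set(text.split()))

def tf_idf_vector(tokens, df, n_docs):
    # Weight each term by its count times log(inverse document frequency).
    tf = Counter(tokens)
    return {t: tf[t] * math.log(n_docs / df[t]) for t in tf if t in df}

vectors = {name: tf_idf_vector(text.split(), df, len(docs))
           for name, text in docs.items()}
q_vec = tf_idf_vector(query.split(), df, len(docs))

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Rank documents by similarity to the query brief, most relevant first.
ranked = sorted(docs, key=lambda name: cosine(q_vec, vectors[name]),
                reverse=True)
print(ranked[0])  # the contract case should rank first
```

Modern tools layer learned models on top of this kind of ranking, but the underlying idea, scoring every document against the lawyer’s query, is the same.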

Lessons From Artificial Intelligence Pioneers — from gartner.com by Christy Pettey

CIOs are struggling to accelerate deployment of artificial intelligence (AI). A recent Gartner survey of global CIOs found that only 4% of respondents had deployed AI. However, the survey also found that one-fifth of the CIOs are already piloting or planning to pilot AI in the short term.

Such ambition puts these leaders in a challenging position. AI efforts are already stressing staff, skills, and the readiness of in-house and third-party AI products and services. Without effective strategic plans for AI, organizations risk wasting money, falling short in performance and falling behind their business rivals.

Pursue small-scale plans likely to deliver small-scale payoffs that will offer lessons for larger implementations

“AI is just starting to become useful to organizations but many will find that AI faces the usual obstacles to progress of any unproven and unfamiliar technology,” says Whit Andrews, vice president and distinguished analyst at Gartner. “However, early AI projects offer valuable lessons and perspectives for enterprise architecture and technology innovation leaders embarking on pilots and more formal AI efforts.”

So what lessons can we learn from these early AI pioneers?

Why Artificial Intelligence Researchers Should Be More Paranoid — from wired.com by Tom Simonite

Excerpt:

What to do about that? The report’s main recommendation is that people and companies developing AI technology discuss safety and security more actively and openly—including with policymakers. It also asks AI researchers to adopt a more paranoid mindset and consider how enemies or attackers might repurpose their technologies before releasing them.

How to Prepare College Graduates for an AI World — from wsj.com by
Northeastern University President Joseph Aoun says schools need to change their focus, quickly

Excerpt:

WSJ: What about adults who are already in the workforce?

DR. AOUN: Society has to provide ways, and higher education has to provide ways, for people to re-educate themselves, reskill themselves or upskill themselves.

That is the part that I see that higher education has not embraced. That’s where there is an enormous opportunity. We look at lifelong learning in higher education as an ancillary operation, as a second-class operation in many cases. We dabble with it, we try to make money out of it, but we don’t embrace it as part of our core mission.

Inside Amazon’s Artificial Intelligence Flywheel — from wired.com by Steven Levy
How deep learning came to power Alexa, Amazon Web Services, and nearly every other division of the company.

Excerpt:

Amazon loves to use the word flywheel to describe how various parts of its massive business work as a single perpetual motion machine. It now has a powerful AI flywheel, where machine-learning innovations in one part of the company fuel the efforts of other teams, who in turn can build products or offer services to affect other groups, or even the company at large. Offering its machine-learning platforms to outsiders as a paid service makes the effort itself profitable—and in certain cases scoops up yet more data to level up the technology even more.

10 Breakthrough Technologies 2018 — from MIT Technology Review

Excerpt:

Dueling neural networks. Artificial embryos. AI in the cloud. Welcome to our annual list of the 10 technology advances we think will shape the way we work and live now and for years to come.

Every year since 2001 we’ve picked what we call the 10 Breakthrough Technologies. People often ask, what exactly do you mean by “breakthrough”? It’s a reasonable question—some of our picks haven’t yet reached widespread use, while others may be on the cusp of becoming commercially available. What we’re really looking for is a technology, or perhaps even a collection of technologies, that will have a profound effect on our lives.

  1. 3-D Metal Printing
  2. Artificial Embryos
  3. Sensing City
  4. AI for Everybody
  5. Dueling Neural Networks
  6. Babel-Fish Earbuds
    In the cult sci-fi classic The Hitchhiker’s Guide to the Galaxy, you slide a yellow Babel fish into your ear to get translations in an instant. In the real world, Google has come up with an interim solution: a $159 pair of earbuds, called Pixel Buds. These work with its Pixel smartphones and Google Translate app to produce practically real-time translation. One person wears the earbuds, while the other holds a phone. The earbud wearer speaks in his or her language—English is the default—and the app translates the talking and plays it aloud on the phone. The person holding the phone responds; this response is translated and played through the earbuds.
  7. Zero-Carbon Natural Gas
  8. Perfect Online Privacy
  9. Genetic Fortune-Telling
  10. Materials’ Quantum Leap
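The Babel-Fish earbud flow in item 6 is, conceptually, a three-stage pipeline: speech recognition, machine translation, and speech synthesis. A toy sketch with trivial stand-ins for each stage; the two-word lexicon obviously isn’t how Google Translate works, it just marks where the real models would sit:

```python
# Conceptual pipeline behind the earbud translation flow:
# speech-to-text -> machine translation -> text-to-speech.
# Each stage below is a deliberately trivial stand-in.

TOY_EN_ES = {"hello": "hola", "friend": "amigo"}  # invented mini-lexicon

def recognize_speech(audio: bytes) -> str:
    # Stand-in for a speech-recognition model.
    return audio.decode("utf-8")

def translate(text: str, lexicon: dict) -> str:
    # Stand-in for neural machine translation: word-by-word lookup.
    return " ".join(lexicon.get(word, word) for word in text.split())

def synthesize(text: str) -> bytes:
    # Stand-in for text-to-speech; real systems return a waveform.
    return text.encode("utf-8")

def earbud_round_trip(audio: bytes) -> bytes:
    heard = recognize_speech(audio)          # earbud wearer speaks
    translated = translate(heard, TOY_EN_ES)
    return synthesize(translated)            # played aloud on the phone

print(earbud_round_trip(b"hello friend").decode())
```

The latency and accuracy of each real stage is what makes “practically real-time” translation hard; the pipeline itself is simple.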

Fake videos are on the rise. As they become more realistic, seeing shouldn’t always be believing — from latimes.com by David Pierson

Excerpts:

It’s not hard to imagine a world in which social media is awash with doctored videos targeting ordinary people to exact revenge, extort or to simply troll.

In that scenario, where Twitter and Facebook are algorithmically flooded with hoaxes, no one could fully believe what they see. Truth, already diminished by Russia’s misinformation campaign and President Trump’s proclivity to label uncomplimentary journalism “fake news,” would be more subjective than ever.

The danger there is not just believing hoaxes, but also dismissing what’s real.

The consequences could be devastating for the notion of evidentiary video, long considered the paradigm of proof given the sophistication required to manipulate it.

“This goes far beyond ‘fake news’ because you are dealing with a medium, video, that we traditionally put a tremendous amount of weight on and trust in,” said David Ryan Polgar, a writer and self-described tech ethicist.

From DSC:
Though I’m typically pro-technology, this is truly disturbing. There are certainly downsides to technology as well as upsides — but it’s how we use a technology that can make the real difference.

AI plus human intelligence is the future of work — from forbes.com by Jeanne Meister

Excerpts:

  • 1 in 5 workers will have AI as their co-worker in 2022
  • More job roles will change than will become totally automated, so HR needs to prepare today


As we increase our personal usage of chatbots (defined as software which provides an automated, yet personalized, conversation between itself and human users), employees will soon interact with them in the workplace as well. Forward-looking HR leaders are piloting chatbots now to transform HR and, in the process, re-imagine, re-invent, and re-tool the employee experience.

How does all of this impact HR in your organization? The following ten HR trends will matter most as AI enters the workplace…

The most visible aspect of how HR is being impacted by artificial intelligence is the change in the way companies source and recruit new hires. Most notably, IBM has created a suite of tools that use machine learning to help candidates personalize their job search experience based on the engagement they have with Watson. In addition, Watson is helping recruiters prioritize jobs more efficiently, find talent faster, and match candidates more effectively. According to Amber Grewal, Vice President, Global Talent Acquisition, “Recruiters are focusing more on identifying the most critical jobs in the business and on utilizing data to assist in talent sourcing.”

…as we enter 2018, the next journey for HR leaders will be to leverage artificial intelligence combined with human intelligence and create a more personalized employee experience.

From DSC:
Although I like the possibility of using machine learning to help employees navigate their careers, I have some very real concerns when we talk about using AI for talent acquisition. At this point in time, I would much rather have an experienced human being — one with a solid background in HR — reviewing my resume to see if they believe that there’s a fit for the job and/or determine whether my skills transfer over from a different position/arena or not. I don’t think we’re there yet in terms of developing effective/comprehensive enough algorithms. It may happen, but I’m very skeptical in the meantime. I don’t want to be filtered out just because I didn’t use the right keywords enough times or I used a slightly different keyword than what the algorithm was looking for.

Also, there is definitely age discrimination occurring in today’s workplace, especially in tech-related positions. Folks who are in tech over the age of 30-35 — don’t lose your job! (Go check out the topic of age discrimination on LinkedIn and similar sites, and you’ll find many postings on this topic — sometimes with tens of thousands of older employees adding comments/likes to a posting.) Although I doubt that any company would allow applicants or the public to see their internally used algorithms, how difficult would it be to filter out applicants who graduated college prior to ___ (i.e., some year that gets updated on an annual basis)? Answer? Not difficult at all. In fact, that’s at the level of a Programming 101 course.
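To make that point concrete: a crude screen of the kind described above really does take only a few lines of code. A deliberately simplistic sketch, with every field name, cutoff, and keyword invented for illustration:

```python
# A crude applicant screen of the kind described above -- deliberately
# simplistic, to show how little code such filtering takes.
# All field names, cutoffs, and keywords are hypothetical.
applicants = [
    {"name": "A", "grad_year": 1992, "resume": "java developer team lead"},
    {"name": "B", "grad_year": 2015, "resume": "python machine learning python"},
    {"name": "C", "grad_year": 2016, "resume": "sales marketing crm"},
]

CUTOFF_YEAR = 2008          # updated annually, as the post suggests
REQUIRED_KEYWORD = "python"
MIN_MENTIONS = 2

def passes_screen(a):
    recent = a["grad_year"] >= CUTOFF_YEAR
    enough_keywords = a["resume"].split().count(REQUIRED_KEYWORD) >= MIN_MENTIONS
    return recent and enough_keywords

shortlist = [a["name"] for a in applicants if passes_screen(a)]
print(shortlist)  # everyone else is silently filtered out
```

Applicant A is dropped purely on graduation year and C purely on keyword count, which is exactly the kind of opaque, easily encoded exclusion the post is worried about.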

Artificial intelligence is going to supercharge surveillance – from theverge.com by James Vincent
What happens when digital eyes get the brains to match?

From DSC:
“Person of Interest” comes to mind after reading this article. Person of Interest is a clever, well-done show, but still…the idea of combining surveillance w/ a superintelligent AI is a bit unnerving.

Artificial intelligence | 2018 AI predictions — from thomsonreuters.com

Excerpts:

  • AI brings a new set of rules to knowledge work
  • Newsrooms embrace AI
  • Lawyers assess the risks of not using AI
  • Deep learning goes mainstream
  • Smart cars demand even smarter humans
  • Accountants audit forward
  • Wealth managers look to AI to compete and grow

Chatbots and Virtual Assistants in L&D: 4 Use Cases to Pilot in 2018 —  from bottomlineperformance.com by Steven Boller

Excerpt:

  1. Use a virtual assistant like Amazon Alexa or Google Assistant to answer spoken questions from on-the-go learners.
  2. Answer common learner questions in a chat window or via SMS.
  3. Customize a learning path based on learners’ demographic information.
  4. Use a chatbot to assess learner knowledge.
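Use case 2 above, answering common learner questions, can be prototyped with simple keyword matching before investing in a full chatbot platform. A naive sketch, with all questions and answers invented for illustration:

```python
# Naive FAQ bot for use case 2 above: match a learner's question against
# canned answers by keyword overlap. Q&A pairs are invented examples.
FAQ = {
    frozenset({"password", "reset"}): "Use the 'Forgot password' link on the login page.",
    frozenset({"course", "deadline"}): "Course deadlines are listed on your dashboard.",
    frozenset({"certificate", "download"}): "Certificates can be downloaded after completion.",
}

def answer(question: str) -> str:
    words = set(question.lower().replace("?", "").split())
    # Pick the entry whose keywords overlap the question the most.
    best = max(FAQ, key=lambda keys: len(keys & words))
    if not best & words:
        return "Sorry, I don't know -- forwarding to a human."
    return FAQ[best]

print(answer("How do I reset my password?"))
```

Production chatbots replace the keyword overlap with intent-classification models, but the escalate-to-a-human fallback shown here remains a standard design choice.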

Suncorp looks to augmented reality for insurance claims — from itnews.com.au by Ry Crozier with thanks to Woontack Woo for this resource

Excerpts:

Suncorp has revealed it is exploring image recognition and augmented reality-based enhancements for its insurance claims process, adding to the AI systems it deployed last year.

The insurer began testing IBM Watson software last June to automatically determine who is at fault in a vehicle accident.

“We are working on increasing our use of emerging technologies to assist with the insurance claim process, such as using image recognition to assess type and extent of damage, augmented reality that would enable an off-site claims assessor to discuss and assess damage, speech recognition, and obtaining telematic data from increasingly automated vehicles,” the company said.

6 important AI technologies to look out for in 2018 — from itproportal.com by  Olga Egorsheva
Will businesses and individuals finally make AI a part of their daily lives?

The legal and ethical minefield of AI: ‘Tech has the power to do harm as well as good’ — from theguardian.com by Joanna Goodman

Excerpt:

Artificial intelligence and machine learning tools are already embedded in our lives, but how should businesses that use such technology manage the associated risks?

As artificial intelligence (AI) penetrates deeper into business operations and services, even supporting judicial decision-making, are we approaching a time when the greatest legal mind could be a machine? According to Prof Dame Wendy Hall, co-author of the report Growing the Artificial Intelligence Industry in the UK, we are just at the beginning of the AI journey and now is the time to set boundaries.

“All tech has the power to do harm as well as good,” Hall says. “So we have to look at regulating companies and deciding what they can and cannot do with the data now.”

AI and robotics professor Noel Sharkey highlights the “legal and moral implications of entrusting human decisions to algorithms that we cannot fully understand”. He explains that the narrow AI systems that businesses currently use (to draw inferences from large volumes of data) apply algorithms that learn from experience and feed back to real-time and historical data. But these systems are far from perfect.

Potential results include flawed outcomes or reasoning, but difficulties also arise from the lack of transparency. This supports Hall’s call for supervision and regulation. Businesses that use AI in their operations need to manage the ethical and legal risks, and the legal profession will have a major role to play in assessing and apportioning risk, responsibility and accountability.

Top 10 Technology Trends for 2018: IEEE Computer Society Predicts the Future of Tech — from computer.org

Excerpts:

The top 10 technology trends predicted to reach adoption in 2018 are:

  1. Deep learning (DL)
  2. Digital currencies
  3. Blockchain
  4. Industrial IoT
  5. Robotics
  6. Assisted transportation
  7. Assisted reality and virtual reality (AR/VR)
  8. Ethics, laws, and policies for privacy, security, and liability
  9. Accelerators and 3D
  10. Cybersecurity and AI

Existing Technologies: We did not include the following technologies in our top 10 list as we assume that they have already experienced broad adoption:

A. Data science
B. “Cloudification”
C. Smart cities
D. Sustainability
E. IoT/edge computing

AI: Embracing the promises and realities — from the Allegis Group

Excerpts:

What will that future be? When it comes to jobs, the tea leaves are indecipherable as analysts grapple with emerging technologies, new fields of work, and skills that have yet to be conceived. The only certainty is that jobs will change. Consider the conflicting predictions put forth by the analyst community:

  • According to the Organisation for Economic Co-operation and Development, only 5-10% of labor would be displaced by intelligent automation, and new job creation will offset losses. (Inserted comment from DSC: Hmmm. ONLY 5-10%!? What?! That’s huge! And don’t count on the majority of those people becoming experts in robotics, algorithms, big data, AI, etc.)
  • The World Economic Forum said in 2016 that 60% of children entering school today will work in jobs that do not yet exist.
  • 47% of all American job functions could be automated within 20 years, according to the Oxford Martin School on Economics in a 2013 report.
  • In 2016, a KPMG study estimated that 100 million global knowledge workers could be affected by robotic process automation by 2025.

Despite the conflicting views, most analysts agree on one thing: big change is coming. Venture capitalist David Vandergrift has some words of advice: “Anyone not planning to retire in the next 20 years should be paying pretty close attention to what’s going on in the realm of AI. The supplanting (of jobs) will not happen overnight: the trend over the next couple of decades is going to be towards more and more automation.”

While analysts may not agree on the timing of AI’s development in the economy, many companies are already seeing its impact on key areas of talent and business strategy. AI is replacing jobs, changing traditional roles, applying pressure on knowledge workers, creating new fields of work, and raising the demand for certain skills.

The emphasis on learning is a key change from previous decades and rounds of automation. Advanced AI is, or will soon be, capable of displacing a very wide range of labor, far beyond the repetitive, low-skill functions traditionally thought to be at risk from automation. In many cases, the pressure on knowledge workers has already begun.

Regardless of industry, however, AI is a real challenge to today’s way of thinking about work, value, and talent scarcity. AI will expand and eventually force many human knowledge workers to reinvent their roles to address issues that machines cannot process. At the same time, AI will create a new demand for skills to guide its growth and development. These emerging areas of expertise will likely be technical or knowledge-intensive fields. In the near term, the competition for workers in these areas may change how companies focus their talent strategies.

The Ivory Tower Can’t Keep Ignoring Tech — from nytimes.com by Cathy O’Neil

Excerpt:

We need academia to step up to fill in the gaps in our collective understanding about the new role of technology in shaping our lives. We need robust research on hiring algorithms that seem to filter out people with mental health disorders, sentencing algorithms that fail twice as often for black defendants as for white defendants, statistically flawed public teacher assessments or oppressive scheduling algorithms. And we need research to ensure that the same mistakes aren’t made again and again. It’s absolutely within the abilities of academic research to study such examples and to push against the most obvious statistical, ethical or constitutional failures and dedicate serious intellectual energy to finding solutions. And whereas professional technologists working at private companies are not in a position to critique their own work, academics theoretically enjoy much more freedom of inquiry.

There is essentially no distinct field of academic study that takes seriously the responsibility of understanding and critiquing the role of technology — and specifically, the algorithms that are responsible for so many decisions — in our lives.

There’s one solution for the short term. We urgently need an academic institute focused on algorithmic accountability. First, it should provide a comprehensive ethical training for future engineers and data scientists at the undergraduate and graduate levels, with case studies taken from real-world algorithms that are choosing the winners from the losers. Lecturers from humanities, social sciences and philosophy departments should weigh in.

Cameras are Watching and Machines are Learning: The Beginning — from medium.com by Brian Brackeen
You better believe their eyes

This is a new series about cameras and their relationship to face recognition, machine learning, and how, in the future, the ways in which we interact with technology will be radically different.

Excerpt (emphasis DSC):

First, the data.
LDV Capital, a venture capital firm focused on Visual Technologies, recently published a 19-page report thick with some pretty eye-opening data around cameras.

Specifically, how many cameras we can expect to have watching us, what they are watching us for, and how those insights will be used.

According to their study, by 2022 there will be more than 44,354,881,622 (that’s 44 BILLION) cameras in use globally, collecting billions more images for visual collateral. This is incredible, but what’s interesting is that most of these images will never be seen by human eyes.

From DSC:
Though the author asserts there will be great business opportunities surrounding this trend, I’m not sure that I’m comfortable with it. Embedded cameras everywhere…hmmm…what he calls a privilege (in the quote below), I see as an overstepping of boundaries.

We have the privilege of experiencing the actual evolution of a device that we have come to know as one thing, for all of our lives to this point, into something completely different, to the extent that the word “camera”, itself, is becoming outdated.

How do you feel about this trend?

The era of easily faked, AI-generated photos is quickly emerging — from qz.com by Dave Gershgorn

Excerpt (emphasis DSC):

Until this month, it seemed that GAN-generated images [where GAN stands for “generative adversarial networks”] that could fool a human viewer were years off. But last week research released by Nvidia, a manufacturer of graphics processing units that has cornered the market on deep learning hardware, shows that this method can now be used to generate high-resolution, believable images of celebrities, scenery, and objects. GAN-created images are also already being sold as replacements for fashion photographers—a startup called Mad Street Den told Quartz earlier this month it’s working with North American retailers to replace clothing images on websites with generated images.
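For readers unfamiliar with the GAN setup mentioned above: a generator network and a discriminator network are trained against each other, the discriminator learning to tell real images from generated ones while the generator learns to fool it. A minimal numerical sketch of the two standard loss terms (the non-saturating variant for the generator), using made-up discriminator scores in place of real networks:

```python
import math

def bce(prob, label):
    # Binary cross-entropy for a single prediction.
    return -(label * math.log(prob) + (1 - label) * math.log(1 - prob))

# Stand-in discriminator outputs D(x): probability that an input is real.
# These two numbers are invented for illustration.
d_real = 0.9   # discriminator's score on a real photo
d_fake = 0.2   # discriminator's score on a generated photo

# The discriminator is trained to score real images as 1 and fakes as 0.
d_loss = bce(d_real, 1) + bce(d_fake, 0)

# The generator (non-saturating form) is trained to push D(fake) toward 1.
g_loss = bce(d_fake, 1)

print(round(d_loss, 3), round(g_loss, 3))
```

The generator’s loss is large whenever the discriminator confidently spots its fakes, which is exactly the pressure that drives the generated images toward photorealism.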

From DSC:
So AI can now generate realistic photos (i.e., image creation/manipulation). And then there’s Adobe’s VoCo Project, a sort of Photoshop for audio manipulation, plus other related technologies out there.

So I guess it’s like the first article concludes:

The era of easily-faked photos is quickly emerging—much as it did when Photoshop became widely prevalent—so it’s a good time to remember we shouldn’t trust everything we see.

…and perhaps we’ll need to add, “we shouldn’t trust everything we hear either.” But how will the average person with average tools know the real deal? The concept of watermarking visuals/audio may become increasingly important. From the ending of the bbc.com article:

For its part, Adobe has talked of its customers using Voco to fix podcast and audio book recordings without having to rebook presenters or voiceover artists.

But a spokeswoman stressed that this did not mean its release was imminent.

“[It] may or may not be released as a product or product feature,” she told the BBC.

“No ship date has been announced.”

In the meantime, Adobe said it was researching ways to detect use of its software.

“Think about watermarking detection,” Mr Jin said at the demo, referring to a method used to hide identifiers in images and other media.
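Watermarking schemes vary widely, and Adobe has not said how its detection would work. One classic textbook illustration of the idea is least-significant-bit embedding, where an identifier is hidden in the lowest bit of each audio sample or pixel value. A toy sketch (the four-byte tag is arbitrary, and real schemes are far more robust to compression and editing):

```python
# Toy least-significant-bit watermark: hide an identifier's bits in the
# lowest bit of each 8-bit sample. Real schemes are far more robust.
def embed(samples, tag: bytes):
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the lowest bit only
    return out

def extract(samples, n_bytes: int):
    bits = [s & 1 for s in samples[: n_bytes * 8]]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(n_bytes)
    )

audio = [128] * 64            # stand-in for 8-bit audio samples
marked = embed(audio, b"adbe")
print(extract(marked, 4))
```

Because only the lowest bit of each sample changes, the watermark is inaudible/invisible to a human, yet a detector that knows where to look can recover the identifier exactly.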

But again, we see that technology often races ahead. “Look at what we can do!” But then the rest of society — the laws, the policies, the debates about whether we should roll out such technologies at all — needs time to catch up. Morals and ethics do come into play here, as trust levels are most assuredly at stake.

Another relevant article/topic/example of this is listed below. (Though I’m not trying to say that we shouldn’t pursue self-driving cars. Rather, the topic serves as another example of technologies racing ahead while it takes a while for the rest of us/society to catch up with them).

Artificial Intelligence in Education: Where It’s At, Where It’s Headed — from gettingsmart.com by Cameron Paterson

Excerpt:

Artificial intelligence is predicted to fundamentally alter the nature of society by 2040. Investment in AI start-ups was estimated at $6-$9 billion in 2016, up from US$415 million four years earlier. While futurist Ray Kurzweil argues that AI will help us to address the grand challenges facing humanity, Elon Musk warns us that artificial intelligence will be our “biggest existential threat.” Others argue that artificial intelligence is the future of growth. Everything depends on how we manage the transition to this AI-era.

In 2016 the Obama administration released a national strategic plan for artificial intelligence and, while we do not all suddenly now need a plan for artificial intelligence, we do need to stay up to date on how AI is being implemented. Much of AI’s potential is yet to be realized, but AI is already running our lives, from Siri to Netflix recommendations to automated air traffic control. We all need to become more aware of how we are algorithmically shaped by our tools.

This Australian discussion paper on the implications of AI, automation and 21st-century skills, shows how AI will not just affect blue-collar truck drivers and cleaners, it will also affect white-collar lawyers and doctors. Automated pharmacy systems with robots dispensing medication exist, Domino’s pizza delivery by drone has already occurred, and a fully automated farm is opening in Japan.

 

Education reformers need to plan for our AI-driven future and its implications for education, both in schools and beyond. The never-ending debate about the sorts of skills needed in the future and the role of schools in teaching and assessing them is becoming a whole lot more urgent and intense.


AI Experts Want to End ‘Black Box’ Algorithms in Government — from wired.com by Tom Simonite

Excerpt:

The right to due process was inscribed into the US constitution with a pen. A new report from leading researchers in artificial intelligence cautions it is now being undermined by computer code.

Public agencies responsible for areas such as criminal justice, health, and welfare increasingly use scoring systems and software to steer or make decisions on life-changing events like granting bail, sentencing, enforcement, and prioritizing services. The report from AI Now, a research institute at NYU that studies the social implications of artificial intelligence, says too many of those systems are opaque to the citizens they hold power over.

The AI Now report calls for agencies to refrain from what it calls “black box” systems opaque to outside scrutiny. Kate Crawford, a researcher at Microsoft and cofounder of AI Now, says citizens should be able to know how systems making decisions about them operate and have been tested or validated. Such systems are expected to get more complex as technologies such as machine learning used by tech companies become more widely available.

“We should have equivalent due-process protections for algorithmic decisions as for human decisions,” Crawford says. She says it can be possible to disclose information about systems and their performance without disclosing their code, which is sometimes protected intellectual property.
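
One simple form that kind of disclosure could take is publishing outcome statistics for each affected group while keeping the model itself private. A minimal sketch, with the data, groups, and metric invented for illustration:

```python
# Publish how a black-box scoring system performs per group without
# revealing the proprietary model. Data here is hypothetical.

def transparency_report(decisions):
    """decisions: iterable of (group, approved) pairs from the system."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    # Approval rate per group: publishable even when the code is not.
    return {g: approved[g] / totals[g] for g in totals}

report = transparency_report([
    ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False),
])
```

Disparities in such published rates give citizens and auditors something concrete to contest, without exposing protected intellectual property.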

 

 

UAE appoints first-ever Minister for Artificial Intelligence — from tribune.com.pk

 

“We announce the appointment of a minister for artificial intelligence. The next global wave is artificial intelligence and we want the UAE to be more prepared for it.”

 

 

Tech Giants Are Paying Huge Salaries for Scarce A.I. Talent — from nytimes.com by Cade Metz
Nearly all big tech companies have an artificial intelligence project, and they are willing to pay experts millions of dollars to help get it done.

Excerpt:

Tech’s biggest companies are placing huge bets on artificial intelligence, banking on things ranging from face-scanning smartphones and conversational coffee-table gadgets to computerized health care and autonomous vehicles. As they chase this future, they are doling out salaries that are startling even in an industry that has never been shy about lavishing a fortune on its top talent.

Typical A.I. specialists, including both Ph.D.s fresh out of school and people with less education and just a few years of experience, can be paid from $300,000 to $500,000 a year or more in salary and company stock, according to nine people who work for major tech companies or have entertained job offers from them. All of them requested anonymity because they did not want to damage their professional prospects.

With so few A.I. specialists available, big tech companies are also hiring the best and brightest of academia. In the process, they are limiting the number of professors who can teach the technology.


Where will AI play? By Mike Quindazzi.


10 really hard decisions coming our way — from gettingsmart.com by Tom Vander Ark

Excerpt (emphasis DSC):

Things are about to get interesting. You’ve likely heard that Google’s DeepMind recently beat the world’s best Go player. But in far more practical and pervasive ways, artificial intelligence (AI) is creeping into every aspect of life–every screen you view, every search, every purchase, and every customer service contact.

What’s happening? It’s the confluence of several technologies–Moore’s law made storage, computing, and access devices almost free.

This Venn diagram illustrates how deep learning is a subset of AI and how, when combined with big data, it can inform enabling technologies in many sectors. For example, to AI and big data add:

  • Robotics, and you have industry 4.0.
  • Cameras and sensor package, and you have self-driving cars.
  • Sensors and bioinformatic maps, and you have precision medicine.

While there is lots of good news here–diseases will be eradicated and clean energy will be produced–we have a problem: this stuff is moving faster than civic infrastructure can handle. Innovation is outpacing public policy on all fronts. The following are 10 examples of issues coming at us fast that we (in the US in particular) are not ready to deal with.

  1. Unemployment.
  2. Income inequality.
  3. Privacy
  4. Algorithmic bias.
  5. Access.
  6. Machine ethics. 
  7. Weaponization. 
  8. Humanity. 
  9. Genome editing.
  10. Bad AI.

 


From DSC:
Readers of this blog will know that I’m big on pulse-checking the pace of technological change — because it has enormous ramifications for societies throughout the globe, as well as for individuals, workforces, corporations, jobs, education, training, higher education and more. You’ll again hear me say that the pace of change has itself changed: we’re now on an exponential trajectory (vs. a slow, steady, linear path).
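
The gap between those two trajectories is easy to underestimate; a toy calculation with made-up numbers shows how quickly they diverge:

```python
# Linear vs. exponential growth over ten periods (illustrative numbers).
linear = [1 + step for step in range(11)]        # grows by 1 each period
exponential = [2 ** step for step in range(11)]  # doubles each period

print(linear[-1])       # 11
print(exponential[-1])  # 1024
```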

“Innovation is outpacing public policy on all fronts.”

How true this is. Our society doesn’t know how to deal with this new pace of change. How shall we tackle this thorny issue?


From DSC:
I know Quentin Schultze from our years working together at Calvin College, in Grand Rapids, Michigan (USA). I have come to greatly appreciate Quin as a person of faith, as an innovative/entrepreneurial professor, as a mentor to his former students, and as an excellent communicator. 

Quin has written a very concise, wisdom-packed book that I would like to recommend to those who are seeking to be better communicators, leaders, and servants. But I would especially like to recommend this book to the leadership at Google, Amazon, Apple, Microsoft, IBM, Facebook, Nvidia, the major companies developing robots, and other high-tech companies. Why do I list these organizations? Because given the exponential pace of technological change, these organizations — and their leaders — have an enormous responsibility to make sure that the technologies they are developing result in positive changes for societies throughout the globe. They need wisdom, especially as they work on emerging technologies such as artificial intelligence (AI), personal assistants and bots, algorithms, robotics, the Internet of Things, big data, blockchain and more. These technologies continue to exert an increasingly powerful influence on numerous societies throughout the globe today. And we haven’t seen anything yet! Just because we can develop and implement something doesn’t mean that we should. Again, we need wisdom here.

But as Quin states, it’s not just about knowledge, the mind and our thoughts. It’s about our hearts as well. That is, we need leaders who care about others, who can listen well to others, who can serve others well while avoiding gimmicks, embracing diversity, building trust, fostering compromise and developing/exhibiting many of the other qualities that Quin writes about in his book. Our societies desperately need leaders who care about others and who seek to serve others well.

I highly recommend you pick up a copy of Quin’s book. There are few people who can communicate as much in as few words as Quin can. In fact, I wish that more writing on the web and more articles/research coming out of academia would be as concisely and powerfully written as Quin’s book, Communicate Like a True Leader: 30 Days of Life-Changing Wisdom.

 

 

To lead is to accept responsibility and act responsibly.
Quentin Schultze


“An algorithm designed badly can go on for a long time, silently wreaking havoc.”

— Cathy O’Neil


Cathy O’Neil: The era of blind faith in big data must end | TED Talk | TED.com

Description:
Algorithms decide who gets a loan, who gets a job interview, who gets insurance and much more — but they don’t automatically make things fair. Mathematician and data scientist Cathy O’Neil coined a term for algorithms that are secret, important and harmful: “weapons of math destruction.” Learn more about the hidden agendas behind the formulas.


Addendum:

As AI Gets Smarter, Scholars Raise Ethics Questions — by Chris Hayhurst
Interdisciplinary artificial intelligence research fosters philosophical discussions.

Excerpt (emphasis DSC):

David Danks, head of the philosophy department at Carnegie Mellon University, has a message for his colleagues in the CMU robotics department: As they invent and develop the technologies of the future, he encourages them to consider the human dimensions of their work.

His concern? All too often, Danks says, technological innovation ignores the human need for ethical guidelines and moral standards. That’s especially true when it comes to innovations such as artificial intelligence and automation, he says.

“It’s, ‘Look at this cool technology that we’ve got. How can you stand in the way of something like this?’” says Danks. “We should be saying, ‘Wait a second. How is this technology affecting people?’”

As an example, Danks points to AI-powered medical diagnostic systems. Such tools have great potential to parse data for better decision-making, but they lack the social interaction between patient and physician that can be so important to those decisions. It’s one thing to have a technology that can diagnose a patient with strep throat and recommend a certain antibiotic, but what about a patient with cancer who happens to be a professional violinist?

“For most people, you’d just give them the most effective drug,” says Danks. “But what do you do if one of the side effects of that medication is hand tremors? I see a lot of possibilities with AI, but it’s also important to recognize the challenges.”


© 2024 | Daniel Christian