10 jobs that are safe in an AI world — from linkedin.com by Kai-Fu Lee

Excerpts:

Teaching
AI will be a great tool for teachers and educational institutions, as it will help educators figure out how to personalize curriculum based on each student’s competence, progress, aptitude, and temperament. However, teaching will still need to be oriented around helping students figure out their interests, teaching students to learn independently, and providing one-on-one mentorship. These are tasks that can only be done by a human teacher. As such, there will still be a great need for human educators in the future.

Criminal defense law
Top lawyers will have nothing to worry about when it comes to job displacement. Reasoning across domains, winning the trust of clients, applying years of experience in the courtroom, and having the ability to persuade a jury are all examples of the cognitive complexities, strategies, and modes of human interaction that are beyond the capabilities of AI. However, a lot of paralegal and preparatory work like document review, analysis, creating contracts, handling small cases, packing cases, and coming up with recommendations can be done much better and more efficiently with AI. The costs of law make it worthwhile for AI companies to go after AI paralegals and AI junior lawyers, but not top lawyers.


From DSC:
In terms of teaching, I agree that while #AI will help personalize learning, there will still be a great need for human teachers, professors, and trainers. I also agree w/ my boss (and with some of the author’s viewpoints here, but not all) that many kinds of legal work will still need the human touch & thought processes. I diverge from his thinking in terms of scope — the need for human lawyers will go far beyond just those involved in criminal law.


Also see:

15 business applications for artificial intelligence and machine learning — from forbes.com

Excerpt:

Fifteen members of Forbes Technology Council discuss some of the latest applications they’ve found for AI/ML at their companies. Here’s what they had to say…

How AI could help solve some of society’s toughest problems — from technologyreview.com by Charlotte Jee
Machine learning and game theory help Carnegie Mellon assistant professor Fei Fang predict attacks and protect people.

Excerpt:

Fei Fang has saved lives. But she isn’t a lifeguard, medical doctor, or superhero. She’s an assistant professor at Carnegie Mellon University, specializing in artificial intelligence for societal challenges.

At MIT Technology Review’s EmTech conference on Wednesday, Fang outlined recent work across academia that applies AI to protect critical national infrastructure, reduce homelessness, and even prevent suicides.

How AI can be a force for good — from science.sciencemag.org by Mariarosaria Taddeo & Luciano Floridi

Excerpts:

Invisibility and Influence
AI supports services, platforms, and devices that are ubiquitous and used on a daily basis. In 2017, the International Federation of Robotics suggested that by 2020, more than 1.7 million new AI-powered robots will be installed in factories worldwide. In the same year, the company Juniper Networks issued a report estimating that, by 2022, 55% of households worldwide will have a voice assistant, like Amazon Alexa.

As it matures and disseminates, AI blends into our lives, experiences, and environments and becomes an invisible facilitator that mediates our interactions in a convenient, barely noticeable way. While creating new opportunities, this invisible integration of AI into our environments poses further ethical issues. Some are domain-dependent. For example, trust and transparency are crucial when embedding AI solutions in homes, schools, or hospitals, whereas equality, fairness, and the protection of creativity and rights of employees are essential in the integration of AI in the workplace. But the integration of AI also poses another fundamental risk: the erosion of human self-determination due to the invisibility and influencing power of AI.

To deal with the risks posed by AI, it is imperative to identify the right set of fundamental ethical principles to inform the design, regulation, and use of AI and leverage it to benefit as well as respect individuals and societies. It is not an easy task, as ethical principles may vary depending on cultural contexts and the domain of analysis. This is a problem that the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems tackles with the aim of advancing public debate on the values and principles that should underpin ethical uses of AI.

Who’s to blame when a machine botches your surgery? — from qz.com by Robert Hart

Excerpt:

That’s all great, but even if an AI is amazing, it will still fail sometimes. When the mistake is caused by a machine or an algorithm instead of a human, who is to blame?

This is not an abstract discussion. Defining both ethical and legal responsibility in the world of medical care is vital for building patients’ trust in the profession and its standards. It’s also essential in determining how to compensate individuals who fall victim to medical errors, and ensuring high-quality care. “Liability is supposed to discourage people from doing things they shouldn’t do,” says Michael Froomkin, a law professor at the University of Miami.

Google Cloud’s new AI chief is on a task force for AI military uses and believes we could monitor ‘pretty much the whole world’ with drones — from businessinsider.in by Greg Sandoval

Excerpt:

“We could afford if we wanted to, and if we needed, to be surveilling pretty much the whole world with autonomous drones of various kinds,” Moore said. “I’m not saying we’d want to do that, but there’s not a technology gap there where I think it’s actually too difficult to do. This is now practical.”

Google’s decision to hire Moore was greeted with displeasure by at least one former Googler who objected to Project Maven.

“It’s worrisome to note after the widespread internal dissent against Maven that Google would hire Andrew Moore,” said one former Google employee. “Googlers want less alignment with the military-industrial complex, not more. This hire is like a punch in the face to the over 4,000 Googlers who signed the Cancel Maven letter.”

Organizations Are Gearing Up for More Ethical and Responsible Use of Artificial Intelligence, Finds Study — from businesswire.com
Ninety-two percent of AI leaders train their technologists in ethics; 74 percent evaluate AI outcomes weekly, says report from SAS, Accenture Applied Intelligence, Intel, and Forbes Insights

Excerpt:

AI oversight is not optional

Despite popular messages suggesting AI operates independently of human intervention, the research shows that AI leaders recognize that oversight is not optional for these technologies. Nearly three-quarters (74 percent) of AI leaders reported careful oversight with at least weekly review or evaluation of outcomes (less successful AI adopters: 33 percent). Additionally, 43 percent of AI leaders shared that their organization has a process for augmenting or overriding results deemed questionable during review (less successful AI adopters: 28 percent).

Do robots have rights? Here’s what 10 people and 1 robot have to say — from createdigital.org.au
When it comes to the future of technology, nothing is straightforward, and that includes the array of ethical issues that engineers encounter through their work with robots and AI.

Google Cloud’s new AI chief is on a task force for AI military uses and believes we could monitor ‘pretty much the whole world’ with drones — from businessinsider.in by Greg Sandoval

  • Andrew Moore, the new chief of Google Cloud AI, co-chairs a task force on AI and national security with deep defense sector ties.
  • Moore leads the task force with Robert Work, the man who reportedly helped to create Project Maven.
  • Moore has given various talks about the role of AI and defense, once noting that it was now possible to deploy drones capable of surveilling “pretty much the whole world.”
  • One former Googler told Business Insider that the hiring of Moore is a “punch in the face” to those employees.

How AI can be a force for good — from science.sciencemag.org

Excerpt:

The AI revolution is equally significant, and humanity must not make the same mistake again. It is imperative to address new questions about the nature of post-AI societies and the values that should underpin the design, regulation, and use of AI in these societies. This is why initiatives like the abovementioned AI4People and IEEE projects, the European Union (EU) strategy for AI, the EU Declaration of Cooperation on Artificial Intelligence, and the Partnership on Artificial Intelligence to Benefit People and Society are so important (see the supplementary materials for suggested further reading). A coordinated effort by civil society, politics, business, and academia will help to identify and pursue the best strategies to make AI a force for good and unlock its potential to foster human flourishing while respecting human dignity.

Ethical regulation of the design and use of AI is a complex but necessary task. The alternative may lead to devaluation of individual rights and social values, rejection of AI-based innovation, and ultimately a missed opportunity to use AI to improve individual wellbeing and social welfare.

Robot wars — from ethicaljournalismnetwork.org by James Ball
How artificial intelligence will define the future of news

Excerpt:

There are two paths ahead in the future of journalism, and both of them are shaped by artificial intelligence.

The first is a future in which newsrooms and their reporters are robust: Thanks to the use of artificial intelligence, high-quality reporting has been enhanced. Not only do AI scripts manage the writing of simple day-to-day articles such as companies’ quarterly earnings updates, they also monitor and track masses of data for outliers, flagging these to human reporters to investigate.

Beyond business journalism, comprehensive sports stats AIs keep key figures in the hands of sports journalists, letting them focus on the games and the stories around them. The automated future has worked.

The alternative is very different. In this world, AI reporters have replaced their human counterparts and left accountability journalism hollowed out. Facing financial pressure, news organizations embraced AI to handle much of their day-to-day reporting, first for their financial and sports sections, then bringing in more advanced scripts capable of reshaping wire copy to suit their outlet’s political agenda. A few banner hires remain, but there is virtually no career path for those who would hope to replace them, and stories that can’t be tackled by AI are generally missed.


Alibaba looks to arm hotels, cities with its AI technology — from zdnet.com by Eileen Yu
Chinese internet giant is touting the use of artificial intelligence technology to arm drivers with real-time data on road conditions as well as robots in the hospitality sector, where they can deliver meals and laundry to guests.

Excerpt:

Alibaba A.I. Labs’ general manager Chen Lijuan said the new robots aimed to “bridge the gap” between guest needs and their expected response time. Describing the robot as the next evolution towards smart hotels, Chen said it tapped AI technology to address pain points in the hospitality sector, such as improving service efficiencies.

Alibaba is hoping the robot can ease hotels’ dependence on human labour by fulfilling a range of tasks, including delivering meals and taking the laundry to guests.

Accenture Introduces Ella and Ethan, AI Bots to Improve a Patient’s Health and Care Using the Accenture Intelligent Patient Platform — from marketwatch.com

Excerpt:

Accenture has enhanced the Accenture Intelligent Patient Platform with the addition of Ella and Ethan, two interactive virtual-assistant bots that use artificial intelligence (AI) to constantly learn and make intelligent recommendations for interactions between life sciences companies, patients, health care providers (HCPs) and caregivers. Designed to help improve a patient’s health and overall experience, the bots are part of Accenture’s Salesforce Fullforce Solutions powered by Salesforce Health Cloud and Einstein AI, as well as Amazon’s Alexa.

German firm’s 7 commandments for ethical AI — from france24.com

Excerpt:

FRANKFURT AM MAIN (AFP) –
German business software giant SAP on Tuesday published an ethics code to govern its research into artificial intelligence (AI), aiming to prevent the technology from infringing on people’s rights, displacing workers or inheriting biases from its human designers.

Why emerging technology needs to retain a human element — from forbes.com by Samantha Radocchia
Technology opens up new, unforeseen issues. And humans are necessary for solving the problems automated services can’t.

Excerpt (emphasis DSC):

With technological advancements comes change. Rather than avoiding new technology for as long as possible, and then accepting the inevitable, people need to be actively thinking about how it will change us as individuals and as a society.

Take your phone, for instance. The social media, gaming and news apps are built to keep you addicted so companies can collect data on you. They’re designed to be used constantly, so you come back for more the instant you feel the slightest twinge of boredom.

And yet, other apps—sometimes the same ones I just mentioned—allow you to instantly communicate with people around the world. Loved ones, colleagues, old friends—they’re all within reach now.

Make any technology decisions carefully, because their impact down the road may be tremendous.

This is part of the reason why there’s been a push lately for ethics to be a required part of any computer science or vocational training program. And it makes sense. If people want to create ethical systems, there’s a need to remember that actual humans are behind them. People make bad choices sometimes. They make mistakes. They aren’t perfect.


To ignore the human element in tech is to miss the larger point: Technology should be about empowering people to live their best lives, not making them fearful of the future.

It’s time to address artificial intelligence’s ethical problems — from wired.co.uk by Abigail Beall
AI is already helping us diagnose cancer and understand climate change, but regulation and oversight are needed to stop the new technology being abused

Excerpt:

The potential for AI to do good is immense, says Taddeo. Technology using artificial intelligence will have the capability to tackle issues “from environmental disasters to financial crises, from crime, terrorism and war, to famine, poverty, ignorance, inequality, and appalling living standards,” she says.

Yet AI is not without its problems. In order to ensure it can do good, we first have to understand the risks.

The potential problems that come with artificial intelligence include a lack of transparency about what goes into the algorithms. For example, an autonomous vehicle developed by researchers at the chip maker Nvidia went on the roads in 2016, without anyone knowing how it made its driving decisions.

‘The Beginning of a Wave’: A.I. Tiptoes Into the Workplace — from nytimes.com by Steve Lohr

Excerpt:

There is no shortage of predictions about how artificial intelligence is going to reshape where, how and if people work in the future.

But the grand work-changing projects of A.I., like self-driving cars and humanoid robots, are not yet commercial products. A more humble version of the technology, instead, is making its presence felt in a less glamorous place: the back office.

New software is automating mundane office tasks in operations like accounting, billing, payments and customer service. The programs can scan documents, enter numbers into spreadsheets, check the accuracy of customer records and make payments with a few automated computer keystrokes.

The technology is still in its infancy, but it will get better, learning as it goes. So far, often in pilot projects focused on menial tasks, artificial intelligence is freeing workers from drudgery far more often than it is eliminating jobs.

AI for Virtual Medical Assistants – 4 Current Applications — from techemergence.com by Kumba Sennaar

Excerpt:

In an effort to reduce the administrative burden of medical transcription and clinical documentation, researchers are developing AI-driven virtual assistants for the healthcare industry.

This article will set out to determine the answers to the following questions:

  • What types of AI applications are emerging to improve management of administrative tasks, such as logging medical information and appointment notes, in the medical environment?
  • How is the healthcare market implementing these AI applications?


Amazon’s Facial Recognition Wrongly Identifies 28 Lawmakers, A.C.L.U. Says — from nytimes.com by Natasha Singer

Excerpt:

In the test, the Amazon technology incorrectly matched 28 members of Congress with people who had been arrested, amounting to a 5 percent error rate among legislators.

The test disproportionally misidentified African-American and Latino members of Congress as the people in mug shots.

“This test confirms that facial recognition is flawed, biased and dangerous,” said Jacob Snow, a technology and civil liberties lawyer with the A.C.L.U. of Northern California.

On Thursday afternoon, three of the misidentified legislators — Senator Edward J. Markey of Massachusetts, Representative Luis V. Gutiérrez of Illinois and Representative Mark DeSaulnier of California, all Democrats — followed up with a letter to Jeff Bezos, the chief executive of Amazon, saying there are “serious questions regarding whether Amazon should be selling its technology to law enforcement at this time.”
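The 5 percent figure above is easy to verify. A minimal back-of-the-envelope check in Python follows; note that the 535 total (435 House members plus 100 senators) is my assumption about the test's scope, not a number stated in the excerpt:

```python
# The ACLU reported 28 false matches when scanning members of Congress
# against mugshot photos. Assuming all 535 members were scanned
# (435 House + 100 Senate), the error rate works out as follows:
misidentified = 28
members_scanned = 535  # assumption: the full Congress

error_rate = misidentified / members_scanned
print(f"{error_rate:.1%}")  # prints "5.2%", i.e. roughly the 5 percent cited
```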


Schools can now get facial recognition tech for free. Should they? — from wired.com by Issie Lapowsky

Excerpt:

Over the past two years, RealNetworks has developed a facial recognition tool that it hopes will help schools more accurately monitor who gets past their front doors. Today, the company launched a website where school administrators can download the tool, called SAFR, for free and integrate it with their own camera systems. So far, one school in Seattle, which Glaser’s kids attend, is testing the tool, and the state of Wyoming is designing a pilot program that could launch later this year. “We feel like we’re hitting something that there can be a social consensus around: that using facial recognition technology to make schools safer is a good thing,” Glaser says.


From DSC:
Personally, I’m very uncomfortable with where facial recognition is going in some societies. What starts off being sold as helpful for this or that application can quickly be abused and used to control a society’s citizens. For example, look at what’s happening in China already these days!

The above article talks about these techs being used in schools. Based upon history, I seriously question whether humankind can wisely handle the power of these types of technologies.

Here in the United States, I already sense a ton of cameras watching each of us all the time when we’re out in public spaces (such as when we are in grocery stores, or gas stations, or in restaurants or malls, etc.).  What’s the unspoken message behind those cameras?  What’s being stated by their very presence around us?

No. I don’t like the idea of facial recognition being in schools. I’m not comfortable with this direction. I can see the counterargument — that this tech could help reduce school shootings. But I think that’s a weak argument, as someone mentally unbalanced enough to be involved in a school shooting likely won’t be swayed/deterred by being on camera. In fact, one could argue that in some cases, being on the national news — with their face plastered all over the nation — might even add fuel to the fire.

Glaser, for one, welcomes federal oversight of this space. He says it’s precisely because of his views on privacy that he wants to be part of what is bound to be a long conversation about the ethical deployment of facial recognition. “This isn’t just sci-fi. This is becoming something we, as a society, have to talk about,” he says. “That means the people who care about these issues need to get involved, not just as hand-wringers but as people trying to provide solutions. If the only people who are providing facial recognition are people who don’t give a &*&% about privacy, that’s bad.”

Per this week’s Next e-newsletter from edsurge.com

Take the University of San Francisco, which deploys facial recognition software in its dormitories. Students still use their I.D. cards to swipe in, according to EdScoop, but the face of every person who enters a dorm is scanned and run through a database, which alerts the dorm attendant when an unknown person is detected. Online students are not immune: the technology is also used in many proctoring tools for virtual classes.

The tech raises plenty of tough issues. Facial-recognition systems have been shown to misidentify young people, people of color and women more often than white men. And then there are the privacy risks: “All collected data is at risk of breach or misuse by external and internal actors, and there are many examples of misuse of law enforcement data in other contexts,” a white paper by the Electronic Frontier Foundation reads.

It’s unclear whether such facial-scanners will become common at the gates of campus. But now that cost is no longer much of an issue for what used to be an idea found only in science fiction, it’s time to weigh the pros and cons of what such a system really means in practice.

Also see:

  • As facial recognition technology becomes pervasive, Microsoft (yes, Microsoft) issues a call for regulation — from techcrunch.com by Jonathan Shieber
    Excerpt:
    Technology companies have a privacy problem. They’re terribly good at invading ours and terribly negligent at protecting their own. And with the push by technologists to map, identify and index our physical as well as virtual presence with biometrics like face and fingerprint scanning, the increasing digital surveillance of our physical world is causing some of the companies that stand to benefit the most to call out to government to provide some guidelines on how they can use the incredibly powerful tools they’ve created. That’s what’s behind today’s call from Microsoft President Brad Smith for government to start thinking about how to oversee the facial recognition technology that’s now at the disposal of companies like Microsoft, Google, Apple and government security and surveillance services across the country and around the world.

Inside China’s Dystopian Dreams: A.I., Shame and Lots of Cameras — from nytimes.com by Paul Mozur

Excerpts:

ZHENGZHOU, China — In the Chinese city of Zhengzhou, a police officer wearing facial recognition glasses spotted a heroin smuggler at a train station.

In Qingdao, a city famous for its German colonial heritage, cameras powered by artificial intelligence helped the police snatch two dozen criminal suspects in the midst of a big annual beer festival.

In Wuhu, a fugitive murder suspect was identified by a camera as he bought food from a street vendor.

With millions of cameras and billions of lines of code, China is building a high-tech authoritarian future. Beijing is embracing technologies like facial recognition and artificial intelligence to identify and track 1.4 billion people. It wants to assemble a vast and unprecedented national surveillance system, with crucial help from its thriving technology industry.


In some cities, cameras scan train stations for China’s most wanted. Billboard-size displays show the faces of jaywalkers and list the names of people who don’t pay their debts. Facial recognition scanners guard the entrances to housing complexes. Already, China has an estimated 200 million surveillance cameras — four times as many as the United States.

Such efforts supplement other systems that track internet use and communications, hotel stays, train and plane trips and even car travel in some places.

A very slippery slope has now been set up in China with facial recognition infrastructure


From DSC:
A veeeeery slippery slope here. The use of this technology starts out as looking for criminals, but then what’s next? Jail time for people who disagree w/ a government official’s perspective on something? Persecution for people seen coming out of a certain place of worship?

Very troubling stuff here….

Below are some excerpted slides from Mary Meeker’s presentation…

Also see:

  • 20 important takeaways for learning world from Mary Meeker’s brilliant tech trends – from donaldclarkplanb.blogspot.com by Donald Clark
    Excerpt:
Mary Meeker’s slide deck has a reputation of being the Delphic Oracle of tech. But, at 294 slides, it’s a lot to take in. Don’t worry, I’ve been through them all. It has tons on economic stuff that is of marginal interest to education and training, but there’s plenty to get our teeth into. We’re not immune to tech trends; indeed, we tend to follow in lock-step, just a bit later than everyone else. Among the data are lots of fascinating insights that point the way forward in terms of what we’re likely to be doing over the next decade. So here’s a really quick, top-end summary for folk in the learning game.


“Educational content usage online is ramping fast” with over 1 billion daily educational videos watched. There is evidence that use of the Internet for informal and formal learning is taking off.

10 Big Takeaways From Mary Meeker’s Widely-Read Internet Report — from fortune.com by Leena Rao

Alexa creepily recorded a family’s private conversations, sent them to business associate — from usatoday.com by Elizabeth Weise

Excerpt:

In this instance, a random series of disconnected conversations got interpreted by Alexa as a specific and connected series of commands.

It doesn’t appear that the family members actually heard Alexa asking who it should send a message to, or confirming that it should be sent.

That’s probably a function of how good the Echo’s far-field voice recognition is. Each speaker has seven microphones, arrayed so that the cylindrical device can pick up voice commands from far away, or even in noisy rooms with lots of conversations going on.

Amazon says it is evaluating options to make cases like what happened to the Portland family less likely.

But given that Forrester predicts that by 2020, almost 50% of American households will contain a smart speaker, expect more such mix-ups in the future.

We love augmented reality, but let’s fix things that could become big problems — from techcrunch.com by Cyan Banister and Alex Hertel

Excerpts:

But as with any new technology, there are inherent risks we should acknowledge, anticipate, and deal with as soon as possible. If we do so, these technologies are likely to continue to thrive.

As wonderful as AR is and will continue to be, there are some serious privacy and security pitfalls, including dangers to physical safety, that as an industry we need to collectively avoid. There are also ongoing threats from cyber criminals and nation states bent on political chaos and worse — to say nothing of teenagers who can be easily distracted and fail to exercise judgement — all creating virtual landmines that could slow or even derail the success of AR. We love AR, and that’s why we’re calling out these issues now to raise awareness.

Mercedes-Benz looks to replace owner’s manual with AR app — by Bobby Carlton

Introducing two new mixed reality business applications: Microsoft Remote Assist and Microsoft Layout — from blogs.windows.com by Lorraine Bardeen

Excerpt:

Microsoft Remote Assist — Collaborate in mixed reality to solve problems faster
With Microsoft Remote Assist, we set out to create a HoloLens app that would help our customers collaborate remotely with heads-up, hands-free video calling, image sharing, and mixed-reality annotations. During the design process, we spent a lot of time with Firstline Workers. We asked ourselves, “How can we help Firstline Workers share what they see with an expert while staying hands-on to solve problems and complete tasks together, faster?” It was important to us that Firstline Workers be able to reach experts on whatever device they are using at the time, including PCs, phones, or tablets.

Microsoft Layout — Design spaces in context with mixed reality
With Microsoft Layout our goal was to build an app that would help people use HoloLens to bring designs from concept to completion using some of the superpowers mixed reality makes possible. With Microsoft Layout customers can import 3-D models to easily create and edit room layouts in real-world scale. Further, you can experience designs as high-quality holograms in physical space or in virtual reality and share and edit with stakeholders in real time.


From DSC:
Those involved with creating/enhancing learning spaces may want to experiment with Microsoft Layout.

Google Announces Major Update For ARCore — from vrfocus.com by Rebecca Hills-Duty
New capabilities and features are being introduced into Google’s AR toolset. 

Excerpt:

The new updates allow for collaborative AR experiences, such as playing multiplayer games or painting an AR community mural, using a capability called Cloud Anchors.

Chrome will let you have AR experiences, no app needed — from engadget.com by Chris Velazco
The future of the immersive web can’t come soon enough.

Students are being prepared for jobs that no longer exist. Here’s how that could change. — from nbcnews.com by Sarah Gonser, The Hechinger Report
As automation disrupts the labor market and good middle-class jobs disappear, schools are struggling to equip students with future-proof skills.

Excerpts:

In many ways, the future of Lowell, once the largest textile manufacturing hub in the United States, is tied to the success of students like Ben Lara. Like many cities across America, Lowell is struggling to find its economic footing as millions of blue-collar jobs in manufacturing, construction and transportation disappear, subject to offshoring and automation.

The jobs that once kept the city prosperous are being replaced by skilled jobs in service sectors such as health care, finance and information technology — positions that require more education than just a high-school diploma, thus squeezing out many of those blue-collar, traditionally middle-class workers.

As emerging technologies rapidly and thoroughly transform the workplace, some experts predict that by 2030, 400 million to 800 million people worldwide could be displaced and need to find new jobs. The ability to adapt and quickly acquire new skills will become a necessity for survival.

“We’re preparing kids for these jobs of tomorrow, but we really don’t even know what they are,” said Amy McLeod, the school’s director of curriculum, instruction and assessment. “It’s almost like we’re doing this with blinders on. … We’re doing all we can to give them the finite skills, the computer languages, the programming, but technology is expanding so rapidly, we almost can’t keep up.”

For students like Amber, who would rather do just about anything but go to school, the Pathways program serves another function: It makes learning engaging, maybe even fun, and possibly keeps her in school and on track to graduate.

“I think we’re turning kids off to learning in this country by putting them in rows and giving them multiple-choice tests — the compliance model,” McLeod said. “But my hope is that in the pathways courses, we’re teaching them to love learning. And they’re learning about options in the field — there’s plenty of options for kids to try here.”

AWS unveils ‘Transcribe’ and ‘Translate’ machine learning services — from business-standard.com

Excerpts:

  • Amazon “Transcribe” provides grammatically correct transcriptions of audio files to allow audio data to be analyzed, indexed and searched.
  • Amazon “Translate” provides natural sounding language translation in both real-time and batch scenarios.
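For readers who want to try these two services, here is a minimal sketch of calling both from Python with boto3 (the AWS SDK for Python). The job name, bucket path, and language codes are placeholder assumptions for illustration, not details from the article:

```python
# Minimal sketch of calling Amazon Transcribe and Amazon Translate via boto3.
# The job name, S3 URI, and language codes below are placeholder assumptions.

def start_transcription(transcribe_client, job_name, media_uri):
    """Start an asynchronous job that transcribes an audio file stored in S3."""
    return transcribe_client.start_transcription_job(
        TranscriptionJobName=job_name,
        Media={"MediaFileUri": media_uri},
        MediaFormat="mp3",
        LanguageCode="en-US",
    )

def translate_text(translate_client, text, source="en", target="es"):
    """Translate a string between two language codes and return the result."""
    response = translate_client.translate_text(
        Text=text, SourceLanguageCode=source, TargetLanguageCode=target
    )
    return response["TranslatedText"]

if __name__ == "__main__":
    import boto3  # actually running this requires AWS credentials

    start_transcription(
        boto3.client("transcribe"),
        "demo-job",
        "s3://my-example-bucket/interview.mp3",
    )
    print(translate_text(boto3.client("translate"), "Hello, world"))
```

Because both calls go out to AWS, the functions take the client as a parameter, which also makes them easy to exercise with a stub in tests.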

Google’s ‘secret’ smart city on Toronto’s waterfront sparks row — from bbc.com by Robin Levinson-King BBC News, Toronto

Excerpt:

The project was commissioned by the publicly funded organisation Waterfront Toronto, who put out calls last spring for proposals to revitalise the 12-acre industrial neighbourhood of Quayside along Toronto’s waterfront.

Prime Minister Justin Trudeau flew down to announce the agreement with Sidewalk Labs, which is owned by Google’s parent company Alphabet, last October, and the project has received international attention for being one of the first smart-cities designed from the ground up.

But five months later, few people have actually seen the full agreement between Sidewalk and Waterfront Toronto.

As council’s representative on Waterfront Toronto’s board, Mr Minnan-Wong is the only elected official to actually see the legal agreement in full. Not even the mayor knows what the city has signed on for.

“We got very little notice. We were essentially told ‘here’s the agreement, the prime minister’s coming to make the announcement,'” he said.

“Very little time to read, very little time to absorb.”

Now, his hands are tied – he is legally not allowed to comment on the contents of the sealed deal, but he has been vocal about his belief it should be made public.

“Do I have concerns about the content of that agreement? Yes,” he said.

“What is it that is being hidden, why does it have to be secret?”

From DSC:
Google needs to be very careful here. Increasingly so these days, our trust in them (and other large tech companies) is at stake.

Addendum on 4/16/18 with thanks to Uros Kovacevic for this resource:
Human lives saved by robotic replacements — from injuryclaimcoach.com

Excerpt:

For academics and average workers alike, the prospect of automation provokes concern and controversy. As the American workplace continues to mechanize, some experts see harsh implications for employment, including the loss of 73 million jobs by 2030. Others maintain more optimism about the fate of the global economy, contending technological advances could grow worldwide GDP by more than $1.1 trillion in the next 10 to 15 years. Whatever we make of these predictions, there’s no question automation will shape the economic future of the nation – and the world.

But while these fiscal considerations are important, automation may positively affect an even more essential concern: human life. Every day, thousands of Americans risk injury or death simply by going to work in dangerous conditions. If robots replaced them, could hundreds of lives be saved in the years to come?

In this project, we studied how many fatal injuries could be averted if dangerous occupations were automated. To do so, we analyzed which fields are most deadly and the likelihood of their automation according to expert predictions. To see how automation could save Americans’ lives, keep reading.

Also related to this item is:
How AI is improving the landscape of work  — from forbes.com by Laurence Bradford

Excerpts:

There have been a lot of sci-fi stories written about artificial intelligence. But now that it’s actually becoming a reality, how is it really affecting the world? Let’s take a look at the current state of AI and some of the things it’s doing for modern society.

  • Creating New Technology Jobs
  • Using Machine Learning To Eliminate Busywork
  • Preventing Workplace Injuries With Automation
  • Reducing Human Error With Smart Algorithms

From DSC:
This is clearly a pro-AI piece. Not all uses of AI are beneficial, but this article mentions several use cases where AI can make positive contributions to society.

It’s About Augmented Intelligence, not Artificial Intelligence — from informationweek.com
The adoption of AI applications isn’t about replacing workers but helping workers do their jobs better.

From DSC:
This article is also a pro-AI piece. But again, not all uses of AI are beneficial. We need to be aware of — and involved in — what is happening with AI.

Investing in an Automated Future — from clomedia.com by Mariel Tishma
Employers recognize that technological advances like AI and automation will require employees with new skills. Why are so few investing in the necessary learning?

SXSW 2018: Key trends — from jwtintelligence.com by Marie Stafford w/ contributions by Sarah Holbrook

Excerpt:

Ethics & the Big Tech Backlash
What a difference a week makes. As the Cambridge Analytica scandal broke last weekend, the curtain was already coming down on SXSW. Even without this latest bombshell, the discussion around ethics in technology was animated, with more than 10 panels devoted to the theme. From misinformation to surveillance, from algorithmic bias to the perils of artificial intelligence (hi Elon!), speakers grappled with the weighty issue of how to ensure technology works for the good of humanity.

The Human Connection
When technology provokes this much concern, it’s perhaps natural that people should seek respite in human qualities like empathy, understanding and emotional connection.

In a standout keynote, couples therapist Esther Perel gently berated the SXSW audience for neglecting to focus on human relationships. “The quality of your relationships,” she said, “is what determines the quality of your life.”

China’s New Frontiers in Dystopian Tech — from theatlantic.com by Rene Chun
Facial-recognition technologies are proliferating, from airports to bathrooms.

Excerpt:

China is rife with face-scanning technology worthy of Black Mirror. Don’t even think about jaywalking in Jinan, the capital of Shandong province. Last year, traffic-management authorities there started using facial recognition to crack down. When a camera mounted above one of 50 of the city’s busiest intersections detects a jaywalker, it snaps several photos and records a video of the violation. The photos appear on an overhead screen so the offender can see that he or she has been busted, then are cross-checked with the images in a regional police database. Within 20 minutes, snippets of the perp’s ID number and home address are displayed on the crosswalk screen. The offender can choose among three options: a 20-yuan fine (about $3), a half-hour course in traffic rules, or 20 minutes spent assisting police in controlling traffic. Police have also been known to post names and photos of jaywalkers on social media.

The technology’s veneer of convenience conceals a dark truth: Quietly and very rapidly, facial recognition has enabled China to become the world’s most advanced surveillance state. A hugely ambitious new government program called the “social credit system” aims to compile unprecedented data sets, including everything from bank-account numbers to court records to internet-search histories, for all Chinese citizens. Based on this information, each person could be assigned a numerical score, to which points might be added for good behavior like winning a community award, and deducted for bad actions like failure to pay a traffic fine. The goal of the program, as stated in government documents, is to “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step.”
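The point-tally mechanic the excerpt describes can be pictured as a simple running score. In this toy sketch, the event names and point values are invented purely for illustration; nothing about the actual system's scoring is public:

```python
# Toy illustration of a running point tally. The event names and point
# values are invented for this example, not taken from the article.
POINT_VALUES = {
    "community_award": 10,      # rewarded behavior (example from the excerpt)
    "unpaid_traffic_fine": -5,  # penalized behavior (example from the excerpt)
}

def updated_score(score, events):
    """Apply a sequence of recorded events to a running score.

    Unrecognized events leave the score unchanged.
    """
    for event in events:
        score += POINT_VALUES.get(event, 0)
    return score

# e.g. updated_score(100, ["community_award", "unpaid_traffic_fine"]) -> 105
```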

 
© 2024 | Daniel Christian