Facial recognition has to be regulated to protect the public, says AI report — from technologyreview.com by Will Knight
The research institute AI Now has identified facial recognition as a key challenge for society and policymakers—but is it too late?

Excerpt (emphasis DSC):

Artificial intelligence has made major strides in the past few years, but those rapid advances are now raising some big ethical conundrums.

Chief among them is the way machine learning can identify people’s faces in photos and video footage with great accuracy. This might let you unlock your phone with a smile, but it also means that governments and big corporations have been given a powerful new surveillance tool.

A new report from the AI Now Institute (large PDF), an influential research institute based in New York, has just identified facial recognition as a key challenge for society and policymakers.

 

Also see:

EXECUTIVE SUMMARY
At the core of the cascading scandals around AI in 2018 are questions of accountability: who is responsible when AI systems harm us? How do we understand these harms, and how do we remedy them? Where are the points of intervention, and what additional research and regulation is needed to ensure those interventions are effective? Currently there are few answers to these questions, and the frameworks presently governing AI are not capable of ensuring accountability. As the pervasiveness, complexity, and scale of these systems grow, the lack of meaningful accountability and oversight – including basic safeguards of responsibility, liability, and due process – is an increasingly urgent concern.

Building on our 2016 and 2017 reports, the AI Now 2018 Report contends with this central
problem and addresses the following key issues:

  1. The growing accountability gap in AI, which favors those who create and deploy these
    technologies at the expense of those most affected
  2. The use of AI to maximize and amplify surveillance, especially in conjunction with facial
    and affect recognition, increasing the potential for centralized control and oppression
  3. Increasing government use of automated decision systems that directly impact individuals and communities without established accountability structures
  4. Unregulated and unmonitored forms of AI experimentation on human populations
  5. The limits of technological solutions to problems of fairness, bias, and discrimination

Within each topic, we identify emerging challenges and new research, and provide recommendations regarding AI development, deployment, and regulation. We offer practical pathways informed by research so that policymakers, the public, and technologists can better understand and mitigate risks. Given that the AI Now Institute’s location and regional expertise is concentrated in the U.S., this report will focus primarily on the U.S. context, which is also where several of the world’s largest AI companies are based.

 

 

From DSC:
As I said in this posting, we need to be aware of the emerging technologies around us. Just because we can, doesn’t mean we should. People need to be aware of — and involved with — which emerging technologies get rolled out (or not) and/or which features are beneficial to roll out (or not).

One of the things that alarms me these days is how the United States has turned over the keys to the Maserati — that is, to an expensive, powerful machine — to youth who lack the life experience to handle such power and, often, the proper respect for it. Many of these younger members of our society don’t own responsibility for the positive and negative impacts that such powerful technologies can have (and the more senior execs have not taken enough responsibility either)!

If you owned the car below — a ~$137,000+ machine — would you turn the keys over to your 16- to 25-year-old? Yet that’s what America has been doing for years. And, in some areas, we’re now paying the price.

 

If you owned this $137,000+ car, would you turn the keys over to your 16- to 25-year-old?!

 

The corporate world continues to discard the hard-earned experience that age brings…as it shoves older people out of the workforce. (I hesitate to use the word wisdom…but in some cases, that’s also relevant here.) Then we, as a society, sit back and wonder how we got to this place.

Even technologists and programmers in their 20s and 30s are beginning to step back and ask…WHY did we develop this application or that feature? Was it — is it — good for society? Is it beneficial? Or should it be tabled or revised into something else?

Below is but one example — though I don’t mean to pick on Microsoft, as they likely have more older workers than the Facebooks, Googles, or Amazons of the world. I fully realize that all of these companies have some older employees. But the youth-oriented culture in America today has almost become an obsession — and not just in the tech world. Turn on the TV, check out the new releases on Netflix, go see a movie in a theater, listen to the radio, cast but a glance at the magazines in the checkout lines, and you’ll instantly know what I mean.

In the workplace, there appears to be a bias against older employees as being less innovative or tech-savvy — a perspective that is often completely incorrect. Check out LinkedIn for items on age discrimination…it’s a very real thing. And many of us over the age of 30 know this to be true if we’ve lost a job in the last decade or two and have tried to get a job that involves technology.

 

Microsoft argues facial-recognition tech could violate your rights — from finance.yahoo.com by Rob Pegoraro

Excerpt (emphasis DSC):

On Thursday, the American Civil Liberties Union provided a good reason for us to think carefully about the evolution of facial-recognition technology. In a study, the group used Amazon’s (AMZN) Rekognition service to compare portraits of members of Congress to 25,000 arrest mugshots. The result: 28 members were mistakenly matched with 28 suspects.
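For readers curious about the mechanics behind a test like the ACLU’s, here is a minimal sketch of a single pairwise comparison using Amazon Rekognition’s CompareFaces API via boto3. It is an illustration only — not the ACLU’s actual pipeline — and the S3 bucket and object names are hypothetical; the number of “matches” such a test reports depends heavily on the similarity threshold chosen.

```python
# Minimal sketch (not the ACLU's pipeline): compare one portrait against one
# mugshot with Amazon Rekognition's CompareFaces API via boto3.
# The S3 bucket and object keys below are hypothetical placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def compare_portrait_to_mugshot(portrait_key, mugshot_key,
                                bucket="example-face-images",
                                threshold=80.0):
    """Return any face matches at or above the given similarity threshold."""
    response = rekognition.compare_faces(
        SourceImage={"S3Object": {"Bucket": bucket, "Name": portrait_key}},
        TargetImage={"S3Object": {"Bucket": bucket, "Name": mugshot_key}},
        SimilarityThreshold=threshold,  # results are very sensitive to this value
    )
    return [(m["Similarity"], m["Face"]["BoundingBox"])
            for m in response.get("FaceMatches", [])]

# Hypothetical usage:
# matches = compare_portrait_to_mugshot("portraits/member_001.jpg",
#                                       "mugshots/arrest_12345.jpg")
# if matches:
#     print(f"Possible match at {matches[0][0]:.1f}% similarity")
```

Scanning every portrait against 25,000 mugshots would repeat a comparison like this over every pair (or, more efficiently, index the mugshots into a face collection and search it), which is why the threshold setting matters so much at that scale.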

The ACLU isn’t the only group raising the alarm about the technology. Earlier this month, Microsoft (MSFT) president Brad Smith posted an unusual plea on the company’s blog asking that the development of facial-recognition systems not be left up to tech companies.

Saying that the tech “raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression,” Smith called for “a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.”

But we may not get new laws anytime soon.

 

just because we can does not mean we should

 

Just because we can…

 


 

 

5 questions we should be asking about automation and jobs — from hbr.org by Jed Kolko

Excerpts:

  1. Will workers whose jobs are automated be able to transition to new jobs?*
  2. Who will bear the burden of automation?
  3. How will automation affect the supply of labor?
  4. How will automation affect wages, and how will wages affect automation?
  5. How will automation change job searching?

 

From DSC:
For those Economics profs and students out there, I’m posting this with you in mind; it is also highly applicable and relevant to MBA programs.

* I would add a few follow-up questions to question #1 above:

  • To which jobs should they transition?
  • Who can help identify the jobs that might be safe for 5-10 years?
  • If you have a family to feed, how are you going to be able to reinvent yourself quickly and as efficiently/flexibly as possible? (Yes…constant, online-based learning comes to my mind as well, as campus-based education is great, but very time-consuming.)

 

Also see:

We Still Don’t Know Much About the Jobs the AI Economy Will Make — or Take — from medium.com by Rachel Metz with MIT Technology Review
Experts think companies need to invest in workers the way they do for other core aspects of their business they’re looking to future-proof

One big problem that could have lasting effects, she thinks, is a mismatch between the skills companies need in new employees and those that employees have or know that they can readily acquire. To fix this, she said, companies need to start investing in their workers the way they do their supply chains.

 

Per LinkedIn:

Putting robots to work is becoming more and more popular — particularly in Europe. According to the European Bank for Reconstruction and Development, Slovakian workers face a 62% median probability that their job will be automated “in the near future.” Workers in Eastern Europe face the biggest likelihood of having their jobs overtaken by machines, with the textile, agriculture and manufacturing industries seen as the most vulnerable.

 

Robot Ready: Human+ Skills for the Future of Work — from economicmodeling.com

Key Findings

In Robot-Ready, we examine several striking insights:

1. Human skills—like leadership, communication, and problem solving—are among the most in-demand skills in the labor market.

2. Human skills are applied differently across career fields. To be effective, liberal arts grads must adapt their skills to the job at hand.

3. Liberal arts grads should add technical skills. There is considerable demand for workers who complement their human skills with basic technical skills like data analysis and digital fluency.

4. Human+ skills are at work in a variety of fields. Human skills help liberal arts grads thrive in many career areas, including marketing, public relations, technology, and sales.

 

 

 

 

100 voices of AR/VR in education — from virtualiteach.com

 

 

Ambitious VR experience restores 7,000 Roman buildings, monuments to their former glory  — from smithsonianmag.com by Meilan Solly
You can take an aerial tour of the city circa 320 A.D. or stop by specific sites for in-depth exploration

Excerpt:

Ever wish you could step into a hot air balloon, travel back in time to 320 A.D., and soar over the streets of Ancient Rome? Well, that oddly specific fantasy is achievable in a new virtual reality experience called “Rome Reborn.”

The ambitious undertaking, painstakingly built by a team of 50 academics and computer experts over a 22-year period, recreates 7,000 buildings and monuments scattered across a 5.5 square mile stretch of the famed Italian city. The project, according to Tom Kington of the Times, is being marketed as the largest digital reconstruction of Rome to date.


A snapshot from Rome Reborn

 

VR Isn’t a Novelty: Here’s How to Integrate it Into the Curriculum — from edsurge.com by Jan Sikorsky

Excerpt:

While the application of VR to core academics remains nascent, early returns are promising: research now suggests students retain more information and can better synthesize and apply what they have learned after participating in virtual reality exercises.

And the technology is moving within the reach of classroom teachers. While once considered high-end and cost-prohibitive, virtual reality is becoming more affordable. Discovery VR and Google Expeditions offer several virtual reality experiences for free. Simple VR viewers now come in relatively low-cost DIY cardboard view boxes, like Google Cardboard, that fit a range of VR-capable smartphones.

Still, teachers may remain unsure of how they might implement such cutting-edge technology in their classrooms. Their concerns are well founded. Virtual reality takes careful planning and implementation for success. It’s not simply plug-and-play technology. It also takes a lot of work to develop.


From DSC:

Reduced costs & greater development efficiencies needed here:

“In our case, to create just 10 minutes of simulation, a team of six developers logged almost 1,000 hours of development time.”

 

 

Unveiling RLab: the First-City Funded VR/AR Center in the Country Opens Doors at Brooklyn Navy Yard — from prnewswire.com
New York City’s Virtual and Augmented Reality Center Will Fuel Innovation, Entrepreneurship, and Education, While Creating Hundreds of Well-Paying Jobs

Excerpt:

BROOKLYN, N.Y., Oct. 24, 2018 /PRNewswire/ — New York City Economic Development Corporation (NYCEDC), the Mayor’s Office of Media and Entertainment (MOME), the NYU Tandon School of Engineering and the Brooklyn Navy Yard today announced the launch of RLab – the first City-funded virtual and augmented reality (VR/AR) lab in the country. Administered by NYU Tandon with a participating consortium of New York City universities, including Columbia University, CUNY, and The New School, RLab will operate out of Building 22 in the Brooklyn Navy Yard and will cement New York City’s status as a global leader in VR/AR, creating over 750 jobs in the industry.

 

 

New virtual reality lab at UNMC — from wowt.com

 

 

 

 

This VR-live actor mashup is like your best absinthe-fueled nightmare — from cnet.com by Joan Solsman
Chained, an immersive reimagining of Dickens’ A Christmas Carol, weds virtual reality with a motion-capture live actor. Could it be the gateway that makes VR a hit?

 

 

Also see:

 

…and this as well:

 

See the results of a months-long effort to create a HoloLens experience that pays homage to Mont-Saint-Michel, in Normandy, France, in all its forms – as a physical relief map and work of art; as a real place visited by millions of people over the centuries; and as a remarkable digital story of resilience. In this three-part Today in Technology series, they examine how AI and mixed reality can open a new window into French culture by using technology like HoloLens.

 

 

 

Mama Mia It’s Sophia: A Show Robot Or Dangerous Platform To Mislead? — from forbes.com by Noel Sharkey

Excerpts:

A collective eyebrow was raised by the AI and robotics community when the robot Sophia was given Saudi citizenship in 2017. The AI sharks were already circling as Sophia’s fame spread with worldwide media attention. Were they just jealous buzz-kills or is something deeper going on? Sophia has gripped the public imagination with its interesting and fun appearances on TV and on high-profile conference platforms.

Sophia is not the first show robot to attain celebrity status. Yet accusations of hype and deception have proliferated about the misrepresentation of AI to public and policymakers alike. In an AI-hungry world where decisions about the application of the technologies will impact significantly on our lives, Sophia’s creators may have crossed a line. What might the negative consequences be? To get answers, we need to place Sophia in the context of earlier show robots.

 

 

A dangerous path for our rights and security
For me, the biggest problem with the hype surrounding Sophia is that we have entered a critical moment in the history of AI where informed decisions need to be made. AI is sweeping through the business world and being delegated decisions that impact significantly on people’s lives — from mortgage and loan applications, to job interviews, to prison sentences and bail guidance, to transport and delivery services, to medicine and care.

It is vitally important that our governments and policymakers are strongly grounded in the reality of AI at this time and are not misled by hype, speculation, and fantasy. It is not clear how much the Hanson Robotics team are aware of the dangers that they are creating by appearing on international platforms with government ministers and policymakers in the audience.

 

 

The rise of crypto in higher education — from blog.coinbase.com
Coinbase regularly engages with students and universities across the country as part of recruiting efforts. We partnered with Qriously to ask students directly about their thoughts on crypto and blockchain — and in this report, we outline findings on the growing roster of crypto and blockchain courses amid a steady rise in student interest.

 

Key Findings

  • 42 percent of the world’s top 50 universities now offer at least one course on crypto or blockchain
  • Students from a range of majors are interested in crypto and blockchain courses — and universities are adding courses across a variety of departments
  • Original Coinbase research includes a Qriously survey of 675 U.S. students, a comprehensive review of courses at 50 international universities, and interviews with professors and students

 

Also see:

 

On the downside of this area of technology:

 

 

An open letter to Microsoft and Google’s Partnership on AI — from wired.com by Gerd Leonhard
In a world where machines may have an IQ of 50,000, what will happen to the values and ethics that underpin privacy and free will?

Excerpt:

This open letter is my modest contribution to the unfolding of this new partnership. Data is the new oil – which now makes your companies the most powerful entities on the globe, way beyond oil companies and banks. The rise of ‘AI everywhere’ is certain to only accelerate this trend. Yet unlike the giants of the fossil-fuel era, there is little oversight on what exactly you can and will do with this new data-oil, and what rules you’ll need to follow once you have built that AI-in-the-sky. There appears to be very little public stewardship, while accepting responsibility for the consequences of your inventions is rather slow in surfacing.

 

In a world where machines may have an IQ of 50,000 and the Internet of Things may encompass 500 billion devices, what will happen with those important social contracts, values and ethics that underpin crucial issues such as privacy, anonymity and free will?

 

 

My book identifies what I call the “Megashifts”. They are changing society at warp speed, and your organisations are in the eye of the storm: digitization, mobilisation and screenification, automation, intelligisation, disintermediation, virtualisation and robotisation, to name the most prominent. Megashifts are not simply trends or paradigm shifts, they are complete game changers transforming multiple domains simultaneously.

 

 

If the question is no longer about if technology can do something, but why…who decides this?

Gerd Leonhard

 

 

From DSC:
Though this letter was written back in October 2016, the messages, reflections, and questions that Gerd puts on the table are still very relevant today. The leaders of these powerful companies have enormous power — power to do good or to do evil. Power to help or power to hurt. Power to be a positive force for societies throughout the globe and to help create dreams, or power to create dystopian societies and a future filled with nightmares. The state of the human heart is key here — though many will hate me saying that. But it’s true. At the end of the day, we very much need to care about — and be aware of — the characters and values of the leaders of these powerful companies.

 

 

Also relevant/see:

Spray-on antennas will revolutionize the Internet of Things — from networkworld.com by Patrick Nelson
Researchers at Drexel University have developed a method to spray on antennas that outperform traditional metal antennas, opening the door to faster and easier IoT deployments.

 From DSC:
Again, it’s not too hard to imagine in this arena that technologies can be used for good or for ill.

 

 

This is how the Future Today Institute researches, models & maps the future & develops strategies

 


 

Also see what the Institute for the Future does in this regard

Foresight Tools
IFTF has pioneered tools and methods for building foresight ever since its founding days. Co-founder Olaf Helmer was the inventor of the Delphi Method, and early projects developed cross-impact analysis and scenario tools. Today, IFTF is methodologically agnostic, with a brimming toolkit that includes the following favorites…
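As a concrete (if toy) illustration of the Delphi Method mentioned above — anonymous expert estimates, summarized and fed back to the panel over successive rounds until the answers stabilize — here is a minimal sketch. It is purely illustrative, assumes nothing about IFTF’s actual tooling, and the question and figures are invented.

```python
# Toy illustration of the Delphi Method's core loop: collect anonymous expert
# estimates, feed the group's summary back, repeat until estimates stabilize.
# (Not IFTF tooling; the question and numbers below are invented.)
from statistics import quantiles

def delphi_round(estimates):
    """Summarize one round of anonymous estimates for feedback to the panel."""
    q1, med, q3 = quantiles(sorted(estimates.values()), n=4)
    return {"median": med, "interquartile_range": (q1, q3), "panel_size": len(estimates)}

# Hypothetical question: "In what year will most new cars sold be autonomous?"
round_1 = {"expert_a": 2032, "expert_b": 2045, "expert_c": 2038, "expert_d": 2060}
print(delphi_round(round_1))   # the panel sees this summary, then revises

round_2 = {"expert_a": 2035, "expert_b": 2042, "expert_c": 2038, "expert_d": 2050}
print(delphi_round(round_2))   # rounds repeat until the spread narrows
```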

 

 

From DSC:
How might higher education use this foresight workflow? How might we better develop a future-oriented mindset?

From my perspective, I think that we need to be pulse-checking a variety of landscapes, looking for those early signals. We need to be thinking about what should be on our radars. Then we need to develop some potential scenarios and strategies to deal with those potential scenarios if they occur. Graphically speaking, here’s an excerpted slide from my introductory piece for a NGLS 2017 panel that we did.

 

 

 

This resource regarding their foresight workflow was mentioned in a recent FTI e-newsletter, which also included this important item:

  • Climate change: a megatrend that impacts us all
    Excerpt:
    Earlier this week, the United Nations’ scientific panel on climate change issued a dire report [PDF]. To say the report is concerning would be a dramatic understatement. Models built by the scientists show that at our current rate, the atmosphere will warm as much as 1.5 degrees Celsius, leading to a dystopian future of food shortages, wildfires, extreme winters, a mass die-off of coral reefs and more –– as soon as 2040. That’s just 20 years away from now.

 

But China also decided to ban the import of foreign plastic waste –– which includes trash from around the U.S. and Europe. The U.S. alone could wind up with an extra 37 million metric tons of plastic waste, and we don’t have a plan for what to do with it all.

 

Immediate Futures Scenarios: Year 2019

  • Optimistic: Climate change is depoliticized. Leaders in the U.S., Brazil and elsewhere decide to be the heroes, and invest resources into developing solutions to our climate problem. We understand that fixing our futures isn’t only about foregoing plastic straws, but about systemic change. Not all solutions require regulation. Businesses and everyday people are incentivized to shift behavior. Smart people spend the next two decades collaborating on plausible solutions.
  • Pragmatic: Climate change continues to be debated, while extreme weather events cause damage to our power grid, wreak havoc on travel, cause school cancellations, and devastate our farms. The U.S. fails to work on realistic scenarios and strategies to combat the growing problem of climate change. More countries elect far-right leaders, who shun global environmental accords and agreements. By 2029, it’s clear that we’ve waited too long, and that we’re running out of time to adapt.
  • Catastrophic: A chorus of voices calling climate change a hoax grows ever louder in some of the world’s largest economies, whose leaders choose immediate political gain over longer-term consequences. China builds an environmental coalition of 100 countries within the decade, developing green infrastructure while accumulating debt service. Beijing sets global emissions standards — and it locks the U.S. out of trading with coalition members. Trash piles up in the U.S., which didn’t plan ahead for waste management. By 2040, our population centers have moved inland and further north, our farms are decimated, our lives are miserable.

Watchlist: United Nations’ Intergovernmental Panel on Climate Change; European Geosciences Union; National Oceanic and Atmospheric Administration (NOAA); NASA; Department of Energy; Department of Homeland Security; House Armed Services Sub-committee on Emerging Threats and Capabilities; Environmental Justice Foundation; Columbia University’s Earth Institute; University of North Carolina at Wilmington; Potsdam Institute for Climate Impact Research; National Center for Atmospheric Research.

 

What does the Top Tools for Learning 2018 list tell us about the future direction of L&D? — from modernworkplacelearning.com by Jane Hart

Excerpt:

But for me 3 key things jump out:

  1. More and more people are learning for themselves – in whatever way that suits them best – whether it is finding resources or online courses on the Web or interacting with their professional network. And they do all this for a variety of reasons: to solve problems, self-improve and prepare themselves for the future, etc.
  2. Learning at work is becoming more personal and continuous in that it is a key part of many professionals’ working days. And what’s more, people are not only organising their own learning activities, they are also managing their own development – either with (informal) digital notebooks or with (formal) personal learning platforms.
  3. But it is in team collaboration where most of their daily learning takes place, and many now recognise and value the social collaboration platforms that underpin their daily interactions with colleagues as part of their daily work.

In other words, many people now see workplace learning as not just something that happens irregularly in corporate training, but as a continuous and on demand activity.

 


From DSC:
Reminds me of tapping into — and contributing towards — streams of content. All the time. Continuous, lifelong learning.

 

 


 

 

 

NEW: The Top Tools for Learning 2018 [Jane Hart]

The Top Tools for Learning 2018 from the 12th Annual Digital Learning Tools Survey -- by Jane Hart

 

The above was from Jane’s posting 10 Trends for Digital Learning in 2018 — from modernworkplacelearning.com by Jane Hart

Excerpt:

[On 9/24/18], I released the Top Tools for Learning 2018, which I compiled from the results of the 12th Annual Digital Learning Tools Survey.

I have also categorised the tools into 30 different areas, and produced 3 sub-lists that provide some context to how the tools are being used:

  • Top 100 Tools for Personal & Professional Learning 2018 (PPL100): the digital tools used by individuals for their own self-improvement, learning and development – both inside and outside the workplace.
  • Top 100 Tools for Workplace Learning (WPL100): the digital tools used to design, deliver, enable and/or support learning in the workplace.
  • Top 100 Tools for Education (EDU100): the digital tools used by educators and students in schools, colleges, universities, adult education etc.

 

3 – Web courses are increasing in popularity.
Although Coursera is still the most popular web course platform, there are, in fact, now 12 web course platforms on the list. New additions this year include Udacity and Highbrow (the latter provides daily micro-lessons). It is clear that people like these platforms because they can choose what they want to study as well as how they want to study, i.e. they can dip in and out if they want to and no one is going to tell them off – which is unlike most corporate online courses, which have a prescribed path through them and whose use is heavily monitored.

 

 

5 – Learning at work is becoming personal and continuous.
The most significant feature of the list this year is the huge leap that Degreed has made – up 86 places to 47th place – the biggest increase by any tool this year. Degreed is a lifelong learning platform that provides the opportunity for individuals to own their expertise and development through a continuous learning approach. Interestingly, Degreed appears on both the PPL100 (at 30) and the WPL100 (at 52). This suggests that some organisations are beginning to see the importance of personal, continuous learning at work. Indeed, another platform that underpins this has also moved up the list significantly this year. Anders Pink is a smart curation platform, available for both individuals and teams, which delivers daily curated resources on specified topics. Non-traditional learning platforms are therefore coming to the forefront, as the next point further shows.

 

 

From DSC:
Perhaps some foreshadowing of the presence of a powerful, online-based, next generation learning platform…?

 

 

 

How AI could help solve some of society’s toughest problems — from MIT Tech Review by Charlotte Jee
Machine learning and game theory help Carnegie Mellon assistant professor Fei Fang predict attacks and protect people.

Excerpt:

At MIT Technology Review’s EmTech conference, Fang outlined recent work across academia that applies AI to protect critical national infrastructure, reduce homelessness, and even prevent suicides.

 

 

Google Cloud’s new AI chief is on a task force for AI military uses and believes we could monitor ‘pretty much the whole world’ with drones — from businessinsider.in by Greg Sandoval

  • Andrew Moore, the new chief of Google Cloud AI, co-chairs a task force on AI and national security with deep defense sector ties.
  • Moore leads the task force with Robert Work, the man who reportedly helped to create Project Maven.
  • Moore has given various talks about the role of AI and defense, once noting that it was now possible to deploy drones capable of surveilling “pretty much the whole world.”
  • One former Googler told Business Insider that the hiring of Moore is a “punch in the face” to those employees.

 

 

How AI can be a force for good — from science.sciencemag.org

Excerpt:

The AI revolution is equally significant, and humanity must not make the same mistake again. It is imperative to address new questions about the nature of post-AI societies and the values that should underpin the design, regulation, and use of AI in these societies. This is why initiatives like the abovementioned AI4People and IEEE projects, the European Union (EU) strategy for AI, the EU Declaration of Cooperation on Artificial Intelligence, and the Partnership on Artificial Intelligence to Benefit People and Society are so important (see the supplementary materials for suggested further reading). A coordinated effort by civil society, politics, business, and academia will help to identify and pursue the best strategies to make AI a force for good and unlock its potential to foster human flourishing while respecting human dignity.

 

 

Ethical regulation of the design and use of AI is a complex but necessary task. The alternative may lead to devaluation of individual rights and social values, rejection of AI-based innovation, and ultimately a missed opportunity to use AI to improve individual wellbeing and social welfare.

 

 

Robot wars — from ethicaljournalismnetwork.org by James Ball
How artificial intelligence will define the future of news

Excerpt:

There are two paths ahead in the future of journalism, and both of them are shaped by artificial intelligence.

The first is a future in which newsrooms and their reporters are robust: Thanks to the use of artificial intelligence, high-quality reporting has been enhanced. Not only do AI scripts manage the writing of simple day-to-day articles such as companies’ quarterly earnings updates, they also monitor and track masses of data for outliers, flagging these to human reporters to investigate.

Beyond business journalism, comprehensive sports stats AIs keep key figures in the hands of sports journalists, letting them focus on the games and the stories around them. The automated future has worked.

The alternative is very different. In this world, AI reporters have replaced their human counterparts and left accountability journalism hollowed out. Facing financial pressure, news organizations embraced AI to handle much of their day-to-day reporting, first for their financial and sports sections, then bringing in more advanced scripts capable of reshaping wire copy to suit their outlet’s political agenda. A few banner hires remain, but there is virtually no career path for those who would hope to replace them — and stories that can’t be tackled by AI are generally missed.
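The first of Ball’s two futures leans on AI scripts that monitor masses of data and flag outliers for human reporters to investigate. As a purely illustrative sketch of that flagging step — not any newsroom’s actual system — a robust screen based on the median absolute deviation might look like this (the revenue figures are invented):

```python
# Illustrative sketch of "flag outliers for a human reporter" -- a robust
# modified z-score screen using the median absolute deviation (MAD).
# Real newsroom systems are far more elaborate; the figures below are invented.
from statistics import median

def flag_for_reporters(label, values, threshold=3.5):
    """Return alerts for values whose modified z-score exceeds the threshold."""
    if len(values) < 4:
        return []                      # too little history to judge
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return []                      # series is flat; nothing stands out
    alerts = []
    for i, v in enumerate(values):
        score = 0.6745 * (v - med) / mad   # 0.6745 scales MAD to roughly one std dev
        if abs(score) >= threshold:
            alerts.append(f"{label}: value {v} (period {i}) looks anomalous "
                          f"(modified z-score {score:+.1f}) -- worth a closer look")
    return alerts

# Invented quarterly revenue figures (in $M); only the last one gets flagged:
print(flag_for_reporters("ACME quarterly revenue", [102, 98, 105, 101, 99, 240]))
```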

 

 

Who’s to blame when a machine botches your surgery? — from qz.com by Robert Hart

Excerpt:

That’s all great, but even if an AI is amazing, it will still fail sometimes. When the mistake is caused by a machine or an algorithm instead of a human, who is to blame?

This is not an abstract discussion. Defining both ethical and legal responsibility in the world of medical care is vital for building patients’ trust in the profession and its standards. It’s also essential in determining how to compensate individuals who fall victim to medical errors, and ensuring high-quality care. “Liability is supposed to discourage people from doing things they shouldn’t do,” says Michael Froomkin, a law professor at the University of Miami.

 

 

Alibaba looks to arm hotels, cities with its AI technology — from zdnet.com by Eileen Yu
Chinese internet giant is touting the use of artificial intelligence technology to arm drivers with real-time data on road conditions as well as robots in the hospitality sector, where they can deliver meals and laundry to guests.

Excerpt:

Alibaba A.I. Labs’ general manager Chen Lijuan said the new robots aimed to “bridge the gap” between guest needs and their expected response time. Describing the robot as the next evolution towards smart hotels, Chen said it tapped AI technology to address pain points in the hospitality sector, such as improving service efficiencies.

Alibaba is hoping the robot can ease hotels’ dependence on human labour by fulfilling a range of tasks, including delivering meals and taking the laundry to guests.

 

 

Accenture Introduces Ella and Ethan, AI Bots to Improve a Patient’s Health and Care Using the Accenture Intelligent Patient Platform — from marketwatch.com

Excerpt:

Accenture has enhanced the Accenture Intelligent Patient Platform with the addition of Ella and Ethan, two interactive virtual-assistant bots that use artificial intelligence (AI) to constantly learn and make intelligent recommendations for interactions between life sciences companies, patients, health care providers (HCPs) and caregivers. Designed to help improve a patient’s health and overall experience, the bots are part of Accenture’s Salesforce Fullforce Solutions powered by Salesforce Health Cloud and Einstein AI, as well as Amazon’s Alexa.

 

 

German firm’s 7 commandments for ethical AI — from france24.com

Excerpt:

FRANKFURT AM MAIN (AFP) –
German business software giant SAP published Tuesday an ethics code to govern its research into artificial intelligence (AI), aiming to prevent the technology infringing on people’s rights, displacing workers or inheriting biases from its human designers.

 

 

 

 

San Diego’s Nanome Inc. releases collaborative VR-STEM software for free — from vrscout.com by Becca Loux

Excerpt:

The first collaborative VR molecular modeling application was released August 29 to encourage hands-on chemistry experimentation.

The open-source tool is free for download now on Oculus and Steam.

Nanome Inc., the San Diego-based start-up that built the intuitive application, comprises UCSD professors and researchers, web developers and top-level pharmaceutical executives.

 

“With our tool, anyone can reach out and experience science at the nanoscale as if it is right in front of them. At Nanome, we are bringing the craftsmanship and natural intuition from interacting with these nanoscale structures at room scale to everyone,” McCloskey said.

 


 

 

10 ways VR will change life in the near future — from forbes.com

Excerpts:

  1. Virtual shops
  2. Real estate
  3. Dangerous jobs
  4. Health care industry
  5. Training to create VR content
  6. Education
  7. Emergency response
  8. Distraction simulation
  9. New hire training
  10. Exercise

 

From DSC:
While VR will have its place — especially for times when you need to completely immerse yourself in another environment — I think AR and MR will be much larger and have a greater variety of applications. For example, I could see future assembly instructions using AR and/or MR to assist with the process. The system could highlight the next part that I’m looking for, then highlight the corresponding spot where it goes — and, if requested, show me a clip of how it fits into what I’m trying to put together.

 

How MR turns firstline workers into change agents — from virtualrealitypop.com by Charlie Fink
Mixed Reality, a new dimension of work — from Microsoft and Harvard Business Review

Excerpts:

“…workers with mixed-reality solutions that enable remote assistance, spatial planning, environmentally contextual data, and much more,” Bardeen told me. With HoloLens, Firstline Workers conduct their usual, day-to-day activities with the added benefit of a heads-up, hands-free display that gives them immediate access to valuable, contextual information. Microsoft says speech services like Cortana will be critical to control, along with gesture, according to the unique needs of each situation.

 

Expect new worker roles. What constitutes an “information worker” could change because mixed reality will allow everyone to be involved in the collection and use of information. Many more types of information will become available to any worker in a compelling, easy-to-understand way. 

 

 

Let’s Speak: VR language meetups — from account.altvr.com

 

 

 

 

Experiences in self-determined learning — a free download/PDF file from uni-oldenburg.de by Lisa Blaschke, Chris Kenyon, & Stewart Hase (Eds.)

Excerpts (emphasis DSC):

An Introduction to Self-Determined Learning (Heutagogy)

Summary
There is a good deal that is provocative in the theory and principles surrounding self-determined learning or heutagogy. So, it seems appropriate to start off with a, hopefully, eyebrow-raising observation. One of the key ideas underpinning self-determined learning is that learning, and education and training, are quite different things. Humans are born to learn and are very good at it. Learning is a natural capability and it occurs across the human lifespan, from birth to last breath. In contrast, education and training systems are concerned with the production of useful citizens, who can contribute to the collective economic good. Education and training is largely a conservative enterprise that is highly controlled, is product focused, where change is slow, and the status quo is revered. Learning, however, is a dynamic process intrinsic to the learner, uncontrolled except by the learner’s mental processes. Self-determined learning is concerned with understanding how people learn best and how the methods derived from this understanding can be applied to educational systems. This chapter provides a relatively brief introduction to the origins, the key principles, and the practice of self-determined learning. It also provides a number of resources to enable the interested reader to take learning about the approach further.

Contributors to this book come from around the world: they are everyday practitioners of self-determined learning who have embraced the approach. In doing so, they have chosen the path less taken and set off on a journey of exploration and discovery – a new frontier – as they implement heutagogy in their homes, schools, and workplaces. Each chapter was written with the intent of sharing the experiences of practical applications of heutagogy, while also encouraging those just starting out on the journey in using self-determined learning. The authors in this book are your guides as you move forward and share with you the lessons they have learned along the way. These shared experiences are meant to be read – or dabbled in – in any way that you want to read them. There is no fixed recipe or procedure for tackling the book contents.

At the heart of self-determined learning is that the learner is at the centre of the learning process. Learning is intrinsic to the learner, and the educator is but an agent, as are many of the resources so freely available these days. It is now so easy to access knowledge and skills (competencies), and in informal settings we do this all the time, and we do it well. Learning is complex and non-linear, despite what the curricula might try to dictate. In addition, every brain is different as a result of its experience (as brain research tells us). Each brain will also change as learning takes place with new hypotheses, new needs, and new questions forming, as new neuronal connections are created.

Heutagogy also doesn’t have anything directly to do with self-determination theory (SDT). SDT is a theory of motivation related to acting in healthy and effective ways (Ryan & Deci, 2000). However, heutagogy is related to the philosophical notion of self-determinism and shares a common belief in the role of human agency in behavior.

The idea of human agency is critical to self-determined learning, where learning is learner-directed. Human agency is the notion that humans have the capacity to make choices and decisions, and then act on them in the real world. However, how experiences and learning bring people to make the choices and decisions that they do make, and what actions they may then take is a very complex matter. What we are concerned with in self-determined learning is that people have agency with respect to how, what, and when they learn. It is something that is intrinsic to each individual person. Learning occurs in the learner’s brain, as the result of his or her past and present experiences.

 

The notion of placing the learner at the centre of the learning experience is a key principle of self-determined learning. This principle is the opposite of teacher-centric or, perhaps more accurately curriculum-centric, approaches to learning. This is not to say that the curriculum is not important, just that it needs to be geared to the learner – flexible, adaptable, and be a living document that is open to change.

Teacher-centric learning is an artifact of the industrial revolution when an education system was designed to meet the needs of the factories (Ackoff & Greenberg, 2008) and to “make the industrial wheel go around” (Hase & Kenyon, 2013b). It is time for a change to learner-centred learning and the time is right with easy access to knowledge and skills through the Internet, high-speed communication and ‘devices’. Education can now focus on more complex cognitive activities geared to the needs of the 21st century learner, rather than have its main focus on competence (Blaschke & Hase, 2014; Hase & Kenyon, 2013a).

 

 

 

 

 

Below are some excerpted slides from her presentation…

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

Also see:

  • 20 important takeaways for learning world from Mary Meeker’s brilliant tech trends – from donaldclarkplanb.blogspot.com by Donald Clark
    Excerpt:
    Mary Meeker’s slide deck has a reputation of being the Delphic Oracle of tech. But, at 294 slides, it’s a lot to take in. Don’t worry, I’ve been through them all. It has tons on economic stuff that is of marginal interest to education and training, but there’s plenty to get our teeth into. We’re not immune to tech trends; indeed, we tend to follow in lock-step, just a bit later than everyone else. Among the data are lots of fascinating insights that point the way forward in terms of what we’re likely to be doing over the next decade. So here’s a really quick, top-end summary for folk in the learning game.

 

“Educational content usage online is ramping fast” with over 1 billion daily educational videos watched. There is evidence that use of the Internet for informal and formal learning is taking off.

 

 

 

 

 

 

10 Big Takeaways From Mary Meeker’s Widely-Read Internet Report — from fortune.com by  Leena Rao

 

 

 

 

Skill shift: Automation and the future of the workforce — from mckinsey.com by Jacques Bughin, Eric Hazan, Susan Lund, Peter Dahlström, Anna Wiesinger, and Amresh Subramaniam
Demand for technological, social and emotional, and higher cognitive skills will rise by 2030. How will workers and organizations adapt?

Excerpt:

Skill shifts have accompanied the introduction of new technologies in the workplace since at least the Industrial Revolution, but adoption of automation and artificial intelligence (AI) will mark an acceleration over the shifts of even the recent past. The need for some skills, such as technological as well as social and emotional skills, will rise, even as the demand for others, including physical and manual skills, will fall. These changes will require workers everywhere to deepen their existing skill sets or acquire new ones. Companies, too, will need to rethink how work is organized within their organizations.

This briefing, part of our ongoing research on the impact of technology on the economy, business, and society, quantifies time spent on 25 core workplace skills today and in the future for five European countries—France, Germany, Italy, Spain, and the United Kingdom—and the United States and examines the implications of those shifts.

Topics include:
  • How will demand for workforce skills change with automation?
  • Shifting skill requirements in five sectors
  • How will organizations adapt?
  • Building the workforce of the future

 

 

Europe divided over robot ‘personhood’ — from politico.eu by Janosch Delcker

Excerpt:

BERLIN — Think lawsuits involving humans are tricky? Try taking an intelligent robot to court.

While autonomous robots with humanlike, all-encompassing capabilities are still decades away, European lawmakers, legal experts and manufacturers are already locked in a high-stakes debate about their legal status: whether it’s these machines or human beings who should bear ultimate responsibility for their actions.

The battle goes back to a paragraph of text, buried deep in a European Parliament report from early 2017, which suggests that self-learning robots could be granted “electronic personalities.” Such a status could allow robots to be insured individually and be held liable for damages if they go rogue and start hurting people or damaging property.

Those pushing for such a legal change, including some manufacturers and their affiliates, say the proposal is common sense. Legal personhood would not make robots virtual people who can get married and benefit from human rights, they say; it would merely put them on par with corporations, which already have status as “legal persons,” and are treated as such by courts around the world.

 

 