From DSC:
Can you imagine this as a virtual reality or a mixed reality-based app!?! Very cool.

This resource is incredible on multiple levels:

  • For their interface/interaction design
  • For their insights and ideas
  • For their creativity
  • For their graphics
  • …and more!

From DSC:
We are, hopefully, creating the future that we want: the future of our dreams, not of our nightmares. The 14 items below show that technology often runs far ahead of us, and it takes time for other areas of society to catch up, such as policymaking, lawmaking, and asking whether we should even be doing these things in the first place.

Such reflections always make me ask:

  • Who should be involved in some of these decisions?
  • Who is currently getting asked to the decision-making tables for such discussions?
  • How does the average citizen participate in such discussions?

Readers of this blog know that I’m generally pro-technology. But with the exponential pace of technological change, we need to slow things down enough to make wise decisions.

 


 

Google AI invents its own cryptographic algorithm; no one knows how it works — from arstechnica.co.uk by Sebastian Anthony
Neural networks seem good at devising crypto methods; less good at codebreaking.

Excerpt:

Google Brain has created two artificial intelligences that evolved their own cryptographic algorithm to protect their messages from a third AI, which was trying to evolve its own method to crack the AI-generated crypto. The study was a success: the first two AIs learnt how to communicate securely from scratch.
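The heart of the Google Brain study (Abadi & Andersen, 2016) is an adversarial training signal rather than any hand-designed cipher. The sketch below is an illustrative reconstruction of that loss structure over plain bit vectors, not the paper's actual neural-network code; the function names and the penalty form are assumptions for illustration.

```python
# Schematic of the adversarial objectives: Alice encrypts, Bob decrypts,
# Eve eavesdrops. This sketches the training signal only.

def bit_error(a, b):
    """Mean absolute difference between two equal-length bit vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def eve_loss(plaintext, eve_guess):
    # The eavesdropper simply tries to reconstruct the plaintext.
    return bit_error(plaintext, eve_guess)

def alice_bob_loss(plaintext, bob_guess, eve_guess):
    # Alice and Bob want Bob's reconstruction to be perfect while pushing
    # Eve toward random guessing (0.5 error per bit) rather than toward
    # 100% error, which Eve could trivially invert.
    return bit_error(plaintext, bob_guess) + (0.5 - bit_error(plaintext, eve_guess)) ** 2
```

Training alternates between minimizing `eve_loss` for Eve and `alice_bob_loss` for Alice and Bob; "secure communication" emerges when Bob's error falls to zero while Eve's hovers near 0.5.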

 

 

IoT growing faster than the ability to defend it — from scientificamerican.com by Larry Greenemeier
Last week’s use of connected gadgets to attack the Web is a wake-up call for the Internet of Things, which will get a whole lot bigger this holiday season

Excerpt:

With this year’s approaching holiday gift season the rapidly growing “Internet of Things” or IoT—which was exploited to help shut down parts of the Web this past Friday—is about to get a lot bigger, and fast. Christmas and Hanukkah wish lists are sure to be filled with smartwatches, fitness trackers, home-monitoring cameras and other wi-fi–connected gadgets that connect to the internet to upload photos, videos and workout details to the cloud. Unfortunately these devices are also vulnerable to viruses and other malicious software (malware) that can be used to turn them into virtual weapons without their owners’ consent or knowledge.

Last week’s distributed denial of service (DDoS) attacks—in which tens of millions of hacked devices were exploited to jam and take down internet computer servers—is an ominous sign for the Internet of Things. A DDoS is a cyber attack in which large numbers of devices are programmed to request access to the same Web site at the same time, creating data traffic bottlenecks that cut off access to the site. In this case the still-unknown attackers used malware known as “Mirai” to hack into devices whose passwords they could guess, because the owners either could not or did not change the devices’ default passwords.
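The reason Mirai spread so easily is worth making concrete: the malware does not break encryption, it simply tries a short list of factory-default username/password pairs against every device it finds. The following is a toy simulation of that audit logic over plain data, assuming hypothetical device names and credential pairs; it is not scanning code and not Mirai's actual method beyond the default-credential idea described above.

```python
# A short list of well-known factory credentials (hypothetical examples).
DEFAULT_CREDENTIALS = {("admin", "admin"), ("root", "12345"), ("user", "user")}

def is_vulnerable(username, password):
    """A device is trivially hijackable if it still accepts a default pair."""
    return (username, password) in DEFAULT_CREDENTIALS

def audit(fleet):
    """Return the names of devices still using factory credentials."""
    return [name for name, user, pw in fleet if is_vulnerable(user, pw)]

fleet = [
    ("camera-1", "admin", "admin"),         # never reconfigured -> vulnerable
    ("router-1", "root", "s8#kQ-long-pw"),  # owner changed the password
    ("dvr-1", "user", "user"),              # never reconfigured -> vulnerable
]
```

The point of the sketch is how cheap the attack is: a membership test against a tiny set, repeated at internet scale, was enough to assemble the botnet.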

 

 

How to Get Lost in Augmented Reality — from inverse.com by Tanya Basu; with thanks to Woontack Woo for this resource
There are no laws against projecting misinformation. That’s good news for pranksters, criminals, and advertisers.

Excerpt:

Augmented reality offers designers and engineers new tools and artists a new palette, but there’s a dark side to reality-plus. Because A.R. technologies will eventually allow individuals to add flourishes to the environments of others, they will also facilitate the creation of a new type of misinformation and unwanted interactions. There will be advertising (there is always advertising) and there will also be lies perpetrated with optical trickery.

Two computer scientists-turned-ethicists are seriously considering the problematic ramifications of a technology that allows for real-world pop-ups: Keith Miller at the University of Missouri-St. Louis and Bo Brinkman at Miami University in Ohio. Both men are dismissive of Pokémon Go because smartphones are actually behind the times when it comes to A.R.

“A very important question is who controls these augmentations,” Miller says. “It’s a huge responsibility to take over someone’s world — you could manipulate people. You could nudge them.”

 

 

Can we build AI without losing control over it? — from ted.com by Sam Harris

Description:

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.

 

 

Do no harm, don’t discriminate: official guidance issued on robot ethics — from theguardian.com
Robot deception, addiction and possibility of AIs exceeding their remits noted as hazards that manufacturers should consider

Excerpt:

Isaac Asimov gave us the basic rules of good robot behaviour: don’t harm humans, obey orders and protect yourself. Now the British Standards Institute has issued a more official version aimed at helping designers create ethically sound robots.

The document, BS8611 Robots and robotic devices, is written in the dry language of a health and safety manual, but the undesirable scenarios it highlights could be taken directly from fiction. Robot deception, robot addiction and the possibility of self-learning systems exceeding their remits are all noted as hazards that manufacturers should consider.

 

 

World’s first baby born with new “3 parent” technique — from newscientist.com by Jessica Hamzelou

Excerpt:

It’s a boy! A five-month-old boy is the first baby to be born using a new technique that incorporates DNA from three people, New Scientist can reveal. “This is great news and a huge deal,” says Dusko Ilic at King’s College London, who wasn’t involved in the work. “It’s revolutionary.”

The controversial technique, which allows parents with rare genetic mutations to have healthy babies, has only been legally approved in the UK. But the birth of the child, whose Jordanian parents were treated by a US-based team in Mexico, should fast-forward progress around the world, say embryologists.

 

 

Scientists Grow Full-Sized, Beating Human Hearts From Stem Cells — from popsci.com by Alexandra Ossola
It’s the closest we’ve come to growing transplantable hearts in the lab

Excerpt:

Of the 4,000 Americans waiting for heart transplants, only 2,500 will receive new hearts in the next year. Even for those lucky enough to get a transplant, the biggest risk is that their bodies will reject the new heart and launch a massive immune reaction against the foreign cells. To combat the problems of organ shortage and decrease the chance that a patient’s body will reject it, researchers have been working to create synthetic organs from patients’ own cells. Now a team of scientists from Massachusetts General Hospital and Harvard Medical School has gotten one step closer, using adult skin cells to regenerate functional human heart tissue, according to a study published recently in the journal Circulation Research.

 


Achieving trust through data ethics — from sloanreview.mit.edu
Success in the digital age requires a new kind of diligence in how companies gather and use data.

Excerpt:

A few months ago, Danish researchers used data-scraping software to collect the personal information of nearly 70,000 users of a major online dating site as part of a study they were conducting. The researchers then published their results on an open scientific forum. Their report included the usernames, political leanings, drug usage, and other intimate details of each account.

A firestorm ensued. Although the data gathered and subsequently released was already publicly available, many questioned whether collecting, bundling, and broadcasting the data crossed serious ethical and legal boundaries.

In today’s digital age, data is the primary form of currency. Simply put: Data equals information equals insights equals power.

Technology is advancing at an unprecedented rate — along with data creation and collection. But where should the line be drawn? Where do basic principles come into play to consider the potential harm from data’s use?

 

 

“Data Science Ethics” course — from the University of Michigan on edX.org
Learn how to think through the ethics surrounding privacy, data sharing, and algorithmic decision-making.

About this course
As patients, we care about the privacy of our medical record; but as patients, we also wish to benefit from the analysis of data in medical records. As citizens, we want a fair trial before being punished for a crime; but as citizens, we want to stop terrorists before they attack us. As decision-makers, we value the advice we get from data-driven algorithms; but as decision-makers, we also worry about unintended bias. Many data scientists learn the tools of the trade and get down to work right away, without appreciating the possible consequences of their work.

This course, focused on ethics specifically related to data science, will provide you with a framework to analyze these concerns. The framework is based on ethics: shared values that help differentiate right from wrong. Ethics are not law, but they are usually the basis for laws.

Everyone, including data scientists, will benefit from this course. No previous knowledge is needed.

 


Science, Technology, and the Future of Warfare — from mwi.usma.edu by Margaret Kosal

Excerpt:

We know that emerging innovations within cutting-edge science and technology (S&T) areas carry the potential to revolutionize governmental structures, economies, and life as we know it. Yet, others have argued that such technologies could yield doomsday scenarios and that military applications of such technologies have even greater potential than nuclear weapons to radically change the balance of power. These S&T areas include robotics and autonomous unmanned systems; artificial intelligence; biotechnology, including synthetic and systems biology; the cognitive neurosciences; nanotechnology, including stealth meta-materials; additive manufacturing (aka 3D printing); and the intersection of each with information and computing technologies, i.e., cyber-everything. These concepts and the underlying strategic importance were articulated at the multi-national level in NATO’s May 2010 New Strategic Concept paper: “Less predictable is the possibility that research breakthroughs will transform the technological battlefield…. The most destructive periods of history tend to be those when the means of aggression have gained the upper hand in the art of waging war.”

 

 

Low-Cost Gene Editing Could Breed a New Form of Bioterrorism — from bigthink.com by Philip Perry

Excerpt:

2012 saw the advent of gene editing technique CRISPR-Cas9. Now, just a few short years later, gene editing is becoming accessible to more of the world than its scientific institutions. This new technique is now being used in public health projects, to undermine the ability of certain mosquitoes to transmit disease, such as the Zika virus. But that initiative has had many in the field wondering whether it could be used for the opposite purpose, with malicious intent.

Back in February, U.S. National Intelligence Director James Clapper put out a Worldwide Threat Assessment, to alert the intelligence community of the potential risks posed by gene editing. The technology, which holds incredible promise for agriculture and medicine, was added to the list of weapons of mass destruction.

It is thought that amateur terrorists, non-state actors such as ISIS, or rogue states such as North Korea, could get their hands on it, and use this technology to create a bioweapon the likes of which the earth has never seen, causing wanton destruction and chaos without any way to mitigate it.

 

What would happen if gene editing fell into the wrong hands?

 


Robot nurses will make shortages obsolete — from thedailybeast.com by Joelle Renstrom
By 2022, one million nurse jobs will be unfilled—leaving patients with lower quality care and longer waits. But what if robots could do the job?

Excerpt:

Japan is ahead of the curve when it comes to this trend, given that its elderly population is the highest of any country. Toyohashi University of Technology has developed Terapio, a robotic medical cart that can make hospital rounds, deliver medications and other items, and retrieve records. It follows a specific individual, such as a doctor or nurse, who can use it to record and access patient data. Terapio isn’t humanoid, but it does have expressive eyes that change shape and make it seem responsive. This type of robot will likely be one of the first to be implemented in hospitals because it has fairly minimal patient contact, works with staff, and has a benign appearance.

 


partnershiponai-sept2016

 

Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.

 

GOALS

Support Best Practices
To support research and recommend best practices in areas including ethics, fairness, and inclusivity; transparency and interoperability; privacy; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology.

Create an Open Platform for Discussion and Engagement
To provide a regular, structured platform for AI researchers and key stakeholders to communicate directly and openly with each other about relevant issues.

Advance Understanding
To advance public understanding and awareness of AI and its potential benefits and potential costs; to act as a trusted and expert point of contact as questions and concerns arise from the public and others in the area of AI; and to regularly update key constituents on the current state of AI progress.

 


IBM Watson’s latest gig: Improving cancer treatment with genomic sequencing — from techrepublic.com by Alison DeNisco
A new partnership between IBM Watson Health and Quest Diagnostics will combine Watson’s cognitive computing with genetic tumor sequencing for more precise, individualized cancer care.

 

 



Addendum on 11/1/16:



An open letter to Microsoft and Google’s Partnership on AI — from wired.com by Gerd Leonhard
In a world where machines may have an IQ of 50,000, what will happen to the values and ethics that underpin privacy and free will?

Excerpt:

Dear Francesca, Eric, Mustafa, Yann, Ralf, Demis and others at IBM, Microsoft, Google, Facebook and Amazon.

The Partnership on AI to benefit people and society is a welcome change from the usual celebration of disruption and magic technological progress. I hope it will also usher in a more holistic discussion about the global ethics of the digital age. Your announcement also coincides with the launch of my book Technology vs. Humanity which dramatises this very same question: How will technology stay beneficial to society?

This open letter is my modest contribution to the unfolding of this new partnership. Data is the new oil – which now makes your companies the most powerful entities on the globe, way beyond oil companies and banks. The rise of ‘AI everywhere’ is certain to only accelerate this trend. Yet unlike the giants of the fossil-fuel era, there is little oversight on what exactly you can and will do with this new data-oil, and what rules you’ll need to follow once you have built that AI-in-the-sky. There appears to be very little public stewardship, while accepting responsibility for the consequences of your inventions is rather slow in surfacing.

 

 

If you doubt that we are on an exponential pace of change, you need to check these articles out! [Christian]

exponentialpaceofchange-danielchristiansep2016

 

From DSC:
The articles listed in this PDF document demonstrate the exponential pace of technological change that many nations across the globe are currently experiencing and will likely be experiencing for the foreseeable future. As we are no longer on a linear trajectory, we need to consider what this new trajectory means for how we:

  • Educate and prepare our youth in K-12
  • Educate and prepare our young men and women studying within higher education
  • Restructure/re-envision our corporate training/L&D departments
  • Equip our freelancers and others to find work
  • Help people in the workforce remain relevant/marketable/properly skilled
  • Encourage and better enable lifelong learning
  • Attempt to keep up w/ this pace of change — legally, ethically, morally, and psychologically

 

PDF file here

 

One thought that comes to mind: when we’re moving this fast, we need to be looking upward and outward into the horizons — constantly pulse-checking the landscape. We can’t be looking down, or so buried in our current positions/tasks that we don’t notice the changes happening around us.

 


From DSC:
This posting is meant to surface the need for debates/discussions, new policy decisions, and for taking the time to seriously reflect upon what type of future we want. Given the pace of technological change, we need to be constantly asking ourselves what kind of future we want and then actively creating that future, instead of just letting things happen because they can happen (i.e., just because something can be done doesn’t mean it should be done).

Gerd Leonhard’s work is relevant here.  In the resource immediately below, Gerd asserts:

I believe we urgently need to start debating and crafting a global Digital Ethics Treaty. This would delineate what is and is not acceptable under different circumstances and conditions, and specify who would be in charge of monitoring digressions and aberrations.

I am also including some other relevant items here that bear witness to the increasingly rapid speed at which we’re moving now.


 

Redefining the relationship of man and machine: here is my narrated chapter from the ‘The Future of Business’ book (video, audio and pdf) — from futuristgerd.com by Gerd Leonhard


DigitalEthics-GerdLeonhard-Oct2015

 

 

Robot revolution: rise of ‘thinking’ machines could exacerbate inequality — from theguardian.com by Heather Stewart
Global economy will be transformed over next 20 years at risk of growing inequality, say analysts

Excerpt (emphasis DSC):

A “robot revolution” will transform the global economy over the next 20 years, cutting the costs of doing business but exacerbating social inequality, as machines take over everything from caring for the elderly to flipping burgers, according to a new study.

As well as robots performing manual jobs, such as hoovering the living room or assembling machine parts, the development of artificial intelligence means computers are increasingly able to “think”, performing analytical tasks once seen as requiring human judgment.

In a 300-page report, revealed exclusively to the Guardian, analysts from investment bank Bank of America Merrill Lynch draw on the latest research to outline the impact of what they regard as a fourth industrial revolution, after steam, mass production and electronics.

“We are facing a paradigm shift which will change the way we live and work,” the authors say. “The pace of disruptive technological innovation has gone from linear to parabolic in recent years. Penetration of robots and artificial intelligence has hit every industry sector, and has become an integral part of our daily lives.”

 

RobotRevolution-Nov2015

 


First genetically modified humans could exist within two years — from telegraph.co.uk by Sarah Knapton
Biotech company Editas Medicine is planning to start human trials to genetically edit genes and reverse blindness

Excerpt:

Humans who have had their DNA genetically modified could exist within two years after a private biotech company announced plans to start the first trials into a ground-breaking new technique.

Editas Medicine, which is based in the US, said it plans to become the first lab in the world to ‘genetically edit’ the DNA of patients suffering from a genetic condition – in this case the blinding disorder ‘leber congenital amaurosis’.

 


Gartner predicts our digital future — from gartner.com by Heather Levy
Gartner’s Top 10 Predictions herald what it means to be human in a digital world.

Excerpt:

Here’s a scene from our digital future: You sit down to dinner at a restaurant where your server was selected by a “robo-boss” based on an optimized match of personality and interaction profile, and the angle at which he presents your plate, or how quickly he smiles can be evaluated for further review.  Or, perhaps you walk into a store to try on clothes and ask the digital customer assistant embedded in the mirror to recommend an outfit in your size, in stock and on sale. Afterwards, you simply tell it to bill you from your mobile and skip the checkout line.

These scenarios describe two predictions in what will be an algorithmic and smart machine driven world where people and machines must define harmonious relationships. In his session at Gartner Symposium/ITxpo 2016 in Orlando, Daryl Plummer, vice president, distinguished analyst and Gartner Fellow, discussed how Gartner’s Top Predictions begin to separate us from the mere notion of technology adoption and draw us more deeply into issues surrounding what it means to be human in a digital world.

 

 

GartnerPredicts-Oct2015

 

 

Univ. of Washington faculty study legal, social complexities of augmented reality — from phys.org

Excerpt:

But augmented reality will also bring challenges for law, public policy and privacy, especially pertaining to how information is collected and displayed. Issues regarding surveillance and privacy, free speech, safety, intellectual property and distraction—as well as potential discrimination—are bound to follow.

The Tech Policy Lab brings together faculty and students from the School of Law, Information School and Computer Science & Engineering Department and other campus units to think through issues of technology policy. “Augmented Reality: A Technology and Policy Primer” is the lab’s first official white paper aimed at a policy audience. The paper is based in part on research presented at the 2015 International Joint Conference on Pervasive and Ubiquitous Computing, or UbiComp conference.

Along these same lines, also see:

  • Augmented Reality: Figuring Out Where the Law Fits — from rdmag.com by Greg Watry
    Excerpt:
    With AR comes potential issues the authors divide into two categories. “The first is collection, referring to the capacity of AR to record, or at least register, the people and places around the user. Collection raises obvious issues of privacy but also less obvious issues of free speech and accountability,” the researchers write. The second issue is display, which “raises a variety of complex issues ranging from possible tort liability should the introduction or withdrawal of information lead to injury, to issues surrounding employment discrimination or racial profiling.”

    Current privacy law in the U.S. allows video and audio recording in areas that “do not attract an objectively reasonable expectation of privacy,” says Newell. Further, many uses of AR would be covered under the First Amendment right to record audio and video, especially in public spaces. However, as AR increasingly becomes more mobile, “it has the potential to record inconspicuously in a variety of private or more intimate settings, and I think these possibilities are already straining current privacy law in the U.S.,” says Newell.

 

Stuart Russell on Why Moral Philosophy Will Be Big Business in Tech — from kqed.org

Excerpt (emphasis DSC):

Our first Big Think comes from Stuart Russell. He’s a computer science professor at UC Berkeley and a world-renowned expert in artificial intelligence. His Big Think?

“In the future, moral philosophy will be a key industry sector,” says Russell.

Translation? In the future, the nature of human values and the process by which we make moral decisions will be big business in tech.

 

Life, enhanced: UW professors study legal, social complexities of an augmented reality future — from washington.edu by Peter Kelley

Excerpt:

But augmented reality will also bring challenges for law, public policy and privacy, especially pertaining to how information is collected and displayed. Issues regarding surveillance and privacy, free speech, safety, intellectual property and distraction — as well as potential discrimination — are bound to follow.

 

An excerpt from:

UW-AR-TechPolicyPrimer-Nov2015

THREE: CHALLENGES FOR LAW AND POLICY
AR systems change human experience and, consequently, stand to challenge certain assumptions of law and policy. The issues AR systems raise may be divided into roughly two categories. The first is collection, referring to the capacity of AR devices to record, or at least register, the people and places around the user. Collection raises obvious issues of privacy but also less obvious issues of free speech and accountability. The second rough category is display, referring to the capacity of AR to overlay information over people and places in something like real-time. Display raises a variety of complex issues ranging from possible tort liability should the introduction or withdrawal of information lead to injury, to issues surrounding employment discrimination or racial profiling. Policymakers and stakeholders interested in AR should consider what these issues mean for them. Issues related to the collection of information include…

 

HR tech is getting weird, and here’s why — from hrmorning.com by guest poster Julia Scavicchio

Excerpt (emphasis DSC):

Technology has progressed to the point where it’s possible for HR to learn almost everything there is to know about employees — from what they’re doing moment-to-moment at work to what they’re doing on their off hours. Guest poster Julia Scavicchio takes a long hard look at the legal and ethical implications of these new investigative tools.  

Why on Earth does HR need all this data? The answer is simple — HR is not on Earth, it’s in the cloud.

The department transcends traditional roles when data enters the picture.

Many ethical questions posed through technology easily come and go because they seem out of this world.

 

 

18 AI researchers reveal the most impressive thing they’ve ever seen — from businessinsider.com by Guia Marie Del Prado,

Excerpt:

Where will these technologies take us next? Well to know that we should determine what’s the best of the best now. Tech Insider talked to 18 AI researchers, roboticists, and computer scientists to see what real-life AI impresses them the most.

“The DeepMind system starts completely from scratch, so it is essentially just waking up, seeing the screen of a video game and then it works out how to play the video game to a superhuman level, and it does that for about 30 different video games.  That’s both impressive and scary in the sense that if a human baby was born and by the evening of its first day was already beating human beings at video games, you’d be terrified.”

 


Algorithmic Economy: Powering the Machine-to-Machine Age Economic Revolution — from formtek.com by Dick Weisinger

Excerpts:

As technology advances, we are becoming increasingly dependent on algorithms for everything in our lives.  Algorithms that can solve our daily problems and tasks will do things like drive vehicles, control drone flight, and order supplies when they run low.  Algorithms are defining the future of business and even our everyday lives.

Sondergaard said that “in 2020, consumers won’t be using apps on their devices; in fact, they will have forgotten about apps. They will rely on virtual assistants in the cloud, things they trust. The post-app era is coming.  The algorithmic economy will power the next economic revolution in the machine-to-machine age. Organizations will be valued, not just on their big data, but on the algorithms that turn that data into actions that ultimately impact customers.”

 

 

Related items:

 

Addendums:

 

robots-saying-no

 

 

Addendum on 12/14/15:

  • Algorithms rule our lives, so who should rule them? — from qz.com by Dries Buytaert
    As technology advances and more everyday objects are driven almost entirely by software, it’s become clear that we need a better way to catch cheating software and keep people safe.
 

From DSC:
Many times we don’t want to hear news that could be troubling in terms of our futures. But we need to deal with these trends now or face the destabilization that Harold Jarche mentions in his posting below. 

The topics found in the following items should be discussed in courses involving economics, business, political science, psychology, futurism, engineering, religion*, robotics, marketing, the law/legal affairs and others throughout the world.  These trends are massive and have enormous ramifications for our societies in the not-too-distant future.

* When I mention religion classes here, I’m thinking of questions such as:

  • What does God have in mind for the place of work in our lives? Is it good for us? Why or why not?
  • How might these trends impact one’s vocation/calling?
  • …and I’m sure that professors who teach faith/religion-related courses can think of other questions to pursue

 

turmoil and transition — from jarche.com by Harold Jarche

Excerpts (emphasis DSC):

One of the greatest issues that will face Canada, and many developed countries in the next decade will be wealth distribution. While it does not currently appear to be a major problem, the disparity between rich and poor will increase. The main reason will be the emergence of a post-job economy. The ‘job’ was the way we redistributed wealth, making capitalists pay for the means of production and in return creating a middle class that could pay for mass produced goods. That period is almost over. From self-driving vehicles to algorithms replacing knowledge workers, employment is not keeping up with production. Value in the network era is accruing to the owners of the platforms, with companies such as Instagram reaching $1 billion valuations with only 13 employees.

The emerging economy of platform capitalism includes companies like Amazon, Facebook, Google, and Apple. These giants combined do not employ as many people as General Motors did.  But the money accrued by them is enormous and remains in a few hands. The rest of the labour market has to find ways to cobble together a living income. Hence we see many people willing to drive for a company like Uber in order to increase cash-flow. But drivers for Uber have no career track. The platform owners get richer, but the drivers are limited by finite time. They can only drive so many hours per day, and without benefits. At the same time, those self-driving cars are poised to replace all Uber drivers in the near future. Standardized work, like driving a vehicle, has little future in a world of nano-bio-cogno-techno progress.

 

Value in the network era is accruing to the owners of the platforms, with companies such as Instagram reaching $1 billion valuations with only 13 employees.

 

For the past century, the job was the way we redistributed wealth and protected workers from the negative aspects of early capitalism. As the knowledge economy disappears, we need to re-think our concepts of work, income, employment, and most importantly education. If we do not find ways to help citizens lead productive lives, our society will face increasing destabilization. 

 

Also see:

Will artificial intelligence and robots take your marketing job? — from markedu.com
Technology will overtake jobs to an extent and at a rate we have not seen before. Artificial intelligence is threatening jobs even in service and knowledge intensive sectors. This begs the question: are robots threatening to take your marketing job?

Excerpt:

What exactly is a human job?
The benefits of artificial intelligence are obvious. Massive productivity gains while a new layer of personalized services from your computer – whether that is a burger robot or Dr. Watson. But artificial intelligence has a bias. Many jobs will be lost.

A few years ago a study from the University of Oxford got quite a bit of attention. The study said that 47 percent of the US labor market could be replaced by intelligent computers within the next 20 years.

The losers are a wide range of job categories within the administration, service, sales, transportation and manufacturing.

Before long we should – or must – redefine what exactly a human job is and the usefulness of it, and how we as humans can best complement the extraordinary capabilities of artificial intelligence.

 

This development is expected to grow fast. There are different predictions about the timing, but by 2030 there will be very few tasks that only a human can solve.

 

 

Assignment #1:
Review the opinion/posting out at marketwatch.com entitled
“Opinion: How the stock market destroyed the middle class” by Rex Nutting, and make a list of the items you believe he is right about and another list of the items you believe he is mistaken about. Then answer the following questions:

  • What data or other types of support can you find to back up your lists and perspectives?
  • What data or other types of support does he bring to the table?
  • What are the potential ramifications of this topic (on career development/livelihoods, policy, business practices, business ethics, families, society, innovation, other)?
  • If companies aren’t investing in their employees as much, what advice would you give to existing employees within the corporate world?  To your peers in your colleges and universities or to your peers within your MBA programs?
  • Has the middle class decreased in size since the early 1980s? What are some of the other factors involved here? Is this situation currently impacting families across the nation, and if so, how?

Excerpt:

“The ‘buyback corporation’ is in large part responsible for a national economy characterized by income inequality, employment instability, and diminished innovative capacity,” wrote William Lazonick, an economics professor at the University of Massachusetts at Lowell in a new paper published by the Brookings Institution.

Lazonick argues that corporations — which once retained a sizable share of profits to reinvest (including investing in their workforce by paying them enough to get them to stay) — have adopted a “downsize-and-distribute” model.

It’s not just lefty academics and pundits who think buybacks are ruining America. Last week, the CEOs of America’s 500 biggest companies received a letter from Lawrence Fink, CEO of BlackRock, the largest asset manager in the world, saying exactly the same thing.

“The effects of the short-termist phenomenon are troubling both to those seeking to save for long-term goals such as retirement and for our broader economy,” Fink wrote, adding that favoring shareholders comes at the expense of investing in “innovation, skilled work forces or essential capital expenditures necessary to sustain long-term growth.”

 

————–

Assignment #2:
Review the current information out at usdebtclock.org and answer the following questions:

  • What does the $18.2+ trillion (as of 4/24/15) U.S. National Debt affect?
  • What level of debt is acceptable for a nation?
  • What does that level of debt depend upon?
  • Which of the pieces of information below have the most impact on future interest rates?  Do we even know that or is that crystal balling it?
  • Do the C-Suites at major companies look at this information? If so, what pieces of this information do they focus in on?
  • Are there potential implications for inflation or items related to the financial stability of the banking systems throughout the globe?
  • Are there any other ramifications of this information that you can think of?
  • What might you focus in on if you were addressing the masses (i.e., all U.S. citizens)?
  • Should politicians be aware of these #’s? If so, what might their concerns be for their constituents? For their local economies?

 

USDebtClockDotOrg-April242015

 

————–

 

Extra Credit Questions:
Now let’s bring it closer to home. Do you have some student loans that are contributing to the Student Loan Debt figure of $1.3+ trillion (as of 4/24/15)?  Do you see such loans impacting you in the future? If so, how?

 

studentdebt-4-24-15

 

 

 

From DSC:
To set the stage for the following reflections…first, an excerpt from
Climate researcher claims CIA asked about weaponized weather: What could go wrong? — from computerworld.com (emphasis DSC)

We’re not talking about chemtrails, HAARP (High Frequency Active Auroral Research Program) or other weather warfare that has been featured in science fiction movies; the concerns were raised not by a conspiracy theorist, but by climate scientist, geoengineering specialist and Rutgers University Professor Alan Robock. He “called on secretive government agencies to be open about their interest in radical work that explores how to alter the world’s climate.” If emerging climate-altering technologies can effectively alter the weather, Robock is “worried about who would control such climate-altering technologies.”

 

Exactly what I’ve been reflecting on recently.

***Who*** is designing, developing, and using the powerful technologies that are coming into play these days and ***for what purposes?***

Do these individuals care about other people?  Or are they much more motivated by profit or power?

Given the increasingly potent technologies available today, we need people who care about other people. 

Let me explain where I’m coming from here…

I see technologies as tools.  For example, a pencil is a technology. On the positive side of things, it can be used to write or draw something. On the negative side of things, it could be used as a weapon to stab someone.  It depends upon the user of the pencil and what their intentions are.

Let’s look at some far more powerful — and troublesome — examples.

 



DRONES

Drones could be useful…or they could be incredibly dangerous. Again, it depends on who is developing/programming them and for what purpose(s).  Consider the posting from B.J. Murphy below (BTW, nothing positive or negative is meant by linking to this item, per se).

DARPA’s Insect and Bird Drones Are On Their Way — from proactiontranshuman.wordpress.com by B.J. Murphy


Insect drone

From DSC:
I say this is an illustrative posting because if the inventor/programmer of this sort of drone wanted to poison someone, they surely could. I’m not even sure whether this drone actually exists; it doesn’t matter, as we’re quickly heading that way anyway.  So potentially, this kind of thing is very scary stuff.

We need people who care about other people.

Or see:
Five useful ideas from the World Cup of Drones — from dezeen.com
The article mentions some beneficial purposes of drones, such as for search and rescue missions or for assessing water quality.  Some positive intentions, to be sure.

But again, it doesn’t take too much thought to come up with some rather frightening counter-examples.
 

 

GENE-RELATED RESEARCH

Or another example re: gene research/applications; an excerpt from:

Turning On Genes, Systematically, with CRISPR/Cas9 — from genengnews.com
Scientists based at MIT assert that they can reliably turn on any gene of their choosing in living cells.

Excerpt:

It was also suggested that large-scale screens such as the one demonstrated in the current study could help researchers discover new cancer drugs that prevent tumors from becoming resistant.

From DSC:
Sounds like there could be some excellent, useful, positive uses for this technology.  But who is to say which genes should be turned on and under what circumstances? In the wrong hands, there could be some dangerous uses of such capabilities as well.  Again, it goes back to those involved with designing, developing, selling, and using these technologies and services.

 

ROBOTICS

Will robots be used for positive or negative applications?

The mechanized future of warfare — from theweek.com
OR
Atlas Unplugged: The six-foot-two humanoid robot that might just save your life — from zdnet.com
Summary: From the people who brought you the internet, the latest version of the Atlas robot will be used in its disaster-fighting robotic challenge.

 

atlasunpluggedtorso

 

AUTONOMOUS CARS

How Uber’s autonomous cars will destroy 10 million jobs and reshape the economy by 2025 — from sanfrancisco.cbslocal.com

Excerpt:

Autonomous cars will be commonplace by 2025 and have a near monopoly by 2030, and the sweeping change they bring will eclipse every other innovation our society has experienced. They will cause unprecedented job loss and a fundamental restructuring of our economy, solve large portions of our environmental problems, prevent tens of thousands of deaths per year, save millions of hours with increased productivity, and create entire new industries that we cannot even imagine from our current vantage point.

One can see the potential for good and for bad from the above excerpt alone.

Or Ford developing cross country automotive remote control — from spectrum.ieee.org

 

Ford-RemoteCtrl-Feb-2015

Or Germany has approved the use of self driving cars on Autobahn A9 Route — from wtvox.com

While the above items list mostly positive elements, some fear that autonomous cars could be exploited by terrorists. That is, could a terrorist organization modify such self-driving cars, load them up with explosives, and then remotely drive them to a certain building or event and detonate them?

Again, it depends upon whether the designers and users of a system care about other people.

 

BIG DATA / AI / COGNITIVE COMPUTING

The rise of machines that learn — from infoworld.com by Eric Knorr; with thanks to Oliver Hansen for his tweet on this
A new big data analytics startup, Adatao, reminds us that we’re just at the beginning of a new phase of computing when systems become much, much smarter

Excerpt:

“Our warm and creepy future,” is how Miko refers to the first-order effect of applying machine learning to big data. In other words, through artificially intelligent analysis of whatever Internet data is available about us — including the much more detailed, personal stuff collected by mobile devices and wearables — websites and merchants of all kinds will become extraordinarily helpful. And it will give us the willies, because it will be the sort of personalized help that can come only from knowing us all too well.

 

Privacy is dead: How Twitter and Facebook are exposing you — from finance.yahoo.com

Excerpt:

They know who you are, what you like, and how you buy things. Researchers at MIT have matched up your Facebook (FB) likes, tweets, and social media activity with the products you buy. The results are a highly detailed and accurate profile of how much money you have, where you go to spend it and exactly who you are.

The study spanned three months and used the anonymous credit card data of 1.1 million people. After gathering the data, analysts would marry the findings to a person’s public online profile. By checking things like tweets and Facebook activity, researchers found out the anonymous person’s actual name 90% of the time.
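From DSC:
The MIT finding above is easier to appreciate with a toy simulation — this is my own sketch with made-up data, not the researchers’ actual methodology. The idea: a handful of publicly observable data points (say, a shop and a day mentioned in a tweet) is often enough to single one person out of a large “anonymous” transaction dataset.

```python
# Toy illustration of re-identification (not the MIT team's method):
# each person's transaction trail is so distinctive that a few known
# points usually match exactly one trail in the dataset.
import random

random.seed(42)
SHOPS, DAYS = 50, 90

# 1,000 simulated people, each with 30 (shop, day) transaction points
trails = {
    person: {(random.randrange(SHOPS), random.randrange(DAYS)) for _ in range(30)}
    for person in range(1000)
}

def candidates(known_points):
    """People whose trail contains every publicly known point."""
    return [p for p, trail in trails.items() if known_points <= trail]

# An attacker learns 4 points about person 0 from their public posts
known = set(list(trails[0])[:4])
matches = candidates(known)
print(len(matches))  # typically 1 — the "anonymous" trail is unique
```

Scale the toy numbers up to 1.1 million people and three months of purchases and you can see why the researchers recovered names 90% of the time.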

 

iBeacon, video analysis top 2015 tech trends — from progressivegrocer.com

Excerpt:

Using digital to engage consumers will make the store a more interesting and – dare I say – fun place to shop. Such an enhanced in-store experience leads to more customer loyalty and a bigger basket at checkout. It also gives supermarkets a competitive edge over nearby stores not equipped with the latest technology.

Using video cameras in the ceilings of supermarkets to record shopper behavior is not new. But more retailers will analyze and use the resulting data this year. They will move displays around the store and perhaps deploy new traffic patterns that follow a shopper’s true path to purchase. The result will be increased sales.

Another interesting part of this video analysis that will become more important this year is facial recognition. The most sophisticated cameras are able to detect the approximate age and ethnicity of shoppers. Retailers will benefit from knowing, say, that their shopper base includes more Millennials and Hispanics than last year. Such valuable information will change product assortments.

Scientists join Elon Musk & Stephen Hawking, warn of dangerous AI — from rt.com

Excerpt:

Hundreds of leading scientists and technologists have joined Stephen Hawking and Elon Musk in warning of the potential dangers of sophisticated artificial intelligence, signing an open letter calling for research on how to avoid harming humanity.

The open letter, drafted by the Future of Life Institute and signed by hundreds of academics and technologists, calls on the artificial intelligence science community to not only invest in research into making good decisions and plans for the future, but to also thoroughly check how those advances might affect society.

 

 

SMART / CONNECTED TVs

 



Though there are many other examples, I think you get the point.

That biblical idea of loving our neighbors as ourselves…well, as you can see,
that idea is as highly applicable, important, and relevant today as it ever was.



 

 

Addendum on 3/19/15 that gets at exactly the same thing:

  • Teaching robots to be moral — from newyorker.com by Gary Marcus
    Excerpt:
    Robots and advanced A.I. could truly transform the world for the better—helping to cure cancer, reduce hunger, slow climate change, and give all of us more leisure time. But they could also make things vastly worse, starting with the displacement of jobs and then growing into something closer to what we see in dystopian films. When we think about our future, it is vital that we try to understand how to make robots a force for good rather than evil.

 

 

Addendum on 3/20/15:

 

Jennifer A. Doudna, an inventor of a new genome-editing technique, in her office at the University of California, Berkeley. Dr. Doudna is the lead author of an article calling for a worldwide moratorium on the use of the new method, to give scientists, ethicists and the public time to fully understand the issues surrounding the breakthrough.
Credit Elizabeth D. Herman for The New York Times

 

Does Studying Fine Art = Unemployment? Introducing LinkedIn’s Field of Study Explorer — from LinkedIn.com by Kathy Hwang

Excerpt:

[On July 28, 2014], we are pleased to announce a new product – Field of Study Explorer – designed to help students like Candice explore the wide range of careers LinkedIn members have pursued based on what they studied in school.

So let’s explore the validity of this assumption: studying fine art = unemployment by looking at the careers of members who studied Fine & Studio Arts at Universities around the world. Are they all starving artists who live in their parents’ basements?

 

 

LinkedInDotCom-July2014-FieldofStudyExplorer

 

 

Also see:

The New Rankings? — from insidehighered.com by Charlie Tyson

Excerpt:

Who majored in Slovak language and literature? At least 14 IBM employees, according to LinkedIn.

Late last month LinkedIn unveiled a “field of study explorer.” Enter a field of study – even one as obscure in the U.S. as Slovak – and you’ll see which companies Slovak majors on LinkedIn work for, which fields they work in and where they went to college. You can also search by college, by industry and by location. You can winnow down, if you desire, to find the employee who majored in Slovak at the Open University and worked in Britain after graduation.

 

 

Trends and breakthroughs likely to affect your work, your investments, and your family

Excerpts:

At the outset, let me say that futurists do not claim to predict precisely what will happen in the future. If we could know the future with certainty, it would mean that the future could not be changed. Yet this is the main purpose of studying the future: to look at what may happen if present trends continue, decide if this is desirable, and, if it’s not, work to change it.

The main goal of studying the future is to make it better. Trends, forecasts, and ideas about the future enable you to spot opportunities and threats early, and position yourself, your business, and your investments accordingly.

How you can succeed in the age of hyperchange
Look how quickly our world is transforming around us. Entire new industries and technologies unheard of 15 years ago are now regular parts of our lives. Technology, globalization, and the recent financial crisis have left many of us reeling. It’s increasingly difficult to keep up with new developments—much less to understand their implications.

And, if you think things are changing fast now, you haven’t seen anything yet.

 

In this era of accelerating change, knowledge alone is no longer the key to a prosperous life. The critical skill is foresight.

 

 

7 ways to spot tomorrow’s trends today

  1. Scan the media to identify trends
  2. Analyze and extrapolate trends
  3. Develop scenarios
  4. Ask groups of experts
  5. Use computer modeling
  6. Explore possibilities with simulations
  7. Create the vision
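From DSC:
Step #2 above can be made concrete with a tiny sketch (the numbers here are invented purely for illustration): fit a straight line to past observations, then project it forward.

```python
# A minimal trend-extrapolation sketch: ordinary least-squares fit,
# then project the fitted line to a future year. Data are hypothetical.
def linear_fit(xs, ys):
    """Return (slope, intercept) of the least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return slope, my - slope * mx

years = [2010, 2011, 2012, 2013, 2014]
devices = [12.5, 14.4, 16.5, 18.4, 20.5]  # hypothetical device counts (billions)

slope, intercept = linear_fit(years, devices)
forecast_2020 = slope * 2020 + intercept
print(round(forecast_2020, 1))  # projects roughly 32.5
```

Linear extrapolation is the crudest of the seven techniques — scenarios, expert panels, and simulations exist precisely because straight lines eventually lie.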

 

 

App Ed Review

 

APPEdReview-April2014

 

From the About Us page (emphasis DSC):

App Ed Review is a free searchable database of educational app reviews designed to support classroom teachers finding and using apps effectively in their teaching practice. In its database, each app review includes:

  • A brief, original description of the app;
  • A classification of the app based on its purpose;
  • Three or more ideas for how the app could be used in the classroom;
  • A comprehensive app evaluation;
  • The app’s target audience;
  • Subject areas where the app can be used; and,
  • The cost of the app.

 

 

Also see the Global Education Database:

 

GlobalEducationDatabase-Feb2014

 

From the About Us page:

It’s our belief that digital technologies will utterly change the way education is delivered and consumed over the next decade. We also reckon that this large-scale disruption doesn’t come with an instruction manual. And we’d like GEDB to be part of the answer to that.

It’s the pulling together of a number of different ways in which all those involved in education (teachers, parents, administrators, students) can make some sense of the huge changes going on around them. So there’s consumer reviews of technologies, a forum for advice, an aggregation of the most important EdTech news and online courses for users to equip themselves with digital skills. Backed by a growing community on social media (here, here and here for starters).

It’s a fast-track to digital literacy in the education industry.

GEDB has been pulled together by California residents Jeff Dunn, co-founder of Edudemic, and Katie Dunn, the other Edudemic co-founder, and, across the Atlantic in London, Jimmy Leach, a former habitué of digital government and media circles.

 

 

Addendum:

Favorite educational iPad apps that are also on Android — from the Learning in Hand blog by Tony Vincent

 


© 2017 | Daniel Christian