We can do nothing to change the past, but we have enormous power to shape the future. Once we grasp that essential insight, we recognize our responsibility and capability for building our dreams of tomorrow and avoiding our nightmares.

–Edward Cornish

 


From DSC:
This posting represents Part III in a series of postings that illustrate how quickly things are moving (see Part I and Part II), and it asks:

  • How do we collectively start talking about the future that we want?
  • Then, how do we go about creating our dreams, not our nightmares?
  • Most certainly, governments will be involved…but who else should be involved?

As I mentioned in Part I, I want to refer again to Gerd Leonhard’s work, as it is relevant here. Gerd asserts:

I believe we urgently need to start debating and crafting a global Digital Ethics Treaty. This would delineate what is and is not acceptable under different circumstances and conditions, and specify who would be in charge of monitoring digressions and aberrations.

Looking at the several items below, ask yourself: is this the kind of future that we want? Some of the things mentioned below could likely prove to be very positive and helpful. However, there are also some very troubling advancements and developments.

The point here is that we had better start talking about and discussing the pros and cons of each one of these areas — and many more I’m not addressing here — or our dreams will turn into our nightmares, and we will have missed what Edward Cornish and the World Future Society keep trying to get at.

 


 

Google’s Artificial Intelligence System Masters Game of ‘Go’ — from abcnews.go.com by Alyssa Newcomb

Excerpt:

Google just mastered one of the biggest feats in artificial intelligence since IBM’s Deep Blue beat Garry Kasparov at chess in 1997.

The search giant’s AlphaGo computer program swept the European champion of Go, a complex game with trillions of possible moves, in a five-game series, according to Demis Hassabis, head of Google’s machine learning efforts, who announced the feat in a blog post that coincided with an article in the journal Nature.

While computers can now compete at the grand master level in chess, teaching a machine to win at Go has presented a unique challenge since the game has trillions of possible moves.
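To see why even “trillions” understates the problem, consider the rough figures usually cited for the two games: about 35 legal moves per chess position over a game of roughly 80 plies, versus about 250 moves per Go position over roughly 150 plies. A quick back-of-the-envelope sketch (using those commonly cited approximations, not exact values):

```python
# Back-of-the-envelope game-tree sizes: branching_factor ^ game_length.
# The figures 35/80 (chess) and 250/150 (Go) are rough, commonly cited
# approximations, not exact values.
from math import log10

chess_branching, chess_plies = 35, 80
go_branching, go_plies = 250, 150

# Work in log space to avoid astronomically large integers.
chess_exp = chess_plies * log10(chess_branching)
go_exp = go_plies * log10(go_branching)

print(f"Chess game tree: ~10^{chess_exp:.0f} positions")  # ~10^124
print(f"Go game tree:    ~10^{go_exp:.0f} positions")     # ~10^360
```

A gap of more than two hundred orders of magnitude is why the brute-force search that served Deep Blue never had a chance at Go, and why AlphaGo needed neural networks to prune the search instead.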

Along these lines, also see:
Mastering the game of Go with deep neural networks and tree search — from deepmind.com

 

 

 

Harvard is trying to build artificial intelligence that is as fast as the human brain — from futurism.com
Harvard University and IARPA are working together to study how AI can work as efficiently and effectively as the human brain.

Excerpt:

Harvard University has been given $28M by the Intelligence Advanced Research Projects Activity (IARPA) to study why the human brain is significantly better at learning and retaining information than artificial intelligence (AI). The investment in this study could potentially help researchers develop AI that’s faster, smarter, and more like human brains.

 

 

Digital Ethics: The role of the CIO in balancing the risks and rewards of digital innovation — from mis-asia.com by Kevin Wo; with thanks to Gerd Leonhard for this posting

What is digital ethics?
In our hyper-connected world, an explosion of data is combining with pattern recognition, machine learning, smart algorithms, and other intelligent software to underpin a new level of cognitive computing. More than ever, machines are capable of imitating human thinking and decision-making across a raft of workflows, which presents exciting opportunities for companies to drive highly personalized customer experiences, as well as unprecedented productivity, efficiency, and innovation. However, along with the benefits of this increased automation comes a greater risk for ethics to be compromised and human trust to be broken.

According to Gartner, digital ethics is the system of values and principles a company may embrace when conducting digital interactions between businesses, people and things. Digital ethics sits at the nexus of what is legally required; what can be made possible by digital technology; and what is morally desirable.  

As digital ethics is not mandated by law, it is largely up to each individual organisation to set its own innovation parameters and define how its customer and employee data will be used.

 

 

New algorithm points the way towards regrowing limbs and organs — from sciencealert.com by David Nield

Excerpt:

An international team of researchers has developed a new algorithm that could one day help scientists reprogram cells to plug any kind of gap in the human body. The computer code model, called Mogrify, is designed to make the process of creating pluripotent stem cells much quicker and more straightforward than ever before.

A pluripotent stem cell is one that has the potential to become any type of specialised cell in the body: eye tissue, or a neural cell, or cells to build a heart. In theory, that would open up the potential for doctors to regrow limbs, make organs to order, and patch up the human body in all kinds of ways that aren’t currently possible.

 

 

 

The world’s first robot-run farm will harvest 30,000 heads of lettuce daily — from techinsider.io by Leanna Garfield

Excerpt (from DSC):

The Japanese lettuce production company Spread believes the farmers of the future will be robots.

So much so that Spread is creating the world’s first farm manned entirely by robots. Instead of relying on human farmers, the indoor Vegetable Factory will employ robots that can harvest 30,000 heads of lettuce every day.

Don’t expect a bunch of humanoid robots to roam the halls, however; the robots look more like conveyor belts with arms. They’ll plant seeds, water plants, and trim lettuce heads after harvest in the Kyoto, Japan farm.

 

 

 

Drone ambulances may just be the future of emergency medical vehicles — from interestingengineering.com by Gabrielle Westfield

Excerpt:

Drones are advancing every day. They are getting larger, faster, and more efficient to control. Meanwhile, the medical field keeps facing major losses from emergency response vehicles not being able to reach their destinations fast enough. Understandably so; especially in larger cities, traffic can be impossible to move through swiftly. Red flashing lights atop or not, sometimes the roads are simply not capable of opening up. It makes total sense that the future of ambulances would be paved in the open sky rather than on unpredictable roads.


 

 

 

Phone shop will be run entirely by Pepper robots — from telegraph.co.uk

Excerpt (emphasis DSC):

Creator company SoftBank said it planned to open the pop-up mobile store employing only Pepper robots by the end of March, according to Engadget.

The four-foot-tall robots will be on hand to answer questions, provide directions, and guide customers in taking out phone contracts until early April. It’s currently unknown what brands of phone Pepper will be selling.

 

 

 

Wise.io introduces first intelligent auto reply functionality for customer support organizations — from consumerelectronicsnet.com
Powered by Machine Learning, Wise Auto Response Frees Up Agent Time, Boosting Productivity, Accelerating Response Time and Improving the Customer Experience

Excerpt:

BERKELEY, CA — (Marketwired) — 01/27/16 — Wise.io, which delivers machine learning applications to help enterprises provide a better customer experience, today announced the availability of Wise Auto Response, the first intelligent auto reply functionality for customer support organizations. Using machine learning to understand the intent of an incoming ticket and determine the best available response, Wise Auto Response automatically selects and applies the appropriate reply to address the customer issue without ever involving an agent. By helping customer service teams answer common questions faster, Wise Auto Response removes a high percentage of tickets from the queue, freeing up agents’ time to focus on more complex tickets and drive higher levels of customer satisfaction.
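The announcement doesn’t say how Wise Auto Response works internally, but the general pattern it describes — classify an incoming ticket’s intent with a trained model, then apply a canned reply only when the model is confident — is straightforward to sketch. Below is a minimal, hypothetical illustration with scikit-learn; the training data, intent labels, and threshold are all invented, and this is not Wise.io’s code:

```python
# Minimal sketch of ML-based auto-reply: classify the intent of an
# incoming ticket, answer automatically only when confident.
# Illustrative only -- not Wise.io's actual implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: past tickets labeled with their intent.
tickets = [
    "I forgot my password and can't log in",
    "Please reset my account password",
    "I was charged twice this month",
    "Why is there an extra charge on my bill?",
]
intents = ["password_reset", "password_reset", "billing_issue", "billing_issue"]

canned_replies = {
    "password_reset": "You can reset your password here: ...",
    "billing_issue": "Sorry about that! Our billing team will review your charge ...",
}

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tickets, intents)

def auto_respond(ticket_text, threshold=0.5):
    """Return a canned reply if the classifier is confident enough,
    otherwise None so the ticket falls through to a human agent.
    A production system would tune the threshold much more carefully."""
    probs = model.predict_proba([ticket_text])[0]
    best = probs.argmax()
    if probs[best] >= threshold:
        return canned_replies[model.classes_[best]]
    return None

print(auto_respond("how do I change my password?"))
```

The confidence threshold is the key design choice: it trades automation rate against the risk of sending a wrong reply, which is presumably how a system like this removes “a high percentage of tickets” from the queue without alienating customers.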

 

 

Video game for treating ADHD looks to 2017 debut — from educationnews.org

Excerpt:

Akili Interactive Labs out of Boston has created a video game that they hope will help treat children diagnosed with attention-deficit hyperactivity disorder by teaching them to focus in a distracting environment.

The game, Project: EVO, is meant to be prescribed to children with ADHD as a medical treatment.  And after the company raised $30.5 million in funding, investors appear to believe in it.  The company plans to use the funding to run clinical trials, with plans to gain approval from the US Food and Drug Administration so that it can launch the game in late 2017.

Players will enter a virtual world filled with colorful distractions and be required to focus on specific tasks such as choosing certain objects while avoiding others.  The game looks to train the portion of the brain designed to manage and prioritize all the information taken in at one time.

 

Addendum on 1/29/16:

 

 

 

 

7 trends for artificial intelligence in 2016: ‘Like 2015 on steroids’ — from techrepublic.com by Hope Reese
TechRepublic checked in with AI experts Andrew Moore, Kathleen Richardson, and Roman Yampolskiy, for their take on what we’ve seen in AI this year and what’s coming in 2016.

Excerpt:

To get a handle on what to look for in the AI world, TechRepublic caught up with Andrew Moore, dean of Carnegie Mellon’s School of Computer Science; Kathleen Richardson, Senior Research Fellow in the Ethics of Robotics at De Montfort University; and Roman Yampolskiy, director of the Cybersecurity Lab at the University of Louisville, for what they see as the most important areas of AI research in the year ahead — what Yampolskiy says will be “like 2015 on steroids.”

Topics include:

  1. Deep learning
  2. AI replacing workers
  3. Internet of Things (IoT)
  4. Breakthroughs in emotional understanding
  5. AI in shopping and customer service
  6. Ethical questions
  7. A problem with representation

 

 

From DSC:
Below are some further items that discuss the need for frameworks, policies, institutes, research, etc. that deal with a variety of game-changing technologies that are quickly coming down the pike (if they aren’t already upon us).  We need such things to help us create a positive future.

Also see Part I of this thread of thinking, entitled “The need for ethics, morals, policies, & serious reflection about what kind of future we want has never been greater!”  So many other items have come out since that posting that I felt I needed to add another one here.

What kind of future do we want? How are we going to ensure that we get there?

As the saying goes, “Just because we can do something doesn’t mean we should.” Another saying also comes to mind: “What could possibly go wrong with this? It’s a done deal.”

While some of the items below should have very positive impacts on society, I do wonder how long it will take the hackers — the ones bent on wreaking havoc — to mess up some of these types of applications, with potentially deadly consequences. Security-related concerns must be dealt with here.


 

5 amazing and alarming things that may be done with your DNA — from washingtonpost.com by Matt McFarland

Excerpt (emphasis DSC):

Venter is leading efforts to use digital technology to analyze humans in ways we never have before, and the results will have huge implications for society. The latest findings he described are currently being written up for scientific publications. Venter didn’t want to preempt the publications, so he wouldn’t dive into extensive detail about how his team has made these breakthroughs. But what he did share offers an exciting and concerning overview of what lies ahead for humanity. There are social, legal, and ethical implications to start considering. Here are five examples of how digitizing DNA will change the human experience:

 

 

These are the decisions the Pentagon wants to leave to robots — from defenseone.com by Patrick Tucker
The U.S. military believes its battlefield edge will increasingly depend on automation and artificial intelligence.

Excerpt:

Conducting cyber defensive operations, electronic warfare, and over-the-horizon targeting. “You cannot have a human operator operating at human speed fighting back at determined cyber tech,” Work said. “You are going to need to have a learning machine that does that.” He did not say whether the Pentagon is pursuing the autonomous or automatic deployment of offensive cyber capabilities, a controversial idea to be sure. He also highlighted a number of ways that artificial intelligence could help identify new waveforms to improve electronic warfare.

 

 

Britain should lead way on genetically engineered babies, says Chief Scientific Adviser — from telegraph.co.uk by Sarah Knapton
Sir Mark Walport, who advises the government on scientific matters, said it could be acceptable to genetically edit human embryos

Excerpt:

Last week more than 150 scientists and campaigners called for a worldwide ban on the practice, claiming it could ‘irrevocably alter the human species’ and lead to a world where inequality and discrimination were ‘inscribed onto the human genome.’

But at a conference in London [on 12/8/15], Sir Mark Walport, who advises the government on scientific matters, said he believed there were ‘circumstances’ in which the genetic editing of human embryos could be ‘acceptable’.

 

 

Cyborg Future: Engineers Build a Chip That Is Part Biological and Part Synthetic — from futurism.com

Excerpt:

Engineers have succeeded in combining an integrated chip with an artificial lipid bilayer membrane containing ATP-powered ion pumps, paving the way for more such artificial systems that combine the biological with the mechanical down the road.

 

 

Robots expected to run half of Japan by 2035 — from engadget.com by Andrew Tarantola
Something-something ‘robot overlords’.

Excerpt:

Analysts at the Nomura Research Institute (NRI), led by researcher Yumi Wakao, figure that within the next 20 years nearly half of all jobs in Japan could be accomplished by robots. Working with Professor Michael Osborne from Oxford University, who had previously investigated the same question in both the US and UK, the NRI team examined more than 600 jobs and found that “up to 49 percent of jobs could be replaced by computer systems,” according to Wakao.

 

 

 

Cambridge University is opening a £10 million centre to study the impact of AI on humanity — from businessinsider.com by Sam Shead

Excerpt:

Cambridge University announced on [12/3/15] that it is opening a new £10 million research centre to study the impact of artificial intelligence on humanity.

The 806-year-old university said the centre, being funded with a grant from non-profit foundation The Leverhulme Trust, will explore the opportunities and challenges facing humanity as a result of further developments in artificial intelligence.

 

Cambridge-Center-Dec2015

 

 

Tech leaders launch nonprofit to save the world from killer robots — from csmonitor.com by Jessica Mendoza
Elon Musk, Sam Altman, and other tech titans have invested $1 billion in a nonprofit that would help direct artificial intelligence technology toward positive human impact. 

 

 

 

 

2016 will be a pivotal year for social robots — from therobotreport.com by Frank Tobe
1,000 Peppers are selling each month from a big-dollar venture between SoftBank, Alibaba and Foxconn; Jibo just raised another $16 million as it prepares to deliver 7,500+ units in Mar/Apr of 2016; and Buddy, Rokid, Sota and many others are poised to deliver similar forms of social robots.

Excerpt:

These new robots, and the proliferation of mobile robot butlers, guides and kiosks, promise to recognize your voice and face and help you plan your calendar, provide reminders, take pictures of special moments, text, call and videoconference, order fast food, keep watch on your house or office, read recipes, play games, read emotions and interact accordingly, and the list goes on. They are attempting to be analogous to a sharp administrative assistant that knows your schedule, contacts and interests and engages with you about them, helping you stay informed, connected and active.

 

 

IBM opens its artificial mind to the world — from fastcompany.com by Sean Captain
IBM is letting companies plug into its Watson artificial intelligence engine to make sense of speech, text, photos, videos, and sensor data.

Excerpt:

Artificial intelligence is the big, oft-misconstrued catchphrase of the day, making headlines recently with the launch of the new OpenAI organization, backed by Elon Musk, Peter Thiel, and other tech luminaries. AI is neither a synonym for killer robots nor a technology of the future, but one that is already finding new signals in the vast noise of collected data, ranging from weather reports to social media chatter to temperature sensor readings. Today IBM has opened up new access to its AI system, called Watson, with a set of application programming interfaces (APIs) that allow other companies and organizations to feed their data into IBM’s big brain for analysis.
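From a developer’s perspective, the pattern the article describes is plain: POST your data to a hosted endpoint and get structured analysis back. Here is a rough sketch of that call pattern — note that the endpoint URL, payload fields, and credential below are placeholders I’ve invented for illustration, not IBM’s actual Watson API:

```python
# Generic "AI as a service" request: send text, get analysis back.
# The endpoint, payload shape, and auth below are hypothetical
# placeholders, NOT IBM's real Watson API.
import requests

ENDPOINT = "https://api.example.com/v1/analyze"  # placeholder URL
API_KEY = "your-api-key"                         # placeholder credential

payload = {
    "text": "Customers love the new feature but hate the setup process.",
    "features": ["sentiment", "entities"],  # which analyses to run
}

response = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())  # e.g. {"sentiment": "mixed", "entities": [...]}
```

The significance of this move is less the code than the business model: any organization that can make an HTTP request can now rent cognitive computing instead of building it.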

 

 

GE wants to give industrial machines their own social network with Predix Cloud — from fastcompany.com by Sean Captain
GE is selling a new service that promises to predict when a machine will break down…so technicians can preemptively fix it.

 

 

Foresight 2020: The future is filled with 50 billion connected devices — from ibmbigdatahub.com by Erin Monday

Excerpt:

By 2020, there will be over 50 billion connected devices generating continuous data.

This figure is staggering, but is it really a surprise? The world has come a long way from 1992, when the number of computers was roughly equivalent to the population of San Jose. Today, in 2015, there are more connected devices out there than there are human beings. Ubiquitous connectivity is very nearly a reality. Every day, we get a little closer to a time where businesses, governments and consumers are connected by a fluid stream of data and analytics. But what’s driving all this growth?

 

 

Designing robots that learn as effortlessly as babies — from singularityhub.com by Shelly Fan

Excerpt:

A wide-eyed, rosy-cheeked, babbling human baby hardly looks like the ultimate learning machine.

But under the hood, an 18-month-old can outlearn any state-of-the-art artificial intelligence algorithm.

Their secret sauce?

They watch; they imitate; and they extrapolate.

Artificial intelligence researchers have begun to take notice. This week, two separate teams dipped their toes into cognitive psychology and developed new algorithms that teach machines to learn like babies. One instructs computers to imitate; the other, to extrapolate.

 

 

Researchers have found a new way to get machines to learn faster — from fortune.com by  Hilary Brueck

Excerpt:

An international team of data scientists is proud to announce the very latest in machine learning: they’ve built a program that learns… programs. That may not sound impressive at first blush, but making a machine that can learn based on a single example is something that’s been extremely hard to do in the world of artificial intelligence. Machines don’t learn like humans—not as fast, and not as well. And even with this research, they still can’t.

 

 

Team showcase how good Watson is at learning — from adigaskell.org

Excerpt:

Artificial intelligence has undoubtedly come a long way in the last few years, but there is still much to be done to make it intuitive to use.  IBM’s Watson has been one of the best-known exponents during this time, but despite its initial success, there are issues to overcome with it.

A team led by Georgia Tech is attempting to do just that.  They’re looking to train Watson to get better at returning answers to specific queries.

 

 

Why The Internet of Things will drive a Knowledge Revolution. — from linkedin.com by David Evans

Excerpt:

As these machines inevitably connect to the Internet, they will ultimately connect to each other so they can share, and collaborate on their own findings. In fact, in 2014 machines got their own ”World Wide Web” called RoboEarth, in which to share knowledge with one another. …
The implications of all of this are at minimum twofold:

  • The way we generate knowledge is going to change dramatically in the coming years.
  • Knowledge is about to increase at an exponential rate.

What we choose to do with this newfound knowledge is of course up to us. We are about to face some significant challenges at scales we have yet to experience.

 

 

Drone squad to be launched by Tokyo police — from bbc.com

Excerpt:

A drone squad, designed to locate and – if necessary – capture nuisance drones flown by members of the public, is to be launched by police in Tokyo.

 

 

An advance in artificial intelligence rivals human abilities — from todayonline.com by John Markoff

Excerpt:

NEW YORK — Computer researchers reported artificial-intelligence advances [on Dec 10] that surpassed human capabilities for a narrow set of vision-related tasks.

The improvements are noteworthy because so-called machine-vision systems are becoming commonplace in many aspects of life, including car-safety systems that detect pedestrians and bicyclists, as well as in video game controls, Internet search and factory robots.

 

 

Somewhat related:

Novo Nordisk, IBM Watson Health to create ‘virtual doctor’ — from wsj.com by Denise Roland
Software could dispense treatment advice for diabetes patients

Excerpt:

Novo Nordisk A/S is teaming up with IBM Watson Health, a division of International Business Machines Corp., to create a “virtual doctor” for diabetes patients that could dispense treatment advice such as insulin dosage.

The Danish diabetes specialist hopes to use IBM’s supercomputer platform, Watson, to analyze health data from diabetes patients to help them manage their disease.

 

 

Why Google’s new quantum computer could launch an artificial intelligence arms race — from washingtonpost.com

 

 

 

8 industries robots will completely transform by 2025 — from techinsider.io

 

 

 

Addendums on 12/17/15:

Russia and China are building highly autonomous killer robots — from businessinsider.com.au by Danielle Muoio

Excerpt:

Russia and China are creating highly autonomous weapons, more commonly referred to as killer robots, and it’s putting pressure on the Pentagon to keep up, according to US Deputy Secretary of Defense Robert Work. During a national-security forum on Monday, Work said that China and Russia are heavily investing in a roboticized army, according to a report from Defense One.

Your Algorithmic Self Meets Super-Intelligent AI — from techcrunch.com by Jarno M. Koponen

Excerpt:

At the same time, your data and personalized experiences are used to develop and train the machine learning systems that are powering the Siris, Watsons, Ms and Cortanas. Be it a speech recognition solution or a recommendation algorithm, your actions and personal data affect how these sophisticated systems learn more about you and the world around you.

The less explicit fact is that your diverse interactions — your likes, photos, locations, tags, videos, comments, route selections, recommendations and ratings — feed learning systems that could someday transform into superintelligent AIs with unpredictable consequences.

As of today, you can’t directly affect how your personal data is used in these systems.

 

Addendum on 12/20/15:

 

Addendum on 12/21/15:

  • Facewatch ‘thief recognition’ CCTV on trial in UK stores — from bbc.com
    Excerpts (emphasis DSC):
    Face-recognition camera systems should be used by police, he tells me. “The technology’s here, and we need to think about what is a proportionate response that respects people’s privacy,” he says.

    “The public need to ask themselves: do they want six million cameras painted red at head height looking at them?”

 

Addendum on 1/13/16:

 

From DSC:
This posting is meant to surface the need for debates/discussions, new policy decisions, and for taking the time to seriously reflect upon what type of future we want.  Given the pace of technological change, we need to be constantly asking ourselves what kind of future we want and then actively creating that future — instead of just letting things happen because they can happen. (That is, just because something can be done doesn’t mean it should be done.)

Gerd Leonhard’s work is relevant here.  In the resource immediately below, Gerd asserts:

I believe we urgently need to start debating and crafting a global Digital Ethics Treaty. This would delineate what is and is not acceptable under different circumstances and conditions, and specify who would be in charge of monitoring digressions and aberrations.

I am also including some other relevant items here that bear witness to the increasingly rapid speed at which we’re moving now.


 

Redefining the relationship of man and machine: here is my narrated chapter from the ‘The Future of Business’ book (video, audio and pdf) — from futuristgerd.com by Gerd Leonhard


DigitalEthics-GerdLeonhard-Oct2015

 

 

Robot revolution: rise of ‘thinking’ machines could exacerbate inequality — from theguardian.com by Heather Stewart
Global economy will be transformed over next 20 years at risk of growing inequality, say analysts

Excerpt (emphasis DSC):

A “robot revolution” will transform the global economy over the next 20 years, cutting the costs of doing business but exacerbating social inequality, as machines take over everything from caring for the elderly to flipping burgers, according to a new study.

As well as robots performing manual jobs, such as hoovering the living room or assembling machine parts, the development of artificial intelligence means computers are increasingly able to “think”, performing analytical tasks once seen as requiring human judgment.

In a 300-page report, revealed exclusively to the Guardian, analysts from investment bank Bank of America Merrill Lynch draw on the latest research to outline the impact of what they regard as a fourth industrial revolution, after steam, mass production and electronics.

“We are facing a paradigm shift which will change the way we live and work,” the authors say. “The pace of disruptive technological innovation has gone from linear to parabolic in recent years. Penetration of robots and artificial intelligence has hit every industry sector, and has become an integral part of our daily lives.”

 

RobotRevolution-Nov2015

 

 

 

First genetically modified humans could exist within two years — from telegraph.co.uk by Sarah Knapton
Biotech company Editas Medicine is planning to start human trials to genetically edit genes and reverse blindness

Excerpt:

Humans who have had their DNA genetically modified could exist within two years after a private biotech company announced plans to start the first trials into a ground-breaking new technique.

Editas Medicine, which is based in the US, said it plans to become the first lab in the world to ‘genetically edit’ the DNA of patients suffering from a genetic condition – in this case the blinding disorder ‘Leber congenital amaurosis’.

 

 

 

Gartner predicts our digital future — from gartner.com by Heather Levy
Gartner’s Top 10 Predictions herald what it means to be human in a digital world.

Excerpt:

Here’s a scene from our digital future: You sit down to dinner at a restaurant where your server was selected by a “robo-boss” based on an optimized match of personality and interaction profile, and the angle at which he presents your plate, or how quickly he smiles can be evaluated for further review.  Or, perhaps you walk into a store to try on clothes and ask the digital customer assistant embedded in the mirror to recommend an outfit in your size, in stock and on sale. Afterwards, you simply tell it to bill you from your mobile and skip the checkout line.

These scenarios describe two predictions in what will be an algorithmic and smart-machine-driven world where people and machines must define harmonious relationships. In his session at Gartner Symposium/ITxpo 2015 in Orlando, Daryl Plummer, vice president, distinguished analyst and Gartner Fellow, discussed how Gartner’s Top Predictions begin to separate us from the mere notion of technology adoption and draw us more deeply into issues surrounding what it means to be human in a digital world.

 

 

GartnerPredicts-Oct2015

 

 

Univ. of Washington faculty study legal, social complexities of augmented reality — from phys.org

Excerpt:

But augmented reality will also bring challenges for law, public policy and privacy, especially pertaining to how information is collected and displayed. Issues regarding surveillance and privacy, free speech, safety, intellectual property and distraction—as well as potential discrimination—are bound to follow.

The Tech Policy Lab brings together faculty and students from the School of Law, Information School and Computer Science & Engineering Department and other campus units to think through issues of technology policy. “Augmented Reality: A Technology and Policy Primer” is the lab’s first official white paper aimed at a policy audience. The paper is based in part on research presented at the 2015 International Joint Conference on Pervasive and Ubiquitous Computing, or UbiComp conference.

Along these same lines, also see:

  • Augmented Reality: Figuring Out Where the Law Fits — from rdmag.com by Greg Watry
    Excerpt:
    With AR comes potential issues the authors divide into two categories. “The first is collection, referring to the capacity of AR to record, or at least register, the people and places around the user. Collection raises obvious issues of privacy but also less obvious issues of free speech and accountability,” the researchers write. The second issue is display, which “raises a variety of complex issues ranging from possible tort liability should the introduction or withdrawal of information lead to injury, to issues surrounding employment discrimination or racial profiling.”

    Current privacy law in the U.S. allows video and audio recording in areas that “do not attract an objectively reasonable expectation of privacy,” says Newell. Further, many uses of AR would be covered under the First Amendment right to record audio and video, especially in public spaces. However, as AR increasingly becomes more mobile, “it has the potential to record inconspicuously in a variety of private or more intimate settings, and I think these possibilities are already straining current privacy law in the U.S.,” says Newell.

 

Stuart Russell on Why Moral Philosophy Will Be Big Business in Tech — from kqed.org

Excerpt (emphasis DSC):

Our first Big Think comes from Stuart Russell. He’s a computer science professor at UC Berkeley and a world-renowned expert in artificial intelligence. His Big Think?

“In the future, moral philosophy will be a key industry sector,” says Russell.

Translation? In the future, the nature of human values and the process by which we make moral decisions will be big business in tech.

 

Life, enhanced: UW professors study legal, social complexities of an augmented reality future — from washington.edu by Peter Kelley

Excerpt:

But augmented reality will also bring challenges for law, public policy and privacy, especially pertaining to how information is collected and displayed. Issues regarding surveillance and privacy, free speech, safety, intellectual property and distraction — as well as potential discrimination — are bound to follow.

 

An excerpt from:

UW-AR-TechPolicyPrimer-Nov2015

THREE: CHALLENGES FOR LAW AND POLICY
AR systems  change   human  experience   and,  consequently,   stand  to   challenge   certain assumptions  of  law  and  policy.  The  issues  AR  systems  raise  may  be  divided  into  roughly two  categories.  The  first  is  collection,  referring  to  the  capacity  of  AR  devices  to  record,  or  at  least register,  the people and  places around  the user.  Collection  raises obvious  issues of  privacy  but  also  less  obvious  issues  of  free  speech  and  accountability.  The  second  rough  category  is  display,  referring  to  the  capacity  of  AR  to  overlay  information over  people  and places  in  something  like  real-time.  Display  raises  a  variety  of  complex  issues  ranging  from
possible  tort  liability  should  the  introduction  or  withdrawal  of  information  lead  to  injury,  to issues   surrounding   employment   discrimination   or   racial   profiling.   Policymakers   and stakeholders interested in AR should consider what these issues mean for them.  Issues related to the collection of information include…

 

HR tech is getting weird, and here’s why — from hrmorning.com by guest poster Julia Scavicchio

Excerpt (emphasis DSC):

Technology has progressed to the point where it’s possible for HR to learn almost everything there is to know about employees — from what they’re doing moment-to-moment at work to what they’re doing on their off hours. Guest poster Julia Scavicchio takes a long hard look at the legal and ethical implications of these new investigative tools.  

Why on Earth does HR need all this data? The answer is simple — HR is not on Earth, it’s in the cloud.

The department transcends traditional roles when data enters the picture.

Many ethical questions posed through technology easily come and go because they seem out of this world.

 

 

18 AI researchers reveal the most impressive thing they’ve ever seen — from businessinsider.com by Guia Marie Del Prado

Excerpt:

Where will these technologies take us next? Well, to know that, we should determine what’s the best of the best now. Tech Insider talked to 18 AI researchers, roboticists, and computer scientists to see what real-life AI impresses them the most.

“The DeepMind system starts completely from scratch, so it is essentially just waking up, seeing the screen of a video game and then it works out how to play the video game to a superhuman level, and it does that for about 30 different video games.  That’s both impressive and scary in the sense that if a human baby was born and by the evening of its first day was already beating human beings at video games, you’d be terrified.”

 

 

 

Algorithmic Economy: Powering the Machine-to-Machine Age Economic Revolution — from formtek.com by Dick Weisinger

Excerpts:

As technology advances, we are becoming increasingly dependent on algorithms for everything in our lives.  Algorithms that can solve our daily problems and tasks will do things like drive vehicles, control drone flight, and order supplies when they run low.  Algorithms are defining the future of business and even our everyday lives.

[Gartner’s Peter] Sondergaard said that “in 2020, consumers won’t be using apps on their devices; in fact, they will have forgotten about apps. They will rely on virtual assistants in the cloud, things they trust. The post-app era is coming.  The algorithmic economy will power the next economic revolution in the machine-to-machine age. Organizations will be valued, not just on their big data, but on the algorithms that turn that data into actions that ultimately impact customers.”

 

 

Related items:

 

Addendums:

 

robots-saying-no

 

 

Addendum on 12/14/15:

  • Algorithms rule our lives, so who should rule them? — from qz.com by Dries Buytaert
    As technology advances and more everyday objects are driven almost entirely by software, it’s become clear that we need a better way to catch cheating software and keep people safe.
 

Can technology identify China’s top graduates? — from bbc.com by John Sudworth, Shanghai

Excerpts (emphasis DSC):

Has the humble CV finally met its match?

[L’Oreal] has chosen the world’s biggest jobs market – China – to utter two words that would be music to the ears of beleaguered recruitment executives everywhere: “Goodbye CV”. This year, the 33,000 applicants for the 70 places on the company’s Chinese graduate recruitment scheme have been asked to save themselves the paper, the printer ink and the pain. Instead, they were asked to answer three simple questions via their smartphones.

“We have developed algorithms that can take the words that people use and derive context from them,” said Robin Young, the founder of Seedlink Tech.

Here’s how it works: students use their mobile phones to access L’Oreal’s website, which prompts them to answer three open-ended questions.

The answers, which have to be at least 75 words long, are automatically fed into Seedlink’s database and the software gets to work. It analyses the language used and compares each candidate’s answers with the many thousands of others. Then, supposedly calibrated to mine for the specific personality traits that L’Oreal is looking for, it produces a ranking with, in theory, the person most suited for a career at L’Oreal at the top.
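Seedlink hasn’t published its model, so the following is only a guess at the family of techniques involved: vectorize each candidate’s free-text answers, compare them against a target profile, and sort by similarity. A toy sketch — the TF-IDF/cosine-similarity approach, the target text, and the candidate answers are all my own inventions for illustration:

```python
# Toy candidate ranking by language similarity to a target profile.
# A rough stand-in for the kind of analysis described above -- not
# Seedlink's actual algorithm, which has not been published.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical exemplar text for the traits the employer wants,
# plus each candidate's (truncated) open-ended answers.
target_profile = "I take initiative, collaborate closely, and stay curious."
candidates = {
    "candidate_001": "I led a student team that built a recycling app.",
    "candidate_002": "My main strength is careful, methodical analysis.",
    "candidate_003": "I start projects on my own and love learning new fields.",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([target_profile] + list(candidates.values()))

# Similarity of each candidate (rows 1..n) to the profile (row 0).
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

for name, score in sorted(zip(candidates, scores), key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")
```

Even this toy version makes the ethical stakes visible: the ranking is only as good as the exemplar text and the modeling choices behind it, which is exactly why the “who decides, and on what basis?” questions raised throughout this post apply to hiring algorithms too.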

 

Excerpt from the March 1, 2015 edition of CIO Magazine (emphasis DSC):

The almighty algorithm is the fuel for today’s data-driven businesses. They stoke the data engines that recommend purchases, trade stocks, predict crime, spot medical conditions, monitor sleep apnea, find dating partners, calculate driving routes and so much more. “These math equations,” writes Managing Editor Kim S. Nash, “may someday run our lives.”

In the wrong application, they may someday ruin lives, as well.

The fascinating story that Nash unearthed will show you exactly why CIOs need to develop what one expert called “algorithmic accountability.”

 

From DSC:
To set the stage for the following reflections…first, an excerpt from
Climate researcher claims CIA asked about weaponized weather: What could go wrong? — from computerworld.com (emphasis DSC)

We’re not talking about chemtrails, HAARP (High Frequency Active Auroral Research Program) or other weather warfare that has been featured in science fiction movies; the concerns were raised not by a conspiracy theorist, but by climate scientist, geoengineering specialist, and Rutgers University professor Alan Robock. He “called on secretive government agencies to be open about their interest in radical work that explores how to alter the world’s climate.” If emerging climate-altering technologies can effectively alter the weather, Robock is “worried about who would control such climate-altering technologies.”

 

Exactly what I’ve been reflecting on recently.

***Who*** is designing, developing, and using the powerful technologies that are coming into play these days and ***for what purposes?***

Do these individuals care about other people?  Or are they much more motivated by profit or power?

Given the increasingly potent technologies available today, we need people who care about other people. 

Let me explain where I’m coming from here…

I see technologies as tools.  For example, a pencil is a technology. On the positive side of things, it can be used to write or draw something. On the negative side of things, it could be used as a weapon to stab someone.  It depends upon the user of the pencil and what their intentions are.

Let’s look at some far more powerful — and troublesome — examples.

 



DRONES

Drones could be useful…or they could be incredibly dangerous. Again, it depends on who is developing/programming them and for what purpose(s).  Consider the posting from B.J. Murphy below (BTW, nothing positive or negative is meant by linking to this item, per se).

DARPA’s Insect and Bird Drones Are On Their Way — from proactiontranshuman.wordpress.com by B.J. Murphy


Insect drone

From DSC:
I say this is an illustrative posting because, if the inventor/programmer of this sort of drone wanted to poison someone, they surely could do so. I’m not even sure whether this drone exists; it doesn’t matter, as we’re quickly heading that way anyway.  So potentially, this kind of thing is very scary stuff.

We need people who care about other people.

Or see:
Five useful ideas from the World Cup of Drones — from  dezeen.com
The article mentions some beneficial purposes of drones, such as for search and rescue missions or for assessing water quality.  Some positive intentions, to be sure.

But again, it doesn’t take too much thought to come up with some rather frightening counter-examples.
 

 

GENE-RELATED RESEARCH

Or another example re: gene research/applications; an excerpt from:

Turning On Genes, Systematically, with CRISPR/Cas9 — from genengnews.com
Scientists based at MIT assert that they can reliably turn on any gene of their choosing in living cells.

Excerpt:

It was also suggested that large-scale screens such as the one demonstrated in the current study could help researchers discover new cancer drugs that prevent tumors from becoming resistant.

From DSC:
Sounds like there could be some excellent, useful, positive uses for this technology.  But who is to say which genes should be turned on and under what circumstances? In the wrong hands, there could be some dangerous uses involved in such concepts as well.  Again, it goes back to those involved with designing, developing, selling, using these technologies and services.

 

ROBOTICS

Will robots be used for positive or negative applications?

The mechanized future of warfare — from theweek.com
OR
Atlas Unplugged: The six-foot-two humanoid robot that might just save your life — from zdnet.com
Summary: From the people who brought you the internet, the latest version of the Atlas robot will be used in its disaster-fighting robotic challenge.

 

atlasunpluggedtorso

 

AUTONOMOUS CARS

How Uber’s autonomous cars will destroy 10 million jobs and reshape the economy by 2025 — from sanfrancisco.cbslocal.com

Excerpt:

Autonomous cars will be commonplace by 2025 and have a near monopoly by 2030, and the sweeping change they bring will eclipse every other innovation our society has experienced. They will cause unprecedented job loss and a fundamental restructuring of our economy, solve large portions of our environmental problems, prevent tens of thousands of deaths per year, save millions of hours with increased productivity, and create entire new industries that we cannot even imagine from our current vantage point.

One can see the potential for good and for bad from the above excerpt alone.

Or Ford developing cross-country automotive remote control — from spectrum.ieee.org

 

Ford-RemoteCtrl-Feb-2015

Or Germany has approved the use of self-driving cars on the Autobahn A9 route — from wtvox.com

While the above items list mostly positive elements, there are those who fear that autonomous cars could be used by terrorists. That is, could a terrorist organization make some adjustments to such self-driving cars, load them up with explosives, and then remotely control them in order to drive them to a certain building or event and cause them to explode?

Again, it depends upon whether the designers and users of a system care about other people.

 

BIG DATA / AI / COGNITIVE COMPUTING

The rise of machines that learn — from infoworld.com by Eric Knorr; with thanks to Oliver Hansen for his tweet on this
A new big data analytics startup, Adatao, reminds us that we’re just at the beginning of a new phase of computing when systems become much, much smarter

Excerpt:

“Our warm and creepy future,” is how Miko refers to the first-order effect of applying machine learning to big data. In other words, through artificially intelligent analysis of whatever Internet data is available about us — including the much more detailed, personal stuff collected by mobile devices and wearables — websites and merchants of all kinds will become extraordinarily helpful. And it will give us the willies, because it will be the sort of personalized help that can come only from knowing us all too well.

 

Privacy is dead: How Twitter and Facebook are exposing you — from finance.yahoo.com

Excerpt:

They know who you are, what you like, and how you buy things. Researchers at MIT have matched up your Facebook (FB) likes, tweets, and social media activity with the products you buy. The results are a highly detailed and accurate profile of how much money you have, where you go to spend it and exactly who you are.

The study spanned three months and used the anonymous credit card data of 1.1 million people. After gathering the data, analysts would marry the findings to a person’s public online profile. By checking things like tweets and Facebook activity, researchers found out the anonymous person’s actual name 90% of the time.

 

iBeacon, video analysis top 2015 tech trends — from progressivegrocer.com

Excerpt:

Using digital to engage consumers will make the store a more interesting and – dare I say – fun place to shop. Such an enhanced in-store experience leads to more customer loyalty and a bigger basket at checkout. It also gives supermarkets a competitive edge over nearby stores not equipped with the latest technology.

Using video cameras in the ceilings of supermarkets to record shopper behavior is not new. But more retailers will analyze and use the resulting data this year. They will move displays around the store and perhaps deploy new traffic patterns that follow a shopper’s true path to purchase. The result will be increased sales.

Another interesting part of this video analysis that will become more important this year is facial recognition. The most sophisticated cameras are able to detect the approximate age and ethnicity of shoppers. Retailers will benefit from knowing, say, that their shopper base includes more Millennials and Hispanics than last year. Such valuable information will change product assortments.

Scientists join Elon Musk & Stephen Hawking, warn of dangerous AI — from rt.com

Excerpt:

Hundreds of leading scientists and technologists have joined Stephen Hawking and Elon Musk in warning of the potential dangers of sophisticated artificial intelligence, signing an open letter calling for research on how to avoid harming humanity.

The open letter, drafted by the Future of Life Institute and signed by hundreds of academics and technologists, calls on the artificial intelligence science community to not only invest in research into making good decisions and plans for the future, but to also thoroughly check how those advances might affect society.

 

 

SMART / CONNECTED TVs

 



Though there are many other examples, I think you get the point.

That biblical idea of loving our neighbors as ourselves…well, as you can see,
that idea is as highly applicable, important, and relevant today as it ever was.



 

 

Addendum on 3/19/15 that gets at exactly the same thing:

  • Teaching robots to be moral — from newyorker.com by Gary Marcus
    Excerpt:
    Robots and advanced A.I. could truly transform the world for the better—helping to cure cancer, reduce hunger, slow climate change, and give all of us more leisure time. But they could also make things vastly worse, starting with the displacement of jobs and then growing into something closer to what we see in dystopian films. When we think about our future, it is vital that we try to understand how to make robots a force for good rather than evil.

 

 

Addendum on 3/20/15:

 

Jennifer A. Doudna, an inventor of a new genome-editing technique, in her office at the University of California, Berkeley. Dr. Doudna is the lead author of an article calling for a worldwide moratorium on the use of the new method, to give scientists, ethicists and the public time to fully understand the issues surrounding the breakthrough.
Credit Elizabeth D. Herman for The New York Times

 

The most extraordinary speech ever by a graduating MBA — from LinkedIn.com by John Byrne

Excerpt (emphasis DSC):

Gerald spoke movingly about a near-death experience with armed gunmen in his hometown of Dallas, and how that changed his life forever. “A strange thing happened as I accepted that I was about to die: I stopped being afraid.” He then decided to “give my life to a cause greater than myself.”

After arriving at Harvard Business School from Yale, Gerald said that HBS “changed who we were; it reminded us who we could be. It reminded us that we didn’t have to wait until we were rich or powerful, or until we actually knew finance, to make a difference. We could act right now.”

With three classmates, Casey founded a non-profit, MBAs Across America, which is a movement of MBAs and entrepreneurs working together to revitalize America. “We saw the signs for hope in entrepreneurs who were on the front lines of change. They showed us that the new ‘bottom line’ in business is the impact you have on your community and the world around you — that no amount of profit could make up for purpose.”

 

 

See also:

CaseyGerald-HBS-Commencement-2014

 

From DSC:
Though the use of the word “ever” in John Byrne’s posting on LinkedIn.com may be a stretch for some, Casey Gerald did give an incredibly powerful, deep, well-articulated message at Harvard Business School’s 2014 Commencement. 

I really appreciated what Casey was getting at — a higher calling for business.  A higher calling for one’s life.  If it’s only about making a living — vs making a life and a contribution — it comes up short.  We can do better.  Businesses can do better.  Wall Street can do better.  With corporations sitting on a trillion+ dollars, how might those massive resources be put towards helping society at large?  Here are 2 ideas:

  1. Don’t lay people off so quickly.  Take some of those funds and use them to retrain/reinvent people.  Keep America’s households running. Help keep people’s skill sets relevant, and help keep people employed.  Better yet, do this now for those people whom you know you will be replacing in the future with algorithms and/or with robotics.
  2. Fund/outfit educational institutions.  For example, it would benefit society greatly if the large tech companies would outfit the K-12 classrooms across the country (yes, I’m mainly thinking of you, Apple, Google, & Cisco).  Many districts are struggling to implement ed tech, and this would be a huge service to the country.

 

 

See also:

 

MBAsAcrossAmerica-June2014

 

When is Big Learning Data too Big? — from Learning TRENDS by Elliott Masie 

Excerpt from Update #822:

1) An interesting question arose in our conversations about Big Learning Data:

When is Big Learning Data too Big?
The question is framed around the ability of an individual or an organization to process really large amounts of data. Can a learning designer, or even a learner, “handle” really large amounts of data? When is someone (or even an organization) handicapped by the size, scope, and variety of data that is available to reflect learning patterns and outcomes? When do we want a tight summary, and when do we want to see a scattergram of many data points?

As we grow the size, volume and variety of Big Learning Data elements – we will also need to respect the ability (or challenge) of people to process the data. A parent may hear that their kid is a B- in mathematics – and want a lot more data. But, the same parent does not want 1,000 data elements covering 500 sub-competencies. The goal is to find a way to reflect Big Learning Data to an individual in a fashion that enables them to make better sense of the process – and have a “Continuum” that they can move to get more or less data as a situational choice.

 

Also related, an excerpt from Three Archetypes of the Future Post-Secondary Instructor — from evoLLLution.com by Chris Proulx

The Course Hacker
The last and perhaps most speculative role of the future online instructor will be the person who digs deep into the data that will be available from next generation learning systems to target specific learning interventions to specific students — at scale. The idea of the Course Hacker is based on the emerging role of the Growth Hacker at high-growth web businesses. Mining data from web traffic, social media, email campaigns, etc., the Growth Hacker is constantly iterating a web product or marketing campaign to seek rapid growth in users or revenue. Adapted to online education, the Course Hacker would be a faculty member with strong technical and statistical skills who would study data about which course assets were being used and by whom, which students worked more quickly or slowly, which questions caused the most problems on a quiz, who were the most socially active students in the course, who were the lurkers but getting high marks, etc.  Armed with those deep insights, they would be continually adapting course content, providing support and remedial help to targeted students, creating incentives to motivate people past critical blocks in the course, etc.
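The mechanics behind the Course Hacker role are mundane but concrete: aggregate the event logs a learning platform already collects and surface the trouble spots. A minimal sketch with pandas — the log format, column names, and records here are invented for illustration:

```python
# Sketch of one "course hacking" query: which quiz questions cause
# the most problems? The log format below is hypothetical.
import pandas as pd

attempts = pd.DataFrame({
    "student":  ["ana", "ana", "ben", "ben", "cho", "cho"],
    "question": ["q1",  "q2",  "q1",  "q2",  "q1",  "q2"],
    "correct":  [True,  False, True,  False, False, False],
})

# Error rate per question -- the first signal of where the course
# content (or a targeted intervention) needs attention.
error_rate = 1 - attempts.groupby("question")["correct"].mean()
print(error_rate.sort_values(ascending=False))
```

From there, the iteration loop the author describes is the same one a growth hacker runs on a website: change the content, re-measure, repeat.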

 

 

Added later on:

What do the ethical models look like? How are these models deployed rapidly — at the speed of technology? How are these models refined with time? We distilled the group discussions into a series of topics, including student awareness (or lack of awareness) of analytics, future algorithmic science, and the future of learning analytics as defined by business practices, student and faculty access to the data, and a redefinition of failure.

The arguments put forward here often take the form of rhetorical questions; the methodological purpose in presenting the argument in this way is to frame how ethical questioning might guide future developments.

 

 

 

From DSC:
This posting is especially meant for two audiences (but it also has wider ramifications for the vast majority of us living in the United States):

  1. Those students who are majoring in economics
  2. Those of us working within higher education

 


To the students studying economics out there:

  • What parts from the articles listed below are true? False? Is anything being minimized or exaggerated — or is the information factual and accurate?
  • How are the topics of these articles/discussions relevant to your lives today? In the future?
  • Do ethics come into play here? If so, how?

 

Fed to the Sharks, Part 2: Housing & the Death of the Middle Class  — from oftwominds.com by Charles Hugh Smith

Excerpts:

The Fed sacrificed the foundation of middle class wealth — stable housing values — to boost bank profits.

Lest you think the phrase “death of the middle class” is hyperbole, please examine these two charts, keeping in mind the middle class by definition must be in the middle of income/wealth distribution — conventionally, between 40% and 80%, i.e. the 40% between the bottom 40% and the top 20%.

 

See that little red wedge?
That’s the bottom 80% — the entire middle class
and everyone below the middle class.

 

 

 

Fed to the Sharks, Part 1: The Fed takes our money, gives it to banks who loan it back to us at 16%  — from oftwominds.com by Charles Hugh Smith

Excerpt:

We’re being Fed to the sharks, every day, one morsel at a time. What a way to go….

What can we say about the Federal Reserve’s policies that hasn’t been said a million times? How about simplifying the two primary purposes of Fed policies? I will cover one today and the second one tomorrow. Both involve feeding the 99.5% to the financier/ Wall Street/bank sharks.

 

 

 

 

 


To institutions of higher education:

  • If what Charles Hugh Smith is saying is true and the middle class continues to be hollowed out, how does — or should — this affect us?
  • How might this impact our strategies? Our offerings? Our pricing structures?

 

A new digital ecology is evolving, and humans are being left behind — from io9.com by George Dvorsky

 

Excerpt (emphasis DSC):

Incomprehensible computer behaviors (<– Can we use the word behavior here? It seems an odd word to describe computer-related actions…) have evolved out of high-frequency stock trading, and humans aren’t sure why. Eventually, it could start affecting high-tech warfare, too. We spoke with a researcher at University of Miami who thinks humans will be outpaced by a new “machine ecology.”

For all intents and purposes, the genesis of this new world began in 2006 with the introduction of legislation that made high-frequency stock trading a viable option. This form of rapid-fire trading involves algorithms, or bots, that can make decisions on the order of milliseconds (ms). By contrast, it takes a human at least one full second to both recognize and react to potential danger. Consequently, humans are progressively being left out of the trading loop.

“What we see with the new ultrafast computer algorithms is predatory trading,” he says. “In this case, the predator acts before the prey even knows it’s there.”

Johnson describes this new ecology as one consisting of mobs of ultrafast bots that frequently overwhelm the system. When events last less than a second, the financial world transitions to a new one inhabited by packs of aggressively trading algorithms.


From DSC:
I’m getting concerned about the power of emerging technologies and who is using these technologies — and how they are using them.  It took humans to program these algorithms.  It still takes humans to oversee these issues/trends (at least at this point in time!).  Therefore, values — and hearts — come into play here — with very real effects.  Quoting from the article:

“There is real money being gained and lost here — even a few thousand dollars every millisecond, which is a tiny amount on the market, is a million dollars per second,” he told us. “This money could be pension fund money, and so on. So somebody needs to understand what is going on, and if it is ‘fair’.”

Who’s involved here? Who’s making sure things are “fair”? Also, what are MBA programs teaching along these lines? Computer science teachers and professors? What values are we instilling in the people who will be programming the algorithms that oversee such processes, and who are, or will be, creating this new “machine ecology”?
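
Both the timescale gap and the quoted arithmetic are easy to verify. Here is a minimal sketch using the roughly one-millisecond bot reaction time and one-second human reaction time described in the article; the dollar figure is an assumption standing in for "a few thousand."

```python
bot_reaction_s = 0.001    # bots act on the order of milliseconds (per the article)
human_reaction_s = 1.0    # humans need about a second to recognize and react

decisions = human_reaction_s / bot_reaction_s
print(f"Bot decisions possible before one human reaction: {decisions:.0f}")  # 1000

# The quote's arithmetic checks out: thousands of dollars per millisecond
# really is millions per second.
dollars_per_ms = 2_000    # assumed stand-in for "a few thousand dollars"
print(f"${dollars_per_ms:,}/ms = ${dollars_per_ms * 1000:,}/s")  # $2,000/ms = $2,000,000/s
```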


From DSC:
The two items mentioned below — which I recently ran across — took me back to a nagging thought: 

In the United States, we need for our businesses to pursue a higher calling and purpose. We need businesses to ask how they might best be serving society/others; and I, as an individual, need to be asking the same thing.  

It’s tough to do. It’s easy to lose our footing here. But if culture eats strategy for breakfast — and if strategies are so key to navigating and surviving in a quickly-changing world — then why don’t we work more on our cultures? Our hearts? Our reasons for existing and working?

My guess is that employees would also find their work more meaningful if they saw how their companies were making significant contributions and differences in the world. For example, when I worked at Kraft (Foods) in the ’90s, we did some things like sending food to areas in crisis; but it wasn’t highlighted that much and it certainly wasn’t our reason for being. Can you imagine how we would have felt if it had been one of our top goals to provide food to every single person in the world? How much more energy, commitment, creativity, and innovation would have been generated with that sort of aim in mind? How would such a perspective and drive have affected the company’s culture? (Instead, Philip Morris purchased Kraft and had a negative effect on the company’s culture.)


The new marketing strategy: Company culture — from kristakotrla.com on March 17, 2013

Excerpt:

Dear Corporate Leadership
Please get back to being a business of people… serving people. Sounds a tad cheesy but seriously. Stop trying to be a big “corporatey,” over-processed, over-mechanized, over-bureaucratic, over-org-charted machine. Smoke and mirrors and perfection is out. Authentic, human, collaboration and innovation from real-time engagement is in.

If you treat your business like a machine then don’t be surprised when your employees act like passionless robots. Ever find yourself scratching your head wondering why on earth your machine-like, killer strategy isn’t thriving? Check your culture (and check your heart).


This one tweet reveals what’s wrong with American business — from LinkedIn.com by Henry Blodget

Excerpt (emphasis DSC):

The real problem is that American corporations, which are richer and more profitable than they have ever been in history (see chart below), have become so obsessed with “maximizing short-term profits” that they are no longer investing in their future, their people, and the country.

This short-term greed can be seen in many aspects of corporate behavior, from scrimping on investment to obsessing about quarterly earnings to fretting about daily fluctuations in stock prices. But it is most visible in the general cultural attitude toward average employees.

Employees are human beings. They devote their lives to creating value for customers, shareholders, and colleagues. And, in return, at least in theory, they share in the rewards of the value created by their team.

In theory.

In practice, American business culture has become so obsessed with maximizing short-term profits that employees aren’t regarded as people who are members of a team.

Rather, they are regarded as “costs.”


Chart: Corporate profits and profit margins are at the highest level in history…


From DSC:
After being introduced, technologies often take on a life of their own; they go in directions — for better or for worse — that the original developers didn’t really envision. Below is a good example of this:

  • Clever hacks give Google Glass many unintended powers — from npr.org by Steve Henn
    Excerpt:
    “Essentially what I am building is an alternative operating system that runs on Glass but is not controlled by Google,” he said.

    But hackers are proving it’s possible to re-engineer Google Glass in any number of creative ways. And in the process, they’ve put Google in an awkward position. The company needs to embrace their creative talents if it hopes to build a software ecosystem around its new device that might one day attract millions of consumers. But at the same time, Google wants to rein in uses for Glass that could spook politicians into asking pointed questions about privacy.


Addendum/also see:


From DSC:
My dad sent me a link to this piece by Bill Moyers called The ‘Crony Capitalist Blowout’. If you aren’t angry, sad, and/or depressed after watching it, you either don’t have a pulse or you run in the very circles that Bill Moyers is talking about.

But before we become too discouraged with our situation here in the United States, take solace in one of the most dreaded verses in all of scripture — to be dreaded, at least, by those who:

  • are arrogant, proud, and/or wicked
  • think that the LORD doesn’t see or care what happens on the Earth
  • think that they will never be held accountable for their actions

It’s from Psalm 73 (specifically verse 17) and it says:

…till I entered the sanctuary of God;
    then I understood their final destiny.


In other words, there will be justice.


5 ways online education can keep its students honest — from gigaom.com by Ki Mae Heussner
As online learning platforms like Coursera, Udacity and edX raise the stakes for students with increased partnerships with traditional universities and credit-bearing classes, here are five technologies that can help them thwart cheating.


Opinion: Sandy Hook shows teachers’ enduring values — from courant.com by David Bosso

Excerpt:

To so many, the educators at Sandy Hook Elementary School demonstrate that the core values of education mirror the greatest ideals of humanity, and they are exemplars in this regard. They offer us hope, and reinforce our belief in the goodness of others and the power of education. In an era of accountability, standards, testing and data, they affirm that what ultimately matters most are the immeasurable lessons and the enduring relationships teachers cultivate with their students.

To the educators of Sandy Hook Elementary School, thank you for the powerful, inspiring example of dedication and compassion you have given us. You have made, and continue to make, a difference to so many. In the midst of this unfathomable loss and profound sorrow, you have buoyed our spirits and given us hope. Because of your passion, courage, sacrifice, and devotion, I am once again reassured to proudly declare to educators everywhere: Never again say, “I am just a teacher.”

— I originally saw this on Twitter, as posted by Sarah Brown Wessling (@SarahWessling)
